Dataset columns: idx (int64, 1–56k); question (string, 15–155 chars); answer (string, 2–29.2k chars); question_cut (string, 15–100 chars); answer_cut (string, 2–200 chars); conversation (string, 47–29.3k chars); conversation_cut (string, 47–301 chars). The rows below show the question and answer for each idx; the *_cut and conversation columns are truncated or concatenated copies of those two fields.
51,201
How to model the relationship between geocoded and ungeocoded sales data?
As I understand it, the situation is as follows. You have data about each store, its location, and its sales, split into geocoded and ungeocoded portions. For each block, some portion of sales is geocoded and the rest is ungeocoded. If you have data available for many stores, you can fit a model that predicts the fraction of ungeocoded sales from the data available for a new block. You have already noticed, for example, that "if my store is near a college campus or has lots of construction, I would expect it to have higher ungeocoded sales, all else being equal." From that model you can forecast the fraction of geocoded sales as 1 − (fraction of ungeocoded sales). You can then check whether the fractions forecast by two separate models are consistent with this formula, i.e. whether they sum to 1. Checking the correlation between them for significance using Fisher's test would give an idea of whether deriving one fraction from the other is reasonable or not.
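As a rough sketch only (not part of the original answer), this idea could look roughly like the following in R, assuming a data frame blocks with a column frac_ungeocoded and hypothetical predictors near_campus and construction:

# Model the ungeocoded fraction, then derive the geocoded fraction from it
fit <- glm(frac_ungeocoded ~ near_campus + construction,
           family = quasibinomial(link = "logit"), data = blocks)
pred_ungeocoded <- predict(fit, newdata = new_block, type = "response")
pred_geocoded   <- 1 - pred_ungeocoded   # the formula discussed above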
51,202
Understanding similarity sensitive hashing algorithm in AdaBoost
It seems like the exponential loss they refer to is: $\sum_{f,f' \in N} \exp\left(-\operatorname{sign}(A_i f_p + b_i)\,\operatorname{sign}(A_i f'_p + b_i)\right)$ so I would imagine they simply take the argmin over $A \in \mathcal{A}$ and $b \in \mathcal{B}$, where $\mathcal{A}$ is the set of $A$ represented by the top ten eigenvectors and $\mathcal{B}$ is some range (e.g. $b \in \left[-1,1\right]$, since $\left| A \right| \leq 1$ from the constrained eigendecomposition): $\arg\min_{A \in \mathcal{A},\, b \in \mathcal{B}} \sum_{f,f' \in N} \exp\left(-\operatorname{sign}(A_i f_p + b_i)\,\operatorname{sign}(A_i f'_p + b_i)\right)$
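Purely as an added illustration (not from the original answer), a brute-force grid search over (A, b) could be sketched in R as below, assuming feat is a d x m matrix of descriptors, pairs is a 2-column matrix of neighbor indices into the columns of feat, and eigvecs is a d x 10 matrix holding the top-ten eigenvectors:

# Exponential loss for one candidate projection a and threshold b
exp_loss <- function(a, b, feat, pairs) {
  s <- sign(crossprod(feat, a) + b)                 # sign(a'f + b) for every descriptor f
  sum(exp(-s[pairs[, 1]] * s[pairs[, 2]]))
}
b_grid <- seq(-1, 1, by = 0.05)
scores <- sapply(seq_len(ncol(eigvecs)), function(i)
  sapply(b_grid, function(b) exp_loss(eigvecs[, i], b, feat, pairs)))
best <- which(scores == min(scores), arr.ind = TRUE)  # row: b index, column: eigenvector index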
51,203
What kind of statistical method do I need for comparing element concentrations?
Yes, this is an analysis of variance problem. Check out the aov function. You might want something like this ("geno" is the thing you want to test, right?):

> model1 <- aov(conc ~ region*elem*geno + Error(id), data=elemconc)
> model.null <- aov(conc ~ region*elem + Error(id), data=elemconc)
> summary(model1)

Error: id
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals  1  0.312   0.312

Error: Within
                 Df Sum Sq Mean Sq F value Pr(>F)
region            1   0.04   0.039    0.04   0.84
elem              2   1.70   0.848    0.87   0.43
geno              1   0.02   0.020    0.02   0.89
region:elem       2   3.23   1.614    1.66   0.21
region:geno       1   0.86   0.862    0.89   0.36
elem:geno         2   3.56   1.779    1.83   0.18
region:elem:geno  2   2.32   1.158    1.19   0.32
Residuals        23  22.37   0.973

> summary(model.null)

Error: id
          Df Sum Sq Mean Sq F value Pr(>F)
Residuals  1  0.312   0.312

Error: Within
            Df Sum Sq Mean Sq F value Pr(>F)
region       1   0.04   0.039    0.04   0.84
elem         2   1.70   0.848    0.84   0.44
region:elem  2   3.23   1.614    1.61   0.22
Residuals   29  29.13   1.005

> # F test comparing the two models from their residual sums of squares
> pf(((29.13-22.37)/6) / (29.13/29), 6, 29, lower.tail=FALSE)
[1] 0.3744
51,204
Understanding MANOVA in case of a single predictor
1) Is this approach (MANOVA followed by a series of ANOVAs) justified? Are there strict assumptions before we could take this path?

In the example data you provide, there is no correlation between A, B, and C. Hence, a MANOVA seems beside the point. Unless you are interested in the relationship between A, B, and C, or have some reason to think that the three will be somehow correlated, just skip to the ANOVA.

2) If yes, should there be some kind of correction for the second step, i.e. the series of ANOVAs for individual DVs (multiple comparisons)? What kind of correction?

No, likely not.

3) What is the recommended approach for problems like this with multiple DVs?

Well, if you know a relationship between them, or know that they will all be influenced by, say, subject, you have two possible options. If you know the relationship between them, try something like structural equation modeling. If there is some reason to suspect that each metric will be influenced in the same way by subject, then you need to control for this. Might I recommend the following paper, as it addresses most of your questions:

Keselman, H. J., Huberty, C. J., Lix, L. M., Olejnik, S., Cribbie, R. A., Donahue, B., Kowalchuk, R. K., Lowman, L. L., Petoskey, M. D., Keselman, J. C., and Levin, J. R. (1998). Statistical Practices of Educational Researchers: An Analysis of their ANOVA, MANOVA, and ANCOVA Analyses. Review of Educational Research, 68, 350–386. DOI: 10.3102/00346543068003350
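Purely as an added illustration (not from the original answer), the MANOVA-then-ANOVA route in R looks roughly like this, assuming a data frame d with outcomes A, B, C and a factor group:

fit <- manova(cbind(A, B, C) ~ group, data = d)
summary(fit, test = "Pillai")   # overall multivariate test
summary.aov(fit)                # follow-up univariate ANOVAs, one per DV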
51,205
Understanding MANOVA in case of a single predictor
The answer is in how you have simulated your data. It defines the stochastic process that you are assuming, and is enlightening on how you ought to draw inference. This

set.seed(100)
group <- rep(c(0, 1), each = 40)
A <- rnorm(80, 5, .5) + .1 * group
B <- rnorm(80, 9, .3) + .2 * group + .5 * A
C <- rnorm(80, 12, .3) + .2 * group + .7 * B
d.1 <- data.frame(A = A, B = B, C = C, group = group)

sets up a multivariate normal distribution. If you know the dependency structure and it is a directed acyclic graph--like your example, where A depends on group, B depends on A and group, and C depends on B and group--just do a series of linear regressions and do inference on the coefficient of the group term. If the dependency structure is more complicated, you should have a look at structural equation models. Either way, the superior method is to write out your likelihood function (which is the product of multivariate normals) and figure out how to estimate the parameters of your link function (in this case, linear).
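As an added sketch (not part of the original answer), the "series of linear regressions" suggested above, continuing with the simulated data frame d.1, would be:

# One regression per node of the DAG; inference is on the group coefficient
summary(lm(A ~ group,     data = d.1))$coefficients["group", ]
summary(lm(B ~ A + group, data = d.1))$coefficients["group", ]
summary(lm(C ~ B + group, data = d.1))$coefficients["group", ]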
51,206
Numerical accuracy of multivariate normal distribution
I am guessing you tested the code using a sigma which is not positive semidefinite. The 'accurate' implementation (which I am guessing is from the statistics toolbox) is computing $y^{\top}y$ where $y = C^{-\top} \left(x - \mu\right)$ where $C$ is the output of cholcov on $\Sigma$ (sigma). Note that $y^{\top}y$ must be non-negative by design. If you feed in a $\Sigma$ which is not PD, cholcov silently returns some version of $C$ and the rest of the code proceeds (I would have made this an error, I think). The code you wrote, however, is computing $\left(x - \mu\right)^{\top}\Sigma^{-1}\left(x - \mu\right)$. If $\Sigma$ is not PD, this quantity can be negative. When $\Sigma$ is positive definite, cholcov returns a proper Cholesky factorization, and the results are the same (up to round off).
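The original discussion is about MATLAB's cholcov, but the same point can be illustrated with a small R sketch (added here for illustration; mu, Sigma, and x are assumed inputs):

# Quadratic form via a Cholesky factor: always >= 0 when Sigma is positive definite
R <- chol(Sigma)                              # Sigma = t(R) %*% R; errors if Sigma is not PD
y <- backsolve(R, x - mu, transpose = TRUE)   # solves t(R) y = x - mu
sum(y^2)
# ... versus the direct inverse, which can go negative for an indefinite Sigma
drop(t(x - mu) %*% solve(Sigma) %*% (x - mu))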
51,207
How do I analyse data with a ceiling effect?
Is it the case that each individual's score is composed of the sum of 30 binary questions? If so, then you should analyze the raw data (1 or 0 for each question for each individual) with generalized linear mixed-effects models, treating individuals as random effects, and specify a binomial link. For example (in R):

library(lme4)
fit1 = lmer(
    data = my_raw_data
    , formula = accuracy ~ (1|individual)
    , family = binomial
)

This would fit a model where there is only an intercept. If you have a between-individuals manipulation coded in a variable called "A", you could evaluate the amount of evidence for an effect of A by:

fit2 = lmer(
    data = my_raw_data
    , formula = accuracy ~ (1|individual) + A
    , family = binomial
)
(AIC(fit1) - AIC(fit2)) * log2(exp(1))   # bits of evidence for an effect of A

where "bits of evidence" refers to a likelihood ratio represented on the log-base-2 scale; negative bits would represent evidence against an effect of A. The ezMixed() function from the ez package automates the computation of such evidence metrics, and the ezPredict() and ezPlot2() functions facilitate obtaining and visualizing effects. If I am wrong and the score does not represent the sum of 30 binary questions but instead the sum of some smaller number of Likert-coded questions, you could recode the Likert responses to binomial as I suggest here, then proceed as above.
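A side note added here (not part of the original answer): in current versions of lme4 the family argument has moved to glmer(), so the equivalent calls would look roughly like:

fit1 <- glmer(accuracy ~ (1 | individual),     data = my_raw_data, family = binomial)
fit2 <- glmer(accuracy ~ A + (1 | individual), data = my_raw_data, family = binomial)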
51,208
Interrupted time-series analysis for panel data
I would identify the true date of the intervention, i.e. when it was fully realized, for each state separately using intervention-detection schemes. The true date (de facto) can often differ from the "known date" (de jure) because of either a pre-effect or a post-effect (delay). With a common ARIMA model and one composite intervention variable constructed from the de facto dates, I would then globally estimate all parameters, i.e. the ARIMA coefficients and the national impact. The coefficient for the composite intervention variable would then reflect the national effect. When conducting this global estimation you need to ensure that the prediction/fitted value for the first reading in the second state is not affected by the latest readings for the first state. For example, if the ARIMA model is an AR(12), the first 12 expectations for each state would be the 12 observed values, thus generating 12 zero errors. Commercial software with this built-in feature is rare but available.
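For illustration only (this is not the author's software), a rough R sketch of fitting a single intervention regressor with ARIMA errors, assuming a series y and a 0/1 step variable intervention that switches on at the de facto date:

# Step-intervention regressor alongside ARIMA structure (order chosen arbitrarily here)
fit <- arima(y, order = c(1, 0, 0), xreg = cbind(step = intervention))
coef(fit)["step"]   # estimated impact of the intervention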
51,209
Creating an estimator with varying shock levels (SD) in R?
As it turns out, the code above does work. If you remove the comment characters (#) in the middle portion of the code, you have a simulation-based logit estimator (producing intercepts, slopes, the number of iterations until converging on the true value, and an output saying "Done"). The estimator adjusts the standard deviation after a given number of correct predictions. The variable "ti" gives the number of iterations the logit estimator runs until it converges on the true value. To manipulate the shock (standard deviation), adjust the value of the variable "f". Two examples are below:

nb0 <- b0 + rnorm(1, 0, 1*f)
nb1 <- b1 + rnorm(1, 0, 1*f)

nb0 <- b0 + rnorm(1, 0, 1/f)
nb1 <- b1 + rnorm(1, 0, 1/f)

To further clarify, below is code that plots the estimated slopes and intercepts. You'll notice that the estimates are less accurate at the beginning, but then converge toward the true value as the shock decreases after a given number of iterations:

plot(1:n, bin1, ylim = c(-10, 0), ylab = "", main = "intercepts")
plot(1:n, bin2, ylim = c(-2, 2),  ylab = "", main = "slopes")

Enjoy!
51,210
Sampling considerations in psychometric applications of item response theory
There are two sets of questions that are relevant here.

Does the model work the same way for different people and different groups in the population? E.g., if males and females have different slopes with respect to their "ability" (in IRT slang; in your case, this would be the degree of social anxiety), then that would be bad for an instrument, as it would give a different picture for males and females. With ordinal items, you may need to worry that people with different degrees of anxiety may have different thresholds, which is again bad for the instrument. Psychometricians investigate this under the title of differential item functioning (DIF), and you may want to check some intro reading on that topic. Heterogeneity of the sample is important here to the extent that different groups complete the instrument, and you have enough data to identify the problems with DIF, if any.

Identification: assuming that your instrument works exactly the same way for everybody, you need to ensure that all categories have been endorsed on all items. Using only the high-"ability"/high-anxiety people to calibrate the instrument will likely make it impossible to figure out what the thresholds for the ordinal scale are on its low end, and that's why you need enough controls who will rarely endorse the higher values on the scale. Once again, heterogeneity here is important, but it hinges on the absence of DIF.

On a side note, you should not trust every published result these days. Your judgement may be as good as the reviewers' judgement... especially if that journal has contacted you to write a referee report for them... especially if you did it more than once.
51,211
Sampling considerations in psychometric applications of item response theory
@Behacad, Building on the comment above, I think the answer is that the article in question was appropriate. Please check out the following link, as Dr. Panayides used the Rasch model and a polytomous set of data. I think you will find his methods of interest and that they will provide the information you need. All the best!
51,212
Project management for remote collaboration in prediction
I don't know of any off-the-shelf products specifically designed for collaboratively building predictive models, but I do think you can roll your own solution starting from a good version control system like git or hg and scripting tasks to make every step reproducible. All datasets should absolutely stay out of version control, but you should write shell or SQL or Python etc. scripts that fetch the raw data from your various sources and perform any data "munging" type tasks (filtering, cleaning, transforming, and so on). These data manipulation scripts should be tracked in version control, and I like to name them such that if there are dependencies or an implied order, this is maintained in the natural (alphabetic) directory listing. For instance, I may have scripts:

01-fetch_census_data.sh
02-scrub_census_incomes.sh
03-60train_20test_20val_split.sh
03-75train_25test_split.sh

... and so on. Depending on how much time you want to invest, you can write code to cache intermediate results so long-running steps are only run when necessary, or have dependencies automatically identified and handled. Similarly, you'll have some amount of code written in your favorite model development language that will take in processed datasets and perhaps learned model (hyper-)parameters, and will produce some output in the form of learned model parameters or predictions on some dataset. You want to track this code, but you also want to track in version control the learned parameter values (for later blending). Finally, you'll need some top-level driver script that picks out the appropriate sequential combination of data processing and modeling algorithm scripts. This should represent a complete end-to-end experiment, starting from the raw data and ending with a trained model (and likely some form of evaluation, e.g. plots or performance metrics, which again can be kept under version control for reference). The important thing to keep in mind is that any of your collaborators should be able to simply take a clone of your repository and, assuming they have the necessary access rights to your raw data sources, completely reproduce (and extend) any of your experiments. If you use R, check out something like the ProjectTemplate package as a good skeletal starting point for your project.
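For illustration (not part of the original answer), the ProjectTemplate workflow mentioned above looks roughly like this in R; the project name here is just a placeholder:

library(ProjectTemplate)
create.project("my_prediction_project")   # sets up data/, munge/, src/ and related directories
setwd("my_prediction_project")
load.project()                            # runs the munging scripts and loads the data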
51,213
Project management for remote collaboration in prediction
These might be helpful: Kepler, Process Makna, Taverna, and this paper.
51,214
Modeling a 1D random walk with nonconstant probabilities
Here is a variant on your suggestion: If at $0$, go right with probability $k$ and left with probability $1-k$. If right of $0$, go right with probability $p$ and left with probability $1-p$; if left of $0$, go right with probability $1-p$ and left with probability $p$. Provided $p \lt \frac{1}{2}$, the expected time that the walk is right of $0$, as a proportion of the time it is not at $0$, is $k$ (which I think is really what you are asking for), and the closer $p$ is to $\frac{1}{2}$, the smaller the proportion of the total time the walk will spend at $0$. For $p \gt \frac{1}{2}$, there is a positive probability the walk will never return to $0$, while for $p = \frac{1}{2}$ the expected time for each return is infinite.
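A quick Monte Carlo check of this claim (an added sketch; the values of k and p are arbitrary assumptions):

set.seed(1)
k <- 0.3; p <- 0.4; n_steps <- 200000
x <- numeric(n_steps); pos <- 0
for (i in seq_len(n_steps)) {
  pr_right <- if (pos == 0) k else if (pos > 0) p else 1 - p
  pos <- pos + if (runif(1) < pr_right) 1 else -1
  x[i] <- pos
}
mean(x[x != 0] > 0)   # proportion of non-zero time spent right of 0; close to k when p < 1/2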
51,215
Modeling a 1D random walk with nonconstant probabilities
How about something like $$dx = a\,dt + b\,dW^1_t - c\,\operatorname{sgn}(x)\,dW^2_t$$ where $W^1_t$ and $W^2_t$ are two distinct Wiener processes? It has a central tendency as long as $c>a$, and $a$ should determine the fraction $k$ it spends over the origin.
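A rough Euler–Maruyama simulation sketch of this process (added for illustration; the parameter values are arbitrary assumptions, and cc plays the role of c in the SDE):

set.seed(1)
a <- 0.05; b <- 1; cc <- 1.5; dt <- 0.01; n <- 100000
x <- numeric(n)
for (i in 2:n) {
  x[i] <- x[i - 1] + a * dt + b * sqrt(dt) * rnorm(1) -
          cc * sign(x[i - 1]) * sqrt(dt) * rnorm(1)
}
mean(x > 0)   # rough fraction of time spent to the right of the origin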
51,216
How to predict how much data to collect?
If you can put your question in the form of a t-test, then Lehr's rule can be used to estimate the required sample size. For a 2-sided, one sample t-test at the 0.05 level, one can achieve power of 0.80 by using $n = 8 / \Delta^2$, where $\Delta$ is the 'signal to noise' (mean divided by standard deviation). I cannot figure out if you mean this to be 0.155%, but if so, you're looking at roughly 100K years!
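To make the arithmetic concrete (an added sketch, not part of the original answer), Lehr's rule with a signal-to-noise ratio of 0.155% gives a sample size in the millions:

delta <- 0.00155        # assumed signal-to-noise ratio (mean / sd)
ceiling(8 / delta^2)    # Lehr's rule: roughly 3.3 million observations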
51,217
How do I calculate error propagation with different measures of error?
In some sense this depends on what you mean by $x$ and $\delta x$. Usually people mean that they are modeling $X$ as a random variable with mean $x$ and variance $(\delta x)^2$. Sometimes they mean the stronger condition that $X$ is actually Gaussian, and sometimes they have a broader meaning in which $x$ and $\delta x$ can be other measures of the center and the spread. A bit of calculus and handwaving shows that for small variations that are also approximately Gaussian, and $X$ and $Y$ independent, $f(X, Y)$ can be approximately described as having mean $f(x, y)$ and $(\delta f)^2 = (\delta x)^2 (\frac{\partial f}{\partial x})^2 + (\delta y)^2 (\frac{\partial f}{\partial y})^2$. We can do the same thing for $a(m, r) = m / r$, where $a$ is the calculated age, $m$ is the mass, and $r$ is the rate. $$ \begin{align*} (\delta a)^2 &= (\delta m)^2 / r^2 + (\delta r)^2 m^2 / r^4 \\ a^2 &= m^2 / r^2 \\ (\delta a)^2/a^2 &= (\delta m)^2/m^2 + (\delta r)^2 / r^2 \\ (\delta a)/a &= \sqrt{(\delta m)^2/m^2 + (\delta r)^2 / r^2} \\ \end{align*} $$ This matches the formula you have. You just have to convert between absolute errors and relative errors to be able to use it. Edited to add (incorporating comments): To convert the sedimentation rate to relative error, just use $(\delta r)/r = 10\% = 0.1$. You need to find $\delta m$, the standard error of the mean. It's not clear whether you have $\delta m_i$ for each individual sediment-core measurement. If you do, you want to find $m$ with a weighted mean, and calculating the standard error is a bit tricky, but the prescription given above for a general $f$ extends fine to three arguments. If not, the ordinary mean can be used, and the variance of the sample can be used to calculate the standard error of the mean. The relative error is of course just $(\delta m)/m$.
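Numerically, the last formula is easy to apply; here is an added R sketch with made-up values for the mass and rate:

# Propagation for a = m / r using relative errors (values are assumptions, not real data)
m <- 120; dm <- 5          # mass and its standard error
r <- 2;   dr <- 0.1 * r    # sedimentation rate with 10% relative error
a  <- m / r
da <- a * sqrt((dm / m)^2 + (dr / r)^2)
c(age = a, error = da)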
51,218
How do I calculate error propagation with different measures of error?
See my book, "Propagation of Errors" by Mike Peralta, at amazon.com. It treats the seldom-covered topic of second-order propagation of errors.
51,219
Automatically detecting sudden change of mean
If I understand you correctly, you might need to learn about multiple comparisons: http://en.wikipedia.org/wiki/Multiple_comparisons The choice of a particular procedure is a different question, e.g., Scheffé vs. Tukey vs. Bonferroni. At least in this framework, there is a clear and straightforward way to do hypothesis testing as well as confidence interval estimation.
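For illustration only (not part of the original answer), the standard R tools for two of the procedures mentioned, assuming a response y and a grouping factor g:

fit <- aov(y ~ g)
TukeyHSD(fit)                                          # Tukey's honest significant differences
pairwise.t.test(y, g, p.adjust.method = "bonferroni")  # Bonferroni-adjusted pairwise tests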
51,220
Automatically detecting sudden change of mean
The answer to your question can be found deep in http://www.unc.edu/~jbhill/tsay.pdf and is easily available from software like AUTOBOX (which I have helped develop) and elsewhere. What you have is a sequence of medians from 1 to 19, and what you want to do is to somehow discriminate between the first k of these medians and the remaining 19-k medians. Searching for break-points is an iterative process, sometimes requiring pre-filtering to deal with ARIMA structure. One has to pre-specify the minimum number in a class in order to determine the number of classes. If you specified "3", then no conclusions could be drawn about when the regime shift took place. On the other hand, if you specified a "1", then one might conclude that a number of break-points were found (2, 3, 4, 9, 10, 11, 12, 19). Given that you specify a "2", it is fairly obvious both to your eye and to AUTOBOX and to R. Tsay's procedure that a significant change occurred at period 3. If you would like to post the 19 medians, I will post an analysis of the data.
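As an open-source alternative to the commercial route described above (added for illustration, not the author's method), the strucchange package in R can search for break-points in the mean level, assuming the 19 medians are stored in a vector med:

library(strucchange)
bp <- breakpoints(med ~ 1, h = 2)   # h is the minimum segment size, as discussed above
summary(bp)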
51,221
Resume buzz words [closed]
I contacted Sean at RezScore and he clarified some things for me. In a nutshell, inserting buzzwords into a hidden text box seems to be a good idea if you don't want to put them in your actual resume. However, you should be selective about which words you include because many of the algorithms penalize verbosity. Maybe RezScore will include a feature to do just this for specific industries; I'd bet it would be cheaper than a resume rewrite.
51,222
What research tool to use when researching a project?
Steffen's comment and link to the question above are very useful. I would say that it depends. I am going to assume that you are using LaTeX, at least (for mathematical typesetting), and possibly R (because it's free, and great). In that case, especially given that you have mentioned org-mode, I would suggest using Emacs to organise your statistical analysis. The advantages are as follows: it unifies your statistical analysis (Emacs Speaks Statistics) and your paper writing (LaTeX) - Emacs's Sweave support is also the best I have found. You also have Ebib to manage references, and this is very nice. Emacs is so customizable that anything you need to do can be done from within it. However, it has quite a steep learning curve, and you may need to unlearn some shortcuts and ways of dealing with programs that are commonplace elsewhere. Emacs also has integrated version control; I cannot speak to its quality as I have not used it. On reference management, I personally didn't like Zotero, and use JabRef. JabRef is nice for its GUI and simplicity, and could support you while you learn enough about Emacs to be productive within it. It also has a cite-while-you-write plugin for OpenOffice, if you like that kind of thing. HTH.
51,223
Modeling multinomial problems with unknown sample size in BUGS
Better late than never... The covariance matrix has diagonal entries $np_i(1-p_i)$ and off-diagonal entries $-np_ip_j$. JAGS and BUGS allow you to invert a matrix numerically (sigma[1:K, 1:K] <- inverse(tau[,]) in WinBUGS), so you don't actually need a closed-form expression for the precision matrix. Your approach doesn't sound unreasonable, if your sample size is large enough. An alternative would be to just run a bunch of models with different specified effective sample sizes, and pick the one with the best deviance. A manual implementation of golden section search, if you will.
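Purely as an added sketch (not part of the original answer), the covariance matrix described above is easy to build in R; p and n here are made-up inputs:

p <- c(0.2, 0.3, 0.5); n <- 50; K <- length(p)
Sigma <- -n * outer(p, p)          # off-diagonal entries: -n p_i p_j
diag(Sigma) <- n * p * (1 - p)     # diagonal entries: n p_i (1 - p_i)
solve(Sigma[1:(K - 1), 1:(K - 1)]) # precision of the first K-1 cells; the full Sigma is singular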
51,224
Interpretation of a one cluster solution using the EM cluster algorithm
Two assumptions here: 1) Weka is finding the number of clusters (k) without issues, and 2) I believe EM uses mixtures of Gaussians, which means the clusters need to be round/elliptical. So, given that Weka's algorithm is finding the best k, the answer would be that, using round/elliptical clusters, the most likely clustering is one group. That doesn't mean that your data doesn't cluster at all (using other shapes, essentially).
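For comparison (an added illustration, not part of the original answer), an R Gaussian-mixture clustering that also selects the number of components, assuming the features are in a matrix X:

library(mclust)
fit <- Mclust(X)    # chooses the number of components and covariance shape by BIC
fit$G               # the selected number of clusters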
51,225
Simultaneous Equation System for logit/probit?
This is possible. The type of model you need is called multivariate probit. For a textbook treatment you can refer to Greene's Econometric Analysis. However, from the computational point of view, these models can be laborious. Convergence can be slow or can even fail. Multivariate probit models are implemented in R and in Stata.
Simultaneous Equation System for logit/probit?
This is possible. The type of model you need is called multivariate probit. For a textbook treatment you can refer to Greene's Econometric Analysis. However, from the computational point of view, the
Simultaneous Equation System for logit/probit? This is possible. The type of model you need is called multivariate probit. For a textbook treatment you can refer to Greene's Econometric Analysis. However, from the computational point of view, these models can be laborious. Convergence can be slow or can even fail. Multivariate probit models are implemented in R and in Stata.
Simultaneous Equation System for logit/probit? This is possible. The type of model you need is called multivariate probit. For a textbook treatment you can refer to Greene's Econometric Analysis. However, from the computational point of view, the
51,226
Simultaneous Equation System for logit/probit?
I am also trying to find a solution to this. The currently available (user-written, in Stata) command is so slow that I got no solution out of it. A simpler approach I am using is propensity score matching, which addresses the selective treatment effect; it works only when you have a large number of observations. The estimated variance is tricky, so you have to estimate the relative risk (RR), which has a neat variance estimator, and then convert the RR into the change in probability attributable to the other binary variable (the treatment variable). A confidence interval is not directly available, although you can test significance using the RR. The Heckman model only estimates the effect of a treatment (binary variable) on a continuous outcome.
Simultaneous Equation System for logit/probit?
I am also trying to find a solution to it. But the current available (user-written in stata) command is so slow that I got no solution. A simple solution I am using is the propensity score matching me
Simultaneous Equation System for logit/probit? I am also trying to find a solution to it. But the current available (user-written in stata) command is so slow that I got no solution. A simple solution I am using is the propensity score matching method that solves selective treatment effect. it works only when you have large observations. The estimated variance is tricky. so you have to estimate relative risk that has a neat variance estimator and convert the relative risk (rr) to probability change as a result of the other binary variable (the treatment variable). The confidence interval is not available although you can test the significance of the using rr. The Heckman model only estimate effect of a treatment (binary variable) on continuous output.
Simultaneous Equation System for logit/probit? I am also trying to find a solution to it. But the current available (user-written in stata) command is so slow that I got no solution. A simple solution I am using is the propensity score matching me
51,227
Monte Carlo estimation of convex hull overlap probability
I know it's been a while since @Ganesh posted this question, but hopefully you're still interested in a response. I've written some R code that does what you want, I think:

library(ggplot2)
library(MASS)

## Steps:
##  1. draw b 'blue' points and r 'red' points
##  2. perform LDA
##  3. check the error rate

# drawPoints: draws b + r points uniformly from the unit square and labels them
drawPoints <- function(b, r) {
  x <- runif(b + r)
  y <- runif(b + r)
  class <- factor(c(rep('b', b), rep('r', r)))  # ensure a factor (needed for the identical() check in recent R)
  return(data.frame(x = x, y = y, class = class))
}

# checkOverlap: if the data are linearly separable there is no overlap in the
# convex hulls of the two classes; returns FALSE for a zero LDA error rate,
# TRUE otherwise
checkOverlap <- function(df) {
  disc.anal <- lda(class ~ x + y, data = df)
  return(!identical(predict(disc.anal)$class, df$class))
}

## Simulate many trials to estimate the rate of overlap
trials <- 1000
performance <- rep(as.numeric(NA), 10)
for (i in 1:10) {
  results <- replicate(n = trials, expr = checkOverlap(drawPoints(10, i)))
  performance[i] <- prop.table(table(results))['TRUE']
}

qplot(x = 1:length(performance), y = performance,
      xlab = 'Number of Red Points',
      ylab = 'Proportion of Simulations with Overlap',
      main = 'Proportion of Simulations with Overlap, 10 Blue Points')
Monte Carlo estimation of convex hull overlap probability
I know it's been a while since @Ganesh posted this question, but hopefully you're still interested in a response. I've written some R code that does what you want, I think: library(ggplot2) library(MA
Monte Carlo estimation of convex hull overlap probability I know it's been a while since @Ganesh posted this question, but hopefully you're still interested in a response. I've written some R code that does what you want, I think: library(ggplot2) library(MASS) ################################################# ################################################# ## Steps: ## ## 1. draw b 'blue' points and r 'red' points ## ## 2. perform LDA ## ## 3. check error rate ## ################################################# ################################################# ################################################## # function: drawPoints # # # # description: draws b + r points uniformly from # # a unit square # # # # inputs: b - number of 'blue' points to draw # # r - number of 'red' points to draw # # # # outputs: data frame containing points # ################################################## drawPoints <- function(b,r) { x <- runif(b+r) y <- runif(b+r) class <- c(rep('b',b),rep('r',r)) return(data.frame(x = x, y = y, class = class)) } ################################################## # function: checkOverlap # # # # description: if the data is linearly separable # # there is no overlap in the convex # # hulls of the different classes # # # # inputs: df - data frame containing classified # # points # # # # outputs: FALSE if 0 error rate # # TRUE otherwise # ################################################## checkOverlap <- function(df) { disc.anal <- lda(class ~ x + y, data = df) return(!identical(predict(disc.anal)$class,df$class)) } ###################################################### ###################################################### ## Simulate many trials to estimate rate of overlap ## ###################################################### ###################################################### trials <- 1000 performance <- rep(as.numeric(NA),10) for(i in 1:10) { results <- replicate(n = trials, expr = checkOverlap(drawPoints(10,i))) performance[i] <- prop.table(table(results))['TRUE'] } qplot(x = 1:length(performance), y = performance, xlab = 'Number of Red Points', ylab = 'Proportion of Simulations with Overlap', main = 'Proportion of Simulations with Overlap, 10 Blue Points')
Monte Carlo estimation of convex hull overlap probability I know it's been a while since @Ganesh posted this question, but hopefully you're still interested in a response. I've written some R code that does what you want, I think: library(ggplot2) library(MA
51,228
Why are cumulative residuals from regression on stock and index returns mean reverting
The significance of modeling the cumulative sum of residuals is to better approximate the Ornstein-Uhlenbeck process of equation $(12)$ with discrete real-life data. This process $X_i(t)$ represents the idiosyncratic above- or below-market fluctuations of the particular stock. More specifically, it is the difference between the stock's return and that of its industry sector (ETF). The expected value of the infinitesimal increment $dX_i(t)$ of the $X_i(t)$ process is based on the previous value of the process: $$ E[dX_i(t)|X_i(s),s{\le}t] = {\kappa}_i(m_i-X_i(t))dt $$ Note the $X_i(t)$ on the right-hand side, suggesting a cumulative process. The authors approximate a stock's $X_i(t)$ process with actual market data by first regressing the stock on its industry ETF (top of p.45), and then summing the residuals up to a certain point in time. This represents the cumulative above- or below-market return of the stock before the end of the regression time window.
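For concreteness, here is a short R sketch of that construction (my own illustration with simulated returns; the numbers and the AR(1) back-out of the OU parameters are assumptions, not taken from the paper):
set.seed(1)
etf   <- rnorm(60, 0, 0.01)              # 60 daily sector-ETF returns (simulated)
stock <- 0.9 * etf + rnorm(60, 0, 0.01)  # the stock's returns (simulated)

fit <- lm(stock ~ etf)                   # the regression at the top of p.45
X   <- cumsum(resid(fit))                # cumulative idiosyncratic return, approximating X_i(t)

# One common way to estimate the OU speed of mean reversion is from the AR(1)
# fit X[t] = a + b*X[t-1] + e, with kappa = -log(b) per time step.
ar1       <- lm(X[-1] ~ X[-length(X)])
kappa_hat <- -log(coef(ar1)[2])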
Why are cumulative residuals from regression on stock and index returns mean reverting
The significance of modeling the cumulative sum of residuals is to better approximate the Ornstein-Uhlembeck process of equation $(12)$ with discrete real-life data. This process $X_i(t)$ represents t
Why are cumulative residuals from regression on stock and index returns mean reverting The significance of modeling the cumulative sum of residuals is to better approximate the Ornstein-Uhlembeck process of equation $(12)$ with discrete real-life data. This process $X_i(t)$ represents the idiosyncratic above- or below- market fluctuations of the particular stock. More specifically, it is the difference between the stock's return and that of its industry sector (ETF). The expected value of the infinitesimal increment $dX_i(t)$ of the $X_i(t)$ process is based on the previous value of the process: $$ E[dX_i(t)|X_i(s),s{\le}t] = {\kappa}_i(m_i-X_i(t))dt $$ Note the $X_i(t)$ on the right-hand side, suggesting a cumulative process. The authors approximate a stock's $X_i(t)$ process with actual market data by first regressing the stock on its industry ETF (top of p.45), and then summing the residuals up to a certain point in time. This represents the cumulative above- or below- market return of the stock before the end of the regression time window.
Why are cumulative residuals from regression on stock and index returns mean reverting The significance of modeling the cumulative sum of residuals is to better approximate the Ornstein-Uhlembeck process of equation $(12)$ with discrete real-life data. This process $X_i(t)$ represents t
51,229
Adding high-dimensional data to mutivariate Cox model
"One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting." You could penalise model complexity to avoid overfitting. My favourite is the stepAIC function from the MASS package, which uses AIC (it can be configured to use BIC) as the goodness-of-fit criterion.
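As a rough sketch of what that looks like with a Cox model (the data frame dat and its column names below are hypothetical placeholders):
library(survival)
library(MASS)

full <- coxph(Surv(time, status) ~ age + stage + grade + marker1 + marker2,
              data = dat)
sel  <- stepAIC(full, direction = "both", trace = FALSE)  # AIC-penalised stepwise search
summary(sel)
# Passing a larger penalty, e.g. k = log(sum(dat$status)), gives a BIC-like criterion instead.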
Adding high-dimensional data to mutivariate Cox model
One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting. You could penalise model complexity to avoid overfitting. My favourite i
Adding high-dimensional data to mutivariate Cox model One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting. You could penalise model complexity to avoid overfitting. My favourite is the stepAIC function from the MASS package that uses AIC (can be configured to use BIC) as a goodness of fit.
Adding high-dimensional data to mutivariate Cox model One approach would simply be to carry on with the forward LR testing, although this would leave me very prone to overfitting. You could penalise model complexity to avoid overfitting. My favourite i
51,230
Adding high-dimensional data to mutivariate Cox model
Edit: after the comment below from EdS, my original answer is no longer meaningful. @EdS, thanks for the further information!
Adding high-dimensional data to mutivariate Cox model
Edit: after the comment below from EdS my original answer was not meaningful any more. @EdS, thanks for the further information!
Adding high-dimensional data to mutivariate Cox model Edit: after the comment below from EdS my original answer was not meaningful any more. @EdS, thanks for the further information!
Adding high-dimensional data to mutivariate Cox model Edit: after the comment below from EdS my original answer was not meaningful any more. @EdS, thanks for the further information!
51,231
How to average quantized and truncated data?
http://en.wikipedia.org/wiki/Truncation_%28statistics%29 This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer to a paper. This should do as a starting point for further searching. Oh, and the Winsorized mean is the exact opposite of what you want.
How to average quantized and truncated data?
http://en.wikipedia.org/wiki/Truncation_%28statistics%29 This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer
How to average quantized and truncated data? http://en.wikipedia.org/wiki/Truncation_%28statistics%29 This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer to a paper. This should do as a starting point for further search. Oh, and Winsorized mean is the exact opposite from what you want.
How to average quantized and truncated data? http://en.wikipedia.org/wiki/Truncation_%28statistics%29 This is not much help, but at least it gives the correct buzzword (truncated, not quantized; quantization is not your problem) and one pointer
51,232
How to average quantized and truncated data?
If your data follow a truncated normal distribution, this link gives you an implementation in the R language for computing the mean and variance of a truncated normal distribution: http://www.r-bloggers.com/truncated-normal-distribution/
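In case the link goes stale, here is a minimal base-R sketch of the standard formulas for the mean and variance of a normal distribution truncated to [a, b] (the parameter values in the example call are arbitrary):
trunc_norm_moments <- function(mu, sigma, a, b) {
  alpha <- (a - mu) / sigma
  beta  <- (b - mu) / sigma
  Z     <- pnorm(beta) - pnorm(alpha)                      # mass kept by the truncation
  m <- mu + sigma * (dnorm(alpha) - dnorm(beta)) / Z
  v <- sigma^2 * (1 + (alpha * dnorm(alpha) - beta * dnorm(beta)) / Z -
                  ((dnorm(alpha) - dnorm(beta)) / Z)^2)
  c(mean = m, var = v)
}
trunc_norm_moments(mu = 0, sigma = 1, a = -1, b = 2)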
How to average quantized and truncated data?
If your data follow a truncated normal distribution, this link gives you a implementation in R language for the computation of the mean and variance of a truncated normal distribution : http://www.r-
How to average quantized and truncated data? If your data follow a truncated normal distribution, this link gives you a implementation in R language for the computation of the mean and variance of a truncated normal distribution : http://www.r-bloggers.com/truncated-normal-distribution/
How to average quantized and truncated data? If your data follow a truncated normal distribution, this link gives you a implementation in R language for the computation of the mean and variance of a truncated normal distribution : http://www.r-
51,233
How to estimate time-per-product in a factory?
If you are interested in the amount of time it takes to complete an order, it seems that a duration analysis (aka survival or event history analysis) would be most appropriate. See the Wikipedia entry for an overview: http://en.wikipedia.org/wiki/Survival_analysis This introduction, which covers issues such as censoring, looks relevant and accessible: Survival Analysis Introduction And if you are so inclined, R has a task view dedicated to survival analysis: R Survival Analysis Task View Since you know pretty well what steps go into the production of each item, and because you seem interested in forecasting, you may begin by estimating a parametric model, such as a Weibull or log-logistic/log-normal. Most software capable of estimating these models will also provide the tools to forecast average time-to-completion for different orders. You should also be able to produce plots of estimated durations.
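A hedged sketch of what the parametric route could look like in R; the data frame orders and its columns (hours, completed, n_items, product_type, n_workers) are hypothetical placeholders for your production data:
library(survival)

fit <- survreg(Surv(hours, completed) ~ n_items + product_type + n_workers,
               data = orders, dist = "weibull")
summary(fit)

# Predicted median completion time for a new order (hypothetical values):
new_order <- data.frame(n_items = 250, product_type = "widget", n_workers = 4)
predict(fit, newdata = new_order, type = "quantile", p = 0.5)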
How to estimate time-per-product in a factory?
If you are interested in the amount of time it takes to complete an order, it seems that a duration analysis (aka survival or event history analysis) would be most appropriate. See the Wikipedia entry
How to estimate time-per-product in a factory? If you are interested in the amount of time it takes to complete an order, it seems that a duration analysis (aka survival or event history analysis) would be most appropriate. See the Wikipedia entry for an overview: http://en.wikipedia.org/wiki/Survival_analysis This introduction, which covers issues such as censoring, looks relevant and accessible: Survival Analysis Introduction And if you are so inclined, R has a task view dedicated to survival analysis: R Survival Analysis Task View Since you know pretty well what steps go into the production of each item, and because you seem interested in forecasting, you may begin by estimating a parametric model, such as a Weibull or log-logistic/log-normal. Most software capable of estimating these models will also provide the tools to forecast average time-to-completion for different orders. You should also be able to produce plots of estimated durations.
How to estimate time-per-product in a factory? If you are interested in the amount of time it takes to complete an order, it seems that a duration analysis (aka survival or event history analysis) would be most appropriate. See the Wikipedia entry
51,234
Inverse logistic regression vs. repeated-measures vs. latent class?
My first thought would be to regress education (using a proportional odds model or whatever is appropriate for your education variable) on person-level variables and a few simple transportation choice aggregates. The main variable that comes to mind is the proportion of train vs. bus rides (%train), but if you only have two event level variables -- distance and duration -- then another option would be %train-near, %train-far, %train-short, %train-long. If something simple like the above won't work because you have too many event level variables or you're not willing to categorize them, then your first thought of using a logistic regression with random effects for person-level variables (I presume) is the right idea. However, I would modify your suggestion by using a structural equation model (SEM) to regress education on transportation choice, which is in turn regressed on event and person level variables (except for education) and the random effects. Education can additionally be regressed directly on the event and person level variables. All regressions are estimated simultaneously. This can be done in Mplus, but currently is not possible in R, as far as I know, because none of the SEM packages (lavaan, sem, e.g.) allow for mixed effects like those offered by the lme4 package. It can probably be done in SAS with a lot of coding. No idea about other software. Is your second thought of regressing education on combinations of your predictors feasible given the number of combinations and amount of data? How many event and person level variables do you have? Latent class regression wouldn't make sense for your data because individual response patterns aren't comparable (e.g. person 1 might have chosen 00 for near-short, near-short and person 2 might have chosen 0000 for far-long, far-long, far-long, far long -- you could recode response vectors with a lot of missing values, but there are better approaches).
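As a sketch of the first (aggregate) suggestion, assuming a hypothetical person-level data frame people with an ordered education variable and a pct_train column computed from each person's trips:
library(MASS)

people$education <- factor(people$education, ordered = TRUE)
fit <- polr(education ~ pct_train + age + income, data = people, Hess = TRUE)
summary(fit)   # proportional-odds model of education on the choice aggregates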
Inverse logistic regression vs. repeated-measures vs. latent class?
My first thought would be to regress education (using a proportional odds model or whatever is appropriate for your education variable) on person-level variables and a few simple transportation choice
Inverse logistic regression vs. repeated-measures vs. latent class? My first thought would be to regress education (using a proportional odds model or whatever is appropriate for your education variable) on person-level variables and a few simple transportation choice aggregates. The main variable that comes to mind is the proportion of train vs. bus rides (%train), but if you only have two event level variables -- distance and duration -- then another option would be %train-near, %train-far, %train-short, %train-long. If something simple like the above won't work because you have too many event level variables or you're not willing to categorize them, then your first thought of using a logistic regression with random effects for person-level variables (I presume) is the right idea. However, I would modify your suggestion by using a structural equation model (SEM) to regress education on transportation choice, which is in turn regressed on event and person level variables (except for education) and the random effects. Education can additionally be regressed directly on the event and person level variables. All regressions are estimated simultaneously. This can be done in Mplus, but currently is not possible in R, as far as I know, because none of the SEM packages (lavaan, sem, e.g.) allow for mixed effects like those offered by the lme4 package. It can probably be done in SAS with a lot of coding. No idea about other software. Is your second thought of regressing education on combinations of your predictors feasible given the number of combinations and amount of data? How many event and person level variables do you have? Latent class regression wouldn't make sense for your data because individual response patterns aren't comparable (e.g. person 1 might have chosen 00 for near-short, near-short and person 2 might have chosen 0000 for far-long, far-long, far-long, far long -- you could recode response vectors with a lot of missing values, but there are better approaches).
Inverse logistic regression vs. repeated-measures vs. latent class? My first thought would be to regress education (using a proportional odds model or whatever is appropriate for your education variable) on person-level variables and a few simple transportation choice
51,235
Properties of Battacharyya distance vs Kullback-Leibler divergence
First, the properties are explained competently here and there. Which one is better suited to a given purpose will depend on said purpose, so you might think about rephrasing this part of your question.
Properties of Battacharyya distance vs Kullback-Leibler divergence
First properties are explained competently here and there. Which one is better suited to a given purpose will depend on said given purpose so you might think about rephrasing this part of your quest
Properties of Battacharyya distance vs Kullback-Leibler divergence First properties are explained competently here and there. Which one is better suited to a given purpose will depend on said given purpose so you might think about rephrasing this part of your question.
Properties of Battacharyya distance vs Kullback-Leibler divergence First properties are explained competently here and there. Which one is better suited to a given purpose will depend on said given purpose so you might think about rephrasing this part of your quest
51,236
Quantile extrapolation?
If you are also interested in quantiles with $q>1-1/n$, there is no definitive answer. You need to supply more details, since for distributions with heavy tails the estimation of such quantiles involves quite complicated mathematics. Try a Google search for tail index estimation and you will get a plethora of links.
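For example, the Hill estimator is the usual starting point for tail index estimation; here is a minimal R sketch (the Pareto-type sample and the choice k = 100 are arbitrary, and in practice choosing k is the hard part):
hill <- function(x, k) {
  xs <- sort(x, decreasing = TRUE)
  1 / mean(log(xs[1:k] / xs[k + 1]))   # estimated tail index alpha
}
set.seed(1)
x <- 1 / runif(1000)^(1/2)             # heavy-tailed sample with true alpha = 2
hill(x, k = 100)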
Quantile extrapolation?
If you are also interested in quantiles with $q>1-1/n$ there is no definitive answer. You need to supply more details, since for distributions with heavy tails the estimation of such quantiles involve
Quantile extrapolation? If you are also interested in quantiles with $q>1-1/n$ there is no definitive answer. You need to supply more details, since for distributions with heavy tails the estimation of such quantiles involves quite complicated mathematics. Try google search for tail index estimation and you will get plethora of links.
Quantile extrapolation? If you are also interested in quantiles with $q>1-1/n$ there is no definitive answer. You need to supply more details, since for distributions with heavy tails the estimation of such quantiles involve
51,237
Is there a bias correction for effect size in a data mining context?
This answer may be way, way off base as I don't understand the medical context of your question and the nature of the medical test results you allude to, but it might be possible to estimate data mining bias by some sort of Monte Carlo permutation of your results. The type of approach I'm thinking of is taken from a book, Evidence Based Technical Analysis, which has a companion website here. Although this is written with the financial markets in mind it is, in my opinion, a book about Monte Carlo techniques as much as it is about markets.
Is there a bias correction for effect size in a data mining context?
This answer may be way, way off base as I don't understand the medical context of your question and the nature of the medical test results you allude to, but it might be possible to estimate data mini
Is there a bias correction for effect size in a data mining context? This answer may be way, way off base as I don't understand the medical context of your question and the nature of the medical test results you allude to, but it might be possible to estimate data mining bias by some sort of Monte Carlo permutation of your results. The type of approach I'm thinking of is taken from a book, Evidence Based Technical Analysis, which has a companion website here. Although this is written with the financial markets in mind it is, in my opinion, a book about Monte Carlo techniques as much as it is about markets.
Is there a bias correction for effect size in a data mining context? This answer may be way, way off base as I don't understand the medical context of your question and the nature of the medical test results you allude to, but it might be possible to estimate data mini
51,238
Is there a bias correction for effect size in a data mining context?
Ok, duh, one approach would be to use the James-Stein shrinkage. This will not, I believe, unbias the estimates, but will reduce the mean squared error.
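To make that concrete, here is a minimal sketch (my own, with the caveat that classic James-Stein theory assumes equal variances, so the estimates are standardised first) of positive-part James-Stein shrinkage of a vector of effect-size estimates toward zero:
james_stein <- function(est, se) {
  z      <- est / se                                  # standardise the estimates
  shrink <- max(0, 1 - (length(z) - 2) / sum(z^2))    # positive-part shrinkage factor
  shrink * z * se                                     # shrink toward 0, back on the original scale
}
# e.g. james_stein(est = c(0.8, 0.3, -0.5, 1.2, 0.1), se = rep(0.4, 5))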
Is there a bias correction for effect size in a data mining context?
Ok, duh, one approach would be to use the James-Stein shrinkage. This will not, I believe, unbias the estimates, but will reduce the mean squared error.
Is there a bias correction for effect size in a data mining context? Ok, duh, one approach would be to use the James-Stein shrinkage. This will not, I believe, unbias the estimates, but will reduce the mean squared error.
Is there a bias correction for effect size in a data mining context? Ok, duh, one approach would be to use the James-Stein shrinkage. This will not, I believe, unbias the estimates, but will reduce the mean squared error.
51,239
How should I objectively test my program results?
You can count the recognition of a chorus track as a 'Success' and the lack of recognition as a 'Failure'. Thus, you have the following data: Method 1: Proportion of success (say, $p_1$) = $\frac{40}{50}$ Method 2: Proportion of success (say, $p_2$) = $\frac{21}{50}$ It seems that method 2 fails completely for 20 music tracks and hence I am assuming that they should be counted as a failure. Let: $\pi_1$ and $\pi_2$ be the true proportions of successes for the two methods. Then you wish to assess if method 1 is superior to method 2. Thus, you would assume that: Null Hypothesis: $\pi_1 = \pi_2$ Your alternative hypothesis is $\pi_1 \ne \pi_2$. (Note: Your alternative hypothesis could also be framed as $\pi_1 \ge \pi_2$ which would impact how you would do the testing but that is a nuance that you probably need not worry about.) And attempt to see to what extent the data is consistent with the null hypothesis. The way to test for the null hypothesis is to use a Two-proportion z-test (See the 7th row of Common Test Statistics on wiki titled "Two-proportion z-test, pooled for d0 = 0". The symbols used in the table are explained at the bottom of the table.) If the calculated Z value as per the formula is greater than 1.96 or less than -1.96 then you would reject the null in favor of the alternative hypothesis.
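In R this comparison is one line with prop.test (which reports the chi-square form of the pooled test, i.e. the square of the z statistic); the z statistic itself is also easy to compute by hand:
prop.test(x = c(40, 21), n = c(50, 50), correct = FALSE)

p1 <- 40/50; p2 <- 21/50
p  <- (40 + 21) / (50 + 50)                        # pooled proportion
z  <- (p1 - p2) / sqrt(p * (1 - p) * (1/50 + 1/50))
z                                                  # compare against +/- 1.96
2 * pnorm(-abs(z))                                 # two-sided p-value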
How should I objectively test my program results?
You can count the recognition of a chorus track as a 'Success' and the lack of recognition as a 'Failure'. Thus, you have the following data: Method 1: Proportion of success (say, $p_1$) = $\frac{40}
How should I objectively test my program results? You can count the recognition of a chorus track as a 'Success' and the lack of recognition as a 'Failure'. Thus, you have the following data: Method 1: Proportion of success (say, $p_1$) = $\frac{40}{50}$ Method 2: Proportion of success (say, $p_2$) = $\frac{21}{50}$ It seems that method 2 fails completely for 20 music tracks and hence I am assuming that they should be counted as a failure. Let: $\pi_1$ and $\pi_2$ be the true proportions of successes for the two methods. Then you wish to assess if method 1 is superior to method 2. Thus, you would assume that: Null Hypothesis: $\pi_1 = \pi_2$ Your alternative hypothesis is $\pi_1 \ne \pi_2$. (Note: Your alternative hypothesis could also be framed as $\pi_1 \ge \pi_2$ which would impact how you would do the testing but that is a nuance that you probably need not worry about.) And attempt to see to what extent the data is consistent with the null hypothesis. The way to test for the null hypothesis is to use a Two-proportion z-test (See the 7th row of Common Test Statistics on wiki titled "Two-proportion z-test, pooled for d0 = 0". The symbols used in the table are explained at the bottom of the table.) If the calculated Z value as per the formula is greater than 1.96 or less than -1.96 then you would reject the null in favor of the alternative hypothesis.
How should I objectively test my program results? You can count the recognition of a chorus track as a 'Success' and the lack of recognition as a 'Failure'. Thus, you have the following data: Method 1: Proportion of success (say, $p_1$) = $\frac{40}
51,240
Lumping in Markov process with absorbing states
If the transition matrix were constant, then your two approaches would yield the same results. So this question is of interest in the non-constant case. The inf-time redistribution of state 4 into states 2 and 3 is inappropriate. Consider that from time $1$ to time $T$ your matrix was $1 \rightarrow 1$ with probability $0.5$, $1 \rightarrow 2$ with probability $0.25$ and $1 \rightarrow 4$ with probability $0.25$, and then after time $T$ your matrix was $1 \rightarrow 3$ with probability $1$. Clearly, assuming state $4$ would distribute as the inf-time stable state would be weird in this case. On the other hand, simply lumping states $1$ and $4$ together would correspond to saying that losing track of an apprentice at time-step $t$ is the same as assuming they remained in the program; also probably not what you want. What you probably want to say is that "if we lost track of a student at time $t$, then that student had the same probability as any other student at time $t$ of continuing, quitting, or completing the program." To capture this, you can simply eliminate the transition to state 4 and re-normalize the resultant matrix (i.e. $p_{ij}' = \frac{p_{ij}}{1 - p_{i4}}$) and proceed as usual.
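In R the renormalisation is a one-liner; the transition matrix below is hypothetical, with state 4 ("lost track") as the column being removed:
P <- matrix(c(0.50, 0.20, 0.20, 0.10,
              0.00, 1.00, 0.00, 0.00,
              0.00, 0.00, 1.00, 0.00,
              0.00, 0.00, 0.00, 1.00), nrow = 4, byrow = TRUE)

P3 <- P[1:3, 1:3] / (1 - P[1:3, 4])   # p'_ij = p_ij / (1 - p_i4): row i is divided by 1 - P[i, 4]
rowSums(P3)                           # each row sums to 1 again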
Lumping in Markov process with absorbing states
If the transition matrix was constant, then your two approaches would yield the same results. So this question is of interest in the non-constant case. The inf-time redistribution of state 4 into stat
Lumping in Markov process with absorbing states If the transition matrix was constant, then your two approaches would yield the same results. So this question is of interest in the non-constant case. The inf-time redistribution of state 4 into state 2 and 3 is inappropriate. Consider that from time $1$ to time $T$ your matrix was $1 \rightarrow 1$ with probability $0.5$, $1 \rightarrow 2$ with probability $0.25$ and $1 \rightarrow 4$ with probability $0.25$, then after time $T$ your matrix was $1 \rightarrow 3$ with probability $1$. Clearly assuming state $4$ would distribute as the inf-time stable state would be weird in this case. On the other hand, simple lumping state $1$ and $4$ together, would correspond to saying that loosing track of an apprentice at time-step $t$ is the same as assuming they remained in the program; also probably not what you want. What you probably want to say is that "if we lost track of a student at time $t$ then that student had the same probability as any other student at time $t$ of continuing, quiting, or completing the program." To capture this, you can simply eliminate the transition to state 4 and re-normalize the resultant matrix (i.e. $p_{ij}' = \frac{p_{ij}}{1 - p_{14}}$) and proceed as usual.
Lumping in Markov process with absorbing states If the transition matrix was constant, then your two approaches would yield the same results. So this question is of interest in the non-constant case. The inf-time redistribution of state 4 into stat
51,241
Why would an instrumental variable have its strength measured by an F-statistic?
Here I expand somewhat more formally on the comment I made earlier, although the arguments here are not complete or fully rigorous (as you will see). This derivation is a more detailed version of the one given on page 207 of Mostly Harmless Econometrics (Angrist and Pischke) (thanks @dmitriy for the reference). The references given on the footnotes of that page should help resolve remaining technical ambiguities. Let us consider the following simple model: the second stage given by $$ y = \beta x + \epsilon \tag{1}$$ where for simplicity we assume that $x$ is scalar (here compactly represented as a $n \times 1$ vector) and endogenous (i.e. $\mathbb{E}[x_i\epsilon_i] \neq 0$) and the first stage given by $$ x = Z\Pi + \nu \tag{2}$$ where $Z$ is some $n \times L$ matrix of instruments, and $\Pi$ is a $L \times 1$ matrix of parameters. Suppose that the exogeneity assumption for instruments is satisfied, in that $\mathbb{E}[Z_i \nu_i] = \mathbb{E}[Z_i \epsilon_i] = 0$. Further, suppose that both the error terms are conditionally homoskedastic both separately and together (i.e $\mathbb{E}[\epsilon \epsilon^T \mid X ] = \sigma^2_\epsilon I$, $\mathbb{E}[\nu \nu^T \mid Z] = \sigma^2_{\nu} I $ and $\mathbb{E}[\epsilon \nu^T \mid Z] = \sigma_{\nu\epsilon} I$). The 2SLS estimator is given by $$ \hat{\beta}_{2SLS} = (x^T P_Z x)^{-1} (x^T P_Z y). \tag{3}$$ where $P_Z = Z(Z^TZ)^{-1}Z^T$ is the matrix of the operator that projects into the subspace of $\mathbb{R}^n$ spanned by $Z$. Substituting (1) in (3), we can write down the bias of the 2SLS estimator as $$ \hat{\beta}_{2SLS} - \beta = (x^T P_Z x)^{-1}(x^T P_Z \epsilon). $$ Now we can plug in (2) into the above to yield $$ \hat{\beta}_{2SLS} - \beta = (x^T P_Z x)^{-1}\left[(\Pi^T Z^T \epsilon) + (\nu^T P_Z \epsilon)\right]. $$ Here is the part which I will not completely justify: one can show that the following approximation holds $$ \begin{align} \mathbb{E}[\hat{\beta}_{2SLS} - \beta] &\approx \mathbb{E}[(x^T P_Z x)]^{-1}\left[\mathbb{E}[\Pi^T Z^T \epsilon] + \mathbb{E}[\nu^T P_Z \epsilon]\right] \\ &= \mathbb{E}[(x^T P_Z x)]^{-1}\mathbb{E}[\nu^T P_Z \epsilon] \\ &= \mathbb{E}[((Z\Pi + \nu)^T P_Z (Z \Pi + \nu) ]^{-1} \mathbb{E}[\nu^T P_Z \epsilon] \\ &= \left(\mathbb{E}[\Pi^T Z^T Z \Pi ] + \mathbb{E}[\nu^T P_Z \nu]\right)^{-1} \mathbb{E}[\nu^T P_Z \epsilon]. \end{align}$$ where we have used the exogeneity assumptions on lines 2 and 4. Now we can use trace tricks that are standard in basic econometrics: observe that $\nu^T P_Z \nu $ is a scalar and so equal to its own trace. The trace is also invariant under cyclic permutations of conformable matrices. Further, the trace of a symmetric idempotent matrix like $P_Z$ equals its rank (this can be seen easily by looking at the spectral decomposition of an idempotent matrix $P$ and noticing that the eigenvalues of such a matrix can only be zeroes or ones and then applying the cyclic invariance of the trace operator). These facts help with the following series of computations $$ \begin{align} \mathbb{E}[\nu^T P_Z \nu] &= \mathbb{E}[\text{tr}(\nu^T P_Z \nu)] \\ &=\mathbb{E}[\text{tr}(P_Z \nu \nu^T)]\\ &= \text{tr} \left( \mathbb{E}[P_Z\mathbb{E}[\nu \nu^T \mid Z]]\right) \\ &= \sigma^2_{\nu}\text{rk}(\mathbb{E}[P_Z]). \end{align}$$ Similarly, one can show that $\mathbb{E}[\nu^T P_Z \epsilon] = \sigma_{\nu \epsilon}\text{rk}(\mathbb{E}[P_Z])$. Of course, the rank of $P_Z$ is the rank of $Z$ which is just $L$. 
Or equivalently $$\text{rk}(\mathbb{E}[P_Z]) = \text{tr}(\mathbb{E}[P_Z]) = \mathbb{E}[\text{tr}(Z(Z^TZ)^{-1}Z^T)] = \mathbb{E}[\text{tr}((Z^TZ)^{-1}Z^T Z)] = \mathbb{E}[\text{tr}(I_L)] = L$$ This all helps us show that $$\begin{align} \mathbb{E}[\hat{\beta}_{2SLS} - \beta] &\approx \sigma_{\nu \epsilon} L \left( \mathbb{E}[\Pi^T Z^T Z \Pi] + \sigma^2_\nu L\right)^{-1} \\ &= \frac{\sigma_{\epsilon \nu}}{\sigma^2_\nu} \left(\frac{\mathbb{E}[\Pi^T Z^T Z \Pi]/L}{\sigma^2_{\nu}} + 1\right)^{-1} \\ &= \frac{\sigma_{\epsilon \nu}}{\sigma^2_\nu} \left(\frac{1}{F+1}\right) \end{align}$$ where $F$ is the population F-statistic from the first-stage equation (2). Note that this tells us that if the F-stat is zero, then the bias of the 2SLS estimator reduces to the bias of the OLS estimator given by $\sigma_{\nu \epsilon}/\sigma^2_{\nu}$. More generally, the bias of the 2SLS estimator is in the same direction as the bias of the OLS estimator, but the bias goes down as the F-stat rises. This makes sense since the 2SLS estimator is consistent under the standard assumptions, so intuitively the asymptotic bias should go to zero as the sample size goes up. But for a fixed sample size, a stronger relationship between the instruments and the endogenous regressor should also reduce the bias. The F-stat term captures both possible ways in which the bias could be reduced.
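A quick simulation sketch of the punchline (my own illustration, not from the text): as the first stage gets weaker, the 2SLS estimate of beta = 1 drifts toward the upward-biased OLS estimate.
set.seed(1)
n <- 200; L <- 5; beta <- 1

sim_once <- function(pi1) {
  Z   <- matrix(rnorm(n * L), n, L)
  eps <- rnorm(n)                          # structural error
  nu  <- 0.8 * eps + rnorm(n)              # first-stage error, correlated with eps
  x   <- drop(Z %*% rep(pi1, L)) + nu
  y   <- beta * x + eps
  Pz  <- Z %*% solve(crossprod(Z), t(Z))   # projection onto the column space of Z
  c(ols  = sum(x * y) / sum(x^2),
    tsls = sum(x * (Pz %*% y)) / sum(x * (Pz %*% x)))
}
rowMeans(replicate(500, sim_once(pi1 = 0.05)))  # weak first stage: 2SLS pulled toward OLS
rowMeans(replicate(500, sim_once(pi1 = 0.50)))  # strong first stage: 2SLS close to beta = 1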
Why would an instrumental variable have its strength measured by an F-statistic?
Here I expand somewhat more formally on the comment I made earlier, although the arguments here are not complete or fully rigorous (as you will see). This derivation is a more detailed version of the
Why would an instrumental variable have its strength measured by an F-statistic? Here I expand somewhat more formally on the comment I made earlier, although the arguments here are not complete or fully rigorous (as you will see). This derivation is a more detailed version of the one given on page 207 of Mostly Harmless Econometrics (Angrist and Pischke) (thanks @dmitriy for the reference). The references given on the footnotes of that page should help resolve remaining technical ambiguities. Let us consider the following simple model: the second stage given by $$ y = \beta x + \epsilon \tag{1}$$ where for simplicity we assume that $x$ is scalar (here compactly represented as a $n \times 1$ vector) and endogenous (i.e. $\mathbb{E}[x_i\epsilon_i] \neq 0$) and the first stage given by $$ x = Z\Pi + \nu \tag{2}$$ where $Z$ is some $n \times L$ matrix of instruments, and $\Pi$ is a $L \times 1$ matrix of parameters. Suppose that the exogeneity assumption for instruments is satisfied, in that $\mathbb{E}[Z_i \nu_i] = \mathbb{E}[Z_i \epsilon_i] = 0$. Further, suppose that both the error terms are conditionally homoskedastic both separately and together (i.e $\mathbb{E}[\epsilon \epsilon^T \mid X ] = \sigma^2_\epsilon I$, $\mathbb{E}[\nu \nu^T \mid Z] = \sigma^2_{\nu} I $ and $\mathbb{E}[\epsilon \nu^T \mid Z] = \sigma_{\nu\epsilon} I$). The 2SLS estimator is given by $$ \hat{\beta}_{2SLS} = (x^T P_Z x)^{-1} (x^T P_Z y). \tag{3}$$ where $P_Z = Z(Z^TZ)^{-1}Z^T$ is the matrix of the operator that projects into the subspace of $\mathbb{R}^n$ spanned by $Z$. Substituting (1) in (3), we can write down the bias of the 2SLS estimator as $$ \hat{\beta}_{2SLS} - \beta = (x^T P_Z x)^{-1}(x^T P_Z \epsilon). $$ Now we can plug in (2) into the above to yield $$ \hat{\beta}_{2SLS} - \beta = (x^T P_Z x)^{-1}\left[(\Pi^T Z^T \epsilon) + (\nu^T P_Z \epsilon)\right]. $$ Here is the part which I will not completely justify: one can show that the following approximation holds $$ \begin{align} \mathbb{E}[\hat{\beta}_{2SLS} - \beta] &\approx \mathbb{E}[(x^T P_Z x)]^{-1}\left[\mathbb{E}[\Pi^T Z^T \epsilon] + \mathbb{E}[\nu^T P_Z \epsilon]\right] \\ &= \mathbb{E}[(x^T P_Z x)]^{-1}\mathbb{E}[\nu^T P_Z \epsilon] \\ &= \mathbb{E}[((Z\Pi + \nu)^T P_Z (Z \Pi + \nu) ]^{-1} \mathbb{E}[\nu^T P_Z \epsilon] \\ &= \left(\mathbb{E}[\Pi^T Z^T Z \Pi ] + \mathbb{E}[\nu^T P_Z \nu]\right)^{-1} \mathbb{E}[\nu^T P_Z \epsilon]. \end{align}$$ where we have used the exogeneity assumptions on lines 2 and 4. Now we can use trace tricks that are standard in basic econometrics: observe that $\nu^T P_Z \nu $ is a scalar and so equal to its own trace. The trace is also invariant under cyclic permutations of conformable matrices. Further, the trace of a symmetric idempotent matrix like $P_Z$ equals its rank (this can be seen easily by looking at the spectral decomposition of an idempotent matrix $P$ and noticing that the eigenvalues of such a matrix can only be zeroes or ones and then applying the cyclic invariance of the trace operator). These facts help with the following series of computations $$ \begin{align} \mathbb{E}[\nu^T P_Z \nu] &= \mathbb{E}[\text{tr}(\nu^T P_Z \nu)] \\ &=\mathbb{E}[\text{tr}(P_Z \nu \nu^T)]\\ &= \text{tr} \left( \mathbb{E}[P_Z\mathbb{E}[\nu \nu^T \mid Z]]\right) \\ &= \sigma^2_{\nu}\text{rk}(\mathbb{E}[P_Z]). \end{align}$$ Similarly, one can show that $\mathbb{E}[\nu^T P_Z \epsilon] = \sigma_{\nu \epsilon}\text{rk}(\mathbb{E}[P_Z])$. Of course, the rank of $P_Z$ is the rank of $Z$ which is just $L$. 
Or equivalently $$\text{rk}(\mathbb{E}[P_Z]) = \text{tr}(\mathbb{E}[P_Z]) = \mathbb{E}[\text{tr}(Z(Z^TZ)^{-1}Z^T)] = \mathbb{E}[\text{tr}((Z^TZ)^{-1}Z^T Z)] = \mathbb{E}[\text{tr}(I_L)] = L$$ This all helps us show that $$\begin{align} \mathbb{E}[\hat{\beta}_{2SLS} - \beta] &\approx \sigma_{\nu \epsilon} L \left( \mathbb{E}[\Pi^T Z^T Z \Pi] + \sigma^2_\nu L\right)^{-1} \\ &= \frac{\sigma_{\epsilon \nu}}{\sigma^2_\nu} \left(\frac{\mathbb{E}[\Pi^T Z^T Z \Pi]/L}{\sigma^2_{\nu}} + 1\right)^{-1} \\ &= \frac{\sigma_{\epsilon \nu}}{\sigma^2_\nu} \left(\frac{1}{F+1}\right) \end{align}$$ where $F$ is the population F-statistic from the first stage equation (2). Note that this tells us that if the F-stat is zero, then the bias of the 2SLS estimator reduces to the bias of the OLS estimator given by $\sigma_{\nu \epsilon}/\sigma^2_{\nu}$. More generally, the bias of the 2SLS estimator is in the same direction as the bias of the OLS estimator, but the bias goes down as the the F-stat rises. This makes sense since the 2SLS estimator is consistent under the standard assumptions and so the asymptotic bias should go to zero intuitively as the sample size goes up. But for a fixed sample size, a stronger relationship between the instruments an the endogenous regressions should also reduce the bias. The F-stat term captures both possible ways in which the bias could be reduced.
Why would an instrumental variable have its strength measured by an F-statistic? Here I expand somewhat more formally on the comment I made earlier, although the arguments here are not complete or fully rigorous (as you will see). This derivation is a more detailed version of the
51,242
Definition and Interpretation of Likelihood for non-PhD's
Your interpretations 1 and 2 are both wrong. Bayesians knew all along that you have to multiply the likelihood by a prior to get a posterior probability distribution. The problem is that there is no universally agreed-upon prior for any given data analysis; competing priors are not even proportional to each other. For frequentists, the absolute value of a likelihood means nothing. L=50 may as well be L=0.0005. The relative likelihood (such as a ratio) tells us more. When likelihood ratios are monotone, the Karlin-Rubin theorem tells us that the likelihood ratio test has optimal power. For nicely behaved families (concave, regular exponential families) indexed by $\theta$, the maximum is an interesting estimator, but much breaks down when you go beyond that.
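A tiny numerical illustration of the "only ratios matter" point, for 7 successes in 10 Bernoulli trials:
L <- function(p) dbinom(7, size = 10, prob = p)
L(0.7); L(0.5)      # the absolute values (~0.27 and ~0.12) mean nothing on their own
L(0.7) / L(0.5)     # the ratio (~2.3) is what carries the comparative evidence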
Definition and Interpretation of Likelihood for non-PhD's
Your interpretations 1 and 2 are both wrong. Bayesians knew all along you have to multiply the likelihood by a prior to get a posterior probability distribution. The problem is there is no universall
Definition and Interpretation of Likelihood for non-PhD's Your interpretations 1 and 2 are both wrong. Bayesians knew all along you have to multiply the likelihood by a prior to get a posterior probability distribution. The problem is there is no universally agreed upon prior for any given data analysis, the differences are not even proportional to each other. For frequentists, the absolute value of a likelihood means nothing. L=50 may as well be L=0.0005. The relative likelihood (such as a ratio) tells us more. When ratios are monotone, Karlin Rubin tells us that the likelihood ratio test has optimal power. For nicely behaved families (concave, regular exponential famliies) indexed by $\theta$, the maximum is an interesting estimator, but much breaks down when you go beyond that.
Definition and Interpretation of Likelihood for non-PhD's Your interpretations 1 and 2 are both wrong. Bayesians knew all along you have to multiply the likelihood by a prior to get a posterior probability distribution. The problem is there is no universall
51,243
Definition and Interpretation of Likelihood for non-PhD's
The likelihood does give us what can often be equated with 'plausibility', but it is important to say that it is the relative plausibility according to the statistical model. And it is probably useful to note that you are, in effect, using likelihood as a definition of (statistical?) plausibility. That is, in my opinion, a reasonable usage, but it will likely grate on people who view likelihood-based inference with mistrust. The problem is that 'plausible' refers to a state of mind as much as it refers to a statistically definable thing, and different minds are, well, different. Your statement 2 seems to match Fisher's original definition of likelihood and so it must be at least approximately correct. I would say that it would be greatly improved by omitting the "(or equal...)" and by adding something like 'according to the statistical model being used.' Because likelihoods always have arbitrary scaling, we really can only make inferences about plausibility of parameter values with reference to the plausibility of at least one other parameter value. The likelihoods that give those plausibilities must be taken from the same statistical model and data, as a ratio of likelihoods across models is meaningless with respect to parameter values, even if they might be helpful when choosing the statistical models themselves. The sentence you've taken from Wikipedia ("The likelihood function does not specify the probability that 𝜃 is the truth, given the observed sample X=x.") is correct, but it is correct in a way that people seem to miss. Because the likelihoods only exist and are relevant to other likelihoods within a statistical model, their relevance to any real-world truth value is necessarily limited by the statistical model, and we respect the notion that all models are wrong!
Definition and Interpretation of Likelihood for non-PhD's
The likelihood does give us what can often be equated with 'plausibility', but it is important to say that it is the relative plausibility according to the statistical model. And it is probably useful
Definition and Interpretation of Likelihood for non-PhD's The likelihood does give us what can often be equated with 'plausibility', but it is important to say that it is the relative plausibility according to the statistical model. And it is probably useful to note that you are, in effect, using likelihood as a definition of (statistical?) plausibility. That is, in my opinion, a reasonable usage, but it will likely grate on people who view likelihood-based inference with mistrust. The problem is that 'plausible' refers to a state of mind as much as it refers to a statistically definable thing, and different minds are, well, different. Your statement 2 seems to match Fisher's original definition of likelihood and so it must be at east approximately correct. I would say that it would be greatly improved by omitting the "(or equal...)" and by adding something like 'according to the statistical model being used.' Because likelihoods always have arbitrary scaling we really can only make inferences about plausibility of parameter values with reference to the plausibility of at least one other parameter value. The likelihoods that give those plausibilities must be taken from the same statistical model and data, as a ratio of likelihoods across models is meaningless with respect to parameter values, even if they might be helpful when choosing the statistical models themselves. The sentence you've taken from Wikipedia ("The likelihood function does not specify the probability that 𝜃 is the truth, given the observed sample X=x.") is correct, but it is correct in a way that people seem to miss. Because the likelihoods only exist and are relevant to other likelihoods within a statistical model, their relevance to any real-world truth value is necessarily limited by the statistical model, and we respect the notion that all models are wrong!
Definition and Interpretation of Likelihood for non-PhD's The likelihood does give us what can often be equated with 'plausibility', but it is important to say that it is the relative plausibility according to the statistical model. And it is probably useful
51,244
Why typically minimizing a cost instead of maximizing a reward?
Minimising $f(x)$ is entirely equivalent to maximising $-f(x)$, in every aspect: result, numerical precision, computational complexity... everything. Historically, the convention might have been established because of the "least squares" in linear regression (but don't take my word for it). If it were the other way round, you'd be asking why we don't minimise some cost function...
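A trivial R demonstration of the equivalence, for a one-dimensional example:
f <- function(x) (x - 2)^2 + 1                                               # some cost function
optimize(f, interval = c(-10, 10))$minimum                                   # minimise f(x)   -> 2
optimize(function(x) -f(x), interval = c(-10, 10), maximum = TRUE)$maximum   # maximise -f(x)  -> 2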
Why typically minimizing a cost instead of maximizing a reward?
Minimising $f(x)$ is entirely equivalent to maximising $-f(x)$, in every aspect: result, numerical precision, computational complexity... everything. Historically, the convention might have been estab
Why typically minimizing a cost instead of maximizing a reward? Minimising $f(x)$ is entirely equivalent to maximising $-f(x)$, in every aspect: result, numerical precision, computational complexity... everything. Historically, the convention might have been established because of the "least squares" in linear regression (but don't take my word for it). If it were the other way round, you'd be asking why we don't minimise some cost function...
Why typically minimizing a cost instead of maximizing a reward? Minimising $f(x)$ is entirely equivalent to maximising $-f(x)$, in every aspect: result, numerical precision, computational complexity... everything. Historically, the convention might have been estab
51,245
Why typically minimizing a cost instead of maximizing a reward?
You tagged this question with the tag "Maximum Likelihood". In maximum likelihood estimation you explicitly maximize an objective function (namely the likelihood). It just so happens that for an observation that we assume to be drawn from a Gaussian random variable, the likelihood function usually takes a nice form after you take a logarithm. Then there is usually a leading negation, encouraging the entrepreneurial optimizer to switch away from maximizing the objective to minimizing the negative of the objective, or roughly the "cost". For discrete maximum likelihood estimation the "cost" also has another meaningful name since it takes the same form as the Euclidean distance in the observation space. (Note that this notion of distance is always there whether or not you're doing discrete parameter estimation, but it's a little less obvious than the discrete ML estimate, which just boils down to picking the nearest valid point). Since there's no such thing as negative distance, there is a seemingly strong preference for minimizing the cost and not maximizing the objective in these cases. You should feel comfortable swapping back and forth between minimizing a cost and maximizing an objective. There is a real reason that ML and MAP estimates specifically choose to maximize an objective function (pdfs are non-negative, and the highest values are quite interesting spots [the mode]), but the practical realization of an estimator is going to be several mathematical manipulations away from the textbook definition.
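A small R illustration of the Gaussian case: up to constants, the negative log-likelihood is the sum of squared errors, so minimising the "cost" and maximising the likelihood land on the same estimate (the sample mean).
set.seed(1)
x <- rnorm(50, mean = 3)

negloglik <- function(mu) -sum(dnorm(x, mean = mu, sd = 1, log = TRUE))
optimize(negloglik, interval = c(-10, 10))$minimum   # minimiser of the "cost"
mean(x)                                              # the least-squares / ML estimate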
Why typically minimizing a cost instead of maximizing a reward?
You tagged this question with the tag "Maximum Likelihood". In maximum likelihood estimation you explicitly maximize an objective function (namely the likelihood). It just so happens that for an obser
Why typically minimizing a cost instead of maximizing a reward? You tagged this question with the tag "Maximum Likelihood". In maximum likelihood estimation you explicitly maximize an objective function (namely the likelihood). It just so happens that for an observation that we assume to be drawn from a Gaussian random variable, the likelihood function usually takes a nice form after you take a logarithm. Then there is usually a leading negation, encouraging the entrepreneurial optimizer to switch away from maximizing the objective to minimizing the negative of objective, or roughly the "cost". For discrete maximum likelihood estimation the "cost" also has another meaningful name since it takes the same form as the euclidean distance in the observation space. (Note that this notion of distance is always there whether or not you're doing discrete parameter estimation, but it's a little less obvious than the discrete ML estimate which just boils down to picking the nearest valid point). Since there's no such thing as negative distance there is a seemingly strong preference for minimizing the cost and not maximizing the objective in these cases. You should feel comfortable swapping back and forth between minimizing a cost and maximizing an objective. There is a real reason that ML and MAP estimates specifically choose to maximize an objective function (pdf's are purely positive, and the highest values are quite interesting spots [the mode]), but the practical realization of an estimator is going to be several mathematical manipulations away from the textbook definition.
Why typically minimizing a cost instead of maximizing a reward? You tagged this question with the tag "Maximum Likelihood". In maximum likelihood estimation you explicitly maximize an objective function (namely the likelihood). It just so happens that for an obser
51,246
Why typically minimizing a cost instead of maximizing a reward?
It's my understanding that the only reason for this distinction is that in numerical analysis, it's the standard to talk about convex optimization rather than concave optimization, even though they are really the same procedures. For example, if you do a google scholar search for "concave optimization", you get about 300,000 hits, but "convex optimization" gets about 2,000,000. Because convex optimization is talked about more in the numerical analysis literature, this nomenclature is followed in the machine learning community. As you state, the differences are trivial, so the reason for the distinction is trivial.
Why typically minimizing a cost instead of maximizing a reward?
It's my understanding that the only reason for this distinction is that in numerical analysis, it's the standard to talk about convex optimization rather than concave optimization, even though they ar
Why typically minimizing a cost instead of maximizing a reward? It's my understanding that the only reason for this distinction is that in numerical analysis, it's the standard to talk about convex optimization rather than concave optimization, even though they are really the same procedures. For example, if you do a google scholar search for "concave optimization", you get about 300,000 hits, but "convex optimization" gets about 2,000,000. Because convex optimization is talked about more in the numerical analysis literature, this nomenclature is followed in the machine learning community. As you state, the differences are trivial, so the reason for the distinction is trivial.
Why typically minimizing a cost instead of maximizing a reward? It's my understanding that the only reason for this distinction is that in numerical analysis, it's the standard to talk about convex optimization rather than concave optimization, even though they ar
51,247
Why typically minimizing a cost instead of maximizing a reward?
History. A lot of this connects back to estimation in statistics. For example Gauss. He wanted to estimate the position of an asteroid that was obscured by the sun. He had the idea to minimize the squared error and got much better predictions than his colleagues. When estimating the position of an asteroid, what would be the "gain"? The error cost, however, is easy to see: how far is the asteroid from the expected position.
Why typically minimizing a cost instead of maximizing a reward?
History. A lot of this connects back to estimation in statistics. For example Gauss. He wanted to estimate the position of an asteroid that was obscured by the sun. He had the idea to minimize the squ
Why typically minimizing a cost instead of maximizing a reward? History. A lot of this connects back to estimation in statistics. For example Gauss. He wanted to estimate the position of an asteroid that was obscured by the sun. He had the idea to minimize the squared error and got much better predictions than his colleagues. When estimating the position of an asteroid, what would be the "gain"? The error cost, however, is easy to see: how far is the asteroid from the expected position.
Why typically minimizing a cost instead of maximizing a reward? History. A lot of this connects back to estimation in statistics. For example Gauss. He wanted to estimate the position of an asteroid that was obscured by the sun. He had the idea to minimize the squ
51,248
Why typically minimizing a cost instead of maximizing a reward?
I suspect it may also be because quite a lot of optimisation algorithms were developed by people working in Operations Research, who I think have historically posed problems in terms of minimisation of losses. We minimise because that is what the best software supports.
Why typically minimizing a cost instead of maximizing a reward?
I suspect it may also be because quite a lot of optimisation algorithms are developed by people working on Operations Research who I think have historically posed problems in terms of minimisation of
Why typically minimizing a cost instead of maximizing a reward? I suspect it may also be because quite a lot of optimisation algorithms were developed by people working in Operations Research, who I think have historically posed problems in terms of minimisation of losses. We minimise because that is what the best software supports.
Why typically minimizing a cost instead of maximizing a reward? I suspect it may also be because quite a lot of optimisation algorithms are developed by people working on Operations Research who I think have historically posed problems in terms of minimisation of
51,249
Why typically minimizing a cost instead of maximizing a reward?
You can just as well maximize a function that is equal to -1 times your cost function. It just happens that it's usually more natural to define a function that increases in value the farther from the optimum we get than the reverse, and for that reason we usually try to minimize a function rather than maximize a function. As an illustrative example of what I mean, see this table, where in the rightmost column you encounter $\operatorname{argmin}$ much more often than $\operatorname{argmax}$.
Why typically minimizing a cost instead of maximizing a reward?
You can just as well maximize a function that is equal to -1 times your cost function. It just happens that it's usually more natural to define a function that increases in value the farther from the
Why typically minimizing a cost instead of maximizing a reward? You can just as well maximize a function that is equal to -1 times your cost function. It just happens that it's usually more natural to define a function that increases in value the farther from the optimum we get than the reverse, and for that reason we usually try to minimize a function rather than maximize a function. As an illustrative example of what I mean, see this table, where in the rightmost column you encounter $\operatorname{argmin}$ much more often than $\operatorname{argmax}$.
Why typically minimizing a cost instead of maximizing a reward? You can just as well maximize a function that is equal to -1 times your cost function. It just happens that it's usually more natural to define a function that increases in value the farther from the
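As a small practical illustration of that sign flip (an illustrative sketch, not part of the answer above): R's optim() minimizes by default, and you can either hand it the negated reward or set control = list(fnscale = -1) to turn it into a maximizer; both give the same solution.
f <- function(x) -(x - 3)^2 + 5   # a concave "reward" with its maximum at x = 3

# option 1: minimize the negative of the reward
opt1 <- optim(par = 0, fn = function(x) -f(x), method = "BFGS")
# option 2: maximize directly via fnscale = -1
opt2 <- optim(par = 0, fn = f, method = "BFGS", control = list(fnscale = -1))

c(opt1$par, opt2$par)             # both are (numerically) 3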
51,250
Distribution that doesn't belong to any maximum domain of attraction?
Does there exist any non-degenerate probability distribution function $F$ such that if $X_1,X_2,\dots \overset{\text{iid}}{\sim} F$, then there do not exist any sequences $(a_n) \subset \mathbb R_{>0}$, $(b_n) \subset \mathbb R$ such that $$ \frac{\max\{ X_1, \dots, X_n\} - b_n}{a_n} $$ converges in distribution to a non-degenerate distribution? Any discrete distribution whose maximum value in the domain has non-zero probability is an example of a distribution $F$ that is not degenerate, but for which $\max\{ X_1, \dots, X_n\}$ converges to the maximum value of the domain and becomes a degenerate distribution. Hence, we cannot find $a_n$ and $b_n$ such that there is convergence to a non-degenerate distribution. Another example, for continuous distributions, that does not converge is the class of distributions with Super-Heavy Tails as described here: How do we call a more extreme case of fat tails than a power law? (they are distributions for which any order statistic of a sample will have an infinite expectation value). (Edit note: I got to the idea of super-heavy tails by thinking about a necessary condition on the tail behaviour. In my edits you can see a line of thought about it, but it is incorrect and I have to refine it, so I deleted it.)
Distribution that doesn't belong to any maximum domain of attraction?
Does there exist any non-degenerate probability distribution function $F$ such that if $X_1,X_2,\dots \overset{\text{iid}}{\sim} F$, then there do not exist any sequences $(a_n) \subset \mathbb R_{>0}
Distribution that doesn't belong to any maximum domain of attraction? Does there exist any non-degenerate probability distribution function $F$ such that if $X_1,X_2,\dots \overset{\text{iid}}{\sim} F$, then there do not exist any sequences $(a_n) \subset \mathbb R_{>0}$, $(b_n) \subset \mathbb R$ such that $$ \frac{\max\{ X_1, \dots, X_n\} - b_n}{a_n} $$ converges in distribution to a non-degenerate distribution? Any discrete distribution whose maximum value in the domain has non-zero probability is an example of a distribution $F$ that is not degenerate, but for which $\max\{ X_1, \dots, X_n\}$ converges to the maximum value of the domain and becomes a degenerate distribution. Hence, we cannot find $a_n$ and $b_n$ such that there is convergence to a non-degenerate distribution. Another example, for continuous distributions, that does not converge is the class of distributions with Super-Heavy Tails as described here: How do we call a more extreme case of fat tails than a power law? (they are distributions for which any order statistic of a sample will have an infinite expectation value). (Edit note: I got to the idea of super-heavy tails by thinking about a necessary condition on the tail behaviour. In my edits you can see a line of thought about it, but it is incorrect and I have to refine it, so I deleted it.)
Distribution that doesn't belong to any maximum domain of attraction? Does there exist any non-degenerate probability distribution function $F$ such that if $X_1,X_2,\dots \overset{\text{iid}}{\sim} F$, then there do not exist any sequences $(a_n) \subset \mathbb R_{>0}
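A quick simulation illustrates the discrete case above; this is an illustrative sketch assuming a discrete uniform distribution on {1, ..., 10}, whose maximum value 10 has positive probability: the sample maximum gets stuck at 10, so no affine normalization can yield a non-degenerate limit.
set.seed(42)
max_of_n <- function(n) max(sample(1:10, n, replace = TRUE))

for (n in c(10, 100, 1000)) {
  m <- replicate(2000, max_of_n(n))
  cat("n =", n, " P(max = 10) =", mean(m == 10), "\n")
}
# P(max = 10) tends to 1: the maximum degenerates at the right endpoint of the support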
51,251
How do we derive the conditional mode as the solution to linear regression, for uniform cost function?
I got the answer from the paper, Is the mode elicitable relative to unimodal distributions?. If we consider the cost function $C(x,y)=\mathrm{1}_{x\ne y}$, the minimization corresponding to the mode is $\hat\beta = \arg\min_\beta \sum_i \mathrm{1}_{y_i\ne \beta}$. The minimum is obviously attained at the mode, i.e., $\hat\beta = \text{mode}\{y_i\}_{i=1}^n$. The paper also talks about the mode not being "elicitable" wrt Lebesgue densities. The above derivations can also be carried out inside the conditional expectation $\mathbb{E}[\cdot\mid X=x]$, which gives rise to the conditional mean, median and mode of the posterior distribution as solutions of the regression with the corresponding cost functions.
How do we derive the conditional mode as the solution to linear regression, for uniform cost functio
I got the answer from the paper, Is the mode elicitable relative to unimodal distributions?. If we consider the cost function, $C(x,y)=\mathrm{1}_{x\ne y}$, the minimization corresponding to mode is $
How do we derive the conditional mode as the solution to linear regression, for uniform cost function? I got the answer from the paper, Is the mode elicitable relative to unimodal distributions?. If we consider the cost function $C(x,y)=\mathrm{1}_{x\ne y}$, the minimization corresponding to the mode is $\hat\beta = \arg\min_\beta \sum_i \mathrm{1}_{y_i\ne \beta}$. The minimum is obviously attained at the mode, i.e., $\hat\beta = \text{mode}\{y_i\}_{i=1}^n$. The paper also talks about the mode not being "elicitable" wrt Lebesgue densities. The above derivations can also be carried out inside the conditional expectation $\mathbb{E}[\cdot\mid X=x]$, which gives rise to the conditional mean, median and mode of the posterior distribution as solutions of the regression with the corresponding cost functions.
How do we derive the conditional mode as the solution to linear regression, for uniform cost functio I got the answer from the paper, Is the mode elicitable relative to unimodal distributions?. If we consider the cost function, $C(x,y)=\mathrm{1}_{x\ne y}$, the minimization corresponding to mode is $
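Here is a tiny numerical check of that minimization (an illustrative sketch with made-up data, not taken from the paper): for the 0-1 cost, the empirical risk $\sum_i \mathrm{1}_{y_i \ne \beta}$ is minimized exactly at the sample mode.
y <- c(1, 2, 2, 3, 3, 3, 5, 5)

candidates <- sort(unique(y))
cost <- sapply(candidates, function(b) sum(y != b))  # 0-1 loss summed over the sample

rbind(candidates, cost)
candidates[which.min(cost)]   # 3, the sample mode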
51,252
Expectation of a function of a random variable from CDF
When $F$ is the CDF of a random variable $X$ and $g$ is a (measurable) function, the expectation of $g(X)$ can be found as a Riemann-Stieltjes integral $$\mathbb{E}(g(X)) = \int_{-\infty}^\infty g(x) dF(x).$$ This expresses the Law of the Unconscious Statistician. If $g$ is also differentiable, write $dF = -d(1-F)$ and integrate by parts to give $$\mathbb{E}(g(X)) = -g(x)(1-F(x)){\big|}_{-\infty}^\infty + \int_{-\infty}^\infty (1-F(x)) g^\prime(x)\, \text{d}x$$ provided both addends converge. This means several things, which may be simply expressed by breaking the integral at some definite finite value such as $0$: ${\lim}_{x\to -\infty} g(x)(1-F(x))$ and ${\lim}_{x\to \infty} g(x)(1-F(x))$ exist and are finite. If so, the first addend is the difference of these two. $\lim_{t\to -\infty} \int_t^0 (1-F(x))g^\prime(x)\,\text{d}x$ and $\lim_{t\to \infty} \int_0^t (1-F(x))g^\prime(x)\,\text{d}x$ exist and are finite. If so, the second addend is the sum of these two. A good place to break the integral is at any zero of $g$, because--provided $g$ eventually decreases fast enough for large $|x|$--that causes the first addend to vanish, leaving only the integral of $g^\prime$ against the survival function $1-F$. Example The expectation of a non-negative variable $X$ is obtained by applying the formula to the identity function $g(x)=x$ for which $g^\prime(x)=1$ and utilizing the fact that the integration may begin at zero: $$\mathbb{E}(X) = -x(1-F(x))\big|_{0}^\infty + \int_{0}^\infty (1-F(x))\,\text{d}x.$$ Provided $\lim_{x\to\infty} x (1-F(x)) = 0$ (that is, the survival function does not have an overly heavy tail), the upper limit of the first term vanishes. Its lower limit obviously vanishes. We are left only with the integral, giving the expression in the question.
Expectation of a function of a random variable from CDF
When $F$ is the CDF of a random variable $X$ and $g$ is a (measurable) function, the expectation of $g(X)$ can be found as a Riemann-Stieltjes integral $$\mathbb{E}(g(X)) = \int_{-\infty}^\infty g(x)
Expectation of a function of a random variable from CDF When $F$ is the CDF of a random variable $X$ and $g$ is a (measurable) function, the expectation of $g(X)$ can be found as a Riemann-Stieltjes integral $$\mathbb{E}(g(X)) = \int_{-\infty}^\infty g(x) dF(x).$$ This expresses the Law of the Unconscious Statistician. If $g$ is also differentiable, write $dF = -d(1-F)$ and integrate by parts to give $$\mathbb{E}(g(X)) = -g(x)(1-F(x)){\big|}_{-\infty}^\infty + \int_{-\infty}^\infty (1-F(x)) g^\prime(x)\, \text{d}x$$ provided both addends converge. This means several things, which may be simply expressed by breaking the integral at some definite finite value such as $0$: ${\lim}_{x\to -\infty} g(x)(1-F(x))$ and ${\lim}_{x\to \infty} g(x)(1-F(x))$ exist and are finite. If so, the first addend is the difference of these two. $\lim_{t\to -\infty} \int_t^0 (1-F(x))g^\prime(x)\,\text{d}x$ and $\lim_{t\to \infty} \int_0^t (1-F(x))g^\prime(x)\,\text{d}x$ exist and are finite. If so, the second addend is the sum of these two. A good place to break the integral is at any zero of $g$, because--provided $g$ eventually decreases fast enough for large $|x|$--that causes the first addend to vanish, leaving only the integral of $g^\prime$ against the survival function $1-F$. Example The expectation of a non-negative variable $X$ is obtained by applying the formula to the identity function $g(x)=x$ for which $g^\prime(x)=1$ and utilizing the fact that the integration may begin at zero: $$\mathbb{E}(X) = -x(1-F(x))\big|_{0}^\infty + \int_{0}^\infty (1-F(x))\,\text{d}x.$$ Provided $\lim_{x\to\infty} x (1-F(x)) = 0$ (that is, the survival function does not have an overly heavy tail), the upper limit of the first term vanishes. Its lower limit obviously vanishes. We are left only with the integral, giving the expression in the question.
Expectation of a function of a random variable from CDF When $F$ is the CDF of a random variable $X$ and $g$ is a (measurable) function, the expectation of $g(X)$ can be found as a Riemann-Stieltjes integral $$\mathbb{E}(g(X)) = \int_{-\infty}^\infty g(x)
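A quick numerical sanity check of the survival-function formula (an illustrative sketch, using an Exponential(rate = 2) variable, for which $E(X) = 1/2$ and $E(X^2) = 1/2$):
rate <- 2
S <- function(x) 1 - pexp(x, rate = rate)          # survival function 1 - F(x)

# E(X) = integral of S(x) over (0, Inf)
integrate(S, 0, Inf)$value                         # ~ 0.5 = 1/rate

# E(X^2) via the integral of S(x) g'(x) with g(x) = x^2, i.e. 2 x S(x)
integrate(function(x) 2 * x * S(x), 0, Inf)$value  # ~ 0.5 = 2/rate^2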
51,253
Noise in regression problems and ways to reduce it
As stated by Dr. Kilian Weinberger, whom you mentioned in your question, you can never beat this error. The optimal classifier gives you the mean of the distribution of all the data $P$, which you can never get. If you want to detect a car, for example, $P$ has to contain all pictures of cars that ever existed. But even if you do find $P$, there will still be some points that vary from its mean. This variation is the noise that cannot be omitted. It differs from one distribution to another. According to the lecture, the error (noise) is $\mathbb{E}_{x,y}\left[(\bar{y}(x) - y)^2\right]$, where $\bar{y}(x)$ is the label you would expect to obtain from the whole distribution $P$ given a feature vector $x$, and $y$ is the label that you are testing.
Noise in regression problems and ways to reduce it
As stated by Dr. Kilian Weinberger, that you mentioned in your question, you can never beat this error. The optimal classifier gives you the mean of the distribution of all the data P, which you can n
Noise in regression problems and ways to reduce it As stated by Dr. Kilian Weinberger, whom you mentioned in your question, you can never beat this error. The optimal classifier gives you the mean of the distribution of all the data $P$, which you can never get. If you want to detect a car, for example, $P$ has to contain all pictures of cars that ever existed. But even if you do find $P$, there will still be some points that vary from its mean. This variation is the noise that cannot be omitted. It differs from one distribution to another. According to the lecture, the error (noise) is $\mathbb{E}_{x,y}\left[(\bar{y}(x) - y)^2\right]$, where $\bar{y}(x)$ is the label you would expect to obtain from the whole distribution $P$ given a feature vector $x$, and $y$ is the label that you are testing.
Noise in regression problems and ways to reduce it As stated by Dr. Kilian Weinberger, that you mentioned in your question, you can never beat this error. The optimal classifier gives you the mean of the distribution of all the data P, which you can n
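To see why this error floor cannot be beaten, here is a small simulation (an illustrative sketch assuming the additive model $y = f(x) + \epsilon$ with a known $f$): even the oracle predictor that uses the true $f$ still has a test MSE of about the noise variance.
set.seed(7)
n <- 1e5
sigma <- 0.3
x <- runif(n)
f <- function(x) sin(2 * pi * x)      # the true regression function
y <- f(x) + rnorm(n, sd = sigma)

mean((y - f(x))^2)                    # ~ 0.09: even the true f cannot beat sigma^2
sigma^2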
51,254
Noise in regression problems and ways to reduce it
After some googling I found a great blog post, whose author (A. Muehlemann, PhD from Oxford) seems to understand noise in the same way as me. I think that his explanation will give you a better understanding of my original post. His conclusion is the following: most of the techniques that reduce bias will also reduce noise (for example, adding some important features will reduce noise, as I said above). He also said that "In pretty much all practical situations, we find ourselves in the second scenario where we have some (but not all) possible features and thus have some apparent noise. The takeaway message is not that (apparent) “noise” doesn’t exist but rather that “noise” is a misleading term. Perhaps it would be more honest to call it “feature bias”, as it stems from our bias of ignoring parts of reality. In this language, what is classically called “bias” should really be called “modeling bias”, as it stems from our particular choice of one model over another." In the above-mentioned post there is a link to this video lecture, where the lecturer made one important assumption about the noise: he modeled $Y$ with an additive error model, i.e. $Y=\mathrm{E}[Y|X] + \epsilon(X)$. This is a typical assumption for regression tasks, and under this assumption the Noise from my original post can be written as $\mathrm{Noise} = \mathrm{E}_{X,Y}[(Y - \mathrm{E}[Y|X])^2] = \mathrm{E}_{X, \epsilon}[\epsilon(X)^2]$ and is called "stochastic noise".
Noise in regression problems and ways to reduce it
After some googling I found a great blogpost, whose author (A. Muehlemann, PHD from Oxford) seems to understand noise in the same way as me. I think that his explanation will give your better understa
Noise in regression problems and ways to reduce it After some googling I found a great blog post, whose author (A. Muehlemann, PhD from Oxford) seems to understand noise in the same way as me. I think that his explanation will give you a better understanding of my original post. His conclusion is the following: most of the techniques that reduce bias will also reduce noise (for example, adding some important features will reduce noise, as I said above). He also said that "In pretty much all practical situations, we find ourselves in the second scenario where we have some (but not all) possible features and thus have some apparent noise. The takeaway message is not that (apparent) “noise” doesn’t exist but rather that “noise” is a misleading term. Perhaps it would be more honest to call it “feature bias”, as it stems from our bias of ignoring parts of reality. In this language, what is classically called “bias” should really be called “modeling bias”, as it stems from our particular choice of one model over another." In the above-mentioned post there is a link to this video lecture, where the lecturer made one important assumption about the noise: he modeled $Y$ with an additive error model, i.e. $Y=\mathrm{E}[Y|X] + \epsilon(X)$. This is a typical assumption for regression tasks, and under this assumption the Noise from my original post can be written as $\mathrm{Noise} = \mathrm{E}_{X,Y}[(Y - \mathrm{E}[Y|X])^2] = \mathrm{E}_{X, \epsilon}[\epsilon(X)^2]$ and is called "stochastic noise".
Noise in regression problems and ways to reduce it After some googling I found a great blogpost, whose author (A. Muehlemann, PHD from Oxford) seems to understand noise in the same way as me. I think that his explanation will give your better understa
51,255
Causal tree v. causal forest - when to use which for HTE?
As it stands at the moment in my mind, a causal forest is built as a combination of causal trees. My basic understanding is that when estimating group- or cluster-average treatment effects given a treatment assignment condition, a causal forest is the best choice, whereas when estimating treatment effect heterogeneity for an individual, the causal tree is more optimized for that. I may be wrong, but that is how I get my head around it.
Causal tree v. causal forest - when to use which for HTE?
As it stands at the moment in my mind, a causal forest is built using a combination of causal trees, my bare understanding is that when estimating for a group or cluster average treatment effects give
Causal tree v. causal forest - when to use which for HTE? As it stands at the moment in my mind, a causal forest is built as a combination of causal trees. My basic understanding is that when estimating group- or cluster-average treatment effects given a treatment assignment condition, a causal forest is the best choice, whereas when estimating treatment effect heterogeneity for an individual, the causal tree is more optimized for that. I may be wrong, but that is how I get my head around it.
Causal tree v. causal forest - when to use which for HTE? As it stands at the moment in my mind, a causal forest is built using a combination of causal trees, my bare understanding is that when estimating for a group or cluster average treatment effects give
51,256
Causal tree v. causal forest - when to use which for HTE?
As usual, the short answer is: it depends. From a methodological point of view, a causal tree estimates the CATE function $E \left[ Y (1) - Y(0) | X \right]$ (i.e., the treatment effects conditional on the covariates) by constructing a multi-variate step function. This means that you would be given groups of units and, for each group, an estimate of their average treatment effect, typically called GATE (Group Average Treatment Effect). On the other hand, a causal forest is an ensemble of several causal trees (I assume we are talking about the causal forest proposed by Athey and Wager (2018)) The idea is to smooth over several step functions to get a smooth estimate of the CATE. This way, the causal forest yields treatment effects that vary across individuals - as opposed to groups of units. This discussion abstracts from our belief about the true Data Generating Process. If for some reason, we believe that treatment effects are constant across groups, then a causal tree should work better than a causal forest. Moreover, one could also discuss the relative merits of interpretability vs explainability. A causal forest yields individual treatment effects for each unit in your data set. Thus, if you have 10,000 units, then you estimate 10,000 effects, and really nobody is able to look at each of them and make a coherent and strong story out of it (although there is now a vast literature on how to post-process CATEs estimates, starting from this wonderful paper). A causal tree is way more interpretable than a causal forest (this is generally true for any type of trees/forests). You get a particular number of groups (in my experience, no more than 10/12) and an estimate of the average effect of each group. This is highly interpretable, which is essential for understanding and communication.
Causal tree v. causal forest - when to use which for HTE?
As usual, the short answer is: it depends. From a methodological point of view, a causal tree estimates the CATE function $E \left[ Y (1) - Y(0) | X \right]$ (i.e., the treatment effects conditional o
Causal tree v. causal forest - when to use which for HTE? As usual, the short answer is: it depends. From a methodological point of view, a causal tree estimates the CATE function $E \left[ Y (1) - Y(0) | X \right]$ (i.e., the treatment effects conditional on the covariates) by constructing a multi-variate step function. This means that you would be given groups of units and, for each group, an estimate of their average treatment effect, typically called GATE (Group Average Treatment Effect). On the other hand, a causal forest is an ensemble of several causal trees (I assume we are talking about the causal forest proposed by Athey and Wager (2018)) The idea is to smooth over several step functions to get a smooth estimate of the CATE. This way, the causal forest yields treatment effects that vary across individuals - as opposed to groups of units. This discussion abstracts from our belief about the true Data Generating Process. If for some reason, we believe that treatment effects are constant across groups, then a causal tree should work better than a causal forest. Moreover, one could also discuss the relative merits of interpretability vs explainability. A causal forest yields individual treatment effects for each unit in your data set. Thus, if you have 10,000 units, then you estimate 10,000 effects, and really nobody is able to look at each of them and make a coherent and strong story out of it (although there is now a vast literature on how to post-process CATEs estimates, starting from this wonderful paper). A causal tree is way more interpretable than a causal forest (this is generally true for any type of trees/forests). You get a particular number of groups (in my experience, no more than 10/12) and an estimate of the average effect of each group. This is highly interpretable, which is essential for understanding and communication.
Causal tree v. causal forest - when to use which for HTE? As usual, the short answer is: it depends. From a methodological point of view, a causal tree estimates the CATE function $E \left[ Y (1) - Y(0) | X \right]$ (i.e., the treatment effects conditional o
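For concreteness, a minimal sketch of fitting a causal forest with the grf package (illustrative code on simulated data; the function names follow the grf API as I recall it, so please verify against the package documentation):
library(grf)

set.seed(1)
n <- 2000
X <- matrix(rnorm(n * 5), n, 5)
W <- rbinom(n, 1, 0.5)            # randomized treatment assignment
tau <- 1 + X[, 1]                 # heterogeneous true effect
Y <- X[, 2] + tau * W + rnorm(n)

cf <- causal_forest(X, Y, W)

tau_hat <- predict(cf)$predictions    # one CATE estimate per unit
average_treatment_effect(cf)          # overall ATE with a standard error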
51,257
XGBOOST objective function derivation algebra
The quick answers to your question are: You are absolutely right: it is not generally valid to swap the order of summation when the interior sum is squared. It's not even true when $n=1$ and $m=2$: $$ \sum_{i=1}^{1} \left(\sum_{j=1}^{2} x_{ij} \right)^2 = (x_{11} + x_{12})^2 = x_{11}^2 + 2x_{11}x_{12} + x_{12}^2 $$ $$ \sum_{j=1}^{2} \left(\sum_{i=1}^{1} x_{ij} \right)^2 = x_{11}^2 + x_{12}^2 $$ The blog post has a few mistakes in it, and you would be better working through the (quite nicely written and much better type-scripted) introduction in the XGBoost docs.
XGBOOST objective function derivation algebra
The quick answers to your question are: You are absolutely right: it is not generally valid to swap the order of summation when the interior sum is squared. It's not even true when $n=1$ and $m=2$:
XGBOOST objective function derivation algebra The quick answers to your question are: You are absolutely right: it is not generally valid to swap the order of summation when the interior sum is squared. It's not even true when $n=1$ and $m=2$: $$ \sum_{i=1}^{1} \left(\sum_{j=1}^{2} x_{ij} \right)^2 = (x_{11} + x_{12})^2 = x_{11}^2 + 2x_{11}x_{12} + x_{12}^2 $$ $$ \sum_{j=1}^{2} \left(\sum_{i=1}^{1} x_{ij} \right)^2 = x_{11}^2 + x_{12}^2 $$ The blog post has a few mistakes in it, and you would be better working through the (quite nicely written and much better type-scripted) introduction in the XGBoost docs.
XGBOOST objective function derivation algebra The quick answers to your question are: You are absolutely right: it is not generally valid to swap the order of summation when the interior sum is squared. It's not even true when $n=1$ and $m=2$:
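The $n=1$, $m=2$ counterexample above is easy to confirm numerically; here is a throwaway R check:
x <- matrix(c(1, 2), nrow = 1)   # x[1,1] = 1, x[1,2] = 2

sum(rowSums(x)^2)   # sum over i of (sum over j of x_ij)^2 = (1 + 2)^2 = 9
sum(colSums(x)^2)   # sum over j of (sum over i of x_ij)^2 = 1^2 + 2^2 = 5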
51,258
Once you have used LASSO to generate regression coefficients, is there another step that gives you information about the model?
You can do repeated cross-validation and, if it is logistic regression, get the ROC-AUC or perhaps the Brier score. If it is linear regression you can use $R^2$. For variable importance you can use the coefficient size.
Once you have used LASSO to generate regression coefficients, is there another step that gives you i
You can do repeated cross validation and if it is logistic get the ROC-AUC or perhaps the Brier score. If it is linear regression you can use R^2. For variable importance you can use the coefficient s
Once you have used LASSO to generate regression coefficients, is there another step that gives you information about the model? You can do repeated cross-validation and, if it is logistic regression, get the ROC-AUC or perhaps the Brier score. If it is linear regression you can use $R^2$. For variable importance you can use the coefficient size.
Once you have used LASSO to generate regression coefficients, is there another step that gives you i You can do repeated cross validation and if it is logistic get the ROC-AUC or perhaps the Brier score. If it is linear regression you can use R^2. For variable importance you can use the coefficient s
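A minimal sketch of that workflow with the glmnet package (illustrative code on simulated data; cv.glmnet does the cross-validation, and the cross-validated metric at the chosen penalty is the performance summary to report):
library(glmnet)

set.seed(1)
n <- 300; p <- 20
x <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))

cvfit <- cv.glmnet(x, y, family = "binomial", alpha = 1,
                   type.measure = "auc", nfolds = 10)

cvfit$lambda.min                                   # penalty chosen by CV
cvfit$cvm[cvfit$lambda == cvfit$lambda.min]        # cross-validated AUC there
coef(cvfit, s = "lambda.min")                      # coefficients for interpretation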
51,259
$R^2$ and adjusted $R^2$ in presence of overlapping observations
I will refer to population, vanilla, adjusted as (1), (2), (3), respectively. Q1) As (1) is for the population while (2), (3) are its sample analogues, the same will hold for LRVar. For the population you will use $k=\infty$ and some finite integer for the sample. Q2) I haven't done the calculation, but using LRVar will make a difference. Given that Newey-West takes autocorrelation in the errors into account, it will give larger SEs than OLS or HC estimators when the errors are positively autocorrelated. It returns "more conservative" values, so I presume using LRVar will result in a smaller $R^2$. Q3) Technically they will differ, but in terms of interpretation I wouldn't bother. SEs or p-values do matter, but $R^2$ is just there to denote the overall fit, and often the values themselves are not that meaningful. Adjusted $R^2$ < unadjusted $R^2$ will hold for both the regular variance and the long-run variance, so I would just use the regular one, which is easier.
$R^2$ and adjusted $R^2$ in presence of overlapping observations
I will refer to population, vanilla, adjusted as (1), (2), (3), respectively. Q1) As (1) is for population while (2), (3) are its sample analogue, the same will hold for LRVar. For the population you
$R^2$ and adjusted $R^2$ in presence of overlapping observations I will refer to population, vanilla, adjusted as (1), (2), (3), respectively. Q1) As (1) is for the population while (2), (3) are its sample analogues, the same will hold for LRVar. For the population you will use $k=\infty$ and some finite integer for the sample. Q2) I haven't done the calculation, but using LRVar will make a difference. Given that Newey-West takes autocorrelation in the errors into account, it will give larger SEs than OLS or HC estimators when the errors are positively autocorrelated. It returns "more conservative" values, so I presume using LRVar will result in a smaller $R^2$. Q3) Technically they will differ, but in terms of interpretation I wouldn't bother. SEs or p-values do matter, but $R^2$ is just there to denote the overall fit, and often the values themselves are not that meaningful. Adjusted $R^2$ < unadjusted $R^2$ will hold for both the regular variance and the long-run variance, so I would just use the regular one, which is easier.
$R^2$ and adjusted $R^2$ in presence of overlapping observations I will refer to population, vanilla, adjusted as (1), (2), (3), respectively. Q1) As (1) is for population while (2), (3) are its sample analogue, the same will hold for LRVar. For the population you
51,260
Correlation between logrank (log-rank) test statistics with common control
The correlation of 0.5 is an assumption; it is not derived. I will not accept this as an answer to my question because I do not have a source. This was based on a conversation with an expert, for whatever that's worth. The assumption is made when there are equal sample sizes in the experimental arms and the control arm. The numerator of each hazard ratio is calculated using its own experimental arm and the denominator is calculated using the common control arm. So the numerators of the hazard ratios are independent while the denominators are perfectly correlated. This heuristic is why the correlation is assumed to be 1/2.
Correlation between logrank (log-rank) test statistics with common control
The correlation of 0.5 is an assumption, it is not derived. I will not accept this as an answer to my question because I do not have a source. This was based on a conversation with an expert, for what
Correlation between logrank (log-rank) test statistics with common control The correlation of 0.5 is an assumption; it is not derived. I will not accept this as an answer to my question because I do not have a source. This was based on a conversation with an expert, for whatever that's worth. The assumption is made when there are equal sample sizes in the experimental arms and the control arm. The numerator of each hazard ratio is calculated using its own experimental arm and the denominator is calculated using the common control arm. So the numerators of the hazard ratios are independent while the denominators are perfectly correlated. This heuristic is why the correlation is assumed to be 1/2.
Correlation between logrank (log-rank) test statistics with common control The correlation of 0.5 is an assumption, it is not derived. I will not accept this as an answer to my question because I do not have a source. This was based on a conversation with an expert, for what
51,261
How to win this dice probability game?
This is how I look at it, but I'll admit I may have misunderstood the game! Assuming you have a current banked score of B, the expected return for any given round is: $$E(return)=\frac{1}{6}(-B)+\frac{1}{6}(1+3+4+5+6)$$ $$E(return)=\frac{1}{6}(19-B)$$ So once you have a bank of 19 points, it is better to get out than take the chance. I believe this will maximize your average score in the long run. However, when it comes to games, sometimes things are more complicated than simple optimization. For a 2 player game, I think I would follow the advice of my analysis above. However, for a 20 player game, it is clear that you will need some luck: since 2nd place is the first loser, you want to give yourself a chance at a very high score, not just try to avoid a very low one. Intuitively, I expect that this means you need to push your luck past the 19-point mark, but I'll have to think harder about how to quantify that for a game of n people. Running a simulation, I find that the mean is, indeed, optimized near a threshold of 19. However, as I also suspected, the lucky game (mean + 2$\sigma$) is actually optimized at around 29 or 30. So if you need to beat 19 other players, wait until then.
How to win this dice probability game?
This is how I look at it, but I'll admit I may have misunderstood the game! Assuming you have a current banked score of B, the expected return for any given round is: $$E(return)=\frac{1}{6}(-B)+\frac
How to win this dice probability game? This is how I look at it, but I'll admit I may have misunderstood the game! Assuming you have a current banked score of B, the expected return for any given round is: $$E(return)=\frac{1}{6}(-B)+\frac{1}{6}(1+3+4+5+6)$$ $$E(return)=\frac{1}{6}(19-B)$$ So once you have a bank of 19 points, it is better to get out than take the chance. I believe this will maximize your average score in the long run. However, when it comes to games, sometimes things are more complicated than simple optimization. For a 2 player game, I think I would follow the advice of my analysis above. However, for a 20 player game, it is clear that you will need some luck: since 2nd place is the first loser, you want to give yourself a chance at a very high score, not just try to avoid a very low one. Intuitively, I expect that this means you need to push your luck past the 19-point mark, but I'll have to think harder about how to quantify that for a game of n people. Running a simulation, I find that the mean is, indeed, optimized near a threshold of 19. However, as I also suspected, the lucky game (mean + 2$\sigma$) is actually optimized at around 29 or 30. So if you need to beat 19 other players, wait until then.
How to win this dice probability game? This is how I look at it, but I'll admit I may have misunderstood the game! Assuming you have a current banked score of B, the expected return for any given round is: $$E(return)=\frac{1}{6}(-B)+\frac
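The threshold of 19 is easy to see numerically; here is a small sketch based on the expected-return formula above:
# expected change in the bank from one more throw, given a current bank of B:
# with prob 1/6 you roll a 2 and lose B, otherwise you add 1, 3, 4, 5 or 6
expected_gain <- function(B) (1/6) * (-B) + (1/6) * (1 + 3 + 4 + 5 + 6)

B <- 0:30
cbind(B, gain = expected_gain(B))[B %in% c(18, 19, 20), ]
# positive below 19, zero at 19, negative above 19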
51,262
How to win this dice probability game?
The following results are from my simulation in R. Assuming there are 6 rounds in total, the first two throws are always performed in each round, and after each round (the first 2) everybody resets. I will simulate the game for only one person, since we assume the players play independently. I will test which strategy performs better: always banking on the first throw, the second, the third, and so on.
result = replicate(1e4, {
  bank = rep(0, 20)
  for (i in 1:6) {
    # one round: 20 throws; a 2 from the third throw onward wipes out later banking points
    round_draws = sample(1:6, 20, replace = TRUE)
    first_two = which(round_draws[3:20] == 2)[1] + 2
    if (!is.na(first_two)) {
      round_draws[first_two:20] = NA
    }
    # strategy k banks after k throws; an NA (bust) propagates through cumsum and scores 0
    new_bank = cumsum(round_draws)
    new_bank[is.na(new_bank)] = 0
    bank = bank + new_bank
  }
  return(bank)
})
The plotted results show that the best strategy (in the long run, on average) is banking every 6 throws.
How to win this dice probability game?
The following results are from my simulation in R. Assuming there are 6 rounds in total, the first two throws are always performed in each round, and after each round (the first 2) everybody resets. I
How to win this dice probability game? The following results are from my simulation in R. Assuming there are 6 rounds in total, the first two throws are always performed in each round, and after each round (the first 2) everybody resets. I will simulate the game for only one person, since we assume the players play independently. I will test which strategy performs better: always banking on the first throw, the second, the third, and so on.
result = replicate(1e4, {
  bank = rep(0, 20)
  for (i in 1:6) {
    # one round: 20 throws; a 2 from the third throw onward wipes out later banking points
    round_draws = sample(1:6, 20, replace = TRUE)
    first_two = which(round_draws[3:20] == 2)[1] + 2
    if (!is.na(first_two)) {
      round_draws[first_two:20] = NA
    }
    # strategy k banks after k throws; an NA (bust) propagates through cumsum and scores 0
    new_bank = cumsum(round_draws)
    new_bank[is.na(new_bank)] = 0
    bank = bank + new_bank
  }
  return(bank)
})
The plotted results show that the best strategy (in the long run, on average) is banking every 6 throws.
How to win this dice probability game? The following results are from my simulation in R. Assuming there are 6 rounds in total, the first two throws are always performed in each round, and after each round (the first 2) everybody resets. I
51,263
Stop-gradient operator in vector-quantized variational autoencoder
I have been looking into the same question, and I have finally deduced the following. I think it is a weighting factor that balances the importance of the two terms (codebook loss and commitment loss). If the $\beta$ factor is smaller than 1, it means that the encoder is updated faster than the codebook. That is interesting if, for example, we think about the codebook from a centroid perspective: we do not want the centroids to update strongly in each iteration, because we have to preserve some information from the previous batches (all the more important if the batch is small). In short, we want the centroids (codebook) to move slowly while the encoder outputs can be updated faster. This technique can probably reduce the noise produced by mini-batch sampling, in contrast to using the whole dataset. This is what I have deduced; if it is not correct, please someone point it out.
Stop-gradient operator in vector-quantized variational autoencoder
I have been looking for the same question. I have finally deduced the following. I think it is a learning factor that balance the importance between terms (codebook loss and commitment loss). If the B
Stop-gradient operator in vector-quantized variational autoencoder I have been looking into the same question, and I have finally deduced the following. I think it is a weighting factor that balances the importance of the two terms (codebook loss and commitment loss). If the $\beta$ factor is smaller than 1, it means that the encoder is updated faster than the codebook. That is interesting if, for example, we think about the codebook from a centroid perspective: we do not want the centroids to update strongly in each iteration, because we have to preserve some information from the previous batches (all the more important if the batch is small). In short, we want the centroids (codebook) to move slowly while the encoder outputs can be updated faster. This technique can probably reduce the noise produced by mini-batch sampling, in contrast to using the whole dataset. This is what I have deduced; if it is not correct, please someone point it out.
Stop-gradient operator in vector-quantized variational autoencoder I have been looking for the same question. I have finally deduced the following. I think it is a learning factor that balance the importance between terms (codebook loss and commitment loss). If the B
51,264
Stop-gradient operator in vector-quantized variational autoencoder
If you take gradients, the two formulas you mentioned are the same. However, if we write it in the form of the first formula, we fix $z_e$ (zero gradient) and let $e$ approach $z_e$, and vice versa. I think it would be easier for training.
Stop-gradient operator in vector-quantized variational autoencoder
If you take gradients, the two formulas you mentioned are the same. However, if we write in the form of the first formula, fix $z_e$ (zero gradient), let $e$ approach $z_e$ and inversely. I think it
Stop-gradient operator in vector-quantized variational autoencoder If you take gradients, the two formulas you mentioned are the same. However, if we write it in the form of the first formula, we fix $z_e$ (zero gradient) and let $e$ approach $z_e$, and vice versa. I think it would be easier for training.
Stop-gradient operator in vector-quantized variational autoencoder If you take gradients, the two formulas you mentioned are the same. However, if we write in the form of the first formula, fix $z_e$ (zero gradient), let $e$ approach $z_e$ and inversely. I think it
51,265
Calculate percentiles of angles/bearings in Python
If some angular region has very few crimes, you can start from an angle there. E.g. consider crime in Chicago, and its angle relative to the center of Chicago’s street grid at Madison & State. The angles with the fewest crimes are due east of the center, where there is little land. The bearings in the original data probably go between 0 for north, 90 for east, 180 for south, 270 for west, and 359 for just west of north. Adding 360 to some of those numbers gives a new scale going between 90 for east, 180 for south, 270 for west, 359 for just west of north, 360 for north, and 449 for just north of east. This replaces the bearings at 1 and 359 with new bearings at 361 and 359, which are close both numerically and geographically. So the crimes on Lake Shore Drive and in Lincoln Park will show up close together. Since the new scale is smooth in the regions where the crimes are concentrated, the quantiles derived from it will be sensible.
Calculate percentiles of angles/bearings in Python
If some angular region has very few crimes, you can start from an angle there. E.g. Consider crime in Chicago, and its angle relative to the center of Chicago’s street grid at Madison & State. The an
Calculate percentiles of angles/bearings in Python If some angular region has very few crimes, you can start from an angle there. E.g. consider crime in Chicago, and its angle relative to the center of Chicago’s street grid at Madison & State. The angles with the fewest crimes are due east of the center, where there is little land. The bearings in the original data probably go between 0 for north, 90 for east, 180 for south, 270 for west, and 359 for just west of north. Adding 360 to some of those numbers gives a new scale going between 90 for east, 180 for south, 270 for west, 359 for just west of north, 360 for north, and 449 for just north of east. This replaces the bearings at 1 and 359 with new bearings at 361 and 359, which are close both numerically and geographically. So the crimes on Lake Shore Drive and in Lincoln Park will show up close together. Since the new scale is smooth in the regions where the crimes are concentrated, the quantiles derived from it will be sensible.
Calculate percentiles of angles/bearings in Python If some angular region has very few crimes, you can start from an angle there. E.g. Consider crime in Chicago, and its angle relative to the center of Chicago’s street grid at Madison & State. The an
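The question asks about Python, but the shifting trick itself is language-agnostic; here is the same idea sketched in R with made-up bearings (it ports directly to numpy.percentile): shift every bearing below the chosen cut angle by +360, take ordinary quantiles on the shifted scale, then map back to [0, 360).
bearings <- c(1, 5, 350, 355, 359, 170, 180, 190)   # hypothetical crime bearings

cut_angle <- 90                                      # a direction with few observations
shifted <- ifelse(bearings < cut_angle, bearings + 360, bearings)

q <- quantile(shifted, probs = c(0.25, 0.5, 0.75))
q %% 360                                             # back to ordinary bearings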
51,266
Predictive Accuracy of a Survival Model using Concordance
The best choice would be something similar to the second (bootstrap) option, but taken an additional step to get an estimate closer to how the model will perform when applied to the underlying population. Bootstrapping doesn't just give you the ability to estimate the standard error of your performance metric. It also gives you the ability to estimate the bias in the performance metric, arising from potential overfitting of the data set at hand. Under the bootstrap principle, the multiple bootstrap samples from your data set mimic repeatedly taking data sets from the underlying population. So you train models on multiple bootstrap samples and evaluate each of them both on the corresponding bootstrap sample and on your full data set. Calculate the average bias in your performance metric between the models applied to the corresponding bootstrap samples and to the full data set. That estimates the bias of the metric of your full model, built on your complete data set, when applied to the underlying population. Then you can report an optimism-corrected metric for your model in addition to the original metric. This optimism bootstrap is implemented in the validate() function of the R rms package for many model types including parametric survival models. It also evaluates several performance metrics beyond the C-index. Something similar can be accomplished with repeated cross-validation, and cross-validation is an option in the validate() function. It's not clear that provides any advantage over bootstrapping, with its justification noted above. In terms of number of events versus censored cases, the main issue is that the precision of any estimate will depend upon the total number of events. If there are many predictors relative to the number of events you might have problems with some bootstrapped samples not having enough events to fit your model, but in that case you might already be overfitting.
Predictive Accuracy of a Survival Model using Concordance
The best choice would be something similar to the second (bootstrap) option, but taken an additional step to get an estimate closer to how the model will perform when applied to the underlying populat
Predictive Accuracy of a Survival Model using Concordance The best choice would be something similar to the second (bootstrap) option, but taken an additional step to get an estimate closer to how the model will perform when applied to the underlying population. Bootstrapping doesn't just give you the ability to estimate the standard error of your performance metric. It also gives you the ability to estimate the bias in the performance metric, arising from potential overfitting of the data set at hand. Under the bootstrap principle, the multiple bootstrap samples from your data set mimic repeatedly taking data sets from the underlying population. So you train models on multiple bootstrap samples and evaluate each of them both on the corresponding bootstrap sample and on your full data set. Calculate the average bias in your performance metric between the models applied to the corresponding bootstrap samples and to the full data set. That estimates the bias of the metric of your full model, built on your complete data set, when applied to the underlying population. Then you can report an optimism-corrected metric for your model in addition to the original metric. This optimism bootstrap is implemented in the validate() function of the R rms package for many model types including parametric survival models. It also evaluates several performance metrics beyond the C-index. Something similar can be accomplished with repeated cross-validation, and cross-validation is an option in the validate() function. It's not clear that provides any advantage over bootstrapping, with its justification noted above. In terms of number of events versus censored cases, the main issue is that the precision of any estimate will depend upon the total number of events. If there are many predictors relative to the number of events you might have problems with some bootstrapped samples not having enough events to fit your model, but in that case you might already be overfitting.
Predictive Accuracy of a Survival Model using Concordance The best choice would be something similar to the second (bootstrap) option, but taken an additional step to get an estimate closer to how the model will perform when applied to the underlying populat
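A sketch of the optimism bootstrap with the rms package (illustrative only; the data frame and variable names are made up, and the fit is stored with x = TRUE, y = TRUE so that validate() can resample it):
library(rms)
library(survival)

dd <- datadist(mydata); options(datadist = "dd")   # mydata is your data frame

fit <- cph(Surv(time, status) ~ age + stage + marker,
           data = mydata, x = TRUE, y = TRUE, surv = TRUE)

# 200 bootstrap resamples: apparent and optimism-corrected indexes,
# including Dxy (the C-index is (Dxy + 1) / 2)
set.seed(1)
validate(fit, B = 200)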
51,267
Machine learning models for regression on small data sets
Small datasets and few features are a domain where traditional statistical models tend to do very well, because they offer the ability to actually interpret the importance of your features. I'm assuming by "simple regression" you mean predicting a real-valued, continuous variable y from your input variables. You mention that you suspect you may have non-linear relationships, and you don't know much about the importance of the features. My instincts in this case would be to use a generalized additive model (GAM), like the mgcv package for R. mgcv has very nice default methods for choosing some of the more arcane parameters in a GAM, like how many knots and where to put them. Maybe you have three predictors, x1, x2, and x3, where x1 and x2 are continuous and x3 is a categorical variable. In this case you could do (in R):
library(mgcv)
x3 <- as.factor(x3)
my.model <- gam(y ~ s(x1) + s(x2) + x3, method = "REML")
summary(my.model)
plot(my.model, shade = TRUE, pages = 1)
That last part about using REML is personal preference. It sets how "wiggly" the nonlinear curves are allowed to be. The default method uses, if I recall, generalized cross-validation, which works fine, though in my experience it tends to give "wigglier" curves.
Machine learning models for regression on small data sets
Small datasets and few features are a domain where traditional statistical models tend to do very well, because they offer the ability to actually interpret the importance of your features. I'm assum
Machine learning models for regression on small data sets Small datasets and few features are a domain where traditional statistical models tend to do very well, because they offer the ability to actually interpret the importance of your features. I'm assuming by "simple regression" you mean predicting a real-valued, continuous variable y from your input variables. You mention that you suspect you may have non-linear relationships, and you don't know much about the importance of the features. My instincts in this case would be to use a generalized additive model (GAM), like the mgcv package for R. mgcv has very nice default methods for choosing some of the more arcane parameters in a GAM, like how many knots and where to put them. Maybe you have three predictors, x1, x2, and x3, where x1 and x2 are continuous and x3 is a categorical variable. In this case you could do (in R):
library(mgcv)
x3 <- as.factor(x3)
my.model <- gam(y ~ s(x1) + s(x2) + x3, method = "REML")
summary(my.model)
plot(my.model, shade = TRUE, pages = 1)
That last part about using REML is personal preference. It sets how "wiggly" the nonlinear curves are allowed to be. The default method uses, if I recall, generalized cross-validation, which works fine, though in my experience it tends to give "wigglier" curves.
Machine learning models for regression on small data sets Small datasets and few features are a domain where traditional statistical models tend to do very well, because they offer the ability to actually interpret the importance of your features. I'm assum
51,268
Machine learning models for regression on small data sets
If you are considering linear models and are concerned with overfitting, you can consider using linear regression with regularization, i.e. ridge regression or lasso or a combination ("elastic net"). If you want to try out non-linear as well as interaction terms, you can try SVM regression with a polynomial kernel or an RBF kernel. This will still require you to divide the data into train-tune-test pieces, but you can use k-fold cross-validation for the "train-tune" part (to trade off lack of data for additional computation). You can keep 25% for test. It is difficult to avoid a test sample if you are concerned about overfitting - this is because unless you test the fitted model on an unseen sample, you can't get an unbiased estimate of the model's performance.
Machine learning models for regression on small data sets
If you are considering linear models, and are concerned with overfitting, you can consider using linear regression with regularization ie. ridge regression or lasso or a combination("elastic net"). I
Machine learning models for regression on small data sets If you are considering linear models and are concerned with overfitting, you can consider using linear regression with regularization, i.e. ridge regression or lasso or a combination ("elastic net"). If you want to try out non-linear as well as interaction terms, you can try SVM regression with a polynomial kernel or an RBF kernel. This will still require you to divide the data into train-tune-test pieces, but you can use k-fold cross-validation for the "train-tune" part (to trade off lack of data for additional computation). You can keep 25% for test. It is difficult to avoid a test sample if you are concerned about overfitting - this is because unless you test the fitted model on an unseen sample, you can't get an unbiased estimate of the model's performance.
Machine learning models for regression on small data sets If you are considering linear models, and are concerned with overfitting, you can consider using linear regression with regularization ie. ridge regression or lasso or a combination("elastic net"). I
51,269
Machine learning models for regression on small data sets
The problem with daily data and only 250 days is that you could face seasonality issues that you can't really evaluate statistically, only by business knowledge. But regardless of seasonality, 250 samples and 10 features are quite enough, in my opinion, to build a predictive model. The best way to do it is to use boosted regression (see xgboost, which does a great job right now and is very popular and easy to understand) with a good validation process like this one, which I use a lot right now on small datasets: http://dataneel.github.io/nx2_cross_validation/ I'm not a fan of doing regularization on only 10 features; it is really not necessary. You have the luxury of studying the impact of each variable on your target with basic correlation analysis, plotting each x against your y to see the shape of the relationship, and so on.
Machine learning models for regression on small data sets
The problem with daily data and only 250 days is that you could face seasonality issues that you can't really evaluate statistically, only by business knowledge. But regardless of seasonality, 250 sam
Machine learning models for regression on small data sets The problem with daily data and only 250 days is that you could face seasonality issues that you can't really evaluate statistically, only by business knowledge. But regardless of seasonality, 250 samples and 10 features are quite enough, in my opinion, to build a predictive model. The best way to do it is to use boosted regression (see xgboost, which does a great job right now and is very popular and easy to understand) with a good validation process like this one, which I use a lot right now on small datasets: http://dataneel.github.io/nx2_cross_validation/ I'm not a fan of doing regularization on only 10 features; it is really not necessary. You have the luxury of studying the impact of each variable on your target with basic correlation analysis, plotting each x against your y to see the shape of the relationship, and so on.
Machine learning models for regression on small data sets The problem with daily data and only 250 days is that you could face seasonality issues that you can't really evaluate statistically, only by business knowledge. But regardless of seasonality, 250 sam
51,270
Machine learning models for regression on small data sets
With such a small data set, I would consider a few options: Neural network with some transfer learning if you can find a larger well-labeled data set to complement what you have A semi-supervised approach if you can find a large source of unlabeled data to complement your data Bayes net if you are able to reasonably hand craft some priors Adding regularization to your model It's tough to say exactly what is best without understanding the dataset and the operating goals of your model. Typically, you're better off spending your effort improving your data sources than your model with this little data though if you hope to generalize well in the real world
Machine learning models for regression on small data sets
With such a small data set, I would consider a few options: Neural network with some transfer learning if you can find a larger well-labeled data set to complement what you have A semi-supervised app
Machine learning models for regression on small data sets With such a small data set, I would consider a few options: a neural network with some transfer learning, if you can find a larger well-labeled data set to complement what you have; a semi-supervised approach, if you can find a large source of unlabeled data to complement your data; a Bayes net, if you are able to reasonably hand-craft some priors; or adding regularization to your model. It's tough to say exactly what is best without understanding the dataset and the operating goals of your model. Typically, though, with this little data you are better off spending your effort improving your data sources than your model if you hope to generalize well in the real world.
Machine learning models for regression on small data sets With such a small data set, I would consider a few options: Neural network with some transfer learning if you can find a larger well-labeled data set to complement what you have A semi-supervised app
51,271
Which constant to add when applying 'Box-Cox transformation' to negative values? [duplicate]
Determine the smallest number in your time series, say -10 for example. The constant you would need is then 10.0000000001 or larger, in order to make all the adjusted values positive. It doesn't make any difference which constant you pick, as the reverse transformation needed to obtain forecasts will use the same adjustment factor. Please see https://www.ime.usp.br/~abe/lista/pdfm9cJKUmFZp.pdf where Box & Cox suggested an alternative to deal with negative numbers without adding a constant. EDITED after OP's question re logs: from the above reference, perhaps this helps ... logs are used
Which constant to add when applying 'Box-Cox transformation' to negative values? [duplicate]
Determine the smallest number in your time series , say -10. for example . The constant you would need is then 10.0000000001 or larger in order to make all the adjusted values positive. It doesn't mak
Which constant to add when applying 'Box-Cox transformation' to negative values? [duplicate] Determine the smallest number in your time series, say -10 for example. The constant you would need is then 10.0000000001 or larger, in order to make all the adjusted values positive. It doesn't make any difference which constant you pick, as the reverse transformation needed to obtain forecasts will use the same adjustment factor. Please see https://www.ime.usp.br/~abe/lista/pdfm9cJKUmFZp.pdf where Box & Cox suggested an alternative to deal with negative numbers without adding a constant. EDITED after OP's question re logs: from the above reference, perhaps this helps ... logs are used
Which constant to add when applying 'Box-Cox transformation' to negative values? [duplicate] Determine the smallest number in your time series , say -10. for example . The constant you would need is then 10.0000000001 or larger in order to make all the adjusted values positive. It doesn't mak
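A minimal R sketch of the shift-then-transform recipe above, using the forecast package on a hypothetical series x that contains negative values (the simulated series, the ARIMA model and the tiny offset are illustrative assumptions):
library(forecast)
set.seed(1)
x <- ts(cumsum(rnorm(60, 0, 5)))                         # hypothetical series with negative values
shift <- abs(min(x)) + 0.0000000001                       # constant that makes every value positive
lambda <- BoxCox.lambda(x + shift)
z <- BoxCox(x + shift, lambda)                            # transform the shifted series
fc <- forecast(auto.arima(z), h = 12)
fc_original_scale <- InvBoxCox(fc$mean, lambda) - shift   # undo the transform and the shift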
51,272
GAMM with multiple and crossed random effects
Specifying random effect terms in gamm4 is different from mgcv; gamm4 uses lme4-style formulas. The syntax I show is provided in this book. Two crossed random effect terms in gamm4 are specified as: random = ~ (1|xr1) + (1|xr2) If they are nested, it is: random = ~ (1|xr1/xr2)
GAMM with multiple and crossed random effects
Specifying random effect terms in gamm4 is different to mgcv. The syntax I show is provided in this book. Two random effect terms in gamm4 is: random = ~(1|xr1 + 1|xr2) If they are nested, it is: ran
GAMM with multiple and crossed random effects Specifying random effect terms in gamm4 is different from mgcv; gamm4 uses lme4-style formulas. The syntax I show is provided in this book. Two crossed random effect terms in gamm4 are specified as: random = ~ (1|xr1) + (1|xr2) If they are nested, it is: random = ~ (1|xr1/xr2)
GAMM with multiple and crossed random effects Specifying random effect terms in gamm4 is different to mgcv. The syntax I show is provided in this book. Two random effect terms in gamm4 is: random = ~(1|xr1 + 1|xr2) If they are nested, it is: ran
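A minimal R sketch of such a gamm4 call, assuming a hypothetical data frame dat with response y, covariates x0 and x1 entered as smooths, and grouping factors xr1 and xr2 (all names are placeholders, not from the question):
library(gamm4)
m_crossed <- gamm4(y ~ s(x0) + s(x1), random = ~ (1 | xr1) + (1 | xr2), data = dat)  # crossed random intercepts
m_nested  <- gamm4(y ~ s(x0) + s(x1), random = ~ (1 | xr1 / xr2), data = dat)        # xr2 nested within xr1
summary(m_crossed$gam)   # smooth terms
summary(m_crossed$mer)   # random-effect variance components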
51,273
What is the difference between Dice loss vs Jaccard loss in semantic segmentation task?
The Jaccard index is basically the Intersection over Union (IoU). If you subtract the Jaccard index from 1, you will get the Jaccard loss (or IoU loss). Similarly, if you do the same with the Dice coefficient, you will get the Dice loss. For a comparison of IoU (or Jaccard) and Dice, I recommend reading this article.
What is the difference between Dice loss vs Jaccard loss in semantic segmentation task?
Jaccard Index is basically the Intersection over Union (IoU). If you subtract Jaccard Index from 1, you will get the Jaccard Loss (or IoU loss). Similarly if you do the same on Dice Coef., you will ge
What is the difference between Dice loss vs Jaccard loss in semantic segmentation task? The Jaccard index is basically the Intersection over Union (IoU). If you subtract the Jaccard index from 1, you will get the Jaccard loss (or IoU loss). Similarly, if you do the same with the Dice coefficient, you will get the Dice loss. For a comparison of IoU (or Jaccard) and Dice, I recommend reading this article.
What is the difference between Dice loss vs Jaccard loss in semantic segmentation task? Jaccard Index is basically the Intersection over Union (IoU). If you subtract Jaccard Index from 1, you will get the Jaccard Loss (or IoU loss). Similarly if you do the same on Dice Coef., you will ge
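A minimal R sketch of the two losses for binary segmentation masks (the masks, the smoothing constant eps and the function names are illustrative assumptions, not from any particular library):
# pred and target are 0/1 masks of the same shape
jaccard_loss <- function(pred, target, eps = 1e-7) {
  inter <- sum(pred * target)
  union <- sum(pred) + sum(target) - inter
  1 - (inter + eps) / (union + eps)                        # 1 - IoU
}
dice_loss <- function(pred, target, eps = 1e-7) {
  inter <- sum(pred * target)
  1 - (2 * inter + eps) / (sum(pred) + sum(target) + eps)  # 1 - Dice coefficient
}
set.seed(1)
pred   <- matrix(rbinom(25, 1, 0.5), 5, 5)
target <- matrix(rbinom(25, 1, 0.5), 5, 5)
c(jaccard = jaccard_loss(pred, target), dice = dice_loss(pred, target))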
51,274
Importance of regressors in time series data
I would always assess this using a holdout sample. Fit models with and without the predictor and see how much your forecasts improve. I realize my recommendation makes most sense in the context of forecasting. If you are just looking for in-sample fit, you can do the same thing. If you are indeed forecasting, it would also make sense to use forecasted values of predictors. A predictor that does a great job explaining your time series but that can't be forecasted accurately is not really very helpful. Of course, the effect of a predictor may depend on whether or not another predictor is in the model. For instance, the effect of weather information will be much lower if we already have a seasonal model than if our model is nonseasonal. Thus, it is doubtful whether we can reasonably speak of "the" effect of a predictor. And yes, if we want to look at all possible combinations of predictors, that may lead to a combinatorial explosion. It may make sense to just test the full model with $k$ predictors against $k$ submodels that each drop just one predictor. Finally, note that whether one or the other forecast is "better" depends on your accuracy measure (Kolassa, 2020).
Importance of regressors in time series data
I would always assess this using a holdout sample. Fit models with and without the predictor and see how much your forecasts improve. I realize my recommendation makes most sense in the context of for
Importance of regressors in time series data I would always assess this using a holdout sample. Fit models with and without the predictor and see how much your forecasts improve. I realize my recommendation makes most sense in the context of forecasting. If you are just looking for in-sample fit, you can do the same thing. If you are indeed forecasting, it would also make sense to use forecasted values of predictors. A predictor that does a great job explaining your time series but that can't be forecasted accurately is not really very helpful. Of course, the effect of a predictor may depend on whether or not another predictor is in the model. For instance, the effect of weather information will be much lower if we already have a seasonal model than if our model is nonseasonal. Thus, it is doubtful whether we can reasonably speak of "the" effect of a predictor. And yes, if we want to look at all possible combinations of predictors, that may lead to a combinatorial explosion. It may make sense to just test the full model with $k$ predictors against $k$ submodels that each drop just one predictor. Finally, note that whether one or the other forecast is "better" depends on your accuracy measure (Kolassa, 2020).
Importance of regressors in time series data I would always assess this using a holdout sample. Fit models with and without the predictor and see how much your forecasts improve. I realize my recommendation makes most sense in the context of for
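A minimal R sketch of the holdout comparison described above, assuming hypothetical numeric vectors y (the series) and x (the candidate predictor) of equal length; with a seasonal series you would keep the ts frequency and use window() instead of plain indexing:
library(forecast)
n_test <- 12
train  <- 1:(length(y) - n_test)
test   <- (length(y) - n_test + 1):length(y)
fit0 <- auto.arima(y[train])                              # model without the predictor
fit1 <- auto.arima(y[train], xreg = cbind(x = x[train]))  # model with the predictor
fc0  <- forecast(fit0, h = n_test)
fc1  <- forecast(fit1, xreg = cbind(x = x[test]))         # assumes x is known (or forecast) for the holdout
mean(abs(y[test] - fc0$mean))                             # holdout MAE without x
mean(abs(y[test] - fc1$mean))                             # holdout MAE with x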
51,275
Proof that predictions are unbiased in an endogenous linear model
The answers are actually pretty straightforward. In general, the predictions will not be unbiased; to see that, just notice: $$ E[y|X]= X\beta + E[\epsilon|X] $$ Thus, if $E[\epsilon|X]$ is not a linear function of $X$, the population linear regression will not recover the true expectation function $E[y|X]$. Instead, it will give you the best linear approximation of $y$ (best as in minimizing the quadratic error). Since this is true for the population, of course sample estimates are also not consistent. Conversely, if $E[\epsilon|X]$ is a linear function of $X$, then the population linear regression is by definition $E[y|X]$. So all standard results for linear regression apply, and you will get unbiased predictions. You just won't recover the structural $\beta$; instead, you will recover the population regression coefficients $E[XX']^{-1}E[XY]$.
Proof that predictions are unbiased in in endogenous linear model
The answers are actually pretty straightforward. In general the predictions will not be unbiased, to see that just notice: $$ E[y|X]= X\beta + E[\epsilon|X] $$ Thus, if $E[\epsilon|X]$ is not linear f
Proof that predictions are unbiased in an endogenous linear model The answers are actually pretty straightforward. In general, the predictions will not be unbiased; to see that, just notice: $$ E[y|X]= X\beta + E[\epsilon|X] $$ Thus, if $E[\epsilon|X]$ is not a linear function of $X$, the population linear regression will not recover the true expectation function $E[y|X]$. Instead, it will give you the best linear approximation of $y$ (best as in minimizing the quadratic error). Since this is true for the population, of course sample estimates are also not consistent. Conversely, if $E[\epsilon|X]$ is a linear function of $X$, then the population linear regression is by definition $E[y|X]$. So all standard results for linear regression apply, and you will get unbiased predictions. You just won't recover the structural $\beta$; instead, you will recover the population regression coefficients $E[XX']^{-1}E[XY]$.
Proof that predictions are unbiased in in endogenous linear model The answers are actually pretty straightforward. In general the predictions will not be unbiased, to see that just notice: $$ E[y|X]= X\beta + E[\epsilon|X] $$ Thus, if $E[\epsilon|X]$ is not linear f
51,276
Proof that predictions are unbiased in an endogenous linear model
Claims 1 and 2 are actually both false. To show that they are false over a broad class of models in which $E[\epsilon|X] \neq 0$, it is enough to pick a single model or a narrow set of models from within that broad class of models, and to disprove the claims for the narrow set. I will follow this strategy in my answer. Suppose the true model is $y_i = z_i\Gamma + x_i\beta + \delta_i$ for some zero-mean $\delta_i$'s and for a hidden set of confounders $Z$. Suppose $E[z_i|x_i] = x_iA$. Your $\epsilon_i$ is my $z_i\Gamma + \delta_i$, which has conditional mean $E[\epsilon_i|x_i] = x_iA\Gamma$. Then $E[y_i|x_i] = x_iA\Gamma + x_i\beta$, and the OLS estimates are targeting $\tilde\beta \equiv A\Gamma + \beta$ instead of just $\beta$. But the true regression function is still linear, so predictions will be unbiased. To address your claims directly, using my notation: Claim 1 should read $E[ X \hat \beta] = X\tilde\beta$, not $E[X \hat \beta ] = X\beta$. So as stated, it is false. Your second term, in my notation, is $A\Gamma$, which is indeed nonzero. Claim 2 is likewise missing a tilde. Claim 1 is worth a little more discussion. If it were modified to include $A\Gamma$, it would become true, at least for the limited set of models I have discussed. This is significant because it is a formal statement of the earlier assertion that predictions are unbiased. To expand on this, it's worth stating explicitly what I mean by an unbiased prediction. The predictor that best avoids systematic differences with the true model is $X\beta + E[\epsilon|X] = X\beta + E[Z\Gamma + \delta|X] = X\beta + XA\Gamma$. I would call unbiased any prediction method whose expectation is this. Since $\beta + A\Gamma$ is exactly what the OLS estimates target, OLS produces unbiased predictions here. (By contrast, this is still biased if you want to infer $\beta$.)
Proof that predictions are unbiased in in endogenous linear model
Claims 1 and 2 are actually both false. To show that they are false over a broad class of models in which $E[\epsilon|X] \neq 0$, it is enough to pick a single model or a narrow set of models from wit
Proof that predictions are unbiased in an endogenous linear model Claims 1 and 2 are actually both false. To show that they are false over a broad class of models in which $E[\epsilon|X] \neq 0$, it is enough to pick a single model or a narrow set of models from within that broad class of models, and to disprove the claims for the narrow set. I will follow this strategy in my answer. Suppose the true model is $y_i = z_i\Gamma + x_i\beta + \delta_i$ for some zero-mean $\delta_i$'s and for a hidden set of confounders $Z$. Suppose $E[z_i|x_i] = x_iA$. Your $\epsilon_i$ is my $z_i\Gamma + \delta_i$, which has conditional mean $E[\epsilon_i|x_i] = x_iA\Gamma$. Then $E[y_i|x_i] = x_iA\Gamma + x_i\beta$, and the OLS estimates are targeting $\tilde\beta \equiv A\Gamma + \beta$ instead of just $\beta$. But the true regression function is still linear, so predictions will be unbiased. To address your claims directly, using my notation: Claim 1 should read $E[ X \hat \beta] = X\tilde\beta$, not $E[X \hat \beta ] = X\beta$. So as stated, it is false. Your second term, in my notation, is $A\Gamma$, which is indeed nonzero. Claim 2 is likewise missing a tilde. Claim 1 is worth a little more discussion. If it were modified to include $A\Gamma$, it would become true, at least for the limited set of models I have discussed. This is significant because it is a formal statement of the earlier assertion that predictions are unbiased. To expand on this, it's worth stating explicitly what I mean by an unbiased prediction. The predictor that best avoids systematic differences with the true model is $X\beta + E[\epsilon|X] = X\beta + E[Z\Gamma + \delta|X] = X\beta + XA\Gamma$. I would call unbiased any prediction method whose expectation is this. Since $\beta + A\Gamma$ is exactly what the OLS estimates target, OLS produces unbiased predictions here. (By contrast, this is still biased if you want to infer $\beta$.)
Proof that predictions are unbiased in in endogenous linear model Claims 1 and 2 are actually both false. To show that they are false over a broad class of models in which $E[\epsilon|X] \neq 0$, it is enough to pick a single model or a narrow set of models from wit
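A small R simulation, under the same kind of omitted-confounder setup sketched above (all parameter values are made up for illustration), showing that OLS misses the structural coefficient but still tracks $E[y|x]$:
set.seed(1)
n <- 1e5
A <- 0.8; Gamma <- 2; beta <- 1                 # hypothetical structural parameters
x <- rnorm(n)
z <- A * x + rnorm(n)                           # hidden confounder with E[z|x] = A x
y <- Gamma * z + beta * x + rnorm(n)            # the error Gamma*z + noise is endogenous
fit <- lm(y ~ x)
coef(fit)["x"]                                  # close to beta + A*Gamma = 2.6, not beta = 1
cbind(fitted = fitted(fit)[1:5],                # predictions track E[y | x] = (beta + A*Gamma) x
      Ey_given_x = (beta + A * Gamma) * x[1:5])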
51,277
Detrending positive data, avoiding negative data
You need to transform your data so that it is no longer restricted to be positive (the log maps positive values onto the whole real line). If the log transform does not work, then try another transform, such as a Box-Cox transformation. Then take differences, say.
Detrending positive data, avoiding negative data
You need to re-scale your data on the real domain. If the log transform does not work, then try another transform. Then take differences, say.
Detrending positive data, avoiding negative data You need to transform your data so that it is no longer restricted to be positive (the log maps positive values onto the whole real line). If the log transform does not work, then try another transform, such as a Box-Cox transformation. Then take differences, say.
Detrending positive data, avoiding negative data You need to re-scale your data on the real domain. If the log transform does not work, then try another transform. Then take differences, say.
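A minimal R sketch of the transform-then-difference idea, on a hypothetical strictly positive, trending series (the series itself is simulated purely for illustration):
set.seed(1)
x <- exp(cumsum(rnorm(200, 0.01, 0.05)))     # hypothetical positive series with a trend
x_log <- log(x)                               # maps (0, Inf) onto the whole real line
dx <- diff(x_log)                             # differencing removes the (now additive) trend
x_back <- exp(x_log[1] + cumsum(c(0, dx)))    # the transform and differencing are fully reversible
all.equal(x, x_back)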
51,278
Covering the unit sphere with sparse vectors
After some thought, I think we can conclude that finding such a cover is impossible. Consider the vector of all $1$'s with appropriate scaling: $$ v = \frac{1}{\sqrt{d}} \left(1 \; \dots \; 1\right)^\top \in \mathbb{S}^{d-1} $$ and consider a $k$-sparse vector $\bar{v}$. Then $$ \#\{i: (v - \bar{v})_i \neq 0\} \geq d - k $$ so the approximation error will satisfy $$ \|v - \bar{v}\|_2 \geq \sqrt{ \sum_{j=1}^{d - k} \left( \frac{1}{\sqrt{d}} \right)^2 } = \sqrt{\frac{d - k}{d}} = \sqrt{1 - \frac{k}{d}} $$ It is obvious that for this error to be independent of $d$ we would have to make sure that $k \sim c d$, otherwise for a large enough $d$ this approximation error could be driven arbitrarily close to $1$.
Covering the unit sphere with sparse vectors
After some thought, I think we can conclude that finding such a cover is impossible. Consider the vector of all $1$'s with appropriate scaling: $$ v = \frac{1}{\sqrt{d}} \left(1 \; \dots \; 1\right)^\
Covering the unit sphere with sparse vectors After some thought, I think we can conclude that finding such a cover is impossible. Consider the vector of all $1$'s with appropriate scaling: $$ v = \frac{1}{\sqrt{d}} \left(1 \; \dots \; 1\right)^\top \in \mathbb{S}^{d-1} $$ and consider a $k$-sparse vector $\bar{v}$. Then $$ \#\{i: (v - \bar{v})_i \neq 0\} \geq d - k $$ so the approximation error will satisfy $$ \|v - \bar{v}\|_2 \geq \sqrt{ \sum_{j=1}^{d - k} \left( \frac{1}{\sqrt{d}} \right)^2 } = \sqrt{\frac{d - k}{d}} = \sqrt{1 - \frac{k}{d}} $$ It is obvious that for this error to be independent of $d$ we would have to make sure that $k \sim c d$, otherwise for a large enough $d$ this approximation error could be driven arbitrarily close to $1$.
Covering the unit sphere with sparse vectors After some thought, I think we can conclude that finding such a cover is impossible. Consider the vector of all $1$'s with appropriate scaling: $$ v = \frac{1}{\sqrt{d}} \left(1 \; \dots \; 1\right)^\
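A quick R check of the bound derived above, using the scaled all-ones vector and a best possible k-sparse approximation (keeping k of its entries and zeroing the rest); d and k are arbitrary illustrative values:
d <- 1000; k <- 50
v <- rep(1 / sqrt(d), d)            # the unit vector with all entries 1/sqrt(d)
v_bar <- v
v_bar[(k + 1):d] <- 0               # keep k entries, zero the remaining d - k
sqrt(sum((v - v_bar)^2))            # achieved approximation error
sqrt(1 - k / d)                     # the lower bound from the argument above (they coincide)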
51,279
Test for comparing log likelihoods that include error terms
Taking this from the comment section to the answer part: If you are doing Bayesian model comparison, then there is no way to include any uncertainties. The comparison will tell you which model describes the data better. But, as mentioned in the Wikipedia article (https://en.wikipedia.org/wiki/Bayes_factor), this doesn't mean that either model "fits" the data or even that either is correct. If the uncertainty is to be taken into account, then hypothesis testing can be used to determine which fits better. But again, this will not give any clue about whether either (or indeed any) model is correct. That being said, you cannot determine a probability for a certain model to be correct. Nothing in the likelihood method will do this for you or enable you to do it. If you use it for comparison -- as in the question -- then you can determine which model is more probable. If (!) you get the priors right, then you can state the actual ratio of the probabilities. But nothing more.
Test for comparing log likelihoods that include error terms
Taking this from the comment section to the answer part: If you are doing Bayesian model comparison, then there is no way to include any uncertainties. The comparison will tell you, which model descri
Test for comparing log likelihoods that include error terms Taking this from the comment section to the answer part: If you are doing Bayesian model comparison, then there is no way to include any uncertainties. The comparison will tell you which model describes the data better. But, as mentioned in the Wikipedia article (https://en.wikipedia.org/wiki/Bayes_factor), this doesn't mean that either model "fits" the data or even that either is correct. If the uncertainty is to be taken into account, then hypothesis testing can be used to determine which fits better. But again, this will not give any clue about whether either (or indeed any) model is correct. That being said, you cannot determine a probability for a certain model to be correct. Nothing in the likelihood method will do this for you or enable you to do it. If you use it for comparison -- as in the question -- then you can determine which model is more probable. If (!) you get the priors right, then you can state the actual ratio of the probabilities. But nothing more.
Test for comparing log likelihoods that include error terms Taking this from the comment section to the answer part: If you are doing Bayesian model comparison, then there is no way to include any uncertainties. The comparison will tell you, which model descri
51,280
Direct way of calculating $\mathbb{E} \left[ \frac{\textbf{h}^{H} \textbf{y}\textbf{y}^{H} \textbf{h}}{ \| \textbf{y} \|^{4} } \right]$
I've found approximations for both cases, i.e., independent and dependent cases. Case (1) where $\textbf{h}$ and $\textbf{y}$ are independent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{l} \textbf{y}_{k} \textbf{y}^{H}_{k} \textbf{h}_{l} }{ \| \textbf{y}_{k} \|^{4} } \right] = \frac{d_{l}[(M+1)(M-2)+4M+6]}{\zeta_{k}M(M+1)^{2}}$$ where $\zeta_{k} = d_{k} + \frac{1}{p}$, $\textbf{h}_{l} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{l}\textbf{I}_{M \times M}\right)$ and $\textbf{h}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{k}\textbf{I}_{M \times M}\right)$. Note that $\textbf{y}_{k} = \textbf{h}_{k} + w$ and that $\textbf{h}_{k}$ and $\textbf{h}_{l}$ are independent. Case (2) where $\textbf{h}$ and $\textbf{y}$ are dependent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{k} \textbf{y}_{k} \textbf{y}^{H}_{k} \textbf{h}_{k} }{ \| \textbf{y}_{k} \|^{4} } \right] = \frac{pd_{k}[pd_{k}M(M+1)^2 + M^2+3M+4]}{(pd_{k}+1)^2 M(M+1)^2}$$ where $\textbf{h}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{k}\textbf{I}_{M \times M}\right)$ and $\textbf{y}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M}, \left(d_{k} + \frac{1}{p}\right)\textbf{I}_{M \times M}\right)$. Note that $\textbf{y}_{k} = \textbf{h}_{k} + w$ and therefore, $\textbf{h}_{k}$ and $\textbf{y}_{k}$ are not independent. I have used the following approximation in both cases: $$\mathbb{E} \left[ \frac{\textbf{x}}{\textbf{z}} \right] \approx \frac{\mathbb{E}[\textbf{x}]}{\mathbb{E}[\textbf{z}]} - \frac{\text{cov}(\textbf{x},\textbf{z})}{\mathbb{E}[\textbf{z}]^{2}} + \frac{\mathbb{E}[\textbf{x}]}{\mathbb{E}[\textbf{z}]^{3}}\text{var}(\textbf{z}).$$
Direct way of calculating $\mathbb{E} \left[ \frac{\textbf{h}^{H} \textbf{y}\textbf{y}^{H} \textbf{h
I've found approximations for both cases, i.e., independent and dependent cases. Case (1) where $\textbf{h}$ and $\textbf{y}$ are independent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{l} \textbf{y}_
Direct way of calculating $\mathbb{E} \left[ \frac{\textbf{h}^{H} \textbf{y}\textbf{y}^{H} \textbf{h}}{ \| \textbf{y} \|^{4} } \right]$ I've found approximations for both cases, i.e., independent and dependent cases. Case (1) where $\textbf{h}$ and $\textbf{y}$ are independent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{l} \textbf{y}_{k} \textbf{y}^{H}_{k} \textbf{h}_{l} }{ \| \textbf{y}_{k} \|^{4} } \right] = \frac{d_{l}[(M+1)(M-2)+4M+6]}{\zeta_{k}M(M+1)^{2}}$$ where $\zeta_{k} = d_{k} + \frac{1}{p}$, $\textbf{h}_{l} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{l}\textbf{I}_{M \times M}\right)$ and $\textbf{h}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{k}\textbf{I}_{M \times M}\right)$. Note that $\textbf{y}_{k} = \textbf{h}_{k} + w$ and that $\textbf{h}_{k}$ and $\textbf{h}_{l}$ are independent. Case (2) where $\textbf{h}$ and $\textbf{y}$ are dependent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{k} \textbf{y}_{k} \textbf{y}^{H}_{k} \textbf{h}_{k} }{ \| \textbf{y}_{k} \|^{4} } \right] = \frac{pd_{k}[pd_{k}M(M+1)^2 + M^2+3M+4]}{(pd_{k}+1)^2 M(M+1)^2}$$ where $\textbf{h}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M},d_{k}\textbf{I}_{M \times M}\right)$ and $\textbf{y}_{k} \sim \mathcal{CN}\left(\textbf{0}_{M}, \left(d_{k} + \frac{1}{p}\right)\textbf{I}_{M \times M}\right)$. Note that $\textbf{y}_{k} = \textbf{h}_{k} + w$ and therefore, $\textbf{h}_{k}$ and $\textbf{y}_{k}$ are not independent. I have used the following approximation in both cases: $$\mathbb{E} \left[ \frac{\textbf{x}}{\textbf{z}} \right] \approx \frac{\mathbb{E}[\textbf{x}]}{\mathbb{E}[\textbf{z}]} - \frac{\text{cov}(\textbf{x},\textbf{z})}{\mathbb{E}[\textbf{z}]^{2}} + \frac{\mathbb{E}[\textbf{x}]}{\mathbb{E}[\textbf{z}]^{3}}\text{var}(\textbf{z}).$$
Direct way of calculating $\mathbb{E} \left[ \frac{\textbf{h}^{H} \textbf{y}\textbf{y}^{H} \textbf{h I've found approximations for both cases, i.e., independent and dependent cases. Case (1) where $\textbf{h}$ and $\textbf{y}$ are independent. $$\mathbb{E} \left[ \frac{\textbf{h}^{H}_{l} \textbf{y}_
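A small Monte Carlo sanity check of that second-order Taylor (delta-method) approximation, in a simplified real-valued scalar setting rather than the complex-vector one (all distributions and parameters below are made up for illustration):
set.seed(1)
n <- 1e6
z <- rnorm(n, mean = 5, sd = 0.5)               # denominator kept well away from zero
x <- 1 + 0.3 * (z - 5) + rnorm(n, sd = 0.5)     # numerator correlated with z
mc     <- mean(x / z)                           # Monte Carlo estimate of E[x/z]
taylor <- mean(x) / mean(z) -
          cov(x, z) / mean(z)^2 +
          mean(x) * var(z) / mean(z)^3          # the approximation used above
c(monte_carlo = mc, taylor = taylor)            # the two should agree closely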
51,281
Combining cross-sectional data with panel data
Visually check the correlation between irrigation and the other variables for that year. If they are correlated, which I imagine they are (different locations probably have different irrigation patterns based on demographics, etc. ... maybe not though!), then no. If there seems to be little correlation and irrigation just seems to increase yield by a fixed value... sure, you could make that assumption so long as it’s clearly stipulated and interpreted that way. This is where subject matter expertise really comes into play.
Combining cross-sectional data with panel data
Visually check the correlation between irrigation and the other variables for that year. If they are correlated, which I imagine they are (different locations probably have different irrigation patter
Combining cross-sectional data with panel data Visually check the correlation between irrigation and the other variables for that year. If they are correlated, which I imagine they are (different locations probably have different irrigation patterns based on demographics, etc. ... maybe not though!), then no. If there seems to be little correlation and irrigation just seems to increase yield by a fixed value... sure, you could make that assumption so long as it’s clearly stipulated and interpreted that way. This is where subject matter expertise really comes into play.
Combining cross-sectional data with panel data Visually check the correlation between irrigation and the other variables for that year. If they are correlated, which I imagine they are (different locations probably have different irrigation patter
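A minimal R sketch of that visual and numerical check, assuming a hypothetical data frame crops with one row per location and year and made-up column names:
vars <- c("irrigation", "rainfall", "fertilizer", "yield")
slice <- crops[crops$year == 2005, vars]          # the single year that has irrigation data
round(cor(slice, use = "complete.obs"), 2)        # how strongly irrigation moves with the rest
pairs(slice)                                      # quick visual check of the relationships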
51,282
Multiple imputation: What has to be reported in a paper
In general, it is appropriate to report the results of the planned primary analysis, possibly also all or some of the foreseen sensitivity/supportive analyses (depending on space considerations) and potentially additional analyses requested e.g. by peer reviewers (e.g. in case of a pre-specified complete case analysis I would as a reviewer request some more appropriate analysis to be also reported). The results of the MI analysis (estimates, CIs etc. from aggregating the analyses of each imputation) are indeed the logical thing to report in case this is the pre-specified analysis. Another question is what else to report; I would certainly expect that somewhere in the methods the multiple imputation approach (what variables were entered, was it some kind of imputation model longitudinally for each time point, or jointly across all times using some joint normality, how many imputations etc.) is described. Multiple imputation certainly comes in many flavors and variants and it is important for the reader to be able to find out what was done. For contingency tables or baseline characteristics, to me the main question is whether you are primarily trying to describe the data descriptively or whether you see it as something that people would compare or make some kind of mental inference on. Both have some value; for the first, it may be most transparent to report the number of missing or non-missing values in addition to summary statistics of the complete cases (that is certainly very common, especially for baseline characteristics), but as soon as it has more of a "let's compare these between groups" feeling, imputed results may be more appropriate. In either case, one should be transparent about what is being reported. In the contingency table example you mention, the average percentages across all the imputations could be one thing to report. By the way, 10 imputations is a really low number. It may be enough to ensure type I error control, but by using a much larger number, you avoid having the results depend too much on the pseudorandom number seed you specify and you usually gain a bit of power. I tend to go for something like 250 to 1000 by default, if it is not computationally too expensive and there is up to a low double-digit percentage of missing data across time points.
Multiple imputation: What has to be reported in a paper
In general, it is appropriate to report the results of the planned primary analysis, possibly also all or some of the foreseen sensitivity/supportive analyses (depending on space considerations) and p
Multiple imputation: What has to be reported in a paper In general, it is appropriate to report the results of the planned primary analysis, possibly also all or some of the foreseen sensitivity/supportive analyses (depending on space considerations) and potentially additional analyses requested e.g. by peer reviewers (e.g. in case of a pre-specified complete case analysis I would as a reviewer request some more appropriate analysis to be also reported). The results of the MI analysis (estimates, CIs etc. from aggregating the analyses of each imputation) are indeed the logical thing to report in case this is the pre-specified analysis. Another question is what else to report; I would certainly expect that somewhere in the methods the multiple imputation approach (what variables were entered, was it some kind of imputation model longitudinally for each time point, or jointly across all times using some joint normality, how many imputations etc.) is described. Multiple imputation certainly comes in many flavors and variants and it is important for the reader to be able to find out what was done. For contingency tables or baseline characteristics, to me the main question is whether you are primarily trying to describe the data descriptively or whether you see it as something that people would compare or make some kind of mental inference on. Both have some value; for the first, it may be most transparent to report the number of missing or non-missing values in addition to summary statistics of the complete cases (that is certainly very common, especially for baseline characteristics), but as soon as it has more of a "let's compare these between groups" feeling, imputed results may be more appropriate. In either case, one should be transparent about what is being reported. In the contingency table example you mention, the average percentages across all the imputations could be one thing to report. By the way, 10 imputations is a really low number. It may be enough to ensure type I error control, but by using a much larger number, you avoid having the results depend too much on the pseudorandom number seed you specify and you usually gain a bit of power. I tend to go for something like 250 to 1000 by default, if it is not computationally too expensive and there is up to a low double-digit percentage of missing data across time points.
Multiple imputation: What has to be reported in a paper In general, it is appropriate to report the results of the planned primary analysis, possibly also all or some of the foreseen sensitivity/supportive analyses (depending on space considerations) and p
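A minimal R sketch of running many imputations and pooling with Rubin's rules via the mice package; the data frame df and the analysis formula are hypothetical placeholders, not the study's actual model:
library(mice)
imp  <- mice(df, m = 250, seed = 1, printFlag = FALSE)   # many imputations, as suggested above
fits <- with(imp, lm(outcome ~ arm + baseline))          # analysis model fitted within each imputed data set
summary(pool(fits))                                      # pooled estimates, SEs and CIs to report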
51,283
Linear Constraint in SVM optimization
Note that generally, the loss$(w,x_i,y_i)$ term you have written would actually be the $c_i$ value from the constraint, in both versions. The dual-formulation constraint $\alpha^T y = 0$ arises from the bias term $b$, which offsets the solution plane from the origin. The most common alternative to using the bias term is extending all the inputs $x_i$ by adding on a dimension with constant value $1$. A less commonly used approach is to use a slightly modified kernel function formulation to replicate $b$ (i.e., so that calculating $<w,x>_{k'}$ with the modified kernel $k'$ is similar to evaluating $<w,x>_k + b$ with the original kernel). From a practical perspective, there is little difference in the classifier performance between biased and unbiased SVM. Specific algorithms differ of course, and the time it takes to learn a classifier can be different. From a theoretical perspective, it is often easier to determine bounds for the unbiased version.
Linear Constraint in SVM optimization
Note that generally, the loss$(w,x_i,y_i)$ term you have written would actually be the $c_i$ value from the constraint, in both versions. The dual-formulation constraint $\alpha^T y = 0$ arises from t
Linear Constraint in SVM optimization Note that generally, the loss$(w,x_i,y_i)$ term you have written would actually be the $c_i$ value from the constraint, in both versions. The dual-formulation constraint $\alpha^T y = 0$ arises from the bias term $b$, which offsets the solution plane from the origin. The most common alternative to using the bias term is extending all the inputs $x_i$ by adding on a dimension with constant value $1$. A less commonly used approach is to use a slightly modified kernel function formulation to replicate $b$ (i.e., so that calculating $<w,x>_{k'}$ with the modified kernel $k'$ is similar to evaluating $<w,x>_k + b$ with the original kernel). From a practical perspective, there is little difference in the classifier performance between biased and unbiased SVM. Specific algorithms differ of course, and the time it takes to learn a classifier can be different. From a theoretical perspective, it is often easier to determine bounds for the unbiased version.
Linear Constraint in SVM optimization Note that generally, the loss$(w,x_i,y_i)$ term you have written would actually be the $c_i$ value from the constraint, in both versions. The dual-formulation constraint $\alpha^T y = 0$ arises from t
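A tiny R sketch of the bias-absorption trick mentioned above (appending a constant-1 feature so the offset b is folded into the weight vector); the design matrix is a made-up placeholder:
set.seed(1)
X <- matrix(rnorm(20 * 3), 20, 3)     # hypothetical n x d design matrix
X_aug <- cbind(X, bias = 1)           # x' = (x, 1), so <w', x'> plays the role of <w, x> + b
dim(X_aug)                            # the unbiased formulation then drops the alpha^T y = 0 constraint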
51,284
What type of model can be used to detect changes in periodic behavior?
If you know the original periodicity of the pulses, a simple approach would be to use any seasonal time series forecasting algorithm with this seasonal frequency. Fit the model to your data, holding out the last (say) 10 observations. Forecast, and calculate prediction intervals to a specified level for the holdout data. Compare the holdout to the prediction intervals. If "enough" observations fall outside the PIs, raise a flag on the series. You would need to calibrate this a little, especially with respect to the PI level (80%, 95%, 99%, whatever) and the number of holdout data points. If the original frequency varies, you may want to cut off the older observations. Here is a simulated example with 80% and 95% prediction intervals in dark and light gray, calculated with a simple canned seasonal decomposition method. Any observation falling outside these bands would be an indication that your process has changed. A single such observation may be enough to investigate, or you may want to wait for a number of them in a row. R code:
set.seed(1)
series <- ts(round(rnorm(1000)), frequency=100)
index <- abs((seq_along(series)%%100)-20) <= 4
series[index] <- round(rnorm(sum(index),10,2))
library(forecast)
model <- stlf(series)
plot(forecast(model,h=200), las=1)
What type of model can be used to detect changes in periodic behavior?
If you know the original periodicity of the pulses, a simple approach would be to use any seasonal time series forecasting algorithm with this seasonal frequency. Fit the model to your data, holding o
What type of model can be used to detect changes in periodic behavior? If you know the original periodicity of the pulses, a simple approach would be to use any seasonal time series forecasting algorithm with this seasonal frequency. Fit the model to your data, holding out the last (say) 10 observations. Forecast, and calculate prediction intervals to a specified level for the holdout data. Compare the holdout to the prediction intervals. If "enough" observations fall outside the PIs, raise a flag on the series. You would need to calibrate this a little, especially with respect to the PI level (80%, 95%, 99%, whatever) and the number of holdout data points. If the original frequency varies, you may want to cut off the older observations. Here is a simulated example with 80% and 95% prediction intervals in dark and light gray, calculated with a simple canned seasonal decomposition method. Any observation falling outside these bands would be an indication that your process has changed. A single such observation may be enough to investigate, or you may want to wait for a number of them in a row. R code:
set.seed(1)
series <- ts(round(rnorm(1000)), frequency=100)
index <- abs((seq_along(series)%%100)-20) <= 4
series[index] <- round(rnorm(sum(index),10,2))
library(forecast)
model <- stlf(series)
plot(forecast(model,h=200), las=1)
What type of model can be used to detect changes in periodic behavior? If you know the original periodicity of the pulses, a simple approach would be to use any seasonal time series forecasting algorithm with this seasonal frequency. Fit the model to your data, holding o
51,285
Why does collecting data until finding a significant result increase Type I error rate?
The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog: I'll flip you to see who pays for dinner. OK, I call heads. Rats, you won. Best two out of three? To understand this better, consider a simplified--but realistic--model of this sequential procedure. Suppose you will start with a "trial run" of a certain number of observations, but are willing to continue experimenting longer in order to get a p-value less than $0.05$. The null hypothesis is that each observation $X_i$ comes (independently) from a standard Normal distribution. The alternative is that the $X_i$ come independently from a unit-variance normal distribution with a nonzero mean. The test statistic will be the mean of all $n$ observations, $\bar X$, divided by their standard error, $1/\sqrt{n}$. For a two-sided test, the critical values are the $0.025$ and $0.975$ percentage points of the standard Normal distribution, $ Z_\alpha=\pm 1.96$ approximately. This is a good test--for a single experiment with a fixed sample size $n$. It has exactly a $5\%$ chance of rejecting the null hypothesis, no matter what $n$ might be. Let's algebraically convert this to an equivalent test based on the sum of all $n$ values, $$S_n=X_1+X_2+\cdots+X_n = n\bar X.$$ Thus, the data are "significant" when $$\left| Z_\alpha\right| \le \left| \frac{\bar X}{1/\sqrt{n}} \right| = \left| \frac{S_n}{n/\sqrt{n}} \right| = \left| S_n \right| / \sqrt{n};$$ that is, $$\left| Z_\alpha\right| \sqrt{n} \le \left| S_n \right| .\tag{1}$$ If we're smart, we'll cut our losses and give up once $n$ grows very large and the data still haven't entered the critical region. This describes a random walk $S_n$. The formula $(1)$ amounts to erecting a curved parabolic "fence," or barrier, around the plot of the random walk $(n, S_n)$: the result is "significant" if any point of the random walk hits the fence. It is a property of random walks that if we wait long enough, it's very likely that at some point the result will look significant. Here are 20 independent simulations out to a limit of $n=5000$ samples. They all begin testing at $n=30$ samples, at which point we check whether each point lies outside the barriers that have been drawn according to formula $(1)$. From the point at which the statistical test is first "significant," the simulated data are colored red. You can see what's going on: the random walk whips up and down more and more as $n$ increases. The barriers are spreading apart at about the same rate--but not fast enough always to avoid the random walk. In 20% of these simulations, a "significant" difference was found--usually quite early on--even though in every one of them the null hypothesis is absolutely correct! Running more simulations of this type indicates that the true test size is close to $25\%$ rather than the intended value of $\alpha=5\%$: that is, your willingness to keep looking for "significance" up to a sample size of $5000$ gives you a $25\%$ chance of rejecting the null even when the null is true. Notice that in all four "significant" cases, as testing continued, the data stopped looking significant at some points. In real life, an experimenter who stops early is losing the chance to observe such "reversions." This selectiveness through optional stopping biases the results. In honest-to-goodness sequential tests, the barriers are lines. They spread faster than the curved barriers shown here.
library(data.table)
library(ggplot2)
alpha <- 0.05    # Test size
n.sim <- 20      # Number of simulated experiments
n.buffer <- 5e3  # Maximum experiment length
i.min <- 30      # Initial number of observations
#
# Generate data.
#
set.seed(17)
X <- data.table(
  n = rep(0:n.buffer, n.sim),
  Iteration = rep(1:n.sim, each=n.buffer+1),
  X = rnorm((1+n.buffer)*n.sim)
)
#
# Perform the testing.
#
Z.alpha <- -qnorm(alpha/2)
X[, Z := Z.alpha * sqrt(n)]
X[, S := c(0, cumsum(X))[-(n.buffer+1)], by=Iteration]
X[, Trigger := abs(S) >= Z & n >= i.min]
X[, Significant := cumsum(Trigger) > 0, by=Iteration]
#
# Plot the results.
#
ggplot(X, aes(n, S, group=Iteration)) +
  geom_path(aes(n,Z)) + geom_path(aes(n,-Z)) +
  geom_point(aes(color=!Significant), size=1/2) +
  facet_wrap(~ Iteration)
Why does collecting data until finding a significant result increase Type I error rate?
The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog: I'll flip you to see who pays for dinner. OK, I call heads. Rats, you won. Be
Why does collecting data until finding a significant result increase Type I error rate? The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog: I'll flip you to see who pays for dinner. OK, I call heads. Rats, you won. Best two out of three? To understand this better, consider a simplified--but realistic--model of this sequential procedure. Suppose you will start with a "trial run" of a certain number of observations, but are willing to continue experimenting longer in order to get a p-value less than $0.05$. The null hypothesis is that each observation $X_i$ comes (independently) from a standard Normal distribution. The alternative is that the $X_i$ come independently from a unit-variance normal distribution with a nonzero mean. The test statistic will be the mean of all $n$ observations, $\bar X$, divided by their standard error, $1/\sqrt{n}$. For a two-sided test, the critical values are the $0.025$ and $0.975$ percentage points of the standard Normal distribution, $ Z_\alpha=\pm 1.96$ approximately. This is a good test--for a single experiment with a fixed sample size $n$. It has exactly a $5\%$ chance of rejecting the null hypothesis, no matter what $n$ might be. Let's algebraically convert this to an equivalent test based on the sum of all $n$ values, $$S_n=X_1+X_2+\cdots+X_n = n\bar X.$$ Thus, the data are "significant" when $$\left| Z_\alpha\right| \le \left| \frac{\bar X}{1/\sqrt{n}} \right| = \left| \frac{S_n}{n/\sqrt{n}} \right| = \left| S_n \right| / \sqrt{n};$$ that is, $$\left| Z_\alpha\right| \sqrt{n} \le \left| S_n \right| .\tag{1}$$ If we're smart, we'll cut our losses and give up once $n$ grows very large and the data still haven't entered the critical region. This describes a random walk $S_n$. The formula $(1)$ amounts to erecting a curved parabolic "fence," or barrier, around the plot of the random walk $(n, S_n)$: the result is "significant" if any point of the random walk hits the fence. It is a property of random walks that if we wait long enough, it's very likely that at some point the result will look significant. Here are 20 independent simulations out to a limit of $n=5000$ samples. They all begin testing at $n=30$ samples, at which point we check whether the each point lies outside the barriers that have been drawn according to formula $(1)$. From the point at which the statistical test is first "significant," the simulated data are colored red. You can see what's going on: the random walk whips up and down more and more as $n$ increases. The barriers are spreading apart at about the same rate--but not fast enough always to avoid the random walk. In 20% of these simulations, a "significant" difference was found--usually quite early on--even though in every one of them the null hypothesis is absolutely correct! Running more simulations of this type indicates that the true test size is close to $25\%$ rather than the intended value of $\alpha=5\%$: that is, your willingness to keep looking for "significance" up to a sample size of $5000$ gives you a $25\%$ chance of rejecting the null even when the null is true. Notice that in all four "significant" cases, as testing continued, the data stopped looking significant at some points. In real life, an experimenter who stops early is losing the chance to observe such "reversions." This selectiveness through optional stopping biases the results. In honest-to-goodness sequential tests, the barriers are lines. 
They spread faster than the curved barriers shown here. library(data.table) library(ggplot2) alpha <- 0.05 # Test size n.sim <- 20 # Number of simulated experiments n.buffer <- 5e3 # Maximum experiment length i.min <- 30 # Initial number of observations # # Generate data. # set.seed(17) X <- data.table( n = rep(0:n.buffer, n.sim), Iteration = rep(1:n.sim, each=n.buffer+1), X = rnorm((1+n.buffer)*n.sim) ) # # Perform the testing. # Z.alpha <- -qnorm(alpha/2) X[, Z := Z.alpha * sqrt(n)] X[, S := c(0, cumsum(X))[-(n.buffer+1)], by=Iteration] X[, Trigger := abs(S) >= Z & n >= i.min] X[, Significant := cumsum(Trigger) > 0, by=Iteration] # # Plot the results. # ggplot(X, aes(n, S, group=Iteration)) + geom_path(aes(n,Z)) + geom_path(aes(n,-Z)) + geom_point(aes(color=!Significant), size=1/2) + facet_wrap(~ Iteration)
Why does collecting data until finding a significant result increase Type I error rate? The problem is that you're giving yourself too many chances to pass the test. It's just a fancy version of this dialog: I'll flip you to see who pays for dinner. OK, I call heads. Rats, you won. Be
51,286
Why does collecting data until finding a significant result increase Type I error rate?
People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothesis, a p value is uniformly distributed between 0 and 1 and can bounce around quite a bit in that range. I've simulated some data in R (my R skills are quite basic). In this simulation, I collect 5 data points - each with a randomly selected group membership (0 or 1) and each with a randomly selected outcome measure ~N(0,1). Starting on participant 6, I conduct a t-test at every iteration.
for (i in 6:150) {
  df[i,1] = round(runif(1))
  df[i,2] = rnorm(1)
  p = t.test(df[ , 2] ~ df[ , 1], data = df)$p.value
  df[i,3] = p
}
The p values are in this figure. Notice that I find significant results when the sample size is around 70-75. If I stop there, I'll end up believing that my findings are significant because I'll have missed the fact that my p values jumped back up with a larger sample (this actually happened to me once with real data). Since I know both populations have a mean of 0, this must be a false positive. This is the problem with adding data until p < .05. If you conduct enough tests, p will eventually cross the .05 threshold and you can find a significant effect in any data set.
Why does collecting data until finding a significant result increase Type I error rate?
People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothes
Why does collecting data until finding a significant result increase Type I error rate? People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothesis, a p value is uniformly distributed between 0 and 1 and can bounce around quite a bit in that range. I've simulated some data in R (my R skills are quite basic). In this simulation, I collect 5 data points - each with a randomly selected group membership (0 or 1) and each with a randomly selected outcome measure ~N(0,1). Starting on participant 6, I conduct a t-test at every iteration.
for (i in 6:150) {
  df[i,1] = round(runif(1))
  df[i,2] = rnorm(1)
  p = t.test(df[ , 2] ~ df[ , 1], data = df)$p.value
  df[i,3] = p
}
The p values are in this figure. Notice that I find significant results when the sample size is around 70-75. If I stop there, I'll end up believing that my findings are significant because I'll have missed the fact that my p values jumped back up with a larger sample (this actually happened to me once with real data). Since I know both populations have a mean of 0, this must be a false positive. This is the problem with adding data until p < .05. If you conduct enough tests, p will eventually cross the .05 threshold and you can find a significant effect in any data set.
Why does collecting data until finding a significant result increase Type I error rate? People who are new to hypothesis testing tend to think that once a p value goes below .05, adding more participants will only decrease the p value further. But this isn't true. Under the null hypothes
51,287
Why does collecting data until finding a significant result increase Type I error rate?
This answer only concerns the probability of ultimately getting a "significant" result and the distribution of the time to this event under @whuber's model. As in the model of @whuber, let $S(t)=X_1 + X_2 + \dots + X_t$ denote the value of the test statistic after $t$ observations have been collected and assume that the observations $X_1,X_2,\dots$ are iid standard normal. Then $$ S(t+h)|S(t)=s_0 \sim N(s_0, h), \tag{1} $$ such that $S(t)$ behaves like a continuous-time standard Brownian motion, if we for the moment ignore the fact that we have a discrete-time process (left plot below). Let $T$ denote the first passage time of $S(t)$ across the time-dependent barriers $\pm z_{\alpha/2}\sqrt{t}$ (the number of observations needed before the test turns significant). Consider the transformed process $Y(\tau)$ obtained by scaling $S(t)$ by its standard deviation at time $t$ and by letting the new time scale $\tau=\ln t$ such that $$ Y(\tau)=\frac{S(t(\tau))}{\sqrt{t(\tau)}}=e^{-\tau/2}S(e^\tau). \tag{2} $$ It follows from (1) and (2) that $Y(\tau+\delta)$ is normally distributed with \begin{align} E(Y(\tau+\delta)|Y(\tau)=y_0) &=E(e^{-(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2}) \\&=y_0e^{-\delta/2} \tag{3} \end{align} and \begin{align} \operatorname{Var}(Y(\tau+\delta)|Y(\tau)=y_0) &=\operatorname{Var}(e^{-(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2}) \\&=1-e^{-\delta}, \tag{4} \end{align} that is, $Y(\tau)$ is a zero-mean Ornstein-Uhlenbeck (O-U) process with a stationary variance of 1 and return time 2 (right plot below). An almost identical transformation is given in Karlin & Taylor (1981), eq. 5.23. For the transformed model, the barriers become time-independent constants equal to $\pm z_{\alpha/2}$. It is then known (Nobile et al. 1985; Ricciardi & Sato, 1988) that the first passage-time $\mathcal{T}$ of the O-U process $Y(\tau)$ across these barriers is approximately exponentially distributed with some parameter $\lambda$ (depending on the barriers at $\pm z_{\alpha/2}$) (estimated as $\hat\lambda=0.125$ for $\alpha=0.05$ below). There is also an extra point mass of size $\alpha$ at $\tau=0$. "Rejection" of $H_0$ eventually happens with probability 1. Hence, $T=e^\mathcal{T}$ (the number of observations that needs to be collected before getting a "significant" result) is approximately Pareto distributed with density $f_T(t)=f_\mathcal{T}(\ln t)\frac{d\tau}{dt}=\lambda/t^{\lambda+1}$. The expected value is $$ ET\approx 1+(1-\alpha)\int_0^\infty e^\tau \lambda e^{-\lambda \tau}d\tau.\tag{5} $$ Thus, $T$ has a finite expectation only if $\lambda>1$ (for sufficiently large levels of significance $\alpha$). The above ignores the fact that $T$ for the real model is discrete and that the real process is discrete- rather than continuous-time. Hence, the above model overestimates the probability that the barrier has been crossed (and underestimates $ET$) because the continuous-time sample path may cross the barrier only temporarily in-between two adjacent discrete time points $t$ and $t+1$. But such events should have negligible probability for large $t$. The following figure shows a Kaplan-Meier estimate of $P(T>t)$ on log-log scale together with the survival curve for the exponential continuous-time approximation (red line).
R code:
# Fig 1
par(mfrow=c(1,2),mar=c(4,4,.5,.5))
set.seed(16)
n <- 20
npoints <- n*100 + 1
t <- seq(1,n,len=npoints)
subset <- 1:n*100-99
deltat <- c(1,diff(t))
z <- qnorm(.975)
s <- cumsum(rnorm(npoints,sd=sqrt(deltat)))
plot(t,s,type="l",ylim=c(-1,1)*z*sqrt(n),ylab="S(t)",col="grey")
points(t[subset],s[subset],pch="+")
curve(sqrt(t)*z,xname="t",add=TRUE)
curve(-sqrt(t)*z,xname="t",add=TRUE)
tau <- log(t)
y <- s/sqrt(t)
plot(tau,y,type="l",ylim=c(-2.5,2.5),col="grey",xlab=expression(tau),ylab=expression(Y(tau)))
points(tau[subset],y[subset],pch="+")
abline(h=c(-z,z))
# Fig 2
nmax <- 1e+3
nsim <- 1e+5
alpha <- .05
t <- numeric(nsim)
n <- 1:nmax
for (i in 1:nsim) {
  s <- cumsum(rnorm(nmax))
  t[i] <- which(abs(s) > qnorm(1-alpha/2)*sqrt(n))[1]
}
delta <- ifelse(is.na(t),0,1)
t[delta==0] <- nmax + 1
library(survival)
par(mfrow=c(1,1),mar=c(4,4,.5,.5))
plot(survfit(Surv(t,delta)~1),log="xy",xlab="t",ylab="P(T>t)",conf.int=FALSE)
curve((1-alpha)*exp(-.125*(log(x))),add=TRUE,col="red",from=1,to=nmax)
Why does collecting data until finding a significant result increase Type I error rate?
This answer only concerns the probability of ultimately getting a "significant" result and the distribution of the time to this event under @whuber's model. As in the model of @whuber, let $S(t)=X_1 +
Why does collecting data until finding a significant result increase Type I error rate? This answer only concerns the probability of ultimately getting a "significant" result and the distribution of the time to this event under @whuber's model. As in the model of @whuber, let $S(t)=X_1 + X_2 + \dots + X_t$ denote the value of the test statistic after $t$ observations have been collected and assume that the observations $X_1,X_2,\dots$ are iid standard normal. Then $$ S(t+h)|S(t)=s_0 \sim N(s_0, h), \tag{1} $$ such that $S(t)$ behaves like a continuous-time standard Brownian motion, if we for the moment ignore the fact that we have a discrete-time process (left plot below). Let $T$ denote the first passage time of $S(t)$ across the the time-dependent barriers $\pm z_{\alpha/2}\sqrt{t}$ (the number of observations needed before the test turns significant). Consider the transformed process $Y(\tau)$ obtained by scaling $S(t)$ by its standard deviation at time $t$ and by letting the new time scale $\tau=\ln t$ such that $$ Y(\tau)=\frac{S(t(\tau))}{\sqrt{t(\tau)}}=e^{-\tau/2}S(e^\tau). \tag{2} $$ It follows from (1) and (2) that $Y(\tau+\delta)$ is normally distributed with \begin{align} E(Y(\tau+\delta)|Y(\tau)=y_0) &=E(e^{-(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2}) \\&=y_0e^{-\delta/2} \tag{3} \end{align} and \begin{align} \operatorname{Var}(Y(\tau+\delta)|Y(\tau)=y_0) &=\operatorname{Var}(e^{(\tau+\delta)/2}S(e^{\tau+\delta})|S(e^\tau)=y_0e^{\tau/2}) \\&=1-e^{-\delta}, \tag{4} \end{align} that is, $Y(\tau)$ is a zero-mean Ornstein-Uhlenbeck (O-U) process with a stationary variance of 1 and return time 2 (right plot below). An almost identical transformation is given in Karlin & Taylor (1981), eq. 5.23. For the transformed model, the barriers become time-independent constants equal to $\pm z_{\alpha/2}$. It is then known (Nobile et. al. 1985; Ricciardi & Sato, 1988) that the first passage-time $\mathcal{T}$ of the O-U process $Y(\tau)$ across these barriers is approximately exponentially distributed with some parameter $\lambda$ (depending on the barriers at $\pm z_{\alpha/2}$) (estimated to $\hat\lambda=0.125$ for $\alpha=0.05$ below). There is also an extra point mass in of size $\alpha$ in $\tau=0$. "Rejection" of $H_0$ eventually happens with probability 1. Hence, $T=e^\mathcal{T}$ (the number of observations that needs to be collected before getting a "significant" result) is approximately Pareto distributed with density $f_T(t)=f_\mathcal{T}(\ln t)\frac{d\tau}{dt}=\lambda/t^{\lambda+1}$. The expected value is $$ ET\approx 1+(1-\alpha)\int_0^\infty e^\tau \lambda e^{-\lambda \tau}d\tau.\tag{5} $$ Thus, $T$ has a finite expectation only if $\lambda>1$ (for sufficiently large levels of significance $\alpha$). The above ignores the fact that $T$ for the real model is discrete and that the real process is discrete- rather than continuous-time. Hence, the above model overestimates the probability that the barrier has been crossed (and underestimates $ET$) because the continuous-time sample path may cross the barrier only temporarily in-between two adjacent discrete time points $t$ and $t+1$. But such events should have negligible probability for large $t$. The following figure shows a Kaplan-Meier estimate of $P(T>t)$ on log-log scale together with the survival curve for the exponential continuous-time approximation (red line). 
R code:

# Fig 1
par(mfrow=c(1,2),mar=c(4,4,.5,.5))
set.seed(16)
n <- 20
npoints <- n*100 + 1
t <- seq(1,n,len=npoints)
subset <- 1:n*100-99
deltat <- c(1,diff(t))
z <- qnorm(.975)
s <- cumsum(rnorm(npoints,sd=sqrt(deltat)))
plot(t,s,type="l",ylim=c(-1,1)*z*sqrt(n),ylab="S(t)",col="grey")
points(t[subset],s[subset],pch="+")
curve(sqrt(t)*z,xname="t",add=TRUE)
curve(-sqrt(t)*z,xname="t",add=TRUE)
tau <- log(t)
y <- s/sqrt(t)
plot(tau,y,type="l",ylim=c(-2.5,2.5),col="grey",xlab=expression(tau),ylab=expression(Y(tau)))
points(tau[subset],y[subset],pch="+")
abline(h=c(-z,z))

# Fig 2
nmax <- 1e+3
nsim <- 1e+5
alpha <- .05
t <- numeric(nsim)
n <- 1:nmax
for (i in 1:nsim) {
  s <- cumsum(rnorm(nmax))
  t[i] <- which(abs(s) > qnorm(1-alpha/2)*sqrt(n))[1]
}
delta <- ifelse(is.na(t),0,1)
t[delta==0] <- nmax + 1
library(survival)
par(mfrow=c(1,1),mar=c(4,4,.5,.5))
plot(survfit(Surv(t,delta)~1),log="xy",xlab="t",ylab="P(T>t)",conf.int=FALSE)
curve((1-alpha)*exp(-.125*(log(x))),add=TRUE,col="red",from=1,to=nmax)
Why does collecting data until finding a significant result increase Type I error rate? This answer only concerns the probability of ultimately getting a "significant" result and the distribution of the time to this event under @whuber's model. As in the model of @whuber, let $S(t)=X_1 +
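A small numerical companion to the answer above: under the continuous-time approximation, $P(T>t)\approx(1-\alpha)\,t^{-\lambda}$ for $t\ge 1$, so approximate survival probabilities and quantiles of the stopping time can be read off directly. The sketch below is mine, not part of the original answer, and uses the quoted estimate $\hat\lambda=0.125$ for $\alpha=0.05$.

# Approximate distribution of T (observations until "significance") implied by the
# Pareto approximation above; lambda = 0.125 is the estimated value quoted for
# alpha = 0.05, not an exact constant
alpha  <- 0.05
lambda <- 0.125

surv_T  <- function(t) (1 - alpha) * t^(-lambda)
quant_T <- function(q) ((1 - alpha) / q)^(1 / lambda)  # the t with P(T > t) = q

surv_T(c(10, 100, 1000))    # roughly 0.71, 0.53, 0.40
quant_T(c(0.5, 0.25, 0.1))  # median ~170, upper quartile ~4.3e4, 90th percentile ~6.6e7

In words: roughly half of such "sample until significant" experiments would need about 170 observations or fewer, while about 40% would still be running after 1000 observations.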
51,288
Why does collecting data until finding a significant result increase Type I error rate?
It needs to be said that the above discussion is for a frequentist world view for which multiplicity comes from the chances you give data to be more extreme, not from the chances you give an effect to exist. The root cause of the problem is that p-values and type I errors condition backwards in time and backwards in information flow, which makes it matter "how you got here" and what else could have happened instead. The Bayesian paradigm, on the other hand, encodes skepticism about an effect on the parameter itself, not on the data. That makes each posterior probability mean the same thing whether or not you computed another posterior probability for the effect five minutes ago. More details and a simple simulation may be found at http://www.fharrell.com/2017/10/continuous-learning-from-data-no.html
Why does collecting data until finding a significant result increase Type I error rate?
It needs to be said that the above discussion is for a frequentist world view for which multiplicity comes from the chances you give data to be more extreme, not from the chances you give an effect to
Why does collecting data until finding a significant result increase Type I error rate? It needs to be said that the above discussion is for a frequentist world view for which multiplicity comes from the chances you give data to be more extreme, not from the chances you give an effect to exist. The root cause of the problem is that p-values and type I errors condition backwards in time and backwards in information flow, which makes it matter "how you got here" and what else could have happened instead. The Bayesian paradigm, on the other hand, encodes skepticism about an effect on the parameter itself, not on the data. That makes each posterior probability mean the same thing whether or not you computed another posterior probability for the effect five minutes ago. More details and a simple simulation may be found at http://www.fharrell.com/2017/10/continuous-learning-from-data-no.html
Why does collecting data until finding a significant result increase Type I error rate? It needs to be said that the above discussion is for a frequentist world view for which multiplicity comes from the chances you give data to be more extreme, not from the chances you give an effect to
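To make the contrast with the frequentist analyses above concrete, here is a minimal R sketch (not the simulation at the linked page) of continuous Bayesian monitoring under a conjugate normal model: the posterior probability that the effect is positive is recomputed after every observation, and each value has the same interpretation no matter how many earlier looks were taken. The prior variance, true effect and sample size are arbitrary illustrative choices.

# Minimal sketch, assuming theta ~ N(0, tau2) a priori and x_i ~ N(theta, 1);
# tau2, theta and n_max are illustrative values only
set.seed(1)
tau2  <- 1
theta <- 0          # the null happens to be true in this run
n_max <- 500
x <- rnorm(n_max, mean = theta, sd = 1)

post_prob <- sapply(seq_len(n_max), function(n) {
  prec <- n + 1 / tau2                          # posterior precision
  m    <- sum(x[1:n]) / prec                    # posterior mean
  1 - pnorm(0, mean = m, sd = sqrt(1 / prec))   # P(theta > 0 | x_1, ..., x_n)
})

plot(post_prob, type = "l", ylim = c(0, 1), xlab = "n", ylab = "P(theta > 0 | data)")

The point is not that the curve never wanders near a threshold under the null, but that each plotted value is a directly interpretable probability about the parameter given the data seen so far, regardless of the monitoring schedule.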
51,289
Why does collecting data until finding a significant result increase Type I error rate?
We consider a researcher collecting a sample of size $n$, $x_1$, to test some hypothesis $\theta=\theta_0$. He rejects if a suitable test statistic $t$ exceeds its level-$\alpha$ critical value $c$. If it does not, he collects another sample of size $n$, $x_2$, and rejects if the test rejects for the combined sample $(x_1,x_2)$. If he still obtains no rejection, he proceeds in this fashion, up to $K$ times in total.

This problem seems to already have been addressed by P. Armitage, C. K. McPherson and B. C. Rowe (1969), Journal of the Royal Statistical Society, Series A, 132(2), 235-244: "Repeated Significance Tests on Accumulating Data". The Bayesian point of view on this issue (also discussed here) is, by the way, treated in Berger and Wolpert (1988), "The Likelihood Principle", Section 4.2.

Here is a partial replication of Armitage et al.'s results (code below), which shows how significance levels inflate when $K>1$, as well as possible correction factors to restore level-$\alpha$ critical values. Note that the grid search takes a while to run; the implementation may be rather inefficient.

Size of the standard rejection rule as a function of the number of attempts $K$

Size as a function of increasing critical values for different $K$

Adjusted critical values to restore 5% tests as a function of $K$

reps <- 50000
K <- c(1:5, seq(10,50,5), seq(60,100,10))  # the number of attempts a researcher gives herself
alpha <- 0.05
cv <- qnorm(1-alpha/2)
grid.scale.cv <- cv*seq(1,1.5,by=.01)  # scaled critical values over which we check rejection rates
max.g <- length(grid.scale.cv)
results <- matrix(NA, nrow = length(K), ncol = max.g)

for (kk in 1:length(K)){
  g <- 1
  dev <- 0
  K.act <- K[kk]
  while (dev > -0.01 & g <= max.g){
    rej <- rep(NA,reps)
    for (i in 1:reps){
      k <- 1
      accept <- 1
      x <- rnorm(K.act)
      while(k <= K.act & accept==1){
        # each of our test statistics for "samples" of size n are N(0,1) under H0,
        # so just scaling their sum by sqrt(k) gives another N(0,1) test statistic
        rej[i] <- abs(1/sqrt(k)*sum(x[1:k])) > grid.scale.cv[g]
        accept <- accept - rej[i]
        k <- k+1
      }
    }
    rej.rate <- mean(rej)
    dev <- rej.rate-alpha
    results[kk,g] <- rej.rate
    g <- g+1
  }
}

plot(K,results[,1], type="l")
matplot(grid.scale.cv,t(results), type="l")
abline(h=0.05)
cv.a <- data.frame(K,adjusted.cv=grid.scale.cv[apply(abs(results-alpha),1,which.min)])
plot(K,cv.a$adjusted.cv, type="l")
Why does collecting data until finding a significant result increase Type I error rate?
We consider a researcher collecting a sample of size $n$, $x_1$, to test some hypothesis $\theta=\theta_0$. He rejects if a suitable test statistic $t$ exceeds its level-$\alpha$ critical value $c$. I
Why does collecting data until finding a significant result increase Type I error rate? We consider a researcher collecting a sample of size $n$, $x_1$, to test some hypothesis $\theta=\theta_0$. He rejects if a suitable test statistic $t$ exceeds its level-$\alpha$ critical value $c$. If it does not, he collects another sample of size $n$, $x_2$, and rejects if the test rejects for the combined sample $(x_1,x_2)$. If he still obtains no rejection, he proceeds in this fashion, up to $K$ times in total.

This problem seems to already have been addressed by P. Armitage, C. K. McPherson and B. C. Rowe (1969), Journal of the Royal Statistical Society, Series A, 132(2), 235-244: "Repeated Significance Tests on Accumulating Data". The Bayesian point of view on this issue (also discussed here) is, by the way, treated in Berger and Wolpert (1988), "The Likelihood Principle", Section 4.2.

Here is a partial replication of Armitage et al.'s results (code below), which shows how significance levels inflate when $K>1$, as well as possible correction factors to restore level-$\alpha$ critical values. Note that the grid search takes a while to run; the implementation may be rather inefficient.

Size of the standard rejection rule as a function of the number of attempts $K$

Size as a function of increasing critical values for different $K$

Adjusted critical values to restore 5% tests as a function of $K$

reps <- 50000
K <- c(1:5, seq(10,50,5), seq(60,100,10))  # the number of attempts a researcher gives herself
alpha <- 0.05
cv <- qnorm(1-alpha/2)
grid.scale.cv <- cv*seq(1,1.5,by=.01)  # scaled critical values over which we check rejection rates
max.g <- length(grid.scale.cv)
results <- matrix(NA, nrow = length(K), ncol = max.g)

for (kk in 1:length(K)){
  g <- 1
  dev <- 0
  K.act <- K[kk]
  while (dev > -0.01 & g <= max.g){
    rej <- rep(NA,reps)
    for (i in 1:reps){
      k <- 1
      accept <- 1
      x <- rnorm(K.act)
      while(k <= K.act & accept==1){
        # each of our test statistics for "samples" of size n are N(0,1) under H0,
        # so just scaling their sum by sqrt(k) gives another N(0,1) test statistic
        rej[i] <- abs(1/sqrt(k)*sum(x[1:k])) > grid.scale.cv[g]
        accept <- accept - rej[i]
        k <- k+1
      }
    }
    rej.rate <- mean(rej)
    dev <- rej.rate-alpha
    results[kk,g] <- rej.rate
    g <- g+1
  }
}

plot(K,results[,1], type="l")
matplot(grid.scale.cv,t(results), type="l")
abline(h=0.05)
cv.a <- data.frame(K,adjusted.cv=grid.scale.cv[apply(abs(results-alpha),1,which.min)])
plot(K,cv.a$adjusted.cv, type="l")
Why does collecting data until finding a significant result increase Type I error rate? We consider a researcher collecting a sample of size $n$, $x_1$, to test some hypothesis $\theta=\theta_0$. He rejects if a suitable test statistic $t$ exceeds its level-$\alpha$ critical value $c$. I
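As a small follow-up to the grid search above, each adjusted critical value can be translated into the nominal two-sided significance level that would have to be used at every look; this snippet assumes the code above has already been run so that cv.a exists.

# Nominal per-look significance level implied by each adjusted critical value
# (requires cv.a from the grid search above)
cv.a$nominal.alpha <- 2 * (1 - pnorm(cv.a$adjusted.cv))
cv.a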
51,290
Trying to emulate linear regression using Keras [closed]
You seem to read the data with Pandas and, most likely, something is wrong with the shape of your input. You should definitely try generating it with numpy:

import numpy as np

X_train = np.linspace(0, 80, 100).reshape(-1, 1)
Y_train = 5 * X_train

For a linear approximation a single neuron is enough:

# define base model
def baseline_model():
    # create model: one Dense unit, input_dim=1 to match the single feature column
    model = Sequential()
    model.add(Dense(1, input_dim=1, init='normal', activation='relu'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

Worked for me:
Trying to emulate linear regression using Keras [closed]
You seem to read data with Pandas and, probably, there is a mess with your input. You should definitely try generating it using numpy: import numpy as np X_train = np.linspace(0, 80, 100).reshape(-1,
Trying to emulate linear regression using Keras [closed] You seem to read the data with Pandas and, most likely, something is wrong with the shape of your input. You should definitely try generating it with numpy:

import numpy as np

X_train = np.linspace(0, 80, 100).reshape(-1, 1)
Y_train = 5 * X_train

For a linear approximation a single neuron is enough:

# define base model
def baseline_model():
    # create model: one Dense unit, input_dim=1 to match the single feature column
    model = Sequential()
    model.add(Dense(1, input_dim=1, init='normal', activation='relu'))
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

Worked for me:
Trying to emulate linear regression using Keras [closed] You seem to read data with Pandas and, probably, there is a mess with your input. You should definitely try generating it using numpy: import numpy as np X_train = np.linspace(0, 80, 100).reshape(-1,
51,291
Probability theory books for self-study
I recommend Head First Statistics. The 'Head First' series is of superior didactic quality and fun to read. It has a lot of exercises, and was one of the few books where I liked doing the exercises. http://www.amazon.com/Head-First-Statistics-Dawn-Griffiths/dp/0596527586
Probability theory books for self-study
I recommend Head First Statistics. The 'Head First' series is of superior didactic quality and fun to read. It has a lot of exercises, and was one of the few books where I liked doing the exercises. ht
Probability theory books for self-study I recommend Head First Statistics. The 'Head First' series is of superior didactic quality and fun to read. It has a lot of exercises, and was one of the few books where I liked doing the exercises. http://www.amazon.com/Head-First-Statistics-Dawn-Griffiths/dp/0596527586
Probability theory books for self-study I recommend Head First Statistics. The 'Head First' series is of superior didactic quality and fun to read. It has a lot of exercises, and was one of the few books where I liked doing the exercises. ht
51,292
Probability theory books for self-study
I was looking for the same a week ago. I found out about this book from another post on stackexchange: Intuitive biostatistics: A Nonmathematical Guide to Statistical Thinking by Harvey Motulsky. I think the second part of the title is pretty lame. Generally I have/had no problems understanding the math, but found none of them explaining the concepts clearly enough for me. I just ordered this book based on the reviews, so I can't give an opinion about it myself yet. There were good reviews on amazon and on stackexchange (although many preferred the 1st edition to the 2nd). If you're looking for something totally different, this might interest you.
Probability theory books for self-study
I was looking for the same a week ago. I found from another post on stackexchange about this book Intuitive biostatistics: A Nonmathematical Guide to Statistical Thinking by Harvey Motulsky. I think t
Probability theory books for self-study I was looking for the same a week ago. I found out about this book from another post on stackexchange: Intuitive biostatistics: A Nonmathematical Guide to Statistical Thinking by Harvey Motulsky. I think the second part of the title is pretty lame. Generally I have/had no problems understanding the math, but found none of them explaining the concepts clearly enough for me. I just ordered this book based on the reviews, so I can't give an opinion about it myself yet. There were good reviews on amazon and on stackexchange (although many preferred the 1st edition to the 2nd). If you're looking for something totally different, this might interest you.
Probability theory books for self-study I was looking for the same a week ago. I found from another post on stackexchange about this book Intuitive biostatistics: A Nonmathematical Guide to Statistical Thinking by Harvey Motulsky. I think t
51,293
Probability theory books for self-study
Schaeffer's book from Duxbury press seems ok. Sheldon Ross' books are always awesome. Note, these are both books on Probability, not stats, which is what you asked.
Probability theory books for self-study
Schaeffer's book from Duxbury press seems ok. Sheldon Ross' books are always awesome. Note, these are both books on Probability, not stats, which is what you asked.
Probability theory books for self-study Schaeffer's book from Duxbury press seems ok. Sheldon Ross' books are always awesome. Note, these are both books on Probability, not stats, which is what you asked.
Probability theory books for self-study Schaeffer's book from Duxbury press seems ok. Sheldon Ross' books are always awesome. Note, these are both books on Probability, not stats, which is what you asked.
51,294
Probability theory books for self-study
I'd strongly recommend Bulmer's Principles of Statistics as a leaping-off point. It's a touch dated, but it's short, clear and available in a cheap Dover edition - around $10 from Amazon. For a more modern and to-the-point statistics book I'd suggest Wasserman's "All of Statistics". I got it a few months back and it's been a good survey of everything - I've not read the first few chapters in detail but it seems ok on a skim. I like some of the practical advice which would be useful in a self-study context - e.g. "Unbiasedness used to receive much attention but these days is considered less important". But this is assuming you want a practical statistics text which covers some probability rather than a probability theory text. For probability theory, I'd suggest reading a lot on measure theory and hitting something on Lebesgue integration first - but this doesn't sound like where you're at.
Probability theory books for self-study
I'd strongly recommend Bulmer's Principles of Statistics as a leaping-off point. It's a touch dated, but it's short, clear and available in a cheap Dover edition - around $10 from Amazon. For a more
Probability theory books for self-study I'd strongly recommend Bulmer's Principles of Statistics as a leaping-off point. It's a touch dated, but it's short, clear and available in a cheap Dover edition - around $10 from Amazon. For a more modern and to-the-point statistics book I'd suggest Wasserman's "All of Statistics". I got it a few months back and it's been a good survey of everything - I've not read the first few chapters in detail but it seems ok on a skim. I like some of the practical advice which would be useful in a self-study context - e.g. "Unbiasedness used to receive much attention but these days is considered less important". But this is assuming you want a practical statistics text which covers some probability rather than a probability theory text. For probability theory, I'd suggest reading a lot on measure theory and hitting something on Lebesgue integration first - but this doesn't sound like where you're at.
Probability theory books for self-study I'd strongly recommend Bulmer's Principles of Statistics as a leaping-off point. It's a touch dated, but it's short, clear and available in a cheap Dover edition - around $10 from Amazon. For a more
51,295
Probability theory books for self-study
https://www.crcpress.com/Introduction-to-Probability/Blitzstein-Hwang/p/book/9781466575578 - Introduction to Probability I had no experience in probability before; this is a good book that explains the basic probability distributions with motivating context. Begins with discrete random variables and moves to continuous, which is good for the beginner. Builds your foundation up so you can tackle more advanced topics in the future.
Probability theory books for self-study
https://www.crcpress.com/Introduction-to-Probability/Blitzstein-Hwang/p/book/9781466575578 - Introduction to Probability I had no experience in probability before; this is a good book that explains th
Probability theory books for self-study https://www.crcpress.com/Introduction-to-Probability/Blitzstein-Hwang/p/book/9781466575578 - Introduction to Probability I had no experience in probability before; this is a good book that explains the basic probability distributions with motivating context. Begins with discrete random variables and moves to continuous, which is good for the beginner. Builds your foundation up so you can tackle more advanced topics in the future.
Probability theory books for self-study https://www.crcpress.com/Introduction-to-Probability/Blitzstein-Hwang/p/book/9781466575578 - Introduction to Probability I had no experience in probability before; this is a good book that explains th
51,296
Probability theory books for self-study
Yes, I agree that the John Rice book is not good for self-study. It jumps around between topics of varying difficulty, although it is obvious the author is a very experienced practitioner. At my level, it creates more confusion than enlightenment. I would highly recommend:

Dimitri Bertsekas
Sheldon Ross
Ron C. Mittelhammer
Richard Larsen

You can contact me if you want copies.
Probability theory books for self-study
Yes, I agree that the John Rice book is not good for self-study. It jumps around topics of varying difficulty, although it is obvious the author is a very experienced practitioner. For my level, it do
Probability theory books for self-study Yes, I agree that the John Rice book is not good for self-study. It jumps around between topics of varying difficulty, although it is obvious the author is a very experienced practitioner. At my level, it creates more confusion than enlightenment. I would highly recommend:

Dimitri Bertsekas
Sheldon Ross
Ron C. Mittelhammer
Richard Larsen

You can contact me if you want copies.
Probability theory books for self-study Yes, I agree that the John Rice book is not good for self-study. It jumps around topics of varying difficulty, although it is obvious the author is a very experienced practitioner. For my level, it do
51,297
Difference between time series prediction vs point process prediction
Let's start with AR. AR is by default a discrete-time model, whereas an HP or a general point process can be defined in continuous time, such as on a timeline. In practice, point process data can be discretized using time bins. The data counts in each interval can be either binary or integers. In AR, $X_t$ is a linear combination of the past values $X_{t-1}, X_{t-2},\dots$ with additive Normal noise. In an HP, the underlying intensity function is a linear combination of past events. To be more specific, $$\lambda_t = \mu+ \alpha_1 X_{t-1} + \alpha_2 X_{t-2} + \dots $$ and usually people write this as $$\lambda_t = \mu+ \sum_{i<t} h(t - i)X_i, $$ where $h(t - i)$ plays the role of the coefficient on $X_i$. The number of events observed in time interval $t$ then follows a Poisson distribution, $$ X_t \mid \lambda_t \sim \text{Poisson}(\lambda_t \Delta), $$ where $\Delta$ is the width of the time interval. In brief, the observation noise in an HP is not Gaussian. Coming back to ARIMA: an HP does not incorporate past error terms (no "MA" part), let alone temporal differencing (no "I" part). You could add "MA"- or "I"-like components to an HP, but I've never seen that done.
Difference between time series prediction vs point process prediction
Let's start with AR. AR by default is a time-discrete model, HP or general point process can be defined on continuous space, such as a timeline. In practice, point process data can be discretized usin
Difference between time series prediction vs point process prediction Let's start with AR. AR is by default a discrete-time model, whereas an HP or a general point process can be defined in continuous time, such as on a timeline. In practice, point process data can be discretized using time bins. The data counts in each interval can be either binary or integers. In AR, $X_t$ is a linear combination of the past values $X_{t-1}, X_{t-2},\dots$ with additive Normal noise. In an HP, the underlying intensity function is a linear combination of past events. To be more specific, $$\lambda_t = \mu+ \alpha_1 X_{t-1} + \alpha_2 X_{t-2} + \dots $$ and usually people write this as $$\lambda_t = \mu+ \sum_{i<t} h(t - i)X_i, $$ where $h(t - i)$ plays the role of the coefficient on $X_i$. The number of events observed in time interval $t$ then follows a Poisson distribution, $$ X_t \mid \lambda_t \sim \text{Poisson}(\lambda_t \Delta), $$ where $\Delta$ is the width of the time interval. In brief, the observation noise in an HP is not Gaussian. Coming back to ARIMA: an HP does not incorporate past error terms (no "MA" part), let alone temporal differencing (no "I" part). You could add "MA"- or "I"-like components to an HP, but I've never seen that done.
Difference between time series prediction vs point process prediction Let's start with AR. AR by default is a time-discrete model, HP or general point process can be defined on continuous space, such as a timeline. In practice, point process data can be discretized usin
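To make the intensity recursion in the answer above concrete, here is a small R sketch that simulates a discretized Hawkes process with an exponential kernel $h(u)=\alpha e^{-\beta u}$. The parameter values and bin width are arbitrary illustrative choices, not part of the original answer.

# Discretized Hawkes process with exponential kernel h(u) = alpha * exp(-beta * u);
# all parameter values below are assumed for illustration
set.seed(42)
mu    <- 0.5   # baseline intensity
alpha <- 0.8   # excitation jump size
beta  <- 1.2   # kernel decay rate
Delta <- 0.1   # width of each time bin
nbins <- 1000

X      <- integer(nbins)   # event counts per bin
lambda <- numeric(nbins)   # intensity per bin

for (t in seq_len(nbins)) {
  past <- seq_len(t - 1)
  lambda[t] <- mu + sum(alpha * exp(-beta * (t - past) * Delta) * X[past])
  X[t] <- rpois(1, lambda[t] * Delta)
}

plot(lambda, type = "l", xlab = "time bin", ylab = expression(lambda[t]))

Because $\int_0^\infty \alpha e^{-\beta u}\,du=\alpha/\beta<1$ here, the process is stable: the intensity spikes after events and decays back towards $\mu$, which is exactly the self-exciting behaviour that separates an HP from a constant-intensity Poisson process.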
51,298
Difference between time series prediction vs point process prediction
A time series consists of measurements made at regular time intervals, whereas in a point process such as the Poisson or Hawkes process, the event times themselves are random rather than falling on a regular grid.
Difference between time series prediction vs point process prediction
A time series consists of measurements made at regular time intervals, whereas in a point process such as the Poisson or Hawkes process, the event times themselves are random rather than falling on a regular grid.
Difference between time series prediction vs point process prediction A time series consists of measurements made at regular time intervals, whereas in a point process such as the Poisson or Hawkes process, the event times themselves are random rather than falling on a regular grid.
Difference between time series prediction vs point process prediction A time series consists of measurements made at regular time intervals, whereas in a point process such as the Poisson or Hawkes process, the event times themselves are random rather than falling on a regular grid.
51,299
How to maximize min.entropy from a bounded log normal distribution?
This answer is valid for $H$ as defined by the OP. It is also valid for $H=p\ln{p}$, because the solution will still occur where $P(V_{mode})=P(V_{max})$; the factor of $p$ simply drops out.

This answer assumes that the discrete (integer-valued) distribution in the question is "similar enough" to a log-normal that the following two approximations are valid:

$P(V_{mode})$ is proportional to the probability density at the mode of the log-normal.

$P(V_{max})$ is proportional to the probability mass of the log-normal that lies to the right of $V_{max}$.

Therefore, with (positive) proportionality constants ($\alpha_1$ and $\alpha_2$), we have that $$P(V_{mode})=\alpha_1 \frac{e^{-\frac{(\ln{(e^{\mu-\sigma^2})} - \mu)^2}{2\sigma^2}}}{e^{\mu-\sigma^2}\sigma\sqrt{2\pi}}= \alpha_1 \frac{e^{\frac{\sigma^2}{2}-\mu}}{\sigma\sqrt{2\pi}}$$ and $$P(V_{max})=\alpha_2\left[\frac{1}{2}-\frac{1}{2} \mathrm{erf}\left(\frac{\ln{(V_{max})}-\mu}{\sqrt{2}\sigma} \right)\right]$$ where erf is the error function (the first equation comes from plugging the mode of the log-normal into the PDF of the log-normal, and the second comes from plugging $V_{max}$ into one minus the CDF of the log-normal).

Note that for any value of $\sigma$, $P(V_{mode})$ is monotonically decreasing in $\mu$. Also note that erf is monotonically increasing, so for any value of $\sigma$, $P(V_{max})$ is monotonically increasing in $\mu$. Note further that $P_{max}$ occurs either at $P(V_{mode})$ or at $P(V_{max})$. So for any value of $\sigma$, we maximize $H$ by varying $\mu$ until $P(V_{mode})=P(V_{max})$. At this value of $\mu$, increasing $\mu$ would increase $P(V_{max})$ and decreasing $\mu$ would increase $P(V_{mode})$. Thus, it is certainly the case that $H$ is maximized where $P(V_{mode})=P(V_{max})$.

Furthermore, given the constraint $P(V_{mode})=P(V_{max})$ (and $\sigma \ge 0$), it is possible to pick a value of $\sigma$ that maximizes $H$. This optimization problem could be solved numerically if it were possible to write down the actual probability mass function for this discrete "log-normalish" distribution.
How to maximize min.entropy from a bounded log normal distribution?
This answer is valid for $H$ as defined by the OP. It is also valid for $H=p\ln{p}$, because the solution will still occur where $P(V_{mode})=P(V_{max})$, the factor of $p$ will drop out. This answer
How to maximize min.entropy from a bounded log normal distribution? This answer is valid for $H$ as defined by the OP. It is also valid for $H=p\ln{p}$, because the solution will still occur where $P(V_{mode})=P(V_{max})$; the factor of $p$ simply drops out.

This answer assumes that the discrete (integer-valued) distribution in the question is "similar enough" to a log-normal that the following two approximations are valid:

$P(V_{mode})$ is proportional to the probability density at the mode of the log-normal.

$P(V_{max})$ is proportional to the probability mass of the log-normal that lies to the right of $V_{max}$.

Therefore, with (positive) proportionality constants ($\alpha_1$ and $\alpha_2$), we have that $$P(V_{mode})=\alpha_1 \frac{e^{-\frac{(\ln{(e^{\mu-\sigma^2})} - \mu)^2}{2\sigma^2}}}{e^{\mu-\sigma^2}\sigma\sqrt{2\pi}}= \alpha_1 \frac{e^{\frac{\sigma^2}{2}-\mu}}{\sigma\sqrt{2\pi}}$$ and $$P(V_{max})=\alpha_2\left[\frac{1}{2}-\frac{1}{2} \mathrm{erf}\left(\frac{\ln{(V_{max})}-\mu}{\sqrt{2}\sigma} \right)\right]$$ where erf is the error function (the first equation comes from plugging the mode of the log-normal into the PDF of the log-normal, and the second comes from plugging $V_{max}$ into one minus the CDF of the log-normal).

Note that for any value of $\sigma$, $P(V_{mode})$ is monotonically decreasing in $\mu$. Also note that erf is monotonically increasing, so for any value of $\sigma$, $P(V_{max})$ is monotonically increasing in $\mu$. Note further that $P_{max}$ occurs either at $P(V_{mode})$ or at $P(V_{max})$. So for any value of $\sigma$, we maximize $H$ by varying $\mu$ until $P(V_{mode})=P(V_{max})$. At this value of $\mu$, increasing $\mu$ would increase $P(V_{max})$ and decreasing $\mu$ would increase $P(V_{mode})$. Thus, it is certainly the case that $H$ is maximized where $P(V_{mode})=P(V_{max})$.

Furthermore, given the constraint $P(V_{mode})=P(V_{max})$ (and $\sigma \ge 0$), it is possible to pick a value of $\sigma$ that maximizes $H$. This optimization problem could be solved numerically if it were possible to write down the actual probability mass function for this discrete "log-normalish" distribution.
How to maximize min.entropy from a bounded log normal distribution? This answer is valid for $H$ as defined by the OP. It is also valid for $H=p\ln{p}$, because the solution will still occur where $P(V_{mode})=P(V_{max})$, the factor of $p$ will drop out. This answer
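The closing remark of the answer above suggests a numerical route. Below is a rough R sketch under assumptions that are mine rather than the answer's: values are the integers $1,\dots,V_{max}$ obtained by rounding a log-normal, all mass above $V_{max}$ is lumped into $V_{max}$, min-entropy is $H=-\log_2\max_k P(V=k)$, and $V_{max}=100$ is just an example.

# Rough sketch: maximize min-entropy of a discretized, right-censored log-normal;
# the discretization, the lumping of the right tail into Vmax, and Vmax = 100
# are all illustrative assumptions
Vmax <- 100

min_entropy <- function(par) {
  mu    <- par[1]
  sigma <- exp(par[2])    # keep sigma > 0
  k <- 2:(Vmax - 1)
  p <- c(plnorm(1.5, mu, sigma),                                  # P(V = 1)
         plnorm(k + 0.5, mu, sigma) - plnorm(k - 0.5, mu, sigma), # P(V = k)
         1 - plnorm(Vmax - 0.5, mu, sigma))                       # P(V = Vmax)
  -log2(max(p))
}

opt <- optim(c(log(20), log(0.5)), function(par) -min_entropy(par))  # maximize H
c(mu = opt$par[1], sigma = exp(opt$par[2]), H = -opt$value)

The same search structure carries over unchanged once the actual probability mass function of the OP's distribution is substituted for the discretized log-normal used here.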
51,300
Visualising activity
You can try using a horizontal stacked bar in Excel to show people's activity days in colour blocks with gaps painted white, like so: It will mean doing some transformations to your data, though, as you'd need two rows for each day with 1 and 0 depending on whether a customer engaged in an activity that day. If you're familiar with VBA it should be possible to do it programmatically.
Visualising activity
You can try using a horizontal stacked bar in Excel to show people's activity days in colour blocks with gaps painted white, like so: It will mean doing some transformations to your data, though, as
Visualising activity You can try using a horizontal stacked bar in Excel to show people's activity days in colour blocks with gaps painted white, like so: It will mean doing some transformations to your data, though, as you'd need two rows for each day with 1 and 0 depending on whether a customer engaged in an activity that day. If you're familiar with VBA it should be possible to do it programmatically.
Visualising activity You can try using a horizontal stacked bar in Excel to show people's activity days in colour blocks with gaps painted white, like so: It will mean doing some transformations to your data, though, as
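As an alternative to the Excel/VBA route described above, the same picture can be drawn as a customer-by-day activity matrix in R; the data below are made up purely for illustration.

# Customer-by-day activity plot with image(); made-up example data
set.seed(7)
n_cust <- 15
n_days <- 60
activity <- matrix(rbinom(n_days * n_cust, 1, 0.3), nrow = n_days, ncol = n_cust)

image(x = 1:n_days, y = 1:n_cust, z = activity,
      col = c("white", "steelblue"),   # gaps white, active days coloured
      xlab = "day", ylab = "customer", axes = FALSE)
axis(1)
axis(2, at = 1:n_cust, las = 1)
box()

Each coloured cell is a day on which that customer engaged in the activity, so runs of colour and white gaps can be compared across customers at a glance.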