idx | question | answer
---|---|---
54,601 | Will the silhouette formula change depending on the distance metric? | The silhouette statistic works on distances, not similarities, so one should convert similarities into distances. The general way to do it: 1) set the diagonal to 0, 2) reverse the sign of the elements, 3) find the smallest element and subtract it from each element, 4) set the diagonal to 0 again.
For cosine or correlation similarities there is also a geometrically more correct way: distance = sqrt[2(1 - similarity)]; it follows from the trigonometric law of cosines.
BTW, if you use SPSS you can find a collection of macros on my web page that compute a number of clustering criteria, including the silhouette.
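For concreteness, here is a minimal R sketch of both conversions, fed to cluster::silhouette(); the similarity matrix S and cluster labels cl below are made-up stand-ins, and the pmax() guard is just a numerical safeguard, not part of the recipe above.
library(cluster)
set.seed(1)
S  <- cor(matrix(rnorm(300), nrow = 30))     # 10 x 10 correlation (similarity) matrix
cl <- cutree(hclust(as.dist(1 - S)), k = 3)  # some cluster labels for the 10 items
S0 <- S; diag(S0) <- 0    # 1) set the diagonal to 0
D  <- -S0                 # 2) reverse the sign of the elements
D  <- D - min(D)          # 3) subtract the smallest element from each element
diag(D) <- 0              # 4) set the diagonal to 0 again
D2 <- sqrt(pmax(2 * (1 - S), 0))        # geometric conversion for cosine/correlation
summary(silhouette(cl, dmatrix = D2))   # or dmatrix = D for the general conversion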
54,602 | Will the silhouette formula change depending on the distance metric? | You can calculate the silhouette for a similarity matrix. The seminal paper on the silhouette, by P. J. Rousseeuw, explains how to calculate it from a similarity matrix:
The calculation for cohesion remains the same.
For computing separation, take the maximum instead of the minimum.
For calculating the silhouette, the numerator changes to cohesion - separation.
Refer to page 57 of the paper "Silhouettes: A Graphical Aid to the Interpretation and Validation of Cluster Analysis" by Peter Rousseeuw.
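As a sketch of the modified formula in R (assuming the denominator stays max(a_i, b_i), as in the distance-based version; check against the paper):
silhouette_from_similarity <- function(S, cl) {
  sapply(seq_len(nrow(S)), function(i) {
    own <- setdiff(which(cl == cl[i]), i)
    if (length(own) == 0) return(0)                 # convention for singleton clusters
    a_i <- mean(S[i, own])                          # cohesion: average similarity to own cluster
    b_i <- max(sapply(setdiff(unique(cl), cl[i]),   # separation: maximum over other clusters
                      function(k) mean(S[i, cl == k])))
    (a_i - b_i) / max(a_i, b_i)
  })
}
S <- matrix(c(1, .9, .2, .1,
              .9, 1, .3, .2,
              .2, .3, 1, .8,
              .1, .2, .8, 1), nrow = 4)   # tiny made-up similarity matrix
silhouette_from_similarity(S, c(1, 1, 2, 2))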
54,603 | What is the best way to determine if pageviews are trending upward or downward? | General thoughts about pageviews
I think there is a fair amount of domain specific knowledge that can be brought to bear on page views.
From examining my Google Analytics statistics from particular blog posts, I observe the following characteristics:
Large initial spike in pageviews when an article is first posted, related to hits coming from RSS feeds, links from syndication sites, prominence on the home page, and spikes related to newness and social media.
This effect tends to decline rapidly, but seems to still provide some boost for a few weeks.
Day of the week effects. At least in my blog on statistics, I get a consistent day of the week effect. There is a lull on the weekend.
The implication is that if I were trying to understand meaningful trends in an article, I would be looking at changes from week to week rather than day to day.
Seasonal effects: I also get more subtle seasonal effects, presumably related to when people are working or on holiday and, for some posts more than others, to when university students are studying or not. For example, the week between Christmas and New Year's is very quiet.
After the initial spike, I find most traffic is driven by Google searches, although a few posts derive considerable traffic from links from other blogs or websites. Links from social media and blog posts tend to lead to abrupt spikes in page views and, depending on the medium, may or may not lead to a consistent stream over time.
Implications for identifying upward or downward trends in a page
The above analysis provides a general model that I use to understand pageviews on my own blog posts.
It is a theory of some of the major factors that influence page views, at least on my site and from my experience.
I think having a model like this, or something similar, helps to refine the research question.
For instance, presumably you are interested in only some forms of upward and downward trends.
Trends that operate on the whole site such as day of the week and seasonal trends are probably not the main focus.
Likewise, trends related to the initial spike in pageviews and subsequent decline following a posting are relatively obvious and may not be of interest (or maybe they are).
There is also an issue related to the time frame and functional form of trending.
A page may be gradually increasing in weekly pageviews due to gradual improvements in its positioning in Google's algorithms or general popularity of the topic of the post.
Alternatively, a post may experience an abrupt increase as a result of it being linked to by a high profile website.
Another issue relates to thresholds for defining trending.
This includes both statistical significance and effect sizes.
I.e., is the trend statistically significantly different from the random variation that you might see, and is the change worthy of your attention?
Simple strategy for detecting interesting trends in pageviews
I'm not an expert in time series analysis, but here are a few thoughts about how I might implement such a tool.
I'd compute a table that compares pageviews for the preceding 28 days with the 28 days prior to the most recent 28 days.
You could make this more advanced by making the time frame a variable quantity (e.g., 7 days, 14 days, 56 days, etc.).
The more popular the page (and the site in general), the more likely that you are going to have enough page views in a period to do meaningful comparisons.
Each row of the table would be a page on your site.
You'd start with three columns (page title, current page views, comparison page views).
Filter out pages that did not exist for the entire comparison period.
Add columns that assist in the assessment of the effect size of any change, and the statistical significance of any change.
A simple summary statistic to use would be percentage change from comparison to current. You could also include raw change from comparison to current.
Perhaps a chi-square could be used to provide a rough quantification of the significance of any change (although I'm aware that the assumption of independence of observations is often compromised, which also raises the issue of whether you are using pageviews or unique page views).
I'd then create a composite of the effect size and the significance test to represent "interestingness".
You could also adopt a cut-off for when a change is sufficiently interesting, and of course classify it as upward or downward.
You could then apply sorting and filtering tools to answer particular questions.
In terms of implementation, this could all be done using R and data exported from tools like Google Analytics; a rough sketch of such a table is given below. There are also some interfaces between R and Google Analytics, but I haven't personally tried them.
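A rough dplyr/tidyr sketch of that comparison table, on made-up data (the page names, window lengths, and "interestingness" formula are all placeholders):
library(dplyr)
library(tidyr)
set.seed(1)
pv <- expand.grid(page = paste0("page_", 1:5),
                  date = seq(Sys.Date() - 59, Sys.Date(), by = "day"))
pv$views <- rpois(nrow(pv), lambda = 50)          # fake daily pageview counts
analysis_date    <- max(pv$date)
current_start    <- analysis_date - 27            # most recent 28 days
comparison_start <- analysis_date - 55            # the 28 days before that
trend_table <- pv %>%
  filter(date >= comparison_start) %>%
  mutate(period = if_else(date >= current_start, "current", "comparison")) %>%
  group_by(page, period) %>%
  summarise(views = sum(views), .groups = "drop") %>%
  pivot_wider(names_from = period, values_from = views) %>%
  filter(!is.na(current), !is.na(comparison)) %>%  # keep pages present in both windows
  mutate(pct_change = 100 * (current - comparison) / comparison,
         p_value = mapply(function(a, b) chisq.test(c(a, b))$p.value,
                          current, comparison),    # rough test of a 50/50 split of the two totals
         interestingness = abs(pct_change) * -log10(p_value)) %>%   # ad hoc composite score
  arrange(desc(interestingness))
trend_table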
54,604 | What is the best way to determine if pageviews are trending upward or downward? | Simply build an ARIMA model that separates signal from noise, incorporating any identifiable deterministic structure such as changes in levels, trends, or seasonal pulses, or parameter/variance changes over time. Develop a prediction for the next 5 days and use the uncertainty in that sum to create possible bounds. Compare the actual sum of the "new five readings" and compute the probability of yielding a value as "high" or as extreme as this.
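A rough sketch of this idea in R with the forecast package (the synthetic series, seasonal period, and number of simulations are all stand-ins):
library(forecast)
set.seed(1)
views <- 200 + 10 * sin(2 * pi * (1:120) / 7) + rnorm(120, sd = 15)  # fake daily pageviews
n     <- length(views)
train <- ts(views[1:(n - 5)], frequency = 7)   # hold out the last 5 days
fit   <- auto.arima(train)                     # ARIMA with automatic order selection
fc     <- forecast(fit, h = 5, level = 95)
actual <- views[(n - 4):n]
cbind(actual, lower = fc$lower[, 1], upper = fc$upper[, 1])   # day-by-day comparison to the 95% bounds
sims <- replicate(2000, sum(simulate(fit, nsim = 5)))          # simulated 5-day totals under the model
mean(sims >= sum(actual))   # approx. probability of a 5-day sum at least this large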
54,605 | What is the best way to determine if pageviews are trending upward or downward? | Jeromy Anglim and IrishStat both give great answers, but they sound maybe a little more complex than what you're looking for.
A simpler method could be to perform a linear regression on your data, to get PageViews = a * Date + b for some constants a and b; the constant a is then a measure of the linear "slope" of your data, which you could use to measure how much the link is trending. However, this might not work so well if your data doesn't follow a linear trend (the example in your other link looks pretty linear, but you could imagine that your link has instead been growing exponentially lately).
So another approach could be to convert your pageviews into ranks (e.g., in article 1, 100 is the lowest value, so convert that into a 1; 80 is the 2nd-lowest value, so convert that into a 2; 60 is the highest value, so convert that into a 3), and then take the correlation of these ranks with (1,2,...,n) (where n is the total number of dates you have).
For example, if your article behaves like
Date, PageViews, Rank
June 1, 100, 1
June 2, 120, 3
June 3, 115, 2
June 4, 125, 4
June 5, 150, 5
Then you would take the correlation between (1,3,2,4,5) and (1,2,3,4,5) to get a trending score of 0.9. (Note that under this method, though, pageviews of (100, 120, 115, 125, 150) have the same trending score as (100, 300, 299, 7000, 35000), which may or may not be what you want, since the latter is growing faster. In other words, this method tells you how strong the direction of the trend is, but not the magnitude. If you do want to get a sense of the magnitude, then you could just repeat these methods on the day-by-day changes of pageviews, i.e., determine whether the day-by-day changes are trending upwards or downwards.)
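The toy example above is easy to reproduce in R (a quick sketch):
views <- c(100, 120, 115, 125, 150)
coef(lm(views ~ seq_along(views)))[2]              # linear "slope" a from the regression
cor(rank(views), seq_along(views))                 # 0.9, the rank-based trending score
cor(views, seq_along(views), method = "spearman")  # the same thing, as Spearman's rho
cor(rank(diff(views)), seq_along(diff(views)))     # direction of the day-by-day changes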
54,606 | How to extract data from published articles (RCTs) to do a meta-analysis? | Your question indicates, to me, you're not yet ready to embark on the data abstraction portion of your meta-analysis. Your question needs refining, and you need to decide exactly what you're interested in asking. In your examples above, you appear to be interested in the main reported effects of the RCTs, which are found in the following places:
"Among the 866 participants who had patency assessed, the primary outcome of fistula thrombosis at 6 weeks occurred in 53 participants (12.2%) in the clopidogrel group compared with 84 participants (19.5%) in the placebo group (relative risk, 0.63; 95% CI, 0.46-0.97;P=.018)."
and
"The hazard ratio was 0.81 (95% CI, 0.47 to 1.40) in favor of aspirin and clopidogrel therapy".
All the other information is set dressing for specific study-related information, and you can probably discard it unless you think it matters to your question. Generally however, data can be abstracted into a simple Excel spreadsheet, for later analysis in the statistics package of your choice. I'd include the following fields:
Study Authors
Publication Year
Sample Size
Some coding for type of study (Case-Control, RCT, Cohort) - you can ignore this if you are really just extracting RCTs
ln(Effect Estimate). So in the case of the examples above, ln (0.63) and ln(0.81) if you are looking into the main effect of treatment.
Standard error of the estimates, which you can back-calculate from the confidence interval (a sketch of this appears after the list).
What kind of estimate was this? (RR vs. HR in your case)
Multiple columns of any study covariates that interest you. What setting, country, etc. the studies were in, were they prospective or retrospective, etc.
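As a sketch of the back-calculation and data layout, using the two estimates quoted above (the study labels are hypothetical, and pooling an RR with an HR is shown purely to illustrate the mechanics, not as a recommendation):
library(metafor)
dat <- data.frame(study   = c("study_1", "study_2"),
                  type    = c("RR", "HR"),
                  est     = c(0.63, 0.81),
                  ci_low  = c(0.46, 0.47),
                  ci_high = c(0.97, 1.40))
dat$yi  <- log(dat$est)                                               # ln(effect estimate)
dat$sei <- (log(dat$ci_high) - log(dat$ci_low)) / (2 * qnorm(0.975))  # SE from the 95% CI
res <- rma(yi = yi, sei = sei, data = dat)   # random-effects pooled log effect
exp(coef(res))                               # back on the ratio scale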
54,607 | Uniform random variable distribution | $[t]$ is the floor function, and $t$ just represents a generic argument. So for example $[0.5]=0$, $[0.9]=0$, $[1.01]=1$, $[1]=1$, $[23.567]=23$, and so on. You simply ignore what's written after the decimal point (note: this is not the same thing as rounding, since $[0.9]=0$ whereas rounding would give $1$).
With non-smooth functions such as the floor function, the safest way to go is to use the cumulative distribution function, or CDF. For the uniform distribution this is given by:
$$F_{U}(y)=\Pr(U<y)=\int_{0}^{y}f_{U}(t)dt=\int_{0}^{y}dt=y$$
Now the good thing about CDFs is that you can simply substitute the functional relation in, but only once you have inverted the floor function. This inversion is not one-to-one, so a standard change of variables using Jacobians doesn't apply. For example, suppose $X=0$. Then we know that $[nU]=0$, which means that $nU<1$, which implies that $U<n^{-1}$. We can work out this probability directly from the CDF:
$$\Pr(X=0)=\Pr(U<n^{-1})=F_{U}(n^{-1})=n^{-1}$$
The reason we can do this is that the two propositions " $X=0$ " and " $U<n^{-1}$ " are equivalent - one occurs if and only if the other occurs. So they must have the same "truth value" and hence also the same probability.
This is not too hard to continue on. Suppose $X=1$, then we must have $nU<2$ (or else $X>1$) and we must also have $nU>1$ (or else $X=0$ as we have just seen). So the equivalent condition to $X=1$ in terms of $U$ is $1<nU<2$. I'll stop my answer here so you can work out the general form of the probability mass function for $X$ ($\Pr(X=z)$ for general argument $z$).
One small hint is to note that $\Pr(a<U<b)=\Pr(U<b)-\Pr(U<a)=b-a$ for a uniform distribution.
I can post the full answer if you wish, but you may not learn as well compared to if you do it yourself.
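If it helps to check your working, here is a quick Monte Carlo sketch of the $\Pr(X=0)=n^{-1}$ step (with $X=[nU]$ and an arbitrary choice of $n$):
n <- 5
u <- runif(1e5)
x <- floor(n * u)
mean(x == 0)            # close to 1/n = 0.2
table(x) / length(x)    # empirical distribution of X, for you to compare with your answer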
54,608 | Uniform random variable distribution | $t$ is just a placeholder name for a variable; the actual focus in that explanation is on the square brackets, which denote the floor function.
I would start by plotting the function that maps from $U$ to $X$, that is $X(u)=[nu]$, over the range of all the values $U$ can assume. What does the set of possible function values (i.e. the image of the function) look like? How large are their preimages relative to one another? That should give you a clue how to get the distribution function of $X$.
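A short way to draw that map in R, for a concrete $n$ (here $n=4$):
n <- 4
u <- seq(0, 1, length.out = 1000)
plot(u, floor(n * u), type = "s", xlab = "u", ylab = "X(u) = [nu]")   # step function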
54,609 | Sample size required for mixed design ANOVA to achieve adequate statistical power | You need to decide what is acceptable statistical power for a given significance test. The rule of thumb of 80% power being reasonable is often bandied about.
However, I think it is more sensible to see sample size selection as an optimisation problem, where statistical power is but one consideration, and the cost of collecting additional data is considered (see here for further discussion of my thoughts).
Mixed ANOVA involves multiple potential significance tests. At the very least there are two main effects and an interaction. Potentially, there are also various comparisons. Each significance test can have different statistical power, and thus, literally, it does not make sense to speak of "the" statistical power of a mixed design as if it was a singular property. When determining the minimum sample size, you may want to think about which of the possible significance tests on the mixed ANOVA are important. If all of them are important, then you may want to consider the sample size required by the least powerful significance test.
G*Power 3 permits power analysis for all three types of significance tests in a mixed ANOVA (between subjects, within subjects, and the interaction).
Download this free software and go to the tests - means - Repeated Measures ... menu.
The software permits a priori (i.e., calculate sample size for given effect size, alpha, power, and design) or post hoc (i.e., calculate power for given sample size, effect size, alpha, and design) power analysis.
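If you prefer to stay in R, a simulation-based estimate is a possible complement to G*Power; the sketch below is for the group-by-time interaction in a 2 (between) x 3 (within) design, with purely hypothetical effect and variance values:
simulate_p_interaction <- function(n_per_group = 30, effect = 0.5) {
  n_subj <- 2 * n_per_group
  d <- expand.grid(subj = factor(1:n_subj), time = factor(1:3))
  d$group <- factor(rep(c("A", "B"), each = n_per_group))[as.integer(d$subj)]
  subj_re <- rnorm(n_subj, sd = 1)                              # subject random intercepts
  d$y <- subj_re[as.integer(d$subj)] + rnorm(nrow(d), sd = 1) +
         ifelse(d$group == "B" & d$time == "3", effect, 0)      # interaction effect at time 3
  fit <- aov(y ~ group * time + Error(subj/time), data = d)
  strata <- summary(fit)
  tab <- strata[[grep("subj:time", names(strata))]][[1]]
  tab[grep("group:time", rownames(tab)), "Pr(>F)"]              # p-value of the interaction test
}
set.seed(123)
pvals <- replicate(300, simulate_p_interaction(n_per_group = 30, effect = 0.5))
mean(pvals < 0.05)   # estimated power of the interaction test at this sample size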
54,610 | How to perform diallel analysis in R? | There is a beta package, plantbreeding, which can do diallel analysis.
https://r-forge.r-project.org/projects/plantbreeding/
They have a blog:
http://rplantbreeding.blogspot.com/
The following is example from this package:
require(plantbreeding)
data(fulldial)
out <- diallele1(dataframe = fulldial, male = "MALE", female = "FEMALE",
progeny = "TRT", replication = "REP", yvar = "YIELD" )
print(out)
out$anvout # analysis of variance
out$anova.mod1 # analysis of variance for GCA and SCA effects
out$components.model1 # model1 GCA, SCA and reciprocal components
out$gca.effmat # GCA effects
out$sca.effmat # SCA effect matrix
out$reciprocal.effmat # reciprocal effect matrix
out$varcompare # SE for comparisons
out$anovadf.mod2 # ANOVA for model 2
out$varcomp.model2 # variance components for model 2
54,611 | How to perform diallel analysis in R? | I think it's unlikely that you'll find worked examples in R for the analysis of diallels.
I did find some references for diallel analysis in SAS (e.g., here, and there's a chapter on DIALLEL-SAS in the book Handbook of formulas and software for plant geneticists and breeders).
54,612 | How to perform diallel analysis in R? | There's a nice worked example in the book Statistical and Biometrical Techniques in Plant Breeding by Jawahar R. Sharma on about page 184. Visible in Google books.
54,613 | How do I use the GPML package for multi dimensional input? | Here is a more minimal example of a 2-d regression problem (I haven't got Octave, only Matlab, but hopefully the difference won't matter). meanfunc and covfunc should be happy with any number of inputs, provided that the covariance function doesn't have a hyper-parameter per input feature (covSEiso, used here, has a single length-scale, unlike covSEard). Hope this helps.
[X1,X2] = meshgrid(-pi:pi/16:+pi, -pi:pi/16:+pi);
Y = sin(X1).*sin(X2) + 0.1*randn(size(X1));
imagesc(Y); drawnow;
x = [X1(:) X2(:)];
y = Y(:);
covfunc = @covSEiso;
likfunc = @likGauss;
hyp2.cov = [0 ; 0];    % initial hyperparameters for covSEiso: [log(ell); log(sf)]
hyp2.lik = log(0.1);   % initial log noise standard deviation
hyp2 = minimize(hyp2, @gp, -100, @infExact, [], covfunc, likfunc, x, y);
exp(hyp2.lik)
nlml2 = gp(hyp2, @infExact, [], covfunc, likfunc, x, y)
[m s2] = gp(hyp2, @infExact, [], covfunc, likfunc, x, y, x);
m = reshape(m, size(Y));
figure(2); imagesc(m);
54,614 | What is a good way of estimating the dependence of an output variable on the input parameters? | EDIT: After some reflection, I modified my answer substantially.
The best thing to do would be to try to find a reasonable model for your data (for example, by using multiple linear regression). If you cannot get enough data to do this, I would try the following "non-parametric" approach. Suppose that in your data set, the covariate $A$ takes on the values $A=a_1, ..., a_{n_A}$, and likewise for $B$, $C$, etc. Then what you can do is perform a linear regression on your dependent variables against the indicator variables $I(A= a_1), I(A=a_2), ..., I(A = a_{n_A}), I(B = b_1),...$ etc. If you have enough data you can also include interaction terms such as $I(A=a_1, B=b_1)$. Then you can use model selection techniques to eliminate the covariates that have the least effect.
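A minimal R sketch of that approach (the data frame and factor levels are made up; R expands each factor into the indicator variables automatically):
set.seed(1)
dat <- data.frame(A = factor(sample(c("a1", "a2", "a3"), 200, replace = TRUE)),
                  B = factor(sample(c("b1", "b2"),       200, replace = TRUE)),
                  C = factor(sample(c("c1", "c2", "c3"), 200, replace = TRUE)))
dat$y <- 1 + 2 * (dat$A == "a2") - 1 * (dat$B == "b2") + rnorm(200)
full    <- lm(y ~ A + B + C + A:B, data = dat)             # main effects plus one interaction
reduced <- step(full, direction = "backward", trace = 0)   # drop terms with little effect (AIC-based)
summary(reduced)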
54,615 | What is a good way of estimating the dependence of an output variable on the input parameters? | A few comments:
Why did you go with your particular experimental design set-up? For example, fix A+B and vary C. What would you fix A + B at? If you are interested in determining the effect of A and B, it seems a bit strange that you can fix them at "optimal values". There are standard statistical techniques for sampling from a multi-dimensional space, for example Latin hypercubes.
Once you have your data, why not start with something simple, say multiple linear regression. You have 3 inputs A, B, C and one response variable. I suspect from your description, you may have to include interaction terms for the covariates.
Update
A few comments on your regression:
Does the data fit your model? You need to check the residuals (a brief sketch of such checks appears after these comments). Try googling "R and regression".
Just because one of your covariates has a smaller p-value, it doesn't mean that it has the strongest effect. For that, look at the estimates of the $\beta_i$ terms: 0.8, -0.23, -0.31.
So a one-unit change in $A$ results in $T$ increasing by 0.8, whereas a one-unit change in $S$ results in $T$ decreasing by 0.23. However, are the units of the covariates comparable? For example, it may be physically impossible for $A$ to change by 1 unit. Only you can make that decision.
BTW, try not to update your question so that it changes your original meaning. If you have a new question, then just ask a new question.
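A brief R sketch of the residual checks and of putting the coefficients on comparable units; the covariate names A and S come from the question, W is a hypothetical stand-in for the third covariate, and the data are simulated with the quoted coefficients purely for illustration:
set.seed(1)
dat <- data.frame(A = rnorm(100), S = rnorm(100), W = rnorm(100))
dat$T <- 0.8 * dat$A - 0.23 * dat$S - 0.31 * dat$W + rnorm(100)
fit <- lm(T ~ A + S + W, data = dat)
par(mfrow = c(2, 2)); plot(fit)   # residuals vs. fitted, Q-Q, scale-location, leverage
coef(lm(scale(T) ~ scale(A) + scale(S) + scale(W), data = dat))   # standardized (per-SD) coefficients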
54,616 | Effect size of Spearman's rank test | I see no obvious reason not to do so. As far as I know, we usually make a distinction between two kinds of effect size (ES) measures for qualifying the strength of an observed association: ES based on $d$ (difference of means) and ES based on $r$ (correlation). The latter includes Pearson's $r$, but also Spearman's $\rho$, Kendall's $\tau$, or the multiple correlation coefficient.
As for their interpretation, I think it mainly depends on the field you are working in: A correlation of .20 would certainly not be interpreted in the same way in psychological vs. software engineering studies. Don't forget that Cohen's three-way classification--small, medium, large--was based on behavioral data, as discussed in Kraemer et al. (2003), p. 1526. In their Table 1, they made no distinction between the different types of ES measures belonging to the $r$ family. They have by no means an absolute meaning and should be interpreted with reference to established results or a literature review.
I would like to add some other references that provide useful reviews of common ES measures and their interpretation.
References
Helena C. Kraemer, George A. Morgan, Nancy L. Leech, Jeffrey A. Gliner, Jerry J. Vaske, and Robert J. Harmon (2003). Measures of Clinical Significance. J Am Acad Child Adolesc Psychiatry, 42(12), 1524-1529.
Christopher J. Ferguson (2009). An Effect Size Primer: A Guide for Clinicians and Researchers. Professional Psychology: Research and Practice, 40(5), 532-538.
Edward F. Fern and Kent B. Monroe (1996). Effect-Size Estimates: Issues and Problems in Interpretation. Journal of Consumer Research, 23, 89-105.
Daniel J. Denis (2003). Alternatives to Null Hypothesis Significance Testing. Theory and Science, 4(1).
Paul D. Ellis (2010). The Essential Guide to Effect Sizes. Cambridge University Press. -- just browsed the TOC
54,617 | Effect size of Spearman's rank test | With increasing sample size $n$, $r_{z} = \sqrt{n-1}\, r_{S}$ is asymptotically $N(0, 1)$ distributed (standard normal distribution) under the null hypothesis of no association. In R:
rSz <- sqrt(n-1) * rS   # rS: observed Spearman correlation, n: sample size
(pVal <- 1-pnorm(rSz)) # one-sided p-value, test for positive rank correlation
54,618 | How to view GBM package trees? [closed] | This is not a bug. The model is stored using a 0-based index. So SplitVar=0 is X1, SplitVar=1 is X2, and SplitVar=2 is X3. So this split corresponds to a split on X3. Since X3 is an ordinal factor and the split is at 1.5, this corresponds to splitting levels 0&1 from 2&3.
> sum(data$X3<="c")
[1] 522
> sum(data$X3>="b")
[1] 478
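For reference, one way to line up the SplitVar indices with variable names in gbm (the model fit below is a made-up stand-in, not the poster's data):
library(gbm)
set.seed(1)
d <- data.frame(X1 = rnorm(500), X2 = rnorm(500),
                X3 = factor(sample(letters[1:4], 500, replace = TRUE), ordered = TRUE))
d$Y <- rnorm(500) + 2 * (d$X3 >= "c")
fit <- gbm(Y ~ X1 + X2 + X3, data = d, distribution = "gaussian", n.trees = 50)
tree1 <- pretty.gbm.tree(fit, i.tree = 1)
tree1$SplitVarName <- c("(leaf)", fit$var.names)[tree1$SplitVar + 2]   # SplitVar = -1 means terminal node
head(tree1)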
54,619 | Expectation of $\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)$ | Because $\left(X-M\right)^T\left(X-M\right) = \sum_i{(X_i - m_i)^2}$,
$$\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right) = \sum_{i,j}{(X_i - m_i)^2(X_j - m_j)^2} \text{.}$$
There are two kinds of expectations to obtain here. Assuming the $X_i$ are independent and $i \ne j$,
$$\eqalign{
E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^2\right] E\left[(X_j - m_j)^2\right] \cr
&= \lambda_i \lambda_j .
}$$
When $i = j$,
$$\eqalign{
E \left[ (X_i - m_i)^2(X_j - m_j)^2 \right] &= E\left[(X_i - m_i)^4\right] \cr
&= 3 \lambda_i^2 \text{ for Normal variates} \cr
&= \lambda_i \lambda_j + 2 \lambda_i^2 \text{.}
}$$
Whence the expectation equals
$$\eqalign{
&\sum_{i, j} {\lambda_i \lambda_j} + 2 \sum_{i} {\lambda_i^2} \cr
= &(\sum_{i}{\lambda_i})^2 + 2 \sum_{i} {\lambda_i^2}.
}$$
Note where the assumptions of independence and Normality come in. Minimally, we need to assume the squares of the residuals are mutually independent and we only need a formula for the central fourth moment; Normality is not necessary.
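A quick numerical check of this result for independent normal $X_i$ (the variances and means below are arbitrary):
set.seed(1)
lambda <- c(1, 2, 0.5)
m      <- c(0, 1, -1)
X <- sapply(seq_along(lambda), function(i) rnorm(2e5, mean = m[i], sd = sqrt(lambda[i])))
q <- rowSums(sweep(X, 2, m)^2)       # (X - M)'(X - M) for each simulated draw
mean(q^2)                            # simulated expectation
sum(lambda)^2 + 2 * sum(lambda^2)    # formula: 22.75 for these values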
54,620 | Expectation of $\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)$ | I believe this depends on the kurtosis of $X$. If I am reading this correctly, and assuming the $X_i$ are independent, you are trying to find the expectation of $\sum_i (X_i - m_i)^4$. Because $X_i^4$ appears, you cannot find this expectation in terms of $M$ and $\Sigma$ without making further assumptions. (Even without the independence of the $X_i$, you will have $E[X_i^4]$ terms in your expectation.)
If you assume that the $X_i$ are normally distributed, you should find the expectation is equal to $3 \sum_i \lambda_i^2$.
54,621 | Expectation of $\left(X-M\right)^T\left(X-M\right)\left(X-M\right)^T\left(X-M\right)$ | If you lose the iid and normality assumptions, things can get ugly. In Anderson's book you can find explicit formulas for expectations of the type
$\sum_{s,r,t,u}E(X_s-m)(X_r-m)(X_t-m)(X_u-m)$
when $X=(x_1,...,x_n)$ is a sample from a stationary process with mean $m$. In general it is not possible to express such moments using only the first and second moments. For example, $cov(X_i,X_j)=0$ does not guarantee that $cov(X_i^2,X_j^2)=0$. That holds only for normal variables, for which zero correlation implies independence.
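A small R illustration of that last point (a standard counterexample, not from Anderson): take $Y = SX$ with $S$ a random sign independent of $X$, so $X$ and $Y$ are uncorrelated yet their squares are identical.
set.seed(1)
x <- rnorm(1e5)
s <- sample(c(-1, 1), 1e5, replace = TRUE)  # random signs, independent of x
y <- s * x                                  # cor(x, y) = 0, but y^2 = x^2
cov(x, y)                                   # near 0
cov(x^2, y^2)                               # near Var(X^2) = 2, clearly not 0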
54,622 | Developing a statistical test to ascertain better "fit" | Smoothing, rolling averages, running means... are all nice ways (perhaps) to display data. But using the results of smoothed data as an input to any statistical analysis is likely to give misleading results, especially when done by novices. William Briggs emphasizes this point in his excellent blog in this article and this one.
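A small R illustration of the danger (simulated data): smoothing pure noise manufactures autocorrelation that downstream tests will happily "detect".
set.seed(1)
y  <- rnorm(200)                                   # pure white noise
ys <- stats::filter(y, rep(1/10, 10), sides = 1)   # 10-point moving average
acf(y,  plot = FALSE)$acf[2]                       # lag-1 autocorrelation: near 0
acf(ys, plot = FALSE, na.action = na.pass)$acf[2]  # large and entirely spurious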
54,623 | Developing a statistical test to ascertain better "fit" | Based on the information given, I think you could consider AIC, a likelihood-based measure that penalizes the model's degrees of freedom.
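For example, in R (using a built-in data set and an arbitrary pair of candidate models purely for illustration):
fit1 <- lm(mpg ~ wt, data = mtcars)           # simple model
fit2 <- lm(mpg ~ poly(wt, 5), data = mtcars)  # more flexible, more parameters
AIC(fit1, fit2)                               # lower AIC = better fit/complexity trade-off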
54,624 | Specifying conditional probabilities in hybrid Bayesian networks | First of all, your usage of the term "prior probability" seems to be wrong. For any node N with discrete values $n_i$, the probability that a certain value of N occurs a priori is $p(N=n_i)$. If a node has no parents, one is interested in calculating this probability. But if a node has parents P, one is interested in calculating the conditional probability, i.e. the probability that a certain value of N occurs given its parents. This is $p(N=n_i|P)$.
Regarding your actual question: how to calculate the conditional probabilities of a child node if the parents are continuous and the child node is a) discrete or b) continuous. I will explain one approach using a concrete example, but I hope the general idea will be clear.
Consider this network (figure not shown), where Subsidy and Buys are discrete nodes while Harvest and Cost are continuous.
a) p(Cost|Subsidy, Harvest)
Two options:
1) Discretize Harvest and treat it as discrete => possible information loss
2) Model a mapping from the current value of Harvest to a parameter describing the distribution of Cost.
Details for option 2):
Let's assume Cost can be modeled using a normal distribution. In this case it is common to fix the variance and map the value of Harvest linearly to the mean of the Gaussian. The parent Subsidy (binary) only adds the constraint to create a separate distribution for subsidy=true and subsidy=false.
Result:
$p(Cost=c|Harvest=h,Subsidy=true)=N(\alpha_1 * h + \beta_1,\sigma_1^2)(c)$
$p(Cost=c|Harvest=h,Subsidy=false)=N(\alpha_2 * h + \beta_2,\sigma_2^2)(c)$
for some $\alpha$s and $\beta$s.
b) p(Buys|Cost)
In this case one needs to map the probability of occurrence of certain costs to the probability of Buys=True (p(Buys=False) = 1 - p(Buys=True)). (Note that this is the same task as in logistic regression.) One approach, if the parent has a normal distribution, is to calculate the integral from 0 to x of a standard normal distribution, where x is the z-transformed value of the parent. In our example:
$p(Buys=true|Cost=c) = integral(0,\frac{-c+\mu}{\sigma})$ with $\mu$ = mean and $\sigma$ = standard deviation of the Cost distribution. We use $-c+\mu$ instead of $c-\mu$ because an observation (knowledge extracted from data!) is that the lower the cost, the more probable a buy.
In the case of a non-binary discrete child node, one approach is to transform the single multi-valued problem into multiple binary problems.
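To make this concrete, here is a small R sketch that samples from such a hybrid network; the particular $\alpha$s, $\beta$s, standard deviations and the probit-style Buys rule are invented for illustration:
set.seed(1)
n       <- 10000
subsidy <- runif(n) < 0.5                              # binary parent
harvest <- runif(n, 0, 10)                             # continuous parent
mu      <- ifelse(subsidy, 10 - 1.0 * harvest, 12 - 0.5 * harvest)
cost    <- rnorm(n, mean = mu, sd = 1)                 # linear-Gaussian p(Cost | Harvest, Subsidy)
p.buys  <- pnorm((-cost + mean(cost)) / sd(cost))      # p(Buys = true | Cost), as described above
buys    <- runif(n) < p.buys
mean(buys[cost < 5]); mean(buys[cost > 10])            # cheaper goods are bought more often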
Example-Source / Further reading:
"Artificial Intelligence: A modern approach" by Russell and Norvig | Specifying conditional probabilities in hybrid Bayesian networks | First of all, your usage of the term "prior probability" seems to be wrong. For any node N with discrete values $n_i$ the probability that a certain value of N occurs a priori is $p(N=n_i)$. If a node | Specifying conditional probabilities in hybrid Bayesian networks
54,625 | Specifying conditional probabilities in hybrid Bayesian networks | Consider two simple cases,
1) a real valued variable X is the parent of another real valued variable Y
2) a real valued variable X is the parent of a discrete valued variable Y
Assume that the Bayes net is a directed graph X -> Y. The Bayes net is fully specified, in both cases, when P(X) and P(Y | X) are specified. Or, strictly speaking when P(X | a) and P(Y | X, b) are specified where a and b are the parameters governing the marginal distribution of X and the conditional distribution of Y given X, respectively.
Parametric Assumptions
Suppose we are happy for some reason to assume that P(X | a) is Normal, so that a contains its mean and its variance.
Consider case 1. Perhaps we are happy to assume that P(Y | X, b) is also Normal and that the conditional relationship is linear in X. Then b contains regression coefficients, as we normally understand them. That means here: an intercept, a slope parameter, and a conditional variance.
Now consider case 2. Perhaps we are willing to assume that P(Y | X, b) is Binomial and that the conditional relationship is linear in X. Then b contains the coefficients from a logistic regression model: here that means an intercept and a slope parameter.
In case 1 if we observe Y and try to infer X then we simply use Bayes theorem with the parametric assumptions noted above. The result is rather well known: the (posterior) mean of X given Y is a weighted average of the prior mean of X (gotten from a) and the observed value of Y, where the weights are a function of the standard deviations of X (from a) and Y given X (from b). More certainty about X pulls the posterior mean towards its mean.
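A small R sketch of that weighted average, assuming the simplest case $Y \mid X \sim N(X, s^2)$ with prior $X \sim N(m_0, s_0^2)$:
posterior_mean <- function(y, m0, s0, s) {
  w <- (1 / s0^2) / (1 / s0^2 + 1 / s^2)  # weight on the prior mean
  w * m0 + (1 - w) * y
}
posterior_mean(y = 3, m0 = 0, s0 = 1, s = 1)    # 1.5: equal precisions, halfway
posterior_mean(y = 3, m0 = 0, s0 = 0.1, s = 1)  # about 0.03: a tight prior dominates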
In case 2 there may be no handy closed form for P(X | Y) so we might proceed by drawing a sample from P(X), making use of your assumptions about P(Y | X) and normalising the result. That is then a sample from P(X | Y).
In both cases, all we are doing is computing Bayes theorem, for which we need some parametric assumptions about the relevant distributions.
All this gets more complicated when you want to learn a and b, rather than assuming them, but that is, I think, beyond your question.
Sampling
When thinking about continuous parents it may be helpful to think about generating data from the network. To make one data point from the network in case 1 (with the assumptions as above) we would first sample from P(X) - a Normal distribution - then take that value and generate a conditional distribution over Y by plugging the value we generated into the regression model of P(Y | X) as an independent variable. That conditional distribution is also Normal (by assumption) so we then sample a Y value from that distribution. We now have a new data point. The same process applies when P(Y | X) is a logistic regression model. The model gives us a probability of 1 vs. 0 given the value of X we sampled. We then sample from that distribution - essentially tossing a coin biased according to the model probabilities - and we have a new data point. When we think about inference in a Bayes net, we are, conceptually speaking, just reversing this process.
Perhaps that helps?
54,626 | Sampling with non-uniform costs | Methods to find a solution are well known, but this is a messy problem. A tiny example reveals much, so consider the case $n = 2$. Let the cost of sampling bit 1 be $c_1 = 1$ and the cost of sampling bit 2 be $c_2 = c$. Without any loss of generality assume this is the expensive bit: $c \ge 1$.
Either we sample both bits at a cost of $1 + c$ because we have to in order to keep the error low, or else we will sample bit 2 with probability $\pi$ and bit 1 with probability $1 - \pi$. Let's assume the value of $k$ is large enough that we won't be compelled to sample both bits.
An unbiased estimator is $\hat{B} = b_1 / (1 - \pi)$ if we sample bit 1 and $\hat{B} = b_2 / \pi$ if we sample bit 2. (This is the Horvitz-Thompson estimator.)
The error rate depends on the state of the population. I interpret the problem to require that the expected error size be assured of not exceeding the limit $k$ *no matter what the state of the population may be.* We cannot remove the word "expected" here, because (except for nearly exhaustive samples), the maximum error size can be arbitrarily close to 1 for large populations.
There are $2^2 = 4$ possible states, which can be fully enumerated in this small problem:
$$\eqalign{
\text{Prob.} &b_1 &b_2 &B &\text{Observation} &\hat{B} &\text{Error} \cr
1 - \pi &0 &0 &0 &0 &0 &0\cr
\pi &0 &0 &0 &0 &0 &0\cr
1 - \pi &0 &1 &1 &0 &0 &-1\cr
\pi &0 &1 &1 &1 &1/\pi &1/\pi - 1\cr
1 - \pi &1 &0 &1 &1 &1/(1-\pi) &1/(1-\pi) - 1\cr
\pi &1 &0 &1 &0 &0 &-1\cr
1 - \pi &1 &1 &2 &1 &1/(1-\pi) &1/(1-\pi) - 2\cr
\pi &1 &1 &2 &1 &1/\pi &1/\pi - 2
}$$
Taking expectations for each possible state $(b_1, b_2)$ condenses this into the following:
$$\eqalign{
b_1 &b_2 &\text{Error distribution} &\mathbb{E}[|\text{Error}|]\cr
0 &0 &(0, 0) &0\cr
0 &1 &(-1, 1/\pi-1) &2(1 - \pi)\cr
1 &0 &(1/(1-\pi)-1, -1) &2\pi \cr
1 &1 &(1/(1-\pi) - 2, 1/\pi - 2) &2 - 4\pi
}$$
In computing the expected absolute error I have assumed $\pi \le 1/2$: we will favor sampling the cheaper bit whenever possible.
Suppose, for example, $k = 3/2$. That is, we aim to find a sampling scheme that keeps the absolute error to $3/2$ or less with "high probability" while minimizing the expected cost. (I realize this choice of $k$ is artificial because we might attempt to improve the estimator--at risk of biasing it slightly--by constraining its estimates to 0, 1, or 2; but the purpose here is to look ahead to a situation with large $n$, where such improvements will be unlikely. The mathematical patterns are important in this example, not its (lack of) realism.) Evidently we would like to minimize the chance of paying for the expensive bit; that is, to make $\pi$ as small as possible. The final column in the previous table constrains $\pi$; it implies that
$$2(1-\pi) \le k,\quad 2\pi \le k,\quad 2 - 4\pi \le k.$$
For $k \ge 1$ all constraints can be satisfied provided
$$\max(1-k/2, 1/2 - k/4) \le \pi \le k/2.$$
Because the expected cost is
$$\mathbb{E}[\text{Cost}] = 1 + (c-1)\pi,$$
the unique cost-minimizing solution for $k=3/2$ is $\pi = 1/4$: regardless of the differences in expenses, we should sample the cheap bit with probability $3/4$ and the expensive bit with probability $1/4$, for an expected cost of $1 + (c-1)/4$.
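A quick numerical confirmation in R (taking an illustrative cost of 5 for the expensive bit):
k  <- 3/2; cc <- 5                                       # cc: illustrative cost of the expensive bit
pi.grid <- seq(0.01, 0.50, by = 0.01)
feasible <- (2 * (1 - pi.grid) <= k) & (2 * pi.grid <= k) & (2 - 4 * pi.grid <= k)
expected.cost <- 1 + (cc - 1) * pi.grid
pi.grid[feasible][which.min(expected.cost[feasible])]    # 0.25, as claimed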
This example reveals many things, including
There can be solutions cheaper than simple random sampling (which in this case would select each bit with probability $1/2$ for an expected cost of $1 + (c-1)/2$).
Finding a solution involves an optimization with an exponential number of (increasingly complicated) constraints in $n$.
The selection probabilities will depend on the value of $k$.
We cannot guarantee a fixed cost; all we can hope for--because randomization is essential--is an optimal expected cost.
As always, the optimal sample size will depend on $k$ (the limit on the amount of error).
As a practical matter, I think most people would have more information than contained in this abstract problem. Even if they didn't, if $n$ were large and a substantial sample size were contemplated, it would make sense to devote part of the sampling budget to the purpose of modeling a relationship between the costs and the values (the $c_i$ and the $b_i$). With such a model in hand one could greatly simplify the analysis and identify an optimal or near-optimal program to spend the remaining sampling budget (or even, in some cases, to establish that the targeted error rate is unlikely to be achieved). For this reason, and because the exponential growth in the constraints is troublesome, I am reluctant to pursue a more detailed analysis of this problem.
54,627 | Sampling with non-uniform costs | Despite promising not to, I have thought about this problem further. This approach differs enough from the previous one I outlined that it seems worthwhile posting it as a separate reply.
Both @Aniko and @shabbychef are right: you need to "almost exhaust the population" with "greedy sampling." But there's a twist--on occasion you can get away with a small sample.
Let's first change the notation (only slightly) to provide a clear interpretation of the constraint in the question. Assume that a (small) threshold error probability $p$ and a maximum error size $\epsilon$ (in place of $k$, which will have uses elsewhere) have been specified, so that we require
$$\Pr[|\hat{B}-B| \leq \epsilon] \ge 1 - p$$
regardless of the (unknown) values of the $b_i$ (the "state" of the population).
Let $c_i$ be the cost of sampling element $i$ in the population. Suppose $A$ is a subset of the population that is sampled with probability $\pi_A$, at a cost of $c(A) = \sum_{i \in A}{c_i}$. Let $m$ be the number of elements of $A$ (written $m = |A|$) and let $k$ be the number of them that are 1's. This information tells us that $B$ surely lies between $k$ and $k + n - m$ no matter what the state of the population may be. Provided only that
$$k + n - m - \epsilon \le \hat{B} \le k + \epsilon,$$
we are assured that the error associated with the sample $A$ cannot exceed $\epsilon$. This is not possible whenever $m \lt n - 2 \epsilon$. Let's say that such a subset is "small" (with respect to $\epsilon$ and $n$) and otherwise is "large."
Here is perhaps the only subtlety: when a sample is small we still have a chance of not making an error, provided we use a randomized estimator. An example of the best ones I can find is
$$\hat{B} = k + (2j-1)\epsilon \text{ with probability } \frac{1}{l},\ j=1,2,\ldots, l$$
where $l = \lceil{\frac{n-m}{2 \epsilon}\rceil}$. No matter what the values of the unsampled data are, this procedure has at least a chance of $2/l$ of being within $\epsilon$ of the correct total $B$. Using such an estimator, the probability of an unacceptable error is bounded by the expected chance that the randomized estimate will have too great an error:
$$\Pr[|\hat{B}-B| \gt \epsilon] \le \sum_{A}{\pi_A(1 - \frac{1}{\lceil{\frac{n-|A|}{2 \epsilon}\rceil}})}.$$
(The coefficient when $|A| = n$ appears to be undefined but actually is zero; the sum really needs to extend only over the small subsets where randomization is actually needed.)
We have obtained a linear program for the sample probabilities $\pi_A$; to wit,
Minimize the expected cost $\sum_{A}{\pi_A c(A)}$
subject to
$\sum_{A}{\pi_A(1 - \frac{1}{\lceil{(n-|A|)/(2 \epsilon)\rceil}})} \le p,$
$\sum_{A}{\pi_A} = 1,$
$\pi_A \ge 0$ for all subsets $A$.
This is a simple linear program (but with $2^n$ variables), easy to set up and easy to solve provided the population has about 16 or fewer bits. When some of the costs are the same, the number of variables can be substantially reduced. With larger populations, approximate methods would be needed to obtain a solution. Generally, the solution cannot include any small samples with appreciable probability: most of the probability must be concentrated on large samples. Among those, it will select the cheapest (which can be found with the greedy algorithm). These heuristics allow for simple, rapid approximations to good solutions.
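As a sketch, the $n = 2$, $\epsilon = 1/2$, $p = 1/20$, $c = (1,1)$ case can be solved with the lpSolve package; because the LP has multiple optima, the solver may return an asymmetric split with the same minimal expected cost as the symmetric solution described below.
library(lpSolve)                                   # a standard LP solver in R
subsets <- list(integer(0), 1, 2, 1:2)             # all subsets of a 2-bit population
costs   <- sapply(subsets, function(A) sum(c(1, 1)[A]))
coefs   <- sapply(subsets, function(A) {           # 1 - 1/ceil((n - |A|)/(2*eps)), 0 when |A| = n
  m <- length(A); if (m == 2) 0 else 1 - 1 / ceiling((2 - m) / (2 * 0.5))
})
sol <- lp("min", costs, rbind(coefs, rep(1, 4)), c("<=", "="), c(1/20, 1))
sol$solution                                       # an optimal sampling distribution over subsets
sol$objval                                         # expected cost 0.9, matching f((1,1), 1/2, 1/20)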
The solutions can be interesting. Here are some examples. As an abbreviation, let $f(c, \epsilon, p)$ indicate a solution for cost vector $c = (c_1, c_2, \ldots, c_n)$ and problem constraints $\epsilon$ and $p$.
$f((1,1), 1/2, 1/20)$ samples each element with probability 9/20 and obtains no data with probability 1/10. The expected cost is 0.9.
Why is this? If we sample element 1 we observe $b_1$ and estimate $\hat{B}$ = $b_1 + 1/2$. This is certainly within $1/2$ of the correct value. When we take no sample we estimate that $B$ equals $1/2$ with 50% probability and otherwise estimate that $B$ equals $3/2$. No matter what the population is, this guessing will return the correct answer (within an error of $1/2$) 50% of the time. Thus we make an error greater than $1/2$ only 50% of $1/10$ of the time, which meets the targeted error rate of $1/20$.
$f((1,1,5,5), 1/2, 1/20)$ elects to sample both cheap bits no matter what. In addition, there is a 45% chance it will also include bit 3 (but not bit 4) and a 45% chance it will also include bit 4 (but not bit 3). The reasoning is similar to the previous situation.
$f((1,1,5,5), 1/4, 1/20)$ samples the entire population with a $33/35$ probability and otherwise obtains no data with $2/35$ probability.
$f((1,2,3,4), 1/2, 1/20)$ samples bits 1, 2, and 3 with $14/15$ probability and otherwise obtains no data. The expected cost equals $84/15$ = 5.6.
54,628 | Sampling with non-uniform costs | If the costs $c_i$ are known a priori, it seems like a greedy sampling would give you some guarantees. That is, sample the $n-2k$ bits in order of increasing cost. This gives a $k$-error guarantee on $B$ with probability $1$ in the obvious way. I am curious if this strategy is the limit strategy of some sane sequence of strategies that provide a guarantee with probability $1-\epsilon$.
If the algorithm is to be deterministic, and the $c_i$ are set by an adversary, I do not think you can do better than this.
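In R the greedy rule is a one-liner (the costs below are made up):
greedy_sample <- function(cost, k) order(cost)[seq_len(max(length(cost) - 2 * k, 0))]
greedy_sample(cost = c(3, 1, 10, 2, 5), k = 1)   # indices of the n - 2k cheapest bits: 2 4 1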
54,629 | Statistical test to compare two ratios from two independent models | In response to an old question, and given that a good response has already been provided elsewhere by jbowman and StasK to a very similar (but better defined) problem, I refer anyone who stumbles on this to the following question (and answers):
Test for significant difference in ratios of normally distributed random variables
The permutation test should be easy to implement in most statistical tools and many programming languages. Additionally, it doesn't assume that you have count data, which means that you can use a ratio of rates or other appropriate metrics.
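For example, a bare-bones version in R (the function and its interface are just a sketch):
perm_test_ratio <- function(num, den, group, B = 10000) {
  stat <- function(g) sum(num[g == 1]) / sum(den[g == 1]) -
                      sum(num[g == 2]) / sum(den[g == 2])
  observed <- stat(group)
  null     <- replicate(B, stat(sample(group)))
  mean(abs(null) >= abs(observed))     # two-sided permutation p-value
}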
54,630 | Statistical test to compare two ratios from two independent models | Any test for independence of a 2x2 contingency table will do! A chi-square test or a t-test is the textbook simple solution. The "best" test in this situation is called Barnard's test for superiority -- the StatXact software will happily calculate this for you.
54,631 | Statistical test to compare two ratios from two independent models | I assume you are trying to test the difference of two proportions here. For example, a click-through rate of a website before and after a button change, which is defined by
number of visitors who click the button and navigate to another page / number of visitors who visit the page
If that's the case, you can use a z-test if your sample data sets satisfy the following assumptions:
number of examples in each data set is greater than 5
each data set follows a normal distribution
Then, based on the chosen confidence level (say 95%), you can check the z-table to get the critical value (for a one-tailed test at this level it will be 1.645). And with
number of positive examples in your control group, denoted by x1
number of total examples in your control group, denoted by N1
number of positive examples in your experiment group, denoted by x2
number of total examples in your experiment group, denoted by N2
you can calculate p.hat (the estimated pooled proportion) = (x1+x2)/(N1+N2), and your z statistic will be (x1/N1 - x2/N2)/sqrt(p.hat*(1-p.hat)*(1/N1 + 1/N2)).
Then you compare your z statistic with the critical value to decide whether or not to reject your null hypothesis.
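In R, with made-up counts, the same calculation and the equivalent built-in test look like this:
x1 <- 120; N1 <- 2400    # control: clicks, visitors (made-up numbers)
x2 <- 150; N2 <- 2300    # experiment: clicks, visitors
p.hat <- (x1 + x2) / (N1 + N2)
z <- (x1/N1 - x2/N2) / sqrt(p.hat * (1 - p.hat) * (1/N1 + 1/N2))
z                                                  # the z statistic described above
prop.test(c(x1, x2), c(N1, N2), correct = FALSE)   # its X-squared equals z^2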
54,632 | "Multiple response" analysis of arrest records | I can't particularly comment on how to handle multiple response categories, but you need to further refine your question for people on this forum to be able to give useful advice.
You mention various interests, such as some sort of drug policy intervention, and differential charges according to race, arrest location, and home location. For the differential charges there is a huge body of criminological literature assessing various aspects of this. Are you interested in discretionary behavior of particular officers (or racially prejudiced treatment)? Are you interested in aspects of disproportionate minority contact with the criminal justice system? There is such a wide variety of potential questions that I cannot give any advice. Of what nature is the drug policy intervention? Are you interested in criminal histories and the effect of some policy?
The nature of your data is pretty typical. Some recent arrest data I worked with had an average of around 3 charges per arrest (I remember 1 case having 20 charges in a single arrest). You will typically have some charges that tend to come together (and sometimes functionally redundant charges). Oftentimes drug possession charges are not alone because the offender did something else to attract an officer's attention (most often another crime), and upon arrest they were searched and drugs were found. You will undoubtedly have a core of prolific offenders in your data, and for any analysis you will want to know their histories, and likely take them into account for your analysis (do you have unique identifiers for individuals or do you have to match based on names, DOBs, and/or SSNs?) You may also have co-offending behavior that may be of interest.
Most projects I have been involved in (including my own work) have dealt with the multiple charges in two ways. One is only include the "top" charge according to some ranking criteria, the other is to only analyze a particular subset of charges. This is hardly universal advice though, and without knowing the question you are addressing it is probably not advisable to do either of these at the onset. If you clump any charges together (e.g. treat possession of weed the same as possession of cocaine) I would suggest you do it on theoretical grounds as opposed to using some sort of statistical methods (although again depending on the question some type of stat clustering method may be useful).
The more specific questions you have the better this community will be able to give advice. The nature of your data may seem complicated but many people on this forum will have had experience with similar data structures (at least in various aspects).
54,633 | "Multiple response" analysis of arrest records | It is not clear what questions you are trying to answer, but here are several ways to deal with the multiple-response data:
Arresting Officer
Convert the two columns into a single count variable (1 or 2) which indicates the no of arresting officers. You will lose the arresting officer's identities but perhaps that is not of interest to you per se?
Charges
Again convert to a count variable or perhaps a weighted count with the weights being the severity of the crime. The count variable represents the number of charges against the defendant.
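Both conversions are straightforward in R; here is a sketch on a tiny invented data frame (the column names and severity weights are made up):
arrests <- data.frame(officer1 = c("A", "B"), officer2 = c("C", NA),
                      charge1  = c("possession", "assault"), charge2 = c("theft", NA))
arrests$n_officers <- rowSums(!is.na(arrests[, c("officer1", "officer2")]))
severity <- c(possession = 1, theft = 2, assault = 3)   # assumed severity weights
arrests$charge_score <- apply(arrests[, c("charge1", "charge2")], 1,
                              function(ch) sum(severity[ch], na.rm = TRUE))
arrests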
The above strategies to deal with multiple-response variables have drawbacks, as you lose information (e.g., the identity of officers, details of the specific crimes charged, etc.), but perhaps that is acceptable to you given the research questions you want to pursue.
A better answer can potentially be given but that will require some knowledge of what you are trying to accomplish with the data.
54,634 | "Multiple response" analysis of arrest records | I've examined associations between multiple response categorical variables in the past basically following the log-linear approach for marginal data outlined in the following:
Strategies for Modeling Two Categorical Variables with Multiple Category Choices (Bilder, Loughlin, 2003)
Your case may be more complicated since you're looking at more than just Officer by Charges. But the Bilder paper and references within may be a good start for exploring your modeling choices. The nice thing is that I was able to fit this in R without much trouble.
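For orientation, here is the ordinary (single-response) log-linear fit in R on invented counts; the Bilder and Loughlin marginal approach then generalizes this idea to multiple-response margins:
counts <- expand.grid(officer = c("A", "B"), charge = c("drug", "theft", "assault"))
counts$n <- c(12, 7, 30, 25, 9, 14)                     # made-up cell counts
indep <- glm(n ~ officer + charge, family = poisson, data = counts)
satur <- glm(n ~ officer * charge, family = poisson, data = counts)
anova(indep, satur, test = "Chisq")                     # tests the association term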
The problem you're likely to run into is ending up with a sparse contingency table which can lead to convergence problems when fitting your log-linear model. In this respect, I think Andy's and Srikant's advice will serve you well -- you'll have to make some assumptions or simplifications suitable for your domain. Figure out what question you're interested in and see if you can reduce the dimensions in some way.
54,635 | Where can I find good resources for making publication quality line plots | See this related question. All the same advice applies in your case. Just to highlight a few points:
Using R is a good way to go, especially with the ggplot2 package. This is both flexible and produces very high quality output. There are plenty of examples on the ggplot2 website and across the web (including on this website).
To improve the quality of your plots, you should consider using a different device driver than the default and choosing a high-quality output format (e.g. SVG). The Cairo package is one good option: you simply open a Cairo device before plotting and your output is rendered through Cairo. Cairo can be used with any plotting software, not just with R.
In terms of putting your plots together in a LaTeX publication, that's the role that Sweave plays. It makes combining plots with your paper a trivial operation (and has the added benefit of leaving you with something that is reproducible and understandable). Use cacheSweave if you have long-running computations.
If you want more specific help on a particular plot, I advise giving more detail.
Edit
If you want a specific example, it is best that you provide more specifics. For more examples of using ggplot, you can also refer to the LearnR blog. Here's an example combining ggplot with Cairo for high quality output.
library(Cairo)    # provides the high-quality Cairo output devices
library(ggplot2)  # provides qplot()
CairoPDF("plot.pdf", 6, 6, bg="transparent")  # open a 6 x 6 inch PDF device
qplot(factor(cyl), wt, geom=c("boxplot", "jitter"), color=am, data=mtcars)
dev.off()  # close the device to write the file
You can look at the documentation for these packages by using the R help functions.
54,636 | Where can I find a good resources for making publication quality line plots | You are probably aware of TeXexample.net. The TeX package pgfplots might also be of interest to you: it provides a pretty complete manual and allows you to invoke gnuplot directly.
Personally, I use Sweave to embed R code, plots, and tables (see the xtable package) into LaTeX documents. It is very handy, especially if you need to run additional processing or statistical tests on your input data. There are many resources about R and its plotting capabilities: the R Graph Gallery, the ggplot2 website and book, the R Graphics and Lattice books, as well as many other general tutorials on R that demonstrate how to plot your data.
54,637 | PCA on out-of-sample data | Following the comments exchange with Ebony (see Whuber's answer), I gather that in Ebony's application $p$ is much larger than $n$, which is itself very large. In this case the complexity of computing the eigen-decomposition is of the order $O(n^3)$. Two solutions spring to mind:
Partial decomposition: assuming $p$ is very large, it could be the case that the full eigen-decomposition is not needed. If only the $k$ largest eigenvalues (and corresponding eigenvectors) are needed, they can presumably be obtained with complexity near $O(nk^2)$. Would such an algorithm be a solution to your problem?
Full decomposition: in this case it may be better to draw $J$ random sub-samples of your observations, each of size $n_0$ suitably smaller than $n$, and compute $J$ PCA decompositions. That would in turn give you $J$ values of each eigenvalue/eigenvector, which could be used to establish the sampling distribution of their population values (and their means would be a good estimator of the population eigenvalues/eigenvectors). Given the $n^3$ complexity, this can be made much faster (by appropriately choosing $n_0$). A second benefit is that this procedure can be run in parallel across $m$ cores/computers, yielding an overall complexity of $O(Jm^{-1}n_0^3)$. A small R sketch of this subsampling idea is given below.
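The R sketch below is an illustrative addition (not part of the original answer): it averages the eigenvalues obtained from repeated random sub-samples, with the data matrix X, the sub-sample size n0 and the number of sub-samples J all being made-up choices for the example.
set.seed(1)
X <- matrix(rnorm(2000 * 10), nrow = 2000)   # stand-in for your data matrix (observations in rows)
n0 <- 500                                    # sub-sample size (illustrative)
J <- 20                                      # number of sub-samples (illustrative)
eig <- replicate(J, {
  idx <- sample(nrow(X), n0)
  prcomp(X[idx, ], scale. = TRUE)$sdev^2     # eigenvalues of the correlation matrix of one sub-sample
})
rowMeans(eig)      # averaged eigenvalue estimates across the J sub-samples
apply(eig, 1, sd)  # their variability, approximating the sampling distribution mentioned above
Each replication only touches n0 rows and the J replications are independent, so they can be farmed out to separate cores with, for example, the parallel package.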
54,638 | PCA on out-of-sample data | What computational savings? The PCA computation is based on the covariance (or correlation) matrix, whose size depends on the number of variables, not the number of data points. The calculation of a covariance matrix is fast. Even if you were doing PCA repeatedly (as part of a simulation, for instance), reducing from 1000 data points to 500 wouldn't even reduce the time by 50%.
54,639 | PCA on out-of-sample data | This isn't unlike a model selection problem where the goal is to arrive at something close to the "true dimensionality" of the data. You could try a cross validation approach, say 5-fold CV with your 500 data points. This will give you a reasonable metric of generalization error for out-of-sample data. The following paper has a nice survey and review of related methods:
Cross-validation methods in principal component analysis: A comparison (Diana, Tommassi, 2002)
54,640 | PCA on out-of-sample data | I have never done this, but my intuition suggests that the answer would depend on the extent to which the covariance matrix for the 500 data points is 'different' from that of the out-of-sample data. If the out-of-sample covariance matrix is very different, then clearly the projection matrix of those points would differ from the projection matrix that emerges from the in-sample data. Thus, to the extent that the covariance matrices for the in-sample and out-of-sample data are 'similar', the results should be about the same.
The above intuition suggests that you should carefully select the 500 in-sample points so that the resulting covariance matrices for the in-sample and out-of-sample data are as similar as possible.
54,641 | Parametric techniques for n-related samples | Multilevel/hierarchical linear models can be used for this. Essentially, each repetition of the measure is clustered within the individual; individuals can then be clustered within other hierarchies. For me, at least, it's more intuitive than repeated-measures ANOVA.
The canonical text is Raudenbush and Bryk; I'm also really fond of Gelman and Hill. Here's a tutorial I read some time ago - you may or may not find the tutorial itself useful (that's so often a matter of personal taste, training and experience), but the bibliography at the end is good.
I should note that Gelman and Hill doesn't have a ton on multilevel models specifically for repeated measures; I can't remember if that's the case or not for Raudenbush and Bryk.
Edit: I found the book I was looking for - Applied Longitudinal Data Analysis by Singer and Willett has (I believe) an explicit focus on multilevel models for repeated measures. I haven't had a chance to read very far into it, but it might be worth looking into.
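To make the clustering idea concrete, here is a minimal R sketch (an addition for illustration, not from the original answer) using the lme4 package and its bundled sleepstudy data, where repeated reaction-time measurements are nested within subjects.
library(lme4)
# A random intercept per subject captures the within-person clustering of repeated measures:
fit <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
summary(fit)
# Adding a random slope lets individuals change at different rates over time:
fit2 <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)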
54,642 | Use of Bayesian Search Theory in geological interpretation | Though it is not generally labeled as Bayesian search theory, such methods are pretty widely used in oil exploration. There are, however, important differences in the standard examples that drive different features of their respective modeling problems.
In the case of lost vessel exploration (in Bayesian search theory), we are looking for a specific point on the sea floor (one elevation), with one distribution modeling the likelihood of the vessel's resting location, and another distribution modeling the likelihood of finding the boat were it at that location. These distributions then guide the search, and are continuously updated with the results of the guided search.
Though similar, oil exploration is fraught with complicating features (multiple sampling depths, high sampling costs, variable yields, multiple geological indicators, drilling cost, etc.) that necessitate methods that go beyond what is considered in the prior example. See Learning through Oil and Gas Exploration for an overview of these complicating factors and a way to model them.
So, yes, it may be said that the oil exploration problem is different in magnitude, but not in kind, from lost vessel exploration, and thus similar methods may be fruitfully applied. Finally, a quick literature search reveals many different modeling approaches, which is not too surprising given the complicated nature of the problem.
54,643 | Use of Bayesian Search Theory in geological interpretation | There is a free book on Geostatistical Mapping with R here; it might help with your problem.
54,644 | Measuring the effectiveness of a pattern recognition software | Yes, there are many methods. You would need to specify which model you're using, because it can vary.
For instance, some models can be compared based on the AIC or BIC criteria. In other cases, one would look at the MSE from cross-validation (as, for instance, with a support vector machine).
I recommend reading Pattern Recognition and Machine Learning by Christopher Bishop.
This is also discussed in Chapter 5 on Credibility, and particularly section 5.5 "Comparing data mining methods" of Data Mining: Practical Machine Learning Tools and Techniques by Witten and Frank (which discusses Weka in detail).
Lastly, you should also have a look at The Elements of Statistical Learning by Hastie, Tibshirani and Friedman which is available for free online.
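As a small, purely illustrative addition (not part of the original answer), here is what comparing two candidate models by AIC/BIC and by cross-validated error can look like in R, using the built-in mtcars data as a placeholder.
fit1 <- glm(am ~ wt,      data = mtcars, family = binomial)
fit2 <- glm(am ~ wt + hp, data = mtcars, family = binomial)
AIC(fit1, fit2)   # lower is better; penalises extra parameters
BIC(fit1, fit2)   # like AIC but with a heavier penalty for model size
library(boot)
set.seed(1)
cv.glm(mtcars, fit1, K = 5)$delta   # 5-fold cross-validated prediction error
cv.glm(mtcars, fit2, K = 5)$delta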
54,645 | dispersion parameter in Poisson models | That's correct!
You've found out why glm doesn't use deviance/df as an estimate of dispersion: it's not a very good one. It uses the better estimate based on the variance of the Pearson residuals (though for family=poisson it doesn't need to estimate).
The estimate is bad because the deviance residuals aren't actually that close to $N(0,1)$ under the model -- they can't be, because $Poisson(1)$ is discrete, and $N(0,1)$ isn't. However, the Pearson residuals do have variance very close to 1 (exactly 1 if you didn't need to estimate the mean), and so give a better estimate of the dispersion.
> n <- 1000   # n must be set before running this; 1000 is an illustrative choice consistent with the output below
> r<-replicate(1000,glm(rpois(n,1)~1,family=poisson)$deviance/(n-1))
> mean(r)
[1] 1.146889
> s<-replicate(1000,summary(glm(rpois(n,1)~1,family=quasipoisson))$dispersion)
> mean(s)
[1] 1.000165
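As an added aside (not part of the original answer), the Pearson-based dispersion that the quasipoisson summary reports can be computed by hand from any fitted model, continuing the session above:
> fit <- glm(rpois(n, 1) ~ 1, family = poisson)
> sum(residuals(fit, type = "pearson")^2) / df.residual(fit)   # Pearson X^2 divided by the residual df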
For the Poisson family, the bias as a function of the mean is shown in a figure omitted here: the blue curve is the deviance-based estimate and the orange curve is the estimate based on the Pearson residuals.
This is a related phenomenon to $\chi^2$ tests in contingency tables not being very accurate with small cell counts -- again, the $\chi^2$ approximation is based on Poisson distributions being approximately Normal, and when the mean is 1 they aren't.
Note, however, that a bit of perspective is useful here. The dispersion estimate based on the deviance is biased when the mean is 1, but it's not all that biased. If you want to know the dispersion to within 10%, you need large sample sizes and you need to know the distribution is accurately Poisson. The bias in the estimator probably isn't your biggest problem.
54,646 | Combining Bayesian and Frequentist Estimation into a Single Model? | You can look at the a priori inferential properties of estimators (which treats both the data and parameters as random), but this is weaker than standard analysis
If you look at a statistical problem from a perspective where both the data and the model parameters are treated as random, you are essentially looking at the a priori properties of estimators. This is an exercise that can be done fruitfully, and it falls within the general class of analysis of the Bayesian properties of estimators. However, performing analysis of this kind is typically weaker than looking at the classical properties of estimators.
To see what this type of analysis looks like, suppose we consider some estimation/inferential method, which as you point out, is built on the basis of its properties conditional on the model parameters but unconditional on the data. For example, an exact confidence interval for a model parameter $\theta \in \Theta$ (based on a data vector $\mathbf{x}$) would have the following property (which is essentially the defining property of an exact confidence interval):
$$\mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha) | \theta ) = 1-\alpha
\quad \quad \quad \text{for all } \theta \in \Theta.$$
Now, if we take any prior distribution $\pi$ for the model parameter then the above property implies the weaker property:
$$\mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha)) = \int \limits_\Theta \mathbb{P}(\theta \in \text{CI}( \mathbf{X}, \alpha) | \theta ) \cdot \pi(\theta) \ d\theta = 1-\alpha.$$
As you can see, because the coverage property for a CI holds under all specific parameter values $\theta$ (which is how we analyse estimators/inference methods in classical analysis), this implies that it must also hold (marginally) for any prior distribution over the possible values of $\theta$. Note that the latter is a weaker property than the underlying property defining the exact confidence interval, but it is interesting to note. This tells us that an exact confidence interval formed by classical methods is such that a priori we expect it to have the correct coverage. This is what it looks like to analyse the properties of a statistical estimator treating both the data and the parameter as random.
I note your overarching question about whether it would be possible to form a new hybrid approach to estimation/inference by combining classical methods and Bayesian methods. That might be possible in principle, but because the above a priori analysis is weaker than the standard classical approach to looking at estimation, it is unlikely that this would assist you to formulate a better method than existing approaches.
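The marginal coverage statement above can be checked numerically; the R simulation below is an illustrative addition (not from the original answer), with the normal prior, the known error variance and the sample size all being arbitrary choices.
set.seed(1)
B <- 10000; n <- 20
covered <- replicate(B, {
  theta <- rnorm(1, mean = 0, sd = 10)                 # draw the parameter from a prior
  x <- rnorm(n, mean = theta, sd = 1)                  # draw data given that parameter
  ci <- mean(x) + c(-1, 1) * qnorm(0.975) / sqrt(n)    # exact 95% CI when sd = 1 is known
  ci[1] <= theta && theta <= ci[2]
})
mean(covered)   # close to 0.95 whatever prior is used, as the argument above implies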
54,647 | Combining Bayesian and Frequentist Estimation into a Single Model? | As J. Delaney's comment says, the Bayesian approach already allows both the data and the parameters to be random.
I think the confusion arises because "the parameters are fixed and the data is random" is not true under the frequentist approach, and "the parameters are random and the data is fixed" is not true under the Bayesian approach either. (See the answers to this question for more details.)
What is going on? In both cases you choose a family of models, for example a $N(\mu, \sigma^2)$, which could have generated your data $X$.
In the Bayesian case, you treat $\mu$ and $\sigma^2$ as random variables and calculate their conditional distribution given your observed data $X$. In order to do this, you must choose a prior distribution for $\mu$ and $\sigma^2$. Sometimes you don't want to do this.
In the Frequentist case, you are not allowed to treat $\mu$ and $\sigma^2$ as random variables. Instead, you seek to make statements which are valid no matter what the true values of $\mu$ and $\sigma^2$ happen to be. These statements are constructed by considering what kind of data might have been generated by different values of $\mu$ and $\sigma^2$. But whatever result you get is still conditional on your observed data $X$. It's just that it's not called a conditional distribution in the frequentist case.
For example, suppose your frequentist confidence interval for $\mu$ is $[2, 3]$. Then if you had collected a different data set $X'$ on a different day, you would probably end up with a different confidence interval. Similarly, say your Bayesian credible interval for $\mu$ is $[2, 3]$. If you had collected a different data set $X'$ on a different day, you would probably end up with a different credible interval as well.
54,648 | What is non-parametric regression? | In general, this is an interesting question that comes up a lot.
I'll be the first to say "non-parametric" regression is not well-defined. You might be referred to Wasserman's text "All of Non-Parametric Statistics" which was the first seminal reference of its kind, attempting to broach the concept. The text wasn't without its issues, and I recall several of the professors in my department being deeply agitated by the material - actual mistakes, not just epistemological disagreements.
In general, to refer to something as "parametric" means that the terms in the regression model index a probability model. In Poisson regression, for instance, it's quite easy to take the design of $X$ and the estimated coefficients, and simulate responses from the results. The same is true of ordinary linear regression when it's treated like maximum likelihood of a normally distributed error term. But linear regression does not actually require normal errors. So, when we perform asymptotic inference, relying on the CLT to give us asymptotically correct CIs for the regression coefficients, we cannot say that linear regression is a parametric routine because our estimates do not, in fact, index a probability model. Whether or not "asymptotic" OLS is semi-parametric or non-parametric was an issue that not even my professors could agree on; but I'm in the non-parametric camp, if we are willing to make minimal assumptions about the existence of first and second moments.
So in my opinion there's nothing fundamentally wrong with writing down what looks like an ordinary least squares model and saying, "This is a non-parametric regression". Recall, a coefficient is only a parameter - necessitating parametric regression - if we claim to believe there's a probability model beneath it - an estimable probability model, that we know to be true, and for which our regression model provides reliable estimates of all the actual components.
In your example, the model description confirms that, while these are panel data, the authors are confident in the robustness and surfeit of data to assure us of the reliability of estimates for what appear to be a large number of fixed effects, whereas the error term has no descriptor other than being "idiosyncratic". One may only hope that this at least means these errors are independent and identically distributed - even if not, the OLS can be motivated, but, I would argue, as a semiparametric estimator.
54,649 | Who first suggested weak stationarity and strict stationarity? | It was developed by Khintchine in Korrelationstheorie der stationare stochastischen Processe, Math. Ann. 109, 604-615.
As $\rm [I]$ notes:
The second line of development began with a series of papers in 1932-1934 by the Russian mathematician Khintchine who introduced both stationary and weakly stationary stochastic processes and developed the correlation theory for weakly stationary processes [see Khintchine (1934)]. This development was important not only for time series analysis but was also one of the pioneering works in the modern theory of stochastic processes. Later, Kolmogorov (1941a) developed the geometric theory of weakly stationary time series and Cramér (1942) discovered the important spectral decomposition of weakly stationary processes…
Reference:
$\rm [I]$ The Spectral Analysis of Time Series, Lambert H. Koopmans, Academic Press, $1995, $ sec. $2.1, $ p. $29.$
54,650 | Rigorous but elementary statistics for self study | Statistical Inference by Casella and Berger is the standard textbook for this. If you’re good at some elementary real analysis (say calculus at the level of Spivak, not necessarily Rudin), you have the mathematical background to handle most of it. Some multivariable calculus might be helpful to know, but the basics of partial derivatives and multiple integrals should be well within reach if you need to self-study them to understand some sections of Casella/Berger.
54,651 | Why are superlearners used in TMLE? | Asymptotic inference (i.e., the variance of the estimator) for TMLE using influence functions requires the nuisance models--the models for the expected potential outcomes $E[Y|A,X]$ and propensity scores $E[A|X]$--to converge to the truth (i.e., for the predicted values to converge to the true values) at a certain rate.
Different models converge to the truth at desired rates under a variety of assumptions; for example, if the true propensity score model is a logistic model, maximum likelihood estimated (MLE) logistic regression converges quickly to the truth, but if the true propensity score model is a specific more complicated function, a gradient boosting machine (GBM) may converge at some rate, and the logistic regression won't converge at all. In practice, we don't know what the true model is, so we don't know if we are in a scenario where MLE logistic regression will converge at the required rate or if GBM will.
SuperLearner takes on the fastest convergence rate of its candidate models, which means that to have the best chance of approaching the required convergence rate, SuperLearner asymptotically does as well or better than any of its candidate models when the true model is unknown. Using the example above, if the true propensity score model is logistic and MLE logistic regression and GBM are used as candidate libraries for SuperLearner, then the resulting SuperLearner predictions converge to the truth at the same rate as the MLE logistic regression alone would. But if the GBM converges to the truth at a certain rate and the MLE logistic model is wrong, then that same SuperLearner as above will converge at the same rate as the GBM, even with the logistic regression included in the library. So, there is a kind of "multiple robustness" in the SuperLearner in that if any of the candidate models converge to the truth at the required rate, the SuperLearner containing those models does, too.
We know that a machine learning method called "highly adaptive lasso" (HAL) alone converges to the truth at the required rate; so technically, it is the only learner required for asymptotic inference with TMLE to be valid. But if the true propensity score or outcome model is a simple model captured well by MLE logistic regression, we would want predictions from such a model to help steer the predictions to the truth at an even faster rate than HAL would do alone in order to improve precision and arrive at approximately valid inference with a smaller sample size. Including both MLE logistic regression and HAL (and other models) would improve the performance of the resulting predictions while guaranteeing the asymptotic consistency properties imparted by HAL.
Also note that the models are cross-validated and, ideally, cross-fit, which reduces the problems of overfitting that you mention. I believe the convergence rates for a given model refer to their cross-validated rates.
I know others may be able to provide more technical explanations for this, but this is my understanding at an intuitive level of why SuperLearner is so important to TMLE (and AIPW and all other doubly-robust methods).
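For concreteness, a propensity-score SuperLearner fit in R might look roughly like the sketch below; this is an illustrative addition (not part of the original answer), the data are simulated, and the small candidate library is an arbitrary choice.
library(SuperLearner)
set.seed(1)
n <- 500
W <- data.frame(W1 = rnorm(n), W2 = rnorm(n))          # covariates
A <- rbinom(n, 1, plogis(0.4 * W$W1 - 0.3 * W$W2))     # treatment
sl_ps <- SuperLearner(Y = A, X = W, family = binomial(),
                      SL.library = c("SL.mean", "SL.glm"),
                      cvControl = list(V = 10))
sl_ps$coef               # weights given to each candidate learner
head(sl_ps$SL.predict)   # ensemble estimates of P(A = 1 | W)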
54,652 | Why are superlearners used in TMLE? | Minimizing the prediction error of nuisance parameters is (generally) the correct goal when developing efficient estimators using e.g. estimating equations or one-step estimators. To suggest this, let's rigorously focus on a case study: estimating $\mu:=E\{AY/\pi(W)\}$ in a nonparametric model, for $A$ binary and $\pi(w) = P(A=1\mid W=w)$ the propensity score.
Define $Q(a,w) = E(Y \mid A=a, W=w)$ as the outcome regression. The nonparametric efficient influence curve is $$D^*(W,A,Y; P) = \frac{A}{\pi(W)}Y - \frac{A-\pi(W)}{\pi(W)} Q(1,W) - \mu.$$ With i.i.d. sampling, an efficient estimator is $$\hat\mu_n = \frac{1}{n}\sum_{i=1}^n \left\{ \frac{A}{\hat\pi(W)}Y - \frac{A-\hat\pi(W)}{\hat\pi(W)} \hat{Q}(1,W) \right\},$$ whenever cross-fitting is used and the remainder is negligible, i.e. $$E\left[ \frac{\{\hat\pi_n(W) - \pi_0(W)\} \{\hat{Q}_n(1,W) - Q_0(1,W)\}}{\hat\pi_n(W)} \right] = o_p(n^{-1/2}).$$ Since the left hand side is upper bounded by $$\|\hat\pi_n(W) - \pi_0(W)\| \|\hat{Q}_n(1,W) - Q_0(1,W)\|,$$ whenever sample positivity holds, we can see that we should fit the nuisance parameters by ensuring the estimates are close to the truth. Further, recall that minimizing closeness to a true expectation is equivalent to minimizing a prediction error. We can for example accomplish this by minimizing the cross validated error.
Although the TMLE is a different estimator, its influence function is the same and the same considerations apply. We can see this through the following motivation.
The superlearner is nicely described by Noah. Its purpose is to find nuisance parameter estimates which ensure the above norms are as small as possible. As Noah explains, it does this by looking through several estimators and finding which to prefer. From above, we can see that the product of the rates of convergence of both estimators must be $o_p(n^{-1/2})$; since the superlearner inherits the best rate of convergence of its constituent estimators, it can be helpful here.
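Purely as an illustration of the displayed estimator (an addition, not part of the original answer), the plug-in computation of $\hat\mu_n$ from fitted nuisance models can be sketched in R as follows; the data-generating step and the plain glm fits (with no cross-fitting) are stand-ins chosen only to make the formula concrete.
set.seed(1)
n <- 5000
W <- rnorm(n)
A <- rbinom(n, 1, plogis(0.5 * W))                        # treatment given the covariate
Y <- rnorm(n, mean = 1 + A + W)                           # outcome
pi_hat <- fitted(glm(A ~ W, family = binomial))           # estimate of pi(W) = P(A = 1 | W)
Q1_hat <- predict(glm(Y ~ A + W), newdata = data.frame(A = 1, W = W))  # estimate of Q(1, W)
mu_hat <- mean(A * Y / pi_hat - (A - pi_hat) / pi_hat * Q1_hat)
mu_hat   # estimates E{AY/pi(W)}; the true value in this simulation is 2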
54,653 | How to implement Adaptive Lasso penalty for a Logistic regression in Python? | Adaptive LASSO is a two-step estimator; check out section 3.1 of Zou "The Adaptive Lasso and Its Oracle Properties" (2006). (This is the original paper that proposed adaptive LASSO.) You can implement the steps separately. Let $p$ be the number of regressors in your model.
You start with a $\sqrt{n}$-consistent estimator of $\beta=(\beta_1,\dots,\beta_p)^\top$ such as the MLE.*
For $j=1,\dots,p$, you specify $\tilde X_j$ as $\frac{X_j}{|\hat\beta_j|^\gamma}$ for some $\gamma>0$ (e.g. $\gamma=1$). You then run a standard LASSO using these modified $\tilde X$s instead of the original ones. (See Section 3.5.)
*This requires the number of regressors $p$ to be less than the sample size $n$, $p<n$. Otherwise, you need to look for another $\sqrt{n}$-consistent estimator.
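The question asks for Python, but the two-step recipe is easy to see in a short sketch; the R version below with glmnet's penalty.factor argument is an illustrative addition (not part of the original answer), and the same two steps carry over to scikit-learn by rescaling the columns of X before fitting an L1-penalised LogisticRegression.
library(glmnet)
set.seed(1)
n <- 300; p <- 8
x <- matrix(rnorm(n * p), n, p)                         # stand-in design matrix (p < n)
y <- rbinom(n, 1, plogis(x[, 1] - 0.5 * x[, 2]))        # stand-in binary response
init <- glm(y ~ x, family = binomial)                   # step 1: root-n consistent initial fit
w <- 1 / abs(coef(init)[-1])                            # adaptive weights (gamma = 1), intercept excluded
fit <- cv.glmnet(x, y, family = "binomial", alpha = 1,
                 penalty.factor = w)                    # step 2: weighted (adaptive) lasso
coef(fit, s = "lambda.min")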
54,654 | How to check the data is generated by machine or human? | Without any further information on the stipulated sampling method for the survey or the meaning of the three outcomes, any possible response could have come from humans or a machine, and there is no statistical test to check the difference.
It is only possible to test for a difference between human and machine-generated data if you are willing to impose some assumptions on what characterises each of these things. Any test you do on that basis would only be as good as your characterisation of what looks like a human or machine-generated answer. As an example, if you were to stipulate that the survey sampling was supposed to be random (e.g., simple random sampling without replacement), it would be possible to perform a test of exchangeability to check if there is evidence in the data to falsify this. If there is strong evidence for the alternative (non-exchangeability) then you might argue that this is evidence of a machine-generated answer. Contrarily, you might decide that the opposite is true --- i.e., a human-generated survey will have some departure from exchangeability, whereas a machine-generated set of data will be "too perfect". | How to check the data is generated by machine or human? | Without any further information on the stipulated sampling method for the survey or the meaning of the three outcomes, any possible response could have some from humans or a machine, and there is no s | How to check the data is generated by machine or human?
Without any further information on the stipulated sampling method for the survey or the meaning of the three outcomes, any possible response could have come from humans or a machine, and there is no statistical test to check the difference.
It is only possible to test for a difference between human and machine-generated data if you are willing to impose some assumptions on what characterises each of these things. Any test you do on that basis would only be as good as your characterisation of what looks like a human or machine-generated answer. As an example, if you were to stipulate that the survey sampling was supposed to be random (e.g., simple random sampling without replacement), it would be possible to perform a test of exchangeability to check if there is evidence in the data to falsify this. If there is strong evidence for the alternative (non-exchangeability) then you might argue that this is evidence of a machine-generated answer. Contrarily, you might decide that the opposite is true --- i.e., a human-generated survey will have some departure from exchangeability, whereas a machine-generated set of data will be "too perfect". | How to check the data is generated by machine or human?
Without any further information on the stipulated sampling method for the survey or the meaning of the three outcomes, any possible response could have some from humans or a machine, and there is no s |
54,655 | How to check the data is generated by machine or human? | There are many ways in which this could go wrong in real life, but given that it sounds like a test question for students, conceptually simplifying it is a reasonable thing to do. If the survey questions had been randomized (such that the answers (1, 2, or 3) were equally likely), this would be a situation where run length testing would be useful.
https://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm | How to check the data is generated by machine or human? | There are many ways in which this could go wrong in real life, but given that it sounds like a test question for students, conceptually simplifying it is a reasonable thing to do. If the survey questi | How to check the data is generated by machine or human?
There are many ways in which this could go wrong in real life, but given that it sounds like a test question for students, conceptually simplifying it is a reasonable thing to do. If the survey questions had been randomized (such that the answers (1, 2, or 3) were equally likely), this would be a situation where run length testing would be useful.
https://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm | How to check the data is generated by machine or human?
There are many ways in which this could go wrong in real life, but given that it sounds like a test question for students, conceptually simplifying it is a reasonable thing to do. If the survey questi |
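A small permutation version of the runs ('run length') idea mentioned above, as an illustrative sketch rather than the exact NIST procedure: under the null that the order of the recorded answers is exchangeable, shuffling the sequence gives the reference distribution for the number of runs.
import numpy as np

def runs_permutation_test(answers, n_perm=10_000, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(answers)
    n_runs = lambda s: 1 + int(np.sum(s[1:] != s[:-1]))
    observed = n_runs(x)
    perm = np.array([n_runs(rng.permutation(x)) for _ in range(n_perm)])
    # Two-sided: both suspiciously few and suspiciously many runs count as evidence.
    p_value = np.mean(np.abs(perm - perm.mean()) >= abs(observed - perm.mean()))
    return observed, p_value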
54,656 | How to check the data is generated by machine or human? | If you have labelled instances (i.e. you know if a sample is generated by machine or by human) you can train a machine learning classifier to find the pattern which separates the two classes.
Not sure if you would count a ML method as "statistical method". | How to check the data is generated by machine or human? | If you have labelled instances (i.e. you know if a sample is generated by machine or by human) you can train a machine learning classifier to find the pattern which separates the two classes.
Not sure | How to check the data is generated by machine or human?
If you have labelled instances (i.e. you know if a sample is generated by machine or by human) you can train a machine learning classifier to find the pattern which separates the two classes.
Not sure if you would count a ML method as "statistical method". | How to check the data is generated by machine or human?
If you have labelled instances (i.e. you know if a sample is generated by machine or by human) you can train a machine learning classifier to find the pattern which separates the two classes.
Not sure |
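If labelled examples really are available, as this answer assumes, one simple way to quantify how separable the two classes are is a cross-validated classifier baseline; the feature encoding and the choice of model below are assumptions made for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def machine_vs_human_separability(X, label, seed=0):
    # X: one feature row per survey (e.g. answer proportions, run counts); label: 1 = machine, 0 = human.
    clf = RandomForestClassifier(n_estimators=200, random_state=seed)
    # A cross-validated AUC clearly above 0.5 says the classifier found a separating pattern.
    return cross_val_score(clf, X, np.asarray(label), cv=5, scoring="roc_auc").mean()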
54,657 | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point? | I think you could reasonably call this an instance of context-dropping --- in the present case the conclusion is a fallacious inference from the evidence, since the inference relies on a lack of context around what is a "normal" or "big" value of radiation dosage. | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point? | I think you could reasonably call this an instance of context-dropping --- in the present case the conclusion is a fallacious inference from the evidence, since the inference relies on a lack of conte | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point?
I think you could reasonably call this an instance of context-dropping --- in the present case the conclusion is a fallacious inference from the evidence, since the inference relies on a lack of context around what is a "normal" or "big" value of radiation dosage. | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point?
I think you could reasonably call this an instance of context-dropping --- in the present case the conclusion is a fallacious inference from the evidence, since the inference relies on a lack of conte |
54,658 | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point? | One is unreasonable averaging. And I have a great real-world example. This paper Staff Memo 4/2021 from the Norwegian Central bank. (link at the bottom).
To explain unreasonable averaging, here is a thought experiment. Suppose you have two kids, one 15 and one 19 years old. You decide to stimulate your kids finances by giving them a total of 1000 dollars.
Next, give your 15 year old 1 dollar, and then give 999 dollars to your eldest child. When the younger one complains this is unfair and increases inequality between siblings, you can count on the support of the Norwegian Central bank if you answer this:
"Each child in this household has received an average of 500$ dollars each, and because the younger kids are poorer, this actually reduced inequality between siblings".
This is exactly what happens in this research document from the Norwegian Central Bank.
I'll explain exactly how it works: The paper purportedly tries to show how current central bank policies affect inequality in Norway. In the conclusion, it says that lowering interest rates and printing money to ease access to credit reduces inequality.
This is odd. The common sense approach is to use basic economic theory, which says that when Central banks print money to buy stocks directly, this stimulus increases demand and drives stock-prices up. This mostly benefits people who own stocks - which are the rich. Also, when interest rates go down, stock prices go up for much the same reason, albeit in a slightly less direct way. Such policies might benefit poor people as well as the rich, because it can help avoid job losses. But nearly all stimulus turns into more wealth for the already wealthy.
Printing money, however, runs the risk of causing inflation (which we have now). It's well known and everybody basically agrees that inflation disproportionately hurts the poor (like now). So it's clear that basic economic theory suggests that current Central Bank policies increase inequality. This is the opposite of what the Central Bank states in its conclusion. However, the economy is complex and basic economic theory isn't always correct. But if loose monetary policy decreases inequality, we should expect this paper to identify some other mechanism that works counter to what basic economic theory suggests. Then it needs to show that this is more powerful, so the net effect is to reduce inequality. What the bank did instead was to lie using statistics - by making an unreasonable average.
Here is what they did:
Chief researcher Yasin Mimir and his team grouped people together into age cohorts, and then figured out which age-cohorts was mostly affected when reducing interest rates. A rate cut mostly affects people with large mortgages, which are the young - because they haven't paid down their mortgages yet. The more debt you have the more effect the interest rate cut has on your monthly payments. Therefore, young people benefitted more (on average) from the same interest rate cuts than old people did (on average). Also, old people are generally richer than young. So when young benefit more, this contributes to making inequality go down because young are - on average - poorer.
This isn't wrong. But it's not a study of inequality; it's a study of inequality between average persons at different ages. Why did they choose to group people into age cohorts anyway? Inequality is mostly about rich vs poor, not old vs young, right? Sure, there is some trend that older people get richer - but there are plenty of affluent young people and poor old people. What is going on?
It's worse. Once you have grouped people by age, it becomes impossible to study how rich 30 year olds are affected compared to poor 30 year olds. All these differences disappear when we create the "representative household".
Representative household
When you group people into age cohorts, you need to calculate a "representative household" for each age cohort. This means you sum the wealth of all rich, middle class and poor 30-35 year olds, and compute an average wealth for this group. You now have one average wealth for each age group, which you then compare to older and younger average wealths. This is the same as assuming that everyone of the same age has the same amount of wealth, or that everyone is middle-class. This is extremely misleading. Focusing on age and not wealth as the grouping variable is unreasonable when studying inequality. Treating age as the only grouping you can make is a very serious methodological flaw.
If your research question is to study how people with different levels of wealth are affected by a policy, you shouldn't assume that everyone (of the same age) has the same amount of wealth. This is something a five year old can understand. But that is exactly what they did.
The power of the Central Bank to mislead like this is immense. Nobody wrote about this in the newspaper, because journalists can't understand even the most basic economics. Central Banks get away with this kind of stuff, and the FED and the ECB are doing the same thing. Word on the street, however, is that Central Bankers have realized how they create inequality, but they're very careful about how they talk about it because they are beginning to worry that if people understand, they might lose credibility.
This is the link - read it yourself:
https://www.norges-bank.no/en/news-events/news-publications/Papers/Staff-Memo/2021/sm-4-2021/ | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point? | One is unreasonable averaging. And I have a great real-world example. This paper Staff Memo 4/2021 from the Norwegian Central bank. (link at the bottom).
To explain unreasonable averaging, here is a t | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point?
One is unreasonable averaging. And I have a great real-world example. This paper Staff Memo 4/2021 from the Norwegian Central bank. (link at the bottom).
To explain unreasonable averaging, here is a thought experiment. Suppose you have two kids, one 15 and one 19 years old. You decide to stimulate your kids finances by giving them a total of 1000 dollars.
Next, give your 15 year old 1 dollar, and then give 999 dollars to your eldest child. When the younger one complains this is unfair and increases inequality between siblings, you can count on the support of the Norwegian Central bank if you answer this:
"Each child in this household has received an average of 500$ dollars each, and because the younger kids are poorer, this actually reduced inequality between siblings".
This is exactly what happens in this research document from the Norwegian Central Bank.
I'll explain exactly how it works: The paper purportedly tries to show how current central bank policies affect inequality in Norway. In the conclusion, it says that lowering interest rates and printing money to ease access to credit reduces inequality.
This is odd. The common sense approach is to use basic economic theory, which says that when Central banks print money to buy stocks directly, this stimulus increases demand and drives stock-prices up. This mostly benefits people who own stocks - which are the rich. Also, when interest rates go down, stock prices go up for much the same reason, albeit in a slightly less direct way. Such policies might benefit poor people as well as the rich, because it can help avoid job losses. But nearly all stimulus turns into more wealth for the already wealthy.
Printing money, however, runs the risk of causing inflation (which we have now). It's well known and everybody basically agrees that inflation disproportionately hurts the poor (like now). So it's clear that basic economic theory suggests that current Central Bank policies increase inequality. This is the opposite of what the Central Bank states in its conclusion. However, the economy is complex and basic economic theory isn't always correct. But if loose monetary policy decreases inequality, we should expect this paper to identify some other mechanism that works counter to what basic economic theory suggests. Then it needs to show that this is more powerful, so the net effect is to reduce inequality. What the bank did instead was to lie using statistics - by making an unreasonable average.
Here is what they did:
Chief researcher Yasin Mimir and his team grouped people together into age cohorts, and then figured out which age-cohorts was mostly affected when reducing interest rates. A rate cut mostly affects people with large mortgages, which are the young - because they haven't paid down their mortgages yet. The more debt you have the more effect the interest rate cut has on your monthly payments. Therefore, young people benefitted more (on average) from the same interest rate cuts than old people did (on average). Also, old people are generally richer than young. So when young benefit more, this contributes to making inequality go down because young are - on average - poorer.
This isn't wrong. But it's not a study of inequality; it's a study of inequality between average persons at different ages. Why did they choose to group people into age cohorts anyway? Inequality is mostly about rich vs poor, not old vs young, right? Sure, there is some trend that older people get richer - but there are plenty of affluent young people and poor old people. What is going on?
It's worse. Once you have grouped people by age, it becomes impossible to study how rich 30 year olds are affected compared to poor 30 year olds. All these differences disappear when we create the "representative household".
Representative household
When you group people into age cohorts, you need to calculate a "representative household" for each age cohort. This means you sum the wealth of all rich, middle class and poor 30-35 year olds, and compute an average wealth for this group. You now have one average wealth for each age group, which you then compare to older and younger average wealths. This is the same as assuming that everyone of the same age has the same amount of wealth, or that everyone is middle-class. This is extremely misleading. Focusing on age and not wealth as the grouping variable is unreasonable when studying inequality. Treating age as the only grouping you can make is a very serious methodological flaw.
If your research question is to study how people with different levels of wealth are affected by a policy, you shouldn't assume that everyone (of the same age) has the same amount of wealth. This is something a five year old can understand. But that is exactly what they did.
The power of the Central Bank to mislead like this is immense. Nobody wrote about this in the newspaper, because journalists can't understand even the most basic economics. Central Banks get away with this kind of stuff, and the FED and the ECB are doing the same thing. Word on the street, however, is that Central Bankers have realized how they create inequality, but they're very careful about how they talk about it because they are beginning to worry that if people understand, they might lose credibility.
This is the link - read it yourself:
https://www.norges-bank.no/en/news-events/news-publications/Papers/Staff-Memo/2021/sm-4-2021/ | Is there a name for a logical fallacy that uses irrelevant or unfamiliar statistics to make a point?
One is unreasonable averaging. And I have a great real-world example. This paper Staff Memo 4/2021 from the Norwegian Central bank. (link at the bottom).
To explain unreasonable averaging, here is a t |
54,659 | Where does the Logistic Distribution get its name? | The cumulative distribution function of the logistic distribution is the logistic function
$$
F(x) = \frac{1}{1+e^{-(x-\mu)/s}}
$$
For an explanation of where the logistic function got its name, check the What does the name "Logistic Regression" mean? thread. | Where does the Logistic Distribution get its name? | The cumulative distribution function of the logistic distribution is the logistic function
$$
F(x) = \frac{1}{1+e^{-(x-\mu)/s}}
$$
For an explanation of where the logistic function got its name, check | Where does the Logistic Distribution get its name?
The cumulative distribution function of the logistic distribution is the logistic function
$$
F(x) = \frac{1}{1+e^{-(x-\mu)/s}}
$$
For an explanation of where the logistic function got its name, check the What does the name "Logistic Regression" mean? thread. | Where does the Logistic Distribution get its name?
The cumulative distribution function of the logistic distribution is the logistic function
$$
F(x) = \frac{1}{1+e^{-(x-\mu)/s}}
$$
For an explanation of where the logistic function got its name, check |
54,660 | Basic understanding of control variables in observational studies | Factors whose only connection with the considered variables is that they influence only the dependent variable (in particular, they have no connection with the independent variable) will not cause bias in your results, so they don't have to be controlled. However, they could improve the precision. This paper is a good introduction; in particular, consider model eight therein.
But note how difficult it is to assess influence. E.g. in your example, it might very well be that the temperature is influencing the age because maybe older people will only run at lower temperatures. If, in addition, people run in general faster at lower temperatures, the temperature will be a confounder.
Edit:
Because of the discussion in the comments, I would like to describe how you convince yourself whether there is a causal influence (causal effect) from temperature to age. Imagine you could create lots of experiments (marathons) and that you could arbitrarily set the temperature, i.e. decide about the marathon temperature without being influenced by anything else in the universe. But, while you remove all influence on temperature (removing all "incoming arrows"), you still allow the temperature to influence other variables as it did before (leave all "outgoing arrows" alone). That is an intervention. Then, would there be a stochastic dependence between the age and the temperature? I.e. would there be different probability distributions over age for different values of the temperature? I think so because older people will not run in high temperatures. That is what it means that temperature has a total causal effect on age in this scenario.
Furthermore, this would not work the other way around: If you intervened on the age of the runners, e.g. by simply forbidding older people to run, doing so would not change the temperature. I.e. age is not a cause for temperature in this scenario.
In the same way, intervening on the temperature would still show a dependency between temperature and runtime, while intervening on the runtime, e.g. by just stopping runners for an hour, would not change the temperature. Thus, the temperature is a cause for the runtime but runtime is not a cause for temperature.
Thus, the temperature is a confounder of age and runtime. And confounders are variables that you need to control. | Basic understanding of control variables in observational studies | Factors, whose only connection with the considered variables is that they influence only the dependent variable, in particular, have no connection with the independent variable, will not cause bias to | Basic understanding of control variables in observational studies
Factors whose only connection with the considered variables is that they influence only the dependent variable (in particular, they have no connection with the independent variable) will not cause bias in your results, so they don't have to be controlled. However, they could improve the precision. This paper is a good introduction; in particular, consider model eight therein.
But note how difficult it is to assess influence. E.g. in your example, it might very well be that the temperature is influencing the age because maybe older people will only run at lower temperatures. If, in addition, people run in general faster at lower temperatures, the temperature will be a confounder.
Edit:
Because of the discussion in the comments, I would like to describe how you convince yourself whether there is a causal influence (causal effect) from temperature to age. Imagine you could create lots of experiments (marathons) and that you could arbitrarily set the temperature, i.e. decide about the marathon temperature without being influenced by anything else in the universe. But, while you remove all influence on temperature (removing all "incoming arrows"), you still allow the temperature to influence other variables as it did before (leave all "outgoing arrows" alone). That is an intervention. Then, would there be a stochastic dependence between the age and the temperature? I.e. would there be different probability distributions over age for different values of the temperature? I think so because older people will not run in high temperatures. That is what it means that temperature has a total causal effect on age in this scenario.
Furthermore, this would not work the other way around: If you intervened on the age of the runners, e.g. by simply forbidding older people to run, doing so would not change the temperature. I.e. age is not a cause for temperature in this scenario.
In the same way, intervening on the temperature would still show a dependency between temperature and runtime, while intervening on the runtime, e.g. by just stopping runners for an hour, would not change the temperature. Thus, the temperature is a cause for the runtime but runtime is not a cause for temperature.
Thus, the temperature is a confounder of age and runtime. And confounders are variables that you need to control. | Basic understanding of control variables in observational studies
Factors, whose only connection with the considered variables is that they influence only the dependent variable, in particular, have no connection with the independent variable, will not cause bias to |
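A tiny simulation of the marathon scenario described in this answer, with invented numbers, to make the confounding bias concrete.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
temperature = rng.normal(size=n)                                   # standardised race-day temperature
age = 45 - 5 * temperature + rng.normal(scale=5, size=n)           # older runners avoid hot races
runtime = 200 + 1.0 * age + 10 * temperature + rng.normal(scale=10, size=n)  # true age effect = 1.0

# Omitting the confounder: the age slope soaks up part of the temperature effect and is badly biased.
b_naive = np.polyfit(age, runtime, 1)[0]

# Controlling for temperature approximately recovers the true coefficient.
X = np.column_stack([np.ones(n), age, temperature])
b_adjusted = np.linalg.lstsq(X, runtime, rcond=None)[0][1]
print(round(b_naive, 2), round(b_adjusted, 2))                     # roughly 0.0 versus 1.0 here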
54,661 | Basic understanding of control variables in observational studies | Yes, both are right. Basically, the idea is that if you omit an important variable in your analysis, then you are going to miss some predictive power. In addition, if the variable you omitted is correlated to some of your independent variables, then your estimates for the coefficients of these variables are going to be biased, and might cause trouble for the interpretation of your analysis. See here.
EDIT:
As @frank said below, if your omitted variable (temperature) is uncorrelated to your independent variable (age), then omitting it should create no bias in your estimate of the coefficient $b_\text{Age}$. However, including it could still improve the accuracy of your model, if the effect of temperature on the performance is significant. This is why I highlighted "in addition" above. Adding temperature as a predictor could be useful even if it is uncorrelated to your independent variables.
Regarding your second question, I think that the example is indeed quite strange. In this case, additional variables like "Net wealth", "Widowed?" or "Health measure" could definitely refine the measure of the effect of age on happiness. Yes, they have no causality on age, but age certainly has some forms of causality on them, potentially creating correlation (certainly in this example). So adding them as predictors would definitely be a good idea in my opinion. I think that this is because the article seems to focus on "causal models". Maybe they consider that since age changes by itself without external influence, then they want to consider that the additional effects of the variables I imagined above should be "included" in the global effect of age, since age has a causal impact on them.
Anyway, I suggest you focus on the answers @frank and I have provided for your example with marathon. | Basic understanding of control variables in observational studies | Yes, both are right. Basically, the idea is that if you omit an important variable in your analysis, then you are going to miss some predictive power. In addition, if the variable you omitted is corre | Basic understanding of control variables in observational studies
Yes, both are right. Basically, the idea is that if you omit an important variable in your analysis, then you are going to miss some predictive power. In addition, if the variable you omitted is correlated to some of your independent variables, then your estimates for the coefficients of these variables are going to be biased, and might cause trouble for the interpretation of your analysis. See here.
EDIT:
As @frank said below, if your omitted variable (temperature) is uncorrelated to your independent variable (age), then omitting it should create no bias in your estimate of the coefficient $b_\text{Age}$. However, including it could still improve the accuracy of your model, if the effect of temperature on the performance is significant. This is why I highlighted "in addition" above. Adding temperature as a predictor could be useful even if it is uncorrelated to your independent variables.
Regarding your second question, I think that the example is indeed quite strange. In this case, additional variables like "Net wealth", "Widowed?" or "Health measure" could definitely refine the measure of the effect of age on happiness. Yes, they have no causality on age, but age certainly has some forms of causality on them, potentially creating correlation (certainly in this example). So adding them as predictors would definitely be a good idea in my opinion. I think that this is because the article seems to focus on "causal models". Maybe they consider that since age changes by itself without external influence, then they want to consider that the additional effects of the variables I imagined above should be "included" in the global effect of age, since age has a causal impact on them.
Anyway, I suggest you focus on the answers @frank and I have provided for your example with marathon. | Basic understanding of control variables in observational studies
Yes, both are right. Basically, the idea is that if you omit an important variable in your analysis, then you are going to miss some predictive power. In addition, if the variable you omitted is corre |
54,662 | Solving Poisson probability problem using only other known probabilities | Since $X_t\sim\text{Pois}(\lambda t)$, $$\mathbb{P}(X_t=k)=e^{-\lambda t}\frac{\left(\lambda t\right)^k}{k!}$$
As a consequence, you can develop expressions of your values $\mathbb{P}(X_1=1)$ and $\mathbb{P}(X_2=3)$ as functions of $\lambda$. Then, you can find a couple of ways to use these 2 formulas and the numerical values you know in order to determine the value of $\lambda$.
Hint:
Determine the formula for $\mathbb{P}(X_1=1)^2$. | Solving Poisson probability problem using only other known probabilities | Since $X_t\sim\text{Pois}(\lambda t)$, $$\mathbb{P}(X_t=k)=e^{-\lambda t}\frac{\left(\lambda t\right)^k}{k!}$$
As a consequence, you can develop expressions of your values $\mathbb{P}(X_1=1)$ and $\ma | Solving Poisson probability problem using only other known probabilities
Since $X_t\sim\text{Pois}(\lambda t)$, $$\mathbb{P}(X_t=k)=e^{-\lambda t}\frac{\left(\lambda t\right)^k}{k!}$$
As a consequence, you can develop expressions of your values $\mathbb{P}(X_1=1)$ and $\mathbb{P}(X_2=3)$ as functions of $\lambda$. Then, you can find a couple of ways to use these 2 formulas and the numerical values you know in order to determine the value of $\lambda$.
Hint:
Determine the formula for $\mathbb{P}(X_1=1)^2$. | Solving Poisson probability problem using only other known probabilities
Since $X_t\sim\text{Pois}(\lambda t)$, $$\mathbb{P}(X_t=k)=e^{-\lambda t}\frac{\left(\lambda t\right)^k}{k!}$$
As a consequence, you can develop expressions of your values $\mathbb{P}(X_1=1)$ and $\ma |
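Working the hint out explicitly (standard algebra filled in here, not part of the original answer):
$$\mathbb{P}(X_1=1)=\lambda e^{-\lambda},\qquad \mathbb{P}(X_1=1)^2=\lambda^2 e^{-2\lambda},\qquad \mathbb{P}(X_2=3)=e^{-2\lambda}\frac{(2\lambda)^3}{3!}=\frac{4}{3}\lambda^3 e^{-2\lambda},$$
so the two known probabilities determine the rate via
$$\frac{\mathbb{P}(X_2=3)}{\mathbb{P}(X_1=1)^2}=\frac{4}{3}\lambda \quad\Longrightarrow\quad \lambda=\frac{3\,\mathbb{P}(X_2=3)}{4\,\mathbb{P}(X_1=1)^2}.$$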
54,663 | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | You could consider "post-selection inference" methods. This is a collection of approaches to getting p-values (or confidence intervals) for a selected model's parameters, designed to be valid given that you ran model-selection first instead of using a model chosen a priori. In your case, the "model selection" consists of using your dataset to choose a temperature changepoint, before you use the same dataset to test for statistical significance of this changepoint.
Take a look at Hyun et al. (2020), "Post-selection inference for changepoint detection algorithms with application to copy number variation data," Biometrics:
https://onlinelibrary.wiley.com/doi/10.1111/biom.13422
Preprint version:
https://www.stat.cmu.edu/~ryantibs/papers/binseginf.pdf
The basic idea, as I understand it: Find the temperature changepoint on your real dataset. Compute a test statistic for comparing the means of the response-variable, below vs above this estimated changepoint. Then, when you compare this test statistic against a null distribution... do not build this null distribution from all datasets where the null hypothesis of no-difference-in-means is true. Instead, build the null distribution only using those null datasets that also would have chosen the same changepoint as your real dataset did.
The clever math for how to do this is in the paper, and they also provide associated R software:
https://github.com/sangwon-hyun/binseginf/
Hyun et al. also mention a much simpler data-splitting approach. (It has less power, but might be good enough if your dataset is large.) First sort your dataset by temperature, and split it in half in alternating order: all the odd indices vs all the even indices. Then simply estimate your changepoint on one dataset, and use a usual t-test on the other dataset. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | You could consider "post-selection inference" methods. This is a collection of approaches to getting p-values (or confidence intervals) for a selected model's parameters, designed to be valid given th | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
You could consider "post-selection inference" methods. This is a collection of approaches to getting p-values (or confidence intervals) for a selected model's parameters, designed to be valid given that you ran model-selection first instead of using a model chosen a priori. In your case, the "model selection" consists of using your dataset to choose a temperature changepoint, before you use the same dataset to test for statistical significance of this changepoint.
Take a look at Hyun et al. (2020), "Post-selection inference for changepoint detection algorithms with application to copy number variation data," Biometrics:
https://onlinelibrary.wiley.com/doi/10.1111/biom.13422
Preprint version:
https://www.stat.cmu.edu/~ryantibs/papers/binseginf.pdf
The basic idea, as I understand it: Find the temperature changepoint on your real dataset. Compute a test statistic for comparing the means of the response-variable, below vs above this estimated changepoint. Then, when you compare this test statistic against a null distribution... do not build this null distribution from all datasets where the null hypothesis of no-difference-in-means is true. Instead, build the null distribution only using those null datasets that also would have chosen the same changepoint as your real dataset did.
The clever math for how to do this is in the paper, and they also provide associated R software:
https://github.com/sangwon-hyun/binseginf/
Hyun et al. also mention a much simpler data-splitting approach. (It has less power, but might be good enough if your dataset is large.) First sort your dataset by temperature, and split it in half in alternating order: all the odd indices vs all the even indices. Then simply estimate your changepoint on one dataset, and use a usual t-test on the other dataset. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
You could consider "post-selection inference" methods. This is a collection of approaches to getting p-values (or confidence intervals) for a selected model's parameters, designed to be valid given th |
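A sketch of the simpler data-splitting idea from the last paragraph, assuming numpy and scipy; the split rule and the names are illustrative.
import numpy as np
from scipy import stats

def split_changepoint_test(temperature, y):
    order = np.argsort(temperature)
    t, v = np.asarray(temperature)[order], np.asarray(y)[order]
    t_a, v_a = t[0::2], v[0::2]      # half A: used only to choose the threshold
    t_b, v_b = t[1::2], v[1::2]      # half B: used only for the test
    sse = lambda s: float(((s - s.mean()) ** 2).sum())
    k = min(range(1, len(v_a)), key=lambda i: sse(v_a[:i]) + sse(v_a[i:]))
    threshold = t_a[k]
    below, above = v_b[t_b <= threshold], v_b[t_b > threshold]
    # Ordinary Welch t-test on the untouched half, so no selection effect enters the p-value.
    return threshold, stats.ttest_ind(below, above, equal_var=False)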
54,664 | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Just in case a Bayesian approach is of interest, there are some potential tools readily available in R or Python for use. The one probably most appropriate for your case is the mcp package--regression with multiple changepoints. Another popular one is bcp. But here I will choose a package Rbeast (https://github.com/zhaokg/Rbeast) most familiar to me (bcz I wrote it) to illustrate the ideas using the sample data from @user4422:
set.seed(1)
n_obs <- 4000
temperatures <- rnorm(n_obs)
random_numbers <- rnorm(n_obs)
temperature_threshold <- 1.5
increase_if_above_threshold <- 0.5
random_numbers[temperatures>temperature_threshold]=random_numbers[temperatures>temperature_threshold] + increase_if_above_threshold
df <- data.frame(random_numbers, temperatures)
df <- df[order(df$temperature),]
library(Rbeast)
out = beast(
df$random_numbers,
season='none', # only consider a trend component
torder.minmax=c(0,0), # the piecewise trend is modelled as flat lines (i.e., the min and max polynomial orders are both 0)
tcp.minmax =c(0,1) # as in your case, the possible range of number of changepoints are 0 to 1.
);
plot(out, vars=c("y","t","tcp"), main='Posterior probability of changepoint in the mean trend')
In the 'prob' subplot below is a curve of probability of changepoint occurrence. Apparently, peaks correspond to locations where changepoints occur.
Print out some additional posterior statistics: the posterior probability of observing a mean shift at 3748 is 0.845.
print(out)
#####################################################################
# Trend Changepoints #
#####################################################################
.-------------------------------------------------------------------.
| Ascii plot of probability distribution for number of chgpts (ncp) |
.-------------------------------------------------------------------.
|Pr(ncp = 0 )=0.009|* |
|Pr(ncp = 1 )=0.991|*********************************************** |
.-------------------------------------------------------------------.
| Summary for number of Trend ChangePoints (tcp) |
.-------------------------------------------------------------------.
|ncp_max = 1 | MaxTrendKnotNum: A parameter you set |
|ncp_mode = 1 | Pr(ncp= 1)=0.99: There is a 99.1% probability |
| | that the trend component has 1 changepoint(s).|
|ncp_mean = 0.99 | Sum{ncp*Pr(ncp)} for ncp = 0,...,1 |
|ncp_pct10 = 1.00 | 10% percentile for number of changepoints |
|ncp_median = 1.00 | 50% percentile: Median number of changepoints |
|ncp_pct90 = 1.00 | 90% percentile for number of changepoints |
.-------------------------------------------------------------------.
| List of probable trend changepoints ranked by probability of |
| occurrence: Please combine the ncp reported above to determine |
| which changepoints below are practically meaningful |
'-------------------------------------------------------------------'
|tcp# |time (cp) |prob(cpPr) |
|------------------|---------------------------|--------------------|
|1 |3748.000000 |0.84592 |
.-------------------------------------------------------------------.
Of the two possible signal structures--no changepoint (i.e., number of changepoints (ncp) = 0) vs 1 changepoint (ncp = 1), there is strong evidence supporting ncp = 1 (the posterior probability is 0.991). You can also plot it out using
barplot(out$trend$ncpPr, names.arg = c('number of changepoints=0','number of changepoints=1')) | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Just in case a Bayesian approach is of interest, there are some potential tools readily available in R or Python for use. The one probably most appropriate for your case is the mcp package--regression | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Just in case a Bayesian approach is of interest, there are some potential tools readily available in R or Python for use. The one probably most appropriate for your case is the mcp package--regression with multiple changepoints. Another popular one is bcp. But here I will choose a package Rbeast (https://github.com/zhaokg/Rbeast) most familiar to me (bcz I wrote it) to illustrate the ideas using the sample data from @user4422:
set.seed(1)
n_obs <- 4000
temperatures <- rnorm(n_obs)
random_numbers <- rnorm(n_obs)
temperature_threshold <- 1.5
increase_if_above_threshold <- 0.5
random_numbers[temperatures>temperature_threshold]=random_numbers[temperatures>temperature_threshold] + increase_if_above_threshold
df <- data.frame(random_numbers, temperatures)
df <- df[order(df$temperature),]
library(Rbeast)
out = beast(
df$random_numbers,
season='none', # only consider a trend component
torder.minmax=c(0,0), # the piecewise trend is modelled as flat lines (i.e., the min and max polynomial orders are both 0)
tcp.minmax =c(0,1) # as in your case, the possible range of number of changepoints are 0 to 1.
);
plot(out, vars=c("y","t","tcp"), main='Posterior probability of changepoint in the mean trend')
In the 'prob' subplot below is a curve of probability of changepoint occurrence. Apparently, peaks correspond to locations where changepoints occur.
Print out some additional posterior statistics: the posterior probability of observing a mean shift at 3748 is 0.845.
print(out)
#####################################################################
# Trend Changepoints #
#####################################################################
.-------------------------------------------------------------------.
| Ascii plot of probability distribution for number of chgpts (ncp) |
.-------------------------------------------------------------------.
|Pr(ncp = 0 )=0.009|* |
|Pr(ncp = 1 )=0.991|*********************************************** |
.-------------------------------------------------------------------.
| Summary for number of Trend ChangePoints (tcp) |
.-------------------------------------------------------------------.
|ncp_max = 1 | MaxTrendKnotNum: A parameter you set |
|ncp_mode = 1 | Pr(ncp= 1)=0.99: There is a 99.1% probability |
| | that the trend component has 1 changepoint(s).|
|ncp_mean = 0.99 | Sum{ncp*Pr(ncp)} for ncp = 0,...,1 |
|ncp_pct10 = 1.00 | 10% percentile for number of changepoints |
|ncp_median = 1.00 | 50% percentile: Median number of changepoints |
|ncp_pct90 = 1.00 | 90% percentile for number of changepoints |
.-------------------------------------------------------------------.
| List of probable trend changepoints ranked by probability of |
| occurrence: Please combine the ncp reported above to determine |
| which changepoints below are practically meaningful |
'-------------------------------------------------------------------'
|tcp# |time (cp) |prob(cpPr) |
|------------------|---------------------------|--------------------|
|1 |3748.000000 |0.84592 |
.-------------------------------------------------------------------.
Of the two possible signal structures--no changepoint (i.e., number of changepoints (ncp) = 0) vs 1 changepoint (ncp = 1), there is strong evidence supporting ncp = 1 (the posterior probability is 0.991). You can also plot it out using
barplot(out$trend$ncpPr, names.arg = c('number of changepoints=0','number of changepoints=1')) | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Just in case a Bayesian approach is of interest, there are some potential tools readily available in R or Python for use. The one probably most appropriate for your case is the mcp package--regression |
54,665 | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Use the temperatures to sort your data (from low to high temperatures). Then, run a test for structural breaks (e.g. cusum, or Bai-Perron) on the random numbers (as if they were a time series).
If the software you use to run the structural break tests asks you to frame your model as a regression model, then make it a regression of the random numbers on a constant only.
Under the null hypothesis that temperatures have no effect, which you want to test, the sorting has no impact on the distribution of the test statistic. Therefore any test for structural breaks provides valid inferences.
Both cusum and Bai-Perron (mentioned above) detect the breakpoint (in your case the temperature threshold) automatically, so that you do not have to pick it with a separate optimization (that would affect the distribution of the test statistic).
Here is the R code for breakpoint detection with the Bai and Perron method (apologies, I am not very familiar with R). In this example, the breakpoint is detected almost perfectly by the algorithm.
n_obs <- 4000
temperature_threshold <- 1.5
increase_if_above_threshold <- 0.5
library(strucchange)
set.seed(1)
random_numbers <- rnorm(n_obs)
temperatures <- rnorm(n_obs)
for (j in 1:n_obs) {
if (temperatures[j] > temperature_threshold) {
random_numbers[j] <- random_numbers[j] + increase_if_above_threshold
}
}
df <- data.frame(random_numbers, temperatures)
df <- df[order(df$temperature),]
row.names(df) <- NULL
results <- breakpoints(df$random_numbers ~ 1, 0.01, 1)
estimated_threshold <- df$temperatures[results$breakpoints]
print(paste0("Threshold estimated with Bai and Perron's method: ", estimated_threshold))
And here is the code for testing the null hypothesis of absence of breakpoints:
f_statistics = Fstats(df$random_numbers ~ 1, 0.01)
test_H0_no_breakpoints = sctest(f_statistics, "supF")
print(paste0("P-value for the null hypothesis of no breakpoints: ", test_H0_no_breakpoints$p.value))
P.S.: the idea of using methods for breakpoint detection in time series to tackle non-linearities in non-time-series data is not new. It goes back at least to West, M. and J. Harrison (1997), Bayesian forecasting and dynamic models, Second Edition, Springer Verlag, New York. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Use the temperatures to sort your data (from low to high temperatures). Then, run a test for structural breaks (e.g. cusum, or Bai-Perron) on the random numbers (as if they were a time series).
If the | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Use the temperatures to sort your data (from low to high temperatures). Then, run a test for structural breaks (e.g. cusum, or Bai-Perron) on the random numbers (as if they were a time series).
If the software you use to run the structural break tests asks you to frame your model as a regression model, then make it a regression of the random numbers on a constant only.
Under the null hypothesis that temperatures have no effect, which you want to test, the sorting has no impact on the distribution of the test statistic. Therefore any test for structural breaks provides valid inferences.
Both cusum and Bai-Perron (mentioned above) detect the breakpoint (in your case the temperature threshold) automatically, so that you do not have to pick it with a separate optimization (that would affect the distribution of the test statistic).
Here is the R code for breakpoint detection with the Bai and Perron method (apologies, I am not very familiar with R). In this example, the breakpoint is detected almost perfectly by the algorithm.
n_obs <- 4000
temperature_threshold <- 1.5
increase_if_above_threshold <- 0.5
library(strucchange)
set.seed(1)
random_numbers <- rnorm(n_obs)
temperatures <- rnorm(n_obs)
for (j in 1:n_obs) {
if (temperatures[j] > temperature_threshold) {
random_numbers[j] <- random_numbers[j] + increase_if_above_threshold
}
}
df <- data.frame(random_numbers, temperatures)
df <- df[order(df$temperature),]
row.names(df) <- NULL
results <- breakpoints(df$random_numbers ~ 1, 0.01, 1)
estimated_threshold <- df$temperatures[results$breakpoints]
print(paste0("Threshold estimated with Bai and Perron's method: ", estimated_threshold))
And here is the code for testing the null hypothesis of absence of breakpoints:
f_statistics = Fstats(df$random_numbers ~ 1, 0.01)
test_H0_no_breakpoints = sctest(f_statistics, "supF")
print(paste0("P-value for the null hypothesis of no breakpoints: ", test_H0_no_breakpoints$p.value))
P.S.: the idea of using methods for breakpoint detection in time series to tackle non-linearities in non-time-series data is not new. It goes back at least to West, M. and J. Harrison (1997), Bayesian forecasting and dynamic models, Second Edition, Springer Verlag, New York. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Use the temperatures to sort your data (from low to high temperatures). Then, run a test for structural breaks (e.g. cusum, or Bai-Perron) on the random numbers (as if they were a time series).
If the |
54,666 | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Your question involves two tasks, estimation and significance testing.
In both cases you have uncertainties about the model (like no specification for the distribution of the "random number generator").
A way to resolve this is to use general methods. These methods work for different types of distributions, but they may not be the most efficient. (For instance, knowing more about the problem might make you choose a better cost function for training the model during fitting, such that the parameter estimates have a smaller error with higher probability.)
In your case you could use isotonic regression or regression trees (restricted to a single split) to estimate the parameters and bootstrapping to determine significance.
Estimation of parameters
You can use the least squares method. Fit the model such that the square of the residuals is minimised. This method is consistent: you can make your estimate as precise as desired by adding more data.
More specifically your model can be fitted with isotonic regression or regression trees with a square loss function, and that are restricted to having a single split. For this there are many software libraries available that can help you to perform the regression.
Significance
If you do not know the probability distribution of the observations, then you can not exactly compute the significance. This is because you have no information about the theoretical distribution for the performance of the estimates.
One method would be to use ANOVA (Analysis of variance) to compare the difference between the null model (there's no jump and the mean is independent of temperature) and the alternative model (the mean jumps above some threshold temperature).
If instead of not knowing the distribution, you would know that the distribution of the observations is Gaussian, then you could express the significance by approximating the result of the ANOVA as F-distributed. On the other hand when the distribution is not Gaussian, then possibly you might use the Gaussian distribution as an estimate, or if you suspect a large discrepancy in the distribution then you could use bootstrapping to estimate the distribution. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value? | Your question involves two tasks, estimation and significance testing.
In both cases you have uncertainties about the model (like no specification for the distribution of the "random number generator" | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Your question involves two tasks, estimation and significance testing.
In both cases you have uncertainties about the model (like no specification for the distribution of the "random number generator").
A way to resolve this is to use general methods. These methods work for different types of distributions, but they may not be the most efficient. (For instance, knowing more about the problem might make you choose a better cost function for training the model during fitting, such that the parameter estimates have a smaller error with higher probability.)
In your case you could use isotonic regression or regression trees (restricted to a single split) to estimate the parameters and bootstrapping to determine significance.
Estimation of parameters
You can use the least squares method. Fit the model such that the square of the residuals is minimised. This method is consistent: you can make your estimate as precise as desired by adding more data.
More specifically your model can be fitted with isotonic regression or regression trees with a square loss function, and that are restricted to having a single split. For this there are many software libraries available that can help you to perform the regression.
Significance
If you do not know the probability distribution of the observations, then you can not exactly compute the significance. This is because you have no information about the theoretical distribution for the performance of the estimates.
One method would be to use ANOVA (Analysis of variance) to compare the difference between the null model (there's no jump and the mean is independent of temperature) and the alternative model (the mean jumps above some threshold temperature).
If instead of not knowing the distribution, you would know that the distribution of the observations is Gaussian, then you could express the significance by approximating the result of the ANOVA as F-distributed. On the other hand when the distribution is not Gaussian, then possibly you might use the Gaussian distribution as an estimate, or if you suspect a large discrepancy in the distribution then you could use bootstrapping to estimate the distribution. | How to prove statistically that mean jumps when a parameter crosses an unknown threshold value?
Your question involves two tasks, estimation and significance testing.
In both cases you have uncertainties about the model (like no specification for the distribution of the "random number generator" |
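A sketch of the 'regression tree restricted to a single split' route, assuming scikit-learn; the answer suggests bootstrapping for significance, and the closely related permutation check below is used here as a simple stand-in.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def single_split_fit(temperature, y):
    # Least-squares fit of the one-jump model: a depth-1 regression tree.
    X = np.asarray(temperature, float).reshape(-1, 1)
    tree = DecisionTreeRegressor(max_depth=1).fit(X, np.asarray(y, float)).tree_
    threshold = tree.threshold[0]
    jump = tree.value[tree.children_right[0]].ravel()[0] - tree.value[tree.children_left[0]].ravel()[0]
    return threshold, jump

def permutation_p_value(temperature, y, n_perm=2_000, seed=0):
    rng = np.random.default_rng(seed)
    observed = abs(single_split_fit(temperature, y)[1])
    null = [abs(single_split_fit(temperature, rng.permutation(np.asarray(y)))[1]) for _ in range(n_perm)]
    return float(np.mean(np.array(null) >= observed))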
54,667 | What is the meaning of the variance in a prior that represents L2 regularization? | The basic idea of the prior is that it describes your knowledge about the weights before you get to know the observations (the data). Thus, a prior with a mean equal to zero and a small prior variance keeps the parameters near zero, i.e. it is used as regularization. Alternatively, a large prior variance means that you don't have much knowledge about the weights and the main source of your information is your data. If there is no prior knowledge about the weights, people use uninformative priors, which don't contain information. This could go as far as using improper priors, which are not even proper densities anymore but which can still be used for computation as far as the purposes of Bayesian inference are concerned.
Note that the priors are not the only source of the "noise of the weights", i.e. the variance of the posterior; it is also very much determined by the data.
Furthermore, what is really useful about priors, is that you can learn them, too. You can make (some of) the hyperparameters, that describe the prior, random variables as well, creating a hierarchical model, and also infer them from the data. That way, you could e.g. even create priors that help you learn sparse models, as in the Relevance Vector Machine.
Thus, while the basic purpose of the prior is to convey prior knowledge about your weights, they can have additional purpose if they are learned, too. | What is the meaning of the variance in a prior that represents L2 regularization? | The basic idea of the prior is, that it describes your knowledge about the weights before you get to know the observations (the data). Thus, a prior with a mean equal to zero and a small prior varianc | What is the meaning of the variance in a prior that represents L2 regularization?
The basic idea of the prior is that it describes your knowledge about the weights before you get to know the observations (the data). Thus, a prior with a mean equal to zero and a small prior variance keeps the parameters near zero, i.e. it is used as regularization. Alternatively, a large prior variance means that you don't have much knowledge about the weights and the main source of your information is your data. If there is no prior knowledge about the weights, people use uninformative priors, which don't contain information. This could go as far as using improper priors, which are not even proper densities anymore but which can still be used for computation as far as the purposes of Bayesian inference are concerned.
Note that the priors are not the only source of the "noise of the weights", i.e. the variance of the posterior; it is also very much determined by the data.
Furthermore, what is really useful about priors, is that you can learn them, too. You can make (some of) the hyperparameters, that describe the prior, random variables as well, creating a hierarchical model, and also infer them from the data. That way, you could e.g. even create priors that help you learn sparse models, as in the Relevance Vector Machine.
Thus, while the basic purpose of the prior is to convey prior knowledge about your weights, they can have additional purpose if they are learned, too. | What is the meaning of the variance in a prior that represents L2 regularization?
The basic idea of the prior is, that it describes your knowledge about the weights before you get to know the observations (the data). Thus, a prior with a mean equal to zero and a small prior varianc |
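To make the 'small prior variance = strong regularization' point quantitative (standard Gaussian-prior algebra, assuming a linear model with noise variance $\sigma^2$ and weight prior $w\sim\mathcal{N}(0,\tau^2 I)$):
$$-\log p(w\mid y,X)=\frac{1}{2\sigma^{2}}\lVert y-Xw\rVert^{2}+\frac{1}{2\tau^{2}}\lVert w\rVert^{2}+\text{const},$$
so the MAP estimate is exactly L2-regularized (ridge) regression with $\lambda=\sigma^{2}/\tau^{2}$: shrinking the prior variance $\tau^{2}$ increases the penalty, while a very large $\tau^{2}$ (a vague prior) makes it negligible.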
54,668 | What is the meaning of the variance in a prior that represents L2 regularization? | In case the prior is a single Gaussian, its variance means how confident you are regarding your prior beliefs. That is, the smaller the variance you put into your prior Gaussian, the more confident you are in your prior belief that the data has to be around the mean of the prior. Calling it "noise", like you say, might be misleading as noise refers to some measurement process, whereas here there is no measurement involved.
Also notice that in the regularization framework, you get the 1/variance factor in front of the regularization term, which means the bigger your variance, the smaller the weight your regularization term has, corresponding to your less confident prior belief.
Depending on the situation it can have more meanings. I can give two examples here:
if you already have some data which is measured with very very little noise before your experiment, you can use these to estimate the mean and variance of a Gaussian to use as your prior in the inference afterwards. You can call this prior an empirical Gaussian prior. Then the variance would be the variance of your pre-experiment data points.
You can extend your model by putting a prior distribution onto the variance of your prior. You can use an inverse gamma distribution for the variance prior, for instance. The Gamma distribution would then have so called hyper-parameters. Then you can control your beliefs regarding the prior by controlling your beliefs regarding its variance, i.e. by playing with the hyper-parameters. Alternatively, you can marginalize over the variance parameter and get rid of it entirely to only speak about the hyper-parameters afterwards. | What is the meaning of the variance in a prior that represents L2 regularization? | In case the prior is a single Gaussian, its variance means how confident you are regarding your prior beliefs. That is, the smaller the variance you put into your prior Gaussian, the more confident yo | What is the meaning of the variance in a prior that represents L2 regularization?
In case the prior is a single Gaussian, its variance means how confident you are regarding your prior beliefs. That is, the smaller the variance you put into your prior Gaussian, the more confident you are in your prior belief that the data has to be around the mean of the prior. Calling it "noise", like you say, might be misleading as noise refers to some measurement process, whereas here there is no measurement involved.
Also notice that in the regularization framework, you get the 1/variance factor in front of the regularization term, which means the bigger your variance, the smaller the weight your regularization term has, corresponding to your less confident prior belief.
Depending on the situation it can have more meanings. I can give two examples here:
if you already have some data which is measured with very very little noise before your experiment, you can use these to estimate the mean and variance of a Gaussian to use as your prior in the inference afterwards. You can call this prior an empirical Gaussian prior. Then the variance would be the variance of your pre-experiment data points.
You can extend your model by putting a prior distribution onto the variance of your prior. You can use an inverse gamma distribution for the variance prior, for instance. The Gamma distribution would then have so called hyper-parameters. Then you can control your beliefs regarding the prior by controlling your beliefs regarding its variance, i.e. by playing with the hyper-parameters. Alternatively, you can marginalize over the variance parameter and get rid of it entirely to only speak about the hyper-parameters afterwards. | What is the meaning of the variance in a prior that represents L2 regularization?
In case the prior is a single Gaussian, its variance means how confident you are regarding your prior beliefs. That is, the smaller the variance you put into your prior Gaussian, the more confident yo |
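As a hedged numerical illustration of the "confidence" reading above (again not from the answer; the data and the known noise standard deviation of 1 are assumed), the conjugate Normal-Normal update makes the role of the prior variance explicit: the posterior mean is a precision-weighted average of the prior mean and the sample mean.
# Sketch: posterior mean under a N(prior_mean, prior_var) prior with known noise sd = 1
set.seed(2)
x <- rnorm(10, mean = 3, sd = 1)          # observed data
prior_mean <- 0
posterior_mean <- function(prior_var) {
  prec_prior <- 1 / prior_var             # prior precision acts as the regularization weight
  prec_data  <- length(x) / 1^2
  (prec_prior * prior_mean + prec_data * mean(x)) / (prec_prior + prec_data)
}
round(sapply(c(0.01, 1, 100), posterior_mean), 3)
# a tight prior (variance 0.01) keeps the estimate near 0; a vague prior (variance 100) lets the data dominate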
54,669 | Random variate of a singular Wishart distribution with non-integral degrees of freedom | The Wishart Distribution is defined on the manifold $\mathcal{M}(p)$ of all positive-definite symmetric (psd) $p\times p$ matrices. In the coordinate system $(x_{ij}, 1\le j \le i \le p)$ (which identifies this space of matrices with a subset of $\mathbb{R}^{p(p+1)/2}$) the density of the "standard" Wishart distribution with $n\gt p-1$ "degrees of freedom" at a matrix $x$ is given by
$$f_n(x) = C_n \det(x)^{(n-p-1)/2}\, e^{-\operatorname{tr}(x)/2}$$
where $C_n$ is the normalizing constant. Given any specified positive-definite $p\times p$ matrix $V,$ another Wishart distribution can be generated upon multiplying a standard Wishart $x$ to produce $Vx.$ (This is the multidimensional analog of a change of scale: see "Higher Dimensional Geometries" at https://stats.stackexchange.com/a/564628/919.)
One important property of this distribution is that the diagonal elements of $x$ will all have scaled chi-squared distributions with $n$ degrees of freedom.
The Bartlett Decomposition, as described in Wikipedia, generates $x$ in the form $L^\prime AA^\prime L$ where $V = L^\prime L$ is the Cholesky decomposition of $V$ ($L$ is the upper triangular factor returned by R's chol, so $L^\prime$ is lower triangular), the diagonal of $A$ is a sequence of $\chi(n), \chi(n-1), \ldots, \chi(n-p+1)$ variables (that is, square roots of chi-squared variables), and below the diagonal the components are standard Normal; all $p(p+1)/2$ of these variables are independent. This gives a practical and fairly simple way to generate Wishart variables for any $V$ and $n$ that are mathematically possible. This R function gives one implementation directly modeled on the Wikipedia article (using its notation).
#
# Random Wishart matrices.
# Returns a rank-3 array indexed by the third variable.
#
rWishart <- function(n, V, df) {
d <- dim(V)[1]
L <- chol(V)
X <- replicate(n, {
A <- diag(sqrt(rchisq(d, df + 1 - seq_len(d))), d)
A[lower.tri(A)] <- rnorm(d*(d-1)/2)
Y <- crossprod(L, A)
tcrossprod(Y)
})
array(X, dim = c(d, d, n))
}
As an example of its use, I set $$V = \pmatrix{2 & -3 \\ -3 & 6}$$ (whence $p=2$ and the implied correlation coefficient is extremely negative), specified the degrees of freedom in a variable df, and generated $20,000$ realizations from this Wishart distribution as
X <- rWishart(2e4, V, df)
The result is a rank-three array. That is, X has three subscripts. Its last subscript determines one $p\times p$ matrix. For instance, the first realization is X[,,1] and the last realization is X[,,2e4].
Here are histograms of the components of these realizations for two different values of $n,$ $1 + 1/\pi \approx 1.32$ (close to the minimum of $1$) and $20$ (fairly large). Because symmetry implies $x_{12}=x_{21}$ for any such realization, only three histograms are needed. Notice the differing value scales among the plots.
On top of the histograms I have plotted the chi-squared distributions (left and middle histograms) and the "variance-Gamma" distribution for the off-diagonal $x_{12}$ component. The agreement is good, indicating the formulas in the Wikipedia article are correct (and that the code has correctly implemented them).
Of course the components of a Wishart random matrix are not independent. Here is a scatterplot matrix (of the first two thousand) of the previous results. I show all four components to demonstrate the realizations are indeed all symmetric. The color coding is the same as before.
The R code to produce the figures might be helpful, so it is reproduced below.
#
# Example.
# This generates one set of realizations for each parameter value in `df`,
# storing the results in a list `lX`.
#
set.seed(17)
V <- matrix(c(2,-3,-3,6), 2) # Must be psd
df <- c(1 + 1/pi, 20)
lX <- lapply(df, function(df) rWishart(2e4, V, df = df))
#
# Density of the off-diagonal elements.
# (Works only for 2D matrices!)
#
dvgamma <- function(x, V, df) {
s <- prod(sqrt(diag(V)))
rho <- V[1,2] / s
y <- x / (s * (1 - rho^2))
besselK(abs(y), (df-1)/2) * exp(rho * y) * abs(x)^((df-1)/2) /
(gamma(df/2) * sqrt(2^(df-1) * pi * (1 - rho^2) * s^(df + 1)))
}
#
# Plot histograms of the components.
#
sigma2 <- diag(V)
colors <- hsv(seq(0, 2/3, length.out=3), .25, .8)
par(mfrow=c(2,3))
for (i in seq_along(lX)) {
X <- lX[[i]]
n <- df[i]
hist(X[1,1,], freq=FALSE, breaks=102, col=colors[1],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["1,1"]))
curve(dchisq(x/sigma2[1], n)/sigma2[1], lwd=2, add=TRUE)
hist(X[2,2,], freq=FALSE, breaks=102, col=colors[2],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["2,2"]))
curve(dchisq(x/sigma2[2], n)/sigma2[2], lwd=2, add=TRUE)
hist(X[1,2,], freq=FALSE, breaks=102, col=colors[3],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["1,2"]))
curve(dvgamma(x, V, n), add=TRUE, lwd=2)
}
par(mfrow=c(1,1))
#
# Plot the scatterplot matrix for one of the sets of realizations.
#
X <- lX[[1]]
panel.hist <- function(x, col) { # Modified from the man page for `pairs`
COLOR <<- COLOR + 1
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE, breaks=20)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col=colors[COLOR])
}
colors <- colors[c(1,2,2,3)]
COLOR <- 0
e <- c(expression(X["1,1"]), expression(X["2,1"]), expression(X["1,2"]), expression(X["2,2"]))
pairs(t(matrix(X, 4))[1:2000, ], col="#00000020", diag.panel = panel.hist, labels=e) | Random variate of a singular Wishart distribution with non-integral degrees of freedom | The Wishart Distribution is defined on the manifold $\mathcal{M}(p)$ of all positive-definite symmetric (psd) $p\times p$ matrices. In the coordinate system $(x_{ij}, 1\le j \le i \le p)$ (which ide | Random variate of a singular Wishart distribution with non-integral degrees of freedom
The Wishart Distribution is defined on the manifold $\mathcal{M}(p)$ of all positive-definite symmetric (psd) $p\times p$ matrices. In the coordinate system $(x_{ij}, 1\le j \le i \le p)$ (which identifies this space of matrices with a subset of $\mathbb{R}^{p(p+1)/2}$) the density of the "standard" Wishart distribution with $n\gt p-1$ "degrees of freedom" at a matrix $x$ is given by
$$f_n(x) = C_n \det(x)^{(n-p-1)/2}\, e^{-\operatorname{tr}(x)/2}$$
where $C_n$ is the normalizing constant. Given any specified positive-definite $p\times p$ matrix $V,$ another Wishart distribution can be generated upon multiplying a standard Wishart $x$ to produce $Vx.$ (This is the multidimensional analog of a change of scale: see "Higher Dimensional Geometries" at https://stats.stackexchange.com/a/564628/919.)
One important property of this distribution is that the diagonal elements of $x$ will all have scaled chi-squared distributions with $n$ degrees of freedom.
The Bartlett Decomposition, as described in Wikipedia, generates $x$ in the form $L^\prime AA^\prime L$ where $V = L^\prime L$ is the Cholesky decomposition of $V$ ($L$ is the upper triangular factor returned by R's chol, so $L^\prime$ is lower triangular), the diagonal of $A$ is a sequence of $\chi(n), \chi(n-1), \ldots, \chi(n-p+1)$ variables (that is, square roots of chi-squared variables), and below the diagonal the components are standard Normal; all $p(p+1)/2$ of these variables are independent. This gives a practical and fairly simple way to generate Wishart variables for any $V$ and $n$ that are mathematically possible. This R function gives one implementation directly modeled on the Wikipedia article (using its notation).
#
# Random Wishart matrices.
# Returns a rank-3 array indexed by the third variable.
#
rWishart <- function(n, V, df) {
d <- dim(V)[1]
L <- chol(V)
X <- replicate(n, {
A <- diag(sqrt(rchisq(d, df + 1 - seq_len(d))), d)
A[lower.tri(A)] <- rnorm(d*(d-1)/2)
Y <- crossprod(L, A)
tcrossprod(Y)
})
array(X, dim = c(d, d, n))
}
As an example of its use, I set $$V = \pmatrix{2 & -3 \\ -3 & 6}$$ (whence $p=2$ and the implied correlation coefficient is extremely negative), specified the degrees of freedom in a variable df, and generated $20,000$ realizations from this Wishart distribution as
X <- rWishart(2e4, V, df)
The result is a rank-three array. That is, X has three subscripts. Its last subscript determines one $p\times p$ matrix. For instance, the first realization is X[,,1] and the last realization is X[,,2e4].
Here are histograms of the components of these realizations for two different values of $n,$ $1 + 1/\pi \approx 1.32$ (close to the minimum of $1$) and $20$ (fairly large). Because symmetry implies $x_{12}=x_{21}$ for any such realization, only three histograms are needed. Notice the differing value scales among the plots.
On top of the histograms I have plotted the chi-squared distributions (left and middle histograms) and the "variance-Gamma" distribution for the off-diagonal $x_{12}$ component. The agreement is good, indicating the formulas in the Wikipedia article are correct (and that the code has correctly implemented them).
Of course the components of a Wishart random matrix are not independent. Here is a scatterplot matrix (of the first two thousand) of the previous results. I show all four components to demonstrate the realizations are indeed all symmetric. The color coding is the same as before.
The R code to produce the figures might be helpful, so it is reproduced below.
#
# Example.
# This generates one set of realizations for each parameter value in `df`,
# storing the results in a list `lX`.
#
set.seed(17)
V <- matrix(c(2,-3,-3,6), 2) # Must be psd
df <- c(1 + 1/pi, 20)
lX <- lapply(df, function(df) rWishart(2e4, V, df = df))
#
# Density of the off-diagonal elements.
# (Works only for 2D matrices!)
#
dvgamma <- function(x, V, df) {
s <- prod(sqrt(diag(V)))
rho <- V[1,2] / s
y <- x / (s * (1 - rho^2))
besselK(abs(y), (df-1)/2) * exp(rho * y) * abs(x)^((df-1)/2) /
(gamma(df/2) * sqrt(2^(df-1) * pi * (1 - rho^2) * s^(df + 1)))
}
#
# Plot histograms of the components.
#
sigma2 <- diag(V)
colors <- hsv(seq(0, 2/3, length.out=3), .25, .8)
par(mfrow=c(2,3))
for (i in seq_along(lX)) {
X <- lX[[i]]
n <- df[i]
hist(X[1,1,], freq=FALSE, breaks=102, col=colors[1],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["1,1"]))
curve(dchisq(x/sigma2[1], n)/sigma2[1], lwd=2, add=TRUE)
hist(X[2,2,], freq=FALSE, breaks=102, col=colors[2],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["2,2"]))
curve(dchisq(x/sigma2[2], n)/sigma2[2], lwd=2, add=TRUE)
hist(X[1,2,], freq=FALSE, breaks=102, col=colors[3],
main=bquote(df==.(1 + signif(n-1,2))), xlab=expression(X["1,2"]))
curve(dvgamma(x, V, n), add=TRUE, lwd=2)
}
par(mfrow=c(1,1))
#
# Plot the scatterplot matrix for one of the sets of realizations.
#
X <- lX[[1]]
panel.hist <- function(x, col) { # Modified from the man page for `pairs`
COLOR <<- COLOR + 1
usr <- par("usr"); on.exit(par(usr))
par(usr = c(usr[1:2], 0, 1.5) )
h <- hist(x, plot = FALSE, breaks=20)
breaks <- h$breaks; nB <- length(breaks)
y <- h$counts; y <- y/max(y)
rect(breaks[-nB], 0, breaks[-1], y, col=colors[COLOR])
}
colors <- colors[c(1,2,2,3)]
COLOR <- 0
e <- c(expression(X["1,1"]), expression(X["2,1"]), expression(X["1,2"]), expression(X["2,2"]))
pairs(t(matrix(X, 4))[1:2000, ], col="#00000020", diag.panel = panel.hist, labels=e) | Random variate of a singular Wishart distribution with non-integral degrees of freedom
The Wishart Distribution is defined on the manifold $\mathcal{M}(p)$ of all positive-definite symmetric (psd) $p\times p$ matrices. In the coordinate system $(x_{ij}, 1\le j \le i \le p)$ (which ide |
54,670 | Random variate of a singular Wishart distribution with non-integral degrees of freedom | The answer to this question is negative: we must have an integral number of degrees of freedom $\nu$ if $\nu\le p-1$. To prove it, we can look at the singular Wishart, defined as:
$$W_p(\nu,\Sigma)=\sum_{i=1}^{\nu}Y_iY_i'$$
with $Y_i\sim\mathcal N(0,\Sigma)$, and $\Sigma$ a positive definite matrix.
The rank of the random variate will be $\nu$ almost surely if $\nu\le p-1$ and $\nu$ is integral. In order to keep the property that $W_p(\nu_1,\Sigma)+W_p(\nu_2,\Sigma)=W_p(\nu_1+\nu_2,\Sigma)$ for all $\nu_1$ and $\nu_2$, we must ensure that $\nu_1$ and $\nu_2$ are integral numbers. This can be seen from the definition of the singular Wishart above.
We could try to define $W_p(\frac12,\Sigma)$ in such a way that adding two of these independent random variates creates a matrix of rank $1$, but that requires the two variates to share the same (degenerate) eigenspace. Unless we constrain this explicitly, we cannot have this property. So we need to have $\nu$ integral until we reach full rank. | Random variate of a singular Wishart distribution with non-integral degrees of freedom | The answer to this question is negative, we must have an integral number of degrees of freedom $\nu$ if $\nu\le p-1$. To prove it, we can look at the singular Wishart, defined as:
$$W_p(\nu,\Sigma)=\s | Random variate of a singular Wishart distribution with non-integral degrees of freedom
The answer to this question is negative: we must have an integral number of degrees of freedom $\nu$ if $\nu\le p-1$. To prove it, we can look at the singular Wishart, defined as:
$$W_p(\nu,\Sigma)=\sum_{i=1}^{\nu}Y_iY_i'$$
with $Y_i\sim\mathcal N(0,\Sigma)$, and $\Sigma$ a positive definite matrix.
The rank of the random variate will be $\nu$ almost surely if $\nu\le p-1$ and $\nu$ is integral. In order to keep the property that $W_p(\nu_1,\Sigma)+W_p(\nu_2,\Sigma)=W_p(\nu_1+\nu_2,\Sigma)$ for all $\nu_1$ and $\nu_2$, we must ensure that $\nu_1$ and $\nu_2$ are integral numbers. This can be seen from the definition of the singular Wishart above.
We could try to define $W_p(\frac12,\Sigma)$ in such a way that adding two of these independent random variates creates a matrix of rank $1$, but that requires the two variates to share the same (degenerate) eigenspace. Unless we constrain this explicitly, we cannot have this property. So we need to have $\nu$ integral until we reach full rank. | Random variate of a singular Wishart distribution with non-integral degrees of freedom
The answer to this question is negative, we must have an integral number of degrees of freedom $\nu$ if $\nu\le p-1$. To prove it, we can look at the singular Wishart, defined as:
$$W_p(\nu,\Sigma)=\s |
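A small R check of the rank statement above (an illustration only; Sigma and the dimensions are arbitrary assumed values): constructing the singular Wishart as a sum of nu outer products of N(0, Sigma) draws gives a matrix of rank nu almost surely when nu <= p - 1.
# Sketch: W_p(nu, Sigma) as a sum of nu rank-one terms, integral nu <= p - 1
set.seed(3)
p <- 4; nu <- 2
Sigma <- diag(p) + 0.5                      # an assumed positive-definite example
L <- chol(Sigma)
Y <- matrix(rnorm(nu * p), nu, p) %*% L     # nu independent draws from N(0, Sigma) as rows
W <- crossprod(Y)                           # W = sum over i of Y_i Y_i'
qr(W)$rank                                  # equals nu (= 2), not p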
54,671 | Post-hoc power analysis for null results: how to use 95% confidence interval instead? | If your CIs are narrow, then you have an idea of how large the effect is, and you can say with some confidence that the effect is small, and that's why you didn't detect it.
If the CIs are wide, then you don't know how big the effect is. Maybe it's big and you didn't detect it because you didn't have enough power. Maybe it's small and you didn't detect it because it's small. | Post-hoc power analysis for null results: how to use 95% confidence interval instead? | If your CIs are narrow, then you have an idea of how large the effect is, and you can say with some confidence that the effect is small, and that's why you didn't detect it.
If the CIs are wide, then | Post-hoc power analysis for null results: how to use 95% confidence interval instead?
If your CIs are narrow, then you have an idea of how large the effect is, and you can say with some confidence that the effect is small, and that's why you didn't detect it.
If the CIs are wide, then you don't know how big the effect is. Maybe it's big and you didn't detect it because you didn't have enough power. Maybe it's small and you didn't detect it because it's small. | Post-hoc power analysis for null results: how to use 95% confidence interval instead?
If your CIs are narrow, then you have an idea of how large the effect is, and you can say with some confidence that the effect is small, and that's why you didn't detect it.
If the CIs are wide, then |
54,672 | Units for likelihoods and probabilities | Probabilities (also called "probability masses") are unitless, but probability densities have units of 1/(units of the variable).
Let's say we have a probability density $p\left(x\right)$ of some variable $x$. If we integrate this density over some range in $x$, we obtain the probability that $x$ falls in this range:
$$
P\left(a < x < b\right) = \int_a^b p\left(x\right) dx .
$$
The quantity $P\left(a < x < b\right)$ is, of course, a probability mass, and must therefore be a unitless real number between 0 and 1. But in order for $P\left(a < x < b\right)$ to be unitless, we can see from the above integral that $p\left(x\right)$ must have units of 1/(units of $x$). The quantity $p\left(x\right)$ is the amount of probability per unit of $x$, which is why the term "probability density" is applied to it in the first place.
likelihoods and probabilities calculated from continuous data not only have units, but those units are the reciprocal of units of the data
"Probabilities calculated from continuous data" only have units of 1/(units of the data) if they are probability densities of the data. The likelihood is the probability density of the data (assuming the data to be made up of continuous variables - for discrete data, the likelihood is a unitless probability mass), given a choice of model. The likelihood is usually given as a function of the model parameters, $\theta$:
$$
L\left(\theta\right) := p\left(D|\theta\right) ,
$$
where $D$ is the data. As such, the likelihood has units of 1/(units of $D$), and its integral over all possible values of $D$ must come to 1. However, very importantly, $L\left(\theta\right)$ is not a probability density of $\theta$, and the integral of $L\left(\theta\right)$ over all values of $\theta$ does not have to come to 1. In fact, that integral might not even yield a unitless quantity at all: it will have units of (units of $\theta$)/(units of $D$).
This often leads to considerable confusion, because students are rightly taught that $L\left(\theta\right)$ is not a probability density in $\theta$, but then incorrectly conclude that it's not a probability density at all. In fact, it's the probability density of the data (given the model). | Units for likelihoods and probabilities | Probabilities (also called "probability masses") are unitless, but probability densities have units of 1/(units of the variable).
Let's say we have a probability density $p\left(x\right)$ of some vari | Units for likelihoods and probabilities
Probabilities (also called "probability masses") are unitless, but probability densities have units of 1/(units of the variable).
Let's say we have a probability density $p\left(x\right)$ of some variable $x$. If we integrate this density over some range in $x$, we obtain the probability that $x$ falls in this range:
$$
P\left(a < x < b\right) = \int_a^b p\left(x\right) dx .
$$
The quantity $P\left(a < x < b\right)$ is, of course, a probability mass, and must therefore be a unitless real number between 0 and 1. But in order for $P\left(a < x < b\right)$ to be unitless, we can see from the above integral that $p\left(x\right)$ must have units of 1/(units of $x$). The quantity $p\left(x\right)$ is the amount of probability per unit of $x$, which is why the term "probability density" is applied to it in the first place.
likelihoods and probabilities calculated from continuous data not only have units, but those units are the reciprocal of units of the data
"Probabilities calculated from continuous data" only have units of 1/(units of the data) if they are probability densities of the data. The likelihood is the probability density of the data (assuming the data to be made up of continuous variables - for discrete data, the likelihood is a unitless probability mass), given a choice of model. The likelihood is usually given as a function of the model parameters, $\theta$:
$$
L\left(\theta\right) := p\left(D|\theta\right) ,
$$
where $D$ is the data. As such, the likelihood has units of 1/(units of $D$), and its integral over all possible values of $D$ must come to 1. However, very importantly, $L\left(\theta\right)$ is not a probability density of $\theta$, and the integral of $L\left(\theta\right)$ over all values of $\theta$ does not have to come to 1. In fact, that integral might not even yield a unitless quantity at all: it will have units of (units of $\theta$)/(units of $D$).
This often leads to considerable confusion, because students are rightly taught that $L\left(\theta\right)$ is not a probability density in $\theta$, but then incorrectly conclude that it's not a probability density at all. In fact, it's the probability density of the data (given the model). | Units for likelihoods and probabilities
Probabilities (also called "probability masses") are unitless, but probability densities have units of 1/(units of the variable).
Let's say we have a probability density $p\left(x\right)$ of some vari |
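A quick numerical illustration of the units point (my own sketch; the Normal parameters are assumed values): expressing the same variable in different units rescales the density by the conversion factor, while the probability of any interval is unchanged.
# Sketch: a density has units 1/(units of x); a probability mass is unitless
mu_m <- 1.75; sd_m <- 0.10                          # a length in metres
dnorm(1.80, mu_m, sd_m)                             # density, in units of 1/m
dnorm(180, 100 * mu_m, 100 * sd_m)                  # same point in cm: numerically 100 times smaller
pnorm(1.85, mu_m, sd_m) - pnorm(1.65, mu_m, sd_m)   # interval probability, unitless
pnorm(185, 100 * mu_m, 100 * sd_m) - pnorm(165, 100 * mu_m, 100 * sd_m)   # identical value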
54,673 | Interpreting standard error for dummy variables in linear regression models | The underlying math is that the intercept ($\beta_0$) indicates the average salary of men, in \$1000; the parameter $\beta_1$ indicates the difference between the average woman's and average man's salary. So we could say something like "on average, women make \$530 less than men (with a standard error of \$1,776)". The $\pm 2 \mbox{SE}$ confidence intervals on the difference range from women's salaries being \$2,306 less to \$1,246 more than men's.
$\pm 2 \textrm{SE}$ is a commonly used shortcut; it is a little wider than the Normal-based 95% confidence interval ($\pm 1.96 \textrm{SE}$) and is a good approximation when your residual degrees of freedom (number of observations minus the number of model parameters) is moderate. In particular, the SE multiplier (which can be computed in R via qt(0.975, df) for a given df) is:
1.96 (actually 1.9599) as df $\to \infty$
2.008 for df = 50
2.086 for df = 20
2.228 for df = 10
2.571 for df = 5.
and so on. The SE multiplier that I back-calculated from your comments is approximately 2.6, so my guess is that you had approximately 5 residual df (the residual df are displayed in the summary() output, but you haven't shown us ...)
Interpreting dummy variables associated with factors can get considerably more complicated when different contrasts, and interactions among predictors, are present, but your case is simple. | Interpreting standard error for dummy variables in linear regression models | The underlying math is that the intercept ($\beta_0$) indicates the average salary of men, in \$1000; the parameter $\beta_1$ indicates the difference between the average woman's and average man's sal | Interpreting standard error for dummy variables in linear regression models
The underlying math is that the intercept ($\beta_0$) indicates the average salary of men, in \$1000; the parameter $\beta_1$ indicates the difference between the average woman's and average man's salary. So we could say something like "on average, women make \$530 less than men (with a standard error of \$1,776)". The $\pm 2 \mbox{SE}$ confidence intervals on the difference range from women's salaries being \$2,306 less to \$1,246 more than men's.
$\pm 2 \textrm{SE}$ is a commonly used shortcut; it is a little wider than the Normal-based 95% confidence interval ($\pm 1.96 \textrm{SE}$) and is a good approximation when your residual degrees of freedom (number of observations minus the number of model parameters) is moderate. In particular, the SE multiplier (which can be computed in R via qt(0.975, df) for a given df) is:
1.96 (actually 1.9599) as df $\to \infty$
2.008 for df = 50
2.086 for df = 20
2.228 for df = 10
2.571 for df = 5.
and so on. The SE multiplier that I back-calculated from your comments is approximately 2.6, so my guess is that you had approximately 5 residual df (the residual df are displayed in the summary() output, but you haven't shown us ...)
Interpreting dummy variables associated with factors can get considerably more complicated when different contrasts, and interactions among predictors, are present, but your case is simple. | Interpreting standard error for dummy variables in linear regression models
The underlying math is that the intercept ($\beta_0$) indicates the average salary of men, in \$1000; the parameter $\beta_1$ indicates the difference between the average woman's and average man's sal |
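The multipliers listed above can be reproduced directly in R with the same 0.975 quantile (a quick check, not part of the original answer):
# SE multipliers for a 95% confidence interval at various residual degrees of freedom
df <- c(Inf, 50, 20, 10, 5)
round(qt(0.975, df), 3)   # approximately 1.96, 2.01, 2.09, 2.23, 2.57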
54,674 | Neural network gives very different accuracies if repeated on same data, why? | It means that the neural network has high variance. It is prone to overfitting: it sometimes picks up the right patterns, sometimes the noise. This can be due to using different training data, random initialization, or both. That is why procedures such as the one you employed are useful; if you trained and validated it only once, you would never know. What this means for you is that you don't really know how the model would behave at prediction time.
You may check the What should I do when my neural network doesn't generalize well? thread for possible solutions. | Neural network gives very different accuracies if repeated on same data, why? | It means that the neural network has a high variance. It is prone to overfitting, it sometimes picks the right patterns, sometimes the noise. It can be due to using different training data, random ini | Neural network gives very different accuracies if repeated on same data, why?
It means that the neural network has high variance. It is prone to overfitting: it sometimes picks up the right patterns, sometimes the noise. This can be due to using different training data, random initialization, or both. That is why procedures such as the one you employed are useful; if you trained and validated it only once, you would never know. What this means for you is that you don't really know how the model would behave at prediction time.
You may check the What should I do when my neural network doesn't generalize well? thread for possible solutions. | Neural network gives very different accuracies if repeated on same data, why?
It means that the neural network has a high variance. It is prone to overfitting, it sometimes picks the right patterns, sometimes the noise. It can be due to using different training data, random ini |
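As a hedged illustration of this run-to-run variance (not from the answer; it assumes the nnet package and the built-in iris data merely as a stand-in), refitting the same small network with different random initializations and train/test splits gives noticeably different test accuracies:
# Sketch: same model specification, same data set, different random seeds -> different accuracies
library(nnet)
set.seed(42)
accs <- replicate(10, {
  train <- sample(nrow(iris), 100)
  fit <- nnet(Species ~ ., data = iris[train, ], size = 4, maxit = 50, trace = FALSE)
  pred <- predict(fit, iris[-train, ], type = "class")
  mean(pred == iris$Species[-train])
})
round(accs, 2)   # the spread of these values is the variance the answer refers to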
54,675 | How do I best select data for a linear regression? | The question calls for choosing an experimental design and selecting a sample size sufficiently large to estimate the slope $\beta_1$ to within a desired level of precision.
Let $n$ be the number of observations needed and suppose an optimal solution calls for setting the explanatory variable (hydration) to values $x_1, x_2, \ldots, x_n$ (which need not be distinct). We need to find the minimal number of such values as well as what they should be set to.
In the model let the unknown variance of the disturbance terms $\varepsilon$ be $\sigma^2.$ The variance-covariance matrix of the least squares coefficient estimates $(\hat\beta_0, \hat\beta_1)$ is then
$$\pmatrix{\operatorname{Var}(\hat\beta_0) &\operatorname{Cov}(\hat\beta_0,\hat\beta_1) \\ \operatorname{Cov}(\hat\beta_1,\hat\beta_0) & \operatorname{Var}(\hat\beta_1)} = \operatorname{Var}(\hat\beta_0,\hat\beta_1) =\sigma^2\left(X^\prime X\right)^{-1}\tag{1}$$
where $X$ is the "design matrix" created by stacking the vectors $(1,x_1), \ldots, (1,x_n)$ into an $n\times 2$ array. Thus,
$$X^\prime X = \pmatrix{1&1&\cdots&1\\x_1&x_2&\cdots&x_n}\pmatrix{1&x_1\\1&x_2\\\vdots&\vdots\\1&x_n}= \pmatrix{n & S_x \\ S_x & S_{xx}}$$
where $S_x$ is the sum of the $x_i$ and $S_{xx}$ is the sum of their squares. Assuming not all the $x_i$ are equal, this matrix is invertible with inverse
$$\left(X^\prime X\right)^{-1} = \frac{1}{nS_{xx} - S_x^2} \pmatrix{S_{xx} & -S_x \\ -S_x & n},$$
from which we read off the estimation variance from the bottom right entries of $(1)$ as
$$\operatorname{Var}(\hat\beta_1) = \frac{n\sigma^2}{nS_{xx} - S_x^2}.\tag{2}$$
Given any sample size $n,$ we must choose $(x_i)$ to maximize the denominator of $(2).$ Equivalently, because $1/n^2$ times the denominator is the (population) variance of $(x_i),$ we seek to maximize that variance. When those values are constrained to an interval $[A,B],$ it is well known that
this variance is maximized by splitting the $x_i$ into two parts that are as equal in size as possible and setting one part to $A$ and the other to $B.$
This leads to two formulas for the variance--one for even $n$ and the other for odd $n$--but for this analysis it will suffice to use the one for even $n$ because it so closely approximates the one for odd $n:$ namely, the maximal variance is $(B-A)^2/4.$ From (2) we thus obtain
$$\operatorname{Var}(\hat\beta_1) \ge \frac{4n\sigma^2}{n^2(B-A)^2}.$$
Its square root is the standard error of estimate, which simplifies to
$$\operatorname{SE}(\hat\beta_1) \ge \frac{2\sigma}{(B-A)\sqrt{n}}.\tag{3}$$
Confidence intervals are obtained by moving some multiple $z(\alpha)$ (depending on the confidence level $1-\alpha$) of the standard error to either side of $\hat\beta_1.$ Consequently, given an upper threshold $W$ for the width of the confidence interval, $n$ must be large enough to make $2z$ times $(3)$ no greater than $W.$ Solving,
$$n \ge \frac{16 z(\alpha)^2\sigma^2}{W^2(B-A)^2}.$$
That's as far as we can take the answer with the given information. In practice, you don't know $\sigma:$ it reflects the variability of the response ("volume") around the linear model. Usually you have to guess it, either using previous experience, a search of literature, or true guessing. A good approach is to underestimate $n,$ perform the experiment, use the results to obtain a better estimate of $\sigma,$ re-estimate $n$ (which will almost always be larger), and then obtain the requisite number of additional observations.
There are many subtle issues. A full discussion requires a textbook on experimental design. Some of them are
This solution is unable to distinguish random variation from nonlinearity. One consequence of this is that in the presence of important curvature, a different design (involving intermediate values of $x_i$ in the interior of the interval $[A,B]$) could require far fewer observations because the associated value of $\sigma$ might be much smaller.
Technically, $Z(\alpha)$ depends on $n,$ too: it ought to be derived from a Student $t$ distribution. This problem can be overcome by first deriving it from a standard Normal distribution, computing an initial solution for $n,$ then using $n-2$ as the degrees of freedom in the Student $t$ distribution. Re-estimate $n.$ This process will converge within a few iterations to the optimal $n.$
Ultimately, the least squares procedure will estimate $\sigma.$ This estimate won't be quite the same as the true value of $\sigma.$
There is no assurance, after the full experiment is done, that the width of the confidence interval will be less than the threshold $W:$ results will vary and so will this width.
Naive and overoptimistic specifications of the confidence level $1-\alpha$ often lead to astronomically large requirements for the sample size $n.$ Almost always, one has to give up a little confidence or a little precision in order to make the sample size practicable.
Other criteria for optimal design can be considered. The one used here is both C-optimal and D-optimal. Solutions for other (related) forms of optimality can be developed using the same techniques illustrated in $(1),$ $(2),$ and $(3)$ above.
Finally, as an example of how to apply this solution, suppose (as in the question) that the interval is $[A,B]=[55,85],$ that the confidence is to be $95\%$ ($\alpha=0.05$), $\sigma=1,$ and the maximum width of a CI for $\beta_1$ is to be $0.1.$
The initial value of $Z(\alpha)$ is the lower $\alpha/2=0.025$ quantile of the standard Normal distribution, $-1.96.$ The smallest integer greater than
$$\frac{16(-1.96)^2(1)^2}{(0.1)^2(85-55)^2} \approx 6.8$$
is $7.$ The lower $0.025$ quantile of the Student $t$ distribution with $7-2=5$ degrees of freedom is $-2.57.$ Using this instead of the previous value of $Z$ multiplies the estimate by $((-2.57)/(-1.96))^2 \approx 1.7,$ giving a new sample size estimate of $n=12.$ Two more iterations bounce down to $9$ and then stop at $n=10.$
Accordingly, I set five values of $x$ to $A=55$ and five to $B=85.$ In five thousand independent simulations, I created random Normal responses according to a linear model with $\sigma=1,$ fit the model with least squares, and recorded the width of the confidence interval for $\beta_1.$ Here is a summary of those results, with the mean width shown as a red dotted line.
Slightly more than half the time, the width met the target of $0.1.$ (NB: the CI widths were established using least-squares estimates of $\sigma.$) The width rarely was hugely greater than the target. (The variability of the widths depends on $n$ as well as the sample design.) A quick simulation like this can help you appreciate what you might encounter in the actual experiment and might prompt you to modify your solution before you begin experimenting. | How do I best select data for a linear regression? | The question calls for choosing an experimental design and selecting a sample size sufficiently large to estimate the slope $\beta_1$ to within a desired level of precision.
Let $n$ be the number of o | How do I best select data for a linear regression?
The question calls for choosing an experimental design and selecting a sample size sufficiently large to estimate the slope $\beta_1$ to within a desired level of precision.
Let $n$ be the number of observations needed and suppose an optimal solution calls for setting the explanatory variable (hydration) to values $x_1, x_2, \ldots, x_n$ (which need not be distinct). We need to find the minimal number of such values as well as what they should be set to.
In the model let the unknown variance of the disturbance terms $\varepsilon$ be $\sigma^2.$ The variance-covariance matrix of the least squares coefficient estimates $(\hat\beta_0, \hat\beta_1)$ is then
$$\pmatrix{\operatorname{Var}(\hat\beta_0) &\operatorname{Cov}(\hat\beta_0,\hat\beta_1) \\ \operatorname{Cov}(\hat\beta_1,\hat\beta_0) & \operatorname{Var}(\hat\beta_1)} = \operatorname{Var}(\hat\beta_0,\hat\beta_1) =\sigma^2\left(X^\prime X\right)^{-1}\tag{1}$$
where $X$ is the "design matrix" created by stacking the vectors $(1,x_1), \ldots, (1,x_n)$ into an $n\times 2$ array. Thus,
$$X^\prime X = \pmatrix{1&1&\cdots&1\\x_1&x_2&\cdots&x_n}\pmatrix{1&x_1\\1&x_2\\\vdots&\vdots\\1&x_n}= \pmatrix{n & S_x \\ S_x & S_{xx}}$$
where $S_x$ is the sum of the $x_i$ and $S_{xx}$ is the sum of their squares. Assuming not all the $x_i$ are equal, this matrix is invertible with inverse
$$\left(X^\prime X\right)^{-1} = \frac{1}{nS_{xx} - S_x^2} \pmatrix{S_{xx} & -S_x \\ -S_x & n},$$
from which we read off the estimation variance from the bottom right entries of $(1)$ as
$$\operatorname{Var}(\hat\beta_1) = \frac{n\sigma^2}{nS_{xx} - S_x^2}.\tag{2}$$
Given any sample size $n,$ we must choose $(x_i)$ to maximize the denominator of $(2).$ Equivalently, because $1/n^2$ times the denominator is the (population) variance of $(x_i),$ we seek to maximize that variance. When those values are constrained to an interval $[A,B],$ it is well known that
this variance is maximized by splitting the $x_i$ into two parts that are as equal in size as possible and setting one part to $A$ and the other to $B.$
This leads to two formulas for the variance--one for even $n$ and the other for odd $n$--but for this analysis it will suffice to use the one for even $n$ because it so closely approximates the one for odd $n:$ namely, the maximal variance is $(B-A)^2/4.$ From (2) we thus obtain
$$\operatorname{Var}(\hat\beta_1) \ge \frac{4n\sigma^2}{n^2(B-A)^2}.$$
Its square root is the standard error of estimate, which simplifies to
$$\operatorname{SE}(\hat\beta_1) \ge \frac{2\sigma}{(B-A)\sqrt{n}}.\tag{3}$$
Confidence intervals are obtained by moving some multiple $z(\alpha)$ (depending on the confidence level $1-\alpha$) of the standard error to either side of $\hat\beta_1.$ Consequently, given an upper threshold $W$ for the width of the confidence interval, $n$ must be large enough to make $2z$ times $(3)$ no greater than $W.$ Solving,
$$n \ge \frac{16 z(\alpha)^2\sigma^2}{W^2(B-A)^2}.$$
That's as far as we can take the answer with the given information. In practice, you don't know $\sigma:$ it reflects the variability of the response ("volume") around the linear model. Usually you have to guess it, either using previous experience, a search of literature, or true guessing. A good approach is to underestimate $n,$ perform the experiment, use the results to obtain a better estimate of $\sigma,$ re-estimate $n$ (which will almost always be larger), and then obtain the requisite number of additional observations.
There are many subtle issues. A full discussion requires a textbook on experimental design. Some of them are
This solution is unable to distinguish random variation from nonlinearity. One consequence of this is that in the presence of important curvature, a different design (involving intermediate values of $x_i$ in the interior of the interval $[A,B]$) could require far fewer observations because the associated value of $\sigma$ might be much smaller.
Technically, $Z(\alpha)$ depends on $n,$ too: it ought to be derived from a Student $t$ distribution. This problem can be overcome by first deriving it from a standard Normal distribution, computing an initial solution for $n,$ then using $n-2$ as the degrees of freedom in the Student $t$ distribution. Re-estimate $n.$ This process will converge within a few iterations to the optimal $n.$
Ultimately, the least squares procedure will estimate $\sigma.$ This estimate won't be quite the same as the true value of $\sigma.$
There is no assurance, after the full experiment is done, that the width of the confidence interval will be less than the threshold $W:$ results will vary and so will this width.
Naive and overoptimistic specifications of the confidence level $1-\alpha$ often lead to astronomically large requirements for the sample size $n.$ Almost always, one has to give up a little confidence or a little precision in order to make the sample size practicable.
Other criteria for optimal design can be considered. The one used here is both C-optimal and D-optimal. Solutions for other (related) forms of optimality can be developed using the same techniques illustrated in $(1),$ $(2),$ and $(3)$ above.
Finally, as an example of how to apply this solution, suppose (as in the question) that the interval is $[A,B]=[55,85],$ that the confidence is to be $95\%$ ($\alpha=0.05$), $\sigma=1,$ and the maximum width of a CI for $\beta_1$ is to be $0.1.$
The initial value of $Z(\alpha)$ is the lower $\alpha/2=0.025$ quantile of the standard Normal distribution, $-1.96.$ The smallest integer greater than
$$\frac{16(-1.96)^2(1)^2}{(0.1)^2(85-55)^2} \approx 6.8$$
is $7.$ The lower $0.025$ quantile of the Student $t$ distribution with $7-2=5$ degrees of freedom is $-2.57.$ Using this instead of the previous value of $Z$ multiplies the estimate by $((-2.57)/(-1.96))^2 \approx 1.7,$ giving a new sample size estimate of $n=12.$ Two more iterations bounce down to $9$ and then stop at $n=10.$
Accordingly, I set five values of $x$ to $A=55$ and five to $B=85.$ In five thousand independent simulations, I created random Normal responses according to a linear model with $\sigma=1,$ fit the model with least squares, and recorded the width of the confidence interval for $\beta_1.$ Here is a summary of those results, with the mean width shown as a red dotted line.
Slightly more than half the time, the width met the target of $0.1.$ (NB: the CI widths were established using least-squares estimates of $\sigma.$) The width rarely was hugely greater than the target. (The variability of the widths depends on $n$ as well as the sample design.) A quick simulation like this can help you appreciate what you might encounter in the actual experiment and might prompt you to modify your solution before you begin experimenting. | How do I best select data for a linear regression?
The question calls for choosing an experimental design and selecting a sample size sufficiently large to estimate the slope $\beta_1$ to within a desired level of precision.
Let $n$ be the number of o |
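Here is a small R sketch of the iteration described in the example (the constants are those of the worked example; the simple stopping rule is my own implementation, not taken from the answer):
# Iterate the sample-size formula, replacing the Normal quantile by a Student t quantile
A <- 55; B <- 85; sigma <- 1; W <- 0.1; alpha <- 0.05
z <- qnorm(1 - alpha / 2)                                  # initial multiplier, about 1.96
n <- ceiling(16 * z^2 * sigma^2 / (W^2 * (B - A)^2))       # 7
repeat {
  z <- qt(1 - alpha / 2, df = n - 2)                       # refine using n - 2 degrees of freedom
  n_new <- ceiling(16 * z^2 * sigma^2 / (W^2 * (B - A)^2))
  if (n_new == n) break
  n <- n_new
}
n                                                          # settles at 10, as in the text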
54,676 | Can I interpret coefficients for "Year" as differences between years that are not explained by my predictors? | Well done. I think your interpretation is OK. Running anova() will give you a more general test for a time trend than how you described it---a test of whether the time trend is flat, i.e., whether there is a difference between any pair of years. It would also be useful to examine the difference in $R^2$ due to yearf with and without adjustment for x3.
The analysis would be slightly better were you to measure time with more resolution. Then you could model year + fraction of a year instead of integer year. | Can I interpret coefficients for "Year" as differences between years that are not explained by my pr | Well done. I think your interpretation is OK. Running anova() will give you a more general test for a time trend than how you described it---a test of whether the time trend is flat, i.e., where the | Can I interpret coefficients for "Year" as differences between years that are not explained by my predictors?
Well done. I think your interpretation is OK. Running anova() will give you a more general test for a time trend than how you described it---a test of whether the time trend is flat, i.e., whether there is a difference between any pair of years. It would also be useful to examine the difference in $R^2$ due to yearf with and without adjustment for x3.
The analysis would be slightly better were you to measure time with more resolution. Then you could model year + fraction of a year instead of integer year. | Can I interpret coefficients for "Year" as differences between years that are not explained by my pr
Well done. I think your interpretation is OK. Running anova() will give you a more general test for a time trend than how you described it---a test of whether the time trend is flat, i.e., where the |
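A hedged R sketch of these two suggestions (the data are simulated here because the original data are not shown; yearf and x3 follow the names used in the answer, and y is an assumed response name):
# Sketch: overall test for yearf adjusted for x3, and the R^2 it adds
set.seed(4)
dat <- data.frame(yearf = factor(rep(2015:2019, each = 40)), x3 = rnorm(200))
dat$y <- 2 + 0.5 * dat$x3 + rep(c(0, 0.2, 0.4, 0.1, 0.3), each = 40) + rnorm(200)
fit <- lm(y ~ x3 + yearf, data = dat)
anova(fit)                    # the yearf row tests whether any pair of years differs, given x3
summary(fit)$r.squared - summary(lm(y ~ x3, data = dat))$r.squared   # R^2 added by yearf after x3
summary(lm(y ~ yearf, data = dat))$r.squared                         # R^2 for yearf alone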
54,677 | What are the first four moments of a linear function of IID random variables? | To facilitate this analysis, define the sums $S_{n,r} \equiv \sum_{i=1}^n c_i^r$. Using these quantities the mean, variance, skewness and kurtosis of the quantity $H_n$ can be written as shown in the box below. These formulae are valid for any case where the underlying values are IID with finite kurtosis.
$$\begin{align}
\boxed{
\quad \quad \quad \mathbb{E}(H_n) = \mu S_{n,1}
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(H_n) = \sigma^2 S_{n,2}, \\[18pt]
\quad \mathbb{Skew}(H_n) = \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}
\quad \quad \quad \quad \quad
\quad \mathbb{Kurt}(H_n) = 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2}. \quad \\}
\end{align}$$
These results are simplest to derive via the cumulant function of the random variable of interest. To do this, observe that the random variable $H_n$ has moment generating function:
$$\begin{align}
m_{H_n}(t)
\equiv \mathbb{E}(e^{t H_n})
= \prod_{i=1}^n \mathbb{E}(e^{t c_i X_i})
= \prod_{i=1}^n m_{X}(t c_i),
\end{align}$$
which gives the cumulant function:
$$\begin{align}
K_{H_n}(t)
= \log m_{H_n}(t)
= \sum_{i=1}^n \log m_{X}(t c_i)
= \sum_{i=1}^n K_{X}(t c_i).
\end{align}$$
Now, let $\kappa_r$ denote the $r$th cumulant of the underlying random variables $X_i$. The cumulants of $H_n$ are related to these cumulants by:
$$\begin{align}
\bar{\kappa}_r
\equiv \frac{d^r K_{H_n}}{dt^r}(t) \Bigg|_{t=0}
= \sum_{i=1}^n c_i^r \cdot \frac{d^r K_{X}}{dt^r}(t c_i) \Bigg|_{t=0}
= \sum_{i=1}^n c_i^r \cdot \kappa_r.
\end{align}$$
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{E}(H_n)
&= \bar{\kappa}_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \kappa_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \mu \\[6pt]
&= \mu \sum_{i=1}^n c_i \\[6pt]
&= \mu S_{n,1}, \\[12pt]
\mathbb{V}(H_n)
&= \bar{\kappa}_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \kappa_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \sigma^2 \\[6pt]
&= \sigma^2 \sum_{i=1}^n c_i^2 \\[6pt]
&= \sigma^2 S_{n,2}, \\[12pt]
\mathbb{Skew}(H_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \kappa_3}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \gamma \cdot \sigma^3}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^{3/2}}, \\[6pt]
&= \frac{\gamma \sum_{i=1}^n c_i^3}{(\sum_{i=1}^n c_i^2)^{3/2}} \\[6pt]
&= \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}, \\[6pt]
\mathbb{Kurt}(H_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot \kappa_4 + 3 (\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot (\kappa-3) \sigma^4 + 3 (\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2} \\[6pt]
&= \frac{(\kappa-3) \sum_{i=1}^n c_i^4 + 3 (\sum_{i=1}^n c_i^2)^2}{(\sum_{i=1}^n c_i^2)^2}. \\[6pt]
&= \frac{(\kappa-3) S_{n,4} + 3 S_{n,2}^2}{S_{n,2}^2} \\[6pt]
&= 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2} \\[6pt]
\end{align}$$ | What are the first four moments of a linear function of IID random variables? | To facilitate this analysis, define the sums $S_{n,r} \equiv \sum_{i=1}^n c_i^r$. Using these quantities the mean, variance, skewness and kurtosis of the quantity $H_n$ can be written as shown in the | What are the first four moments of a linear function of IID random variables?
To facilitate this analysis, define the sums $S_{n,r} \equiv \sum_{i=1}^n c_i^r$. Using these quantities the mean, variance, skewness and kurtosis of the quantity $H_n$ can be written as shown in the box below. These formulae are valid for any case where the underlying values are IID with finite kurtosis.
$$\begin{align}
\boxed{
\quad \quad \quad \mathbb{E}(H_n) = \mu S_{n,1}
\quad \quad \quad \quad \quad \quad \quad \ \
\mathbb{V}(H_n) = \sigma^2 S_{n,2}, \\[18pt]
\quad \mathbb{Skew}(H_n) = \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}
\quad \quad \quad \quad \quad
\quad \mathbb{Kurt}(H_n) = 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2}. \quad \\}
\end{align}$$
These results are simplest to derive via the cumulant function of the random variable of interest. To do this, observe that the random variable $H_n$ has moment generating function:
$$\begin{align}
m_{H_n}(t)
\equiv \mathbb{E}(e^{t H_n})
= \prod_{i=1}^n \mathbb{E}(e^{t c_i X_i})
= \prod_{i=1}^n m_{X}(t c_i),
\end{align}$$
which gives the cumulant function:
$$\begin{align}
K_{H_n}(t)
= \log m_{H_n}(t)
= \sum_{i=1}^n \log m_{X}(t c_i)
= \sum_{i=1}^n K_{X}(t c_i).
\end{align}$$
Now, let $\kappa_r$ denote the $r$th cumulant of the underlying random variables $X_i$. The cumulants of $H_n$ are related to these cumulants by:
$$\begin{align}
\bar{\kappa}_r
\equiv \frac{d^r K_{H_n}}{dt^r}(t) \Bigg|_{t=0}
= \sum_{i=1}^n c_i^r \cdot \frac{d^r K_{X}}{dt^r}(t c_i) \Bigg|_{t=0}
= \sum_{i=1}^n c_i^r \cdot \kappa_r.
\end{align}$$
Using the relationship of the cumulants to the moments of interest, we then have:
$$\begin{align}
\mathbb{E}(H_n)
&= \bar{\kappa}_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \kappa_1 \\[6pt]
&= \sum_{i=1}^n c_i \cdot \mu \\[6pt]
&= \mu \sum_{i=1}^n c_i \\[6pt]
&= \mu S_{n,1}, \\[12pt]
\mathbb{V}(H_n)
&= \bar{\kappa}_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \kappa_2 \\[6pt]
&= \sum_{i=1}^n c_i^2 \cdot \sigma^2 \\[6pt]
&= \sigma^2 \sum_{i=1}^n c_i^2 \\[6pt]
&= \sigma^2 S_{n,2}, \\[12pt]
\mathbb{Skew}(H_n)
&= \frac{\bar{\kappa}_3}{\bar{\kappa}_2^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \kappa_3}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^{3/2}} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^3 \cdot \gamma \cdot \sigma^3}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^{3/2}}, \\[6pt]
&= \frac{\gamma \sum_{i=1}^n c_i^3}{(\sum_{i=1}^n c_i^2)^{3/2}} \\[6pt]
&= \gamma \cdot \frac{S_{n,3}}{S_{n,2}^{3/2}}, \\[6pt]
\mathbb{Kurt}(H_n)
&= \frac{\bar{\kappa}_4 + 3 \bar{\kappa}_2^2}{\bar{\kappa}_2^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot \kappa_4 + 3 (\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2}{(\sum_{i=1}^n c_i^2 \cdot \kappa_2)^2} \\[6pt]
&= \frac{\sum_{i=1}^n c_i^4 \cdot (\kappa-3) \sigma^4 + 3 (\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2}{(\sum_{i=1}^n c_i^2 \cdot \sigma^2)^2} \\[6pt]
&= \frac{(\kappa-3) \sum_{i=1}^n c_i^4 + 3 (\sum_{i=1}^n c_i^2)^2}{(\sum_{i=1}^n c_i^2)^2}. \\[6pt]
&= \frac{(\kappa-3) S_{n,4} + 3 S_{n,2}^2}{S_{n,2}^2} \\[6pt]
&= 3 + (\kappa-3) \frac{S_{n,4}}{S_{n,2}^2} \\[6pt]
\end{align}$$ | What are the first four moments of a linear function of IID random variables?
To facilitate this analysis, define the sums $S_{n,r} \equiv \sum_{i=1}^n c_i^r$. Using these quantities the mean, variance, skewness and kurtosis of the quantity $H_n$ can be written as shown in the |
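A quick Monte Carlo check of the boxed formulas (my own sketch; it assumes Exponential(1) variables, for which mu = sigma = 1, skewness gamma = 2 and kurtosis kappa = 9, and an arbitrary coefficient vector c):
# Sketch: verify the mean, variance, skewness and kurtosis of H_n = sum_i c_i X_i by simulation
set.seed(5)
c_vec <- c(0.5, 1, 2, 3)
mu <- 1; sigma <- 1; gam <- 2; kap <- 9                   # moments of the Exponential(1) distribution
S <- sapply(1:4, function(r) sum(c_vec^r))                # S_{n,1}, ..., S_{n,4}
H <- replicate(2e5, sum(c_vec * rexp(length(c_vec))))
theory <- c(mean = mu * S[1], var = sigma^2 * S[2],
            skew = gam * S[3] / S[2]^1.5,
            kurt = 3 + (kap - 3) * S[4] / S[2]^2)
z <- (H - mean(H)) / sd(H)
empirical <- c(mean(H), var(H), mean(z^3), mean(z^4))
round(rbind(theory, empirical), 3)                        # the two rows agree up to simulation error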
54,678 | Linear assumption for logistic regression | One way to write the data generating mechanism for logistic regression is as follows
$$ \mbox{logit}(p) = X\beta $$
$$ y \sim \mbox{Binomial}(n , p) $$
From this formulation, we find that the linearity assumption is made on the log odds scale. So were we to plot the log odds of the outcome versus the predictor, we would see a straight line$^{1.}$
$^{1.}$ This isn't strictly true. The assumption of linearity is not about the conditional mean, it's about how we combine predictors. I could easily make a non-linear curve using linear combinations of non-linear functions. That being said, this all happens on the log odds scale. | Linear assumption for logistic regression | One way to write the data generating mechanism for logistic regression is as follows
$$ \mbox{logit}(p) = X\beta $$
$$ y \sim \mbox{Binomial}(n , p) $$
From this formulation, we find that the linearit | Linear assumption for logistic regression
One way to write the data generating mechanism for logistic regression is as follows
$$ \mbox{logit}(p) = X\beta $$
$$ y \sim \mbox{Binomial}(n , p) $$
From this formulation, we find that the linearity assumption is made on the log odds scale. So were we to plot the log odds of the outcome versus the predictor, we would see a straight line$^{1.}$
$^{1.}$ This isn't strictly true. The assumption of linearity is not about the conditional mean, it's about how we combine predictors. I could easily make a non-linear curve using linear combinations of non-linear functions. That being said, this all happens on the log odds scale. | Linear assumption for logistic regression
One way to write the data generating mechanism for logistic regression is as follows
$$ \mbox{logit}(p) = X\beta $$
$$ y \sim \mbox{Binomial}(n , p) $$
From this formulation, we find that the linearit |
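A small simulation sketch of this point (the data-generating values are assumed, and this is not part of the original answer): generating y from logit(p) = -1 + 2x and plotting empirical log odds within bins of x gives an approximately straight line on the log-odds scale, even though p itself is an S-shaped function of x.
# Sketch: linearity holds on the log-odds scale, not on the probability scale
set.seed(6)
x <- runif(1e4, -2, 2)
p <- plogis(-1 + 2 * x)                       # logit(p) = -1 + 2x
y <- rbinom(length(x), 1, p)
bins <- cut(x, breaks = seq(-2, 2, by = 0.5))
phat <- tapply(y, bins, mean)                 # empirical probability in each bin
emp_logit <- qlogis(phat)                     # empirical log odds in each bin
mids <- seq(-1.75, 1.75, by = 0.5)
plot(mids, emp_logit, xlab = "x (bin midpoint)", ylab = "empirical log odds")
abline(-1, 2)                                 # the assumed straight line on this scale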
54,679 | Pair-matched count regression in R with offset? | You should be able to do this with a mixed model with a count response (e.g. Poisson or negative binomial): you want the "standard" count-GLM-with-offset model with random variation in the intercept across IDs:
$$
\begin{split}
\eta_{ij} & = \beta_0 + b_i + \beta_1 E_{ij} + \log(A_{ij}) \\
b_i & \sim N(0, \sigma^2_b) \\
O_{ij} & \sim \textrm{Poisson}(\exp(\eta_{ij}))
\end{split}
$$
where $i$ indexes IDs and $j = \{1,2\}$ indexes observations within IDs. $\beta_0$ denotes the overall (population-level) intercept; $b_i$ denotes the random effects, i.e. the deviation of the intercept from the population-level value for each group. (Most mixed-model software will let you estimate the conditional modes of the $b_i$ distributions, equivalent to BLUPs (best linear unbiased predictors) in the linear mixed model case.) $A_{ij}$ is the area for the $i,j$th observation (i.e. observation $j$ within group $i$), $O_{ij}$ is the occurrence (count response).
If you specified that the model be fitted by restricted maximum likelihood (which is possible for GLMMs in glmmTMB and possibly some other packages), this specification is exactly analogous to a paired t-test, but with Poisson rather than Gaussian responses. If you use maximum likelihood or Bayesian methods (which are more common), it's still pretty close to a "paired Poisson t-test" equivalent.
Since you asked how to do this in R: in lme4 (for example) it would be
glmer(observation ~ exposure + offset(log(area)) + (1|ID),
family = poisson,
data = ...)
Similar models can be fitted with lots of different packages/functions, some with very similar interfaces (lme4::glmer, glmmTMB::glmmTMB), some with different interfaces (GLMMadaptive), some Bayesian (MCMCglmm, rstanarm, brms), etc. | Pair-matched count regression in R with offset? | You should be able to do this with a mixed model with a count response (e.g. Poisson or negative binomial): you want the "standard" count-GLM-with-offset model with random variation in the intercept a | Pair-matched count regression in R with offset?
You should be able to do this with a mixed model with a count response (e.g. Poisson or negative binomial): you want the "standard" count-GLM-with-offset model with random variation in the intercept across IDs:
$$
\begin{split}
\eta_{ij} & = \beta_0 + b_i + \beta_1 E_{ij} + \log(A_{ij}) \\
b_i & \sim N(0, \sigma^2_b) \\
O_{ij} & \sim \textrm{Poisson}(\exp(\eta_{ij}))
\end{split}
$$
where $i$ indexes IDs and $j = \{1,2\}$ indexes observations within IDs. $\beta_0$ denotes the overall (population-level) intercept; $b_i$ denotes the random effects, i.e. the deviation of the intercept from the population-level value for each group. (Most mixed-model software will let you estimate the conditional modes of the $b_i$ distributions, equivalent to BLUPs (best linear unbiased predictors) in the linear mixed model case.) $A_{ij}$ is the area for the $i,j$th observation (i.e. observation $j$ within group $i$), $O_{ij}$ is the occurrence (count response).
If you specified that the model be fitted by restricted maximum likelihood (which is possible for GLMMs in glmmTMB and possibly some other packages), this specification is exactly analogous to a paired t-test, but with Poisson rather than Gaussian responses. If you use maximum likelihood or Bayesian methods (which are more common), it's still pretty close to a "paired Poisson t-test" equivalent.
Since you asked how to do this in R: in lme4 (for example) it would be
glmer(observation ~ exposure + offset(log(area)) + (1|ID),
family = poisson,
data = ...)
Similar models can be fitted lots of different packages/functions, some with very similar interfaces (lme4::glmer, glmmTMB::glmmTMB), some with different interfaces (GLMMadaptive), some Bayesian (MCMCglmm, rstanarm, brms), etc. | Pair-matched count regression in R with offset?
You should be able to do this with a mixed model with a count response (e.g. Poisson or negative binomial): you want the "standard" count-GLM-with-offset model with random variation in the intercept a |
54,680 | Convergence of sum of normal random variables with variance $\frac{1}{\sqrt{i}}$ | First of all, since the distribution of $X_i$ depends on $i$, the sequence of variables is not IID. Rather, you have independent but not identically distributed random variables. In any case, to look at convergence, let's examine the partial sums:
$$S_n \equiv \sum_{i=1}^n X_i.$$
Since the underlying variables are independent normal random variables, we have:
$$S_n \sim \text{N}(0, V_n)
\quad \quad \quad \quad \quad V_n \equiv \sum_{i=1}^n i^{1/2}.$$
Consequently, for any $s \geqslant 0$ we have:
$$\begin{align}
\mathbb{P}(|S_n| > s)
&= \mathbb{P} \bigg( \frac{|S_n|}{\sqrt{V_n}} > \frac{s}{\sqrt{V_n}} \bigg)
= 2 \Phi \bigg( - \frac{s}{\sqrt{V_n}} \bigg), \\[6pt]
\end{align}$$
and since $\lim_{n \rightarrow \infty} V_n = \infty$ we then have:
$$\begin{align}
\lim_{n \rightarrow \infty} \mathbb{P}(|S_n| > s)
&= 2 \Phi (0) = 2 \cdot \frac{1}{2} = 1. \\[6pt]
\end{align}$$
Thus, we can see that for any finite value $s \geqslant 0$ the probability that $|S_n| > s$ will converge to one as $n \rightarrow \infty$. In this sense, the limiting sum of the underlying random variables "explodes" (i.e., it does not converge). | Convergence of sum of normal random variables with variance $\frac{1}{\sqrt{i}}$ | First of all, since the distribution of $X_i$ depends on $i$, the sequence of variables is not IID. Rather, you have independent but not identically distributed random variables. In any case, to loo | Convergence of sum of normal random variables with variance $\frac{1}{\sqrt{i}}$
First of all, since the distribution of $X_i$ depends on $i$, the sequence of variables is not IID. Rather, you have independent but not identically distributed random variables. In any case, to look at convergence, let's examine the partial sums:
$$S_n \equiv \sum_{i=1}^n X_i.$$
Since the underlying variables are independent normal random variables, we have:
$$S_n \sim \text{N}(0, V_n)
\quad \quad \quad \quad \quad V_n \equiv \sum_{i=1}^n i^{1/2}.$$
Consequently, for any $s \geqslant 0$ we have:
$$\begin{align}
\mathbb{P}(|S_n| > s)
&= \mathbb{P} \bigg( \frac{|S_n|}{\sqrt{V_n}} > \frac{s}{\sqrt{V_n}} \bigg)
= 2 \Phi \bigg( - \frac{s}{\sqrt{V_n}} \bigg), \\[6pt]
\end{align}$$
and since $\lim_{n \rightarrow \infty} V_n = \infty$ we then have:
$$\begin{align}
\lim_{n \rightarrow \infty} \mathbb{P}(|S_n| > s)
&= 2 \Phi (0) = 2 \cdot \frac{1}{2} = 1. \\[6pt]
\end{align}$$
Thus, we can see that for any finite value $s \geqslant 0$ the probability that $|S_n| > s$ will converge to one as $n \rightarrow \infty$. In this sense, the limiting sum of the underlying random variables "explodes" (i.e., it does not converge). | Convergence of sum of normal random variables with variance $\frac{1}{\sqrt{i}}$
First of all, since the distribution of $X_i$ depends on $i$, the sequence of variables is not IID. Rather, you have independent but not identically distributed random variables. In any case, to loo |
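A quick numerical check of the argument above (my own addition): the tail probability can be evaluated directly in R. Here the variances $1/\sqrt{i}$ from the question title are used; their sum still diverges, so the same conclusion holds for the $V_n$ written in the answer.
s <- 10
for (n in c(1e2, 1e4, 1e6)) {
  Vn <- sum(1 / sqrt(1:n))                    # Var(S_n) when Var(X_i) = 1/sqrt(i)
  cat("n =", n, " P(|S_n| > s) =", 2 * pnorm(-s / sqrt(Vn)), "\n")
}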
54,681 | Which link function in binomial regression is better? | I think it should be a matter of interpretation of model results. The logit link has you modeling the log odds, or equivalently the multiplicative effect in the odds for a one-unit increase in a covariate. The probit link has you modeling the standard normal percentile, or equivalently, the additive effect in the standard normal percentile for a one-unit increase in a covariate.
From my experience as uncomfortable as odds and odds ratios can be they seem much more tangible and immediately relevant than standard normal percentiles unless the original endpoint was in fact a continuous normally distributed endpoint that was dichotomized for the purposes of binary regression. In that case the probit link would be relevant to talk in terms of percentiles or standard deviations from the mean. Likewise for the Cauchit link if the original data are in fact Cauchy distributed. | Which link function in binomial regression is better? | I think it should be a matter of interpretation of model results. The logit link has you modeling the log odds, or equivalently the multiplicative effect in the odds for a one-unit increase in a cova | Which link function in binomial regression is better?
I think it should be a matter of interpretation of model results. The logit link has you modeling the log odds, or equivalently the multiplicative effect in the odds for a one-unit increase in a covariate. The probit link has you modeling the standard normal percentile, or equivalently, the additive effect in the standard normal percentile for a one-unit increase in a covariate.
From my experience as uncomfortable as odds and odds ratios can be they seem much more tangible and immediately relevant than standard normal percentiles unless the original endpoint was in fact a continuous normally distributed endpoint that was dichotomized for the purposes of binary regression. In that case the probit link would be relevant to talk in terms of percentiles or standard deviations from the mean. Likewise for the Cauchit link if the original data are in fact Cauchy distributed. | Which link function in binomial regression is better?
I think it should be a matter of interpretation of model results. The logit link has you modeling the log odds, or equivalently the multiplicative effect in the odds for a one-unit increase in a cova |
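A short, hedged illustration of the two interpretations (my own example with simulated data, not part of the answer):
set.seed(42)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(-0.5 + 1.2 * x))   # outcome generated on the logit scale
fit_logit  <- glm(y ~ x, family = binomial(link = "logit"))
fit_probit <- glm(y ~ x, family = binomial(link = "probit"))
exp(coef(fit_logit))   # multiplicative effects on the odds (odds ratios)
coef(fit_probit)       # additive effects on the standard normal (probit) scale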
54,682 | Resampling small datasets - Issue of overcounting? | When bootstrapping, we are assuming that the sample is representative of the population.
The whole purpose of bootstrapping is to estimate a sampling distribution and infer the likely standard errors and confidence intervals for the population as a whole.
However, the issue with small sample sizes is that bias is more likely to exist relative to large samples - and (contrary to popular belief) bootstrapping does not fix this bias or remedy the issue of small sample sizes.
For instance, suppose one were to roll a die five times. The numbers 4, 5, 6, 6, 6 are obtained, for an average of 5.4.
If one were to roll a die a hundred times, an average closer to 3.5 could be expected - which is the theoretical mean.
However, small samples have a higher chance of deviating significantly from the population mean and so bootstrap sampling will not remedy this by virtue of simply generating more observations. | Resampling small datasets - Issue of overcounting? | When bootstrapping, we are assuming that the sample is representative of the population.
The whole purpose of bootstrapping is to estimate a sampling distribution and infer the likely standard errors | Resampling small datasets - Issue of overcounting?
When bootstrapping, we are assuming that the sample is representative of the population.
The whole purpose of bootstrapping is to estimate a sampling distribution and infer the likely standard errors and confidence intervals for the population as a whole.
However, the issue with small sample sizes is that bias is more likely to exist relative to large samples - and (contrary to popular belief) bootstrapping does not fix this bias or remedy the issue of small sample sizes.
For instance, suppose one were to roll a die five times. The numbers 4, 5, 6, 6, 6 are obtained, for an average of 5.4.
If one were to roll a die a hundred times, an average closer to 3.5 could be expected - which is the theoretical mean.
However, small samples have a higher chance of deviating significantly from the population mean and so bootstrap sampling will not remedy this by virtue of simply generating more observations. | Resampling small datasets - Issue of overcounting?
When bootstrapping, we are assuming that the sample is representative of the population.
The whole purpose of bootstrapping is to estimate a sampling distribution and infer the likely standard errors |
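To make the dice example concrete, here is a small R sketch of my own (not part of the answer): the mean of five rolls is highly variable, while the mean of a hundred rolls sits close to the theoretical 3.5.
set.seed(2023)
mean(sample(1:6, 5, replace = TRUE))      # one small sample: can sit far from 3.5
mean(sample(1:6, 100, replace = TRUE))    # a larger sample: typically much closer
summary(replicate(1000, mean(sample(1:6, 5, replace = TRUE))))   # spread of 5-roll means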
54,683 | Resampling small datasets - Issue of overcounting? | The issue with small sample sizes is not that you will repeat bootstrap samples but that the original small sample might not be so representative of the population.
Let’s obtain a sample of coin flips from a fair coin, so the true population is $Binom(1, 0.5)$, and let’s use your small sample size of $n=5$. In R…
set.seed(314) # For pi
x <- rbinom(5, 1, 0.5)
I get four $0$s (heads) and one $1$ (tails), which means a $20\%$ chance of tails, rather than the correct $50\%$. When we go to bootstrap this sample, we are telling the bootstrap procedure to sample from a $Binom(1, 0.2)$ distribution, which is quite a bit different from the true $Binom(1, 0.5)$ population.
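Continuing the snippet in code (my addition): resampling this unlucky sample just reproduces its own 20% rate.
boot_means <- replicate(10^4, mean(sample(x, size = 5, replace = TRUE)))
mean(boot_means)    # centred near the sample's 0.2, not near the true 0.5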
When the sample size is larger, we are less likely to have a sample that is so drastically different from the population. | Resampling small datasets - Issue of overcounting? | The issue with small sample sizes is not that you will repeat bootstrap samples but that the original small sample might not be so representative of the population.
Let’s obtain a sample of coin flips | Resampling small datasets - Issue of overcounting?
The issue with small sample sizes is not that you will repeat bootstrap samples but that the original small sample might not be so representative of the population.
Let’s obtain a sample of coin flips from a fair coin, so the true population is $Binom(1, 0.5)$, and let’s use your small sample size of $n=5$. In R…
set.seed(314) # For pi
x <- rbinom(5, 1, 0.5)
I get four $0$s (heads) and one $1$ (tails), which means a $20\%$ chance of tails, rather than the correct $50\%$. When we go to bootstrap this sample, we are telling the bootstrap procedure to sample from a $Binom(1, 0.2)$ distribution, which is quite a bit different from the true $Binom(1, 0.5)$ population.
When the sample size is larger, we are less likely to have a sample that is so drastically different from the population. | Resampling small datasets - Issue of overcounting?
The issue with small sample sizes is not that you will repeat bootstrap samples but that the original small sample might not be so representative of the population.
Let’s obtain a sample of coin flips |
54,684 | Resampling small datasets - Issue of overcounting? | Many bootstrap replications are drawn in order to approximate (in a standard situation) the distribution of i.i.d. samples from the empirical distribution. Now if you can get this distribution explicitly, which is possible with a small sample as you can list all possible samples and their probabilities, it isn't necessary to approximate it by a much larger number of random bootstrap samples. Instead you can just use the full distribution of bootstrap samples (obviously potential problems with a lack of representativity of your small sample as mentioned in other answers still exist, but I believe this was not the question).
Note by the way that i.i.d. sampling from the empirical distribution will produce uniform probabilities over ordered rather than distinct samples, meaning that if you want to emulate the true bootstrap distribution by a set of samples, you will need some repetition of most of the 126 distinct samples. This would be approximated by randomly taking a very large number of bootstrap samples, so multiple counting of samples is not the issue here, rather what happens is you're using more computing power than necessary to do something less precise than possible with less computing effort. | Resampling small datasets - Issue of overcounting? | Many bootstrap replications are drawn in order to approximate (in a standard situation) the distribution of i.i.d. samples from the empirical distribution. Now if you can get this distribution explici | Resampling small datasets - Issue of overcounting?
Many bootstrap replications are drawn in order to approximate (in a standard situation) the distribution of i.i.d. samples from the empirical distribution. Now if you can get this distribution explicitly, which is possible with a small sample as you can list all possible samples and their probabilities, it isn't necessary to approximate it by a much larger number of random bootstrap samples. Instead you can just use the full distribution of bootstrap samples (obviously potential problems with a lack of representativity of your small sample as mentioned in other answers still exist, but I believe this was not the question).
Note by the way that i.i.d. sampling from the empirical distribution will produce uniform probabilities over ordered rather than distinct samples, meaning that if you want to emulate the true bootstrap distribution by a set of samples, you will need some repetition of most of the 126 distinct samples. This would be approximated by randomly taking a very large number of bootstrap samples, so multiple counting of samples is not the issue here, rather what happens is you're using more computing power than necessary to do something less precise than possible with less computing effort. | Resampling small datasets - Issue of overcounting?
Many bootstrap replications are drawn in order to approximate (in a standard situation) the distribution of i.i.d. samples from the empirical distribution. Now if you can get this distribution explici |
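For a sample of size five, the full bootstrap distribution discussed above can indeed be enumerated. Here is a rough R sketch of my own, using a made-up sample of five values.
x <- c(2.1, 3.4, 3.7, 5.0, 6.2)                 # hypothetical sample of size 5
idx <- expand.grid(rep(list(1:5), 5))            # all 5^5 = 3125 ordered resamples
boot_means <- apply(idx, 1, function(i) mean(x[i]))
length(unique(apply(idx, 1, function(i) paste(sort(i), collapse = ""))))   # 126 distinct samples
table(round(boot_means, 6))                      # exact bootstrap distribution of the mean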
54,685 | Is my understanding of the Metropolis sampling algorithm correct? | This is a correct description of the (symmetric) Metropolis-[Rosenbluth²]-Hastings algorithm, but the motivation is not:
(i) ignoring the normalising constant $Z_p$ is common to a lot of simulation techniques, like accept-reject methods. The normalising constant is useful when computing the cdf and using the inverse cdf technique, but otherwise, it rarely matters. The reasons for using MCMC methods are rather that the (e.g., posterior) distributions to be simulated are complex, high-dimension, non-standard distributions.
(ii) the posterior must be available analytically in the sense that $$p(\theta)p(x|\theta)$$ must be computable for the observed value $x$ and an arbitrary $\theta$, up to a constant (in $\theta$). Otherwise, the random variable $\theta$ must be completed by an auxiliary variable to augment $p(\theta|x)$ into a joint distribution $q(\theta,z|x)$ that can be computed or simulated (as in data augmentation). | Is my understanding of the Metropolis sampling algorithm correct? | This is a correct description of the (symmetric) Metropolis-[Rosenbluth²]-Hastings algorithm, but the motivation is not:
(i) ignoring the normalising constant $Z_p$ is common to a lot of simulation te | Is my understanding of the Metropolis sampling algorithm correct?
This is a correct description of the (symmetric) Metropolis-[Rosenbluth²]-Hastings algorithm, but the motivation is not:
(i) ignoring the normalising constant $Z_p$ is common to a lot of simulation techniques, like accept-reject methods. The normalising constant is useful when computing the cdf and using the inverse cdf technique, but otherwise, it rarely matters. The reasons for using MCMC methods are rather that the (e.g., posterior) distributions to be simulated are complex, high-dimension, non-standard distributions.
(ii) the posterior must be available analytically in the sense that $$p(\theta)p(x|\theta)$$ must be computable for the observed value $x$ and an arbitrary $\theta$, up to a constant (in $\theta$). Otherwise, the random variable $\theta$ must be completed by an auxiliary variable to augment $p(\theta|x)$ into a joint distribution $q(\theta,z|x)$ that can be computed or simulated (as in data augmentation). | Is my understanding of the Metropolis sampling algorithm correct?
This is a correct description of the (symmetric) Metropolis-[Rosenbluth²]-Hastings algorithm, but the motivation is not:
(i) ignoring the normalising constant $Z_p$ is common to a lot of simulation te |
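As an illustration of point (i) above, here is a minimal random-walk Metropolis sketch of my own (not from the answer); it only ever evaluates an unnormalised target density.
log_target <- function(theta) -theta^4 / 4 - theta^2 / 2   # unnormalised log density
set.seed(123)
n_iter <- 5000
theta  <- numeric(n_iter)
for (t in 2:n_iter) {
  prop <- theta[t - 1] + rnorm(1)                          # symmetric proposal
  log_alpha <- log_target(prop) - log_target(theta[t - 1])
  theta[t] <- if (log(runif(1)) < log_alpha) prop else theta[t - 1]
}
mean(theta); var(theta)    # summaries of the draws from the unnormalised target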
54,686 | Finding a Common Thread in Disparate Indicators | Have you considered the bifactor measurement model? From what I can glean from your question, it looks as though you are interested in a single social norms dimension with an item bank that could reasonably be used to measure very different dimensions (e.g., alcohol abuse). If I am correct, the bifactor model may be the solution you are looking for. It assumes a single primary dimension (i.e., the social norms dimension) that all items are related to, as well as multiple secondary dimensions (e.g., an alcohol abuse dimension) that account for variation in an item cluster (e.g., alcohol abuse items) orthogonal to the primary dimension of interest.
Such a setup purifies the primary dimension by removing those aspects of each item cluster (e.g., drinking behavior unrelated to social norms) that are not shared among all of the items loading onto the primary dimension.
Most applications of the bifactor are confirmatory, though you can still run an exploratory bifactor analysis (e.g., Jennrich & Bentler, 2011). Additionally, you can estimate parameters using factor analysis or item response theory (e.g., DeMars, 2013; Reise et al., 2007; Wirth & Edwards, 2007).
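For concreteness, a hedged lavaan sketch of a confirmatory bifactor specification (the item names y1–y9, the cluster structure, and the data frame mydata are all made up for illustration):
library(lavaan)
bifactor_model <- '
  g  =~ y1 + y2 + y3 + y4 + y5 + y6 + y7 + y8 + y9   # general (primary) dimension
  s1 =~ y1 + y2 + y3                                  # e.g. alcohol-related items
  s2 =~ y4 + y5 + y6
  s3 =~ y7 + y8 + y9
'
fit <- cfa(bifactor_model, data = mydata, orthogonal = TRUE, std.lv = TRUE)
summary(fit, fit.measures = TRUE, standardized = TRUE)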
References
DeMars, C. E. (2013). A tutorial on interpreting bifactor model scores. International Journal of Testing, 13(4), 354-378.
Jennrich, R. I., & Bentler, P. M. (2011). Exploratory bi-factor analysis. Psychometrika, 76(4), 537-549.
Reise, S. P., Morizot, J., & Hays, R. D. (2007). The role of the bifactor model in resolving dimensionality issues in health outcomes measures. Quality of Life Research, 16(1), 19-31.
Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: current approaches and future directions. Psychological methods, 12(1), 58. | Finding a Common Thread in Disparate Indicators | Have you considered the bifactor measurement model? From what I can glean from your question, it looks as though you are interested in a single social norms dimension with an item bank that could reas | Finding a Common Thread in Disparate Indicators
Have you considered the bifactor measurement model? From what I can glean from your question, it looks as though you are interested in a single social norms dimension with an item bank that could reasonably be used to measure very different dimensions (e.g., alcohol abuse). If I am correct, the bifactor model may be the solution you are looking for. It assumes a single primary dimension (i.e., the social norms dimension) that all items are related to, as well as multiple secondary dimensions (e.g., an alcohol abuse dimension) that account for variation in an item cluster (e.g., alcohol abuse items) orthogonal to the primary dimension of interest.
Such a setup purifies the primary dimension from aspects of the item cluster (e.g., drinking behavior unrelated to social norms) unrelated to what is shared among all items that load onto the primary dimension.
Most applications of the bifactor are confirmatory, though you can still run an exploratory bifactor analysis (e.g., Jennrich & Bentler, 2011). Additionally, you can estimate parameters using factor analysis or item response theory (e.g., DeMars, 2013; Reise et al., 2007; Wirth & Edwards, 2007).
References
DeMars, C. E. (2013). A tutorial on interpreting bifactor model scores. International Journal of Testing, 13(4), 354-378.
Jennrich, R. I., & Bentler, P. M. (2011). Exploratory bi-factor analysis. Psychometrika, 76(4), 537-549.
Reise, S. P., Morizot, J., & Hays, R. D. (2007). The role of the bifactor model in resolving dimensionality issues in health outcomes measures. Quality of Life Research, 16(1), 19-31.
Wirth, R. J., & Edwards, M. C. (2007). Item factor analysis: current approaches and future directions. Psychological methods, 12(1), 58. | Finding a Common Thread in Disparate Indicators
Have you considered the bifactor measurement model? From what I can glean from your question, it looks as though you are interested in a single social norms dimension with an item bank that could reas |
54,687 | Finding a Common Thread in Disparate Indicators | This adds a bit more to Preston's answer. I approve of the Reise citation.
However, I'd urge you to consider how closely related the traits you're trying to measure are.
He has a whole range of disparate measures of acting-under-social-norms (from alcohol abuse, to registering with a doctor, etc). We don't want to split these into separate factors, but rather to reveal an underlying latent variable that (we hope) is common to all of them...
You only named two, but alcohol abuse and registering with a doctor don't seem strongly related to me. By my reading of Reise, if you have a bunch of closely related but not totally identical latent traits, it's acceptable to use a bifactor model's primary factor. From my field, cognitive, affect, and somatic symptoms of depression are probably closely related enough that you could use a bifactor structure to produce a summary depression score. If you were to administer a bunch of items for those 3 traits and estimate 3 different scores, the scores would probably be highly correlated.
If you have a bunch of latent traits that are weakly related, maybe even close to independent, then I am not sure what a bifactor model gains you. I bet you can still estimate one, I'm just not sure what the gain is relative to calculating separate factor scores.
Clearly, we don't want to do a "rotation" because that is doing the exact opposite of what we want: to separate them into cohesive clusters of indicators, whereas we want actually something that (to some extent) can underly all of them.
Actually, I think that with an EFA you do want to rotate the factor solution while exploring the data structure. This will show how correlated the identified factors are. If the factors are reasonably correlated and you can make a theoretical case that they're linked through an overarching construct, then you can go ahead - how much is "reasonably" is going to be a judgment call, as with many things in statistics, and the same applies for making that theoretical case. In subsequent analysis, you'd use a bifactor structure if it were justified.
I haven't read in depth, but I believe that it is possible to perform a bifactor rotation in an exploratory model; this is possibly what some of the sources in the other answer refer to. | Finding a Common Thread in Disparate Indicators | This adds a bit more to Preston's answer. I approve of the Reise citation.
However, I'd urge you to consider how closely related the traits you're trying to measure are.
He has a whole range of dispa | Finding a Common Thread in Disparate Indicators
This adds a bit more to Preston's answer. I approve of the Reise citation.
However, I'd urge you to consider how closely related the traits you're trying to measure are.
He has a whole range of disparate measures of acting-under-social-norms (from alcohol abuse, to registering with a doctor, etc). We don't want to split these into separate factors, but rather to reveal an underlying latent variable that (we hope) is common to all of them...
You only named two, but alcohol abuse and registering with a doctor don't seem strongly related to me. By my reading of Reise, if you have a bunch of closely related but not totally identical latent traits, it's acceptable to use a bifactor model's primary factor. From my field, cognitive, affect, and somatic symptoms of depression are probably closely related enough that you could use a bifactor structure to produce a summary depression score. If you were to administer a bunch of items for those 3 traits and estimate 3 different scores, the scores would probably be highly correlated.
If you have a bunch of latent traits that are weakly related, maybe even close to independent, then I am not sure what a bifactor model gains you. I bet you can still estimate one, I'm just not sure what the gain is relative to calculating separate factor scores.
Clearly, we don't want to do a "rotation" because that is doing the exact opposite of what we want: to separate them into cohesive clusters of indicators, whereas we want actually something that (to some extent) can underly all of them.
Actually, I think that with an EFA you do want to rotate the factor solution while exploring the data structure. This will show how correlated the identified factors are. If the factors are reasonably correlated and you can make a theoretical case that they're linked through an overarching construct, then you can go ahead - how much is "reasonably" is going to be a judgment call, as with many things in statistics, and the same applies for making that theoretical case. In subsequent analysis, you'd use a bifactor structure if it were justified.
I haven't read in depth, but I believe that it is possible to perform a bifactor rotation in an exploratory model; this is possibly what some of the sources in the other answer refer to. | Finding a Common Thread in Disparate Indicators
This adds a bit more to Preston's answer. I approve of the Reise citation.
However, I'd urge you to consider how closely related the traits you're trying to measure are.
He has a whole range of dispa |
54,688 | Why do you need non-linear regression if you can use a linear one to fit any kind of curvature to your data? | Model Parsimony
If you have a sine curve, you can approximate it to arbitrary accuracy with its series expansion.
I’d probably rather estimate the two parameters of $\mathbb E[y]= A\sin(Bx)$ than the many parameters in a long series expansion.
Note that, because the $B$ is inside the nonlinear sine function, you cannot create the estimated-frequency sine curve with a sine basis function; you would have to pick a $B$, rather than estimate it from the data.
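A small nls() sketch of my own (with made-up data and starting values) showing both parameters being estimated directly:
set.seed(1)
x <- seq(0, 10, by = 0.1)
y <- 2 * sin(1.5 * x) + rnorm(length(x), sd = 0.3)       # true A = 2, B = 1.5
fit <- nls(y ~ A * sin(B * x), start = list(A = 1, B = 1.4))
coef(fit)    # estimates of the amplitude A and the frequency parameter B
As with most nonlinear least-squares fits, the starting value for $B$ has to be in the right neighbourhood for the optimiser to find the intended solution.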
Interpretation
Parameters in the nonlinear equation can have interpretations of interest. In the above equation, $A$ is the amplitude and $B$ relates to the frequency. Perhaps you can wrestle with a long polynomial that approximates the sine curve in order to get at frequency and amplitude, but they are immediate from the nonlinear equation. | Why do you need non-linear regression if you can use a linear one to fit any kind of curvature to yo | Model Parsimony
If you have a sine curve, you can approximate it to arbitrary accuracy with its series expansion.
I’d probably rather estimate the two parameters of $\mathbb E[y]= A\sin(Bx)$ than the | Why do you need non-linear regression if you can use a linear one to fit any kind of curvature to your data?
Model Parsimony
If you have a sine curve, you can approximate it to arbitrary accuracy with its series expansion.
I’d probably rather estimate the two parameters of $\mathbb E[y]= A\sin(Bx)$ than the many parameters in a long series expansion.
Note that, because the $B$ is inside the nonlinear sine function, you cannot create the estimated-frequency sine curve with a sine basis function; you would have to pick a $B$, rather than estimate it from the data.
Interpretation
Parameters in the nonlinear equation can have interpretations of interest. In the above equation, $A$ is the amplitude and $B$ relates to the frequency. Perhaps you can wrestle with a long polynomial that approximates the sine curve in order to get at frequency and amplitude, but they are immediate from the nonlinear equation. | Why do you need non-linear regression if you can use a linear one to fit any kind of curvature to yo
Model Parsimony
If you have a sine curve, you can approximate it to arbitrary accuracy with its series expansion.
I’d probably rather estimate the two parameters of $\mathbb E[y]= A\sin(Bx)$ than the |
54,689 | CDF of Dirichlet Distribution | The Dirichlet distribution is either defined on the simplex of $\mathbb R^k$,
$$\mathcal S_{k-1}=\big\{\mathbf x;\ x_i\in (0,1),~i=1,2,\ldots,k,~\sum_{i=1}^k x_i=1\big\}$$
in which case the density
$$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}$$
is with respect to the Lebesgue distribution over that simplex, or defined in $\mathbb R^{k-1}$, in which case the density
$$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1}$$
is with respect to the Lebesgue distribution over $\mathbb R^{k-1}$.
The latter is Wikipedia's definition, albeit poorly written, since it is expressed as a function of $k$ terms.
A particular instance of the latter is the family of Beta distributions, which illustrates why it is not feasible to derive a closed-form cdf, except for small integer values of the parameters $a_i$:
$$\mathbb P_{\alpha,\beta}(X\le \epsilon)=\dfrac{B(\epsilon;\alpha,\beta)}{B(\alpha,\beta)}\quad0\le\epsilon\le 1$$
where $B(\epsilon;\alpha,\beta)$ is the so-called incomplete Beta function (and an acknowledgement of the absence of closed form!).
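A quick numerical check in R (my addition): pbeta() is exactly this regularised incomplete Beta ratio, so no closed form is needed in practice.
a <- 2.5; b <- 3.7; eps <- 0.4                               # arbitrary illustrative values
pbeta(eps, a, b)                                             # cdf at eps
integrate(function(x) dbeta(x, a, b), 0, eps)$value          # same value by quadrature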
Both representations obviously lead to the same distribution, but writing events such as $\mathbb P(\mathbf X\in A)$ will depend on which representation is used for $A$, i.e., either $A\subset\mathcal S_{k-1}$ or $A\subset\mathbb R^{k-1}$. In the former case,
$$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}\,\text d\lambda_{\mathcal S_{k-1}}(\mathbf x)$$
and in the latter
$$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1}
\,\text dx_1\cdots\,\text dx_{k-1}$$ | CDF of Dirichlet Distribution | The Dirichlet distribution is either defined on the simplex of $\mathbb R^k$,
$$\mathcal S_{k-1}=\big\{\mathbf x;\ x_i\in (0,1),~i=1,2,\ldots,k,~\sum_{i=1}^k x_i=1\big\}$$
in which case the density
$$ | CDF of Dirichlet Distribution
The Dirichlet distribution is either defined on the simplex of $\mathbb R^k$,
$$\mathcal S_{k-1}=\big\{\mathbf x;\ x_i\in (0,1),~i=1,2,\ldots,k,~\sum_{i=1}^k x_i=1\big\}$$
in which case the density
$$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}$$
is with respect to the Lebesgue distribution over that simplex, or defined in $\mathbb R^{k-1}$, in which case the density
$$f(\mathbf x) = \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1}$$
is with respect to the Lebesgue distribution over $\mathbb R^{k-1}$.
The later is Wikipedia's definition albeit poorly written since written as a function of $k$ terms.
A particular instance of the later is the family of Beta distributions, which illustrates why it is not feasible to derive an closed form cdf, except for small integer values of the parameters $a_i$:
$$\mathbb P_{\alpha,\beta}(X\le \epsilon)=\dfrac{B(\epsilon;\alpha,\beta)}{B(\alpha,\beta)}\quad0\le\epsilon\le 1$$
where $B(\epsilon;\alpha,\beta)$ is the so-called incomplete Beta function (and an acknowledgement of the absence of closed form!).
Both representations obviously lead to the same distribution, but writing events such as $\mathbb P(\mathbf X\in A)$ will depend on which representation is used for $A$, i.e., either $A\subset\mathcal S_{k-1}$ or $A\subset\mathbb R^{k-1}$. In the former case,
$$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k}x_{i}^{a_{i}-1}\,\text d\lambda_{\mathcal S_{k-1}}(\mathbf x)$$
and in the later
$$\mathbb P(\mathbf X\in A)=\int_A \frac{1}{B(\textbf{a})}\prod_{i=1}^{k-1}x_{i}^{a_{i}-1}(1-x_1-\cdots-x_{k-1})^{a_k-1}
\,\text dx_1\cdots\,\text dx_{k-1}$$ | CDF of Dirichlet Distribution
The Dirichlet distribution is either defined on the simplex of $\mathbb R^k$,
$$\mathcal S_{k-1}=\big\{\mathbf x;\ x_i\in (0,1),~i=1,2,\ldots,k,~\sum_{i=1}^k x_i=1\big\}$$
in which case the density
$$ |
54,690 | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1? | This is an example of a poorly worded question. If one were to interpret it strictly as written, one has a sample size of $n=500$ and a population size of $N=500$, so yes, the population mean is certain to be \$750 (and so the probability that this mean is greater than \$755 is known to be zero). If you were to give this answer to the question, that would be correct in my view. Nevertheless, in view of the answer given, it appears that the writer of the question intended to treat the sample of $n=500$ customers as a random sample of a "large" population ($N=\infty$) and the resulting calculations are consistent with that.
For these types of questions, it is worth noting that the confidence interval formula for a population mean can be written in a way that allows a finite or infinite population $n \leqslant N \leqslant \infty$. The general formula for the confidence interval for the population mean (see e.g., O'Neill 2014, pp. 285-286) is:
$$\text{CI}_N(1-\alpha) = \Bigg[ \bar{x}_n \pm \frac{t_{\alpha/2,DF_n}}{\sqrt{n}} \cdot \sqrt{\frac{N-n}{N}} \cdot s_n \Bigg],$$
where $DF_n = n-1$ for a mesokurtic distribution (e.g., the normal distribution). You can easily confirm that this interval reduces to a single point given by the sample mean in the special case where $n=N$ and reduces to the standard form used for a "large" population when $N=\infty$.
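A small R sketch of this interval (my own helper function, not code from the cited paper):
ci_finite_pop <- function(xbar, s, n, N = Inf, alpha = 0.05) {
  fpc  <- if (is.finite(N)) sqrt((N - n) / N) else 1    # finite-population correction
  half <- qt(1 - alpha / 2, df = n - 1) * fpc * s / sqrt(n)
  c(lower = xbar - half, upper = xbar + half)
}
ci_finite_pop(750, 900, 500)            # "large" population, as the question intends
ci_finite_pop(750, 900, 500, N = 500)   # n = N: the interval collapses to the sample mean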
How to re-word the question: In your bounty request you have asked how the question could be better worded to properly express the query reflected in the posted answer. To re-word the query, it would be important to be clear that $n=500$ and $N=\infty$ in this problem. (For the latter we usually just refer to the population as "large" --- see this related answer for an explanation.) It is also desirable to specify that the observed customers are a random sample of the population. Something like this would be an appropriate wording:
Question: A micro-loan bank has a large number of loan customers. Analysts at the bank take a random sample of 500 of their loan customers and examine the total annual loan repayments made by each of the sampled customers --- they find a mean of \$750 and a standard deviation of \$900 from these values. Use this data to approximate the probability that the average total annual repayments made across all customers at the bank is greater than \$755. | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1? | This is an example of a poorly worded question. If one were to interpret it strictly as written, one has a sample size of $n=500$ and a population size of $N=500$, so yes, the population mean is cert | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1?
This is an example of a poorly worded question. If one were to interpret it strictly as written, one has a sample size of $n=500$ and a population size of $N=500$, so yes, the population mean is certain to be \$750 (and so the probability that this mean is greater than \$755 is known to be zero). If you were to give this answer to the question, that would be correct in my view. Nevertheless, in view of the answer given, it appears that the writer of the question intended to treat the sample of $n=500$ customers as a random sample of a "large" population ($N=\infty$) and the resulting calculations are consistent with that.
For these types of questions, it is worth noting that the confidence interval formula for a population mean can be written in a way that allows a finite or infinite population $n \leqslant N \leqslant \infty$. The general formula for the confidence interval for the population mean (see e.g., O'Neill 2014, pp. 285-286) is:
$$\text{CI}_N(1-\alpha) = \Bigg[ \bar{x}_n \pm \frac{t_{\alpha/2,DF_n}}{\sqrt{n}} \cdot \sqrt{\frac{N-n}{N}} \cdot s_n \Bigg],$$
where $DF_n = n-1$ for a mesokurtic distribution (e.g., the normal distribution). You can easily confirm that this interval reduces to a single point given by the sample mean in the special case where $n=N$ and reduces to the standard form used for a "large" population when $N=\infty$.
How to re-word the question: In your bounty request you have asked how the question could be better worded to properly express the query reflected in the posted answer. To re-word the query, it would be important to be clear that $n=500$ and $N=\infty$ in this problem. (For the latter we usually just refer to the population as "large" --- see this related answer for an explanation.) It is also desirable to specify that the observed customers are a random sample of the population. Something like this would be an appropriate wording:
Question: A micro-loan bank has a large number of loan customers. Analysts at the bank take a random sample of 500 of their loan customers and examine the total annual loan repayments made by each of the sampled customers --- they find a mean of \$750 and a standard deviation of \$900 from these values. Use this data to approximate the probability that the average total annual repayments made across all customers at the bank is greater than \$755. | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1?
This is an example of a poorly worded question. If one were to interpret it strictly as written, one has a sample size of $n=500$ and a population size of $N=500$, so yes, the population mean is cert |
54,691 | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1? | The first Comment of @COOLSerdash is correct. The wording of the problem is somewhat confusing.
Moreover, the choice of numbers leads to a z-value that needs to be rounded for use of a printed table, thus we get a noticeable rounding error in the posted answer.
You have $\bar X =\bar X_{500} \sim\mathsf{Norm}(\mu=750, \,\sigma=900/\sqrt{500}),$ and you seek $P(\bar X > 755) = 1-P(\bar X \le 755) = 0.4505682,$ exactly. (Using R:)
1 - pnorm(755, 750, 900/sqrt(500))
[1] 0.4505682
If you were to standardize, then $z = 0.124226.$
z = (755-750)/(900/sqrt(500)); z
[1] 0.124226
Then the exact answer is $P(Z > z) = 1 - P(Z \le z) =0.4505682,$ exactly (same as above). So there is no essential error from standardizing.
1 - pnorm(z)
[1] 0.4505682
However, using printed tables without interpolation, you have to round $z$ to two places in order to enter a table that rounds probabilities to four places.
As in the posted 'answer' you
would get $0.4522,$ which results from rounding twice.
round(1 - pnorm(round(z,2)), 4)
[1] 0.4522
There may be little practical difference between 0.4506 (correct to four places) and 0.4522. But it can be frustrating to use badly designed
homework software that requires results "correct to four places," if you
give the correct value to four places and the software emulates imprecise
use of printed tables. | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1? | The first Comment of @COOLSerdash is correct. The wording of the problem is somewhat confusing.
Moreover, the choice of numbers leads to a z-value that needs to be rounded for use of a printed table, | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1?
The first Comment of @COOLSerdash is correct. The wording of the problem is somewhat confusing.
Moreover, the choice of numbers leads to a z-value that needs to be rounded for use of a printed table, thus we get a noticeable rounding error in the posted answer.
You have $\bar X =\bar X_{500} \sim\mathsf{Norm}(\mu=750, \,\sigma=900/\sqrt{500}),$ and you seek $P(\bar X > 755) = 1-P(\bar X \le 755) = 0.4505682,$ exactly. (Using R:)
1 - pnorm(755, 750, 900/sqrt(500))
[1] 0.4505682
If you were to standardize, then $z = 0.124226.$
z = (755-750)/(900/sqrt(500)); z
[1] 0.124226
Then the exact answer is $P(Z > z) = 1 - P(Z \le z) =0.4505682,$ exactly (same as above). So there is no essential error from standardizing.
1 - pnorm(z)
[1] 0.4505682
However, using printed tables without interpolation, you have to round $z$ to two places in order to enter a table that rounds probabilities to four places.
As in the posted 'answer' you
would get $0.4522,$ which results from rounding twice.
round(1 - pnorm(round(z,2)), 4)
[1] 0.4522
There may be little practical difference between 0.4506 (correct to four places) and 0.4522. But it can be frustrating to use badly designed
homework software that requires results "correct to four places," if you
give the correct value to four places and the software emulates imprecise
use of printed tables. | Why do we apply the sample mean version of the CLT for a problem involving a sample size of 1?
The first Comment of @COOLSerdash is correct. The wording of the problem is somewhat confusing.
Moreover, the choice of numbers leads to a z-value that needs to be rounded for use of a printed table, |
54,692 | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA? | ANOVA is a method that arises within the context of regression models, so I recommend you read some books on regression modelling (see related answer here). It is difficult to recommend a specific book without more knowledge of your strengths and weaknesses, but you should be able to find a book or notes that derive linear regression using vector algebra. That is probably the best way to learn it for someone who already has a maths degree. | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA? | ANOVA is a method that arises within the context of regression models, so I recommend you read some books on regression modelling (see related answer here). It is difficult to recommend a specific bo | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA?
ANOVA is a method that arises within the context of regression models, so I recommend you read some books on regression modelling (see related answer here). It is difficult to recommend a specific book without more knowledge of your strengths and weaknesses, but you should be able to find a book or notes that derive linear regression using vector algebra. That is probably the best way to learn it for someone who already has a maths degree. | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA?
ANOVA is a method that arises within the context of regression models, so I recommend you read some books on regression modelling (see related answer here). It is difficult to recommend a specific bo |
54,693 | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA? | If you want more of a softer, practioner approach to the topic, I would encourage looking at the appropriate chapter within a book on DOE. Box, Hunter, Hunter is the classic reference (78 edition best). A shorter, but serviceable book is Barker Quality by Experimental Design. Both of these books have a lot of non-ANOVA. But you might appreciate what's in there also (type I/II errors, survey design, etc.)
Also, really there are many decent Youtube videos that give you the basics of the topic if you want something quick.
P.s. As for your MESE client (ugh, lost my login, mea culpa), (1) part of growing up is learning how to handle curmudgeons. Turn it into a joke. Or let it slide off. Or tease back a little. But don't get so ruffled so fast. (2) He told me he needed it (proofs) is not a good excuse. Be more astute. Don't accept things at face value. This applies to life/work in general, not just teaching. Bad assumptions are more often the flaw in a model than the formula. Also, "everyone should know continuity" and definitions of limits are not a good use of time for a trainee who needs first to refresh basic manipulations. Prioritize. | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA? | If you want more of a softer, practioner approach to the topic, I would encourage looking at the appropriate chapter within a book on DOE. Box, Hunter, Hunter is the classic reference (78 edition bes | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA?
If you want more of a softer, practitioner approach to the topic, I would encourage looking at the appropriate chapter within a book on DOE. Box, Hunter, Hunter is the classic reference (the 1978 edition is best). A shorter, but serviceable book is Barker's Quality by Experimental Design. Both of these books have a lot of non-ANOVA material. But you might appreciate what's in there also (type I/II errors, survey design, etc.)
Also, really there are many decent Youtube videos that give you the basics of the topic if you want something quick.
P.s. As for your MESE client (ugh, lost my login, mea culpa), (1) part of growing up is learning how to handle curmudgeons. Turn it into a joke. Or let it slide off. Or tease back a little. But don't get so ruffled so fast. (2) He told me he needed it (proofs) is not a good excuse. Be more astute. Don't accept things at face value. This applies to life/work in general, not just teaching. Bad assumptions are more often the flaw in a model than the formula. Also, "everyone should know continuity" and definitions of limits are not a good use of time for a trainee who needs first to refresh basic manipulations. Prioritize. | After "Statistics" by Freedman, Pisani, and Purves what book is good for ANOVA?
If you want more of a softer, practioner approach to the topic, I would encourage looking at the appropriate chapter within a book on DOE. Box, Hunter, Hunter is the classic reference (78 edition bes |
54,694 | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions | You already got my +1 for drawing attention to Westfall (2014). A few thoughts:
Unimodality is of course important, and also quite a restriction. If you have multiple peaks, then all kinds of strange things can happen. And if you don't have a peak at all (e.g., a gamma distribution with shape $k<1$), then your $f_{\text{max}}$ is not even defined.
The next problem is when your density is noncontinuous. There may not be a point where it takes a value of $\frac{f_{\text{max}}}{2}$. But of course you can get around this by using suprema and infima as appropriate.
Here are a few well-behaved (that is, unimodal and symmetric) examples, all with your KPI value of $2$ (axes aligned for comparability):
I don't quite know whether I would call all of these "equally tall-and-skinny". Equally tall, yes, but not really equally skinny. Essentially, the two flanks rotate around their halfway point (indicated by red dots) between the four panels, and this constant halfway point is at the constant $x_{1,2}$. But of course all this is subjective.
These distributions all have the following form:
$$ f(x) = \begin{cases}
0, & x<-b \\
f_{\text{max}}\cdot\frac{b+x}{b-a}, & -b\leq x < -a \\
f_{\text{max}}, & -a\leq x<a \\
f_{\text{max}}\cdot\frac{b-x}{b-a}, & a\leq x <b \\
0, & b\leq x
\end{cases} $$
for appropriate parameters $0\leq a<b$ and $f_{\text{max}}$. To ensure these integrate to $1$, we set for given $a$ and $b$
$$ f_{\text{max}}:= \frac{1}{a+b}. $$
To ensure that they have a prespecified value of your KPI
$$ \text{KPI} = \frac{f_{\text{max}}}{a+b}=\frac{1}{(a+b)^2} $$
(because $x_{1,2}=\pm\frac{a+b}{2}$), we use $a$ as a free parameter and set
$$ b := \frac{1}{\sqrt{\text{KPI}}} - a. $$
The plots have $a\in\big\{0,\frac{1}{10},\frac{2}{10},\frac{3}{10}\big\}$.
R code:
KPI <- 2
opar <- par(mfrow=c(2,2),las=1,mai=c(.5,.5,.1,.1))
y_max <- sqrt(KPI)
aa <- c(0,0.1,0.2,0.3)
bb <- 1/sqrt(KPI)-aa
f_max <- 1/(aa+bb)
xx <- seq(-1.1/sqrt(KPI),1.1/sqrt(KPI),by=0.01)
ff <- Vectorize(FUN=function(xx,aa,bb,f_max) {
if ( xx < -bb ) return(0)
if ( -bb <= xx & xx < -aa ) return(f_max*(bb+xx)/(bb-aa))
if ( -aa <= xx & xx < aa ) return(f_max)
if ( aa <= xx & xx < bb ) return(f_max*(bb-xx)/(bb-aa))
if ( bb <= xx ) return(0)
},vectorize.args="xx")
for ( ii in seq_along(aa) ) {
plot(xx,ff(xx,aa[ii],bb[ii],f_max[ii]),ylim=c(0,y_max),type="l",xlab="",ylab="")
halfway <- c(-1,1)*(aa[ii]+bb[ii])/2
points(halfway,ff(halfway,aa[ii],bb[ii],f_max[ii]),pch=19,cex=1.5,col="red")
}
par(opar) | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions | You already got my +1 for drawing attention to Westfall (2014). A few thoughts:
Unimodality is of course important, and also quite a restriction. If you have multiple peaks, then all kinds of strange | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions
You already got my +1 for drawing attention to Westfall (2014). A few thoughts:
Unimodality is of course important, and also quite a restriction. If you have multiple peaks, then all kinds of strange things can happen. And if you don't have a peak at all (e.g., a gamma distribution with shape $k<1$), then your $f_{\text{max}}$ is not even defined.
The next problem is when your density is noncontinuous. There may not be a point where it takes a value of $\frac{f_{\text{max}}}{2}$. But of course you can get around this by using suprema and infima as appropriate.
Here are a few well-behaved (that is, unimodal and symmetric) examples, all with your KPI value of $2$ (axes aligned for comparability):
I don't quite know whether I would call all of these "equally tall-and-skinny". Equally tall, yes, but not really equally skinny. Essentially, the two flanks rotate around their halfway point (indicated by red dots) between the four panels, and this constant halfway point is at the constant $x_{1,2}$. But of course all this is subjective.
These distributions all have the following form:
$$ f(x) = \begin{cases}
0, & x<-b \\
f_{\text{max}}\cdot\frac{b+x}{b-a}, & -b\leq x < -a \\
f_{\text{max}}, & -a\leq x<a \\
f_{\text{max}}\cdot\frac{b-x}{b-a}, & a\leq x <b \\
0, & b\leq x
\end{cases} $$
for appropriate parameters $0\leq a<b$ and $f_{\text{max}}$. To ensure these integrate to $1$, we set for given $a$ and $b$
$$ f_{\text{max}}:= \frac{1}{a+b}. $$
To ensure that they have a prespecified value of your KPI
$$ \text{KPI} = \frac{f_{\text{max}}}{a+b}=\frac{1}{(a+b)^2} $$
(because $x_{1,2}=\pm\frac{a+b}{2}$), we use $a$ as a free parameter and set
$$ b := \frac{1}{\sqrt{\text{KPI}}} - a. $$
The plots have $a\in\big\{0,\frac{1}{10},\frac{2}{10},\frac{3}{10}\big\}$.
R code:
KPI <- 2
opar <- par(mfrow=c(2,2),las=1,mai=c(.5,.5,.1,.1))
y_max <- sqrt(KPI)
aa <- c(0,0.1,0.2,0.3)
bb <- 1/sqrt(KPI)-aa
f_max <- 1/(aa+bb)
xx <- seq(-1.1/sqrt(KPI),1.1/sqrt(KPI),by=0.01)
ff <- Vectorize(FUN=function(xx,aa,bb,f_max) {
if ( xx < -bb ) return(0)
if ( -bb <= xx & xx < -aa ) return(f_max*(bb+xx)/(bb-aa))
if ( -aa <= xx & xx < aa ) return(f_max)
if ( aa <= xx & xx < bb ) return(f_max*(bb-xx)/(bb-aa))
if ( bb <= xx ) return(0)
},vectorize.args="xx")
for ( ii in seq_along(aa) ) {
plot(xx,ff(xx,aa[ii],bb[ii],f_max[ii]),ylim=c(0,y_max),type="l",xlab="",ylab="")
halfway <- c(-1,1)*(aa[ii]+bb[ii])/2
points(halfway,ff(halfway,aa[ii],bb[ii],f_max[ii]),pch=19,cex=1.5,col="red")
}
par(opar) | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions
You already got my +1 for drawing attention to Westfall (2014). A few thoughts:
Unimodality is of course important, and also quite a restriction. If you have multiple peaks, then all kinds of strange |
54,695 | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions | Before you get to the other drawbacks of your measure, the first thing to note is that you appear to be comparing apples and oranges. The idea of using kurtosis as a measure of "peakedness" of a distribution (as flawed as that is) is that it is a measure that adjusts for variance, so the "peakedness" looks at the shape of the distribution rather than its scale. Your measure does not do this --- it is proportionate to the inverse of the variance.$^\dagger$
So, before you get to anything else, you are going to need to decide whether you want your measure of what is "tall-and-skinny" to be effectively just another measure of (inverse) variance, or whether you want it to be a measure that is determined by the shape of the density rather than its scale. If the former then your measure is not really a measure of "peakedness" in the sense that kurtosis is sometimes (erroneously) used. If the latter, one thing you could do is to multiply your measure by the variance of the distribution, so that it is now "scale free".
Even with this adjustment, two other obvious drawbacks are: (1) there might not be a maximum density value (i.e., the density might be unbounded); and (2) the choice of using half the density maximum is arbitrary. You could generalise your analysis in a way that alleviates this problem by looking at the "intensity function" for the distribution. Consider a continuous random variable $X$ scaled to have unit variance and suppose it has a unimodal density $f$. The "intensity" of the distribution can be defined by:
$$H(a) \equiv \mathbb{P}(f(X) \geqslant a) \quad \quad \quad \text{for all } 0 \leqslant a \leqslant \infty.$$
This is a non-increasing function with $H(0) = 1$ and $H(\infty) = 0$ (a simple consequence of the properties of probability densities). In general, a density that is "tall-and-skinny" is going to have an intensity that decreases down to zero slowly as $a$ increases, whereas a density that is "short-and-fat" is going to have an intensity that decreases down to zero rapidly as $a$ increases. Thus, you could get an idea of how "tall-and-skinny" the distribution is by looking at how rapidly the "intensity" of the distribution decreases.
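A rough Monte Carlo sketch of my own for estimating $H(a)$, comparing a unit-variance normal with a unit-variance Laplace distribution (the latter being the more "tall-and-skinny" of the two):
H <- function(a, rdist, ddist, nsim = 1e5) mean(ddist(rdist(nsim)) >= a)
a_grid    <- seq(0, 1.2, by = 0.1)
H_norm    <- sapply(a_grid, H, rdist = rnorm, ddist = dnorm)
H_laplace <- sapply(a_grid, H,
                    rdist = function(n) sample(c(-1, 1), n, replace = TRUE) * rexp(n, rate = sqrt(2)),
                    ddist = function(x) sqrt(2) / 2 * exp(-sqrt(2) * abs(x)))
cbind(a = a_grid, H_norm, H_laplace)   # the Laplace intensity stays positive out to larger a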
$^\dagger$ To see this, note that if you were to double the random variable then this would halve the maximum density value, your numerator, and double the width in the denominator. | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions | Before you get to the other drawbacks of your measure, the first thing to note is that you appear to be comparing apples and oranges. The idea of using kurtosis as a measure of "peakedness" of a dist | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions
Before you get to the other drawbacks of your measure, the first thing to note is that you appear to be comparing apples and oranges. The idea of using kurtosis as a measure of "peakedness" of a distribution (as flawed as that is) is that it is a measure that adjusts for variance, so the "peakedness" looks at the shape of the distribution rather than its scale. Your measure does not do this --- it is proportionate to the inverse of the variance.$^\dagger$
So, before you get to anything else, you are going to need to decide whether you want your measure of what is "tall-and-skinny" to be effectively just another measure of (inverse) variance, or whether you want it to be a measure that is determined by the shape of the density rather than its scale. If the former then your measure is not really a measure of "peakedness" in the sense that kurtosis is sometimes (erroneously) used. If the latter, one thing you could do is to multiply your measure by the variance of the distribution, so that it is now "scale free".
Even with this adjustment, two other obvious drawbacks are: (1) there might not be a maximum density value (i.e., the density might be unbounded); and (2) the choice of using half the density maximum is arbitrary. You could generalise your analysis in a way that alleviates this problem by looking at the "intensity function" for the distribution. Consider a continuous random variable $X$ scaled to have unit variance and suppose it has a unimodal density $f$. The "intensity" of the distribution can be defined by:
$$H(a) \equiv \mathbb{P}(f(X) \geqslant a) \quad \quad \quad \text{for all } 0 \leqslant a \leqslant \infty.$$
This is a non-increasing function with $H(0) = 1$ and $H(\infty) = 0$ (a simple consequence of the properties of probability densities). In general, a density that is "tall-and-skinny" is going to have an intensity that decreases down to zero slowly as $a$ increases, whereas a density that is "short-and-fat" is going to have an intensity that decreases down to zero rapidly as $a$ increases. Thus, you could get an idea of how "tall-and-skinny" the distribution is by looking at how rapidly the "intensity" of the distribution decreases.
$^\dagger$ To see this, note that if you were to double the random variable then this would halve the maximum density value, your numerator, and double the width in the denominator. | 'Aspect ratio' as a measure of the 'tall-and-skinny' property of unimodal distributions
Before you get to the other drawbacks of your measure, the first thing to note is that you appear to be comparing apples and oranges. The idea of using kurtosis as a measure of "peakedness" of a dist |
54,696 | Emperically testing a "p-test" (of multiple fair coin flips) | In general, there is no such binomial test with
significance level exactly $\alpha = 0.05,$ on
account of the discreteness of binomial distributions.
For an exact test at $\alpha = 0.05$ based on a continuous test statistic,
the distribution of the P-value when $H_0$ is true would be
standard uniform and the probability that the P-value is below $0.05$ would be exactly $0.05.$
If $n = 100,$ testing $H_0: p = .5$ against $H_a: p \ne 0.5,$ the closest one can get to a test at the 5% level
(without going over 5%) is $0.0352 = 3.52\%.$
2*(1 - pbinom(60, 100, .5))
[1] 0.0352002
2*(1 - pbinom(59, 100, .5))
[1] 0.05688793
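To make the discreteness explicit, here is a small sketch (my addition, not in the original answer) listing the attainable two-sided levels near 5% for $n = 100$:
k <- 8:13                                   # reject when |X - 50| >= k
alpha.k <- 2*(1 - pbinom(49 + k, 100, .5))  # two-sided level, by symmetry
round(data.frame(k, alpha.k), 4)
# the levels jump from about 0.0569 (k = 10) to 0.0352 (k = 11); exactly 0.05 is unattainable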
[It does not help to use a normal approximation with nominal 5% level
because z values close to $\pm 1.96$ cannot be achieved. For
$\mathsf{Binom}(100,.5)$ the normal approximation is very accurate so it hardly matters whether one does an 'exact' binomial test or an approximate normal one.]
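A quick way to see this (my sketch, not the author's): the attainable z values for $n = 100$ jump right over $\pm 1.96.$
x <- 58:62
z <- (x - 50)/sqrt(100*.5*.5)               # standard error of the count is 5
round(data.frame(x, z, p = 2*pnorm(-abs(z))), 4)
# z skips from 1.8 (x = 59) to 2.0 (x = 60), so |z| = 1.96 is never achieved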
Below I simulate 100,000 tests in R for $n=100$ observations,
and summarize and plot histograms of P-values. Probabilities should be accurate to about two places.
The exact binomial test ('binom.test') and the approximate normal test ('prop.test') both
reject at about the anticipated 3.52% rate.
set.seed(2021); n = 100
pv.b = replicate(10^5,
binom.test(rbinom(1,n,.5),n,.5)$p.val)
mean(pv.b < 0.05)
[1] 0.03605 # aprx 0.0352
2*sd(pv.b < 0.05)/sqrt(10^5)
[1] 0.001178995 # aprx 95% margin of sim error
set.seed(2021); n = 100
pv.n = replicate(10^5,
prop.test(rbinom(1,n,.5),n,.5)$p.val)
mean(pv.n < 0.05)
[1] 0.03605
The figures below show the simulated distributions of the P-values under $H_0.$
R code for figure:
par(mfrow=c(1,2))
hist(pv.b, prob=T, xlim=c(-.01,1.01), col="skyblue2")
abline(v = .05, col="red")
curve(dunif(x), add=T, n=10001, lwd=2)
hist(pv.n, prob=T, xlim=c(-.01,1.01), col="skyblue2")
abline(v = .05, col="red")
curve(dunif(x), add=T, n=10001, lwd=2)
par(mfrow=c(1,1))
Notes: (1) At the resolution of the plot, the two histograms look the same,
but there are minuscule differences in the distributions: the binomial and approximate normal tests returned exactly the same P-value in only 8070 of the 100,000 runs (the mean absolute difference between them was about 0.0002), but they never disagreed about rejection at the 5% level.
sum(pv.b == pv.n)
[1] 8070
mean(abs(pv.b - pv.n))
[1] 0.0001792384
sum((pv.b <= .05) == (pv.n <= .05))  # agreement on rejection at the 5% level
[1] 100000
(2) Without using a randomized test, the closest $\alpha$ to 5% without going over for $n=1000$
is $0.046 = 4.6\%,$
2*(1-pbinom(531,1000,.5))
[1] 0.0462912
2*(1-pbinom(530,1000,.5))
[1] 0.05367785 | Emperically testing a "p-test" (of multiple fair coin flips) | In general, there is no such binomial test with
significance level exactly $\alpha = 0.05,$ on
account of the discreteness of binomial distributions.
For an exact test at $\alpha = 0.05$ based on a co | Emperically testing a "p-test" (of multiple fair coin flips)
In general, there is no such binomial test with
significance level exactly $\alpha = 0.05,$ on
account of the discreteness of binomial distributions.
For an exact test at $\alpha = 0.05$ based on a continuous test statistic,
the distribution of the P-value when $H_0$ is true would be
standard uniform and the probability that the P-value is below $0.05$ would be exactly $0.05.$
If $n = 100,$ testing $H_0: p = .5$ against $H_a: p \ne 0.5,$ the closest one can get to a test at the 5% level
(without going over 5%) is $0.0352 = 3.52\%.$
2*(1 - pbinom(60, 100, .5))
[1] 0.0352002
2*(1 - pbinom(59, 100, .5))
[1] 0.05688793
[It does not help to use a normal approximation with nominal 5% level
because z values close to $\pm 1.96$ cannot be achieved. For
$\mathsf{Binom}(100,.5)$ the normal approximation is very accurate so it hardly matters whether one does an 'exact' binomial test or an approximate normal one.]
Below I simulate 100,000 tests in R for $n=100$ observations,
and summarize and plot histograms of P-values. Probabilities should be accurate to about two places.
The exact binomial test ('binom.test') and the approximate normal test ('prop.test') both
reject at about the anticipated 3.52% rate.
set.seed(2021); n = 100
pv.b = replicate(10^5,
binom.test(rbinom(1,n,.5),n,.5)$p.val)
mean(pv.b < 0.05)
[1] 0.03605 # aprx 0.0352
2*sd(pv.b < 0.05)/sqrt(10^5)
[1] 0.001178995 # aprx 95% margin of sim error
set.seed(2021); n = 100
pv.n = replicate(10^5,
prop.test(rbinom(1,n,.5),n,.5)$p.val)
mean(pv.n < 0.05)
[1] 0.03605
The figures below show the simulated distributions of the P-values under $H_0.$
R code for figure:
par(mfrow=c(1,2))
hist(pv.b, prob=T, xlim=c(-.01,1.01), col="skyblue2")
abline(v = .05, col="red")
curve(dunif(x), add=T, n=10001, lwd=2)
hist(pv.n, prob=T, xlim=c(-.01,1.01), col="skyblue2")
abline(v = .05, col="red")
curve(dunif(x), add=T, n=10001, lwd=2)
par(mfrow=c(1,1))
Notes: (1) At the resolution of the plot, the two histograms look the same,
but there are minuscule differences in the distributions: the binomial and approximate normal tests returned exactly the same P-value in only 8070 of the 100,000 runs (the mean absolute difference between them was about 0.0002), but they never disagreed about rejection at the 5% level.
sum(pv.b == pv.n)
[1] 8070
mean(abs(pv.b - pv.n))
[1] 0.0001792384
sum((pv.b <= .05) == (pv.n <= .05))  # agreement on rejection at the 5% level
[1] 100000
(2) Without using a randomized test, the closest $\alpha$ to 5% without going over for $n=1000$
is $0.046 = 4.6\%,$
2*(1-pbinom(531,1000,.5))
[1] 0.0462912
2*(1-pbinom(530,1000,.5))
[1] 0.05367785 | Emperically testing a "p-test" (of multiple fair coin flips)
In general, there is no such binomial test with
significance level exactly $\alpha = 0.05,$ on
account of the discreteness of binomial distributions.
For an exact test at $\alpha = 0.05$ based on a co |
54,697 | Emperically testing a "p-test" (of multiple fair coin flips) | It is (nearly) impossible for a test statistic that has a discrete distribution to attain exactly a 5% (or any other) significance level. So instead, many so-called "exact" tests use the largest attainable significance level that is less than $\alpha$ (or 5% in your case). That is what you are seeing.
Here is a reference that elaborates:
https://www.jstor.org/stable/2684683?origin=crossref | Emperically testing a "p-test" (of multiple fair coin flips) | It is (nearly) impossible for a test statistic that has a discrete distribution to attain exactly a 5% (or any other) significance level. So instead, many so-called "exact" tests use the largest attai | Emperically testing a "p-test" (of multiple fair coin flips)
It is (nearly) impossible for a test statistic that has a discrete distribution to attain exactly a 5% (or any other) significance level. So instead, many so-called "exact" tests use the largest attainable significance level that is less than $\alpha$ (or 5% in your case). That is what you are seeing.
Here is a reference that elaborates:
https://www.jstor.org/stable/2684683?origin=crossref | Emperically testing a "p-test" (of multiple fair coin flips)
It is (nearly) impossible for a test statistic that has a discrete distribution to attain exactly a 5% (or any other) significance level. So instead, many so-called "exact" tests use the largest attai |
54,698 | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemort being alive? | CORRECTION: The old solution I gave assumed that the stated hundred-to-one odds were already conditional on knowing the mark exists (i.e., that they gave $p(A|M)$), and that we wanted the overall probability of being alive. However, if that is not the case and instead $p(A) = \frac{1}{101}$, then you can use:
$$p(A|M) = \frac{p(M|A)\times p(A)}{p(M|A)\times p(A) + p(M|D)\times p(D)}
= \frac{\frac{1}{101}}{1\times\frac{1}{101} + \frac{1}{5}\times\frac{100}{101}} = \frac{1}{1+\frac{100}{5}} = \frac{1}{21}$$
Or 20:1 odds against his being alive.
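A quick numeric check of the corrected calculation (my sketch, not part of the original answer):
p.alive <- 1/101                            # prior P(A)
post <- 1*p.alive / (1*p.alive + (1/5)*(1 - p.alive))
post                                        # 0.04761905 = 1/21
(1 - post)/post                             # posterior odds against being alive: 20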
OLD SOLUTION:
$$A = Alive, D= Dead, M = Mark$$
Currently we know that Voldemort has the mark (because Dumbledore noticed it). Therefore the prior becomes what we know which is the 1:100 odds.
$$p(A|M) = \frac{1}{101}$$
$$p(D|M) = \frac{100}{101}$$
We also learned the probability of the mark given that he is alive or dead:
$$p(M|A) = 1$$
$$p(M|D) = \frac{1}{5}$$
Using Bayes theorem and the fact that $p(D) = 1-p(A)$ we can solve the equation.
$$p(M|A) = \frac{p(A|M)\times p(M)}{p(A)}$$
$$p(M|D) = \frac{p(D|M)\times p(M)}{1-p(A)}$$
Keeping the fractions actually makes the math easier to do by hand. Solving for $p(A)$ using these equations gives (hopefully I did my math right, but I'm getting):
$$p(A) = \frac{1}{501}$$
Which is 500:1 odds. | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemo | CORRECTION: The old solution i gave is assuming that the prior odds were referring to the conditional probability of knowing the mark exists and you are calculating the overall probability of being al | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemort being alive?
CORRECTION: The old solution I gave assumed that the stated hundred-to-one odds were already conditional on knowing the mark exists (i.e., that they gave $p(A|M)$), and that we wanted the overall probability of being alive. However, if that is not the case and instead $p(A) = \frac{1}{101}$, then you can use:
$$p(A|M) = \frac{p(M|A)\times p(A)}{p(M|A)\times p(A) + p(M|D)\times p(D)}
= \frac{\frac{1}{101}}{1\times\frac{1}{101} + \frac{1}{5}\times\frac{100}{101}} = \frac{1}{1+\frac{100}{5}} = \frac{1}{21}$$
Or 20:1 odds against his being alive.
OLD SOLUTION:
$$A = Alive, D= Dead, M = Mark$$
Currently we know that Voldemort has the mark (because Dumbledore noticed it). Therefore the prior becomes what we know which is the 1:100 odds.
$$p(A|M) = \frac{1}{101}$$
$$p(D|M) = \frac{100}{101}$$
We also learned the probability of the mark given that he is alive or dead:
$$p(M|A) = 1$$
$$p(M|D) = \frac{1}{5}$$
Using Bayes theorem and the fact that $p(D) = 1-p(A)$ we can solve the equation.
$$p(M|A) = \frac{p(A|M)\times p(M)}{p(A)}$$
$$p(M|D) = \frac{p(D|M)\times p(M)}{1-p(A)}$$
Keeping the fractions actually makes the math easier to do by hand. Solving for $p(A)$ using these equations gives (hopefully I did my math right, but I'm getting):
$$p(A) = \frac{1}{501}$$
Which is 500:1 odds. | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemo
CORRECTION: The old solution i gave is assuming that the prior odds were referring to the conditional probability of knowing the mark exists and you are calculating the overall probability of being al |
54,699 | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemort being alive? | Studies have shown that people deal with proportions better than probabilities. Rewrite the question as
"There are 3030 fanfics. There are 100 times as many fanfics where Voldemort died as fanfics where Voldemort lived. In 100% of fanfics where Voldemort lived, the mark remained. In 20% of fanfics where Voldemort died, the mark remained. Of all the fanfics where the mark remained, in what percentage did Voldemort survive?" | The prior odds were a hundred-to-one against Voldemo | Studies have shown that people deal with proportions better than probabilities. Rewrite the question as
There 3030 fanfics. There are 100 times as many fanfics where Voldemort died as fanfics where V | The prior odds were a hundred-to-one against Voldemort surviving. What is the probability of Voldemort being alive?
Studies have shown that people deal with proportions better than probabilities. Rewrite the question as
"There are 3030 fanfics. There are 100 times as many fanfics where Voldemort died as fanfics where Voldemort lived. In 100% of fanfics where Voldemort lived, the mark remained. In 20% of fanfics where Voldemort died, the mark remained. Of all the fanfics where the mark remained, in what percentage did Voldemort survive?" | The prior odds were a hundred-to-one against Voldemo
Studies have shown that people deal with proportions better than probabilities. Rewrite the question as
There 3030 fanfics. There are 100 times as many fanfics where Voldemort died as fanfics where V |
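A quick arithmetic check of the natural-frequency framing above (my addition; the 3030 total is the answer's own choice, made so the counts come out whole):
alive <- 3030/101                           # 30 fanfics where Voldemort lived
dead  <- 3030 - alive                       # 3000 fanfics where he died
mark  <- 1.0*alive + 0.2*dead               # 630 fanfics where the mark remained
alive/mark                                  # 0.04761905 = 1/21 of the marked fanfics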
54,700 | Interpretation difference between log link and log transformation | Those models are similar, but the key difference is that we model $\log{E(Y)}$ in the GLM and $E(\log Y)$ in the LM. Thus, the GLM gives us $E(Y)$ directly, while the LM gives us $E(\log Y)$.
GLM: We can directly say $E(Y)=\exp(\beta_0+\beta_1X)$. In this case, $\beta_1$ captures the effect of a unit change in $X$ on $Y$ (more precisely, it is the log of the ratio of $E(Y)$ after versus before the change).
LM: We can only say $E(\log Y)=\beta_0+\beta_1X$. Then, $E(\log Y)$ increases by $\beta_1$ when $X$ increases by 1.
Critical note: $E(\log Y)\neq \log E(Y) $. | Interpretation difference between log link and log transformation | Those models are similar, but the key different thing is we model $\log{E(Y)}$ for GLM and $E(\log Y)$ for LM. Thus, we can estimate $Y$ directly in GLM and $\log Y$ in LM.
GLM: We can directly say $E | Interpretation difference between log link and log transformation
Those models are similar, but the key difference is that we model $\log{E(Y)}$ in the GLM and $E(\log Y)$ in the LM. Thus, the GLM gives us $E(Y)$ directly, while the LM gives us $E(\log Y)$.
GLM: We can directly say $E(Y)=\exp(\beta_0+\beta_1X)$. In this case, $\beta_1$ captures the effect of a unit change in $X$ on $Y$ (more precisely, it is the log of the ratio of $E(Y)$ after versus before the change).
LM: We can only say $E(\log Y)=\beta_0+\beta_1X$. Then, $E(\log Y)$ increases by $\beta_1$ when $X$ increases by 1.
Critical note: $E(\log Y)\neq \log E(Y) $. | Interpretation difference between log link and log transformation
Those models are similar, but the key different thing is we model $\log{E(Y)}$ for GLM and $E(\log Y)$ for LM. Thus, we can estimate $Y$ directly in GLM and $\log Y$ in LM.
GLM: We can directly say $E |
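As an illustrative sketch (my addition, not from the original answer), a small simulation shows that the two models target different quantities: a Gaussian GLM with a log link and a linear model on $\log Y$ recover the same slope but different intercepts, because $E(\log Y) \neq \log E(Y)$.
set.seed(42); n <- 1e4
x <- runif(n)
y <- exp(1 + 2*x) * exp(rnorm(n, 0, 0.5))               # multiplicative lognormal error
fit.glm <- glm(y ~ x, family = gaussian(link = "log"))  # models log E(Y)
fit.lm  <- lm(log(y) ~ x)                               # models E(log Y)
coef(fit.glm)   # intercept near 1 + 0.5^2/2 = 1.125; slope near 2
coef(fit.lm)    # intercept near 1; slope near 2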