49,501
Observed vs predicted values from a logit model
It sounds as if you want to check the calibration of a model on the same dataset that was used to build it. This will require using the bootstrap to re-fit the model 300 times. You can use a bootstrap overfitting-corrected nonparametric calibration curve with a nonparametric smoother; it is not a good idea to bin predicted probabilities. Assuming you did no variable selection, here is an approach in R with the rms package:

require(rms)
f <- lrm(y ~ x1 + x2 + x3, x=TRUE, y=TRUE)  # full pre-specified model
validate(f, B=300)          # bootstrap stats such as Somers' Dxy
cal <- calibrate(f, B=300)  # overfitting-corrected calibration curve
plot(cal)
49,502
What is the correct way to choose tests for pairwise comparison after ANOVA?
When choosing a test you have to consider two important things. A: Is the test reliable when the ANOVA assumptions have been violated? The question is whether the test performs well when the group sizes are different, when the population variances are very different, or when the data are not normally distributed. B: Does the test control the Type I/Type II error rate? The statistical power of a test and its Type I error rate are closely related, e.g. you can opt for a more conservative test, aiming for a small probability of Type I error, but you will lose statistical power. It is a trade-off.

Furthermore, the Bonferroni and Tukey tests are conservative: tight control of the Type I error rate but low statistical power. Games-Howell is powerful but not appropriate for small samples; it is accurate when sample sizes are unequal. For all of them you should be careful about the ANOVA assumptions. Moreover, you said "My sample size is only 3 to 4 individuals per experimental group", but I think this is not enough when it comes to testing the ANOVA assumptions. This is a detailed book on the topic. Andy Field is a great teacher and here has a nice video on post-hoc tests. Also there and there are relevant documents on your question.

Regarding your question in the comment: I could say use this test or that one, but the main idea is that you have to know them well, the differences between them and the trade-offs; after that you have to decide on one, two or more, and you have to be able to motivate and explain your decision, all in relation to your research and data, not to the test per se. Moreover, 'to assume' is usually not OK in statistics, so you have to test normality and all the ANOVA assumptions. Further, IMHO the ANOVA is fine but the group size is not. Given your requirements (in terms of significance level, power, number of groups, etc.) you can compute the needed sample size per group (using R, or many other free resources on the web). I would like to avoid handing you a 'cooked dish', because you won't gain anything from it, but so as not to make your life harder: if I were you I would use ANOVA with 30 individuals per group (for a 2x3 design you need about n = 180 individuals), and I would use Tukey, REGWQ, and Bonferroni.
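The answer suggests computing the needed per-group sample size in R without naming a specific function. The following is a minimal sketch using base R's power.anova.test, treating the 2x3 design as six groups in a one-way layout; the variance values are placeholders you would replace with estimates from pilot data or the literature.

# Sketch of a per-group sample-size calculation (illustrative values only)
power.anova.test(groups = 6,
                 between.var = 1,   # assumed variance of the group means
                 within.var  = 9,   # assumed within-group variance
                 power = 0.8, sig.level = 0.05)
# The returned n is the required sample size per group.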
49,503
What is the correct way to choose tests for pairwise comparison after ANOVA?
I recommend having a read of Biometry by Sokal and Rohlf (an old one, but with clear concepts for starting out) and Experimental Design and Data Analysis for Biologists by Quinn and Keough. The latter is available as a PDF on the web.
49,504
Power calculations, logistic regression with continuous exposure--cohort [duplicate]
I'm skirting around the question of your simulation set-up and addressing the wider question of determining sample size in this scenario. You could turn the question on its head (as I understand your question: a binomial outcome, one continuous predictor) and determine the power to detect a difference in means of the continuous predictor between the two groups of your binary variable. This is very easy to calculate. From a hypothesis-testing perspective, these two analyses should give identical p-values, and so power for the two scenarios should also be equivalent. See also some of the discussion in Choosing between logistic regression and Mann Whitney/t-tests for details on this. More detailed consideration is in the following article on sample size considerations for logistic regression by Hsieh et al. (PubMed: http://www.ncbi.nlm.nih.gov/pubmed/9699234; full paper available at http://personal.health.usf.edu/ywu/logistic.pdf).
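As a hedged sketch of the "turned around" calculation described above, base R's power.t.test can be used. The standardised difference, SD, and per-group n below are illustrative assumptions, not values from the question, and the equal-group-size assumption is a simplification if the outcome prevalence is far from 50%.

# Power for a given per-group n (assumed values)
power.t.test(n = 100, delta = 0.5, sd = 1, sig.level = 0.05)
# Or solve for n by supplying the target power instead
power.t.test(delta = 0.5, sd = 1, power = 0.8, sig.level = 0.05)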
49,505
Number of interactions in ANOVA with 4 independent variables
This is a relatively simple application of combinatorial calculations. The total number of combinations (subsets of variables) is $2^k$, where $k$ is the number of variables in the ANOVA ($k=4$ in your case). The logic behind this is that each variable can either be included or not included in a given term (e.g. the main effects include only one variable). You also have that the number of $j$th-order interaction terms (i.e. interactions involving $j$ of the $k$ variables) is given simply by the choose function ${k\choose j}=\frac{k!}{j!(k-j)!}$ (the number of ways to select $j$ objects from $k$ objects). This also gives rise to the well-known result $\sum_{j=0}^k{k\choose j}=2^k$. To get the number of interaction terms we simply sum over $j=2,\dots,k$ rather than $j=0,\dots,k$. Alternatively, we can subtract the $j=0,1$ terms from $2^k$, which gives $$\text{no. of interactions}=2^k-k-1.$$ Plugging in $k=4$ gives $11$.
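A quick numerical check of the two counting arguments in R:

k <- 4
sum(choose(k, 2:k))  # 11: interactions of order 2 through k
2^k - k - 1          # 11: same count via the subtraction argument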
49,506
Number of interactions in ANOVA with 4 independent variables
These are the possible interactions between the 4 IVs:
1x2, 1x3, 1x4, 2x3, 2x4, 3x4
1x2x3, 1x2x4, 1x3x4, 2x3x4
1x2x3x4
which comes to a total of 11.
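If you want to generate this enumeration programmatically, a small R sketch with combn() (not part of the original answer) reproduces the listing:

ivs <- 1:4
terms <- unlist(lapply(2:4, function(j)
  combn(ivs, j, FUN = paste, collapse = "x")))
terms          # "1x2" "1x3" ... "1x2x3x4"
length(terms)  # 11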
49,507
t-test when observations are years
Many years late, but: This is not a good idea. Profitability of companies is likely to be autocorrelated. This means that observations are not independent, which is a basic assumption of t-tests. The autocorrelation structure can be accounted for in other analytical approaches.
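The answer does not name a specific alternative; one common option is a regression with serially correlated errors, sketched here with the nlme package and an AR(1) structure. The data frame and variable names (profit, group, year, firm) are hypothetical.

library(nlme)
# Compare mean profitability between groups while allowing AR(1) errors
# within each firm's yearly series
fit <- gls(profit ~ group,
           correlation = corAR1(form = ~ year | firm),
           data = firm_years)
summary(fit)  # the group coefficient plays the role of the t-test comparison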
49,508
What is the difference between using the multiplication rule or using Venn diagram subtraction for probability?
Let's draw pictures in which regions depict events (such as "the first light is red") and their areas are proportional to the probabilities of those events. Taking care to show areas accurately extends the Venn diagram metaphor in a useful quantitative way.

For the traffic light problem, I will divide a unit square (representing the total probability) into four parts. The left-right division will reflect the possibilities for the first light (set to red at the left, non-red at the right) and the top-bottom division will reflect the possibilities for the second light (red at the bottom, non-red at the top). In the left figure, the divisions have been made in a 40-60 ratio and a 30-70 ratio, respectively. Where the red rectangle (of width 40%) and blue rectangle (of height 30%) intersect they form a purple rectangle of area 30% * 40% = 0.3 * 0.4 = 12% of the total area. Independent events can always be drawn in this way as separate overlapping rectangles. (When you think about what this means--overlapping rectangles are a geometric way to multiply quantities--it becomes clear that this is the very definition of independence.)

The right figure shows the actual information in the problem, which tells us the purple rectangle has an area of only 10% and asks us to find the area not covered by either rectangle: the white portion to the upper right, depicting the event "Light 1 is not red and light 2 is not red." This indicates lack of independence: now it takes more than just two rectangles to carve up the square correctly. (There's more than one way to do this. For instance, I could have left the blue rectangle alone and adjusted the two halves of the red rectangle, making the bottom skinnier and--to keep its total area at 40%--the top fatter. Either way works.)

Solution

Starting with the 10% purple rectangle, notice that the rest of the blue rectangle (at the right) has to include the remaining 20% = 30% - 10% of the time the second light is red. Similarly, the rest of the vertical red rectangle has to include the remaining 30% = 40% - 10% of the time the first light is red. This gives three rectangles of known area: 10%, 20%, and 30%. They sum to 60%. Consequently, because the sum of all areas must be 100%, the white area is 100% - 60% = 40%. This represents the probability to be found.

Comments

The white region in the first (left) figure is that of a rectangle with base 60% = 100% - 40% and height 70% = 100% - 30%, whence its area is 0.6 * 0.7 = 42% (and not 40%). This area = probability method extends to more than two criteria: at What is the probability that this person is female? it is used to analyze a problem with three separate criteria. Any two-by-two contingency table can (and usually should) be visualized this way, after translating its counts into frequencies relative to the total.
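As a small numeric companion to the unit-square picture (not part of the original answer), the four region areas can be laid out as a two-by-two table of probabilities in R, using the figures quoted in the answer:

p_A  <- 0.4   # first light red
p_B  <- 0.3   # second light red
p_AB <- 0.1   # both red (given; not p_A * p_B, so the events are dependent)
regions <- matrix(c(p_AB,       p_A - p_AB,
                    p_B - p_AB, 1 - (p_A + p_B - p_AB)),
                  nrow = 2, byrow = TRUE,
                  dimnames = list(light1 = c("red", "not red"),
                                  light2 = c("red", "not red")))
regions                # areas 0.10, 0.30, 0.20, 0.40 as in the answer
sum(regions)           # 1
(1 - p_A) * (1 - p_B)  # 0.42, the (incorrect) value under independence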
49,509
What is the difference between using the multiplication rule or using Venn diagram subtraction for probability?
In problems like this it is important to read carefully which information is given to you and what the problem is asking. So let's start with assignment 1. Call the probability that Andrew is still alive $P(A)$ and the probability that Ellen is still alive $P(B)$. What we are looking for is the event that both are still alive, i.e. that both events occur simultaneously, which we call $A \cap B$. As noted, you are told these events are independent. That means, by definition, that the probability of this event is $P(A \cap B) = P(A) \cdot P(B)$.

One of your questions was: how is this related to the Venn diagram? Well, independence is not readily apparent in a Venn diagram. A Venn diagram needs to look a certain way if there is independence, but even if it looks that way, you only know the events are independent if the problem tells you! This is a bit tricky, but stay with me. If you drew a Venn diagram for this first exercise, you would see that $$P(A \cap B) / P(A) = P(B) / 1,$$ which is the same as $$P(A \cap B) / P(A) = P(B) / P(\Omega).$$ In essence this means that the chance of "hitting" $B$ if you are already in $A$ is the same as the chance of hitting $B$ in the first place; said differently, whether or not you hit $A$, the chance of hitting $B$ is always the same. If $P(A)$ were 0.5, then the probability of $A \cap B$ would have to be 50% of the size of $A$; only then can the events be independent. Try to draw this up, it is a nice exercise. Also, are you aware that if the size of $A \cap B$ is zero in the Venn diagram, then the events cannot be independent? Think about why. From this point on you may want to draw Venn diagrams as you follow along.

OK, so far so good. Now what about problem two? First, note that these events are not independent. You are, in fact, given the probability that both lights are red at the same time, and this probability does not fit what you know about independence. To see the difference from the first example, let us look at what can happen at that intersection. We want to know when both lights are green. What is the complement of this event? It is NOT "both lights are red". In fact, the two complementary events are:

Both lights are green
At least one light is red

It is either option 1 or 2. Option 1 is what we are after. If we can figure out the probability of option 2, then we know that option 1 is its complement/opposite, right? So let's figure out event 2.

So why is there another probability given in the assignment? Well, think about it: event 2 can be described differently. The first light can be red; let us call this $A$. The second light can be red; let's call this one $B$. This is probably where you are right now. The probabilities for these two events, $P(A)$ and $P(B)$, are given: $.4$ and $.3$. We also know that the first and the second light are red at the same time; this is $A \cap B$, with probability $.1$.

How does this fit together? Well, $A$ and $B$ alone are not enough to calculate when both lights are green (event 1), because we need the second event, which we called "at least one light is red", and then take its complement (you already figured this out). Now, why can we not multiply the probabilities? First, because the events are not independent: the multiplication rule cannot be used because the ratio of $A \cap B$ to either $A$ or $B$ is not as it was in the first example. Second, because even if they were independent, multiplication would give us the event $A \cap B$, the event that both occur at the same time, whereas we want the event that at least one light is red.

Let's think about how this event can also be described. The name you are looking for is $A \cup B$. What does it include?

Light $A$ can be red AND light $B$ can be red. We call this $A \cap B$ (we have this!).
Light $A$ can be red and light $B$ can be green. This is called $A \cap \neg B$.
Light $A$ can be green and light $B$ can be red. This is called $\neg A \cap B$ (the $\neg$ means negation or complement).

So the problem is that we have $P(A)$, $P(B)$ and $P(A \cap B)$, but not the probabilities of either light being red while the other is green, $A \cap \neg B$ and $\neg A \cap B$. To make this short, here is where you apply the additive rule; you can easily see why if you look at your Venn diagrams. What is the difference between $A$ and $A \cap \neg B$? It is the part of the intersection, $A \cap B$, that lies in $A$. The same goes for $B$. So if we add $A$ and $B$ instead of $(A \cap \neg B)$ and $(\neg A \cap B)$, we have added two things too much: first, the part of the intersection that lies in $A$, and second, the part of the intersection that lies in $B$. Go ahead and draw this in your circles. What you want to see is: what is the difference between $$A+B$$ and $$(A \cup B) = (A \cap B) + (A \cap \neg B) + (\neg A \cap B)?$$ The difference is exactly one copy of the intersection. So to get the probability of our event "at least one light is red", all we need to do is add $P(A)$ and $P(B)$ and subtract the intersection $P(A \cap B)$, giving $$P(A) + P(B) - P(A \cap B) = P(A \cup B).$$ And this is the same as "at least one light is red". And what is the opposite of this? It is "both lights are green", the solution to the problem.
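A one-line numeric check of the addition rule and its complement, in R, for the traffic-light numbers used above:

p_A <- 0.4; p_B <- 0.3; p_AB <- 0.1
p_union <- p_A + p_B - p_AB   # P(at least one light is red) = 0.6
1 - p_union                   # P(both lights are green)     = 0.4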
49,510
Using a particle filter for robot localization
The Kalman filter is optimal when the system is linear and the noises are Gaussian, so if that's the case there is no reason to switch to a particle filter (which, apart from being suboptimal for linear systems, takes much more time to run).
49,511
How to get the standardized beta coefficients from glm.nb regression in R?
For a quick way to get the standardized beta coefficients directly from any lm (or glm) model in R, try lm.beta(model). In the example provided, this would be:

library("MASS")
nb <- glm.nb(responseCountVar ~ predictor1 + predictor2 + predictor3 + predictor4 +
             predictor5 + predictor6 + predictor7 + predictor8 + predictor9 +
             predictor10 + predictor11 + predictor12 + predictor13 + predictor14 +
             predictor15 + predictor16 + predictor17 + predictor18 + predictor19 +
             predictor20 + predictor21,
             data=myData, control=glm.control(maxit=125))
summary(nb)

library(QuantPsyc)
lm.beta(nb)
49,512
Clustering high-dimensional sparse binary data
Consider using a graph-based approach. Try to find a threshold that defines when two users are "somewhat similar"; it can be quite low. Build a graph of these somewhat-similar users. Then use a clique-detection approach to find groups in this graph.
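A hedged R sketch of this idea using the igraph package. The binary user-by-item matrix X (rows = users), the Jaccard similarity, and the 0.2 threshold are illustrative assumptions, not part of the original answer.

library(igraph)
sim <- as.matrix(1 - dist(X, method = "binary"))  # Jaccard similarity between users
adj <- (sim >= 0.2) * 1                           # "somewhat similar" threshold
diag(adj) <- 0                                    # drop self-loops
g <- graph_from_adjacency_matrix(adj, mode = "undirected")
groups <- max_cliques(g, min = 3)                 # maximal cliques of 3+ users
length(groups)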
49,513
Clustering high-dimensional sparse binary data
I suggest a cluster analysis. Joachim Bacher discusses the different dissimilarity coefficients in depth in his script on cluster analysis, in particular the effect of how the absence of a trait is treated. For instance: should two cases count as similar when both show a zero? I remember that he also works through a multiple-response example from survey research which is close to your problem. The script can be downloaded from: http://www.clusteranalyse.net/sonstiges/zaspringseminar2002/ HTH ftr
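To make the joint-absence issue concrete, here is a small R sketch (not from Bacher's script) contrasting the Jaccard coefficient, which ignores 0-0 matches, with simple matching, which counts them. The tiny binary matrix is made up for illustration.

x <- rbind(u1 = c(1, 0, 0, 0, 0),
           u2 = c(1, 0, 0, 0, 0),
           u3 = c(0, 1, 1, 0, 0))
dist(x, method = "binary")               # Jaccard distance: ignores shared zeros
dist(x, method = "manhattan") / ncol(x)  # simple-matching distance: shared zeros count as agreement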
49,514
Two-level hierarchical model using time-series cross sectional data?
I'm not 100% sure, so check carefully, and you'll probably want to put priors on the variances instead of the 1.0E-12, but maybe something like this?

model {
  for (i in 1:nData) {
    crime[i] ~ dbern(mu[i])
    logit(mu[i]) <- base + b0[counties[i], period[i]] +
                    b1[counties[i], period[i]] * police[i]
  }
  base ~ dnorm(0, 1.0E-12)
  for (j in 1:nCounties) {
    for (k in 1:nPeriod) {
      b0[j, k] ~ dnorm(b0h[j], 1.0E-12)
      b1[j, k] ~ dnorm(b1h[j], 1.0E-12)
    }
  }
  for (j in 1:nCounties) {
    b0h[j] ~ dnorm(0, 1.0E-12)
    b1h[j] ~ dnorm(0, 1.0E-12)
  }
}
49,515
Adversarial noise in PCA
Here is one for you: the 10 percent of outliers have influenced the PCA so much that the 1st principal component is now nearly orthogonal to its true value.

library(MASS)
n <- 50
p <- 100
eps <- 0.1
x0 <- mvrnorm(n - floor(n*eps), rep(0, p), diag(p))    # clean data
x1 <- mvrnorm(floor(n*eps), rep(100, p), diag(p)/100)  # outliers
O0 <- prcomp(x0)
O1 <- prcomp(rbind(x0, x1))
O1$rotation[,1] %*% O0$rotation[,1]
49,516
Tolerance interval for Deming regression
Below is the code. Not very pretty, but (as said in the update of my question) it works well.

# returns estimates
deming.estim <- function(x, y, lambda=1){ # lambda = sigmay²/sigmax²
  n <- length(x)
  my <- mean(y)
  mx <- mean(x)
  SSDy <- crossprod(y-my)[,]
  SSDx <- crossprod(x-mx)[,]
  SPDxy <- crossprod(x-mx, y-my)[,]
  A <- sqrt((SSDy - lambda*SSDx)^2 + 4*lambda*SPDxy^2)
  B <- SSDy - lambda*SSDx
  beta <- (B + A) / (2*SPDxy)
  alpha <- my - mx*beta
  sigma.uu <- ( (SSDy + lambda*SSDx) - A ) / (2*lambda) / (n-1)
  s.vv <- crossprod(y - my - beta*(x-mx)) / (n-2)
  # formula Gilard & Iles
  sbeta2. <- (SSDx*SSDy - SPDxy^2)/n / (SPDxy^2/beta^2)
  sbeta. <- sqrt(sbeta2.)
  salpha2. <- s.vv/n + mx^2*sbeta2.
  salpha. <- sqrt(salpha2.)
  V <- rbind( c(salpha2., -mx*sbeta2.),
              c(-mx*sbeta2., sbeta2.) )
  return(list(alpha=alpha, beta=beta, V=V,
              sigma=sqrt(sigma.uu*(n-1)/(n-2))))
}

# returns one-sided upper tolerance bound
deming.tolerance <- function(x, y, lambda=1, xnew, p=98/100, alph=5/100){
  fit <- deming.estim(x, y, lambda)
  V <- fit$V
  sigma.uu <- fit$sigma^2
  sigma.ee <- lambda*sigma.uu
  Vbeta.over.sigma <- V/(sigma.ee + sigma.uu)
  Xnew <- as.matrix(c(1, xnew))
  d <- sqrt( (t(Xnew) %*% Vbeta.over.sigma %*% Xnew) )
  k1 <- d * qt(1-alph, length(x)-2, ncp=qnorm(p)/d)
  S <- sqrt(sigma.ee + sigma.uu)
  ynew <- fit$alpha + fit$beta*xnew
  TolUp <- ynew - xnew + k1*S
  TolUp
}

Some simulations to check the frequentist coverage:

#################
## simulations ##
#################
n <- 60
mu <- runif(n, 1, 100)
xnew <- 50
sigma.x <- 1
lambda0 <- 2
sigma.y <- sqrt(lambda0)*sigma.x
alpha <- 2
beta <- 3
p <- 0.98
# true value of the quantile:
qq <- qnorm(p, alpha + (beta-1)*xnew, sqrt(sigma.x^2 + sigma.y^2))
nsims <- 5000
test <- rep(NA, nsims)
lambda <- lambda0
for(i in 1:nsims){
  x <- rnorm(n, mu, sigma.x)
  y <- rnorm(n, alpha + beta*mu, sigma.y)
  test[i] <- ( qq < deming.tolerance(x, y, lambda=lambda, xnew=xnew, p=p) )
}

> mean(test)
[1] 0.9544
49,517
How to perform unsupervised Random Forest classification using R?
Over-classification may be caused by prediction bias, which is a problem for the canonical RF method and for which a number of modifications have been researched. Probably the principal approach to mitigating bias is to use randomised split thresholds, sometimes referred to as an 'extreme' random forest (extremely randomised trees). I'm not sure which flavour of RF is implemented in the R package, but the problem will certainly be more prominent when working with unbalanced classification data sets: by taking a majority vote, the forest loses the information about the balance of votes, and that can and often will introduce bias into the classifications.
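If you want to experiment with randomised split thresholds in R, one option (an assumption about tooling, not something the answer specifies) is the ranger package, which offers an extremely-randomised split rule; iris is used here only as a stand-in dataset.

library(ranger)
fit <- ranger(Species ~ ., data = iris,
              splitrule = "extratrees",  # randomised split thresholds
              num.random.splits = 1,     # one random threshold per candidate variable
              probability = TRUE)        # keep vote proportions rather than only the majority class
head(predict(fit, iris)$predictions)     # class probabilities instead of hard labels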
49,518
How to perform unsupervised Random Forest classification using R?
In the randomForest function, instead of supplying a y ~ x formula, simply input your predictor matrix. randomForest thinks you want to run a supervised classification because you are listing the classifying variable factor(category) as the y in your model.
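A minimal sketch of the unsupervised call plus a follow-up clustering step on the proximities; myData, the predictor column selection, and the choice of four clusters are assumptions for illustration.

library(randomForest)
x <- myData[, predictorColumns]           # predictors only, no response (hypothetical columns)
urf <- randomForest(x, proximity = TRUE)  # no y supplied: unsupervised mode
d <- as.dist(1 - urf$proximity)           # proximity -> dissimilarity
clusters <- cutree(hclust(d, method = "average"), k = 4)
table(clusters)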
49,519
Non-algebraic curve-fitting along a weighted point cloud (if possible using Python)
(Assuming you want to use Python.) The easiest approach (given what's currently available) would be to use a polynomial and robust methods from statsmodels. Something like:

endog = y  # observed points, one-dimensional array
# polynomial array as explanatory variable:
# assuming x contains the vertical points
x = x / float(x.max() - x.min()) * 2 - 1  # optional rescaling
exog = np.vander(x, 5)
import statsmodels.api as sm
# robust estimation
res = sm.RLM(endog, exog).fit()
print(res.summary())
poly = np.poly1d(res.params)
# now we have a polynomial, and we can use differentiation, ... with it

I just typed this and didn't run it. Also, I don't know what order of polynomial would be useful in this case. I would use chebvander from numpy.polynomial, but I don't remember the details, especially handling the shift in domain. RLM.fit has several different options if the "outliers" or large spread of points are not handled correctly with the default: http://statsmodels.sourceforge.net/stable/rlm.html

Quantile regression would be a good alternative to RLM. Someone contributed a script for it for statsmodels, but it is not included yet. Quantile regression could be used to estimate a "median line" that might work well in this case. Fitting splines instead of polynomials might be better, but it's not available in Python without a bit of work, and I think it will not make much difference given the relatively small number of vertical points.

update: Using the data from columns 0 and 1, I tried out RLM. I didn't manage to get the obvious options for RLM to work, but there are still some more uncommon options that I haven't tried. What I tried instead was to put all the weight directly on the center points. This is essentially least trimmed squares, with an initial robust (M-) estimation of the center points. The first stage fits a low-order polynomial. Then we select points with a small residual; I selected the threshold manually to get the right points. Then I fit a higher-order polynomial with weighted least squares, putting weight only on the center points. This seems to work for estimating the line through the center points, but I don't know yet how it would perform for more general problems. Except for importing numpy and matplotlib.pyplot and for loading the data, my script to produce the graph is:

# use data in column 1
# rescaling didn't matter much
d = 0.1
x_rs = a[:,1]  # a is original data
x_rs = (x_rs - x_rs.min())
x_rs = x_rs / x_rs.max()  #* (1-d) *2 - 1 + d
print(x_rs.min(), x_rs.max())

degree = 3   # preliminary regression
exog = x_rs[:,None] ** np.arange(degree+1)
degree2 = 6  # trimmed regression
exog2 = x_rs[:,None] ** np.arange(degree2+1)
endog = a[:,0]

import statsmodels.api as sm
# fit low order polynomial with robust estimator
res = sm.RLM(endog, exog).fit()
# fit on center points, threshold 1.31 chosen by inspection of residuals
resw = sm.WLS(endog, exog2, weights=(np.abs(res.resid) < 1.31)).fit()

plt.plot(a[:,1], a[:,0], 'o')
plt.plot(a[:,1], res.fittedvalues, '-', lw=2, label='RLM')
idx = np.nonzero(np.abs(res.resid) < 1.31)[0]
plt.plot(a[idx,1], a[idx,0], 'ro', alpha=0.5)
plt.plot(a[:,1], resw.fittedvalues, '-', lw=2, label='WLS center')
plt.legend()
plt.show()

update 2: One problem with the options for robust estimation is how much control we have over the variance estimate. RLM updates the variance, and in this case it does not downweight the outlying observations enough. If we have a prior estimate of the variance, then we can force RLM to use it and not to update it. The variance estimate from the trimmed least squares (the WLS above) is only about 0.014. If we use this information,

scale = resw.scale
resw2 = sm.RLM(endog, exog2).fit(init=dict(scale=scale), update_scale=False)

then the fitted curve looks very similar to the WLS curve above, although the parameter estimates for the polynomial are different. (Caveat: this doesn't work yet in statsmodels; the init option is only available in a branch where I worked on robust estimation and improvements to RLM.)
49,520
Non-algebric curve-fitting along weighted pointcloud (if possible using python)
This is an old post but here's a paper worth considering: "Multidimensional curve fitting to unorganized data points by nonlinear minimization", Lian Fang and David C Gossard http://www.cs.jhu.edu/~misha/Fall05/Papers/fang95.pdf And for unordered points this paper is interesting. It describes a method of finding the order of points that are natural neighbors of each other: "Point Ordering with Natural Distance Based on Brownian Motion", Philsu Kim and Hyoungseok Kim http://downloads.hindawi.com/journals/mpe/2010/450460.pdf
Non-algebric curve-fitting along weighted pointcloud (if possible using python)
This is an old post but here's a paper worth considering: "Multidimensional curve fitting to unorganized data points by nonlinear minimization", Lian Fang and David C Gossard http://www.cs.jhu.edu/~m
Non-algebric curve-fitting along weighted pointcloud (if possible using python) This is an old post but here's a paper worth considering: "Multidimensional curve fitting to unorganized data points by nonlinear minimization", Lian Fang and David C Gossard http://www.cs.jhu.edu/~misha/Fall05/Papers/fang95.pdf And for unordered points this paper is interesting. It describes a method of finding the order of points that are natural neighbors of each other: "Point Ordering with Natural Distance Based on Brownian Motion", Philsu Kim and Hyoungseok Kim http://downloads.hindawi.com/journals/mpe/2010/450460.pdf
Non-algebric curve-fitting along weighted pointcloud (if possible using python) This is an old post but here's a paper worth considering: "Multidimensional curve fitting to unorganized data points by nonlinear minimization", Lian Fang and David C Gossard http://www.cs.jhu.edu/~m
49,521
Imputation of missing response variables
As per the first answer, in general there is no reason not to impute all of your variables in one go, generating a single set of imputed datasets. Your two outcomes being strongly correlated should not be too much of an issue - when they serve as independent variables in the imputation model(s) for your missing covariates, their corresponding coefficients will be estimated imprecisely, but this isn't usually a problem in terms of drawing from the resulting imputation distribution. There is a long discussion in the comments between AdamO and Joe King, and one of the discussions is about when complete case analysis / listwise deletion is unbiased. If data are MCAR, it is unbiased of course. If missingness is independent of the dependent variable in the model of interest, conditional on the covariates in the model of interest, it is also unbiased (this point is made by Little and Rubin, in their book). Depending on where the missingness occurs, this condition can sometimes correspond to an MAR mechanism and sometimes to a MNAR mechanism. It is not correct that complete case analysis is generally unbiased under MAR mechanisms. For more on this, see for example this paper: http://www.ncbi.nlm.nih.gov/pubmed/20842622 I also wrote a post about complete case analysis validity some time ago on my blog: http://thestatsgeek.com/2013/07/06/when-is-complete-case-analysis-unbiased/
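A minimal sketch of the "impute everything in one go" approach, using the MICE implementation in Python's statsmodels (the answer itself names no software; R users would typically reach for the mice package). The DataFrame and its column names (y1, x1, x2) are invented, and for a binary outcome one would swap sm.OLS for sm.GLM with a Binomial family.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation import mice

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 3)), columns=["y1", "x1", "x2"])
df.loc[rng.random(200) < 0.2, "x1"] = np.nan        # knock out some covariate values

imp = mice.MICEData(df)                             # chained-equations imputation of every column
analysis = mice.MICE("y1 ~ x1 + x2", sm.OLS, imp)   # analysis model fitted on each imputed dataset
results = analysis.fit(10, 20)                      # 10 burn-in cycles, 20 imputed datasets
print(results.summary())                            # Rubin-pooled estimates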
Imputation of missing response variables
As per the first answer, in general there is no reason not to impute all of your variables in one go, generating a single set of imputed datasets. Your two outcomes being strongly correlated should no
Imputation of missing response variables As per the first answer, in general there is no reason not to impute all of your variables in one go, generating a single set of imputed datasets. Your two outcomes being strongly correlated should not be too much of an issue - when they serve as independent variables in the imputation model(s) for your missing covariates, their corresponding coefficients will be estimated imprecisely, but this isn't usually a problem in terms of drawing from the resulting imputation distribution. There is a long discussion in the comments between AdamO and Joe King, and one of the discussions is about when complete case analysis / listwise deletion is unbiased. If data are MCAR, it is unbiased of course. If missingness is independent of the dependent variable in the model of interest, conditional on the covariates in the model of interest, it is also unbiased (this point is made by Little and Rubin, in their book). Depending on where the missingness occurs, this condition can sometimes correspond to an MAR mechanism and sometimes to a MNAR mechanism. It is not correct that complete case analysis is generally unbiased under MAR mechanisms. For more on this, see for example this paper: http://www.ncbi.nlm.nih.gov/pubmed/20842622 I also wrote a post about complete case analysis validity some time ago on my blog: http://thestatsgeek.com/2013/07/06/when-is-complete-case-analysis-unbiased/
Imputation of missing response variables As per the first answer, in general there is no reason not to impute all of your variables in one go, generating a single set of imputed datasets. Your two outcomes being strongly correlated should no
49,522
Imputation of missing response variables
In general, multiple imputation works by using all available information in the model to simulate the missing values: I use the word "simulate" because you're technically doing more than just prediction, which involves more parametric assumptions. I assume the outcomes are either jointly missing or jointly observed, there are no cases where outcome A is known when outcome B isn't, or vice versa. Collinearity is not an issue. If you are treating these outcomes as independent (reporting odds ratios from two logistic regression models with separate outcomes), then you don't even need to worry about whether contradictory outcomes are simulated: (e.g. patients both discharged living and died within 30 days -- I assume you wouldn't have observed that in the hospital). Two overall thoughts about the analysis: Why aren't you reporting a Cox proportional hazards model? Treating death/discharge as a 1/0 event indicator and time until the observed discharge or death as the time-to-event is a very similar, common, and preferred analysis. The hazard ratios approximate relative risks, just like odds ratios (for rare outcomes) and patients who are discharged should be censored. This way you use all available information about when patients were at risk for dying, it is a much more powerful analysis. I don't really agree with the need for missing data methods. Complete case analyses of data (where rows of missing data are dropped from the analysis) are unbiased and require little validation of assumptions, unlike Multiple Imputation which has caveats of estimating and validating parametric models. 13% is fairly negligible when $n$ is 150 or more. As a rule of thumb, having 20 events (these are deaths) per variable in the adjusted model is sufficient for power considerations.
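As a concrete illustration of the Cox suggestion above, here is a small Python sketch with the lifelines package (the answer names no software, and the data, column names and covariates below are invented): time is days until death or discharge, and patients discharged alive are treated as censored.

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "time":  rng.exponential(14, n).round() + 1,   # days to death or discharge
    "death": (rng.random(n) < 0.3).astype(int),    # 1 = died, 0 = discharged alive (censored)
    "age":   rng.normal(70, 10, n),
    "sofa":  rng.integers(0, 15, n),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="death")
cph.print_summary()                                # hazard ratios with confidence intervals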
Imputation of missing response variables
In general, multiple imputation works by using all available information in the model to simulate the missing values: I use the word "simulate" because you're technically doing more than just predicti
Imputation of missing response variables In general, multiple imputation works by using all available information in the model to simulate the missing values: I use the word "simulate" because you're technically doing more than just prediction, which involves more parametric assumptions. I assume the outcomes are either jointly missing or jointly observed, there are no cases where outcome A is known when outcome B isn't, or vice versa. Collinearity is not an issue. If you are treating these outcomes as independent (reporting odds ratios from two logistic regression models with separate outcomes), then you don't even need to worry about whether contradictory outcomes are simulated: (e.g. patients both discharged living and died within 30 days -- I assume you wouldn't have observed that in the hospital). Two overall thoughts about the analysis: Why aren't you reporting a Cox proportional hazards model? Treating death/discharge as a 1/0 event indicator and time until the observed discharge or death as the time-to-event is a very similar, common, and preferred analysis. The hazard ratios approximate relative risks, just like odds ratios (for rare outcomes) and patients who are discharged should be censored. This way you use all available information about when patients were at risk for dying, it is a much more powerful analysis. I don't really agree with the need for missing data methods. Complete case analyses of data (where rows of missing data are dropped from the analysis) are unbiased and require little validation of assumptions, unlike Multiple Imputation which has caveats of estimating and validating parametric models. 13% is fairly negligible when $n$ is 150 or more. As a rule of thumb, having 20 events (these are deaths) per variable in the adjusted model is sufficient for power considerations.
Imputation of missing response variables In general, multiple imputation works by using all available information in the model to simulate the missing values: I use the word "simulate" because you're technically doing more than just predicti
49,523
Should false discovery be controlled at the data acquisition level, or should this be at the data interpretation level?
I would argue strongly that it should apply only at the interpretation level. Multiplicity implicitly involves the definition of an investigation by an investigator(s) (i.e. the study-wise error rate to be controlled) and needs to accurately reflect the intentions that drove the process of generating inputs to the inference/decision. (This is a bit slippery; for instance, Wittgenstein admitted late in his career that he regretted not realizing intentionality early in logic.) For instance, if someone intended to do all the comparisons but stopped with the first one because it was so good – this is a multiplicity to be dealt with. On the other hand, if that comparison was credibly documented as the only one to be made – there isn’t. It should not matter if the data entry clerk who was taking a statistics course without permission ran all possible comparisons as an exercise. That sounds like your situation to me. (This judgement can be very slippery and, thanks to user603, I can point to Jake's birthday as a good example: http://www.johndcook.com/blog/2012/09/07/limits-of-statistics/ ) Something like this happened to an early colleague. They wanted to test A versus placebo but someone wanted them to include B as well. They thought B was silly but, being a nice guy, included the B group. The result was that A versus placebo was clearly significant but not after adjusting for B. They could never get the study published because of that. Also, Ed George had a nice talk at the joint meeting this summer where he was in effect arguing for an analysts' posterior for those who have access to the data versus a reported posterior for those who only find out about the study if it is selectively reported to them. Thinking about his talk afterwards and that slippery intentionality stuff possibly also applying to the analysts, the "Men in Black" movie seemed relevant, or at least their use of the Neuralizer - http://en.wikipedia.org/wiki/Neuralizer It’s as if the Bayesian analyst knows they would have been neuralized as soon as they realized a given data set did not achieve some pre-set goal. So when they get a data set that does meet it, they realize they don't know how often they have been neuralized but they know the selection rule for avoiding being neuralized this time.
Should false discovery be controlled at the data acquisition level, or should this be at the data in
I would argue strongly that is should apply only at the interpretation level. Multiplicity implicitly involves the definition of an investigation by an investigator(s) (i.e. the study wise error rate
Should false discovery be controlled at the data acquisition level, or should this be at the data interpretation level? I would argue strongly that is should apply only at the interpretation level. Multiplicity implicitly involves the definition of an investigation by an investigator(s) (i.e. the study wise error rate to be controlled) and needs to accurately reflect the intentions that drove the process of generating inputs to the inference/decision. (This is a bit slippery and for instance Wittgenstein admitted late in his career that he regretting not realizing intentionality early in logic.) For instance, if someone intended to do all the comparisons but stopped with the first one because it was so good – this is a multiplicity to be dealt with. On the other hand if that comparison was credibly documented as the only one to be made – there isn’t. It should not matter if the data entry clerk who was taking a statistics course without permission ran all possible comparisons as an exercise. That sounds like your situation to me. ( This judgement can very slippery and thanks to user603, I can point to Jake's birthday as a good example http://www.johndcook.com/blog/2012/09/07/limits-of-statistics/ ) Something like this happened to an early colleague. They want to test A versus placebo but someone wanted them to includ B as well. They thought B was silly but being a nice guy included the B group. The result was A versus placebo was clearly significant but not after adjusting for B. They could never get the study published because of that. Also, Ed George had a nice talk at the joint meeting this summer where he was in effect arguing for an analysts' posterior for those who have access to the data versus a reported posterior for those who only find out about the study if it is selectively reported to them. Thinking about his talk afterwards and that slippery intentionality stuff possibly also applying to the analysts, the "Men in Black" movie seemed relevant or at least their use of the Neuralizer - http://en.wikipedia.org/wiki/Neuralizer It’s as if the Bayesian analyst knows they would have been neuralized as soon as they realized a given data set did not achieve some pre-set goal. So when they get a data set that does meet it, they realize they don't know how often they have been neuralized but they know the selection rule for avoiding being neuralized this time.
Should false discovery be controlled at the data acquisition level, or should this be at the data in I would argue strongly that is should apply only at the interpretation level. Multiplicity implicitly involves the definition of an investigation by an investigator(s) (i.e. the study wise error rate
49,524
Should false discovery be controlled at the data acquisition level, or should this be at the data interpretation level?
As @phaneron has stated, there is no need for multiplicity control if you only consider one gene. I wish to add that hypothesis testing serves two purposes: (a) convince yourself and (b) convince "the world". For the purpose of (a), recall that the BH procedure controls the "expected proportion of false discoveries over re-tests of the same hypotheses". If you (honestly) consider the test of one hypothesis (SNP), there is no multiplicity at hand and the only question left is "do you find frequentist hypothesis testing convincing?". For the purpose of (b), the difficulty might be to convince others that the choice of the SNP was "honest", which technically means that the focus on that single SNP was made using different (and thus statistically independent) data than the data serving for the hypothesis testing.
Should false discovery be controlled at the data acquisition level, or should this be at the data in
As @phaneron has stated, there is no need for multiplicity control if you only consider one gene. I wish to add to that hypothesis testing serves two purposes: (a) convince yourself and (b) convince "
Should false discovery be controlled at the data acquisition level, or should this be at the data interpretation level? As @phaneron has stated, there is no need for multiplicity control if you only consider one gene. I wish to add to that hypothesis testing serves two purposes: (a) convince yourself and (b) convince "the world". For the purpose of (a), recall that the BH procedure controls the "expected proportion of false discoveries over re-tests of the same hypotheses". If you (honestly) consider the test of one hypothesis (snp), there is no multiplicity at hand and the only question left is "do you find frequentist hypothesis testing convincing?". For the purpose of (b) the difficulty might be to convince that the choice of the snp was "honest". Which technically means that the focus on that single snp was made using different (thus statistically independent) data than the data serving for the hypothesis testing.
Should false discovery be controlled at the data acquisition level, or should this be at the data in As @phaneron has stated, there is no need for multiplicity control if you only consider one gene. I wish to add to that hypothesis testing serves two purposes: (a) convince yourself and (b) convince "
49,525
Perpendicular offsets in a weighted least squares regression
Completely revised answer, see history. Take the formula from your link. It contains a lot of sums iterating over your input points. Make sure to multiply the summands in all of these sums with your weights $w$: \begin{align*} \sum_{i=1}^n x_i &\to \sum_{i=1}^n w_ix_i \\ \sum_{i=1}^n y_i &\to \sum_{i=1}^n w_iy_i \\ \sum_{i=1}^n x_i^2 &\to \sum_{i=1}^n w_ix_i^2 \\ \sum_{i=1}^n x_iy_i &\to \sum_{i=1}^n w_ix_iy_i \\ \sum_{i=1}^n y_i^2 &\to \sum_{i=1}^n w_iy_i^2 \\ n = \sum_{i=1}^n 1 &\to \sum_{i=1}^n w_i \end{align*} Notice that I previously suggested weighting the coordinates, but that causes one $w$ too many for the second-order terms. To simulate the effect of $w$ denoting the multiplicity of points (i.e. $w_i=3$ should have the same effect as point $i$ repeated $3$ times), you have to have exactly one $w$ for every sum iterating over your set of points. Your code still has one $w$ too many in the sum(x_weighted.*y_weighted) term of B_down. With this solution, and using exact arithmetic on algebraic numbers to avoid numeric issues, one of the two solutions of the quadratic equation gives a pretty good result on the example data you provided. Seeing as $B$ is only around $22$ with the correct computation, numeric issues shouldn't be to serious a problem, contrary to my previous experiences with the incorrect weighting. I still don't know which solution will be the correct one in general, whether you can always choose the one with the positive square root, or whether you have to examine the sign of the second derivative.
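The recipe above can be written out directly; the sketch below is in Python/numpy rather than the MATLAB-style code discussed in the thread, the function name and example numbers are invented, and rather than settling the open question about which quadratic root to take, it simply evaluates the weighted perpendicular error at both roots and keeps the smaller one (it assumes the weighted cross-product term is nonzero).

import numpy as np

def weighted_perpendicular_fit(x, y, w):
    x, y, w = map(np.asarray, (x, y, w))
    sw = w.sum()
    xbar, ybar = (w * x).sum() / sw, (w * y).sum() / sw
    sxx = (w * (x - xbar) ** 2).sum()               # weighted sums, one w per term
    syy = (w * (y - ybar) ** 2).sum()
    sxy = (w * (x - xbar) * (y - ybar)).sum()
    disc = np.sqrt((syy - sxx) ** 2 + 4.0 * sxy ** 2)
    candidates = [((syy - sxx) + s * disc) / (2.0 * sxy) for s in (1.0, -1.0)]
    def perp_error(b):                              # total weighted squared perpendicular distance
        a = ybar - b * xbar
        return (w * (y - a - b * x) ** 2).sum() / (1.0 + b ** 2)
    b = min(candidates, key=perp_error)
    return ybar - b * xbar, b                       # intercept, slope

a, b = weighted_perpendicular_fit([0, 1, 2, 3], [0.1, 1.9, 4.2, 5.8], [1, 3, 3, 1])
print(a, b)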
Perpendicular offsets in a weighted least squares regression
Completely revised answer, see history. Take the formula from your link. It contains a lot of sums iterating over your input points. Make sure to multiply the summands in all of these sums with your w
Perpendicular offsets in a weighted least squares regression Completely revised answer, see history. Take the formula from your link. It contains a lot of sums iterating over your input points. Make sure to multiply the summands in all of these sums with your weights $w$: \begin{align*} \sum_{i=1}^n x_i &\to \sum_{i=1}^n w_ix_i \\ \sum_{i=1}^n y_i &\to \sum_{i=1}^n w_iy_i \\ \sum_{i=1}^n x_i^2 &\to \sum_{i=1}^n w_ix_i^2 \\ \sum_{i=1}^n x_iy_i &\to \sum_{i=1}^n w_ix_iy_i \\ \sum_{i=1}^n y_i^2 &\to \sum_{i=1}^n w_iy_i^2 \\ n = \sum_{i=1}^n 1 &\to \sum_{i=1}^n w_i \end{align*} Notice that I previously suggested weighting the coordinates, but that causes one $w$ too many for the second-order terms. To simulate the effect of $w$ denoting the multiplicity of points (i.e. $w_i=3$ should have the same effect as point $i$ repeated $3$ times), you have to have exactly one $w$ for every sum iterating over your set of points. Your code still has one $w$ too many in the sum(x_weighted.*y_weighted) term of B_down. With this solution, and using exact arithmetic on algebraic numbers to avoid numeric issues, one of the two solutions of the quadratic equation gives a pretty good result on the example data you provided. Seeing as $B$ is only around $22$ with the correct computation, numeric issues shouldn't be to serious a problem, contrary to my previous experiences with the incorrect weighting. I still don't know which solution will be the correct one in general, whether you can always choose the one with the positive square root, or whether you have to examine the sign of the second derivative.
Perpendicular offsets in a weighted least squares regression Completely revised answer, see history. Take the formula from your link. It contains a lot of sums iterating over your input points. Make sure to multiply the summands in all of these sums with your w
49,526
Creating ROC curve for multi-level logistic regression model in R
There's a whole lot of literature about multi-class extensions for ROC. I have some presentations with illustrations how the calculation works at softclassval's home page (softclassval calculates sensitivities etc. if you have partial class memberships, also for multiple classes - but that is probably an overkill for your problem). For sensitivity and specificity, the spelled out definitions lead to a very straightforward extension: sensitivity: what proportion of truly class $c$ cases are correctly recognized by the model? specificity: what proportion of cases truly not belonging to class $c$ are correctly recognized as not coming from class $c$? If you think about medical diagnostics/epidemiology, the set up is always multinomial from a philosophical point of view: the normal/healthy/control group in fact is rather a "not this disease" group which may contain a whole lot of other diseases. Sometimes classes are mutually exclusive, more often they are not (having, say, a brain tumour does not mean that you cannot have hepatitis nor does it save you from breaking your arm) I use package ROCR to plot ROCs, but there are plenty alternatives in R (e.g. pROC - pROC's home page has a comparison of several R packages dealing with ROC generation in R). update: @Adam I know this paper: Landgrebe, T. C. & Paclik, P. The ROC skeleton for multiclass ROC estimation, Pattern Recognition Letters, 31, 949-958 (2010). DOI: 10.1016/j.patrec.2009.12.037 which deals with independent classes. Basically with $n$ independent classes, you get an $n-1$ dimensional "surface" in $n$ dimensions spanned by the e.g. sensitivity for each class. Here's something about ordered levels: Nakas, C. T. & Yiannoutsos, C. T. Ordered multiple-class ROC analysis with continuous measurements., Stat Med, 23, 3437-3449 (2004). DOI: 10.1002/sim.1917 But I cannot access it, so I can't tell you anything further.
Creating ROC curve for multi-level logistic regression model in R
There's a whole lot of literature about multi-class extensions for ROC. I have some presentations with illustrations how the calculation works at softclassval's home page (softclassval calculates se
Creating ROC curve for multi-level logistic regression model in R There's a whole lot of literature about multi-class extensions for ROC. I have some presentations with illustrations how the calculation works at softclassval's home page (softclassval calculates sensitivities etc. if you have partial class memberships, also for multiple classes - but that is probably an overkill for your problem). For sensitivity and specificity, the spelled out definitions lead to a very straightforward extension: sensitivity: what proportion of truly class $c$ cases are correctly recognized by the model? specificity: what proportion of cases truly not belonging to class $c$ are correctly recognized as not coming from class $c$? If you think about medical diagnostics/epidemiology, the set up is always multinomial from a philosophical point of view: the normal/healthy/control group in fact is rather a "not this disease" group which may contain a whole lot of other diseases. Sometimes classes are mutually exclusive, more often they are not (having, say, a brain tumour does not mean that you cannot have hepatitis nor does it save you from breaking your arm) I use package ROCR to plot ROCs, but there are plenty alternatives in R (e.g. pROC - pROC's home page has a comparison of several R packages dealing with ROC generation in R). update: @Adam I know this paper: Landgrebe, T. C. & Paclik, P. The ROC skeleton for multiclass ROC estimation, Pattern Recognition Letters, 31, 949-958 (2010). DOI: 10.1016/j.patrec.2009.12.037 which deals with independent classes. Basically with $n$ independent classes, you get an $n-1$ dimensional "surface" in $n$ dimensions spanned by the e.g. sensitivity for each class. Here's something about ordered levels: Nakas, C. T. & Yiannoutsos, C. T. Ordered multiple-class ROC analysis with continuous measurements., Stat Med, 23, 3437-3449 (2004). DOI: 10.1002/sim.1917 But I cannot access it, so I can't tell you anything further.
Creating ROC curve for multi-level logistic regression model in R There's a whole lot of literature about multi-class extensions for ROC. I have some presentations with illustrations how the calculation works at softclassval's home page (softclassval calculates se
49,527
Creating ROC curve for multi-level logistic regression model in R
Not possible. The very idea of ROC rests on sensitivity and specificity, which in turn are single real numbers only for a two-class outcome. To make ROC work with more than two-valued logic, you would need to accept that sensitivity and specificity become vectors. You can always convert your dependent variable into a set of two-level dummy variables and perform a series of ROCs, but I guess that's not what you are looking for.
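For what it's worth, the "one dummy variable per level, one ROC each" route is easy to mechanise. The sketch below uses Python/scikit-learn purely to illustrate the idea (the question itself is about R, where the pROC or ROCR packages mentioned in the other answer would play the same role), with invented data and an ordinary multinomial logit standing in for the fitted model.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.preprocessing import label_binarize

X, y = make_classification(n_samples=500, n_classes=3, n_informative=5, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)
Y = label_binarize(y, classes=[0, 1, 2])       # one 0/1 dummy column per outcome level

for k in range(Y.shape[1]):                    # one ROC per level (one-vs-rest)
    fpr, tpr, _ = roc_curve(Y[:, k], scores[:, k])
    print(f"level {k}: AUC = {auc(fpr, tpr):.3f}")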
Creating ROC curve for multi-level logistic regression model in R
Not possible. The very idea of ROC requires the concept of sensitivity and specificity, which in turn take only real numbers. To have the idea of ROC working with more than two-valued logic, you would
Creating ROC curve for multi-level logistic regression model in R Not possible. The very idea of ROC rests on sensitivity and specificity, which in turn are single real numbers only for a two-class outcome. To make ROC work with more than two-valued logic, you would need to accept that sensitivity and specificity become vectors. You can always convert your dependent variable into a set of two-level dummy variables and perform a series of ROCs, but I guess that's not what you are looking for.
Creating ROC curve for multi-level logistic regression model in R Not possible. The very idea of ROC requires the concept of sensitivity and specificity, which in turn take only real numbers. To have the idea of ROC working with more than two-valued logic, you would
49,528
Predictive model & standardized variables
If one were to fit a model $y= \beta_1 + \beta_2z$ where $z=\frac{x-\bar{x}}{sd(x)}$ and use that model to predict $y$ for some given values of $x$, then use the original $\bar{x}$ and $sd(x)$ to standardize the new $x$ values being used for prediction. However, if one has many new values of $y$ and $x$ and wants to refit the model then standardize $x$ based on the new values of $\bar{x}$ and $sd(x)$.
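A tiny sketch of the point, with invented numbers: the training mean and standard deviation are stored with the model and reused on new x values.

import numpy as np

x_train = np.array([2.0, 4.0, 6.0, 8.0])
mu, sd = x_train.mean(), x_train.std(ddof=1)      # saved alongside the fitted coefficients

beta1, beta2 = 1.0, 0.5                           # hypothetical fitted intercept and slope

def predict(x_new):
    z = (np.asarray(x_new) - mu) / sd             # original mu and sd, not recomputed on x_new
    return beta1 + beta2 * z

print(predict([3.0, 9.0]))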
Predictive model & standardized variables
If one were to fit a model $y= \beta_1 + \beta_2z$ where $z=\frac{x-\bar{x}}{sd(x)}$ and use that model to predict $y$ for some given values of $x$, then use the original $\bar{x}$ and $sd(x)$ to stan
Predictive model & standardized variables If one were to fit a model $y= \beta_1 + \beta_2z$ where $z=\frac{x-\bar{x}}{sd(x)}$ and use that model to predict $y$ for some given values of $x$, then use the original $\bar{x}$ and $sd(x)$ to standardize the new $x$ values being used for prediction. However, if one has many new values of $y$ and $x$ and wants to refit the model then standardize $x$ based on the new values of $\bar{x}$ and $sd(x)$.
Predictive model & standardized variables If one were to fit a model $y= \beta_1 + \beta_2z$ where $z=\frac{x-\bar{x}}{sd(x)}$ and use that model to predict $y$ for some given values of $x$, then use the original $\bar{x}$ and $sd(x)$ to stan
49,529
Predictive model & standardized variables
This is one of the major problems with standardizing variables prior to regression. The entire meaning of the output is sample-dependent. I much prefer working with unstandardized variables so that this problem (and similar ones) do not arise.
Predictive model & standardized variables
This is one of the major problems with standardizing variables prior to regression. The entire meaning of the output is sample-dependent. I much prefer working with unstandardized variables so that th
Predictive model & standardized variables This is one of the major problems with standardizing variables prior to regression. The entire meaning of the output is sample-dependent. I much prefer working with unstandardized variables so that this problem (and similar ones) do not arise.
Predictive model & standardized variables This is one of the major problems with standardizing variables prior to regression. The entire meaning of the output is sample-dependent. I much prefer working with unstandardized variables so that th
49,530
What test should be used for detecting team imbalances in a game?
I think it's a bad idea to ignore the strengths of the players, but it may be hard to completely separate the possible flaws in the rating system from the possible advantages of one option versus another. You could try the following test for each pair of options A and B. Your null hypothesis is that the rating formula is accurate and that the games are independent. Compute the number of wins predicted by the rating formula for option A, and compare this with the observed number of wins. If the rating formula predicts that the player using option A will win with probability $p$, add $p$ to the total expected wins, and add $(p(1-p))$ to the total variance according to the null hypothesis. If the games are not overwhelmingly lopsided, then you should be able to use a normal approximation since you have over $100$ data points for each match-up. Determine how extreme the observed result was in terms of standard deviations away from the predicted mean. Since you would apply this test for each possible match-up, you would expect more false positives if you use a typical significance threshold for a single test. So, instead of asking for the results to be significant at the $0.05$ level on at least one of $6$ tests, you might want to require $0.05/6 \approx 0.008$ or about $2 \frac23$ standard deviations from the mean in either direction in order to reject the null hypothesis. If you reject the null hypothesis, it doesn't necessarily mean that team A has an advantage over team B. It could also be that the ratings formula fails, which might happen for lopsided matches. If you have enough data you can try to compare players of similar ratings, where you can expect the ratings formula to be more accurate.
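The test described above is easy to compute once the per-game predicted probabilities are in hand. The Python sketch below uses simulated probabilities and outcomes as placeholders; the final comparison against 0.05/6 is the Bonferroni threshold over the six match-ups.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p = rng.uniform(0.3, 0.7, size=150)              # rating formula's P(option A wins), per game
wins = (rng.random(150) < p).astype(int)         # observed outcomes (toy data)

expected = p.sum()                               # expected wins under the null
variance = (p * (1 - p)).sum()                   # null variance
z = (wins.sum() - expected) / np.sqrt(variance)  # normal approximation
p_value = 2 * stats.norm.sf(abs(z))              # two-sided

print(f"z = {z:.2f}, p = {p_value:.4f}, reject at 0.05/6: {p_value < 0.05 / 6}")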
What test should be used for detecting team imbalances in a game?
I think it's a bad idea to ignore the strengths of the players, but it may be hard to completely separate the possible flaws in the rating system from the possible advantages of one option versus anot
What test should be used for detecting team imbalances in a game? I think it's a bad idea to ignore the strengths of the players, but it may be hard to completely separate the possible flaws in the rating system from the possible advantages of one option versus another. You could try the following test for each pair of options A and B. Your null hypothesis is that the rating formula is accurate and that the games are independent. Compute the number of wins predicted by the rating formula for option A, and compare this with the observed number of wins. If the rating formula predicts that the player using option A will win with probability $p$, add $p$ to the total expected wins, and add $(p(1-p))$ to the total variance according to the null hypothesis. If the games are not overwhelmingly lopsided, then you should be able to use a normal approximation since you have over $100$ data points for each match-up. Determine how extreme the observed result was in terms of standard deviations away from the predicted mean. Since you would apply this test for each possible match-up, you would expect more false positives if you use a typical significance threshold for a single test. So, instead of asking for the results to be significant at the $0.05$ level on at least one of $6$ tests, you might want to require $0.05/6 \approx 0.008$ or about $2 \frac23$ standard deviations from the mean in either direction in order to reject the null hypothesis. If you reject the null hypothesis, it doesn't necessarily mean that team A has an advantage over team B. It could also be that the ratings formula fails, which might happen for lopsided matches. If you have enough data you can try to compare players of similar ratings, where you can expect the ratings formula to be more accurate.
What test should be used for detecting team imbalances in a game? I think it's a bad idea to ignore the strengths of the players, but it may be hard to completely separate the possible flaws in the rating system from the possible advantages of one option versus anot
49,531
Supervised classifier for events with missing data
As requested, I'll elaborate on my comment, although I don't have experience using it. I work with neural networks for regression problems and often construct new features, but I don't have to deal with missing data so I'm not sure whether this will work. Let's suppose the features of your data look like $(0,1)$ $(0.5,0.5)$ $(*,0.2)$ $(*,0.7)$ $(0.8,*)$ where $*$ means the value is missing or unreliable. Rather than replacing $*$ with a very large or very small value outside the typical range, or breaking up your data so that you have a separate net for each subset of the data which might be missing, I suggest making two features for each input which might be missing. If the value is present, the two features become $(0,value)$. If the value is missing, then the features become $(1,random)$ where you randomly sample value from the range. So, the above data set would become $(0,0,0,1)$ $(0,0.5,0,0.5)$ $(1,rand,0,0.2)$ $(1,rand,0,0.7)$ $(0,0.8,1,rand)$ Each point with a random coordinate can be cloned to give several inputs which you can train with a lower learning rate. $(0,0,0,1), \alpha = \alpha_0 $ $(0,0.5,0,0.5), \alpha = \alpha_0$ $(1,0,0,0.2), \alpha = \alpha_0/3$ $(1,0.5,0,0.2), \alpha = \alpha_0/3$ $(1,0.8,0,0.2), \alpha = \alpha_0/3$ $(1,0,0,0.7), \alpha = \alpha_0/3$ $(1,0.5,0,0.7), \alpha = \alpha_0/3$ $(1,0.8,0,0.7), \alpha = \alpha_0/3$ $(0,0.8,1,1), \alpha = \alpha_0/4$ $(0,0.8,1,0.5), \alpha = \alpha_0/4$ $(0,0.8,1,0.2), \alpha = \alpha_0/4$ $(0,0.8,1,0.7), \alpha = \alpha_0/4$ One idea is that this should encourage the neural network to learn that when the indicator that the value is missing is $1$, then the value doesn't matter. You can test whether this is true for the neural network, and perhaps use a regularizer which encourages this.
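The indicator-plus-random-fill construction is mechanical; a numpy sketch is below (assuming np.nan marks the missing or unreliable values), leaving out the cloning and per-example learning-rate part, which would be handled in the training loop.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0.0, 1.0],
              [0.5, 0.5],
              [np.nan, 0.2],
              [np.nan, 0.7],
              [0.8, np.nan]])

def augment(X, rng):
    cols = []
    for j in range(X.shape[1]):
        col = X[:, j].copy()
        missing = np.isnan(col)
        lo, hi = np.nanmin(X[:, j]), np.nanmax(X[:, j])
        col[missing] = rng.uniform(lo, hi, missing.sum())   # random fill from the observed range
        cols.append(missing.astype(float))                  # indicator: 1 = value was missing
        cols.append(col)
    return np.column_stack(cols)

print(augment(X, rng))      # each original column becomes (indicator, value)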
Supervised classifier for events with missing data
As requested, I'll elaborate on my comment, although I don't have experience using it. I work with neural networks for regression problems and often construct new features, but I don't have to deal wi
Supervised classifier for events with missing data As requested, I'll elaborate on my comment, although I don't have experience using it. I work with neural networks for regression problems and often construct new features, but I don't have to deal with missing data so I'm not sure whether this will work. Let's suppose the features of your data look like $(0,1)$ $(0.5,0.5)$ $(*,0.2)$ $(*,0.7)$ $(0.8,*)$ where $*$ means the value is missing or unreliable. Rather than replacing $*$ with a very large or very small value outside the typical range, or breaking up your data so that you have a separate net for each subset of the data which might be missing, I suggest making two features for each input which might be missing. If the value is present, the two features become $(0,value)$. If the value is missing, then the features become $(1,random)$ where you randomly sample value from the range. So, the above data set would become $(0,0,0,1)$ $(0,0.5,0,0.5)$ $(1,rand,0,0.2)$ $(1,rand,0,0.7)$ $(0,0.8,1,rand)$ Each point with a random coordinate can be cloned to give several inputs which you can train with a lower learning rate. $(0,0,0,1), \alpha = \alpha_0 $ $(0,0.5,0,0.5), \alpha = \alpha_0$ $(1,0,0,0.2), \alpha = \alpha_0/3$ $(1,0.5,0,0.2), \alpha = \alpha_0/3$ $(1,0.8,0,0.2), \alpha = \alpha_0/3$ $(1,0,0,0.7), \alpha = \alpha_0/3$ $(1,0.5,0,0.7), \alpha = \alpha_0/3$ $(1,0.8,0,0.7), \alpha = \alpha_0/3$ $(0,0.8,1,1), \alpha = \alpha_0/4$ $(0,0.8,1,0.5), \alpha = \alpha_0/4$ $(0,0.8,1,0.2), \alpha = \alpha_0/4$ $(0,0.8,1,0.7), \alpha = \alpha_0/4$ One idea is that this should encourage the neural network to learn that when the indicator that the value is missing is $1$, then the value doesn't matter. You can test whether this is true for the neural network, and perhaps use a regularizer which encourages this.
Supervised classifier for events with missing data As requested, I'll elaborate on my comment, although I don't have experience using it. I work with neural networks for regression problems and often construct new features, but I don't have to deal wi
49,532
Supervised classifier for events with missing data
Doug Zare's suggestion is a little like a missing data technique called multiple imputation. The data point gets repeated many times with plausible values for the missing variable being input. I think that would allow the classifier to gain the information from the correct coordinate and in a way learn the uncertainty from the missing one. The tree classifiers can handle missing data with surrogate variables (i.e. using a replacement variable that is known to be highly correlated with the one that is missing).
Supervised classifier for events with missing data
Doug Zare's suggestion is a little like a missing data technique called multiple imputation. The data point gets repeated many times with plausible values for the missing variable being input. I thi
Supervised classifier for events with missing data Doug Zare's suggestion is a little like a missing data technique called multiple imputation. The data point gets repeated many times with plausible values for the missing variable being input. I think that would allow the classifier to gain the information from the correct coordinate and in a way learn the uncertainty from the missing one. The tree classifiers can handle missing data with surrogate variables (i.e. using a replacement variable that is known to be highly correlated with the one that is missing).
Supervised classifier for events with missing data Doug Zare's suggestion is a little like a missing data technique called multiple imputation. The data point gets repeated many times with plausible values for the missing variable being input. I thi
49,533
Distribution of atan2 of normal r.v.'s
I don't think there is a simple expression for the pdf. If there were, then there would be a simple expression for the usual $\arctan$, and of the ratio between two (noncentered) normal distributions. The latter is studied in papers like Marsaglia (1965, 2006) and Cedilnik et al (2004).
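If only a numerical picture of the density is needed, simulation is straightforward; the means and covariance below are arbitrary placeholders.

import numpy as np

rng = np.random.default_rng(0)
mean = [1.0, 0.5]                                # placeholder means for (Y, X)
cov = [[1.0, 0.3], [0.3, 2.0]]                   # placeholder covariance
y, x = rng.multivariate_normal(mean, cov, size=200_000).T

theta = np.arctan2(y, x)                         # angles in (-pi, pi]
dens, edges = np.histogram(theta, bins=100, range=(-np.pi, np.pi), density=True)
print(dens[:5])                                  # empirical density of the angle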
Distribution of atan2 of normal r.v.'s
I don't think there is a simple expression for the pdf. If there were, then there would be a simple expression for the usual $\arctan$, and of the ratio between two (noncentered) normal distributions.
Distribution of atan2 of normal r.v.'s I don't think there is a simple expression for the pdf. If there were, then there would be a simple expression for the usual $\arctan$, and of the ratio between two (noncentered) normal distributions. The latter is studied in papers like Marsaglia (1965, 2006) and Cedilnik et al (2004).
Distribution of atan2 of normal r.v.'s I don't think there is a simple expression for the pdf. If there were, then there would be a simple expression for the usual $\arctan$, and of the ratio between two (noncentered) normal distributions.
49,534
What are Effective Regression Techniques for Linguistic Analysis of Linked Data?
Both questions are hard, I'll give a shot at the first one. A straightforward approach to classify documents is to compute their tf-idf. In short, you consider the text is a bag of words, i.e. that it has no linear structure, and you compute a score that says how much the word is specific of a document. I explain a little bit about how to do this here. Once this is done, texts are often compared with the cosine similarity measure, which is the cosine of their tf-idf vectors. If they have a high similarity, they have similar specific words and you can guess they are about the same topic. You can compute cosines, but you can do all sorts of geometric operations. In particular you can fit Support Vector Machines which give good results in text classifications. Finally, a last idea would be to use keyword extraction tools, such as the Alchemy API to summarize your documents to 10-20 relevant keywords. You can then use standard classification techniques on this dataset of reduced dimension. As a good primer on text classification, I suggest Introduction to Information Retrieval (free), and Mining the Social Web... not free but probably available from the best pirate sites ;-)
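A bare-bones Python/scikit-learn sketch of that pipeline (tf-idf, cosine similarity, then a linear SVM); the toy documents and labels are placeholders.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import LinearSVC

docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today",
        "investors worry about interest rates"]
labels = ["animals", "animals", "finance", "finance"]

tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)                    # each document as a tf-idf vector

print(cosine_similarity(X[0], X[1]))             # similarity between two documents

clf = LinearSVC().fit(X, labels)                 # linear SVM in the tf-idf space
print(clf.predict(tfidf.transform(["my cat chased the dog"])))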
What are Effective Regression Techniques for Linguistic Analysis of Linked Data?
Both questions are hard, I'll give a shot at the first one. A straightforward approach to classify documents is to compute their tf-idf. In short, you consider the text is a bag of words, i.e. that it
What are Effective Regression Techniques for Linguistic Analysis of Linked Data? Both questions are hard, I'll give a shot at the first one. A straightforward approach to classify documents is to compute their tf-idf. In short, you consider the text is a bag of words, i.e. that it has no linear structure, and you compute a score that says how much the word is specific of a document. I explain a little bit about how to do this here. Once this is done, texts are often compared with the cosine similarity measure, which is the cosine of their tf-idf vectors. If they have a high similarity, they have similar specific words and you can guess they are about the same topic. You can compute cosines, but you can do all sorts of geometric operations. In particular you can fit Support Vector Machines which give good results in text classifications. Finally, a last idea would be to use keyword extraction tools, such as the Alchemy API to summarize your documents to 10-20 relevant keywords. You can then use standard classification techniques on this dataset of reduced dimension. As a good primer on text classification, I suggest Introduction to Information Retrieval (free), and Mining the Social Web... not free but probably available from the best pirate sites ;-)
What are Effective Regression Techniques for Linguistic Analysis of Linked Data? Both questions are hard, I'll give a shot at the first one. A straightforward approach to classify documents is to compute their tf-idf. In short, you consider the text is a bag of words, i.e. that it
49,535
Chi square test on non-normal distributions
Normality is a requirement for the chi-square test that a variance equals a specified value, but there are many tests that are called chi-square because their asymptotic null distribution is chi-square, such as the chi-square test for independence in contingency tables and the chi-square goodness-of-fit test. Neither of these tests requires normality. This agrees with Peter Ellis' comment. Regarding your question: when specific parametric assumptions are not made (normality being just one such assumption), there are nonparametric procedures (rank tests, permutation tests and the bootstrap) that can be applied with more generality. In regression, robust regression is an alternative to ordinary least squares.
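For example, the independence test needs nothing beyond a table of counts; a scipy sketch with made-up numbers:

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10],      # rows: group A / group B (made-up counts)
                  [20, 25]])     # columns: outcome yes / no

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")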
Chi square test on non-normal distributions
Normality is a requirement for the chi square test that a variance equals a specified value but there are many tests that are called chi-square because their asymptotic null distribution is chi-square
Chi square test on non-normal distributions Normality is a requirement for the chi-square test that a variance equals a specified value, but there are many tests that are called chi-square because their asymptotic null distribution is chi-square, such as the chi-square test for independence in contingency tables and the chi-square goodness-of-fit test. Neither of these tests requires normality. This agrees with Peter Ellis' comment. Regarding your question: when specific parametric assumptions are not made (normality being just one such assumption), there are nonparametric procedures (rank tests, permutation tests and the bootstrap) that can be applied with more generality. In regression, robust regression is an alternative to ordinary least squares.
Chi square test on non-normal distributions Normality is a requirement for the chi square test that a variance equals a specified value but there are many tests that are called chi-square because their asymptotic null distribution is chi-square
49,536
Chi square test on non-normal distributions
I learned the chi squared distribution as a special case of a gamma density function. What I have read today online on wikipedia and in texts sometimes says "If Z1, ..., Zk are independent, standard normal random variables, then the sum of their squares is distributed according to the chi-squared distribution with k degrees of freedom." and then in other instances drops the "normal random variables" part- "By the central limit theorem, because the chi-squared distribution is the sum of k independent random variables with finite mean and variance, it converges to a normal distribution for large k." Perhaps this is an error in wikipedia. But the normal assumption is really the part I am interested in today. It seems to me that there are NO "requirements for the chi-square test to 'work'" except maybe that the set of random variables not be empty, and be real numbers. By test I take the asker to mean the squaring of the sum of the squares of the random variables, and then checking that against what would be expected from a standard normal with the given mean and std dev. That is to say the expected versus the observed outcome. Here is a list of so called "chi-squared tests": http://en.wikipedia.org/wiki/Chi-squared_test The outcomes of a M = Z^2 where Z is a standard normal random variable are different than if M = G^2 where G was a random variable from a gamma distribution. An example I can think of is in application when there is a small sample size- I suppose this can be defined as less than the amount that a sample size calculator would yield- Here you don't know if your normal assumption is valid, because let's say you have no prior data, and the sample size being small means no central limit theorem application, but not all is lost because a chi-squared test can be done to measure the validity of Gaussian distribution functions being used such as normal probability distribution function, normal cumulative distribution function and their inverses etc. SO as far as I can tell one of the most useful uses of the chi-squared test in beginning and intermediate practice of statistics is to test the normal assumption on small sample sizes. But to get to the part of the question that asks about other data types. I think it is good to learn all the different distributions other than normal, and t distribution. There are discrete probability distribution functions- Bernoulli, binomial etc. and there are continuous probability distribution functions- exponential, beta, Poisson, Pareto etc. Then learn about what a gamma distribution is- how it is an all encompassing distribution function. From there simply looking at the data, graphing the data and measuring observed versus expected in some way i.e. "goodness of fit" etc can help determine what kind of distribution shape your data has. What's great about a gamma or even exponential distribution is that you can make any shape you see. Due to the central limit theorem and the other versions of that that there are- people often just assume a normal distribution. And this is fine over many many tests and or much prior data. This has been edited from the original comment.
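The gamma connection mentioned above is easy to verify numerically: a chi-squared with k degrees of freedom has the same density as a gamma with shape k/2 and scale 2.

import numpy as np
from scipy import stats

k = 5
x = np.linspace(0.01, 20, 200)
same = np.allclose(stats.chi2(df=k).pdf(x),
                   stats.gamma(a=k / 2, scale=2).pdf(x))
print(same)                                      # True: chi2(k) matches gamma(k/2, scale=2)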
Chi square test on non-normal distributions
I learned the chi squared distribution as a special case of a gamma density function. What I have read today online on wikipedia and in texts sometimes says "If Z1, ..., Zk are independent, standard
Chi square test on non-normal distributions I learned the chi squared distribution as a special case of a gamma density function. What I have read today online on wikipedia and in texts sometimes says "If Z1, ..., Zk are independent, standard normal random variables, then the sum of their squares is distributed according to the chi-squared distribution with k degrees of freedom." and then in other instances drops the "normal random variables" part- "By the central limit theorem, because the chi-squared distribution is the sum of k independent random variables with finite mean and variance, it converges to a normal distribution for large k." Perhaps this is an error in wikipedia. But the normal assumption is really the part I am interested in today. It seems to me that there are NO "requirements for the chi-square test to 'work'" except maybe that the set of random variables not be empty, and be real numbers. By test I take the asker to mean the squaring of the sum of the squares of the random variables, and then checking that against what would be expected from a standard normal with the given mean and std dev. That is to say the expected versus the observed outcome. Here is a list of so called "chi-squared tests": http://en.wikipedia.org/wiki/Chi-squared_test The outcomes of a M = Z^2 where Z is a standard normal random variable are different than if M = G^2 where G was a random variable from a gamma distribution. An example I can think of is in application when there is a small sample size- I suppose this can be defined as less than the amount that a sample size calculator would yield- Here you don't know if your normal assumption is valid, because let's say you have no prior data, and the sample size being small means no central limit theorem application, but not all is lost because a chi-squared test can be done to measure the validity of Gaussian distribution functions being used such as normal probability distribution function, normal cumulative distribution function and their inverses etc. SO as far as I can tell one of the most useful uses of the chi-squared test in beginning and intermediate practice of statistics is to test the normal assumption on small sample sizes. But to get to the part of the question that asks about other data types. I think it is good to learn all the different distributions other than normal, and t distribution. There are discrete probability distribution functions- Bernoulli, binomial etc. and there are continuous probability distribution functions- exponential, beta, Poisson, Pareto etc. Then learn about what a gamma distribution is- how it is an all encompassing distribution function. From there simply looking at the data, graphing the data and measuring observed versus expected in some way i.e. "goodness of fit" etc can help determine what kind of distribution shape your data has. What's great about a gamma or even exponential distribution is that you can make any shape you see. Due to the central limit theorem and the other versions of that that there are- people often just assume a normal distribution. And this is fine over many many tests and or much prior data. This has been edited from the original comment.
Chi square test on non-normal distributions I learned the chi squared distribution as a special case of a gamma density function. What I have read today online on wikipedia and in texts sometimes says "If Z1, ..., Zk are independent, standard
49,537
To aggregate and lose resolution OR not to aggregate and suffer with correlated binary data?
You're right that averaging over the responses and performing a repeated measures ANOVA is not the ideal thing to do. Your intuitions are good; there should be a difference between a global accuracy that's based on 48 responses and one based on 400. First, you should be using logistic regression. If you're not very familiar with that, it may help you to read the answer I wrote to this question: difference between logit and probit models. Although it was written in a different context, there's a lot of information about logistic regression, and it can help you get a sense of what it's about. Logistic regression, in its basic form, is for independent data, and your data is not independent. To deal with this, you want to either fit a logistic regression model using generalized estimating equations (GEE), or fit a GLiMM. The choice of which to use is based on the nature of the question you want to ask. I discuss these issues in this question: difference between generalized linear models and generalized linear mixed models in SPSS. For more thorough explanations of these topics, you may want to read Agresti's Introduction to Categorical Data Analysis. One thing I think you won't need to worry about is the size of the clusters, though; these will work fine with your situation. HTH.
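A sketch of the GEE route in Python/statsmodels, with invented trial-level data and placeholder variable names (correct, condition, subject); in R the same model is available through packages such as geepack, and the GLiMM route through lme4.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 48
df = pd.DataFrame({
    "subject":   np.repeat(np.arange(n_subj), n_trials),
    "condition": np.tile([0, 1], n_subj * n_trials // 2),
})
df["correct"] = rng.binomial(1, 0.6 + 0.1 * df["condition"])   # toy 0/1 responses

model = smf.gee("correct ~ condition", groups="subject", data=df,
                family=sm.families.Binomial(),
                cov_struct=sm.cov_struct.Exchangeable())       # within-subject correlation
print(model.fit().summary())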
To aggregate and lose resolution OR not to aggregate and suffer with correlated binary data?
You're right that averaging over the responses and performing a repeated measures ANOVA is not the ideal thing to do. Your intuitions are good; there should be a difference between a global accuracy
To aggregate and lose resolution OR not to aggregate and suffer with correlated binary data? You're right that averaging over the responses and performing a repeated measures ANOVA is not the ideal thing to do. Your intuitions are good; there should be a difference between a global accuracy that's based on 48 responses and one based on 400. First, you should be using logistic regression. If you're not very familiar with that, it may help you to read the answer I wrote to this question: difference between logit and probit models. Although it was written in a different context, there's a lot of information about logistic regression, and it can help you get a sense of what it's about. Logistic regression, in its basic form, is for independent data, and your data is not independent. To deal with this, you want to either fit a logistic regression model using the generalized estimating equations, or fit a GLiMM. The choice of which to use is based on the nature of the question you want to ask. I discuss these issues in this question: difference between generalized linear models generalized linear mixed models in SPSS. For more thorough explanations of these topics, you may want to read Agresti's Introduction to Categorical Data Analysis. One thing I think you won't need to worry about is the size of the clusters though, these will work fine with your situation. HTH.
To aggregate and lose resolution OR not to aggregate and suffer with correlated binary data? You're right that averaging over the responses and performing a repeated measures ANOVA is not the ideal thing to do. Your intuitions are good; there should be a difference between a global accuracy
49,538
How to know the stochastic gradient descent is converging when the objective function is expensive to compute
Yes, if your cost function is convex, stochastic gradient descent (SGD) should converge. If computing the cost value takes too much time, then you can estimate it by computing the cost over a randomly sampled subset of your dataset. You can naturally do this in the mini-batch setting.
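A numpy sketch of that idea, with a placeholder squared-error objective and simulated data: the loss is estimated on a random subsample instead of all observations.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 10))
y = X @ rng.normal(size=10) + rng.normal(scale=0.1, size=100_000)

def estimated_loss(w, sample_size=5_000):
    idx = rng.choice(len(y), size=sample_size, replace=False)   # random subsample
    resid = y[idx] - X[idx] @ w
    return np.mean(resid ** 2)          # unbiased estimate of the full mean squared loss

w = np.zeros(10)
print(estimated_loss(w))                # call this every so often to monitor convergence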
49,539
How to know the stochastic gradient descent is converging when the objective function is expensive to compute
One approach is to use the "progressive validation error" for SGD diagnostics, as per "Beating the Hold-Out: Bounds for K-fold and Progressive Cross-Validation". Basically, every new case is first plugged into your loss function, and only then is passed to SGD; the resulting loss values are averaged across "recent" cases (typically, the loss is printed out at every 2^Nth case, where N = 1, 2, 3, ..., using all data points seen since the last print-out). This gives a decent estimate of the test error and avoids the computational problems you've mentioned (since you don't have to calculate the loss function on all the data for every print-out).
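A rough sketch of the bookkeeping in R for hypothetical data X, y; loss() and sgd_step() are placeholders for your per-example loss and your SGD update:

running_loss <- 0; n_seen <- 0; next_report <- 1
for (i in seq_len(nrow(X))) {
  # score the new case BEFORE it is used to update the model
  running_loss <- running_loss + loss(model, X[i, ], y[i])
  n_seen <- n_seen + 1
  model <- sgd_step(model, X[i, ], y[i])
  if (i == next_report) {                  # report at cases 1, 2, 4, 8, ...
    cat("progressive loss:", running_loss / n_seen, "\n")
    running_loss <- 0; n_seen <- 0
    next_report <- 2 * next_report
  }
}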
49,540
How do you identify the variables that separate several groups?
The first question is whether you already know which frog belongs to which morphotype. If you do know, and your goal is to use these frogs to better analyze how the morphotypes vary on these variables, then you want discriminant analysis. This might enable later investigators to accurately place frogs into morphotypes based on these variables. If you do not know which frog belongs to which morphotype, then cluster analysis may be useful. Both of these methods have a lot of options and subtypes.
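For instance, a sketch of both routes in R, assuming a data frame frogs with a morphotype column and numeric measurement columns (the names and the number of clusters are hypothetical):

library(MASS)
fit_lda <- lda(morphotype ~ ., data = frogs)        # groups known: discriminant analysis

X <- scale(frogs[, sapply(frogs, is.numeric)])      # groups unknown: cluster analysis
fit_km <- kmeans(X, centers = 3)                    # 3 clusters chosen just as an example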
49,541
How do you identify the variables that separate several groups?
I think that you know the group membership, so as @PeterFlom said, discriminant analysis is a good alternative. A similar method would be to estimate a multinomial (logit or probit) model. In this model, you estimate the probability of classifying a frog into a given group $k$ depending on its characteristics $x$: $P[G=k]=\Phi(\sum_j \beta_j^k x_j)$, where $\Phi$ is the probability distribution function you assume. The superscript on the beta parameters shows that each characteristic has a different impact on the probability of classification into different groups. The simplest version of this model is the multinomial logit, and there are several extensions of it. I guess that's an affordable start if you are relatively new to statistics.
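A sketch of such a fit in R with the nnet package (the data frame frogs and the predictor names svl and mass are hypothetical):

library(nnet)
fit <- multinom(morphotype ~ svl + mass, data = frogs)
summary(fit)                   # one coefficient set per non-reference group
predict(fit, type = "probs")   # estimated P[G = k] for each frog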
49,542
Are sample means for quantiles of sorted data unbiased estimators of the true means?
For some distributions there is a positive bias due to measurement errors. If you assume the noise has mean $0$, then if you sample people from the top decile, their average measured income will be the average income of the top decile. However, the top decile of your sample will include some people who have displaced people from the top decile. The difference between the measured incomes of the incorrectly included people and the displaced people is always nonnegative, and the average value of this indicates the bias from this source of error. For some distributions, there is a negative bias due to sampling. I think this is a rare situation which you may be able to ignore based on some assumptions about the income distribution and noise distribution. Here is an artificial distribution which exhibits such a negative bias: Suppose $11\%$ of the population has a job and an income of $1$ unit, while everyone else is unemployed with an income of $0$, and there is no noise. The average income of the top $10\%$ is $1$, but there is a chance that the employment rate in your sample is under $10\%$, so the expected income of the top decile of a sample is less than $1$, so the bias is negative. If you want to get a ballpark estimate for the size of the bias, you can do a Monte Carlo simulation based on a distribution you fit to your sample and model for noise. There might be more accurate techniques, but this should be fast.
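A quick Monte Carlo sketch of the measurement-error effect in R; the lognormal income distribution and the normal noise are illustrative assumptions, not part of the original answer:

set.seed(1)
top_mean <- function(x) mean(sort(x, decreasing = TRUE)[1:(length(x) / 10)])
bias <- replicate(2000, {
  income   <- rlnorm(1000, meanlog = 10, sdlog = 1)   # true incomes
  measured <- income + rnorm(1000, sd = 5000)         # noisy measurements
  top_mean(measured) - top_mean(income)               # top decile selected on measured values
})
mean(bias)   # positive on average, as argued above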
49,543
Posterior distribution for multinomial parameter
"Unfortunately, the data is a bit difficult to deal with, since it consists of mostly 'soft evidence', so the parameter estimation doesn't seem to have an easy analytic solution such as a direct update of Dirichlet counts." — why does that follow? Why not just scale the counts based on the amount of evidence supporting each one? And if you have samples of the posterior distribution, why can't you use the sufficient statistics to turn those samples into a maximum likelihood distribution? Update after your recent edits: The likelihood for $\theta$ given that $o$ is observed true has density \begin{align} T_1(x) \propto 0.9 x + 0.1(1-x), \end{align} and given that $o$ is observed false it has density, say, \begin{align} T_2(x) \propto 0.2 x + 0.8(1-x). \end{align} Then the final density after $\eta_1$ observations of true and $\eta_2$ observations of false is proportional to \begin{align} T_1(x)^{\eta_1}T_2(x)^{\eta_2}, \end{align} which is nevertheless an exponential family, although not a Beta distribution, as you rightly point out.
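Even without a conjugate update, this one-dimensional posterior is easy to handle numerically; a sketch in R on a grid, where the counts eta1 and eta2 are hypothetical and a flat prior is assumed:

eta1 <- 7; eta2 <- 3
x  <- seq(0, 1, length.out = 1001)
T1 <- 0.9 * x + 0.1 * (1 - x)
T2 <- 0.2 * x + 0.8 * (1 - x)
post <- T1^eta1 * T2^eta2          # unnormalised posterior density
post <- post / sum(post * 0.001)   # normalise numerically on the grid
plot(x, post, type = "l")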
49,544
Calculating entropy of a binary matrix
The results you are referring to can be replicated using the following code: https://github.com/cosmoharrigan/matrix-entropy This code generates the visualizations and includes the calculation of the "profile" (a list of the entropies) of the set of scaled filtered matrices. Note that the specific entropy values have been updated in the original answer.
49,545
Wilcoxon test in boot() function
It is not boot that is calling the Wilcoxon test, but verification::roc.area, which can be checked by looking at the on-line help: "P-value produced is related to the Mann-Whitney U statistics. The p-value is calculated using the wilcox.test function which automatically handles ties and makes approximations for large values" (or directly by looking at the source code, e.g. verification:::roc.area). As you are bootstrapping with replacement, about one third of the observations will not appear in a given resample while others appear more than once, so you naturally introduce ties and the ranks are no longer unique, which is what wilcox.test complains about. The function will still return an approximate $p$-value (using the asymptotic normal distribution). As a sidenote, you may want to take a look at the rms package, which features everything you need to estimate, calibrate and validate GLMs (with bootstrap techniques, among others).
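A small illustration of where the ties come from, and of requesting the normal approximation explicitly (a sketch with made-up predictions, not taken from the packages mentioned above):

set.seed(42)
pred <- runif(200)                    # hypothetical predicted probabilities
b <- sample(pred, replace = TRUE)     # bootstrap resample
sum(duplicated(b))                    # > 0: repeated values, hence tied ranks
wilcox.test(b[1:100], b[101:200], exact = FALSE)   # force the asymptotic p-value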
49,546
Are randomForest variable importance values comparable across same variables on different dates?
Ad 1. %IncMSE is an actual result of an out-of-bag test, so in theory it is better than IncNodePurity, which is a training by-product. Ad 3. & 4. To be honest, those values have little meaning on their own -- they depend on how good the RF is on the current data, and this is terribly variable. If you want to compare anything, compare rankings calculated on that data. Ad 2. For the same reason, it is rather bogus to push the meaning of either measure further than being just an importance score.
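A sketch of the ranking comparison in R, where rf_jan and rf_feb stand for hypothetical forests fitted on the two dates with importance = TRUE:

library(randomForest)
imp_jan <- importance(rf_jan)[, "%IncMSE"]
imp_feb <- importance(rf_feb)[, "%IncMSE"]
rank(-imp_jan)    # 1 = most important variable on the first date
rank(-imp_feb)
cor(rank(-imp_jan), rank(-imp_feb), method = "spearman")   # how stable the ranking is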
49,547
Mixture distributions moments if one distribution has undefined/infinite moments
Yes, you are correct. If $X_1 \sim f_1$, $X_2 \sim f_2$, and $X_3 \sim f_3$, then your equation shows that $$E(X_3) = p \cdot E(X_1) + (1-p) \cdot E(X_2).$$ Therefore, if either one of $E(X_1)$ or $E(X_2)$ is non-finite/non-existent, then $E(X_3)$ will be non-finite/non-existent as well for $p \in (0,1)$ - this is true even if $p$ is very near $0$ or $1$. To get some intuition for this, note that the mixture distribution can be thought of as drawing from $f_1$ with probability $p$ and from $f_2$ with probability $1-p$. Bearing that in mind, take an example where $f_1$ is the density of the reciprocal of a standard normal (a distribution with no finite mean), $f_2$ is the standard normal density and $p$ is some very small value (say $.01$). Consider sampling variables that have density $f_3$ - of the $1\%$ that are drawn from $f_1$, there will be some extreme values characteristic of a distribution with a non-finite mean. You're also correct that this same logic would apply to higher moments - replace $x$ with $x^k$ in your integrals and you can make an exactly analogous argument. Where it gets more complicated is when, for example, both integrals are non-finite, but this seems beyond the scope of the question :)
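A quick simulation sketch of that intuition in R, using the $1\%$ mixing weight and the reciprocal-normal component from the example above:

set.seed(1)
n <- 1e5
from_f1 <- runif(n) < 0.01
z <- ifelse(from_f1, 1 / rnorm(n), rnorm(n))    # draw from the mixture
plot(cumsum(z) / seq_len(n), type = "l")        # the running mean never settles down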
49,548
Normalize sample data for clustering
Clustering in general requires a similarity metric to compute a partitioning of your data. Do you know how to compute the similarity of $\vec{a}$ to $\vec{b}$? Whether you need normalization or not will mainly depend on this question. If you don't have such a metric/measure and you want to go with the regular Euclidean distance, normalizing your data -- bringing each variable to zero mean and unit variance -- would be recommended, because if you don't, the scores with the largest range will dominate the distance computation.
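A minimal sketch in R, assuming the scores are the columns of a numeric matrix X:

Xs <- scale(X)      # zero mean, unit variance per column
d  <- dist(Xs)      # Euclidean distances on the standardised scores
hc <- hclust(d)     # e.g. hierarchical clustering on those distances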
49,549
Normalize sample data for clustering
To perform z-score normalisation on x, you don't have to test whether x is normally distributed or not. Whatever the distribution, z will have mean zero and standard deviation one. The type of distribution matters when you apply a test to the data that is based on that particular distribution. The convenience of the normal distribution in this sense is that if x is normally distributed with mean m and standard deviation s, then z = (x - m)/s will also be normally distributed, with mean zero and standard deviation 1. ==== Some people normalise for clustering using the minimum and the range of the data set: z = (x - min_x) / (max_x - min_x), making the data fall into [0, 1].
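A sketch of both normalisations in R, applied column-wise to a hypothetical numeric matrix X:

z_scores <- scale(X)                                                    # (x - mean) / sd per column
min_max  <- apply(X, 2, function(x) (x - min(x)) / (max(x) - min(x)))   # maps each column to [0, 1]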
49,550
How to conduct a factor analysis on questionnaire data based on 30 items in three blocks?
I think that what you need is Multiple Correspondence Analysis (MCA). You can look up the basics on Wikipedia. MCA is part of the R core, in the package MASS. So I suggest you start with

library(MASS)
?mca

which brings up the help page for the function mca. The output of MCA is a set of ordered factors capturing the relationships between your variables. The absolute loadings of the first factor (i.e. the coordinates of the projections of the variables on the first factor) say which groups of variables most heavily influence the variation. As pointed out in the comments, MCA works with categorical and not numeric variables. This means that the ordered relationship (1 < 2 < 3 < ... < 10) will not be taken into account. How appropriate MCA is depends on the nature of your questions. For instance, if those are binned numeric variables (e.g. monthly income / 1000, rounded up) then I'd say MCA is inappropriate. If they are subjective evaluations (e.g. "how much does it hurt?") then I would give it a try, because the categories can sometimes be good at showing non-linear relationships between variables. To do it in practice, in R, you would present your data in a data.frame of 30 columns and as many rows as you have individuals. Each column would be the answers to a question in the form of a factor, meaning that a 9, say, would be interpreted as the category 9, not the score 9. Say that you have collected this in a data.frame that you call answers; you could then do the following:

mca_results <- mca(answers)
plot(mca_results)

You would have a biplot representing the individuals, the answers to the questions (as categories) and the relationship between all this expressed by spatial proximity. Two individuals close to each other in that space gave similar answers, and two answers close to each other in that space were chosen by the same individuals. Similarly, an individual close to a group of answers has chosen many of them, and an answer close to a group of individuals was chosen by many of them. MCA does not natively accommodate a block design. A workaround that might be informative is to sum the squares of the loadings of the answers of each block on the first factor. The square of the loading is called the inertia and says how much of the variation of a variable (an answer to a question) is represented in the simpler model consisting of only that factor. In R you could do it this way:

inertia_on_1 <- mca_results$cs[, 1]^2
tapply(X = inertia_on_1, INDEX = rep(1:3, each = 100), sum)

The part that says INDEX = rep(1:3, each = 100) builds an index corresponding to the variables of each block (labelled 1, 2, and 3) -- and there are 100 of each (because you have 10 categories for each of 10 questions).
49,551
How to conduct a factor analysis on questionnaire data based on 30 items in three blocks?
One approach would be to calculate first principal component in each of the three blocks, and then perform a regression with your final score as the response variable and those three block scores as the explanatory variables. This seems to me one way to answer your question of how much each block contributes to variance in the final score. You could do the same basic approach at the individual question level too, by skipping the principal components part. Some very skimpy R code to do the Block approach is pasted below. This skips all sorts of things like diagnostic checks, plotting the data, looking to see how much variance within each block the first principal component explains, etc. Also be warned - even with completely random data, one of the blocks will be more strongly linked to the final score - you need to think carefully about how to interpret that!

# simulate random data
x <- matrix(sample(1:10, 3000, replace=TRUE), nrow=100)
# create score as you say it is calculated
score <- apply(x, 1, mean)
# estimate a score for the principal component of each block
BlockA <- predict(princomp(x[, 1:10]))[, 1]
BlockB <- predict(princomp(x[, 11:20]))[, 1]
BlockC <- predict(princomp(x[, 21:30]))[, 1]
# fit a model
summary(lm(score ~ BlockA + BlockB + BlockC))
49,552
Odds of X occurrences in a row given Y trials (A coin flip problem)
I believe this is only a partial solution for the case when $N<2X$. Define $A_i$ to be the set of all sequences of $Y$ events with $N$ successes such that at least $X$ successes occur consecutively beginning at position $i$ in the sequence, and no string of $X$ successes is to begin before position $i$. For example, if $Y=4$, $N=3$, $X=2$, and successes are denoted by a $1$ while failures are denoted by a $0$, then $A_1=\left\{1101,1110\right\}$, $A_2=\left\{0111\right\}$, and $A_3=\left\{1011\right\}$. The index $i$ runs from $1$ to $Y-X+1$ because we require at least $X$ spots to contain the string of $X$ successes. Now let's count the size of each of the $A_i$, $i=1,\ldots,Y-X+1$, for generic $Y$, $N$, and $X$ with the constraint that $N<2X$. $|A_1|=\binom{Y-X}{N-X}$ because the string of $X$ successes begins in the first position and runs until position $X$, and beyond that we don't care about the ordering of the remaining successes and the failures. Now for all $i\in\left\{2,\ldots,Y-X+1\right\}$, $|A_i|=\binom{Y-X-1}{N-X}$. We can think about these sequences in the following way: begin the $X$ successes at position $i$ and force a failure in position $i-1$ so the string of $X$ successes cannot start before position $i$. Now we don't care about the ordering of the remaining successes and failures in all other positions, so we can place them by simply choosing where in the remaining $Y-X-1$ spots the remaining $N-X$ successes will go. There are $Y-X+1-1=Y-X$ of these $A_i$, $i\in\left\{2,\ldots,Y-X+1\right\}$. This gives a total of $$\sum_{i=1}^{Y-X+1}|A_i|=\binom{Y-X}{N-X}+(Y-X)\binom{Y-X-1}{N-X}$$ sequences with a sequence of at least $X$ successes. The total number of sequences of length $Y$ with $N$ successes is $\binom{Y}{N}$. Thus, the probability of a sequence of length $Y$ with $N<2X$ successes containing at least $X$ consecutive successes is $$\frac{\binom{Y-X}{N-X}+(Y-X)\binom{Y-X-1}{N-X}}{\binom{Y}{N}}$$ In R, a simple function to calculate these probabilities would be the following.

f <- function(params) {
  # params is a numeric vector of length 3 with Y, the length of the
  # sequence, in the first position, N, the number of successes in the
  # sequence, in the second position, and X, the minimum number of
  # consecutive successes, in the third position.
  Y <- params[1]
  N <- params[2]
  X <- params[3]
  num <- choose(Y-X, N-X) + (Y-X) * choose(Y-X-1, N-X)
  den <- choose(Y, N)
  return(num/den)
}
49,553
Odds of X occurrences in a row given Y trials (A coin flip problem)
This is a difficult problem. Let's start with the N condition. As is often the case, a possible way to simplify the problem is to instead calculate the chance of never having X occurrences in a row given Y trials. Note that for $Y < X$ you will never have X occurrences, much less in a row, so the probability here is 1. Let us denote the probability that you do NOT have X occurrences in a row given Y trials as $P(X|Y)$. Since we assume a coin, let's call the two outcomes H and T; T are our successes. Let us write a repetition of n heads or tails as H[n] and T[n], respectively. Let us furthermore denote the chance of T as p. Look at the cases which do not contain a T[X]. They have the following possible starts (first few events): $$ H \\ T[1]H \\ T[2]H \\ \vdots \\ T[X-1]H $$ Note that they partition the space of possible outcomes: it's not possible for a series to start with two of those options, they are mutually exclusive. So we can write $$ P(X|Y) = \sum_{i=0}^{X-1} P(\text{series starts with $T[i]H$ and no $T[X]$ in rest of series}), $$ and because of independence of the individual events we get $$ P(X|Y) = \sum_{i=0}^{X-1} P(\text{series starts with $T[i]H$})P(\text{no $T[X]$ in rest of series}). $$ Note that the rest of the series depends on i; it has length $Y-i-1$. So finally we get $$ P(X|Y) = \sum_{i=0}^{X-1} p^{i}(1-p)P(X|Y-1-i). $$ This is a well-defined recurrence given $P(X|Y) = 1$ for $Y < X$. While there is (as far as I know) no general closed form, it is easy enough to calculate. Remember that the chance you seek is actually $1-P(X|Y)$. Now if we could still condition on the total number of successes... This could probably be carried along with the Y. I will try to complete that part later. The number of remaining successes in the recurrence part of the formula will decrease by i, but the inversion part will be tricky... Okay, let's give this a try. We now look at $P(X,N|Y)$. It stands for the probability of having exactly N successes and no chain of length X or more in a sequence of length Y. We still get $$ P(X,N|Y) = \sum_{i=0}^{X-1} p^{i}(1-p)P(X,N-i|Y-1-i). $$ Do we have enough boundary conditions on $P(X,N|Y)$ to make this work? We do know that $P(X,N|Y)$ is $\binom{Y}{N}p^N(1-p)^{Y-N}$ for $Y < X$ and $N \leq Y$. It's also 0 if $Y=N$ AND $Y \geq X$. Is that enough? Let's look at an easy example $X=2$, $Y=4$, $N=3$, $p=0.5$. We get $$ P(2,3|4) = (1-p)P(2,3|3)+p(1-p)P(2,2|2), $$ so we get $$ (1-p)\cdot 0+p(1-p)\cdot 0=0. $$ Works in this case. Let's try $N=4$, $Y=5$, $X=3$, $p=0.5$: $$ P(3,4|5) = (1-p)P(3,4|4)+p(1-p)P(3,3|3)+p^2(1-p)P(3,2|2). $$ The first two terms are zero (see above), so what remains is $2^{-5}\binom{2}{2}$. You get your probability by P(exactly N successes) = P(exactly N successes and no chain of length X) + P(exactly N successes and a chain of length X). The left-hand side is simply given by the binomial distribution ... so for the right-most probability: $$ \frac{5}{2^5} = \frac{1}{2^5}+P(\text{chain of length X exists},N|Y), $$ so $$ P(\text{chain of length X exists},N|Y)=\frac{4}{2^5}. $$ Now just divide by the probability of exactly N successes to get the conditional probability of $\frac{4}{5}$, which is the correct answer. I think the recurrence and the formula are well defined, but I am not 100% certain at this point. Below is an R version, which after some bug fixes seems to agree with Max, but might be more general, if slow. chance2 gives the final result. I have also tested the results of the function and compared it to simulation. It seems to provide the correct answer. Caching the values in a two-dimensional array for L, N could make the program relatively fast.

chance <- function(x, L, N) {
  print(c(x, L, N))   # trace the recursion
  if (L < 0) return(0)
  if (N < 0) return(0)
  if (L < N) return(0)
  if (L == 0) {
    if (N != 0) return(0)
    return(1)
  }
  if (L < x) {
    return(0.5^(L) * choose(L, N))
  }
  result <- 0
  for (i in 0:(x-1)) {
    result <- result + 0.5^(i+1) * chance(x, L-i-1, N-i)
  }
  return(result)
}

chance2 <- function(x, L, N) {
  result1 <- chance(x, L, N)            # P(no run of length x AND N successes)
  left.hand <- choose(L, N) * (0.5)^L   # P(exactly N successes)
  result2 <- (left.hand - result1) / left.hand
  return(result2)
}
49,554
Odds of X occurrences in a row given Y trials (A coin flip problem)
This solution works for all values of $n$. You can define a recursive formula for the probability of $x$ consecutive successes, $y$ trials, and $n$ successes: \begin{align} f(x,y,n) &= g(x, x, y, n) \end{align} where \begin{align} g(x,x',y,n) &= \begin{cases} 1 & \text{if }x = 0 \\ \frac{n}{y}g(x-1,x',y-1,n-1)+\frac{y-n}yf(x',y-1,n) & \text{if }0 < x \le n \le y \\ 0 & \text{otherwise.} \end{cases} \end{align} The $x'$ parameter accepted by $g$ is the number of consecutive successes required. The $x$ parameter is the length of a block of consecutive successes required if the block starts at the first position (of the remaining trials). So, if $x$ is zero, we have already achieved our goal and the probability is one. If it's not true that $0 < x \le n \le y$, we cannot achieve our goal, so we return zero. In the final case, with probability $n/y$ we observe a success, so we need one fewer success in a row and $x$ decreases; however, $x'$ does not decrease, because on a failure we would still need $x'$ to achieve our goal. The case of a failure has probability $\frac{y-n}y$ and sets us all the way back to $x=x'$. In either case $y$ is decremented, but $n$ only decreases in the case of a success.

#!/usr/bin/env python
from fractions import Fraction
from functools import lru_cache   # standard-library memoisation

@lru_cache(maxsize=None)
def g(x, x_prime, y, n):
    if x == 0:
        return Fraction(1)
    elif 0 < x <= n <= y:
        return (Fraction(n, y) * g(x - 1, x_prime, y - 1, n - 1) +
                Fraction(y - n, y) * f(x_prime, y - 1, n))
    else:
        return Fraction(0)

def f(x, y, n):
    return g(x, x, y, n)

print(f(30, 100, 97))

This prints 104/105.
49,555
Odds of X occurrences in a row given Y trials (A coin flip problem)
If p is the probability of success, the probability of X successes in a row is p^X. For your problem these X successes can occur in many different slots in the sequence, so you have to multiply by the number of ways you can pick X consecutive slots out of the total of Y available slots, with the additional requirement that the remaining N-X successes occur in the remaining Y-X slots.
49,556
PDFs and probability in naive Bayes classification
You're right that the statement is wrong. It should be a likelihood: $$L(c \mid x=v)=\frac{1}{\sqrt{2\pi\sigma_c^2}}e^{-\frac{(v-\mu_c)^2}{2\sigma_c^2}}$$ A likelihood applies here because we are interested in the relative likelihood that a point belongs to each class: $$P(c=c' \mid x=v) = \frac{L(c=c' \mid x=v)}{\sum_{c_i} L(c=c_i \mid x=v)}.$$
49,557
PDFs and probability in naive Bayes classification
If you're interested in a lengthy and rigorous explanation, check this out. To summarize, it all comes down to integral approximations. To get the probability of a specific variable value from the variable's continuous probability density function (PDF), you integrate the PDF around the value in question over an interval of width epsilon, and take the limit of that integral as epsilon approaches 0. For small epsilon, this integral is approximately the product of epsilon and the height of the PDF at the variable value in question. Ordinarily, the limit of this expression would be 0 as epsilon approaches 0. However, as Neil mentioned in his answer, in the case of naive Bayes we are interested in the ratio of conditional probabilities. Because both the numerator and denominator of our ratio include a factor of epsilon, these factors of epsilon cancel out. As a result, the limit of the ratio of conditional probabilities is equal to the ratio of the PDF heights at the variable value in question.
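A quick numerical check of that cancellation in R, using two hypothetical class-conditional normal densities and a small interval around a value v:

v <- 1.3; eps <- 1e-4
p1 <- integrate(dnorm, v - eps, v + eps, mean = 0, sd = 1)$value   # class 1
p2 <- integrate(dnorm, v - eps, v + eps, mean = 2, sd = 1)$value   # class 2
p1 / p2                                # ratio of small-interval probabilities
dnorm(v, 0, 1) / dnorm(v, 2, 1)        # ratio of density heights: essentially the same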
PDFs and probability in naive Bayes classification
If you're interested in a lengthy and rigorous explanation check this out. To summarize, it all comes down to integral approximations. To get the probability of a specific variable value from the vari
PDFs and probability in naive Bayes classification If you're interested in a lengthy and rigorous explanation, check this out. To summarize, it all comes down to integral approximations. To get the probability of a specific variable value from the variable's continuous probability density function (PDF), you integrate the PDF around the value in question over an interval of width epsilon, and take the limit of that integral as epsilon approaches 0. For small epsilon, this integral is approximately equal to the product of epsilon and the height of the PDF at the variable value in question. Ordinarily, the limit of this expression would go to 0 as epsilon approached 0. However, as Neil mentioned in his answer, in the case of Naive Bayes we are interested in the ratio of conditional probabilities. Because both the numerator and denominator of our ratio will include a factor of epsilon, these factors of epsilon cancel out. As a result, the limit of the ratio of conditional probabilities will be equal to the ratio of the PDF heights at the variable value in question.
PDFs and probability in naive Bayes classification If you're interested in a lengthy and rigorous explanation check this out. To summarize, it all comes down to integral approximations. To get the probability of a specific variable value from the vari
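The epsilon-cancellation argument can be checked numerically. In the R sketch below (same made-up class parameters as in the previous sketch) the ratio of interval probabilities converges to the ratio of density heights as the interval shrinks.
v <- 1.7
p_int <- function(mu, s, eps) pnorm(v + eps, mu, s) - pnorm(v - eps, mu, s)
for (eps in c(0.1, 0.01, 0.001)) {
  cat(eps, p_int(1.5, 0.3, eps) / p_int(2.2, 0.4, eps), "\n")  # ratio over a shrinking interval
}
dnorm(v, 1.5, 0.3) / dnorm(v, 2.2, 0.4)   # limiting ratio of PDF heights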
49,558
Markov chain convergence, total variation and KL divergence
It is important to state the theorem correctly with all conditions. Theorem 4 in the paper by Roberts and Rosenthal states that the $n$-step transition probabilities $P^n(x, \cdot)$ converge in total variation to a probability measure $\pi$ for $\pi$-almost all $x$ if the chain is $\phi$-irreducible, aperiodic and has $\pi$ as invariant initial distribution, that is, if $$\pi(A) = \int P(x, A) \pi(\mathrm{d}x).$$ There is also a technical condition that the $\sigma$-algebra on the state space should be countably generated. We return to this below. It is quite important for the general application of the theorem that one knows upfront that there is an invariant $\pi$ -- otherwise the chain can be null recurrent. In the MCMC context on $\mathbb{R}^d$ of the cited paper the chains are constructed with a given target distribution as invariant distribution so in this context it is only the $\phi$-irreducibility and aperiodicity that we need to check. The authoritative reference on these matters is Meyn and Tweedie's book Markov Chains and Stochastic Stability, which is also cited heavily in the paper. However, as far as I can tell, there are minor differences in the results presented in the paper and the book, and the paper does have a proof of Theorem 4. Returning to the question, the $\phi$-measure used to define $\phi$-irreducibility is by assumption non-zero, so the trivial measure is ruled out (this is actually missing in the Meyn and Tweedie book, but stated correctly in the paper on page 31. The Meyn and Tweedie book also lacks the assumption of $\sigma$-finiteness that Roberts and Rosenthal make. I cannot see that it is possible to give this up either.) To return to the assumption on a countably generated $\sigma$-algebra on a general state space, this assumption ensures that $\phi$-irreducible chains have small sets; see Theorem 19 in the paper. If you can prove the existence of a small set by other means the assumption on the $\sigma$-algebra can be dropped. Regarding the second question, I am afraid I can't be of much assistance. Why is this of interest? I have not encountered problems where KL-convergence was needed specifically.
Markov chain convergence, total variation and KL divergence
It is important to state the theorem correctly with all conditions. Theorem 4 in the paper by Roberts and Rosenthal states that the $n$-step transition probabilities $P^n(x, \cdot)$ converge in total
Markov chain convergence, total variation and KL divergence It is important to state the theorem correctly with all conditions. Theorem 4 in the paper by Roberts and Rosenthal states that the $n$-step transition probabilities $P^n(x, \cdot)$ converge in total variation to a probability measure $\pi$ for $\pi$-almost all $x$ if the chain is $\phi$-irreducible, aperiodic and has $\pi$ as invariant initial distribution, that is, if $$\pi(A) = \int P(x, A) \pi(\mathrm{d}x).$$ There is also a technical condition that the $\sigma$-algebra on the state space should be countably generated. We return to this below. It is quite important for the general application of the theorem that one knows upfront that there is an invariant $\pi$ -- otherwise the chain can be null recurrent. In the MCMC context on $\mathbb{R}^d$ of the cited paper the chains are constructed with a given target distribution as invariant distribution so in this context it is only the $\phi$-irreducibility and aperiodicity that we need to check. The authoritative reference on these matters is Meyn and Tweedie's book Markov Chains and Stochastic Stability, which is also cited heavily in the paper. However, as far as I can tell, there are minor differences in the results presented in the paper and the book, and the paper does have a proof of Theorem 4. Returning to the question, the $\phi$-measure used to define $\phi$-irreducibility is by assumption non-zero, so the trivial measure is ruled out (this is actually missing in the Meyn and Tweedie book, but stated correctly in the paper on page 31. The Meyn and Tweedie book also lacks the assumption of $\sigma$-finiteness that Roberts and Rosenthal make. I cannot see that it is possible to give this up either.) To return to the assumption on a countably generated $\sigma$-algebra on a general state space, this assumption ensures that $\phi$-irreducible chains have small sets; see Theorem 19 in the paper. If you can prove the existence of a small set by other means the assumption on the $\sigma$-algebra can be dropped. Regarding the second question, I am afraid I can't be of much assistance. Why is this of interest? I have not encountered problems where KL-convergence was needed specifically.
Markov chain convergence, total variation and KL divergence It is important to state the theorem correctly with all conditions. Theorem 4 in the paper by Roberts and Rosenthal states that the $n$-step transition probabilities $P^n(x, \cdot)$ converge in total
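For intuition about convergence in total variation, a finite-state sketch is often enough. The R code below uses a made-up 3-state transition matrix (not from the paper) and shows the total variation distance between $P^n(x,\cdot)$ and the stationary distribution shrinking with $n$.
P <- matrix(c(0.5, 0.3, 0.2,
              0.2, 0.6, 0.2,
              0.3, 0.3, 0.4), nrow = 3, byrow = TRUE)   # illustrative transition matrix
ev <- eigen(t(P))
pi <- Re(ev$vectors[, 1]); pi <- pi / sum(pi)           # stationary distribution
Pn <- diag(3)
for (n in 1:20) {
  Pn <- Pn %*% P
  if (n %in% c(1, 5, 10, 20))
    cat(n, 0.5 * sum(abs(Pn[1, ] - pi)), "\n")          # TV distance from state 1 after n steps
}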
49,559
Interpreting positive and negative signs of the elements of PCA eigenvectors
I think you have it backwards. If the value is positive, then a higher score on that variable is associated with a higher score on the component, if the value is negative, then a higher score implies a lower score on the component. In addition, people sometimes use PCA to determine whether to keep or combine certain variables for a subsequent analysis. This is not, strictly speaking, an appropriate use of PCA. Factor analysis should be used for this purpose, but at any rate, people do it. In such a case, people will look at the absolute value to see if it is above some arbitrary threshold, such as .5, and if so, retain (or combine), and if not, drop. For what it's worth, I don't recommend this. Update: I can't tell if I answered the right question or not. @whuber's second comment, in my opinion, is right on the money, and also consistent with my first paragraph above. However, the question is now different than before, and different from how I understand @whuber's comment, so I am a little confused. Essentially, PCA solves for the eigenvectors and eigenvalues. Neither will be negative whether or not you centered your variables first. The eigenvalues are the lengths of the corresponding eigenvectors. Just as I cannot buy a board -10 feet (i.e., -3 meters) long to build a patio, you cannot have a negative eigenvalue. The eigenvector returned will also be positive. You could negate it by multiplying all the signs by -1, but as @whuber notes, that would be meaningless. Once again as @whuber notes, the relative signs are meaningful, and their relation to the component is as I stated in my first paragraph above. That is, the relative signs (negative vs. positive) will denote the same relationship between higher (/ lower) scores on the variable and the component whether the variables were centered first or not.
Interpreting positive and negative signs of the elements of PCA eigenvectors
I think you have it backwards. If the value is positive, then a higher score on that variable is associated with a higher score on the component, if the value is negative, then a higher score implies
Interpreting positive and negative signs of the elements of PCA eigenvectors I think you have it backwards. If the value is positive, then a higher score on that variable is associated with a higher score on the component, if the value is negative, then a higher score implies a lower score on the component. In addition, people sometimes use PCA to determine whether to keep or combine certain variables for a subsequent analysis. This is not, strictly speaking, an appropriate use of PCA. Factor analysis should be used for this purpose, but at any rate, people do it. In such a case, people will look at the absolute value to see if it is above some arbitrary threshold, such as .5, and if so, retain (or combine), and if not, drop. For what it's worth, I don't recommend this. Update: I can't tell if I answered the right question or not. @whuber's second comment, in my opinion, is right on the money, and also consistent with my first paragraph above. However, the question is now different than before, and different from how I understand @whuber's comment, so I am a little confused. Essentially, PCA solves for the eigenvectors and eigenvalues. Neither will be negative whether or not you centered your variables first. The eigenvalues are the lengths of the corresponding eigenvectors. Just as I cannot buy a board -10 feet (i.e., -3 meters) long to build a patio, you cannot have a negative eigenvalue. The eigenvector returned will also be positive. You could negate it by multiplying all the signs by -1, but as @whuber notes, that would be meaningless. Once again as @whuber notes, the relative signs are meaningful, and their relation to the component is as I stated in my first paragraph above. That is, the relative signs (negative vs. positive) will denote the same relationship between higher (/ lower) scores on the variable and the component whether the variables were centered first or not.
Interpreting positive and negative signs of the elements of PCA eigenvectors I think you have it backwards. If the value is positive, then a higher score on that variable is associated with a higher score on the component, if the value is negative, then a higher score implies
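The point that only the relative signs of the loadings are meaningful is easy to see with prcomp. In the sketch below (random data, nothing from the question), flipping the sign of an entire loading vector together with its scores reproduces exactly the same fit.
set.seed(1)
X  <- matrix(rnorm(200), ncol = 4)
pc <- prcomp(X)
pc$rotation[, 1]                          # loadings: some positive, some negative
S  <- diag(c(-1, 1, 1, 1))                # flip the sign of PC1 only
all.equal(pc$x %*% t(pc$rotation),
          (pc$x %*% S) %*% t(pc$rotation %*% S))   # TRUE: the reconstruction is unchanged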
49,560
Interpreting positive and negative signs of the elements of PCA eigenvectors
centering your variables shouldn't change the PCA results, as PCA first determines a correlation matrix and goes on from there. The correlations between your variables should be the same regardless, so the PCA results should not be affected by any mean centering you perform.
Interpreting positive and negative signs of the elements of PCA eigenvectors
centering your variables shouldn't change the PCA results, as PCA first determines a correlation matrix and goes on from there. The correlations between your variables should be the same regardless, s
Interpreting positive and negative signs of the elements of PCA eigenvectors centering your variables shouldn't change the PCA results, as PCA first determines a correlation matrix and goes on from there. The correlations between your variables should be the same regardless, so the PCA results should not be affected by any mean centering you perform.
Interpreting positive and negative signs of the elements of PCA eigenvectors centering your variables shouldn't change the PCA results, as PCA first determines a correlation matrix and goes on from there. The correlations between your variables should be the same regardless, s
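The premise above -- that the correlation matrix is unaffected by mean-centering -- can be verified directly. A minimal R check on random data:
set.seed(1)
X  <- matrix(rnorm(200, mean = 5), ncol = 4)
Xc <- scale(X, center = TRUE, scale = FALSE)          # mean-centered copy
all.equal(cor(X), cor(Xc), check.attributes = FALSE)  # TRUE: correlations are unchanged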
49,561
Interpreting positive and negative signs of the elements of PCA eigenvectors
When we say correlation that means can be two directional i.e. positive and negative. Interpretation of the principal components is based on finding which variables are most strongly correlated with each component, i.e., which of these numbers are large in magnitude, the farthest from zero in either positive or negative direction.
Interpreting positive and negative signs of the elements of PCA eigenvectors
When we say correlation that means can be two directional i.e. positive and negative. Interpretation of the principal components is based on finding which variables are most strongly correlated with e
Interpreting positive and negative signs of the elements of PCA eigenvectors When we say correlation that means can be two directional i.e. positive and negative. Interpretation of the principal components is based on finding which variables are most strongly correlated with each component, i.e., which of these numbers are large in magnitude, the farthest from zero in either positive or negative direction.
Interpreting positive and negative signs of the elements of PCA eigenvectors When we say correlation that means can be two directional i.e. positive and negative. Interpretation of the principal components is based on finding which variables are most strongly correlated with e
49,562
What sort of problems is backpropagation best suited to solving, and what are the best alternatives to backprop for solving those problems?
In general, feed-forward networks utilizing backpropagation are great for classification tasks in which you have a number of probabilistic cues which need to be integrated. They're obviously used for a great many other things, but this is one example of what drives many people to use them -- it's difficult to capture this sort of learning in a more Hebbian fashion. In any case where you have multiple probabilistic cues which may be of limited, but above-chance, use in classification, they can be integrated in a cognitively plausible manner (as in natural language acquisition by children) -- the integration of multiple cues, any one of which is a weak cue for classification in and of itself, can result in very robust classification performance. For this sort of classification w/ multiple cues, AdaBoost would be an appropriate (and quite well-established) algorithm to use as a baseline. An off-the-shelf support vector machine might rival AdaBoost's performance. If you're looking for baseline classifiers that people are already familiar with, you should definitely use an SVM for at least one of them. There are also approaches that combine AdaBoost with SVMs.
What sort of problems is backpropagation best suited to solving, and what are the best alternatives
In general, feed-forward networks utilizing backpropagation are great for classification tasks in which you have a number of probabilistic cues which need to be integrated. They're obviously used for
What sort of problems is backpropagation best suited to solving, and what are the best alternatives to backprop for solving those problems? In general, feed-forward networks utilizing backpropagation are great for classification tasks in which you have a number of probabilistic cues which need to be integrated. They're obviously used for a great many other things, but this is one example of what drives many people to use them -- it's difficult to capture this sort of learning in a more Hebbian fashion. In any case where you have multiple probabilistic cues which may be of limited, but above-chance, use in classification, they can be integrated in a cognitively plausible manner (as in natural language acquisition by children) -- the integration of multiple cues, any one of which is a weak cue for classification in and of itself, can result in very robust classification performance. For this sort of classification w/ multiple cues, AdaBoost would be an appropriate (and quite well-established) algorithm to use as a baseline. An off-the-shelf support vector machine might rival AdaBoost's performance. If you're looking for baseline classifiers that people are already familiar with, you should definitely use an SVM for at least one of them. There are also approaches that combine AdaBoost with SVMs.
What sort of problems is backpropagation best suited to solving, and what are the best alternatives In general, feed-forward networks utilizing backpropagation are great for classification tasks in which you have a number of probabilistic cues which need to be integrated. They're obviously used for
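If an off-the-shelf SVM baseline is wanted, something along the lines of the R sketch below would do. It assumes the e1071 package is installed and uses the built-in iris data purely as a stand-in for real cue data; the kernel and cost settings are arbitrary starting points.
library(e1071)                             # assumed installed
fit  <- svm(Species ~ ., data = iris, kernel = "radial", cost = 1)
pred <- predict(fit, iris)
mean(pred == iris$Species)                 # resubstitution accuracy; cross-validate in practice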
49,563
What sort of problems is backpropagation best suited to solving, and what are the best alternatives to backprop for solving those problems?
Edit Originally this answer discussed alternate learning algorithms and topologies for Neural Nets. After the edit, the answer is divided into three parts: Uses and problems with Backpropagation; Alternate Neural Network Training Schemes; and Alternatives to Neural Networks (newly added). Part 1 Backpropagation algorithms can help to solve problems where there is a discriminant that can help to separate the positive inputs from the negative inputs, often in networks that have no connections that loop between neurons in their topology (feed forward neural networks as described in your question). Backpropagation is typically slow and it is not guaranteed to converge and may converge to a local maximum (that is, it will converge up to a point where you get results that are better than the results that can be obtained by changing the weights slightly). Part 2 Alternate neural network training algorithms include using a Genetic Algorithm to evolve the weights and simulated annealing. These may eliminate the problems related to slow convergence and problems with connections that loop between neurons. Hebbian learning as used in recurrent neural networks, typically Hopfield networks, is another alternative. Hebbian learning is robust and Hopfield proved that Hebbian learning will converge for Hopfield networks. You might also consider comparing your neural network to a modular neural network, such as, for example, a Committee of Machines made up of several neural networks each with their own training algorithms, or an Associative Neural Network. Such modular neural networks will require more space and time to work with, but ultimately may perform better if their topologies are designed well for the particular problem at hand. Part 3 Popular alternatives to consider include Support Vector Machines (which are widely used in Fraud Analysis and Network Intrusion Detection amongst other areas), and Genetic Algorithms and Simulated Annealing as stand-alone algorithms rather than in conjunction with Neural Nets. A final group of algorithms to consider are Swarm Algorithms such as Ant Colony Optimisation. Some algorithms in this group, such as continuous orthogonal ant colony optimisation, can be very powerful for solving some problems. You might consider using some sort of clustering algorithm. Depending on your problem, K-Medoids or Density Based Clustering might be useful. Depending on your problem, Machine Vision algorithms might also be useful. For example, the algorithm by Milanfar, http://users.soe.ucsc.edu/~milanfar/research/computer-vision.html, based on local regression kernels might also be useful. The exact algorithms to use might depend on the exact problem that you are facing. Some algorithms will work really well in some instances but really badly in others. Personally I do not believe that there is a single algorithm that is perfect for solving everything (if there were we would be using it), so you would need to decide on the algorithm to benchmark against depending on the scenario in which you will be using it.
What sort of problems is backpropagation best suited to solving, and what are the best alternatives
Edit Originally this answer discussed alternate learning algorithms and topologies for Neural Nets. After the edit, the answer is divided into three parts: Uses and problems with Backpropogation; Al
What sort of problems is backpropagation best suited to solving, and what are the best alternatives to backprop for solving those problems? Edit Originally this answer discussed alternate learning algorithms and topologies for Neural Nets. After the edit, the answer is divided into three parts: Uses and problems with Backpropagation; Alternate Neural Network Training Schemes; and Alternatives to Neural Networks (newly added). Part 1 Backpropagation algorithms can help to solve problems where there is a discriminant that can help to separate the positive inputs from the negative inputs, often in networks that have no connections that loop between neurons in their topology (feed forward neural networks as described in your question). Backpropagation is typically slow and it is not guaranteed to converge and may converge to a local maximum (that is, it will converge up to a point where you get results that are better than the results that can be obtained by changing the weights slightly). Part 2 Alternate neural network training algorithms include using a Genetic Algorithm to evolve the weights and simulated annealing. These may eliminate the problems related to slow convergence and problems with connections that loop between neurons. Hebbian learning as used in recurrent neural networks, typically Hopfield networks, is another alternative. Hebbian learning is robust and Hopfield proved that Hebbian learning will converge for Hopfield networks. You might also consider comparing your neural network to a modular neural network, such as, for example, a Committee of Machines made up of several neural networks each with their own training algorithms, or an Associative Neural Network. Such modular neural networks will require more space and time to work with, but ultimately may perform better if their topologies are designed well for the particular problem at hand. Part 3 Popular alternatives to consider include Support Vector Machines (which are widely used in Fraud Analysis and Network Intrusion Detection amongst other areas), and Genetic Algorithms and Simulated Annealing as stand-alone algorithms rather than in conjunction with Neural Nets. A final group of algorithms to consider are Swarm Algorithms such as Ant Colony Optimisation. Some algorithms in this group, such as continuous orthogonal ant colony optimisation, can be very powerful for solving some problems. You might consider using some sort of clustering algorithm. Depending on your problem, K-Medoids or Density Based Clustering might be useful. Depending on your problem, Machine Vision algorithms might also be useful. For example, the algorithm by Milanfar, http://users.soe.ucsc.edu/~milanfar/research/computer-vision.html, based on local regression kernels might also be useful. The exact algorithms to use might depend on the exact problem that you are facing. Some algorithms will work really well in some instances but really badly in others. Personally I do not believe that there is a single algorithm that is perfect for solving everything (if there were we would be using it), so you would need to decide on the algorithm to benchmark against depending on the scenario in which you will be using it.
What sort of problems is backpropagation best suited to solving, and what are the best alternatives Edit Originally this answer discussed alternate learning algorithms and topologies for Neural Nets. After the edit, the answer is divided into three parts: Uses and problems with Backpropogation; Al
49,564
Comparing two discrete distributions (with small cell counts)
There are two technical issues to deal with: (1) measuring the discrepancy between observed and expected and (2) computing the p-value. We can retain the chi-squared measure of discrepancy (thereby finessing issue 1) and compute an exact p-value. The simple way is to simulate sampling from the expected distribution. Here is the distribution of 10,000 samples performed in R: The actual chi-squared statistic for these data is $549/38 \approx 14.447$. Apparently it is far out in the upper tail of this histogram: only $25$ of the $10,000$ results (0.25%) equal or exceed it. Yes, this proportion is almost four times greater than the approximation of $0.0007$ reported by the chi-squared test, but it's still tiny. We conclude that the observed distribution is significantly different from the expected distribution. The "domain knowledge" may indeed correctly suggest the amount of difference is not material. That, however, is independent of the finding that the observed frequencies are unlikely to arise randomly from a distribution with the expected frequencies. That is all that statistical significance means.
Comparing two discrete distributions (with small cell counts)
There are two technical issues to deal with: (1) measuring the discrepancy between observed and expected and (2) computing the p-value. We can retain the chi-squared measure of discrepancy (thereby fi
Comparing two discrete distributions (with small cell counts) There are two technical issues to deal with: (1) measuring the discrepancy between observed and expected and (2) computing the p-value. We can retain the chi-squared measure of discrepancy (thereby finessing issue 1) and compute an exact p-value. The simple way is to simulate sampling from the expected distribution. Here is the distribution of 10,000 samples performed in R: The actual chi-squared statistic for these data is $549/38 \approx 14.447$. Apparently it is far out in the upper tail of this histogram: only $25$ of the $10,000$ results (0.25%) equal or exceed it. Yes, this proportion is almost four times greater than the approximation of $0.0007$ reported by the chi-squared test, but it's still tiny. We conclude that the observed distribution is significantly different from the expected distribution. The "domain knowledge" may indeed correctly suggest the amount of difference is not material. That, however, is independent of the finding that the observed frequencies are unlikely to arise randomly from a distribution with the expected frequencies. That is all that statistical significance means.
Comparing two discrete distributions (with small cell counts) There are two technical issues to deal with: (1) measuring the discrepancy between observed and expected and (2) computing the p-value. We can retain the chi-squared measure of discrepancy (thereby fi
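A sketch of the simulation behind the histogram described above. The observed counts and expected proportions here are placeholders, not the actual data from the question; substitute the real table. (chisq.test with simulate.p.value = TRUE gives essentially the same answer.)
obs  <- c(12, 5, 9, 14, 8)                       # placeholder observed counts
p0   <- c(0.2, 0.2, 0.2, 0.2, 0.2)               # placeholder expected proportions
N    <- sum(obs)
stat <- function(o) sum((o - N * p0)^2 / (N * p0))
sim  <- replicate(1e4, stat(rmultinom(1, N, p0)))
mean(sim >= stat(obs))                           # simulated p-value for the chi-squared statistic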
49,565
Training models on data that may be incorrectly classified?
Yes, there is a bias. For example, assume your classifier agrees with the expert 80% of the time. Now, there are several options; here are the two extremes: your model is better because the 20% where it does not agree is where the experts are wrong -> your performance is underestimated, OR the 20% where you disagree are all cases where the experts are right -> your performance is overestimated. You can find more info by searching for "imperfect gold standard". There are some nice Bayesian methods available, but I am not familiar enough with them to recommend any. It might also be more of a "multiple reader" problem, especially if your experts disagree with each other. And, yes, your model will suffer if you train it with partly wrong class labels. It will try to emulate the flawed experts. I don't know whether any particular method is particularly resistant, but I think a classifier that outputs a class probability could perform somewhat better, because you can correct somewhat for an expert bias toward one class by adjusting the cutoff. But that's just my intuition talking.
Training models on data that may be incorrectly classified?
Yes, there is a bias. For example, assume your classificator agrees with the expert 80% of the time. Now, there are several options, here are the two extremes: your model is better because the 20% whe
Training models on data that may be incorrectly classified? Yes, there is a bias. For example, assume your classifier agrees with the expert 80% of the time. Now, there are several options; here are the two extremes: your model is better because the 20% where it does not agree is where the experts are wrong -> your performance is underestimated, OR the 20% where you disagree are all cases where the experts are right -> your performance is overestimated. You can find more info by searching for "imperfect gold standard". There are some nice Bayesian methods available, but I am not familiar enough with them to recommend any. It might also be more of a "multiple reader" problem, especially if your experts disagree with each other. And, yes, your model will suffer if you train it with partly wrong class labels. It will try to emulate the flawed experts. I don't know whether any particular method is particularly resistant, but I think a classifier that outputs a class probability could perform somewhat better, because you can correct somewhat for an expert bias toward one class by adjusting the cutoff. But that's just my intuition talking.
Training models on data that may be incorrectly classified? Yes, there is a bias. For example, assume your classificator agrees with the expert 80% of the time. Now, there are several options, here are the two extremes: your model is better because the 20% whe
49,566
Training models on data that may be incorrectly classified?
We have done some work on this for the case of random label flipping noise. Papers: J. Bootkrajang and A. Kaban. Label-noise Robust Logistic Regression and its Applications. Proc. ECML-PKDD(1) 2012, pp. 143-158. J. Bootkrajang and A. Kaban. Classification of Mislabelled Microarrays using Robust Sparse Logistic Regression. Bioinformatics. 29(7): 870-877, 2013.
Training models on data that may be incorrectly classified?
We have done some work on this for the case of random label flipping noise. Papers: J. Bootkrajang and A. Kaban. Label-noise Robust Logistic Regression and its Applications. Proc. ECML-PKDD(1) 2012, p
Training models on data that may be incorrectly classified? We have done some work on this for the case of random label flipping noise. Papers: J. Bootkrajang and A. Kaban. Label-noise Robust Logistic Regression and its Applications. Proc. ECML-PKDD(1) 2012, pp. 143-158. J. Bootkrajang and A. Kaban. Classification of Mislabelled Microarrays using Robust Sparse Logistic Regression. Bioinformatics. 29(7): 870-877, 2013.
Training models on data that may be incorrectly classified? We have done some work on this for the case of random label flipping noise. Papers: J. Bootkrajang and A. Kaban. Label-noise Robust Logistic Regression and its Applications. Proc. ECML-PKDD(1) 2012, p
49,567
Analyzing Logistic Regression when not using a dichotomous dependent variable
In your case the response variable actually is binary, it has just been summarised into a ratio. Each individual either gets out of the building (1) or doesn't (0). So logistic regression is quite appropriate, you just need to put your data into an appropriate form (which will depend on your software). In R you do this by making the proportion the response and specifying the population sizes (ie number of trials) as weights. It sounds like you also have some questions about hypothesis testing and model selection but they might be best put into a separate question, perhaps after you are happy with the logistic regression issue.
Analyzing Logistic Regression when not using a dichotomous dependent variable
In your case the response variable actually is binary, it has just been summarised into a ratio. Each individual either gets out of the building (1) or doesn't (0). So logistic regression is quite a
Analyzing Logistic Regression when not using a dichotomous dependent variable In your case the response variable actually is binary, it has just been summarised into a ratio. Each individual either gets out of the building (1) or doesn't (0). So logistic regression is quite appropriate, you just need to put your data into an appropriate form (which will depend on your software). In R you do this by making the proportion the response and specifying the population sizes (ie number of trials) as weights. It sounds like you also have some questions about hypothesis testing and model selection but they might be best put into a separate question, perhaps after you are happy with the logistic regression issue.
Analyzing Logistic Regression when not using a dichotomous dependent variable In your case the response variable actually is binary, it has just been summarised into a ratio. Each individual either gets out of the building (1) or doesn't (0). So logistic regression is quite a
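In R the proportion-plus-weights formulation described above looks roughly like the sketch below. The data frame dat and its column names (prop_escaped, n_people, exits, alarm) are invented for illustration: one row per group, with the proportion who escaped and the group size.
fit <- glm(prop_escaped ~ exits + alarm, family = binomial,
           weights = n_people, data = dat)
summary(fit)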
49,568
Analyzing Logistic Regression when not using a dichotomous dependent variable
I don't mean to complain, but you have two questions that appear to be closely related where neither of them is clear enough / has enough information to get you a really good answer. You may want to see if you can edit them. @PeterEllis has provided a good answer to the question about why p-values can be high. I don't see what more there is to say about that. He has also provided a good answer here, but maybe I can help. @PeterEllis is clearly right that your proportions come from some number of successes and some number of failures. If you know those values, you can use them directly as your response variable. However, if you don't know them, you have a problem. You could venture a guess; how effective this would be would depend on how good your guess is. If you had the same number of cases making up each proportion, you could simply convert the proportions directly with the logistic transformation, i.e. ln( proportion/(1-proportion) ) and run a normal ols regression with the transformed data as your response variable. The only issue is that your confidence intervals / p-values would be inaccurate due to the fact that you are counting each proportion as 1 datum rather than the number of data that make up the proportion. Nonetheless, if the same number of cases made up each proportion, then your parameter estimates would be unbiased. In addition, this approach would get you out of the problem of having predicted values outside of the (0,1) range.
Analyzing Logistic Regression when not using a dichotomous dependent variable
I don't mean to complain, but you have two questions that appear to be closely related where neither of them is clear enough / has enough information to get you a really good answer. You may want to
Analyzing Logistic Regression when not using a dichotomous dependent variable I don't mean to complain, but you have two questions that appear to be closely related where neither of them is clear enough / has enough information to get you a really good answer. You may want to see if you can edit them. @PeterEllis has provided a good answer to the question about why p-values can be high. I don't see what more there is to say about that. He has also provided a good answer here, but maybe I can help. @PeterEllis is clearly right that your proportions come from some number of successes and some number of failures. If you know those values, you can use them directly as your response variable. However, if you don't know them, you have a problem. You could venture a guess; how effective this would be would depend on how good your guess is. If you had the same number of cases making up each proportion, you could simply convert the proportions directly with the logistic transformation, i.e. ln( proportion/(1-proportion) ) and run a normal ols regression with the transformed data as your response variable. The only issue is that your confidence intervals / p-values would be inaccurate due to the fact that you are counting each proportion as 1 datum rather than the number of data that make up the proportion. Nonetheless, if the same number of cases made up each proportion, then your parameter estimates would be unbiased. In addition, this approach would get you out of the problem of having predicted values outside of the (0,1) range.
Analyzing Logistic Regression when not using a dichotomous dependent variable I don't mean to complain, but you have two questions that appear to be closely related where neither of them is clear enough / has enough information to get you a really good answer. You may want to
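For completeness, the logit-transform-then-OLS route described above might look like this in R (same invented data frame and column names as before). The small adjustment keeping proportions away from 0 and 1 is an added assumption, since the raw logit is undefined at those values.
eps     <- 0.5 / max(dat$n_people)                  # keeps proportions of exactly 0 or 1 finite
p_adj   <- pmin(pmax(dat$prop_escaped, eps), 1 - eps)
dat$lgt <- log(p_adj / (1 - p_adj))                 # logistic (logit) transformation
fit2    <- lm(lgt ~ exits + alarm, data = dat)      # ordinary least squares on the logits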
49,569
Decision trees and backward pruning
Oh well, gotta answer my own question. Quoting "Data Mining: Practical Machine Learning Tools and Techniques", ..postpruning does seem to offer some advantages. For example, situations occur in which two attributes individually seem to have nothing to contribute but are powerful predictors when combined—a sort of combination-lock effect in which the correct combination of the two attribute values is very informative whereas the attributes taken individually are not. Most decision tree builders postprune; it is an open question whether prepruning strategies can be developed that perform as well. So basically, while building a whole decision tree (rather than a subset of it as in pre-pruning) we may often come up with powerful "combined predictors" which can only be noticed when the whole tree (rather than its subset) is built. Moreover, this is the recommended approach and forward pruning is rarely used at all.
Decision trees and backward pruning
Oh well, gotta answer my own question. Quotting "Data Mining: Practical Machine Learning Tools and Techniques", ..postpruning does seem to offer some advantages. For example, situations occur in whic
Decision trees and backward pruning Oh well, gotta answer my own question. Quoting "Data Mining: Practical Machine Learning Tools and Techniques", ..postpruning does seem to offer some advantages. For example, situations occur in which two attributes individually seem to have nothing to contribute but are powerful predictors when combined—a sort of combination-lock effect in which the correct combination of the two attribute values is very informative whereas the attributes taken individually are not. Most decision tree builders postprune; it is an open question whether prepruning strategies can be developed that perform as well. So basically, while building a whole decision tree (rather than a subset of it as in pre-pruning) we may often come up with powerful "combined predictors" which can only be noticed when the whole tree (rather than its subset) is built. Moreover, this is the recommended approach and forward pruning is rarely used at all.
Decision trees and backward pruning Oh well, gotta answer my own question. Quotting "Data Mining: Practical Machine Learning Tools and Techniques", ..postpruning does seem to offer some advantages. For example, situations occur in whic
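A typical post-pruning workflow in R with rpart illustrates the grow-then-prune idea; the iris data and the control settings here are just convenient stand-ins.
library(rpart)
full    <- rpart(Species ~ ., data = iris,
                 control = rpart.control(cp = 0, minsplit = 2))    # grow a deliberately large tree
cp_best <- full$cptable[which.min(full$cptable[, "xerror"]), "CP"] # cp minimising cross-validated error
pruned  <- prune(full, cp = cp_best)                               # post-prune back to that size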
49,570
Kernel matrix normalisation
As long as you understand what you're doing you'll be fine :-) You're actually normalizing your data to have unit length in feature space. It is equivalent to using this kernel: $K(x,y)/\sqrt{K(x,x)K(y,y)}$. Your data will now fall on a hypersphere of radius 1 in feature space. When you add kernel matrices you're actually "concatenating" features (not exactly true for all kernels, but it is a way to think about it). However, in the normalized case the new features will fall on a hypersphere of bounded, known radius. Could it hurt? Sure: does the actual value of the feature tell you anything? Consider the case of a (normalized) linear kernel where [10,10] is a sure 1 and [20,20] is a sure -1; then doing normalization would not be a good idea for your data using this kernel. This paper discusses these types of issues.
Kernel matrix normalisation
As long as you understand what you're doing you'll be fine :-) You're actually normalizing your data to have unit length in feature space. It is equivalent to use this kernel: $K(x,y)/\sqrt{K(x,x)K(y,
Kernel matrix normalisation As long as you understand what you're doing you'll be fine :-) You're actually normalizing your data to have unit length in feature space. It is equivalent to using this kernel: $K(x,y)/\sqrt{K(x,x)K(y,y)}$. Your data will now fall on a hypersphere of radius 1 in feature space. When you add kernel matrices you're actually "concatenating" features (not exactly true for all kernels, but it is a way to think about it). However, in the normalized case the new features will fall on a hypersphere of bounded, known radius. Could it hurt? Sure: does the actual value of the feature tell you anything? Consider the case of a (normalized) linear kernel where [10,10] is a sure 1 and [20,20] is a sure -1; then doing normalization would not be a good idea for your data using this kernel. This paper discusses these types of issues.
Kernel matrix normalisation As long as you understand what you're doing you'll be fine :-) You're actually normalizing your data to have unit length in feature space. It is equivalent to use this kernel: $K(x,y)/\sqrt{K(x,x)K(y,
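The normalisation $K(x,y)/\sqrt{K(x,x)K(y,y)}$ is a one-liner on a kernel matrix; a minimal R sketch with a linear kernel on random data:
normalize_kernel <- function(K) K / sqrt(outer(diag(K), diag(K)))  # K_ij / sqrt(K_ii * K_jj)
X  <- matrix(rnorm(30), ncol = 3)
K  <- X %*% t(X)                                                   # linear kernel
Kn <- normalize_kernel(K)
range(diag(Kn))                                                    # all ones: unit length in feature space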
49,571
Probabilistic outputs from SVMs
I don't know whether there are recent approaches to the problem, but I think I know why John Platt solved the problem in this kind of unsatisfactory way. Many machine learning algorithms can be written as regularizer plus loss function. For example, ridge regression would be $\lambda ||w||^2 + \sum_i (y_i - w^\top x_i)^2$. The minimizer of this is equivalent to the MAP of a Gaussian prior on $w$ and a Gaussian likelihood (just take an exp around the whole expression and put a minus in front). The SVM objective function looks similar. The squared norm regularizer stays the same, but the loss function is replaced by the hinge loss. The problem is now that the exp of the hinge loss does not correspond to a proper likelihood. Maybe there are approaches to change it, in order to make it into one, but then the question is whether it would still be called SVM. Edit: One thing one could always do is bagging. One could train $n$ SVMs on different parts of the dataset and then simply count the fraction of positive/negative voting SVMs at testing stage. However, this is not specific to SVMs, of course.
Probabilistic outputs from SVMs
I don't know whether there are recent approaches to the problem, but I think I know why John Platt solved the problem in this kind of unsatisfactory way. Many machine learning algorithms can be writte
Probabilistic outputs from SVMs I don't know whether there are recent approaches to the problem, but I think I know why John Platt solved the problem in this kind of unsatisfactory way. Many machine learning algorithms can be written as regularizer plus loss function. For example, ridge regression would be $\lambda ||w||^2 + \sum_i (y_i - w^\top x_i)^2$. The minimizer of this is equivalent to the MAP of a Gaussian prior on $w$ and a Gaussian likelihood (just take an exp around the whole expression and put a minus in front). The SVM objective function looks similar. The squared norm regularizer stays the same, but the loss function is replaced by the hinge loss. The problem is now that the exp of the hinge loss does not correspond to a proper likelihood. Maybe there are approaches to change it, in order to make it into one, but then the question is whether it would still be called SVM. Edit: One thing one could always do is bagging. One could train $n$ SVMs on different parts of the dataset and then simply count the fraction of positive/negative voting SVMs at testing stage. However, this is not specific to SVMs, of course.
Probabilistic outputs from SVMs I don't know whether there are recent approaches to the problem, but I think I know why John Platt solved the problem in this kind of unsatisfactory way. Many machine learning algorithms can be writte
49,572
How to deal with RAM limitations when working with big datasets in R?
I rely on having a 64-bit operating system and running 64-bit R and even then I still crash. Depending on what you want to do, have a look at this CRAN site. Unfortunately because my large data frame was using mixed methods, biglm wasn't any good for me. I read up on ff and it didn't suit my needs either, because the method it uses to save and retrieve to and from disk space won't work with a number of analysis methods I am using. The bigmemory and associated packages don't appear to be completely compatible with data frames, although matrices appear handled easily enough.
How to deal with RAM limitations when working with big datasets in R?
I rely on having a 64-bit operating system and running 64-bit R and even then I still crash. Depending on what you want to do, have a look at this CRAN site. Unfortunately because my large data frame
How to deal with RAM limitations when working with big datasets in R? I rely on having a 64-bit operating system and running 64-bit R and even then I still crash. Depending on what you want to do, have a look at this CRAN site. Unfortunately because my large data frame was using mixed methods, biglm wasn't any good for me. I read up on ff and it didn't suit my needs either, because the method it uses to save and retrieve to and from disk space won't work with a number of analysis methods I am using. The bigmemory and associated packages don't appear to be completely compatible with data frames, although matrices appear handled easily enough.
How to deal with RAM limitations when working with big datasets in R? I rely on having a 64-bit operating system and running 64-bit R and even then I still crash. Depending on what you want to do, have a look at this CRAN site. Unfortunately because my large data frame
49,573
How to use weights for imbalanced data in R's randomForest?
Ok, so I found part of my answer but not the good part. It turns out the randomForest package can do stratified sampling but only for classification. Here is a link to the package author's explanation. I'm still looking for ideas on how to do stratified sampling for regression rf's.
How to use weights for imbalanced data in R's randomForest?
Ok, so I found part of my answer but not the good part. It turns out the randomForest package can do stratified sampling but only for classification. Here is a link to the package author's explanati
How to use weights for imbalanced data in R's randomForest? Ok, so I found part of my answer but not the good part. It turns out the randomForest package can do stratified sampling but only for classification. Here is a link to the package author's explanation. I'm still looking for ideas on how to do stratified sampling for regression rf's.
How to use weights for imbalanced data in R's randomForest? Ok, so I found part of my answer but not the good part. It turns out the randomForest package can do stratified sampling but only for classification. Here is a link to the package author's explanati
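For the classification case, the strata/sampsize mechanism referred to above looks roughly like the R sketch below. The class sizes are arbitrary and iris is a stand-in; check the randomForest documentation for your version before relying on the exact argument behaviour.
library(randomForest)
fit <- randomForest(Species ~ ., data = iris,
                    strata   = iris$Species,
                    sampsize = c(20, 20, 20))   # draw 20 cases from each class for every tree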
49,574
Probability density function (pdf) of normal sample variance ($S^2$)
Given $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \>,$ and the fact that a chi-squared($\nu$) is a Gamma($\frac{\nu}{2},2$), (under the scale parameterization) then $S^2 = \frac{(n-1)S^2}{\sigma^2}\cdot \frac{\sigma^2}{(n-1)}\sim \text{Gamma}(\frac{(n-1)}{2},\frac{2\sigma^2}{(n-1)})$ If you need a proof, it should suffice to show that the relationship between chi-square and gamma random variables holds and then follow the scaling argument here. This relationship is pretty much verifiable by inspection.
Probability density function (pdf) of normal sample variance ($S^2$)
Given $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \>,$ and the fact that a chi-squared($\nu$) is a Gamma($\frac{\nu}{2},2$), (under the scale parameterization) then $S^2 = \frac{(n-1)S^2}{\sigma^2}\c
Probability density function (pdf) of normal sample variance ($S^2$) Given $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \>,$ and the fact that a chi-squared($\nu$) is a Gamma($\frac{\nu}{2},2$), (under the scale parameterization) then $S^2 = \frac{(n-1)S^2}{\sigma^2}\cdot \frac{\sigma^2}{(n-1)}\sim \text{Gamma}(\frac{(n-1)}{2},\frac{2\sigma^2}{(n-1)})$ If you need a proof, it should suffice to show that the relationship between chi-square and gamma random variables holds and then follow the scaling argument here. This relationship is pretty much verifiable by inspection.
Probability density function (pdf) of normal sample variance ($S^2$) Given $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1} \>,$ and the fact that a chi-squared($\nu$) is a Gamma($\frac{\nu}{2},2$), (under the scale parameterization) then $S^2 = \frac{(n-1)S^2}{\sigma^2}\c
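The Gamma form of the sampling distribution of $S^2$ is easy to confirm by simulation; in the R sketch below n and sigma are arbitrary choices.
set.seed(1)
n <- 10; sigma <- 2
s2 <- replicate(2e4, var(rnorm(n, sd = sigma)))            # simulated sample variances
hist(s2, breaks = 60, freq = FALSE)
curve(dgamma(x, shape = (n - 1) / 2, scale = 2 * sigma^2 / (n - 1)),
      add = TRUE, lwd = 2)                                 # theoretical Gamma density overlaid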
49,575
Probability density function (pdf) of normal sample variance ($S^2$)
The pdf is as follows: \begin{equation} f(x) = \frac{\left(\frac{\nu}{2\, \sigma^{2}}\right)^{\frac{\nu}{2}}}{\Gamma\left(\frac{\nu}{2}\right)}\, x^{\frac{\nu}{2}-1}\, \exp\left\{-x\, \frac{\nu}{2\, \sigma^{2}}\right\} \end{equation} $\nu \equiv \text{degrees of freedom}= N-1$, where $N$ is the sample size. $\sigma \equiv \text{standard deviation of the parent distribution}$.
Probability density function (pdf) of normal sample variance ($S^2$)
The pdf is as follows: \begin{equation} f(x) = \frac{\left(\frac{\nu}{2\, \sigma^{2}}\right)^{\frac{\nu}{2}}}{\Gamma\left(\frac{\nu}{2}\right)}\, x^{\frac{\nu}{2}-1}\, \exp\left\{-x\, \frac{\nu}{2\, \
Probability density function (pdf) of normal sample variance ($S^2$) The pdf is as follows: \begin{equation} f(x) = \frac{\left(\frac{\nu}{2\, \sigma^{2}}\right)^{\frac{\nu}{2}}}{\Gamma\left(\frac{\nu}{2}\right)}\, x^{\frac{\nu}{2}-1}\, \exp\left\{-x\, \frac{\nu}{2\, \sigma^{2}}\right\} \end{equation} $\nu \equiv \text{degrees of freedom}= N-1$, where $N$ is the sample size. $\sigma \equiv \text{standard deviation of the parent distribution}$.
Probability density function (pdf) of normal sample variance ($S^2$) The pdf is as follows: \begin{equation} f(x) = \frac{\left(\frac{\nu}{2\, \sigma^{2}}\right)^{\frac{\nu}{2}}}{\Gamma\left(\frac{\nu}{2}\right)}\, x^{\frac{\nu}{2}-1}\, \exp\left\{-x\, \frac{\nu}{2\, \
49,576
Ways to determine if experience or recent practice time is more significant in ranking?
thanks for updating your question with the scatterplot, it does give us some information we didn't have before. Eyeballing the scatterplot, it looks like 1v1 performance and 3v3 (adjusted) performance aren't related. What this tells us is that there is no simple relationship between 1v1 and 3v3 performance. That sounds like we're stuck, however assuming that the 3v3 mixes the players around a bit, so the players don't have the same team mates for every 3v3, the compositional changes to the 3v3 teams may be masking overall individual performance within teams. The scatterplot could be telling us that, when matched with a lower skilled player, the presence of a higher skilled player on a 3v3 team does not automatically lift team performance (and vice versa). To answer your second question, when you have enough data to change the rankings from 1500, look at those players who score low on the 1v1 ELO axis and have a higher ranking on the 3v3 ELO axis - this tells you the players with poorer individual skills who make large contributions to teams assuming that the teams are matched overall on terms of mix of skills. For example, if one poorer player keeps being matched with the two top players in a team, then the team result is likely due to the top players with the poorer player having probably little effect, and therefore the team result won't be an accurate reflection of how well the poorer player works in a team generally. For your first question, experience and skill will be highly correlated simply because practice tends to increase skill, so both factors are unstable over time. Could you further define how you wish to examine recent playing experience? Do you mean: the number of games played over the last week/fortnight, so each time you look at this, you will use the same week/fortnight measure and ignore earlier games, or whether, as the season progresses, does experience tend to mean that initial skill doesn't matter so much? These are two quite different questions and will require different approaches.
Ways to determine if experience or recent practice time is more significant in ranking?
thanks for updating your question with the scatterplot, it does give us some information we didn't have before. Eyeballing the scatterplot, it looks like 1v1 performance and 3v3 (adjusted) performance
Ways to determine if experience or recent practice time is more significant in ranking? thanks for updating your question with the scatterplot, it does give us some information we didn't have before. Eyeballing the scatterplot, it looks like 1v1 performance and 3v3 (adjusted) performance aren't related. What this tells us is that there is no simple relationship between 1v1 and 3v3 performance. That sounds like we're stuck, however assuming that the 3v3 mixes the players around a bit, so the players don't have the same team mates for every 3v3, the compositional changes to the 3v3 teams may be masking overall individual performance within teams. The scatterplot could be telling us that, when matched with a lower skilled player, the presence of a higher skilled player on a 3v3 team does not automatically lift team performance (and vice versa). To answer your second question, when you have enough data to change the rankings from 1500, look at those players who score low on the 1v1 ELO axis and have a higher ranking on the 3v3 ELO axis - this tells you the players with poorer individual skills who make large contributions to teams assuming that the teams are matched overall on terms of mix of skills. For example, if one poorer player keeps being matched with the two top players in a team, then the team result is likely due to the top players with the poorer player having probably little effect, and therefore the team result won't be an accurate reflection of how well the poorer player works in a team generally. For your first question, experience and skill will be highly correlated simply because practice tends to increase skill, so both factors are unstable over time. Could you further define how you wish to examine recent playing experience? Do you mean: the number of games played over the last week/fortnight, so each time you look at this, you will use the same week/fortnight measure and ignore earlier games, or whether, as the season progresses, does experience tend to mean that initial skill doesn't matter so much? These are two quite different questions and will require different approaches.
Ways to determine if experience or recent practice time is more significant in ranking? thanks for updating your question with the scatterplot, it does give us some information we didn't have before. Eyeballing the scatterplot, it looks like 1v1 performance and 3v3 (adjusted) performance
49,577
TF-IDF cutoff percentage for tweets
Probably the most effective (but also time-consuming) approach will be to hand-pick a set of examples that you know are positive, negative, and neutral. You can then train a classifier (Naive Bayes, SVM, Fisher Discriminant or whatever) on these examples (since you are using 3 classes, you will need to do multi-class classification, although to begin with it might simplify your problem to only look at +ve/-ve and introduce the neutral class later). You should ensure that you have enough examples so that you can perform k-fold cross-validation of the classifier hyperparameters effectively. The more training examples you have, the better the estimation of your threshold will be. Without any training examples, you will have to resort to ad-hoc rules which are unlikely to be robust.
TF-IDF cutoff percentage for tweets
Probably the most effective (but also timeconsuming) approach will be to hand pick a set of examples that you know are postive, negative, and neutral. You can then train a classifier (Naive Bayes, SVM
TF-IDF cutoff percentage for tweets Probably the most effective (but also time-consuming) approach will be to hand-pick a set of examples that you know are positive, negative, and neutral. You can then train a classifier (Naive Bayes, SVM, Fisher Discriminant or whatever) on these examples (since you are using 3 classes, you will need to do multi-class classification, although to begin with it might simplify your problem to only look at +ve/-ve and introduce the neutral class later). You should ensure that you have enough examples so that you can perform k-fold cross-validation of the classifier hyperparameters effectively. The more training examples you have, the better the estimation of your threshold will be. Without any training examples, you will have to resort to ad-hoc rules which are unlikely to be robust.
TF-IDF cutoff percentage for tweets Probably the most effective (but also timeconsuming) approach will be to hand pick a set of examples that you know are postive, negative, and neutral. You can then train a classifier (Naive Bayes, SVM
49,578
TF-IDF cutoff percentage for tweets
For training, if possible, look for users who tweet mostly positive (like celebrities, politicians, etc.) and some others who tweet mostly negative (no example right now) and use their tweets accordingly. There will be some mislabelling in the training data, but you can get a lot of data using this technique.
49,579
Whether to leave the data unaltered in the face of outliers and non-normality when performing structural equation modelling?
A lot depends on where exactly the outliers occur within the model -- in the indicators? in the latent variables and their measurement errors? in the exogenous variables, at the top of the causal chain? In the former case, you cannot do much, as you really have high-leverage influential cases rather than outliers. To control for outliers in the indicators/response variables, you need to work at the equation level, as Moustaki and Victoria-Feser (2006) did. Shooting at it with robust covariance matrices may or may not be the right thing to do. I am referring here to the recent work by Ke-Hai Yuan and Zhiyong Zhang of Notre Dame, who have tried to revive robust estimation methods as applied to structural equation modeling -- see e.g. their R package rsem (which seems to rely on having EQS as the estimation engine, which is odd given the variety of choices within R). They have been publishing like crazy on this in the past five or so years; I've reviewed at least three papers for various journals, and frankly I am at a loss as to which one to recommend, as they all repeat each other. I have not seen this used much in applied work, although it probably should be; maybe you'd be the trendsetter! A great diagnostic tool is the forward search method developed by Atkinson and Riani of LSE (for regression and multivariate data). This has been adapted for SEM here and here. I personally think it is really neat, but whether it could catch on in the SEM community at large, I don't know. Frontiers in Quant Psy published a review paper on this in early 2012. Even though I am the acknowledged reviewer of that work, I am extremely reluctant to recommend it (it barely passed my threshold of publishable work, and I simply gave up explaining the theory of robust statistics in my referee letters), but I am just not aware of anything better.
49,580
Whether to leave the data unaltered in the face of outliers and non-normality when performing structural equation modelling?
General references: Hair et al. has a fairly extensive non-mathematical discussion of issues of multivariate data cleaning and assumption testing that you might find accessible.

First step, understand your data: Why are the distributions as they are? What is causing the outliers? You might want to think about whether the skew and outliers are a natural part of the phenomenon or reflect data-entry errors, erroneous measurements, or participants for which your model is not intended to generalise. Another point: transforming the data will often remove or reduce outlier problems.

What to do with non-normal data: There is some discussion of strategies for performing structural equation modelling with non-normal data here: http://rudyanto62.blogspot.com/2008/01/handling-non-normal-data-in-sem.html and http://ssc.utexas.edu/software/faqs/amos#Amos_7 . In general, it should give you greater confidence in your results if they are not sensitive to the form of transformation and outlier adjustments that you make.

Large standardised residual covariances: This may suggest that your proposed model provides a poor fit to the data. It's important to think about the implications of this. What changes to your model do these residuals suggest that you make?
49,581
Data mining classification competition
An easy way to build an ensemble is by using a random forest. I'm fairly sure Weka has a random forest algorithm, and if other tree-based models are performing well it's worth trying out. You could also build your own ensemble by training multiple (say 50 or 100) J48 decision trees and using them to "vote" on the classification of each object. For example, if 60 trees say a given observation belongs to class "A", and 40 say it belongs to class "B", you classify the object as class "A". You can further improve such an ensemble by training each tree on a random sub-sample of the training data. This is called "bagging", and the random sub-samples are usually created with replacement. Finally, you can additionally give each tree a random subset of variables from the training set. This is called a "random forest". While your professor will probably be impressed if you write your own random forest algorithm, it's probably best to use an existing implementation.
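For completeness, here is a rough R sketch of the same idea (not Weka); train_df and test_df are placeholder data frames whose outcome column class is a factor:

    library(randomForest)

    set.seed(42)
    rf <- randomForest(class ~ ., data = train_df,
                       ntree = 100)             # 100 trees vote on each prediction

    pred <- predict(rf, newdata = test_df)      # majority vote across the trees
    table(pred, test_df$class)                  # quick confusion matrix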
49,582
Data mining classification competition
A model ensemble is simply a collection of models whose output is combined (hopefully generating superior performance in the process). Obviously, to be of any interest, the base models must vary somehow, and there are several ways to do this: vary the model type (tree induction, neural network, discriminant function, etc.), vary the starting conditions of the model training (such as differing weight initializations for feedforward neural networks), vary the observations used (typically random samples of the entire training set), vary the candidate input variables (again, typically random samples of all those available), etc. There are several ways to combine the base model outputs. The simplest are averaging or voting, though these may require some calibration.
49,583
Data mining classification competition
You could try the new machine learning library called ML-Flex (http://mlflex.sourceforge.net). It is designed to execute a variety of ensemble methods and can also provide side-by-side comparisons when different algorithm parameters are used (though perhaps not exactly as you desire). If you're interested, give it a try and provide any feedback you may have. Full disclosure: I am the author of this package.
49,584
Are survivor functions meaningful with proportional hazards models?
It has always been my understanding that the appeal of the Cox proportional hazards model is that it involves no estimation of the underlying hazard function, and as such it is freed from assumptions about the shape of that hazard. From that, I've asserted -- in front of people who should know better -- that this means you can't use the Cox model to generate estimates of the survival functions themselves, only of the differences between them, and I've been met with little objection. For whatever that's worth.
49,585
Finding a minimum variance unbiased (linear) estimator
Your setup is analogous to sampling from a finite population (the $c_i$) without replacement, with a fixed probability $p_i$ of selecting each member of the population for the sample. Successfully opening the $i^{th}$ box corresponds to selecting the corresponding $c_i$ for inclusion in the sample. The estimator you describe is a Horvitz-Thompson estimator, which is the only unbiased estimator in the class of estimators $\hat{S} = \sum_{i=1}^{N} \beta_i c_i$, where $\beta_i$ is a weight to be used whenever $c_i$ is selected for the sample. Thus, within that class of estimators, it is also the optimal unbiased estimator regardless of the criterion for optimality. It has been shown (Ramakrishnan) that the H-T estimator is admissible in the class of all unbiased estimators of a finite population total. (Note the link is not to the original paper by Godambe and Joshi, which I can't seem to find online.) For a review of the Horvitz-Thompson estimator and its properties, see Rao.
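As a toy illustration (my own simulated numbers, not taken from the question), the Horvitz-Thompson estimate is just the sum of the observed values weighted by the reciprocals of their opening probabilities:

    set.seed(1)
    c_vals <- c(10, 20, 5, 40, 25)           # box values
    p      <- c(0.9, 0.5, 0.3, 0.7, 0.6)     # opening probabilities

    opened <- rbinom(length(p), size = 1, prob = p) == 1   # which boxes opened

    S_hat <- sum(c_vals[opened] / p[opened])   # Horvitz-Thompson estimate of the total
    c(estimate = S_hat, truth = sum(c_vals))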
49,586
How to apply unsupervised classification to spatial data
This sounds to me like an image processing question, unless you are looking for a very complex structure. You may want to apply a Gaussian filter to the image and then apply a threshold. You can also ask on https://dsp.stackexchange.com/.
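A rough base-R sketch of that idea, assuming z is a 0/1 presence matrix on your grid; the kernel size, sigma, and the 0.3 threshold are all arbitrary choices you would need to tune:

    gauss_kernel <- function(size = 5, sigma = 1) {
      ax <- seq(-(size - 1) / 2, (size - 1) / 2)
      k  <- outer(ax, ax, function(x, y) exp(-(x^2 + y^2) / (2 * sigma^2)))
      k / sum(k)                                # normalise so weights sum to 1
    }

    smooth2d <- function(z, k) {
      r  <- (nrow(k) - 1) / 2
      zp <- matrix(0, nrow(z) + 2 * r, ncol(z) + 2 * r)    # zero-padded copy
      zp[(r + 1):(r + nrow(z)), (r + 1):(r + ncol(z))] <- z
      out <- matrix(0, nrow(z), ncol(z))
      for (i in seq_len(nrow(z)))
        for (j in seq_len(ncol(z)))
          out[i, j] <- sum(zp[i:(i + 2 * r), j:(j + 2 * r)] * k)
      out
    }

    z_smooth <- smooth2d(z, gauss_kernel(5, 1))
    patches  <- z_smooth > 0.3    # TRUE cells form the smoothed clusters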
49,587
How to apply unsupervised classification to spatial data
This software (it won the best demonstration award at SSTD 2011) should be able to do spatial clustering, too.
49,588
Neural network model to predict treatment outcome
It's often a good idea to do PCA before fitting a neural network, so your instinct could be right there. The only way you are going to determine which model is better for a given problem is to cross-validate both and compare out-of-sample error. The caret package in R is a good way to compare models using this technique (specifically the train function). As a bonus, it includes a model called pcaNNet, which calculates principal components before fitting a neural network.
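A hedged sketch of that comparison with caret; X and y are placeholders for your predictor matrix and outcome factor:

    library(caret)

    set.seed(10)
    ctrl <- trainControl(method = "cv", number = 10)

    fit_pca_nn <- train(x = X, y = y,
                        method    = "pcaNNet",   # principal components, then a neural net
                        trControl = ctrl,
                        trace     = FALSE)       # silence nnet's iteration log

    fit_pca_nn   # cross-validated accuracy over the size/decay tuning grid

Fitting your other candidate (e.g. a logistic-regression model) with the same trainControl object lets you compare the resampled performance estimates directly.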
49,589
Neural network model to predict treatment outcome
General rules for when to use a neural network:

1) You can tell, relatively easily, what the right answer is, but cannot describe how you know it's the right answer. If you know what steps to take to get the right answer, code them directly rather than training a NN; and if you can't tell what the right answer is likely to be, a NN likely won't be able to either.

2) 90% accuracy is good enough (e.g. when other techniques give substantially less); NNs by their nature do not give watertight 100% accuracy.

3) You just need the right answer, not an understanding of how; NNs do not, by their nature, tend to give much insight into the nature of the system.

By the way, giving a NN both the raw data and transforms of it (averages, deltas, etc.) and letting the learning algorithm decide which are useful for prediction is better than figuring it out yourself; if you determine everything about which factors are important and how to code them, you have already done most (not all) of the work a NN can do for you anyway.

P.S. Running a NN many times and taking the best result is a good idea; any good NN implementation is stochastic, and different runs may be better or worse by a substantial amount.
49,590
Function to convert arithmetic to log-based covariance matrix?
If I have understood the code correctly (ignoring the "$-1$" in the computation of $m$), its input is an $n$-vector $\mu = (\mu_1, \ldots, \mu_n)$ and a symmetric $n$ by $n$ matrix $\Sigma = (\sigma_{ij})$. The output is an $n$-vector $m$ with $$m_i = \exp(\mu_i + \sigma_{ii}/2)$$ and an $n$ by $n$ matrix $S$ with $$S_{ij} = \exp(\mu_i + \mu_j + (\sigma_{ii}+\sigma_{jj})/2)(\exp(\sigma_{ij})-1) = m_i(\exp(\sigma_{ij})-1)m_j.$$ If this is correct, then we can solve readily for $\mu$ and $\Sigma$ in terms of $m$ and $S$ essentially by reversing these operations. Begin by forming the diagonal matrix $M$ whose diagonal entries are $1/m_i$: that is, $M_{ii}=1/m_i$ and $M_{ij}=0$ for $i\ne j$. From the right hand side of the preceding formula it follows immediately that $$M S M + 1_n = \exp(\sigma_{ij})$$ and we easily recover $\Sigma$ by taking the logarithms term-by-term. With these values in hand, $$\mu_i = \log(m_i) - \sigma_{ii}/2.$$ Edit The code in the question uses "linear returns" rather than means. There's no problem with that: starting with the "returns" $m_i$ computed as $\exp(\mu_i + \sigma_{ii}/2)-1$, first add back the $1$ and proceed as above.
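The inverse mapping described above is short enough to write out directly; here is a sketch, assuming m holds the linear returns and S the arithmetic covariance matrix:

    lin2log <- function(m, S) {
      m1    <- m + 1                          # add back the 1 first
      Minv  <- diag(1 / m1)                   # diagonal matrix with entries 1/m_i
      Sigma <- log(Minv %*% S %*% Minv + 1)   # elementwise log recovers sigma_ij
      mu    <- log(m1) - diag(Sigma) / 2
      list(mu = mu, Sigma = Sigma)
    }

A quick round trip through the forward code in the question is a useful check that the two mappings really are inverses.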
49,591
Odd error with caret function rfe
You have to specify the sizes argument ($\leq 2$ in your example). The default value in rfe is sizes = 2^(2:4), but you only have two features. From ?rfe: "sizes: a numeric vector of integers corresponding to the number of features that should be retained."
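For example, with two predictors something like this runs without that error (toy data; rfFuncs ranks the features with random forests and needs the randomForest package):

    library(caret)

    set.seed(7)
    x <- data.frame(a = rnorm(100), b = rnorm(100))
    y <- factor(ifelse(x$a + rnorm(100) > 0, "yes", "no"))

    ctrl <- rfeControl(functions = rfFuncs, method = "cv", number = 5)
    fit  <- rfe(x, y, sizes = 1:2, rfeControl = ctrl)   # sizes never exceeds ncol(x)
    fit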
49,592
How to compare Harrell C-index from different models in survival analysis?
Harrell would advise that you NOT do so; see the discussion in "How to do ROC-analysis in R with a Cox model". Doing model comparison with LR statistics is more powerful than using methods that depend on the asymptotic distribution of the C-index.
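A minimal sketch of such a likelihood-ratio comparison for nested Cox models, using the lung data shipped with the survival package as a stand-in for your own data:

    library(survival)

    fit_small <- coxph(Surv(time, status) ~ age,       data = lung)
    fit_large <- coxph(Surv(time, status) ~ age + sex, data = lung)

    anova(fit_small, fit_large)   # likelihood-ratio test for the added term(s)

Note that this only applies when one model is nested inside the other.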
49,593
How to compare Harrell C-index from different models in survival analysis?
LR statistics are well suited for hierarchical (nested) models. When the models contain different sets of variables, I would expect information criteria (Akaike's AIC or the Bayesian BIC) to indicate which model is best.
49,594
Is sequential Bayesian updating an option when using MCMC?
In this setting, MCMC is less appropriate than particle systems or sequential Monte Carlo, because you can use the previous particle system as an approximation to your prior (the posterior for the earlier data points) and only use one observation at a time. Appropriate references for this are, e.g., Del Moral, Doucet, and Jasra (Journal of the Royal Statistical Society, Series B, 2006, 68, 411-436) and Andrieu, Doucet, and Holenstein (Journal of the Royal Statistical Society, Series B, 2010, 72, 269-342).
49,595
Is sequential Bayesian updating an option when using MCMC?
Are you underflowing because you are not anywhere near reasonable parameter values? Perhaps you just need to find a good starting location; I otherwise don't see how you could underflow short of having 1e20 data points or parameters. You could use a small (but randomly sampled) portion of your data, do ML estimation on it, and then use that as your starting point.
49,596
How to calculate mean and standard deviation of a count variable when the raw data is based on frequency categories?
You need to be creative, because these data are consistent with any mean exceeding $0\times .05 + 1\times .07 + \cdots + 5\times .18$ = $2.89$ and any standard deviation exceeding $1.38$ (which are attained by assuming nobody visited any more than five times per month). For reporting purposes, simply tabulate or graph the raw data (a bar chart of the reported categories works well).

If you must have a summary of location and spread, use alternative measures that can uniquely be found from these data. The median is between 2 and 3, because 45% visited 2 times or fewer and 67% visited 3 times or fewer. You might simply interpolate linearly and report a median of 2.3 visits per month. For the spread, use (say) an interquartile range, also computed with linear interpolation. I find Q1 is 1.4 and Q3 is 3.3, for an IQR of 1.9.

To go beyond that, you need to fit the data with a distribution, which requires assumptions and therefore is not just reporting. But it can be useful. However, these data are elusive: they will not fit standard models like the Binomial or Poisson. (I recommend against trying to fit discretized versions of continuous distributions, such as the Lognormal, because it's hard to find any reason why they should fit: they don't form informative bases for comparison. Moreover, since there are only six values here, it would be almost worthless to use more than one parameter in the modeling: two or more parameters give too much flexibility.)

As an example of the insight that might be afforded by a simple distributional fit, suppose the visits are made randomly over time by individuals and each individual has the same probability (per unit time) of visiting. This is potentially a useful and interesting framework against which these data can be compared. It leads to a Poisson distribution. The best fit (in a chi-squared sense) is achieved with an intensity of 3.185 per month; this is also the variance (whence the standard deviation is $\sqrt{3.185}$ = $1.8$). This is not a good fit (as a chi-squared test will show, but the eye plainly sees): there are too many people reporting 2 visits and too few reporting 1 visit. That perhaps is the most interesting thing about this analysis.

You could announce these results like this: The median number of monthly visits among the respondents is 2.3 (with an IQR of 1.9). The data depart significantly from a (best fit) Poisson distribution with a mean of 3.18 visits per month in that 19 fewer people than expected report one visit and 37 more people than expected report two visits.

Incidentally, a Poisson fit suggestively fills in the upper tail of "5 or more visits," providing quantitative hypotheses that could be tested in follow-on surveys; other distributions would give different extrapolations into this upper range.
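A small R sketch of the Poisson comparison (working with proportions only, since the sample size is not repeated here; the top class is treated as "5 or more"):

    obs <- c(0.05, 0.07, 0.33, 0.22, 0.15, 0.18)

    pois_probs <- function(lambda)
      c(dpois(0:4, lambda), ppois(4, lambda, lower.tail = FALSE))

    # chi-squared-style discrepancy between observed and fitted class proportions
    disc <- function(lambda) sum((obs - pois_probs(lambda))^2 / pois_probs(lambda))

    best <- optimize(disc, interval = c(0.5, 10))
    best$minimum                          # fitted intensity (visits per month)
    round(pois_probs(best$minimum), 3)    # expected proportions, to compare with obs

The minimizer should land close to the 3.18 quoted above, and the residuals at 1 and 2 visits are where the fit visibly fails.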
49,597
How to calculate mean and standard deviation of a count variable when the raw data is based on frequency categories?
You definitely have to associate a numerical value with the class "visited five or more times a month". That done, I would calculate the mean and the standard deviation in the usual way. Here the $x_i$ are your values and the $p_i$ are their empirical frequencies estimated on the sample. In your case $$x_0=0 \ x_1=1 \ x_2=2 \ x_3=3 \ x_4=4 \ x_5=6$$ (you must choose a value for $x_5$) $$p_0=0.05 \ p_1=0.07 \ p_2=0.33 \ p_3=0.22 \ p_4=0.15 \ p_5=0.18 $$ Thus $$\bar{x} = \sum_{i=0}^{5}x_i p_i$$ and $$\sigma=\sqrt{\sum_{i=0}^{5}(x_i - \bar{x})^2 p_i}$$ It could also be interesting to drop $x_0$ and $p_0$ and rescale the remaining $p_i$ so that they sum to 1. That way you can calculate the average number of visits for a person who actually visits the supermarket.
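In R this is a one-liner each way, with the open-ended class coded as 6 (which is exactly the assumption flagged above):

    x <- c(0, 1, 2, 3, 4, 6)
    p <- c(0.05, 0.07, 0.33, 0.22, 0.15, 0.18)

    m <- sum(x * p)                  # weighted mean
    s <- sqrt(sum((x - m)^2 * p))    # weighted (population-style) SD
    c(mean = m, sd = s)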
49,598
Estimating a p-value when you can't compute it for the whole set
It's not correct to randomly sample both lists a large number of times and average the p-values; the result would understate the evidence against the null hypothesis if it's false, as you then expect the p-value to get smaller as the sample size gets larger, but with this procedure it would stay the same on average. Instead I'd suggest using Fisher's combined probability test to combine the p-values. This assumes the p-values come from independent tests, so you want to sample without replacement so that each list value only occurs in one sample. Equivalently, randomly order both lists then divide them up into suitable-sized chunks to feed into your black box.
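Fisher's method itself is only a few lines of R; pvals below is a placeholder for the vector of p-values returned by the black box on the non-overlapping chunks:

    fisher_combine <- function(pvals) {
      stat <- -2 * sum(log(pvals))              # Fisher's combined statistic
      df   <- 2 * length(pvals)                 # chi-squared degrees of freedom
      c(statistic = stat, df = df,
        p.value = pchisq(stat, df = df, lower.tail = FALSE))
    }

    fisher_combine(c(0.04, 0.20, 0.01, 0.12))   # toy example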
49,599
Putting a confidence interval on the mean of a very rare event
The normal approximation for the confidence interval of binomial proportions breaks down very badly for rare events and the rules of thumb about sample sizes are inconsistent and unreliable. Better methods are just as easy to calculate (i.e. you click the button!) and so there is no reason for anyone to use the normal approximation. Ever. Have a quick look at the papers below (and then use Wilson's method). Vollset. Confidence intervals for a binomial proportion. Statist. Med. (1993) vol. 12 (9) pp. 809-24 Brown et al. Interval Estimation for a Binomial Proportion. Statistical Science (2001) pp. 101-117 http://www.jstor.org/stable/2676784 See also some previous questions put to this list: How to report asymmetrical confidence intervals of a proportion? and Discrete functions: Confidence interval coverage? and Clarification on interpreting confidence intervals?
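Wilson's interval is also easy to compute by hand if you prefer not to rely on a package; here x is the number of successes out of n trials, and the numbers in the example call are made up to illustrate the rare-event case:

    wilson_ci <- function(x, n, conf = 0.95) {
      z  <- qnorm(1 - (1 - conf) / 2)
      ph <- x / n
      centre <- (ph + z^2 / (2 * n)) / (1 + z^2 / n)
      half   <- z * sqrt(ph * (1 - ph) / n + z^2 / (4 * n^2)) / (1 + z^2 / n)
      c(lower = centre - half, upper = centre + half)
    }

    wilson_ci(x = 3, n = 1e6)   # e.g. 3 detections out of a million trials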
49,600
Putting a confidence interval on the mean of a very rare event
Now that it is clear that you have a weighting function, I suggest that you use Bayesian intervals (often called credible intervals) with the weighting function being the prior. Multiply that by the likelihood function provided by your results to get the posterior. Any interval containing 95% of the area under that posterior distribution is a 95% credible interval. The likelihood function is easily calculated: start with a uniform (0,1) indicating no data and so no evidence. For each photon received you multiply the distribution by y=x and for each photon sent but not received multiply it by y=1-x. When you've done that for all of the photons sent you will have the likelihood function representing the evidence inherent in your data. You can scale it to a maximum of 1 to look conventional, if you like. [Of course, y represents the likelihood and x is the hypothetical probability of success in each trial.] There is a formula for the likelihood function, but I find it easier to understand in the way I've expressed it here.
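In the special case where the weighting function (prior) is flat, the posterior after s received photons and f sent-but-not-received photons is Beta(s + 1, f + 1), so an equal-tailed 95% credible interval is just two quantiles (the counts below are placeholders):

    s <- 3            # photons received
    f <- 1e6 - s      # photons sent but not received

    qbeta(c(0.025, 0.975), shape1 = s + 1, shape2 = f + 1)

With a non-flat prior you can do the same thing numerically: evaluate prior(x) * x^s * (1 - x)^f on a fine grid of x values, normalise, and read the interval off the cumulative sums.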