Id: stringlengths (1-6)
PostTypeId: stringclasses (7 values)
AcceptedAnswerId: stringlengths (1-6)
ParentId: stringlengths (1-6)
Score: stringlengths (1-4)
ViewCount: stringlengths (1-7)
Body: stringlengths (0-38.7k)
Title: stringlengths (15-150)
ContentLicense: stringclasses (3 values)
FavoriteCount: stringclasses (3 values)
CreationDate: stringlengths (23-23)
LastActivityDate: stringlengths (23-23)
LastEditDate: stringlengths (23-23)
LastEditorUserId: stringlengths (1-6)
OwnerUserId: stringlengths (1-6)
Tags: list
6014
2
null
5995
5
null
Make a table:

```
                 No renewal  Renewal   Total
                 ----------  -------  ------
Attribute  No         79800      200   80000
           Yes        19700      300   20000
                 ---------------------------
           Total      99500      500  100000
```

The computations are:

- Number of non-renewers = 100,000 - 500 = 99,500.
- "Attribute" is 20% of all buyers = 0.20 * 100,000 = 20,000.
- Non-"attribute" is therefore 100,000 - 20,000 = 80,000 of all buyers.
- "Attribute" is 60% of all renewers = 0.60 * 500 = 300.
- Therefore, 20,000 - 300 = 19,700 of all non-renewers have the attribute.
- Non-"attribute" for renewers is 500 - 300 = 200.
- Therefore, 80,000 - 200 = 79,800 of all non-renewers do not have the attribute.

Conduct a [chi-square test of independence](http://en.wikipedia.org/wiki/Pearson%27s_chi-square_test#Test_of_independence). This is valid because each of the cells in your table has a large count. ("Large" typically means 5 or greater.) Here are the calculations:

- 80% of all buyers are "no attribute" and 99.5% do not renew. Therefore we expect .80 * .995 * 100,000 = 79,600 to be non-renewers with "no attribute".
- Similarly, the rest of the table of expected values is

```
                 No renewal  Renewal
                 ----------  -------
Attribute  No         79600      400
           Yes        19900      100
```

- The residuals (differences between the tables) are 79800 - 79600 = 200, 200 - 400 = -200, 19700 - 19900 = -200, and 300 - 100 = 200.
- The chi-square terms are the squared residuals divided by the expectations. Specifically, these equal $200^2/79600 = 0.5025$, $(-200)^2/400 = 100$, $(-200)^2/19900 = 2.0101$, and $200^2/100 = 400$. The chi-square statistic is their sum, 0.5025 + 100 + 2.0101 + 400 = 502.5126.
- There is only one degree of freedom: given the marginal percentages (80% and 20% for rows, 99.5% and 0.5% for columns), the entire table is determined by the number of buyers (100,000). That's a single value, so there's one DoF.

The chance of a chi-square variate exceeding 502.5126 is astronomically small (less than 2.3E-111). It's best to use good statistical software to compute this value, but even Excel's calculation =CHIDIST(502.5126, 1) is close enough.

---

Now we can answer the questions.

- Your data say nothing about the "attribute" affecting behavior. This is an observational study. All it finds is that there is an association between the "attribute" and renewal and (thanks to the chi-square calculation) we cannot attribute that association to randomness.
- If 55.65% of all buyers had the "attribute" and 60% of all renewers continued to have the attribute, the chi-square test would still be (barely) significant at the 5% level.
- It is not clear what you mean by "huge difference." Is it the proportion having the "attribute"? Their actual numbers? And what do you mean by "bias"? That would depend on how you plan to use these results. If you want to predict future renewal rates, there are all kinds of potential biases resulting from ways in which these particular 100,000 customers could differ from future customers. But that question is not settled with a statistical analysis: it's really a matter of (marketing) faith. What the data say is that renewal is convincingly associated with the "attribute" within this particular population of 100,000 people.
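For readers who want to check these numbers, here is a minimal R sketch (added for this edition, not part of the original answer) that reproduces the Pearson chi-square statistic from the observed 2x2 table:

```
# Observed counts: rows = attribute (No, Yes), columns = (No renewal, Renewal)
obs <- matrix(c(79800, 200,
                19700, 300),
              nrow = 2, byrow = TRUE,
              dimnames = list(Attribute = c("No", "Yes"),
                              Renewal   = c("No renewal", "Renewal")))

# Pearson chi-square test of independence without continuity correction,
# so the statistic matches the hand calculation above (about 502.5 on 1 df)
test <- chisq.test(obs, correct = FALSE)
test$statistic   # X-squared ~ 502.5
test$expected    # 79600, 400, 19900, 100
test$p.value     # astronomically small
```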
null
CC BY-SA 2.5
null
2011-01-05T19:37:25.097
2011-01-05T19:37:25.097
null
null
919
null
6015
2
null
6013
13
null
Based on the way you phrase the question

> are outliers not necessarily the best way to attack the problem of finding 'badness'?

It is not clear that you are looking for outliers. For example, it seems that you are interested in machines performing above/below some threshold. As an example, if all of your servers were at 98 $\pm$ 0.1 % availability, a server at 100% availability would be an outlier, as would a server at 97.6% availability. But these may be within your desired limits. On the other hand, there may be good reasons a priori to want to be notified of any server at less than 95% availability, whether or not there are one or many servers below this threshold. For this reason, a search for outliers may not provide the information that you are interested in. The thresholds could be determined statistically based on historical data, e.g. by modeling the error rate as Poisson or percent availability as Beta variables. In an applied setting, these thresholds could probably be determined based on performance requirements.
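As a rough illustration of deriving such thresholds from historical data (a minimal R sketch added for this edition; the simulated counts, sample sizes and cutoff probabilities are all made-up assumptions):

```
# Hypothetical historical data: error counts per minute and availability per day
hist_errors <- rpois(1000, lambda = 2)                 # stand-in for real logs
hist_avail  <- rbeta(365, shape1 = 980, shape2 = 20)   # roughly 98% availability

# Poisson model for the error rate: flag anything above the 99.9th percentile
lambda_hat  <- mean(hist_errors)
error_limit <- qpois(0.999, lambda_hat)

# Beta model for availability: flag anything below the 0.1st percentile
# (method-of-moments fit to the historical proportions)
m <- mean(hist_avail); v <- var(hist_avail)
a <- m * (m * (1 - m) / v - 1)
b <- (1 - m) * (m * (1 - m) / v - 1)
avail_limit <- qbeta(0.001, a, b)

c(error_limit = error_limit, availability_limit = avail_limit)
```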
null
CC BY-SA 3.0
null
2011-01-05T20:04:41.887
2011-09-29T13:24:27.140
2011-09-29T13:24:27.140
2817
1381
null
6016
2
null
6013
4
null
A simple way to find anomalous servers would be to assume they are identically distributed, estimate the population parameters, and sort them according to their likelihoods, ascending. Column likelihoods would be combined either with their product or their minimum (or some other T-norm). This works pretty well as long as outliers are rare. For outlier detection itself, stable population parameters are usually estimated iteratively by dropping any discovered outliers, but that's not vital as long as you're manually inspecting the list and thereby avoiding thresholding. For the likelihoods, you might try Beta for the proportions and Poisson for the rates. As pointed out by David, outlier detection is not quite the same as reliability analysis, which would flag all servers that exceed some threshold. Furthermore, some people would approach the problem through loss functions: defining the pain you feel when some server is at 50% availability or an error rate of 500, and then ranking the servers according to that pain.
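A minimal R sketch of this likelihood-ranking idea (added for illustration; the simulated data, the equal-weight product combination and the column names are assumptions, not part of the original answer):

```
set.seed(1)
n <- 50  # hypothetical fleet of servers
servers <- data.frame(
  avail  = rbeta(n, 98, 2),      # proportion-type metric
  errors = rpois(n, lambda = 5)  # rate-type metric
)
servers$errors[7] <- 40          # plant one anomaly

# Fit simple population models (method of moments for the Beta)
m <- mean(servers$avail); v <- var(servers$avail)
a <- m * (m * (1 - m) / v - 1); b <- (1 - m) * (m * (1 - m) / v - 1)
lam <- mean(servers$errors)

# Per-column log-likelihoods, combined by their sum (i.e. the product of likelihoods)
ll <- dbeta(servers$avail, a, b, log = TRUE) +
      dpois(servers$errors, lam, log = TRUE)

# Most anomalous servers first
head(servers[order(ll), ], 5)
```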
null
CC BY-SA 2.5
null
2011-01-05T20:59:42.467
2011-01-05T20:59:42.467
null
null
2456
null
6017
2
null
2691
20
null
From someone who has used PCA a lot (and tried to explain it to a few people as well) here's an example from my own field of neuroscience. When we're recording from a person's scalp we do it with 64 electrodes. So, in effect we have 64 numbers in a list that represent the voltage given off by the scalp. Now since we record with microsecond precision, if we have a 1-hour experiment (often they are 4 hours) then that gives us 1e6 * 60^2 == 3,600,000,000 time points at which a voltage was recorded at each electrode so that now we have a 3,600,000,000 x 64 matrix. Since a major assumption of PCA is that your variables are correlated, it is a great technique to reduce this ridiculous amount of data to an amount that is tractable. As has been said numerous times already, the eigenvalues represent the amount of variance explained by the variables (columns). In this case an eigenvalue represents the variance in the voltage at a particular point in time contributed by a particular electrode. So now we can say, "Oh, well electrode `x` at time point `y` is what we should focus on for further analysis because that is where the most change is happening". Hope this helps. Loving those regression plots!
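A small R sketch of this kind of reduction (added for illustration; the data are simulated stand-ins, with far fewer time points and channels than a real recording):

```
set.seed(42)
n_time <- 1000   # time points (rows)
n_chan <- 64     # electrodes (columns)

# Correlated channels: a few latent sources mixed into 64 channels plus noise
sources <- matrix(rnorm(n_time * 3), n_time, 3)
mixing  <- matrix(runif(3 * n_chan), 3, n_chan)
eeg     <- sources %*% mixing + matrix(rnorm(n_time * n_chan, sd = 0.5), n_time, n_chan)

pca <- prcomp(eeg, center = TRUE, scale. = TRUE)
summary(pca)$importance[, 1:5]   # proportion of variance of the leading components
scores <- pca$x[, 1:3]           # 1000 x 3 instead of 1000 x 64
```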
null
CC BY-SA 3.0
null
2011-01-05T21:11:38.220
2018-02-20T19:13:32.433
2018-02-20T19:13:32.433
2660
2660
null
6019
2
null
6013
2
null
Identifying a given data point as an outlier implies that there is some data generating process or model from which the data are expected to come. It sounds like you are not sure what those models are for the given metrics and clusters you are concerned about. So, here is what I would consider exploring: [statistical process control charts](https://web.archive.org/web/20130526045911/https://controls.engin.umich.edu/wiki/index.php/SPC%3a_Basic_control_charts%3a_theory_and_construction,_sample_size,_x-bar,_r_charts,_s_charts). The idea here would be to collect the

- %Availability
- Requests/Sec
- Errors/Sec
- %Memory_Utilization

metrics for each of your clusters. For each metric, create a subset of the data that only includes values that are "reasonable" or in control. Build the charts for each metric based on this in-control data. Then you can start feeding live data to your charting code and visually assess if the metrics are in control or not. Of course, visually doing this for multiple metrics across many clusters may not be feasible, but this could be a good way to start to learn about the dynamics you are faced with. You might then create a notification service for clusters with metrics that go out of control. Along these lines, I have played with using neural networks to automatically classify control chart patterns as being OK vs some specific flavor of out-of-control (e.g. %availability trending down or cyclic behavior in errors/sec). Doing this gives you the advantages of statistical process control charts (long used in manufacturing settings) but eases the burden of having to spend lots of time actually looking at charts, since you can train a neural network to classify patterns based upon your expert interpretation. As for code, there is the [spc package on pypi](http://pypi.python.org/pypi/spc/0.3) but I do not have any experience using this. My toy example of using neural networks (naive Bayes too) [can be found here](https://web.archive.org/web/20130515165837/http://forums.vni.com/showthread.php?t=4215).
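A bare-bones R sketch of the control-chart idea (added for illustration; the three-sigma limits, window lengths and simulated metrics are assumptions rather than anything from the original answer):

```
set.seed(7)
in_control <- rnorm(200, mean = 99.5, sd = 0.2)   # historical "reasonable" %availability
center <- mean(in_control)
sdev   <- sd(in_control)
ucl <- center + 3 * sdev   # upper control limit
lcl <- center - 3 * sdev   # lower control limit

live <- c(rnorm(30, 99.5, 0.2), rnorm(5, 98.5, 0.2))  # last few points drift down
plot(live, type = "b", ylim = range(c(live, lcl, ucl)),
     xlab = "sample", ylab = "%Availability")
abline(h = c(lcl, center, ucl), lty = c(2, 1, 2))
which(live < lcl | live > ucl)   # indices that would trigger a notification
```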
null
CC BY-SA 4.0
null
2011-01-06T04:54:35.017
2023-01-04T16:07:59.133
2023-01-04T16:07:59.133
362671
1080
null
6020
1
6031
null
3
11112
I'm a programmer with little statistical background, and I'm trying to create something similar to what Facebook did recently (with other data): [http://www.facebook.com/notes/facebook-data-team/whats-on-your-mind/477517358858](http://www.facebook.com/notes/facebook-data-team/whats-on-your-mind/477517358858) That is, I want to be able to find the correlation between one variable (age) and a bunch of other variables or categories (types of words). For the first chart, age vs. word type, I'm guessing they started out with data that looked like: (header) Status Update, Age, Word Cat1 percentage, Word Cat2 percentage, Word Cat3 percentage, etc., and a million such rows. From what I understand, correlation allows you to compare one variable to another variable, so how can I compare one variable (age) to a bunch of others?
Doing correlation on one variable vs many
CC BY-SA 2.5
null
2011-01-06T05:27:21.180
2011-01-06T12:50:15.300
null
null
2664
[ "r", "correlation" ]
6021
1
6030
null
10
8112
This is a data visualization question. I have a database that contains some data that is constantly revised (online update). What is the best way in R to update a graph every, let's say, 5 or 10 seconds (without re-plotting the whole thing, if possible)? fRed
R: update a graph dynamically
CC BY-SA 3.0
null
2011-01-06T06:44:19.350
2011-05-09T04:55:43.877
2011-05-09T04:55:43.877
1709
1709
[ "r", "data-visualization" ]
6022
1
6038
null
30
29230
What methods can I use to infer a distribution if I know only three percentiles? For example, I know that in a certain data set, the fifth percentile is 8,135, the 50th percentile is 11,259, and the 95th percentile is 23,611. I want to be able to go from any other number to its percentile. It's not my data, and those are all the statistics I have. It's clear that the distribution isn't normal. The only other information I have is that this data represents government per-capita funding for different school districts. I know enough about statistics to know that this problem has no definite solution, but not enough to know how to go about finding good guesses. Would a lognormal distribution be appropriate? What tools can I use to perform the regression (or do I need to do it myself)?
Estimating a distribution based on three percentiles
CC BY-SA 2.5
null
2011-01-06T08:11:22.483
2021-09-28T13:36:41.830
2011-01-07T13:49:30.270
2116
2665
[ "r", "regression", "quantiles" ]
6023
2
null
6021
8
null
For offline visualization, you can generate PNG files and convert them to an animated GIF using [ImageMagick](http://www.imagemagick.org/). I used it for demonstration (this redraws all the data, though):

```
source(url("http://aliquote.org/pub/spin_plot.R"))
dd <- replicate(3, rnorm(100))
spin.plot(dd)
```

This generates several PNG files, prefixed with `fig`. Then, on an un*x shell,

```
convert -delay 20 -loop 0 fig*.png sequence.gif
```

gives this animation (inspired by Modern Applied Biostatistical Methods using S-Plus, S. Selvin, 1998):

![alt text](https://i.stack.imgur.com/Bjauf.gif)

Another option which looks much more promising is to rely on the [animation](http://cran.r-project.org/web/packages/animation/index.html) package. There is an example with a [Moving Window Auto-Regression](http://animation.yihui.name/ts:moving_window_ar) that should get you started.
null
CC BY-SA 2.5
null
2011-01-06T08:14:27.433
2011-01-06T08:14:27.433
null
null
930
null
6024
1
null
null
5
201
Given observations of $\{y, x_1, x_2, \cdots, x_n\}$, we can always do a linear regression and get all the coefficients $\{c_i\}$ for the model $$y = c_0 + c_1 x_1 + \cdots + c_n x_n.$$ However, this may not be the best answer. Let me explain it. When we are doing a regression, we have estimates $\{d_i\}$ for the standard deviations of the coefficients $\{c_i\}$ and it may turn out, in my particular problem, that most of these coefficients have low $t$-values. On the other hand, in my problem, I already know the underlying model is more like $$y = \Sigma_i c_i (x_{m_i} - x_{n_i})$$ such as $$y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$$ and the problem is I don't know $\{m_i\}$ and $\{n_i\}$. That is, in my strange case, if I already know it is of the form $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$, then when I find $c_1$ and $c_2$, I will find them to have high $t$-values. Nevertheless I don't know $(x_3 - x_4)$ and $(x_1 - x_9)$ are the "special" combinations. And if I just solve $y = c_1 x_1 + \cdots + c_9 x_9$, I will find all $\{c_i\}$ to have low $t$-values. (The reason for this strange phenomenon is, my $\{x_i\}$ have significant correlations with each other.) It seems that I can solve the model $$y = c_{1,2} (x_1 - x_2) + c_{1,3} (x_1 - x_3) + \cdots + c_{4,6} (x_4 - x_6) + \cdots$$ and find all $\{c_{i,j}\}$ with high $t$-values. But then there will be $36$ coefficients $\{c_{i,j}\}$ instead of $9$. I wonder if there are faster methods? Thank you.
Effectively fitting this kind of model: $y = c_1 (x_3 - x_4) + c_2 (x_1 - x_9)$
CC BY-SA 2.5
null
2011-01-06T08:51:17.610
2011-01-07T05:26:54.567
2011-01-06T16:02:04.530
919
null
[ "regression" ]
6025
2
null
6022
6
null
For a lognormal the ratio of the 95th percentile to the median is the same as the ratio of the median to the 5th percentile. That's not even nearly true here so lognormal wouldn't be a good fit. You have enough information to fit a distribution with three parameters, and you clearly need a skew distribution. For analytical simplicity, I'd suggest the [shifted log-logistic distribution](http://en.wikipedia.org/wiki/Shifted_log-logistic_distribution#Shifted_log-logistic_distribution) as its [quantile function](http://en.wikipedia.org/wiki/Quantile_function) (i.e. the inverse of its cumulative distribution function) can be written in a reasonably simple closed form, so you should be able to get closed-form expressions for its three parameters in terms of your three quantiles with a bit of algebra (I'll leave that as an exercise!). This distribution is used in flood frequency analysis. This isn't going to give you any indication of the uncertainty in the estimates of the other quantiles though. I don't know if you need that, but as a statistician I feel I should be able to provide it, so I'm not really satisfied with this answer. I certainly wouldn't use this method, or probably any method, to extrapolate (much) outside the range of the 5th to 95th percentiles.
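Working through the algebra this answer alludes to (a sketch added for this edition and derived independently, so re-check it before relying on it), the three parameters do come out in closed form from the question's quantiles, which is easy to verify in R:

```
# Quantiles given in the question
q5 <- 8135; q50 <- 11259; q95 <- 23611

# Shifted log-logistic quantile function (Wikipedia parameterization):
# Q(p) = mu + (sigma/xi) * ((p/(1-p))^xi - 1)
# At the median p/(1-p) = 1, so mu = q50; a little algebra then gives:
r     <- (q95 - q50) / (q50 - q5)
xi    <- log(r) / log(19)            # 19 = 0.95/0.05
sigma <- xi * (q95 - q50) / (19^xi - 1)
mu    <- q50

# Resulting quantile and distribution functions
qsll <- function(p) mu + (sigma / xi) * ((p / (1 - p))^xi - 1)
psll <- function(x) 1 / (1 + (1 + xi * (x - mu) / sigma)^(-1 / xi))

qsll(c(0.05, 0.5, 0.95))   # reproduces 8135, 11259, 23611
psll(15000)                # e.g. the percentile of a new value
```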
null
CC BY-SA 2.5
null
2011-01-06T08:56:43.687
2011-01-06T08:56:43.687
null
null
449
null
6026
1
6028
null
20
84467
I'm a medical student trying to understand statistics(!) - so please be gentle! ;) I'm writing an essay containing a fair amount of statistical analysis including survival analysis (Kaplan-Meier, Log-Rank and Cox regression). I ran a Cox regression on my data trying to find out if I can find a significant difference between the deaths of patients in two groups (high risk or low risk patients). I added several covariates to the Cox regression to control for their influence.

```
Risk (Dichotomous)
Gender (Dichotomous)
Age at operation (Integer level)
Artery occlusion (Dichotomous)
Artery stenosis (Dichotomous)
Shunt used in operation (Dichotomous)
```

I removed Artery occlusion from the covariates list because its SE was extremely high (976). All other SEs are between 0,064 and 1,118. This is what I get:

```
              B      SE     Wald  df  Sig.  Exp(B)  95,0% CI for Exp(B)
                                                     Lower     Upper
risk       2,086   1,102   3,582   1  ,058   8,049    ,928    69,773
gender     -,900    ,733   1,508   1  ,220    ,407    ,097     1,710
op_age      ,092    ,062   2,159   1  ,142   1,096    ,970     1,239
stenosis    ,231    ,674    ,117   1  ,732   1,259    ,336     4,721
op_shunt    ,965    ,689   1,964   1  ,161   2,625    ,681    10,119
```

I know that risk is only borderline-significant at 0,058. But besides that how do I interpret the Exp(B) value? I read an article on logistic regression (which is somewhat similar to Cox regression?) where the Exp(B) value was interpreted as: "Being in the high-risk group includes an 8-fold increase in possibility of the outcome," which in this case is death. Can I say that my high-risk patients are 8 times as likely to die earlier than ... what? Please help me! ;) By the way I'm using SPSS 18 to run the analysis.
How do I interpret Exp(B) in Cox regression?
CC BY-SA 3.0
null
2011-01-06T09:12:48.257
2019-11-13T10:42:47.700
2011-09-07T08:51:16.793
null
2652
[ "regression", "survival", "hazard" ]
6027
2
null
6022
2
null
About the only things you can infer from the data is that the distribution is nonsymmetric. You can't even tell whether those quantiles came from a fitted distribution or just the ecdf. If they came from a fitted distribution, you could try all the distributions you can think of and see if any match. If not, there's not nearly enough information. You could interpolate a 2nd degree polynomial or a 3rd degree spline for the quantile function and use that, or come up with a theory as to the distribution family and match quantiles, but any inferences you would make with these methods would be deeply suspect.
null
CC BY-SA 2.5
null
2011-01-06T10:12:31.150
2011-01-06T10:12:31.150
null
null
2456
null
6028
2
null
6026
24
null
Generally speaking, $\exp(\hat\beta_1)$ is the ratio of the hazards between two individuals whose values of $x_1$ differ by one unit when all other covariates are held constant. The parallel with other linear models is that in Cox regression the hazard function is modeled as $h(t)=h_0(t)\exp(\beta'x)$, where $h_0(t)$ is the baseline hazard. This is equivalent to saying that $\log(\text{group hazard}/\text{baseline hazard})=\log\big(h(t)/h_0(t)\big)=\sum_i\beta_ix_i$. Then, a unit increase in $x_i$ is associated with a $\beta_i$ increase in the log hazard rate. The regression coefficients thus allow us to quantify the log of the hazard in the treatment group (compared to the control or placebo group), accounting for the covariates included in the model; it is interpreted as a relative risk (assuming no time-varying coefficients). In the case of logistic regression, the regression coefficient reflects the log of the [odds-ratio](http://en.wikipedia.org/wiki/Odds_ratio#Role_in_logistic_regression), hence the interpretation as a k-fold increase in risk. So yes, the interpretation of hazard ratios shares some resemblance with the interpretation of odds ratios. Be sure to check Dave Garson's website where there is some good material on [Cox Regression](https://web.archive.org/web/20100612042636/http://faculty.chass.ncsu.edu/garson/PA765/cox.htm) with SPSS.
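For readers working in R rather than SPSS, a minimal sketch with the `survival` package and its built-in `lung` data (an illustration added here, not part of the original answer) shows where the hazard ratios and their confidence intervals come from:

```
library(survival)

# Cox model on the lung cancer data shipped with the package
fit <- coxph(Surv(time, status) ~ age + sex + ph.ecog, data = lung)

summary(fit)$coefficients   # beta, exp(beta) = hazard ratio, SE, z, p
summary(fit)$conf.int       # hazard ratios with 95% confidence intervals
exp(coef(fit))              # same hazard ratios, extracted directly
```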
null
CC BY-SA 4.0
null
2011-01-06T10:49:53.407
2019-11-13T10:42:47.700
2019-11-13T10:42:47.700
230
930
null
6029
2
null
6020
2
null
Correlation is a rather vague word meaning the fact that one variable is dependent on the other; in many cases it is just a synonym for the [Pearson correlation coefficient](http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient), which assumes linear dependence (i.e. $y=A \cdot x+B$), so things like "when x increases, y increases" (cor-> +1) or "when x increases, y decreases" (cor-> -1). Regression on the other hand is the problem of finding a function that describes the dependence between variables; in a linear problem it would be finding the values of A and B, but in general it can be anything -- finding the period of some periodic phenomenon by fitting it with $y=\sin(\omega t+\phi)$ is also a regression. Yet regression can be used to assess correlation -- if some regression model fits the data well and in a significant way, it means there is correlation; this can be quantified with explained variance, for instance. (In general it is a long and complex story.) That's all for the oversized intro; going back to your question, this Facebook page shows Pearson correlations, which can be calculated without any regression with the `cor` R function (the formula is also quite simple and available on the Wiki page). And as David wrote, computing the correlation between $N$ variables boils down to computing correlations of all possible pairs.
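A tiny R sketch of the "one variable against many" case described in the question (added for illustration; the column names and simulated data are made up):

```
set.seed(1)
n <- 1000
df <- data.frame(age  = sample(18:65, n, replace = TRUE),
                 cat1 = runif(n), cat2 = runif(n), cat3 = runif(n))
df$cat1 <- df$cat1 + 0.01 * df$age   # build in one real association

# Correlation of age with every word-category column at once
cor(df$age, df[, c("cat1", "cat2", "cat3")])
```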
null
CC BY-SA 2.5
null
2011-01-06T12:00:24.480
2011-01-06T12:00:24.480
null
null
null
null
6030
2
null
6021
7
null
Assuming you want to update an R `windows()` or `x11()` graph, you can use functions like `points()` and `lines()` to add new points or extend lines on a graph without a full redraw; yet note that this won't change the axes range to accommodate points that may go out of view. In general it is usually a good idea to make the plotting itself instantaneous -- for instance by moving the computation effort into making some reduced intermediate representation which can be plotted rapidly, like a density map instead of a huge number of points, or reducing the resolution of line plots (this may be complex though). To hold the R session for a certain time without a busy wait, use `Sys.sleep()`.
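A minimal sketch of such an update loop (added as an illustration; the polling function and the 5-second interval are placeholders for however the database is actually queried):

```
get_new_value <- function() rnorm(1)   # placeholder for a real database query

plot(NULL, xlim = c(0, 100), ylim = c(-4, 4),
     xlab = "update #", ylab = "value")
prev_x <- 0
prev_y <- get_new_value()

for (i in 1:100) {
  y <- get_new_value()
  lines(c(prev_x, i), c(prev_y, y))   # extend the existing line, no full redraw
  points(i, y, pch = 16, cex = 0.5)
  prev_x <- i; prev_y <- y
  Sys.sleep(5)                        # wait before polling again
}
```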
null
CC BY-SA 2.5
null
2011-01-06T12:10:37.987
2011-01-06T12:10:37.987
null
null
null
null
6031
2
null
6020
6
null
To help you get started with the visualization, here is a snippet of R code with simulated data (a matrix with age and counts for 20 words, arranged in columns, for 100 subjects). The computations are done as proposed by @mbq (correlation).

```
n <- 100  # No. subjects
k <- 20   # No. words
words <- paste("word", 1:k, sep="")
df <- data.frame(age=rnorm(n, mean=25, sd=5),
                 replicate(k, sample(1:10, n, rep=T)))
colnames(df)[2:(k+1)] <- words
robs <- sort(cor(as.matrix(df))[-1,1])
library(lattice)
my.cols <- colorRampPalette(c("red","blue"))
res <- data.frame(robs=robs, x=seq(1,20), y=rep(1,20))
trellis.par.set(clip=list(panel="off"),
                axis.line=list(col="transparent"))
levelplot(robs~y*x, data=res, col.regions=my.cols, colorkey=F,
          xlab="", ylab="", scales=list(draw=F),
          panel=function(...) {
            panel.levelplot(...)
            panel.text(x=rep(1, k), y=seq(1, k), lab=rownames(res))
          })
```

![alt text](https://i.stack.imgur.com/n23ot.png)

The above picture was saved as PDF, setting the margins to 1, and cropped with `pdfcrop` from my TeXLive distribution.

```
pdf("1.pdf")
op <- par(mar=c(1,1,1,1))
(...)
par(op)
dev.off()
```

I guess it would not be too difficult to make a similar looking chart with `barchart()` from [lattice](http://cran.r-project.org/web/packages/lattice/index.html), or `ggfluctuation()` or any other `qplot()` from [ggplot2](http://cran.r-project.org/web/packages/ggplot2/index.html).
null
CC BY-SA 2.5
null
2011-01-06T12:21:07.763
2011-01-06T12:50:15.300
2011-01-06T12:50:15.300
930
930
null
6032
2
null
6022
2
null
The use of quantiles to estimate parameters of a priori distributions is discussed in the literature on human response time measurement as "quantile maximum probability estimation" (QMPE, though originally erroneously dubbed "quantile maximum likelihood estimation", QMLE), discussed at length by [Heathcote and colleagues](http://hdl.handle.net/1959.13/27899). You could fit a number of different a priori distributions (ex-Gaussian, shifted Lognormal, Wald, and Weibull) then compare the sum log likelihoods of the resulting best fits for each distribution to find the distribution flavor that seems to yield the best fit.
null
CC BY-SA 2.5
null
2011-01-06T13:10:16.490
2011-01-06T13:10:16.490
null
null
364
null
6033
1
null
null
13
10113
I came across a picture of an application prototype that finds significant changes ("trends" - not spikes/outliers) in traffic data: ![alt text](https://i.stack.imgur.com/sGYeF.png) I want to write a program (Java, optionally R) that is able to do the same - but because my statistic skills are a little rusty, I need to dig into this topic again. What approach/algorithm should I use/research therefore?
Detect changes in time series
CC BY-SA 2.5
null
2011-01-06T13:53:16.580
2022-10-17T21:00:56.683
2011-02-28T12:37:08.800
2116
2667
[ "time-series", "change-point" ]
6036
2
null
726
15
null
> Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin. -- Von Neumann
null
CC BY-SA 2.5
null
2011-01-06T14:22:03.547
2011-01-06T14:22:03.547
null
null
930
null
6037
2
null
4551
17
null
While I can relate to much of what Michael Lew says, abandoning p-values in favor of likelihood ratios still misses a more general problem--that of overemphasizing probabilistic results over effect sizes, which are required to give a result substantive meaning. This type of error comes in all shapes and sizes and I find it to be the most insidious statistical mistake. Drawing on J. Cohen and M. Oakes and others, I've written a piece on this at [here](https://web.archive.org/web/20130824042540/http://integrativestatistics.com/insidious.htm).
null
CC BY-SA 4.0
null
2011-01-06T15:28:05.167
2022-12-07T12:54:23.950
2022-12-07T12:54:23.950
362671
2669
null
6038
2
null
6022
19
null
Using a purely statistical method to do this work will provide absolutely no additional information about the distribution of school spending: the result will merely reflect an arbitrary choice of algorithm. You need more data. This is easy to come by: use data from previous years, from comparable districts, whatever. For example, federal spending on 14866 school districts in 2008 is available from the [Census site](http://www.census.gov/govs/school/). It shows that across the country, total per-capita (enrolled) federal revenues were approximately lognormally distributed, but breaking it down by state shows substantial variation (e.g., log spending in Alaska has negative skew while log spending in Colorado has strong positive skew). Use those data to characterize the likely form of distribution and then fit your quantiles to that form. If you're even close to the right distributional form, then you should be able to reproduce the quantiles accurately by fitting one or at most two parameters. The best technique for finding the fit will depend on what distributional form you use, but--far more importantly--it will depend on what you intend to use the results for. Do you need to estimate an average spending amount? Upper and lower limits on spending? Whatever it is, you want to adopt some measure of goodness of fit that will give you the best chance of making good decisions with your results. For example, if your interest is focused in the upper 10% of all spending, you will want to fit the 95th percentile accurately and you might care little about fitting the 5th percentile. No sophisticated fitting technique will make these considerations for you. Of course no one can legitimately guarantee that this data-informed, decision-oriented method will perform any better (or any worse) than some statistical recipe, but--unlike a purely statistical approach--this method has a basis grounded in reality, with a focus on your needs, giving it some credibility and defense against criticism.
null
CC BY-SA 2.5
null
2011-01-06T16:29:31.403
2011-01-06T19:32:30.900
2011-01-06T19:32:30.900
919
919
null
6039
1
null
null
1
1939
I am kinda new to stats and understand random sampling; however, I am just learning PCA and wondering if it is just a more sophisticated form of sampling? In other words, if I have a large data set and take a random sample, would I then apply PCA to it? Or just apply PCA to the whole data set? Thanks for any help. Mike
Random Sampling and PCA
CC BY-SA 2.5
null
2011-01-06T17:11:11.887
2011-01-07T01:34:31.820
null
null
null
[ "pca" ]
6040
2
null
6024
4
null
You cannot use the 36-coefficient model, and not because it's going to be slow. Speed is the least of your worries here. The real trouble is that you've taken an already under-determined problem (because of the correlations), and converted it into a problem which is severely under-determined for any data, because of linear dependencies. Simply put, $x_1-x_2=(x_1-x_3)-(x_2-x_3)$, so you can only determine 2 out of the 3 coefficients for these terms in the best case. The only way to fix this will be to prescribe some artificial regularization condition, like having minimal $\sum c_i^2$, or whatever may be right in your case. It seems to me that you might be better off if you start by analyzing the correlation matrix and first figuring out which terms of the form $x_i-x_j$ should really appear in your problem.
null
CC BY-SA 2.5
null
2011-01-06T17:24:19.430
2011-01-06T17:24:19.430
null
null
2658
null
6041
2
null
6039
2
null
Well, if you were interested in cross validation (i.e. how well your model will predict on a different sample), then you could use PCA on a random subset and then fit that model to the rest of your data. That being said, PCA is a tool for summarising a covariance matrix in a smaller matrix, so it may not be the best thing to use. Factor analysis is a better approach if you want to figure out what is happening in your data.
null
CC BY-SA 2.5
null
2011-01-06T17:38:35.500
2011-01-06T17:38:35.500
null
null
656
null
6042
1
6043
null
3
766
(I'm asking this question for a friend, honest...)

> Is there an easy way to convert from an SPSS file to a SAS file, which preserves the formats AND labels?

Saving as a POR file gets me the labels (I think) but not the formats. I tried to save to a SAS7dat file but it didn't work. Thanks,
Converting an SPSS file to a SAS file?
CC BY-SA 2.5
null
2011-01-06T17:52:31.127
2011-01-07T11:01:23.480
2011-01-07T11:01:23.480
null
253
[ "spss", "sas" ]
6043
2
null
6042
2
null
I would just suggest they make the syntax to relabel and reformat the variables. You can use the command, `display dictionary.` in PASW (aka SPSS) to output the dictionary in a table that you can copy and paste the variable names and labels. Looking at this [example](http://www.ats.ucla.edu/stat/sas/modules/labels.htm) of making SAS labels it should be as simple as pasting the text in the appropriate place. Formats may be slightly harder, but I could likely give a suggestion if pointed to a code sample of formats in SAS (if copy and paste from the display dictionary command won't suffice for value labels or data formats).
null
CC BY-SA 2.5
null
2011-01-06T18:47:42.000
2011-01-06T18:59:07.933
2011-01-06T18:59:07.933
1036
1036
null
6044
1
6062
null
1
4109
As the title says, I'd like to calculate the percentage difference for two sets of points. For example, suppose I have $S_{1}=\{(1,x_{1}),(2,x_{2}),(3,x_{3})\}$ and $S_{2}=\{(1,y_{1}),(2,y_{2}),(3,y_{3})\}$. How can I know the difference in percentage between both sets of data? What is the correct way to do that? Is that kind of assessment meaningful for establishing to which degree of precision one set of data is preferred over the other? In my particular case, $S_{1}$ is simply a set of numerical results obtained by [DSMC](http://en.wikipedia.org/wiki/Direct_simulation_Monte_Carlo) and $S_{2}$ was obtained from a theoretical result. I'd like to quantify how much difference exists between them in order to establish when it is convenient to use one or the other. By "difference in percentage" I mean [percent difference](http://en.wikipedia.org/wiki/Percent_difference). Hopefully that clarifies the question a bit. UPDATE: Another way to formulate my question would be: how can I arrive at conclusions such as "The results from experiment A are inaccurate by 10% with respect to experiment B", when experiments A and B are each a set of values?
Calculate percentage difference for two sets of points
CC BY-SA 2.5
null
2011-01-06T19:35:49.930
2011-01-07T13:45:52.677
2011-01-07T01:03:12.160
2676
2676
[ "quantiles" ]
6045
2
null
5922
7
null
See Levina, E. and Bickel, P. (2004) “Maximum Likelihood Estimation of Intrinsic Dimension.” Advances in Neural Information Processing Systems 17 [http://books.nips.cc/papers/files/nips17/NIPS2004_0094.pdf](http://books.nips.cc/papers/files/nips17/NIPS2004_0094.pdf) Their idea is that if the data are sampled from a smooth density in $R^m$ embedded in $R^p$ with $m < p$, then locally the number of data points in a small ball of radius $t$ behaves roughly like a poisson process. The rate of the process is related to the volume of the ball which in turn is related to the intrinsic dimension.
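A rough R sketch of the estimator described in that paper (added for illustration, not from the original answer; this is one reading of the maximum-likelihood formula, using k nearest neighbours and the simple k-1 normalization, so treat it as a starting point rather than a reference implementation):

```
# Levina-Bickel style MLE of intrinsic dimension (one common form)
intrinsic_dim <- function(X, k = 10) {
  D <- as.matrix(dist(X))               # pairwise Euclidean distances
  diag(D) <- Inf
  m_hat <- apply(D, 1, function(d) {
    Tk <- sort(d)[1:k]                  # distances to the k nearest neighbours
    1 / mean(log(Tk[k] / Tk[1:(k - 1)]))
  })
  mean(m_hat)                           # average the local estimates
}

# Sanity check: a 2-D plane embedded in 5 dimensions
set.seed(1)
Z <- matrix(rnorm(500 * 2), ncol = 2)
X <- Z %*% matrix(rnorm(2 * 5), 2, 5)
intrinsic_dim(X, k = 10)                # should come out near 2
```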
null
CC BY-SA 2.5
null
2011-01-06T19:40:32.547
2011-01-08T20:17:36.033
2011-01-08T20:17:36.033
1670
1670
null
6046
1
null
null
3
3519
I have animals that could be virgin or mated (reproductive state is the fixed factor), which I've stimulated sequentially with 4 different doses of an odour (doses are the repeated measures: the same animal was blown with 4 increasing doses of the same odorant). Then, I measure the neuronal response (variable: number of spikes) of each animal to each dose of the odorant. This might be a typical case of repeated measures; however, I have some missing values for the doses, so not all doses are complete for some animals. For example, for animal 1, I missed recording 1 out of the 4 doses. What can I do? I have two statistical packages: SPSS 16 or Statistica. Thanks for your help!
How to solve a case of unbalanced repeated measures?
CC BY-SA 2.5
null
2011-01-06T19:49:26.740
2011-01-10T14:53:32.060
2011-01-07T09:36:30.217
159
null
[ "repeated-measures" ]
6047
1
40978
null
18
1149
Obviously events A and B are independent iff Pr$(A\cap B)$ = Pr$(A)$Pr$(B)$. Let's define a related quantity Q: $Q\equiv\frac{\mathrm{Pr}(A\cap B)}{\mathrm{Pr}(A)\mathrm{Pr}(B)}$ So A and B are independent iff Q = 1 (assuming the denominator is nonzero). Does Q actually have a name though? I feel like it refers to some elementary concept that is escaping me right now and that I will feel quite silly for even asking this.
Does this quantity related to independence have a name?
CC BY-SA 2.5
null
2011-01-06T19:50:41.317
2018-03-21T18:12:03.933
null
null
2485
[ "probability", "terminology", "independence" ]
6048
2
null
6046
2
null
Mixed effects analysis (available in [R](http://www.r-project.org) via the [lme4](http://cran.r-project.org/web/packages/lme4/index.html) package, free as always) can handle missing data like this. My understanding is (possibly erroneous? Mixed effects modelling experts please feel free to provide correction) of how this is achieved is as follows: Given a set of predictor variables that are combined in some way to form a matrix that I usually call the "predictor design" (for example, for a completely crossed design of 3 variables with 2 levels each, the predictor design is a 2x2x2 matrix in which you expect to have each cell filled with an observed response), and given a set of "units of observation" (ex. individual human participants in an experiment), missing data can occur if the experiment fails to obtain an observation for a given unit in a given cell of the predictor design. If the missing data occurs in a cell of the design that involves a continuous predictor variable over which the model would typically attempt to fit a slope, the model simply goes ahead and fits the slope across those cells without missing data. I think that the model also takes into account the fact that there is missing data in determining the degree of influence that given unit's data has on inference relating to that slope; that is, slopes from units with lots of missing data will by definition be based on fewer observations and thereby are expected to be less reliable estimates, a circumstance that the model takes into account when combining that estimate with those from other units with possibly more reliable estimates. If the slope cannot be computed (ex. one or fewer cells have data), or if the missing data occurs in a cell of the design that involves a categorical predictor variable, then that given unit will not influence inference of that particular slope/effect, but remains in the model for influence of slopes/effects for which the unit does have sufficient data. (Hm, now that I think of it, I'm not sure how this works in the context of missing data for cells that constitute the intercept level of contrasts when treatment contrasts are used...) Note, however, that if you are missing information about the predictor variables (eg, you have a response, but you're unsure where in the predictor design it should fall), the model will not attempt to impute this information and will instead simply ignore that value.
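A hedged sketch of what such a model might look like for the odour-dose experiment in the question (added for illustration; the variable names `spikes`, `dose`, `state` and `animal` are invented, the data are simulated, and a Poisson family is just one defensible choice for spike counts):

```
library(lme4)

# Simulated stand-in for the experiment: 10 animals x 4 doses, with a few
# responses dropped to mimic the missing recordings
set.seed(3)
d <- expand.grid(animal = factor(1:10), dose = 1:4)
d$state  <- ifelse(as.integer(d$animal) <= 5, "virgin", "mated")
d$spikes <- rpois(nrow(d), lambda = exp(1 + 0.3 * d$dose))
d <- d[-sample(nrow(d), 4), ]   # drop four animal-dose combinations

# Mixed model: fixed effects for state, dose and their interaction,
# random intercept per animal; animals with incomplete data are still used
fit <- glmer(spikes ~ state * dose + (1 | animal), family = poisson, data = d)
summary(fit)
```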
null
CC BY-SA 2.5
null
2011-01-06T21:06:18.200
2011-01-10T14:53:32.060
2011-01-10T14:53:32.060
364
364
null
6049
2
null
6044
1
null
It seems to me like you need to formulate a question you want your data to answer. Let me suggest a few (perhaps you can edit your post to reflect what questions make sense for your data): - As the DSMC value increases, does the theoretical result also increase? - If I know the value of the theoretical result, how accurately can I estimate the value of DSMC? If the points (1,x1) and (1,y1) refer to the same measurement or the same run of the experiment, or one is an estimate of the other. One natural way to see how related they are related is to plot {(x1,y1),(x2,y2) ...}. You can read about Pearson correlation and Kendall tau and Spearman rho here: [http://en.wikipedia.org/wiki/Correlation_and_dependence](http://en.wikipedia.org/wiki/Correlation_and_dependence)
null
CC BY-SA 2.5
null
2011-01-06T22:25:22.827
2011-01-06T22:25:22.827
null
null
1540
null
6050
1
6052
null
13
15896
I'm doing a simple AIC-based backward elimination model where some variables are categorical variables with multiple levels. These variables are modeled as a set of dummy variables. When doing backward elimination, should I be removing all the levels of a variable together? Or should I treat each dummy variable separately? And why? As a related question, step in R handles each dummy variable separately when doing backward elimination. If I wanted to remove an entire categorical variable at once, can I do that using step? Or are there alternatives to step which can handle this?
How should I handle categorical variables with multiple levels when doing backward elimination?
CC BY-SA 4.0
null
2011-01-07T00:15:28.330
2020-01-24T00:55:08.693
2020-01-24T00:55:08.693
11887
2308
[ "model-selection", "stepwise-regression" ]
6051
2
null
4600
1
null
In an anova context, the partial eta squared will tell what % of the Y variance is explained by a given X when controlling for all other X's. In a regression context, you could refer to the squared partial correlation of the X of interest.
null
CC BY-SA 2.5
null
2011-01-07T00:54:44.693
2011-01-07T00:54:44.693
null
null
2669
null
6052
2
null
6050
8
null
I think you'd have to remove the entire categorical variable. Imagine a logistic regression in which you're trying to predict if a person has a disease or not. Country of birth might have a major impact on that, so you include it in your model. If the specific USAmerican origin didn't have any impact on AIC and you dropped it, how would you calculate $\hat{y}$ for an American? R uses reference contrasts for factors by default, so I think they'd just be calculated at the reference level (say, Botswana), if at all. That's probably not going to end well... A better option would be to sort out sensible encodings of country of birth beforehand - collapsing into region, continent, etc. and finding which of those is most suitable for your model. Of course, there are many ways to misuse stepwise variable selection, so make sure that you're doing it properly. There's plenty about that on this site, though; searching for "stepwise" brings up some good results. [This is particularly pertinent](https://stats.stackexchange.com/questions/5360/stepwise-logistic-regression-and-sampling), with lots of good advice in the answers.
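In R specifically, a minimal sketch of term-wise (rather than dummy-wise) elimination (added for illustration; `drop1()` and `step()` work on whole model terms when the variable is kept as a factor in the formula, so check their output on your own fit):

```
set.seed(9)
n <- 200
d <- data.frame(region = factor(sample(c("A", "B", "C", "D"), n, replace = TRUE)),
                x1 = rnorm(n), x2 = rnorm(n))
d$y <- rbinom(n, 1, plogis(0.5 * d$x1 + ifelse(d$region == "D", 1, 0)))

full <- glm(y ~ region + x1 + x2, family = binomial, data = d)

drop1(full, test = "LRT")                    # 'region' appears as a single 3-df term
step(full, direction = "backward", k = 2)    # AIC-based; also drops whole terms
```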
null
CC BY-SA 2.5
null
2011-01-07T01:23:04.920
2011-01-07T01:23:04.920
2017-04-13T12:44:29.923
-1
71
null
6053
2
null
6039
1
null
Principal Components Analysis is a way of distilling a large set of variables into a few topics or themes or fundamentals. It's dimension reduction. The only resemblance I see to sampling is that sampling also involves a kind of reduction.
null
CC BY-SA 2.5
null
2011-01-07T01:34:31.820
2011-01-07T01:34:31.820
null
null
2669
null
6054
2
null
868
1
null
One picayune thing that could matter down the road is, in your equation P = 1 / [1 + e^(-35 + (4*4 + 4*4 + 3*0 + 1*1)] you've misplaced the "-": it needs to go outside the parenthesis. So it'd be P = 1 / [1 + e^-(a + B1*X1 + B2*X2 + B3*X3...+ Bn*Xn)].
null
CC BY-SA 2.5
null
2011-01-07T01:55:24.827
2011-01-07T01:55:24.827
null
null
2669
null
6055
2
null
6042
0
null
SPSS normally writes SAS7dat format files with no problems. When you say it did not work, what actually happened?
null
CC BY-SA 2.5
null
2011-01-07T02:17:31.810
2011-01-07T02:17:31.810
null
null
null
null
6056
1
null
null
8
5008
I am trying to install rpy2 on my system (I compiled R with the --enable-R-shlib and --enable-BLAS-shlib flags), but when I try the following in the Python console

```
import rpy2
import rpy2.robjects
```

I get:

```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.6/dist-packages/rpy2/robjects/__init__.py", line 14, in <module>
    import rpy2.rinterface as rinterface
  File "/usr/lib/python2.6/dist-packages/rpy2/rinterface/__init__.py", line 75, in <module>
    from rpy2.rinterface.rinterface import *
ImportError: libRblas.so: cannot open shared object file: No such file or directory
```

The rpy2 directory is:

```
rpy2.__path__
['/usr/lib/python2.6/dist-packages/rpy2']
```

My R version is:

```
R version 2.12.1 Patched (2011-01-04 r53913)
```

My R home is:

```
/usr/bin/R
```

My Ubuntu version is:

```
Linux kenneth-desktop 2.6.32-27-generic #49-Ubuntu SMP Thu Dec 2 00:51:09 UTC 2010 x86_64 GNU/Linux
```

My Python version is:

```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:57:41)
[GCC 4.4.3] on linux2
```

When I install rpy2 from source (sudo python setup.py build install) I get:

```
running build
running build_py
running build_ext
Configuration for R as a library:
  include_dirs: ('/usr/lib64/R/include',)
  libraries: ('R', 'Rblas', 'Rlapack')
  library_dirs: ('/usr/lib64/R/lib',)
  extra_link_args: ()
  # OSX-specific (included in extra_link_args)
  framework_dirs: ()
  frameworks: ()
running install
running install_lib
running install_data
running install_egg_info
Removing /usr/local/lib/python2.6/dist-packages/rpy2-2.1.9.egg-info
Writing /usr/local/lib/python2.6/dist-packages/rpy2-2.1.9.egg-info
```

What am I doing wrong? Thank you for your help.
Problems with libRblas.so on ubuntu with rpy2
CC BY-SA 3.0
null
2011-01-07T03:23:36.777
2017-01-12T19:25:09.840
2016-06-29T17:17:36.103
119149
2680
[ "r", "python" ]
6057
2
null
6047
0
null
Maybe you are asking how this quantity is related to the Odds Ratio, as a quantity for measuring independence. I think you are searching for "Relation to statistical independence". See [http://en.wikipedia.org/wiki/Odds_ratio](http://en.wikipedia.org/wiki/Odds_ratio)
null
CC BY-SA 2.5
null
2011-01-07T03:33:37.813
2011-01-07T09:38:47.577
2011-01-07T09:38:47.577
159
2680
null
6058
2
null
6056
5
null
It looks like you tried to do things locally but didn't quite get there. I happen to maintain the Debian packages of R (which get rebuilt for Ubuntu and are accessible at [CRAN](http://cran.r-project.org/bin/linux/ubuntu/)). These builds use an external BLAS. rpy2 then builds just fine as well. I would recommend that you read the [README](http://cran.r-project.org/bin/linux/ubuntu/), try to install `r-base-core` and `r-base-dev` from the repositories and then try to install `rpy2` from source. Or live with the slightly older `rpy2` package in Ubuntu.
null
CC BY-SA 2.5
null
2011-01-07T03:52:44.340
2011-01-07T03:52:44.340
null
null
334
null
6059
2
null
6024
5
null
(This response picks up where @AVB, who has provided useful comments, left off by suggesting we need to figure out which differences $X_i - X_j$ ought to be included among the independent variables.) The big question here is what is an effective method to identify the model. Later we can worry about faster methods. (But regression is so fast that you could process dozens of variables for millions of records in a matter of seconds.) To make sure I'm not going astray, and to illustrate the procedure, I simulated a dataset like yours, only a little simpler. It consists of 60 independent draws from a common multivariate normal distribution with five unit-variance variables $Z_1, Z_2, Z_3, Z_4,$ and $Y$. The first two variables are independent of the second two and have correlation coefficient 0.9. The second two variables have correlation coefficient -0.9. The correlations between $Z_i$ and $Y$ are 0.5, 0.5, 0.5, and -0.5. Then--this changes nothing essential but makes the data a little more interesting--I rescaled the variables, thus: $X_1 = Z_1, X_2 = 2 Z_2, X_3 = 3 Z_3, X_4 = 4 Z_4$. Let's begin by establishing that this simulation emulates the stated problem. Here is a scatterplot matrix. ![alt text](https://i.stack.imgur.com/Y3AqA.png) The full regression of $Y$ against the $X_i$ is highly significant ($F(4, 55) = 15.28,\ p < 0.0001$) but all four t-values equal 1.24 ($p = 0.222$), which is not significant at all. The estimated coefficients are 0.26, 0.13, 0.088, and -0.066 (rounded to two sig figs). Here is my proposal: systematically combine variables in pairs (six pairs in this case, 36 pairs for nine variables), one pair at a time. Regress a pair along with all remaining variables, seeking highly significant results for the pairs. What is a "pair"? It is the linear combination suggested by the estimated coefficients. In this case, they are $$\eqalign{ X_{12} =& X_1 / 0.26 &+ X_2 / 0.13 \cr X_{13} =& X_1 / 0.26 &+ X_3 / 0.088 \cr X_{14} =& X_1 / 0.26 &- X_4 / 0.066 \cr X_{23} =& X_2 / 0.13 &+ X_3 / 0.088 \cr X_{24} =& X_2 / 0.13 &- X_4 / 0.066 \cr X_{34} =& X_3 / 0.088 &- X_4 / 0.066 \text{.} }$$ In general, with $\hat{\beta}_i$ representing the estimated coefficient of $X_i$ in this full regression, the pairs are defined by $$X_{ij} = X_i / \hat{\beta}_i + X_j / \hat{\beta}_j\text{.}$$ This is so systematic that it's straightforward to script. The "identification regressions" are the model $$Y \sim X_{12} + X_3 + X_4$$ along with the five additional permutations thereof, one for each pair. You are looking for results where $X_{ij}$ becomes significant: ignore the significance of the remaining $X_k$. To see what's going on, I will list the results of all six identification regressions for the simulation. As a shorthand, I list the variables followed by a vector of their t-values only: $$\eqalign{ X_{12}, X_3, X_4:&\ (5.50, 1.24, -1.24) \cr X_{13}, X_2, X_4:&\ (1.36, 4.94, -1.13) \cr X_{14}, X_2, X_3:&\ (1.31, 5.16, 1.17) \cr X_{23}, X_1, X_4:&\ (1.64, 3.10, -1.09) \cr X_{24}, X_1, X_3:&\ (1.50, 4.15, 1.07) \cr X_{34}, X_1, X_2:&\ (5.56, 1.25, 1.25) } $$ As you can see from the first component of each vector (the t-value for the pair), precisely two disjoint pairs exhibit significant t-statistics: $X_{12}$, with $t = 5.50\ (p \lt 0.001)$, and $X_{34}$, with $t = 5.56\ (p \lt 0.001)$. The model thus identified is $$Y \sim X_{12} + X_{34}\text{.}$$ (In general, we would also include--provisionally--any remaining $X_i$ not participating in any of the pairs. There aren't any in this case.) 
The regression results are $$\eqalign{ \hat{\beta_{12}} &= 0.027\ (t = 5.54,\ p \lt 0.001) \cr \hat{\beta_{34}} &= 0.0055\ (t = 5.58,\ p \lt 0.001), \cr F(2, 57) &= 30.92\ (p \lt 0.0001). }$$ Translating back to the original $X_i$, the model is $$\eqalign{ Y &= 0.027(X_1 / 0.26 + X_2 / 0.13) + 0.0055(X_3 / 0.088 - X_4 / 0.066) \cr &= 0.103 X_1 + 0.206 X_2 + 0.0629 X_3 - 0.0839 X_4 \cr &= 0.103 (Z_1 + Z_2) + 0.021 (Z_3 - Z_4) \text{.} }$$ (The last line shows how this all relates to form of the original question.) That's exactly the form used in the simulation: $Z_1$ and $Z_2$ enter with the same coefficient and $Z_3$ and $Z_4$ enter with opposite coefficients. This method got the right answer. I want to share a cool observation in this regard. First, here's the scatterplot matrix for the model. ![alt text](https://i.stack.imgur.com/kR5Dd.png) Notice how $X_{12}$ and $X_{34}$ look uncorrelated. Furthermore, $Y$ is only weakly correlated with these variables. Doesn't look like much of a relationship, does it? Now consider an alternative set of pairs, $X_{13}$ and $X_{24}$. The regression of $Y$ on these is still highly significant ($F(2, 57) = 16.61\ (p \lt 0.0001).$ Moreover, the coefficient of $X_{24}$ is significant ($t = 2.39,\ p = 0.020$) even though that of $X_{13}$ is not ($t = 0.24,\ p = 0.812$). But look at the scatterplot matrix! ![alt text](https://i.stack.imgur.com/11E4s.png) Clearly $X_{13}$ and $X_{24}$ are strongly correlated. But, even though this is the wrong model, $Y$ is also visibly correlated with these two variables, much more so than in the preceding scatterplot matrix! The lesson here is that mere bivariate plots can be deceiving in a multiple regression setting: to analyze the relationship between any candidate independent variable (such as $X_{12}$) and the independent variable ($Y$), we must make sure to "factor out" all other independent variables. (This is done by regressing $Y$ on all other independent variables and, separately, regressing $X_{12}$ on all the others. Then one looks at a scatterplot of the residuals of the first regression against the residuals of the second regression. It's a theorem that the slope in this bivariate regression equals the coefficient of $X_{12}$ in the full multivariate regression of $Y$ against all the variables.) This insight shows why we might want to systematically perform the "identification regressions" I have proposed, rather than using graphical methods or attempting to combine many of the pairs in one model. Each identification regression assesses the strength of the contribution of a proposed linear combination of variables (a "pair") in the context of all the remaining independent variables. Note that although correlated variables were involved, correlation is not an essential feature of the problem or of the solution. Even where you don't expect the original variables $X_i$ to be strongly correlated, you could expect a model to have (unknown) linear constraints among the variables. That is the important issue to cope with. The presence of correlation only means that it can be problematic to identify such pairs solely by inspecting the original regression results. Following the procedure I have proposed does not guarantee you will find a unique solution. It's conceivable, for instance, that you will find so many highly significant pairs that they are linearly dependent, forcing you to select among them by some other criterion. 
Nevertheless, the results you get ought to limit the sets of pairs you need to examine; they can be obtained with a straightforward procedure without intervention; and--if this simulation is any guide--they have a good chance of producing effective results.
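A sketch of how one might script the "identification regressions" described above (added for illustration; the simulated data follow the answer's description only loosely, and the code is the basic idea rather than a polished procedure):

```
set.seed(17)
library(MASS)
n <- 60
Sigma <- diag(4)
Sigma[1, 2] <- Sigma[2, 1] <- 0.9
Sigma[3, 4] <- Sigma[4, 3] <- -0.9
Z <- mvrnorm(n, mu = rep(0, 4), Sigma = Sigma)
y <- Z[, 1] + Z[, 2] + 0.2 * (Z[, 3] - Z[, 4]) + rnorm(n)
X <- sweep(Z, 2, c(1, 2, 3, 4), `*`)          # rescale the variables as in the answer
colnames(X) <- paste0("X", 1:4)
dat <- data.frame(y, X)

# Full regression gives the beta-hats used to define the pairs
beta <- coef(lm(y ~ ., data = dat))[-1]

# One identification regression per pair (i, j): the pair plus all remaining variables
pairs <- combn(4, 2)
t_pair <- apply(pairs, 2, function(ij) {
  i <- ij[1]; j <- ij[2]
  pair_var <- X[, i] / beta[i] + X[, j] / beta[j]
  others   <- X[, -c(i, j), drop = FALSE]
  fit <- lm(y ~ pair_var + others)
  summary(fit)$coefficients["pair_var", "t value"]
})
data.frame(pair = apply(pairs, 2, paste, collapse = "-"), t = round(t_pair, 2))
```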
null
CC BY-SA 2.5
null
2011-01-07T05:07:13.817
2011-01-07T05:26:54.567
2011-01-07T05:26:54.567
919
919
null
6060
2
null
6047
11
null
I think that you are looking for `Lift` (or improvement). Lift is the ratio of the probability that A and B occur together to the product of the two individual probabilities for A and B. It is used to interpret the importance of a rule in [association rule mining](http://en.wikipedia.org/wiki/Association_rule_learning#Useful_Concepts). Lift is a way to measure how much better a model is than a benchmark; it is defined as the confidence divided by the benchmark, where any value greater than one suggests that there is some usefulness to the rule. See [this page](http://maya.cs.depaul.edu/~classes/ect584/WEKA/associate.html) for another example.
null
CC BY-SA 2.5
null
2011-01-07T08:54:25.257
2011-01-07T08:54:25.257
null
null
339
null
6062
2
null
6044
1
null
First, let's compare two lists of numbers — are they from the same distribution ? For example, how close are the lists of 20 numbers, "|" marks, ``` ||||||.||.||...||.....||.|................|..................................... |||.|...|..|...|.......||...|...|...|.....|..|.................|.....|.......... ``` ? To see, visualize, how such lists differ (whether real, simulated or theoretical), make a [QQ plot](http://en.wikipedia.org/wiki/QQ_plot): sort X, sort Y, plot the pairs (Xj, Yj), see how close that curve is to the line X = Y. Also, [search QQ plot](https://stats.stackexchange.com/search?q=qq+plot) here. A [K-S test](http://en.wikipedia.org/wiki/K-S_test) gives a number from 0, X and Y identical, to 1, way off; you could flip this around to 100 % down to 0 %. However a QQ plot shows more directly where X and Y differ.
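In R, both of these checks are one-liners (a small illustration added here; the two samples are simulated):

```
set.seed(2)
x <- rexp(200, rate = 1)      # e.g. simulation output
y <- rexp(200, rate = 1.3)    # e.g. reference / theoretical sample

qqplot(x, y); abline(0, 1)    # points near the line X = Y mean similar distributions
ks.test(x, y)                 # D statistic near 0 = similar, near 1 = very different
```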
null
CC BY-SA 2.5
null
2011-01-07T11:26:21.673
2011-01-07T13:45:52.677
2017-04-13T12:44:35.347
-1
557
null
6063
1
null
null
4
6888
I have been running 3-level multilevel models with [HLM](http://www.ssicentral.com/hlm/), and my main interest is in some cross-level interaction effects that I am finding. My concern is that the effect sizes of these interactions appear to be small – I am wondering whether they are really meaningful. I am turning to you to ask whether anyone could advise me on how to evaluate the size of this kind of effect. Are there any benchmarks for interaction effects in regression analyses? Would these be appropriate for multilevel models, too? I believe that it is not uncommon that the explained variance does not increase much when one adds in interaction terms in multiple regression, and that this would be even more the case for multilevel models when the variance that one is trying to explain is the level 1 variance. Is that right?
Evaluating effect sizes of interactions in multiple regression
CC BY-SA 4.0
null
2011-01-07T11:46:10.653
2021-01-21T18:20:04.660
2021-01-21T18:20:04.660
11887
null
[ "regression", "interaction", "multilevel-analysis", "effect-size" ]
6065
2
null
6022
25
null
As @whuber pointed out, statistical methods do not exactly work here. You need to infer the distribution from other sources. When you know the distribution you have a non-linear equation solving exercise. Denote by $f$ the quantile function of your chosen probability distribution with parameter vector $\theta$. What you have is the following nonlinear system of equations: \begin{align*} q_{0.05}&=f(0.05,\theta) \\\\ q_{0.5}&=f(0.5,\theta) \\\\ q_{0.95}&=f(0.95,\theta)\\\\ \end{align*} where $q$ are your quantiles. You need to solve this system to find $\theta$. Now for practically for any 3-parameter distribution you will find values of parameters satisfying this equation. For 2-parameter and 1-parameter distributions this system is overdetermined, so there are no exact solutions. In this case you can search for a set of parameters which minimizes the discrepancy: \begin{align*} (q_{0.05}-f(0.05,\theta))^2+ (q_{0.5}-f(0.5,\theta))^2 + (q_{0.95}-f(0.95,\theta))^2 \end{align*} Here I chose the quadratic function, but you can chose whatever you want. According to @whuber comments you can assign weights, so that more important quantiles can be fitted more accurately. For four and more parameters the system is underdetermined, so infinite number of solutions exists. Here is some sample R code illustrating this approach. For purposes of demonstration I generate the quantiles from Singh-Maddala distribution from [VGAM](http://cran.r-project.org/web/packages/VGAM/index.html) package. This distribution has 3 parameters and is used in income distribution modelling. ``` q <- qsinmad(c(0.05,0.5,0.95),2,1,4) plot(x<-seq(0,2,by=0.01), dsinmad(x, 2, 1, 4),type="l") points(p<-c(0.05, 0.5, 0.95), dsinmad(p, 2, 1, 4)) ``` ![alt text](https://i.stack.imgur.com/XHQjV.png) Now form the function which evaluates the non-linear system of equations: ``` fn <- function(x,q) q-qsinmad(c(0.05, 0.5, 0.95), x[1], x[2], x[3]) ``` Check whether true values satisfy the equation: ``` > fn(c(2,1,4),q) [1] 0 0 0 ``` For solving the non-linear equation system, I use the function `nleqslv` from package [nleqslv](http://cran.r-project.org/web/packages/nleqslv/index.html). ``` > sol <- nleqslv(c(2.4,1.5,4.3),fn,q=q) > sol$x [1] 2.000000 1.000000 4.000001 ``` As we see we get the exact solution. Now let us try to fit log-normal distribution to these quantiles. For this we will use the `optim` function. ``` > ofn <- function(x,q)sum(abs(q-qlnorm(c(0.05,0.5,0.95),x[1],x[2]))^2) > osol <- optim(c(1,1),ofn) > osol$par [1] -0.905049 0.586334 ``` Now plot the result ``` plot(x,dlnorm(x,osol$par[1],osol$par[2]),type="l",col=2) lines(x,dsinmad(x,2,1,4)) points(p,dsinmad(p,2,1,4)) ``` ![alt text](https://i.stack.imgur.com/VnbG5.png) From this we immediately see that the quadratic function is not so good. Hope this helps.
null
CC BY-SA 4.0
null
2011-01-07T13:49:08.650
2021-09-28T13:36:41.830
2021-09-28T13:36:41.830
46761
2116
null
6066
1
null
null
5
1037
I have carried out a stepwise logistic regression in JMP. Then (using the proper button in the program window), I chose to build a nominal logistic regression model using (only) the variables identified by the stepwise procedure. However, comparing the summary tables of the stepwise regression and the nominal one, I noticed that the regression coefficients are not the same, and the p-values are not the same either. There is even a variable which changes from a p-value of 0.02 to a p-value of 0.19 (much greater than 0.10, the threshold value I had chosen before the stepwise procedure for retaining variables in the model). How is this possible? I could use the values in the stepwise summary, but it does not contain any data allowing me to build confidence intervals. So, my secondary question is: how can I calculate the confidence intervals using only the data reported in the JMP stepwise regression summary? Edit: I realized just a minute ago that the differences refer to categorical variables which have yielded more than one significant comparison. For example, in the stepwise regression details I read that variable1 is included in the model three times (and passed three times to the nominal regression procedure): A-B versus C-D-E-F-G, C-D versus E-F-G, E-F versus G. However, variable1 is reported only once in the regression summary, which cites only the first comparison (A-B versus C-D-E-F-G). It remains a mystery to me why.
Discrepancy between stepwise and nominal logistic regression results in JMP
CC BY-SA 3.0
null
2011-01-07T14:49:08.103
2021-01-09T13:01:05.337
2012-09-30T21:44:14.593
686
1219
[ "logistic", "stepwise-regression", "jmp" ]
6067
1
6086
null
120
127752
Okay, so I think I have a decent enough sample, taking into account the 20:1 rule of thumb: a fairly large sample (N=374) for a total of 7 candidate predictor variables. My problem is the following: whatever set of predictor variables I use, the classifications never get better than a specificity of 100% and a sensitivity of 0%. However unsatisfactory, this could actually be the best possible result, given the set of candidate predictor variables (from which I can't deviate). But, I couldn't help but think I could do better, so I noticed that the categories of the dependent variable were quite unevenly balanced, almost 4:1. Could a more balanced subsample improve classifications?
Does an unbalanced sample matter when doing logistic regression?
CC BY-SA 2.5
null
2011-01-07T16:48:03.487
2022-07-21T17:24:22.547
2022-07-21T17:24:22.547
1352
2690
[ "regression", "logistic", "sample-size", "unbalanced-classes", "faq" ]
6070
2
null
6063
1
null
Pursuant to my discussion on the conceptual overlap between effect sizes and likelihood ratios [here](https://stats.stackexchange.com/questions/4551/what-are-common-statistical-sins/6037#6037), I wonder if the likelihood ratio for each effect against its respective null might serve as a useful metric to achieve the aims sought by those who conventionally employ effect size measures.
null
CC BY-SA 2.5
null
2011-01-07T17:27:20.800
2011-01-07T17:27:20.800
2017-04-13T12:44:29.013
-1
364
null
6071
1
11496
null
5
1796
I'm not sure if this is precisely a measurement error model or not. I'm working on a meta-analysis, and the model I'm starting with is fairly basic. \begin{aligned} X_i &= \mu_i + e_i \\ Y_i &= \beta \mu_i + g_i + \delta_i \end{aligned} The random components are $e_i$, $g_i$, and $\delta_i$, and the variances are known for $e_i$ and $\delta_i$. This falls under a measurement error model with measurement error in both the predictor and the response. How would I fit this model in R?
Classical measurement error model in R
CC BY-SA 3.0
null
2011-01-07T18:02:14.250
2018-01-31T16:36:20.850
2018-01-31T16:36:20.850
101426
1364
[ "r", "meta-analysis" ]
6072
2
null
6067
53
null
The problem is not that the classes are imbalanced per se, it is that there may not be sufficient patterns belonging to the minority class to adequately represent its distribution. This means that the problem can arise for any classifier (even if you have a synthetic problem and you know you have the true model), not just logistic regression. The good thing is that as more data become available, the "class imbalance" problem usually goes away. Having said that, 4:1 is not all that imbalanced. If you use a balanced dataset, the important thing is to remember that the output of the model is now an estimate of the a-posteriori probability assuming the classes are equally common, and so you may end up biasing the predicted probabilities too far. I would weight the patterns belonging to each class differently and choose the weights by minimising the cross-entropy on a test set with the correct operational class frequencies. Alternatively (see the comments) it might be better to weight the positive and negative classes so they contribute equally to the training criterion (so there isn't a class imbalance problem in the estimation of the model parameters), but afterwards to rescale the posterior probabilities estimated by the classifier in order to compensate for the difference between the (effective) training set class frequencies and those in operational conditions (see [this answer](https://stats.stackexchange.com/questions/535770/imbalanced-data-to-match-reality-with-random-forest/535793#535793) to a related question).
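For concreteness, here is a minimal R sketch of the second strategy (the data frame `dat`, outcome `y`, and predictors `x1`, `x2` are hypothetical, and the rescaling is the generic prior-correction formula rather than anything specific to the linked answer):

```r
# (1) weight each class so both contribute equally to the training criterion
w <- ifelse(dat$y == 1, 0.5 / mean(dat$y == 1), 0.5 / mean(dat$y == 0))
fit <- glm(y ~ x1 + x2, family = binomial, data = dat, weights = w)
# (R warns about non-integer weights in a binomial glm; the fit still runs)

# (2) rescale the fitted probabilities back to the operational class frequencies
p   <- predict(fit, type = "response")
pi1 <- mean(dat$y == 1)   # operational frequency of the positive class
r1  <- 0.5                # effective training frequency after the weighting
num <- p * pi1 / r1
p_adjusted <- num / (num + (1 - p) * (1 - pi1) / (1 - r1))
```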
null
CC BY-SA 4.0
null
2011-01-07T18:29:10.353
2021-07-29T18:27:50.260
2021-07-29T18:27:50.260
887
887
null
6073
2
null
5873
2
null
Thank you, whuber, for making me aware of Wald's Sequential Probability Ratio Test (SPRT). At your recommendation, I will relist this [Quantitative Skills site](http://quantitativeskills.com/sisa/statistics/sprt.htm). They will give you an out-of-the-box table to determine whether to continue or stop testing. I also took the time to research that site's references, and was directed toward a comprehensive article that is intended for medical testing, but is easily transferable to other domains. It is Increasing Efficiency in Evaluation Research: The Use of Sequential Analysis (Howe, Holly L., American Journal of Public Health, July 1982, Vol. 72, No. 7, pp. 690-697). This article may be downloaded in its entirety. Since I have not seen the SPRT in my stats courses, I will provide a cookbook that I hope will be helpful for the stackexchange community.

For my null hypothesis, I tested for a level of 95% correct. If, however, the level was below 80%, it would be a cause for concern. So I have $p_1 = .95$ (null hypothesis) and $p_2 = .80$ (alternative hypothesis). I will use $\alpha = 0.05$ and $\beta = 0.10$.

Howe shows a graph with two parallel lines, with plots of the cumulative errors. Testing continues while the cumulative errors (in my case, the cumulative count of correct data points) lie between the two lines. If the cumulative errors exceed either line, then either:

- accept the null hypothesis (if the cumulative error count falls below the bottom line, $d_1$), or
- reject the null hypothesis (if the cumulative error count exceeds the top line, $d_2$).

Here are the equations (the logarithms are base 10, matching the numbers below). I am introducing a denominator because it is used several times:

$denom = \log\left[\left(\frac{p_2}{p_1}\right)\left(\frac{1 - p_1}{1 - p_2}\right)\right]$

The slopes of the lines are the same, represented by $s$:

$s = \frac{\log\left(\frac{1 - p_1}{1 - p_2}\right)}{denom}$

The intercepts, $h_1$ and $h_2$, are computed as follows:

$h_1 = \frac{\log\left(\frac{1 - \alpha}{\beta}\right)}{denom}$

$h_2 = \frac{\log\left(\frac{1 - \beta}{\alpha}\right)}{denom}$

I set up a spreadsheet with data point N going from 1 to 50. Then I added two columns for the acceptance threshold ($d_1$) and the rejection threshold ($d_2$):

$d_1 = -h_1 + sN$

$d_2 = h_2 + sN$

In my experiment, $denom = -0.67669$, $h_1 = -1.44485$, $h_2 = -1.85501$. The values of $d_1$ at N=2, N=5, N=10 are 3.224, 5.893, 10.342. I then added columns for success and cumSuccess. I picked data points until the cumulative number exceeded the acceptance threshold, and I accepted the null hypothesis.
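For anyone who would rather compute the thresholds in code than in a spreadsheet, here is a minimal R sketch of the same arithmetic (base-10 logs assumed; it simply reproduces the worked numbers above):

```r
# SPRT decision boundaries for the worked example above
p1 <- 0.95; p2 <- 0.80      # null and alternative proportions correct
alpha <- 0.05; beta <- 0.10

denom <- log10((p2 / p1) * ((1 - p1) / (1 - p2)))   # about -0.6767
s  <- log10((1 - p1) / (1 - p2)) / denom            # common slope of both lines
h1 <- log10((1 - alpha) / beta) / denom             # about -1.4449
h2 <- log10((1 - beta) / alpha) / denom             # about -1.8550

N  <- 1:50
d1 <- -h1 + s * N    # acceptance threshold
d2 <-  h2 + s * N    # rejection threshold
round(d1[c(2, 5, 10)], 3)   # 3.224 5.893 10.342, matching the text
```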
null
CC BY-SA 2.5
null
2011-01-07T20:05:42.093
2011-01-07T20:05:42.093
null
null
2591
null
6074
1
null
null
12
10498
Context: I am a programmer with some (half-forgotten) experience in statistics from uni courses. Recently I stumbled upon [http://akinator.com](http://akinator.com) and spent some time trying to make it fail. And who hasn't? :)

I decided to find out how it could work. After googling, reading related blog posts, and adding some of my (limited) knowledge to the resulting mix, I came up with the following model (I'm sure I'll use the wrong notation, please don't kill me for that):

There are Subjects (S) and Questions (Q). The goal of the predictor is to select the subject S which has the greatest posterior probability of being the subject the user is thinking about, given the questions and answers collected so far. Let game G be a set of questions asked and answers given: $\{q_1, a_1\}, \{q_2, a_2\} ... \{q_n, a_n\}$. Then the predictor is looking for $P(S|G) = \frac{P(G|S) * P(S)}{P(G)}$. The prior for subjects ($P(S)$) could be just the number of times a subject has been guessed divided by the total number of games. Making the assumption that all answers are independent, we could compute the likelihood of subject S given the game G like so: $P(G|S) = \prod_{i=1..n} P(\{q_i, a_i\} | S)$ We could calculate $P(\{q_i, a_i\} | S)$ if we keep track of which questions and answers were given when the user was thinking of a given subject: $P(\{q, a\} | S) = \frac{\text{number of times answer } a \text{ was given to question } q \text{ in games where } S \text{ was the subject}}{\text{number of times } q \text{ was asked in games involving } S}$

Now, $P(S|G)$ defines a probability distribution over subjects, and when we need to select the next question we have to select the one for which the expected change in the entropy of this distribution is maximal: $\operatorname{argmax}_j \left(H[P(S|G)] - \sum_{a=\text{yes,no,maybe}\ldots} H[P(S|G \vee \{q_j, a\})]\right)$

I've tried to implement this and it works. But, obviously, as the number of subjects goes up, performance degrades due to the need to recalculate $P(S|G)$ after each move and to calculate the updated distribution $P(S|G \vee \{q_j, a\})$ for question selection. I suspect that I have simply chosen the wrong model, being constrained by the limits of my knowledge. Or maybe there is an error in the math. Please enlighten me: what should I make myself familiar with, or how should I change the predictor so that it can cope with millions of subjects and thousands of questions?
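To make the setup concrete, here is a toy R sketch of the update and question-selection rule exactly as described above (all counts, dimensions and names are invented, and the unweighted entropy criterion is kept as written):

```r
# Toy sketch of the model described above (all names and numbers are invented)
set.seed(1)
S <- 5; Q <- 4; A <- 2                          # subjects, questions, answers (yes/no)
counts <- array(sample(1:20, S * Q * A, TRUE), dim = c(S, Q, A))  # co-occurrence counts
prior  <- rep(1 / S, S)                         # P(S), here uniform for simplicity

cond <- sweep(counts, c(1, 2), apply(counts, c(1, 2), sum), "/")  # P(a | q, S)

entropy <- function(p) -sum(p[p > 0] * log2(p[p > 0]))

posterior <- function(prior, history) {         # history: list of c(question, answer)
  post <- prior
  for (h in history) post <- post * cond[, h[1], h[2]]
  post / sum(post)
}

# next question by the (unweighted) entropy-reduction criterion written above
next_question <- function(post, asked) {
  candidates <- setdiff(1:Q, asked)
  scores <- sapply(candidates, function(q)
    entropy(post) - sum(sapply(1:A, function(a) entropy(posterior(post, list(c(q, a)))))))
  candidates[which.max(scores)]
}

post <- posterior(prior, list(c(2, 1)))         # question 2 was answered "yes"
next_question(post, asked = 2)
```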
Akinator.com and Naive Bayes classifier
CC BY-SA 2.5
null
2011-01-07T22:08:40.717
2012-04-02T07:40:39.303
2011-01-07T23:40:20.670
null
2696
[ "machine-learning", "naive-bayes" ]
6075
2
null
3779
15
null
It is very hard to draw a rack that does not contain any valid word in Scrabble and its variants. Below is an R program I wrote to estimate the probability that the initial 7-tile rack does not contain a valid word. It uses a monte carlo approach and the [Words With Friends](http://newtoyinc.com/) lexicon (I couldn’t find the official Scrabble lexicon in an easy format). Each trial consists of drawing a 7-tile rack, and then checking if the rack contains a valid word. Minimal words You don’t have to scan the entire lexicon to check if the rack contains a valid word. You just need to scan a minimal lexicon consisting of minimal words. A word is minimal if it contains no other word as a subset. For example 'em’ is a minimal word; 'empty’ is not. The point of this is that if a rack contains word x then it must also contain any subset of x. In other words: a rack contains no words iff it contains no minimal words. Luckily, most words in the lexicon are not minimal, so they can be eliminated. You can also merge permutation equivalent words. I was able to reduce the Words With Friends lexicon from 172,820 to 201 minimal words. Wildcards can be easily handled by treating racks and words as distributions over the letters. We check if a rack contains a word by subtracting one distribution from the other. This gives us the number of each letter missing from the rack. If the sum of those number is $\leq$ the number of wildcards, then the word is in the rack. The only problem with the monte carlo approach is that the event that we are interested in is very rare. So it should take many, many trials to get an estimate with a small enough standard error. I ran my program (pasted at the bottom) with $N=100,000$ trials and got an estimated probability of 0.004 that the initial rack does not contain a valid word. The estimated standard error of that estimate is 0.0002. It took just a couple minutes to run on my Mac Pro, including downloading the lexicon. I’d be interested in seeing if someone can come up with an efficient exact algorithm. A naive approach based on inclusion-exclusion seems like it could involve a combinatorial explosion. Inclusion-exclusion I think this is a bad solution, but here is an incomplete sketch anyway. In principle you can write a program to do the calculation, but the specification would be tortuous. The probability we wish to calculate is $$ P(k\text{-tile rack does not contain a word}) = 1 - P(k\text{-tile rack contains a word}) . $$ The event inside the probability on the right side is a union of events: $$ P(k\text{-tile rack contains a word}) = P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right), $$ where $M$ is a minimal lexicon. We can expand it using the inclusion-exclusion formula. It involves considering all possible intersections of the events above. Let $\mathcal{P}(M)$ denote the power set of $M$, i.e. the set of all possible subsets of $M$. Then $$ \begin{align} &P(k\text{-tile rack contains a word}) \\ &= P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{j=1}^{|M|} (-1)^{j-1} \sum_{S \in \mathcal{P}(M) : |S| = j} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \end{align} $$ The last thing to specify is how to calculate the probability on the last line above. It involves a multivariate hypergeometric. $$\cap_{x \in S} \{ k\text{-tile rack contains }x \}$$ is the event that the rack contains every word in $S$. This is a pain to deal with because of wildcards. 
We'll have to consider, by conditioning, each of the following cases: the rack contains no wildcards, the rack contains 1 wildcard, the rack contains 2 wildcards, ... Then $$ \begin{align} &P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{w=0}^{n_{*}} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} | k\text{-tile rack contains } w \text{ wildcards} \right) \\ &\quad \times P(k\text{-tile rack contains } w \text{ wildcards}) . \end{align} $$ I'm going to stop here, because the expansions are tortuous to write out and not at all enlightening. It's easier to write a computer program to do it. But by now you should see that the inclusion-exclusion approach is intractable. It involves $2^{|M|}$ terms, each of which is also very complicated. For the lexicon I considered above $2^{|M|} \approx 3.2 \times 10^{60}$. Scanning all possible racks I think this is computationally easier, because there are fewer possible racks than possible subsets of minimal words. We successively reduce the set of possible $k$-tile racks until we get the set of racks which contain no words. For Scrabble (or Words With Friends) the number of possible 7-tile racks is in the tens of billions. Counting the number of those that do not contain a possible word should be doable with a few dozen lines of R code. But I think you should be able to do better than just enumerating all possible racks. For instance, 'aa' is a minimal word. That immediately eliminates all racks containing more than one 'a’. You can repeat with other words. Memory shouldn’t be an issue for modern computers. A 7-tile Scrabble rack requires fewer than 7 bytes of storage. At worst we would use a few gigabytes to store all possible racks, but I don’t think that’s a good idea either. Someone may want to think more about this. Monte Carlo R program ``` # # scrabble.R # # Created by Vincent Vu on 2011-01-07. # Copyright 2011 Vincent Vu. All rights reserved. 
# # The Words With Friends lexicon # http://code.google.com/p/dotnetperls-controls/downloads/detail?name=enable1.txt&can=2&q= url <- 'http://dotnetperls-controls.googlecode.com/files/enable1.txt' lexicon <- scan(url, what=character()) # Words With Friends letters <- c(unlist(strsplit('abcdefghijklmnopqrstuvwxyz', NULL)), '?') tiles <- c(9, 2, 2, 5, 13, 2, 3, 4, 8, 1, 1, 4, 2, 5, 8, 2, 1, 6, 5, 7, 4, 2, 2, 1, 2, 1, 2) names(tiles) <- letters # Scrabble # tiles <- c(9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6, 4, # 2, 2, 1, 2, 1, 2) # Reduce to permutation equivalent words sort.letters.in.words <- function(x) { sapply(lapply(strsplit(x, NULL), sort), paste, collapse='') } min.dict <- unique(sort.letters.in.words(lexicon)) min.dict.length <- nchar(min.dict) # Find all minimal words of length k by elimination # This is held constant across iterations: # All words in min.dict contain no other words of length k or smaller k <- 1 while(k < max(min.dict.length)) { # List all k-letter words in min.dict k.letter.words <- min.dict[min.dict.length == k] # Find words in min.dict of length > k that contain a k-letter word for(w in k.letter.words) { # Create a regexp pattern makepattern <- function(x) { paste('.*', paste(unlist(strsplit(x, NULL)), '.*', sep='', collapse=''), sep='') } p <- paste('.*', paste(unlist(strsplit(w, NULL)), '.*', sep='', collapse=''), sep='') # Eliminate words of length > k that are not minimal eliminate <- grepl(p, min.dict) & min.dict.length > k min.dict <- min.dict[!eliminate] min.dict.length <- min.dict.length[!eliminate] } k <- k + 1 } # Converts a word into a letter distribution letter.dist <- function(w, l=letters) { d <- lapply(strsplit(w, NULL), factor, levels=l) names(d) <- w d <- lapply(d, table) return(d) } # Sample N racks of k tiles N <- 1e5 k <- 7 rack <- replicate(N, paste(sample(names(tiles), size=k, prob=tiles), collapse='')) contains.word <- function(rack.dist, lex.dist) { # For each word in the lexicon, subtract the rack distribution from the # letter distribution of the word. Positive results correspond to the # number of each letter that the rack is missing. y <- sweep(lex.dist, 1, rack.dist) # If the total number of missing letters is smaller than the number of # wildcards in the rack, then the rack contains that word any(colSums(pmax(y,0)) <= rack.dist[names(rack.dist) == '?']) } # Convert rack and min.dict into letter distributions min.dict.dist <- letter.dist(min.dict) min.dict.dist <- do.call(cbind, min.dict.dist) rack.dist <- letter.dist(rack, l=letters) # Determine if each rack contains a valid word x <- sapply(rack.dist, contains.word, lex.dist=min.dict.dist) message("Estimate (and SE) of probability of no words based on ", N, " trials:") message(signif(1-mean(x)), " (", signif(sd(x) / sqrt(N)), ")") ```
null
CC BY-SA 2.5
null
2011-01-07T23:19:55.983
2011-01-08T20:45:41.183
2011-01-08T20:45:41.183
1670
1670
null
6076
1
6085
null
2
765
I'm a statistics newbie (medical student) trying my luck with a Cox regression for a survival analysis on the outcome of a specific type of operation, and I'm trying to determine which variables to control for, and how to handle the age of the patient at operation time (measure start). Currently I've set the following as covariates in my analysis (PASW/SPSS): ``` Risk (Low/High) Gender (Female/Male) Shunt used in operation (False/True) Artery stenosis (False/True) Age at operation (Integer value) ``` I've set the first 4 variables as categories (dummy coded as 0 and 1), and specified the first value (0) as the reference value. But what about the patient's age at the operation? I assume this must be controlled for? But the age at measure start (operation) isn't time-dependent (is it?); it is always the same value, right? Does this mean I have to control for the patient's real age as well? And does this mean I have to use the `COX with Time-dependent Covariables` function instead of the regular `COX Regression`? Thanks for any help! Regards Alex
How to handle age at measure-start in Cox regression?
CC BY-SA 2.5
null
2011-01-07T23:22:09.210
2011-01-08T15:28:00.433
2011-01-07T23:43:23.037
null
2652
[ "regression", "spss" ]
6077
1
6084
null
2
1761
Another newbie question here (probably a piece of cake for you guys). When I run a Cox regression, one of my covariates comes out as significant:

```
                                          95,0% CI for Exp(B)
      B      SE     Wald   df  Sig.  Exp(B)  Lower   Upper
risk  2,224  1,107  4,036  1   ,045  9,244   1,056   80,950

<5 covariates removed for clarity>
```

But my Omnibus Tests of Model Coefficients(a) isn't significant. Change from previous step: Chi-square=10,290. df=6. Sig.=0,113. Does this mean my model is simply too bad to use for anything? Can I say anything credible about my data after running this analysis? Thanks for any clues that'll take me closer to the beauty of applied statistics... ;) Regards Alex
Interpretation of log likelihood and covariate significance in Cox regression
CC BY-SA 2.5
null
2011-01-07T23:48:05.703
2011-01-08T15:20:27.973
2011-01-07T23:57:54.437
null
2652
[ "regression" ]
6078
1
6088
null
3
471
I want to do a chi-squared test on data that looks like this: ``` A B 0 0 1 0 0 1 1 1 8 0 3 4 ... ``` You can think of each pair as one trial with two participants. In each trial, there are a different number of observations for each participant. I have binned each data pair like so: I counted how many pairs have 0 for both pairs (e.g. 0-0), how many have exactly one 0 (e.g., 0-1, 1-0, 8-0, etc.), and how many have greater than 0 for both (e.g., 1-1, 3-4, etc.). This gives me the following counts: Two zeroes: 227 One zero: 277 No zeroes: 146 The problem is that I am not sure how to calculate expected values here. Each pair represents the number of times something appears within an arbitrary number of observations; each pair represents a different number of observations, and each element of the pair does too. So, for instance, for a given 0-1 pair the first participant might have given 200 observations (with no hits), and the second might have yielded 150 observations (with 1 hit). Another 0-1 pair might have yielded 100 observations and 50 observations, respectively. So, in this case, just totaling the overall number of hits and dividing that by the total number of observations won't get me the right expected values... Am I missing something obvious here? For instance if we know that the relative frequency of hits per observation is 0.01217, and if there are a different number of observations for each trial, is there a simple way to get expected values in a problem like this? This is what I want to test: whether the hits between participants in each trial are independent. I expect that as the hits of one participant in one pair increases, so will the other. I realize that there might be better ways to test this, but my committee have asked me to use a chi-sq test, if possible. So what I expect to find is that the number of trials in the 0-0 and in the no-zero bins will be higher than would be expected if hits are distributed evenly.
Expected values for chi-squared test on binned paired counts
CC BY-SA 2.5
null
2011-01-08T02:52:10.047
2011-01-08T19:36:42.637
2011-01-08T05:20:53.410
52
52
[ "chi-squared-test", "expected-value" ]
6079
1
6090
null
5
5381
I did a linear regression in R and got the following result: ``` Estimate Std. Error t value Pr(>|t|) (Intercept) 192116.40 6437.27 29.844 < 2e-16 *** cdd 272.74 26.94 10.123 1.56e-09 *** pmax(hdd - 450, 0) 61.73 22.54 2.738 0.0123 * --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 16500 on 21 degrees of freedom Multiple R-squared: 0.8454, Adjusted R-squared: 0.8307 F-statistic: 57.41 on 2 and 21 DF, p-value: 3.072e-09 ``` My question regards the R-squared value, 0.83 and what it means if I want to specify approximate percentage contributions of each (monthly) variable. EDIT: See data, below. Say I take the first 12 hdd and cdd data points, and calculate the sum of the 12 predictions (i.e. the first year's total prediction), using the coefficients, above. The baseline (intercept) contribution to the year would be approximately 12 * 192116.40 = 2305397, right? Similarly, the cdd contribution to the year would be approximately 1608 * 272.74 = 438565.9, and hdd would be (after my hand-made hinge function) approximately 1329 * 61.73 = 82039.17. Summing the three values yields 2826002, which is within 1.3% of actual total usage (2862840, the sum of the first 12 elec's). Can I then say that cdd contributes 438565.9/2826002= 0.1551895, or approximately 16% of the yearly total? Or do I need to take that and compensate for the adjusted R-squared: 0.1551895*0.8307= 0.1289159 (i.e. multiply by the adjusted R-squares), for approximately 13% of the total? Or is none of this correct reasoning? My data is: ``` elec hdd cdd 1 235940 880 3 2 205380 772 4 3 211780 551 9 4 192220 281 68 5 221440 165 119 6 304840 15 364 7 283160 4 434 8 300440 11 339 9 272900 42 214 10 204220 322 44 11 201060 592 8 12 229460 784 2 13 214520 1064 0 14 197900 719 2 15 186660 618 15 16 195340 332 88 17 241200 109 159 18 260700 18 282 19 299940 29 367 20 293240 2 426 21 268740 51 159 22 208380 319 36 23 183820 452 7 24 231360 903 0 ``` (The monthly billing cycle for elec can be anywhere from 29 to 32 days, so that injects a lot of variance right there. I do not yet have all of the billing cycle lengths to to a trading day kind of adjustment.)
R-squared result in linear regression and "unexplained variance"
CC BY-SA 2.5
null
2011-01-08T03:03:48.000
2011-01-10T18:19:19.323
2011-01-10T18:19:19.323
1764
1764
[ "regression" ]
6080
1
null
null
4
787
I'm trying to fit the GAMLSS library's Sichel distribution to some zero-truncated data, but the only way to get the function to work is to include the zero class anyway and give it a frequency of 0, which doesn't take into account the zero-truncated nature of my data. Can anyone suggest a way to properly "redistribute" the zero class's probability to the remaining probabilities (or some other, better, course of action using Sichel)? If you run the following example, you'll see that `sum(pdf2)` equals 1, but that the zero class, which in my case has an observed frequency of 0, is still allocated around 27% of the cumulative probability: ``` Counts = data.frame(n = c(0,1,2,3,4,5,6,7,8,9,10), freq = c(0,182479,76986,44859,24315,49,100,490,106,0,2)) gamlss(n~1,family=SICHEL, control=gamlss.control(n.cyc=50),data=Counts ) pdf2 = dSICHEL(x=with(Counts, n), mu = 1.610, sigma = 98.43, nu = 3.315) print( with(Counts, cbind(n, freq, fitted=pdf2*sum(freq))), dig=9) sum(pdf2) ```
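One way to do the redistribution by hand, continuing the example above (this is only a sketch using the fitted parameter values printed in the question; the `gamlss.tr` package's `gen.trun()` may offer a cleaner way to build a left-truncated SICHEL family directly, if I remember its purpose correctly):

```r
# Renormalise the fitted Sichel so the mass at zero is spread over the positive counts
mu <- 1.610; sigma <- 98.43; nu <- 3.315
p0 <- dSICHEL(0, mu = mu, sigma = sigma, nu = nu)   # mass wrongly assigned to zero
pdf_trunc <- dSICHEL(with(Counts, n[n > 0]), mu = mu, sigma = sigma, nu = nu) / (1 - p0)
print(cbind(n = with(Counts, n[n > 0]), fitted = pdf_trunc * sum(Counts$freq)), dig = 9)
sum(pdf_trunc)   # sums to 1 only over the full positive support, not just n = 1..10
```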
Zero-truncated Sichel distribution in R
CC BY-SA 3.0
null
2011-01-08T06:24:18.810
2017-03-01T12:49:00.607
2017-03-01T12:49:00.607
11887
null
[ "r", "distributions", "count-data", "gamlss" ]
6081
1
6083
null
13
9775
So, assuming that there is a point in testing the normality assumption for ANOVA (see [1](https://stats.stackexchange.com/questions/2492/normality-testing-essentially-useless) and [2](https://stats.stackexchange.com/questions/2824/checking-anova-assumptions)), how can it be tested in R? I would expect to do something like:

```
## From Venables and Ripley (2002) p.165.
utils::data(npk, package="MASS")
npk.aovE <- aov(yield ~ N*P*K + Error(block), npk)
residuals(npk.aovE)
qqnorm(residuals(npk.aovE))
```

This doesn't work, since `residuals` doesn't have a method (nor does `predict`, for that matter) for the multi-stratum object produced by a repeated-measures ANOVA with an Error term. So what should be done in this case? Can the residuals simply be extracted from the same model fitted without the Error term? I am not familiar enough with the literature to know whether that is valid or not; thanks in advance for any suggestion.
Testing the normality assumption for repeated measures anova? (in R)
CC BY-SA 2.5
null
2011-01-08T11:26:07.737
2015-10-26T04:39:09.957
2017-04-13T12:44:41.967
-1
253
[ "r", "anova", "normality-assumption", "repeated-measures" ]
6082
2
null
6079
4
null
R^2 is the percent of variance in the DV accounted for by the whole model. That is, your intercept and your IVs combined account for that much of the variance, using the linear regression model. In your case, you got an R^2 of about 0.85, indicating that the intercept, cdd and pmax combined account for 85% of the variance in the DV. The other 15% is error, that is, variance that is not accounted for by the model. You cannot tell, from the information given, how much of this 85% is contributed by each. In order to do this, you would have to run more models:

- DV ~ 1 (intercept alone)
- DV ~ cdd
- DV ~ pmax

Each of these would have an R^2, and you could then tell how much each adds.
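A sketch of that comparison in R, using the variable names from the question (the data frame name `dat` is made up; the hinge term is written the same way as in the original fit):

```r
fit_cdd  <- lm(elec ~ cdd, data = dat)
fit_full <- lm(elec ~ cdd + pmax(hdd - 450, 0), data = dat)
summary(fit_cdd)$r.squared                                # share explained by cdd alone
summary(fit_full)$r.squared - summary(fit_cdd)$r.squared  # increment from the hdd hinge term
```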
null
CC BY-SA 2.5
null
2011-01-08T11:28:51.557
2011-01-08T13:07:34.513
2011-01-08T13:07:34.513
449
686
null
6083
2
null
6081
5
null
You may not get a simple response to `residuals(npk.aovE)` but that does not mean there are no residuals in that object. Do `str` and see that within the levels there are still residuals. I would imagine you were most interested in the "Within" level ``` > residuals(npk.aovE$Within) 7 8 9 10 11 12 4.68058815 2.84725482 1.56432584 -5.46900749 -1.16900749 -3.90234083 13 14 15 16 17 18 5.08903669 1.28903669 0.35570336 -3.27762998 -4.19422371 1.80577629 19 20 21 22 23 24 -3.12755705 0.03910962 2.60396981 1.13730314 2.77063648 4.63730314 ``` My own training and practice has not been to use normality testing, instead to use QQ plots and parallel testing with robust methods.
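If a picture (or, despite the caveat above, a formal test) is wanted, a short follow-up could be:

```r
# QQ plot of the Within-stratum residuals extracted above
r <- residuals(npk.aovE$Within)
qqnorm(r); qqline(r)
shapiro.test(r)   # only if a formal normality test is wanted anyway
```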
null
CC BY-SA 2.5
null
2011-01-08T14:26:28.970
2011-01-08T14:26:28.970
null
null
2129
null
6084
2
null
6077
3
null
Credibility depends on context and you've not offered any. If the other 5 degrees of freedom are in the model for good reasons but you aren't particularly interested in estimating the effects, only interested in the impact of "risk", then you may have something interesting but the really wide confidence interval prevents me from applying the label "credible". However, the likelihood ratio test for the addition of "risk" to the model would be a more solid estimate of its effect independent of the other covariates. Your Wald test is fairly borderline.
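To make the likelihood-ratio suggestion concrete (in R's survival package rather than SPSS, and with made-up variable names), the comparison would look something like:

```r
library(survival)
# full model with "risk" plus the other five covariates (all names hypothetical)
fit_full    <- coxph(Surv(time, status) ~ risk + x1 + x2 + x3 + x4 + x5, data = dat)
fit_reduced <- coxph(Surv(time, status) ~        x1 + x2 + x3 + x4 + x5, data = dat)
anova(fit_reduced, fit_full)   # likelihood ratio test for adding "risk"
```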
null
CC BY-SA 2.5
null
2011-01-08T15:20:27.973
2011-01-08T15:20:27.973
null
null
2129
null
6085
2
null
6076
3
null
You are right that you don't need to use a time-dependent model for this situation. If, however, age is expected to have a dominant effect (as might well be the case), you may be safer using an interval-censored model. With R, that is set up with three arguments to the Surv object, but I don't have any experience with SPSS's facilities for survival analysis.
null
CC BY-SA 2.5
null
2011-01-08T15:28:00.433
2011-01-08T15:28:00.433
null
null
2129
null
6086
2
null
6067
98
null
**Balance in the Training Set**

For logistic regression models, unbalanced training data affects only the estimate of the model intercept (although this of course skews all the predicted probabilities, which in turn compromises your predictions). Fortunately the intercept correction is straightforward: provided you know, or can guess, the true proportion of 0s and 1s and know the proportions in the training set, you can apply a rare-events correction to the intercept. Details are in [King and Zeng (2001)](https://gking.harvard.edu/files/abs/0s-abs.shtml) [[PDF](https://gking.harvard.edu/files/gking/files/0s.pdf)].

These 'rare event corrections' were designed for case-control research designs, mostly used in epidemiology, that select cases by choosing a fixed, usually balanced number of 0 cases and 1 cases, and then need to correct for the resulting sample selection bias. Indeed, you might train your classifier the same way: pick a nice balanced sample and then correct the intercept to take into account the fact that you've selected on the dependent variable to learn more about rarer classes than a random sample would be able to tell you.

**Making Predictions**

On a related but distinct topic: don't forget that you should be thresholding intelligently to make predictions. It is not always best to predict 1 when the model probability is greater than 0.5. Another threshold may be better. To this end you should look into the Receiver Operating Characteristic (ROC) curves of your classifier, not just its predictive success with a default probability threshold.
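A small R sketch of the prior-correction idea (the data frame, variable names and the assumed population proportion `tau` are all hypothetical; the correction formula is the standard one from that case-control literature, so check it against King and Zeng before relying on it):

```r
# Fit on a (roughly balanced) training sample, then correct the intercept
fit  <- glm(y ~ x1 + x2, family = binomial, data = train)
tau  <- 0.005              # assumed true proportion of 1s in the population
ybar <- mean(train$y)      # proportion of 1s in the training sample
b    <- coef(fit)
b["(Intercept)"] <- b["(Intercept)"] - log(((1 - tau) / tau) * (ybar / (1 - ybar)))

# corrected probabilities for new data
eta <- b["(Intercept)"] + as.matrix(newdata[, c("x1", "x2")]) %*% b[c("x1", "x2")]
p   <- plogis(eta)
```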
null
CC BY-SA 4.0
null
2011-01-08T16:01:51.453
2018-11-06T21:25:56.047
2018-11-06T21:25:56.047
28666
1739
null
6087
2
null
6081
1
null
I think that the normality assumption can be assessed for each of the repeated measures before performing the analysis. I would reshape the data frame so that each column corresponds to a repeated measure, and then run a shapiro.test on each of those columns.

```
library(reshape)  # for melt() and cast()
apply(cast(melt(npk,measure.vars="yield"), ...~N+P+K)[-c(1:2)],2,function(x) shapiro.test(x)$p.value)
```
null
CC BY-SA 2.5
null
2011-01-08T19:33:45.350
2011-01-08T19:33:45.350
null
null
339
null
6088
2
null
6078
3
null
First of all, if your counts come from a different number of trials, then you cannot just ignore that. Apparently your data are better represented as follows:

```
Pair  Subject  NTrials  Hits
1     A        200      0
1     B        150      0
2     A        100      1
2     B        215      0
etc
```

Second, there does not seem to be any reason to recode the number of hits as 0 or >0 - you are throwing away data and making the analysis more difficult (and probably less meaningful). For the analysis approach, I don't think you can make do with a simple chi-square test. You will need some sort of binomial regression, probably with a random effect accounting for the within-pair correlation. In fact, a test for the presence of this random effect would answer the question of the presence of correlation. Alternatively, you could calculate the probability of a hit for each participant, and then use some sort of weighted correlation, with each observation weighted by its inverse variance (the variance of a binomial count being np(1-p)).
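One possible way to fit the suggested binomial model with a pair-level random effect in R (a sketch only; a data frame `dat` with the columns laid out above is assumed, and the boundary problem of testing a variance component is only noted in passing):

```r
library(lme4)
m1 <- glmer(cbind(Hits, NTrials - Hits) ~ 1 + (1 | Pair),
            family = binomial, data = dat)
m0 <- glm(cbind(Hits, NTrials - Hits) ~ 1, family = binomial, data = dat)

# crude likelihood ratio test for the random effect (conservative, because the
# null value of the variance sits on the boundary of the parameter space)
LRT <- as.numeric(2 * (logLik(m1) - logLik(m0)))
pchisq(LRT, df = 1, lower.tail = FALSE)
```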
null
CC BY-SA 2.5
null
2011-01-08T19:36:42.637
2011-01-08T19:36:42.637
null
null
279
null
6089
2
null
6081
2
null
Another option would be to use the `lme` function of the `nlme` package (and then pass the obtained model to `anova`). You can use `residuals` on its output.
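For instance, with the npk example from the question, that might look like the sketch below (the random intercept for block mirrors the Error(block) term):

```r
library(nlme)
fit <- lme(yield ~ N * P * K, random = ~ 1 | block, data = npk)
anova(fit)                                      # tests for the fixed effects
qqnorm(residuals(fit)); qqline(residuals(fit))  # check the residuals
```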
null
CC BY-SA 2.5
null
2011-01-08T19:46:46.810
2011-01-08T19:46:46.810
null
null
582
null
6090
2
null
6079
11
null
$R^2$ is the squared correlation of the OLS prediction $\hat{Y}$ and the DV $Y$. In a multiple regression with three predictors $X_{1}, X_{2}, X_{3}$: ``` # generate some data > N <- 100 > X1 <- rnorm(N, 175, 7) # predictor 1 > X2 <- rnorm(N, 30, 8) # predictor 2 > X3 <- abs(rnorm(N, 60, 30)) # predictor 3 > Y <- 0.5*X1 - 0.3*X2 - 0.4*X3 + 10 + rnorm(N, 0, 10) # DV > fitX123 <- lm(Y ~ X1 + X2 + X3) # regression > summary(fitX123)$r.squared # R^2 [1] 0.6361916 > Yhat <- fitted(fitX123) # OLS prediction Yhat > cor(Yhat, Y)^2 [1] 0.6361916 ``` $R^2$ is also equal to the variance of $\hat{Y}$ divided by the variance of $Y$. In that sense, it is the "variance accounted for by the predictors". ``` > var(Yhat) / var(Y) [1] 0.6361916 ``` The squared semi-partial correlation of $Y$ with a predictor $X_{1}$ is equal to the increase in $R^2$ when adding $X_{1}$ as a predictor to the regression with all remaining predictors. This may be taken as the unique contribution of $X_{1}$ to the proportion of variance explained by all predictors. Here, the semi-partial correlation is the correlation of $Y$ with the residuals from regression where $X_{1}$ is the predicted variable and $X_{2}$ and $X_{3}$ are the predictors. ``` # residuals from regression with DV X1 and predictors X2, X3 > X1.X23 <- residuals(lm(X1 ~ X2 + X3)) > (spcorYX1.X23 <- cor(Y, X1.X23)) # semi-partial correlation of Y with X1 [1] 0.3172553 > spcorYX1.X23^2 # squared semi-partial correlation [1] 0.1006509 > fitX23 <- lm(Y ~ X2 + X3) # regression with DV Y and predictors X2, X3 # increase in R^2 when changing to full regression > summary(fitX123)$r.squared - summary(fitX23)$r.squared [1] 0.1006509 ```
null
CC BY-SA 2.5
null
2011-01-08T20:05:44.290
2011-01-08T22:10:36.727
2011-01-08T22:10:36.727
1909
1909
null
6091
1
6094
null
1
165
I have a bunch of data like this:

```
P1 [1, 2, 3, 4, 5, 6, 7, 8, 10]
P2 [5, 8, 10, 12, 20]
P3 [10, 201, 440]
P4 [1, 2, 10]
P5 [1, 2]
```

Right now it's ranked by the size of its data set (as shown above), so values that are heavily represented on the chart are on top (e.g., P1 has the most). I'm trying to rank this data so that it favors those that have lower values in the data set. So, for instance, P5 (with low values) would be pushed higher than it is now. Also, P2 could be ranked higher because although it has fewer values than P1, the values are mostly in the higher range. Basically I want the data to be sorted by its size and its overall tendency toward lower values. I could average it all out, but I don't want to give much weight to data with a small set (e.g., 1 or 2 values). So something like this would be nice (guessing):

```
P1 [1, 2, 3, 4, 5, 6, 7, 8, 10]
P5 [1, 2]
P4 [1, 2, 10]
P2 [1, 5, 8, 10, 12, 20]
P3 [10, 201, 440, 500]
```

I'm looking for an R solution.
Good way to rank/sort data that falls in the lower range
CC BY-SA 2.5
null
2011-01-08T21:00:42.343
2011-01-08T23:36:27.173
null
null
2664
[ "r", "distributions", "mean" ]
6093
1
null
null
0
2204
> Possible Duplicate: What book would you recommend for non-statisticians?

Hello all, what is a good book for self-learning statistics? I tried "Head First Statistics", which starts very well, but when it gets to the t-distribution and some more complex issues, its quality degrades (just my opinion). Also, something like a "for dummies" book might be too simplistic. I have now started "Statistics in Plain English"; it might be a good starting point for intuition, but I guess it will not suffice later on. For example, I considered Feller's volumes on probability theory a very good start for studying that domain. I am looking for similarly consistent books in the domain of statistics. Thanks.
Good book on statistics
CC BY-SA 2.5
null
2011-01-08T23:09:08.913
2011-01-09T01:02:26.773
2017-04-13T12:44:52.277
-1
976
[ "references" ]
6094
2
null
6091
3
null
If you take the average of each row, divide it by the row's length, and order the rows by that value, it produces the results you wanted:

```
> ranks <- unlist(lapply(p,mean))/unlist(lapply(p,length))
> ranks
       P1        P2        P3        P4        P5
0.5679012 2.2000000 72.3333333 1.4444444 0.7500000
> p[order(ranks)]
$P1
[1]  1  2  3  4  5  6  7  8 10

$P5
[1] 1 2

$P4
[1]  1  2 10

$P2
[1]  5  8 10 12 20

$P3
[1]  10 201 440
```
null
CC BY-SA 2.5
null
2011-01-08T23:36:27.173
2011-01-08T23:36:27.173
null
null
697
null
6095
2
null
6093
1
null
If you liked Feller, then you definitely need to think about a real stats book. My suggestion, as a self-learner at perhaps a slightly less sophisticated level than you, would be to look for Cox and Hinkley's "Theoretical Statistics". Very readable for a stats text. Not much, if anything, in the way of problem sets, though.
null
CC BY-SA 2.5
null
2011-01-09T01:02:26.773
2011-01-09T01:02:26.773
null
null
2129
null
6096
1
null
null
6
322
I'm working on benchmarking the speed of various JavaScript methods. Part of the benchmark process requires repeating a test for a minimum time (to reduce the percentage uncertainty to 1% or less). There is some overhead on each test (the cost of the loop, incrementing a counter variable, and so on). I currently benchmark an empty test to get the overhead cost. The result has its own mean and margin of error. How should I apply this calibration? Do I take the calibration mean and subtract it from every measured value that composes the other benchmark samples, or should I subtract it from the other benchmark's mean value even with differences in margin of error, or is there some formula to follow?
Correct way to calibrate means
CC BY-SA 2.5
null
2011-01-09T08:20:48.563
2023-03-03T10:38:46.273
2011-01-09T17:29:55.527
2634
2634
[ "calibration" ]
6097
1
null
null
6
150
Say I have a process that gives me 3 outputs: $O^1$, $O^2$ and $O^3$. The outputs are generated from a semi-deterministic process, i.e. there is a deterministic component in the outputs, along with a random component. In particular, having $n$ measurements over time, the outputs $O_j, \quad j=1,2,...,n$ are, at least in part, dependent on the previous outputs. So $O_j = f(O_{j-1}, O_{j-2}, O_{j-3}) + \epsilon$ (I'm not interested in going back farther than 2 or 3 measurements; $\epsilon$ is the random component). So now I have a set of ~150 consecutive measurements; how can I predict the likely outputs in the future? I can easily calculate the distribution of values following a certain output; for instance, I could say that if $O^1_j$ is between 50 and 60 I have a certain probability of $O^1_{j+1}$ being between 30 and 40, by looking at the measurements that I took in the past. I did construct some pdfs for the distribution of these probabilities, but now I'm a bit stuck, especially because there is probably an interaction between the three outputs (so, for instance, updating my previous statement: $O^1_j = f(O^1_{j-l}, O^2_{j-l}, O^3_{j-l}) + \epsilon \quad\quad l=1,2,3$). I've been reading about Bayesian predictors and I thought they could be applied here, but I don't know enough about the topic to determine whether this is a good choice or whether there is something easier/more appropriate. I would appreciate any suggestions!
Predicting a semi-deterministic process
CC BY-SA 2.5
null
2011-01-09T11:03:31.153
2011-01-09T23:19:23.807
2011-01-09T23:19:23.807
159
582
[ "time-series", "probability", "bayesian", "forecasting", "predictive-models" ]
6098
2
null
6046
4
null
**Repeated Measures**

Personally I would pursue a hierarchical model where the basic observations are, for each animal, the 4 (or fewer) levels of odour and the corresponding neuronal responses, and the predictor for the per-animal intercept and slope on this relationship is the animal's reproductive status. (Here I'm assuming that your interest is in the effect of reproductive status on these aspects of the response function and in the extent to which it is distinguishable from individual variation.) That would give you nicely interpretable animal-level regression parameters, e.g. moving from virgin to mated animals drops the predicted firing rate by x and increases the effect of a one-unit increase in odour dose by z. Failing that, a mixed model with a reproductive-status fixed effect would probably also work. Actually I think that's all SPSS 16 offers you anyway. I wouldn't immediately worry about missing data in this framework. Just try it and then check for robustness of the results, as Rob suggests. The more basic problem is knowing what SPSS is telling you when you fit one of these models. For that, you'll want to read up a bit first. Other folk here may have preferred introductions to mixed models - mine are all R-oriented and therefore not so helpful.

**Spikes**

If you are working with spike counts, they are probably conditionally Poisson distributed (and don't forget the offset, if the exposure during measurement varies). If you don't have the option to specify that fact, you might need to fit the model on appropriately adjusted log counts or suchlike.

**Missing Data**

If you have enough animals, missing a few measurements for some of them might be OK. For a lot of missing data there won't be much information, as Rob also points out. If you (or your audience) worry about the missing data, you could do multiple imputation first. If I remember right, and I don't really use SPSS for anything, 16 makes you use AMOS for multiple imputation, but later versions have it built into the missing values module. So that might be an option.
null
CC BY-SA 2.5
null
2011-01-09T12:27:42.207
2011-01-09T12:27:42.207
null
null
1739
null
6100
2
null
6096
5
null
There is something called "small error propagation", which says that the error of a function $f$ of variables $x_1,x_2,\cdots,x_n$ with errors $\Delta x_1,\Delta x_2,\cdots,\Delta x_n$ equals $$\Delta f=\sqrt{\sum_i\left(\frac{\partial f}{\partial x_i}\Delta x_i\right)^2},$$ so for $f(a,b):=a-b$ the error is $\Delta f=\sqrt{\Delta a^2+\Delta b^2}$. So, subtract the means and report this Euclidean length of the errors as the final error.
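Applied to the benchmark in the question, that amounts to something like the following sketch (variable names are invented; both error terms should be expressed the same way, e.g. both as standard errors or both as margins of error at the same confidence level):

```r
corrected_mean  <- benchmark_mean - calibration_mean
# combine the two uncertainties in quadrature
corrected_error <- sqrt(benchmark_error^2 + calibration_error^2)
```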
null
CC BY-SA 4.0
null
2011-01-09T13:24:09.993
2023-03-03T10:38:46.273
2023-03-03T10:38:46.273
362671
null
null
6101
1
null
null
2
562
I have a real-life situation that can be solved using Queueing Theory. This should be easy for someone in the field. Any pointers would be appreciated. Scenario: There is a single Queue and N Servers. When a server becomes free, the Task at the front of the queue gets serviced. The mean service time is T seconds. The mean inter-Task arrival time is K * T (where K > 1) (assume Poisson or Gaussian distributions, whichever is easier to analyze.) Question: At steady state, what is the length of the queue? (in terms of N, K). Related Question: What is the expected delay for a Task to be completed? Here is the real-life situation I am trying to model: I have an Apache web server with 25 worker processes. At steady-state there are 125 requests in the queue. I want to have a theoretical basis to help me optimize resources and understand quantitatively how adding more worker processes affects the queue length and delay. I know the single queue, single server, Poisson distribution is well analyzed. I don't know the more general solution for N servers. thanks in advance, -- David Jones [email protected]
Queueing Theory: How to estimate steady-state queue length for single queue, N servers?
CC BY-SA 2.5
null
2011-01-09T16:51:41.767
2011-04-13T10:33:54.043
2011-04-13T10:33:54.043
449
2711
[ "poisson-distribution", "networks", "simulation", "queueing" ]
6102
1
6105
null
3
2538
I'm currently applying the Roy-Zellner test of poolability as shown in the excellent [article by Andrea Vaona](http://doc.rero.ch/lm.php?url=1000,42,6,20080417092244-DQ/wp0804.pdf). In fact, I'm working with a panel with N=17 and T=5, and my model looks like this: $$Y_{it}= a_0+B_1X_1+B_2X_2+B_3X_3+B_4X_4+e_{it}$$ My question is the following. When I'm testing for coefficient equality on the unpooled data (the last stage), many of my constraints are getting dropped. This impacts the degrees of freedom of the $\chi^2$, and I would like to understand the reason. Is this because the time dimension of my panel is too small, or because the number of my constraints is too high? Ama
Why are my constraints getting dropped?
CC BY-SA 2.5
null
2011-01-09T18:51:29.387
2011-01-13T20:54:35.390
2011-01-13T20:54:35.390
8
1251
[ "regression", "panel-data" ]
6103
2
null
5026
27
null
- Statistics is concerned with probabilistic models, specifically inference on these models using data. - Machine Learning is concerned with predicting a particular outcome given some data. Almost any reasonable machine learning method can be formulated as a formal probabilistic model, so in this sense machine learning is very much the same as statistics, but it differs in that it generally doesn't care about parameter estimates (just prediction) and it focuses on computational efficiency and large datasets. - Data Mining is (as I understand it) applied machine learning. It focuses more on the practical aspects of deploying machine learning algorithms on large datasets. It is very much similar to machine learning. - Artificial Intelligence is anything that is concerned with (some arbitrary definition of) intelligence in computers. So, it includes a lot of things. In general, probabilistic models (and thus statistics) have proven to be the most effective way to formally structure knowledge and understanding in a machine, to such an extent that all three of the others (AI, ML and DM) are today mostly subfields of statistics. Not the first discipline to become a shadow arm of statistics... (Economics, psychology, bioinformatics, etc.)
null
CC BY-SA 2.5
null
2011-01-09T19:59:14.237
2011-01-09T19:59:14.237
null
null
null
null
6104
1
6121
null
8
13019
I start with three independent random variables, $X_1, X_2, X_3$. They are each normally distributed with: $$X_i \sim N(\mu_i, \sigma^2), i = 1, 2, 3.$$ I then have three transformations, $$\eqalign{ Y_1 &= -X_1/\sqrt{2} + X_2/\sqrt{2} \cr Y_2 &= -X_1/\sqrt{3} - X_2/\sqrt{3} + X_3/\sqrt{3} \cr Y_3 &= X_1/\sqrt{6} + X_2/\sqrt{6} + 2X_3 / \sqrt{6} \cr }$$ I am supposed to show that when $\mu_i = 0,$ $i = 1, 2, 3,$ $(Y_1^2 + Y_2^2 + Y_3^2)/\sigma^2 \sim \chi^2(3)$. I have also shown the transformations to preserve the independence, as the transformation matrix is orthogonal. I have already shown that the expectations of $Y_1, Y_2, Y_3$ is 0 and their variances are all the same. Using the normal pdf, I have shown that: $$Y_i^2 \sim \frac{1}{2\pi\sigma^2} \exp(-2x^2 / 2\sigma^2).$$ I thought about applying a substitution of $z = 2x^2 / \sigma^2$ to get the exponent into a similar form as the chi-square's $\exp(-x/2)$ form, but I'm stuck on what to do with the constants outside to get them to look similar. Could someone offer a hand?
Proving that the squares of normal rv's is Chi-square distributed
CC BY-SA 2.5
null
2011-01-09T20:08:12.767
2015-11-16T21:52:34.593
2011-01-10T02:02:53.867
919
1118
[ "distributions", "probability", "self-study", "chi-squared-test", "mathematical-statistics" ]
6105
2
null
6102
5
null
You have a panel data regression $$y_{it}=x_{it}'\beta+u_{it},$$ where $x_{it}$ in your case is $(1,X_1,X_2,X_3,X_4)$. Poolability tests test whether the alternative model is actually correct: $$y_{it}=x_{it}'\beta_i+u_{it}.$$ So the null hypothesis is that $\beta_i=\beta$. To test this hypothesis we need to estimate $\hat{\beta_i}$. In your case, you need to estimate 17 $\beta_i$. Since $T=5$, you are estimating regressions with 5 parameters from 5 observations. This of course gives you a lot of problems, since the usual practice for statistical packages in this case is to drop some of the variables from the regression. In general, if $T$ is small, do not test whether you can pool the data. Simply use a panel data regression and check whether the resulting model is appropriate.
null
CC BY-SA 2.5
null
2011-01-09T20:15:40.220
2011-01-09T20:15:40.220
null
null
2116
null
6106
2
null
6097
4
null
If you want to forecast time-series data, first you need to check whether it is [stationary](http://en.wikipedia.org/wiki/Stationary_process). Basically this means checking whether the data have trends. If, for example, some time trend is present, you can concern yourself only with its forecast, because time trends usually dominate everything else. For stationary time series it is good to use the [Box-Jenkins](http://en.wikipedia.org/wiki/Box-Jenkins) approach. This in the end will give you some kind of [ARMA](http://en.wikipedia.org/wiki/ARMA) model (the autoregressive model suggested by @whuber is a special case of this class). Since you have three time series, you may look into [VAR](http://en.wikipedia.org/wiki/Vector_autoregression) models. If you use R, the first step can be performed with the function `stl`, which comes with standard R. Autoregressive models can be fitted automatically by `auto.arima` in the package [forecast](http://cran.r-project.org/web/packages/forecast/). This function can either fit your desired model, or find the best specification for a certain definition of best. You might look into that package more, since it is specially designed for forecasting time series. For VAR models, use the `VAR` function from the [vars](http://cran.r-project.org/web/packages/vars/index.html) package. This package has a nice vignette describing its capabilities.
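A toy illustration of those calls on simulated series (nothing here is specific to the question's data; it only shows the workflow):

```r
library(forecast)
set.seed(1)
y <- arima.sim(list(ar = 0.6), n = 150) + 10   # one simulated stationary series
fit <- auto.arima(y)                           # pick an ARMA specification automatically
plot(forecast(fit, h = 10))                    # point forecasts with prediction intervals

library(vars)
Y <- cbind(y1 = as.numeric(y),
           y2 = as.numeric(arima.sim(list(ar = 0.4), n = 150)),
           y3 = as.numeric(arima.sim(list(ar = 0.2), n = 150)))
vfit <- VAR(Y, p = 2)                          # joint model for the three series
predict(vfit, n.ahead = 10)
```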
null
CC BY-SA 2.5
null
2011-01-09T20:48:21.907
2011-01-09T20:48:21.907
null
null
2116
null
6107
2
null
6104
1
null
Have you tried simply multiplying out the $Y_i^2$'s in terms of $X_1, X_2, X_3$? I suspect that when you are all done you will see that you simply have:

(1/2 + 1/3 + 1/6)*X1^2 + (1/2 + 1/3 + 1/6)*X2^2 + (1/2 + 1/3 + 1/6)*X3^2

with the cross terms cancelling. This, of course, assumes that X1X3 = X3X1, i.e. that your random variable algebra is commutative, but unless you are working on complex variables in particle physics, that assumption should hold. So far I have gotten about halfway there, and my approach seems to be holding up. It would seem more useful for you to go through the exercise than for me to display it.
null
CC BY-SA 2.5
null
2011-01-09T21:09:26.587
2011-01-09T21:09:26.587
null
null
2129
null
6108
1
6116
null
2
803
Suppose I have a biased coin (heads with probability p), and I keep flipping it until I get t tails. So I had to flip n times in total to get t tails. How do I find a (frequentist) confidence interval for the probability of heads p?
Confidence interval for success probability in negative binomial experiment
CC BY-SA 2.5
null
2011-01-09T21:57:48.190
2011-01-10T01:31:24.363
null
null
1106
[ "confidence-interval", "negative-binomial-distribution" ]
6109
1
6117
null
3
1899
This is a softmax probability distribution: $$P(i| w_1, w_2, \ldots, w_n) = \frac{\exp(w_i)}{\sum_{j=1}^n \exp(w_j)}.$$ It is also known as the Boltzmann distribution. It is used in the generalized Bradley-Terry model and in multinomial logistic regression. There are efficient minorization-maximization algorithms for inferring $\vec{w}$ from data through the maximum likelihood principle. I'm looking for a similar distribution, but extended with a "variance" parameter vector: a parameter that would represent the confidence in $\vec{w}$. I would then like to infer both $\vec{w}$ and its confidence. Does anybody happen to know such a distribution, or research on the topic?
Is there a SoftMax distribution with confidence parameters?
CC BY-SA 2.5
null
2011-01-09T22:08:34.650
2022-06-18T22:02:36.787
2011-01-10T02:14:39.147
919
217
[ "distributions", "maximum-likelihood", "bradley-terry-model" ]
6110
2
null
6109
2
null
The Maxwell distribution is the classical limit under conditions of high temperature and non-interacting wave functions of both Fermi-Dirac statistics and Bose-Einstein statistics. I would expect that you would want to look at the F-D statistics if you are interested in higher variance, since Bose-Einstein statistics lead to aggregations of particles, whereas F-D statistics (and the Pauli exclusion principle) are what keep neutron stars from further collapse: (F-D without normalization): \begin{align} P(i| w_1, w_2, \ldots, w_n) = \frac{exp(w_i)}{\sum_{i=1}^n exp(w_i) +1} \end{align} Edit: The Bradley-Terry model is a special case of a more general paired-choice model that was [proposed by Stern](http://biomet.oxfordjournals.org/content/77/2/265.abstract): In The [Springer Encyclopedia of Mathematics](https://encyclopediaofmath.org/wiki/Paired_comparison_model) you read: "H. Stern has considered, [a6], models for paired comparison experiments based on comparison of gamma random variables. Different values of the shape parameter yield different models, including the Bradley–Terry model and the Thurstone model. Likelihood methods can be used to estimate the parameters of the models. The likelihood equations must be solved with iterative methods." H. Stern, "A continuum of paired comparison models" Biometrika , 77 (1990) pp. 265–273 [http://biomet.oxfordjournals.org/content/77/2/265.abstract](http://biomet.oxfordjournals.org/content/77/2/265.abstract)
null
CC BY-SA 4.0
null
2011-01-09T22:28:29.373
2022-06-18T22:02:36.787
2022-06-18T22:02:36.787
361019
2129
null
6111
1
null
null
2
1260
Here is the problem: A survey contains 7 binary questions (Yes/No responses). If two people are answering the survey, what is the probability for their answers on 4 or more of the questions to match? In other words, if we have four or more matching answers, we can consider the overall survey response to be similar for both people.
The probability for two people to provide identical answers on survey questions
CC BY-SA 2.5
null
2011-01-09T22:42:13.717
2017-02-18T19:33:24.767
2017-02-18T19:33:24.767
28666
null
[ "combinatorics" ]
6112
2
null
6111
0
null
If for each question the probability of selecting the same answer is equal to 0.5, the answer is the following: $$\sum_{i=4}^7{\binom{7}{i}p^i(1-p)^{7-i}}$$ where $p=0.5.$ In this case it is a [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution).
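A quick numerical check in R (consistent with the simulation-style check in the other answer to this question):

```r
sum(dbinom(4:7, 7, 0.5))   # probability of 4 or more matches when p = 0.5; equals 0.5
```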
null
CC BY-SA 2.5
null
2011-01-09T23:22:43.173
2011-01-10T01:54:31.270
2011-01-10T01:54:31.270
919
1540
null
6113
2
null
6063
4
null
The effect sizes of interactions in a multivariate regression can be assessed in the same way as the effect sizes of any other predictor. The common thing is to look at the incremental contribution to R^2 (semi-partial R^2), but there are other possibilities, including Cohen's f^2 for nested models (this is a likelihood ratio test). Chapter 9 of Cohen's Statistical Power Analysis for the Behavioral Sciences has a very good discussion. It is true, too, that the effect sizes of product-term interactions tend to be "small" in terms of their incremental addition to R^2. But the practical effect of such an interaction can be very large. This point is especially important to bear in mind when the interaction involves some sort of treatment or intervention -- e.g., in a drug trial, the practical effect of an interaction between the treatment and some individual characteristic of a patient might contribute only a small amount to model R^2 but have a very appreciable effect on the clinical outcomes. See Rosenthal, R. & Rubin, D.B. A Note on Percent Variance Explained as A Measure of the Importance of Effects. J Appl Soc Psychol 9, 395-396 (1979); Abelson, R.P. A Variance Explanation Paradox: When a Little is a Lot. Psychological Bulletin 97, 129-133 (1985). Both this possibility and the challenge of trying to interpret (or explicate) the simultaneous importance of the coefficients for the predictor, moderator and product-interaction in a regression output tend to make reporting of the interaction's "effect size" uninformative; better, I'd say, is to illustrate (graphically) the effect size of the interaction in practical terms -- that is, by showing how changes in meaningful levels of the predictor ("high exposure vs. low exposure") and moderator ("being a man vs. being a woman") affect the outcome variable expressed in units that make sense given the context ("additional yrs of life"). I don't have as much experience with multilevel modeling, but I do know that the strategy I'm describing is the basic philosophy of Gelman, A. & Hill, J. Data Analysis Using Regression and Multilevel/Hierarchical Models (Cambridge University Press, Cambridge; New York; 2007) -- the greatest work on regression, in my opinion, after Cohen & Cohen!
null
CC BY-SA 2.5
null
2011-01-10T00:19:12.147
2011-01-10T00:19:12.147
null
null
11954
null
6114
2
null
6111
2
null
I assume that the survey will be answered independently by the participants. First, you need estimates for the baseline probabilities $p_{i}$ that question $i$ will be answered "yes". The probability of two persons answering "yes" for question $i$ is then $p_{i}^{2}$. Likewise, the probability of two persons answering "no" for question $i$ is $(1-p_{i})^{2}$, hence the probability of agreement is $p_{i}^{2} + (1-p_{i})^{2}$. If you assume that all $p_{i} = 0.5$, then you get the answer given by carlosdc since $0.5^{2} + (1-0.5)^{2} = 0.5$. If you allow the $p_{i}$ to vary, an answer can probably be given in closed form as well, but with only 7 questions, it's easy to simply enumerate all possibilities to get 4 or more agreements, and calculate the probability for each case.

```
> n <- 7            # number of questions
> p <- rep(0.5, n)  # probabilities p_i, here: set all to 0.5
# p <- c(0.4, 0.4, 0.4, 0.4, 0.1, 0.1, 0.1)  # alternative: let p_i vary
> k <- 4:7          # number of agreements to check
# k <- 0:7          # check: result (total probability) should be 1

# vector to hold probability for each number of agreements
> res <- numeric(length(k))

# function to calculate the probability for an event with agreement on the
# questions x and disagreement on the remaining questions
> getP <- function(x) {
+     tf <- 1:n %in% x             # convert numerical to logical index vector
+     pp <- p[tf]^2 + (1-p[tf])^2  # probabilities of agreeing on questions x
+
+     # probabilities of disagreeing on remaining questions
+     qq <- 1 - (p[!tf]^2 + (1-p[!tf])^2)
+     prod(pp) * prod(qq)          # total probability
+ }

# for each number of agreements: calculate probability
> for(i in seq(along=res)) {
+     # all choose(n, k) possibilities to have k agreements
+     poss <- combn(1:n, k[i])
+
+     # probability for each of those possibilities, edit: take 0-length into account
+     if (length(poss) > 0) {
+       res[i] <- sum(apply(poss, 2, getP))
+     } else {
+       res[i] <- getP(numeric(0))
+     }
+ }

> res               # probability for 4, 5, 6, 7 agreements
[1] 0.2734375 0.1640625 0.0546875 0.0078125

> dbinom(k, n, 0.5) # check: all p_i = 0.5 -> binomial distribution
[1] 0.2734375 0.1640625 0.0546875 0.0078125

> sum(res)          # probability for 4 or more agreements
[1] 0.5
```

The R code could certainly be simplified; also, `prod()` might be worse in terms of error propagation with small numbers than `exp(sum(log()))`, although I'm not sure about that.
null
CC BY-SA 2.5
null
2011-01-10T01:07:09.440
2011-01-10T19:04:34.480
2011-01-10T19:04:34.480
1909
1909
null
6115
2
null
6109
2
null
I am not sure, but I think your probability model is a special case of a [multinomial logit](http://en.wikipedia.org/wiki/Multinomial_logit) model with no covariates and only the intercept terms ($w_i$ will be the intercepts). This model is a special case of a GLM, and hence there exists an iteratively reweighted least squares (IRWLS) method to get the maximum likelihood estimates of the $w_i$. If you don't want to code the IRWLS algorithm yourself, please check the `polr` function in the `MASS` library in R to accomplish the ML estimation of the $w_i$. EDIT: As @whuber points out, unless you adopt a Bayesian approach, there is no distribution for the parameters (they are assumed fixed), but there exists one for the estimates (here: ML), as they are estimated from the data, hence the randomness. HTH S.
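For what it's worth, here is a minimal sketch (my addition) of fitting an intercept-only multinomial logit in R; it uses `multinom()` from the `nnet` package rather than the function mentioned above, and the data are made up:

```
library(nnet)

# Hypothetical data: observed choices among three alternatives.
set.seed(1)
choice <- factor(sample(c("A", "B", "C"), size = 300, replace = TRUE,
                        prob = c(0.2, 0.3, 0.5)))

fit <- multinom(choice ~ 1)    # intercept-only multinomial logit
coef(fit)                      # estimated intercepts, relative to reference category "A"
summary(fit)$standard.errors   # their standard errors
```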
null
CC BY-SA 2.5
null
2011-01-10T01:26:19.127
2011-01-10T02:51:03.933
2011-01-10T02:51:03.933
1307
1307
null
6116
2
null
6108
3
null
If you are looking for ML estimation (which may or may not be what you want), please check the `fitdistr` function in the `MASS` library in `R`. This function can estimate the unknown parameters using ML estimation. But please be careful about the parametrization of the negative binomial distribution used by `MASS`. Apart from the estimates of the unknown parameters, it reports their standard errors, from which approximate confidence intervals can be obtained. HTH S.
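A minimal sketch (my addition) on simulated counts; note that `fitdistr` uses the size/mu parametrization of the negative binomial:

```
library(MASS)

set.seed(42)
x <- rnbinom(500, size = 2, mu = 5)                # simulated counts

fit <- fitdistr(x, densfun = "negative binomial")  # ML fit (size/mu parametrization)
fit$estimate                                       # ML estimates of size and mu
fit$sd                                             # their standard errors
fit$estimate - 1.96 * fit$sd                       # rough lower 95% limits
fit$estimate + 1.96 * fit$sd                       # rough upper 95% limits
```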
null
CC BY-SA 2.5
null
2011-01-10T01:31:24.363
2011-01-10T01:31:24.363
null
null
1307
null
6117
2
null
6109
8
null
This question appears to confuse two distinct things. Any additional parameter in the model would (by definition) describe the distribution of $i$, not the distributions of any of the $w_i$. Unless you adopt a Bayesian prior for $\vec{w}$ (which does not seem to be part of this question), the parameters do not have any distribution at all: they are what they are. When you use a particular procedure to estimate $\vec{w}$, however, then the estimates $(\hat{w_i})$ do have a distribution. It makes sense to talk about the variance of that distribution. It can be estimated in standard ways, such as the [inverse of the expected negative Hessian of the log likelihood (the Fisher information)](http://en.wikipedia.org/wiki/Maximum_likelihood). It is unnecessary--and meaningless--to introduce yet another parameter to capture that information.
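A minimal sketch (my addition) of this idea in R, using a toy intercept-only multinomial model with made-up data; the standard errors of the estimates come from the inverse of the numerically evaluated Hessian of the negative log-likelihood:

```
set.seed(1)
counts <- as.vector(rmultinom(1, size = 500, prob = c(0.2, 0.3, 0.5)))

# Negative log-likelihood in the free weights w_2, w_3 (w_1 fixed at 0 for identifiability).
negll <- function(w) {
  z <- c(0, w)
  -sum(counts * (z - log(sum(exp(z)))))
}

fit <- optim(c(0, 0), negll, hessian = TRUE)
se  <- sqrt(diag(solve(fit$hessian)))   # approximate standard errors of the estimates
rbind(estimate = fit$par, std.error = se)
```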
null
CC BY-SA 2.5
null
2011-01-10T02:14:16.637
2011-01-10T02:14:16.637
null
null
919
null
6119
1
197272
null
7
5341
I'm analyzing people based on their twitter stream. We are using a 'word bag' model of users, which basically amounts to counting how often each word appears in a person's twitter stream (and then using that as a proxy for a more normalized 'probability they will use a given word' in a particular length of text). Due to constraints further down the pipeline, we cannot retain full data on usage of all words for all users, so we are trying to find the most 'symbolically efficient' words to retain in our analysis. That is, we're trying to retain a subset of dimensions which, if their values were known, would allow a hypothetical seer to most accurately model the probabilities of all words (including any we left out of the analysis). So a principal components analysis (PCA) type approach seems an appropriate first step (happily ignoring for now the fact that PCA would also 'rotate' us into dimensions that don't correspond to any particular word). But I am reading that ["Zipf distributions .. characterize the use of words in a natural language (like English) "](http://www.useit.com/alertbox/zipf.html), and as far as I know, PCA makes various assumptions about the data being normally distributed. So, I'm wondering whether the fundamental assumptions of PCA will be sufficiently far 'off' from reality to be a real problem. That is, does PCA rely on the data being 'close to' Gaussian for it to work at all well? If this is a problem, as I suspect, are there any other recommendations? That is, some other approach worth investigating that is 'equivalent' to PCA in some way but more appropriate for Zipf or power-law distributed data? Note that I am a programmer, not a statistician, so apologies if I messed up my terminology in the above. (Corrections of course welcomed!)
Is principal components analysis valid if the distribution(s) are Zipf-like? What would be similar to PCA but suited to non-Gaussian data?
CC BY-SA 2.5
null
2011-01-10T05:50:59.077
2023-01-23T16:08:31.280
2011-03-15T19:05:03.893
8
2717
[ "pca", "normal-distribution", "zipf" ]
6121
2
null
6104
4
null
We have $X_1\sim N(\mu_1,\sigma^2)$ and $X_2\sim N(\mu_2,\sigma^2)$, hence $$EY_1=E(-X_1/\sqrt{2}+X_2/\sqrt{2})=-1/\sqrt{2}EX_1+1/\sqrt{2}EX_2=0,$$ since the means are equal. Also, \begin{align*} EY_1^2&=E(-X_1/\sqrt{2}+X_2/\sqrt{2})^2\\ &=E(X_1/\sqrt{2})^2-2E(X_1X_2/2)+E(X_2/\sqrt{2})^2\\ &=\tfrac{1}{2}\sigma^2+\tfrac{1}{2}\sigma^2=\sigma^2, \end{align*} where the terms involving the common mean cancel because $X_1$ and $X_2$ are independent. Hence $Y_1\sim N(0,\sigma^2)$, since it is a linear combination of independent normal variables. Similarly we get $Y_2\sim N(0,\sigma^2)$ and $Y_3\sim N(0,\sigma^2)$. Now $$EY_1Y_2=1/\sqrt{6}EX_1^2-1/\sqrt{6}EX_2^2=0,$$ and similarly $EY_2Y_3=EY_1Y_3=0$; hence $Y_1$, $Y_2$ and $Y_3$ are independent, since for jointly normal variables independence coincides with zero correlation. Having established this, we have $$(Y_1^2+Y_2^2+Y_3^2)/\sigma^2=\left(\frac{Y_1}{\sigma}\right)^2+\left(\frac{Y_2}{\sigma}\right)^2+\left(\frac{Y_3}{\sigma}\right)^2=Z_1^2+Z_2^2+Z_3^2,$$ where $Z_i=Y_i/\sigma$. Since $Y_i\sim N(0,\sigma^2)$, we have $Z_i\sim N(0,1)$. We have shown that our quantity of interest is a sum of squares of 3 independent standard normal variables, which by definition is $\chi^2$ with 3 degrees of freedom. As I've said in the comments, you do not need to calculate the densities. If, on the other hand, you want to do that, your formula is wrong. Here is why. Denote by $G(x)$ the distribution of $Y_1^2$ and by $F(x)$ the distribution of $Y_1$. Then we have $$G(x)=P(Y_1^2<x)=P(-\sqrt{x}<Y_1<\sqrt{x})=F(\sqrt{x})-F(-\sqrt{x}).$$ Now the density of $Y_1^2$ is $G'(x)$, so $$G'(x)=\frac{1}{2\sqrt{x}}\left(F'(\sqrt{x})+F'(-\sqrt{x})\right).$$ We have that $$F'(x)=\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^2}{2\sigma^2}},$$ so $$G'(x)=\frac{1}{\sigma\sqrt{2\pi x}}e^{-\frac{x}{2\sigma^2}}.$$ If $\sigma^2=1$, this is the pdf of a $\chi^2$ with one degree of freedom. (Note that for $Z_1$ instead of $Y_1$ the calculation is the same with $\sigma^2=1$.) As @whuber pointed out, this is a [gamma](http://en.wikipedia.org/wiki/Gamma_distribution) distribution, and a sum of independent gamma random variables with a common scale parameter is again gamma; the exact formula is provided on the Wikipedia page.
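As a quick numerical sanity check (my addition, not part of the original answer), one can simulate the final claim directly in R:

```
set.seed(123)
sigma <- 2
Y <- matrix(rnorm(3 * 1e5, mean = 0, sd = sigma), ncol = 3)
q <- rowSums(Y^2) / sigma^2       # sum of squares of 3 N(0, sigma^2) variables, scaled
c(mean = mean(q), var = var(q))   # should be close to 3 and 6, respectively
ks.test(q, "pchisq", df = 3)      # compare against the chi-square(3) distribution
```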
null
CC BY-SA 3.0
null
2011-01-10T08:10:03.080
2014-04-17T19:27:03.137
2014-04-17T19:27:03.137
37240
2116
null
6122
1
6124
null
2
1062
I have two questions. First, I need a single-covariate logistic regression (LR) for each of my variables. Should I do this manually in SPSS, selecting each variable and running a logistic regression in turn? Is there a "for each" loop to do it, or should I switch to the R language to get what I want? Second, in the multivariable (multiple-covariate) LR, can I have missing values? Thanks!!
Simple and multiple logistic regression
CC BY-SA 2.5
null
2011-01-10T09:54:02.767
2011-01-10T15:04:25.203
2011-01-10T10:37:10.423
930
2719
[ "r", "logistic", "spss" ]
6123
1
6140
null
5
515
I have a sample survey in which the distributions of certain characteristics are not identical to the distributions in the overall population. For example, the ages of my respondents may be biased downward, or the incomes in my sample may be too high compared to the population (or to my theoretical population distribution, if the actual population distribution is unknown). I know that it is possible to calculate a weighting coefficient that can adjust the distribution of the sample to match the population distribution on one dimension, but is it possible to adjust for two dimensions (e.g., age and income)? If so, couldn't there be a situation where there is no solution (no single weighting coefficient that will adjust the sample distribution to the population distribution on both dimensions)?
Is it possible to weight survey observations to more than one distribution?
CC BY-SA 2.5
null
2011-01-10T10:06:20.720
2011-01-10T20:17:54.023
null
null
1195
[ "distributions", "survey" ]
6124
2
null
6122
4
null
If I understand you correctly, you want to fit a series of simple (single-covariate) logistic regression models, one per variable. I don't know whether there's a specific instruction in SPSS that lets you switch the covariate of interest or cycle through them, but you can certainly run the models in succession. In R, if your data are organized in a matrix or data.frame, this is easily done, e.g. with simulated data:

```
X <- replicate(2, rnorm(100))                  # two covariates
y <- rbinom(100, 1, 0.5)                       # binary response
fits <- lapply(seq_len(ncol(X)),
               function(j) glm(y ~ X[, j], family = binomial))
lapply(fits, summary)                          # one single-covariate model per column
```

About your second question, models like this are generally estimated using listwise deletion: any individual having one or more missing observations on the covariates is dropped before the model parameters are estimated. Again, in R:

```
X[2, 2] <- NA
summary(glm(y ~ X, family = binomial))
```

shows that one observation has been deleted, leaving 96 residual degrees of freedom (instead of 97).
null
CC BY-SA 2.5
null
2011-01-10T10:33:39.490
2011-01-10T10:33:39.490
null
null
930
null