Dataset schema (one record per post): Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
7566
2
null
7521
3
null
Actually, @mpiktas's comment is the answer to your particular question. Sales models are usually multiplicative by nature (some intuition can be found in the Market response models [book](http://books.google.ru/books?id=xZyJamKdpIsC&printsec=frontcover&dq=market+response+models&source=bl&ots=0-G9pzeYtd&sig=Qgt51utD-Hcg1E0kgvgMdC8_2ZI&hl=ru&ei=3llmTeKjJtHJswa0yMHaDA&sa=X&oi=book_result&ct=result&resnum=4&ved=0CDsQ6AEwAw#v=onepage&q&f=false)). There are also a number of reasons for taking logs discussed for ARIMA models in my earlier [post](https://stats.stackexchange.com/questions/6330/when-to-log-transform-a-time-series-before-fitting-an-arima-model/6348#6348). In your case it is the scale effect that troubles you, so a log transformation works well here. Another useful trick is to divide by some size variable (plot size of the store, number of workers, etc.), so moving to fractions can also help. In addition to your question, pay attention to other important explanatory variables: location or population density, size, variety of product categories and their average prices, number of workers, distances to rival shops, etc. Omitting them will give you estimates with poor properties (probably biased and inconsistent). Regulation can't be used as the sole explanatory variable in this context.
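To make the two tricks above concrete (logging the response, and scaling by a size variable), here is a minimal R sketch; the data frame and variable names (`sales`, `floor_area`, `price`) are hypothetical placeholders, not from the original question.

```r
# hypothetical store-level data with a size variable
set.seed(1)
shops <- data.frame(
  floor_area = runif(200, 50, 500),
  price      = runif(200, 1, 10)
)
shops$sales <- exp(0.8 * log(shops$floor_area) - 0.5 * log(shops$price) + rnorm(200, 0, 0.2))

# multiplicative (log-log) model: coefficients are elasticities
fit_log <- lm(log(sales) ~ log(floor_area) + log(price), data = shops)

# alternative: model sales per unit of size instead of raw sales
fit_ratio <- lm(I(sales / floor_area) ~ price, data = shops)

summary(fit_log)
```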
null
CC BY-SA 2.5
null
2011-02-24T13:20:24.633
2011-02-24T13:20:24.633
2017-04-13T12:44:42.893
-1
2645
null
7567
2
null
7450
2
null
Following the detailed answer of the user cardinal, I performed the chi-square test on my presumably truncated Zipf distribution. The results of the chi-square test are reported in the following table: ![enter image description here](https://i.stack.imgur.com/q3OPH.png) Here StartInterval and EndInterval represent, for example, the range of calls, and Observed is the number of callers generating from 0 to 19 calls, and so on. The chi-square test is fine until the last columns are reached; they inflate the final statistic, whereas up to that point the "partial" chi-square value was acceptable. With other tests the result is the same: the last column (or the last 2 columns) always inflates the final value, and I don't know why, nor whether (and how) to use another validation test. PS: for completeness, to calculate the expected values (Expected) I followed cardinal's suggestion in this way: ![enter image description here](https://i.stack.imgur.com/e18If.png) where the X_i's are used to calculate `x <- (1:n)^-S`, the P_i's to calculate `p <- x / sum(x)`, and finally E_i (the expected number of users for each number of calls) is obtained by `P_i * Total_Caller_Observed`. With 13 degrees of freedom, the chi-square goodness-of-fit test always rejects the hypothesis that the sample follows a Zipf distribution, because the test statistic (64.14 in this case) is larger than the value reported in the chi-square tables, the "demerit" coming from the last column. The graphical result is reported here: ![enter image description here](https://i.stack.imgur.com/um3Eg.png) Although the truncation point is set to 500, the maximum value obtained is 294. I think that the final "dispersion" is the cause of the failure of the chi-square test. UPDATE!! I tried to perform the chi-square test on a presumably Zipf data sample generated with the R code reported in the answer above. ``` > x <- (1:500)^(-2) > p <- x / sum(x) > y <- sample(length(p), size=300000, repl=TRUE, prob=p) > tab <- table(y) > length(tab) [1] 438 > plot( 1:438, tab/sum(tab), log="xy", pch=20, main="'Truncated' Zipf simulation (truncated at i=500)", xlab="Response", ylab="Probability" ) > lines(p, col="red", lwd=2) ``` The associated plot is the following: ![enter image description here](https://i.stack.imgur.com/kE77i.png) The chi-square test results are reported in the following figure: ![enter image description here](https://i.stack.imgur.com/dElAN.png) and the chi-square test statistic (44.57) is too high for validation with the chosen degrees of freedom. Also in this case the final "dispersion" of the data is the cause of the high chi-square value. But is there a procedure to validate this Zipf distribution (regardless of my "wrong" generator; I want to focus on the R data sample)?
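For reference, here is a minimal R sketch of the computation described above (expected counts from a truncated Zipf with exponent `S`, then the chi-square statistic); the bin edges and observed counts below are hypothetical placeholders, not the actual StartInterval/EndInterval table.

```r
# truncated Zipf probabilities as described in the post
S <- 2; n <- 500
x <- (1:n)^(-S)
p <- x / sum(x)

# hypothetical binning of the support into 14 intervals (placeholder values)
breaks   <- c(0, 20, 40, 60, 80, 100, 130, 160, 200, 250, 300, 350, 400, 450, 500)
observed <- c(2100, 260, 95, 50, 32, 30, 20, 18, 15, 9, 7, 5, 4, 3)

# expected counts: bin probability times the total number of callers
p_bin    <- sapply(seq_len(length(breaks) - 1), function(k) sum(p[(breaks[k] + 1):breaks[k + 1]]))
expected <- p_bin * sum(observed)

chi_sq <- sum((observed - expected)^2 / expected)
pchisq(chi_sq, df = 13, lower.tail = FALSE)   # df = 13, matching the post
```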
null
CC BY-SA 2.5
null
2011-02-24T13:41:49.787
2011-03-14T13:42:16.817
2011-03-14T13:42:16.817
3342
3342
null
7568
2
null
7563
8
null
One useful technique is Monte Carlo testing. If there are two algorithms that do the same thing, implement both, feed them random data, and check that (to within a small tolerance for numerical fuzz) they produce the same answer. I've done this several times before: - I wrote an efficient but hard-to-implement $O(N \log N)$ implementation of Kendall's Tau B. To test it I wrote a dead-simple 50-line implementation that ran in $O(N^2)$. - I wrote some code to do ridge regression. The best algorithm for doing this depends on whether you're in the $n > p$ or $p > n$ case, so I needed two algorithms anyhow. In both of these cases I was implementing relatively well-known techniques in the D programming language (for which no implementation existed), so I also checked a few results against R. Nonetheless, the Monte Carlo testing caught bugs I never would have caught otherwise. Another good test is [asserts](http://en.wikipedia.org/wiki/Assertion_%28computing%29). You may not know exactly what the correct results of your computation should be, but that doesn't mean that you can't perform sanity checks at various stages of the computation. In practice, if you have a lot of these in your code and they all pass, then the code is usually right. Edit: A third method is to feed the algorithm data (synthetic or real) where you know at least approximately what the right answer is, even if you don't know exactly, and see by inspection if the answer is reasonable. For example, you may not know exactly what the estimates of your parameters are, but you may know which ones are supposed to be "big" and which ones are supposed to be "small".
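A minimal R sketch of this kind of Monte Carlo cross-check, comparing a brute-force $O(N^2)$ Kendall's tau against R's built-in implementation on random, tie-free data. This only illustrates the idea; it is not the D code referred to above.

```r
# brute-force O(N^2) Kendall's tau (tau-a, which equals tau-b when there are no ties)
kendall_naive <- function(x, y) {
  n <- length(x); s <- 0
  for (i in 1:(n - 1))
    for (j in (i + 1):n)
      s <- s + sign(x[i] - x[j]) * sign(y[i] - y[j])
  s / (n * (n - 1) / 2)
}

set.seed(42)
for (rep in 1:100) {                     # feed both implementations random data
  x <- rnorm(50); y <- rnorm(50)         # continuous draws, so no ties
  stopifnot(abs(kendall_naive(x, y) - cor(x, y, method = "kendall")) < 1e-8)
}
```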
null
CC BY-SA 2.5
null
2011-02-24T14:15:01.640
2011-02-24T14:46:27.493
2011-02-24T14:46:27.493
1347
1347
null
7569
2
null
4753
0
null
I suggest that you look at the 2009 paper by Leng and Wang in JCGS: [http://pubs.amstat.org/toc/jcgs/18/1](http://pubs.amstat.org/toc/jcgs/18/1) If this is what you want, the authors supply R code in the supplementary materials.
null
CC BY-SA 2.5
null
2011-02-24T14:24:17.157
2011-02-24T14:24:17.157
null
null
2773
null
7570
2
null
6886
0
null
If we are talking about, for example, a neural network that is using back-propagation to learn, then a better way of thinking about this is that you are trying to set a,b,c,d,e using the neural network. Your fitness function is fine, but that would be what your neural network is optimizing (a single output). You don't select a,b,c,d,e; you use a method like back-propagation of errors to set these for you. That's normally the point of using the neural network (not having to solve for the a,b,c,d,e coefficients yourself). If, on the other hand, you're really convinced you don't want to use the iterative approach, you might look at the method of Dr. Hu of Southern Illinois University (Carbondale), who developed a non-iterative approach that lets you directly solve for the neural network coefficients. Here's an article on it: [article in 1996 Proceedings of World Congress on Neural Networks](http://books.google.com/books?id=bl9CyjErsusC&pg=PA416&lpg=PA416&dq=hu+carbondale+neural&source=bl&ots=--fEDuu5pk&sig=fr3lTT_BAon9f2ur3wzmxlB2jT8&hl=en&ei=-WtmTbGGGImjtgfP66TmAw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBQQ6AEwAA) Myself, I would use your fitness function as the (single) output you're training the network to optimize, and use back-propagation (supported by many open-source neural network simulators) to find the input coefficients. But Hu's approach (above) is workable if for some reason you don't want to use the iterative approach to find them.
null
CC BY-SA 2.5
null
2011-02-24T14:37:31.103
2011-02-24T14:37:31.103
null
null
2917
null
7571
1
7632
null
5
916
I'm trying to estimate the state of a Gaussian random walk with central tendency based on time series measurements with varying uncertainties. My random variable has the following form: $ \frac{d x}{d t} \equiv F(t) - \alpha x $ where $F$ is a Gaussian random variable. I've noticed that this problem is analogous to the velocity of a bubble experiencing Brownian motion. (See, for example, F. Reif, Fundamentals of Statistical and Thermal Physics, p. 565). As a result of the $ -\alpha x $ term, the position has a central tendency (i.e. the variance does not become infinite as time approaches infinity). Now, like any good physicist, I know that I cannot exactly measure the value $x$. The best I can do is to measure it at time $ t_i $ within some uncertainty, $ \sigma_i $. Using a Kalman filter, I can estimate the value of $x$ from several measurements. Let's call that $ \hat x $. The approach is as follows. For each measurement, we compute: > $ \delta t = t_i - t_{i-1} $ $ P(t) = P(t_{i-1}) * e^{-\alpha\, \delta t} + \langle x^2 | \delta t \rangle $ $ K = {{P}\over{P + \sigma_i}} $ Our incoming estimate of $x_i$: $ \hat x_{i-} = \hat x_{i-1} \, e^{-\alpha\,\delta t} $ $ \hat x_i = \hat x_{i-} + K [x_{obs} - \hat x_{i-}] $ This works great for propagating our estimates forward in time. My question is: If I have measurements at times that span the time at which I want the best estimate, how do I compute an $\hat x(t) $ where $ t_i < t < t_{i+1} $?
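For concreteness, here is a minimal R sketch of the forward recursion exactly as written above (scalar state). It treats $\sigma_i$ as the measurement variance and represents $\langle x^2 \mid \delta t\rangle$ by a user-supplied process-noise function `q_dt`; both of those, and the standard posterior-variance update added at the end of each step, are assumptions on my part rather than part of the question.

```r
# forward Kalman recursion as described in the post (scalar state)
kalman_forward <- function(t, z, sigma2, alpha, q_dt, x0 = 0, P0 = 1) {
  n <- length(t)
  xhat <- numeric(n); P <- numeric(n)
  x_prev <- x0; P_prev <- P0; t_prev <- t[1]
  for (i in seq_len(n)) {
    dt     <- t[i] - t_prev
    x_pred <- x_prev * exp(-alpha * dt)                # propagate the estimate
    P_pred <- P_prev * exp(-alpha * dt) + q_dt(dt)     # propagate the variance, as in the post
    K      <- P_pred / (P_pred + sigma2[i])            # gain, as written in the post
    xhat[i] <- x_pred + K * (z[i] - x_pred)            # measurement update
    P[i]    <- (1 - K) * P_pred                        # standard posterior-variance update (assumed)
    x_prev <- xhat[i]; P_prev <- P[i]; t_prev <- t[i]
  }
  list(xhat = xhat, P = P)
}
```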
How to apply a Kalman filter to use both previous and future measurements of a random variable?
CC BY-SA 2.5
null
2011-02-24T14:48:35.657
2011-02-27T02:18:19.110
2020-06-11T14:32:37.003
-1
3405
[ "regression", "estimation", "kalman-filter" ]
7572
2
null
7499
2
null
Here is a heuristic: if the practical importance of the result depends a lot on the value of the point estimate, use CIs; if the practical importance of the result depends mainly on the existence/magnitude of the effect, consider leaving them out. Here is why: error bars correct one sort of misunderstanding but invite others. The misunderstanding they correct is obvious: people assume too much precision in the point estimate. But the ones they invite can be bad too; these include that all values within the interval are "equally likely"; that "big confidence intervals" are "bad"; & that point estimates w/ overlapping CIs are "not significantly different" from each other. If a particular result has practical meaning b/c of the point estimate (e.g., the likelihood a candidate will win an election or candidates' vote share in an election), the former misunderstanding is more consequential -- that is, people will too likely make a mistake in relying on the result if they don't see the imprecision of it. If a particular result, however, is practically significant b/c it discloses an effect that people wouldn't otherwise likely perceive -- consider a small-sample experiment that shows some framing-effect manipulation induces a large change in the valence of subjects' affective reaction to a political candidate's message, where the effect is obviously there but the scale for measuring it is not that important & CIs are big relative to the dimensions of the scale b/c of the small sample -- then the CI will often add little information, clutter things up, & invite the sort of "a little knowledge is dangerous" types of confusion I mentioned. Another approach is to try to find some alternative to convey precision of the estimate w/o inviting the typical misunderstanding of CIs. Some researchers try using graphics w/ multi-color shadings around the point estimate, or elongated diamond shapes for pt estimates, to denote the probability density of likely "true" values around the point estimate. My understanding is that people who have examined these alternatives conclude that they are confusing too... BTW, bar graphs are usually a pretty rotten way to convey information. They are classic Tufte chartjunk. There are lots of better alternatives.
null
CC BY-SA 2.5
null
2011-02-24T14:56:55.430
2011-02-24T14:56:55.430
null
null
11954
null
7573
2
null
7555
6
null
Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS "[macros for performing meta-analytic analyses](http://mason.gmu.edu/~dwilsonb/ma.html)". One of these macros is called `MetaReg`, which can perform fixed-effect or mixed-effects meta-regression. I would always use [Stata](http://www.stata.com/support/faqs/stat/meta.html) or R. By the way, user [Wolfgang](https://stats.stackexchange.com/users/1934/wolfgang) is the author of an R package called [metafor](http://www.metafor-project.org/). This is an excellent piece of software to conduct meta-regression. As a general (non-technical) intro to meta-regression, I can recommend Thompson/Higgins (2002) "[How should meta-regression analyses be undertaken and interpreted?](http://www.ncbi.nlm.nih.gov/pubmed/12111920)". Now to your question: Q1: What is the minimum number of studies necessary for a meta-regression? Some people suggest at least 10 studies are required. Why not 20 or 5 studies? The answer can be found in [Borenstein et al (2009: 188)](http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470057246.html): > "As is true in primary studies, where we need an appropriately large ratio of subjects to covariates in order for the analysis to be meaningful, in meta-analysis we need an appropriately large ratio of studies to covariates. Therefore, the use of metaregression, especially with multiple covariates, is not a recommended option when the number of studies is small. In primary studies some have recommended a ratio of at least ten subjects for each covariate, which would correspond to ten studies for each covariate in meta-regression. In fact, though, there are no hard and fast rules in either case." Q2: Is the total sample size an important consideration? What is total sample size? The number of studies? Yes, it is important. Or the number of individuals? No, it is not (or less) important. Q3: Why would 10 studies with 200 patients be enough, but 5 studies with 400 patients not be enough? It is just a(n ordinary) regression. You wouldn't run a regression with 5 data points, would you? In your comment, you state that you have 20 studies, which is enough to run a meta-regression. Q4: Can I enter all three regressors at once and report the global model, or do I have to enter one regressor at a time and report 3 models each one separately? It is just a regression. I would start with three simple bivariate models and then build more complex models (be aware of multicollinearity, see below). Q5: How does the correlation between the independent variables affect this choice? A high correlation between your predictor variables will have a (negative) impact on your results. You should avoid that. Please consult a textbook for the problem of [multicollinearity](http://en.wikipedia.org/wiki/Multicollinearity). Q6: How does the number of studies affect the number of independent variables that I should enter simultaneously? See the Borenstein et al citation. Q7: Does the independent variable have to be a scale variable? [...] The independent variable must be also scale, or could be ordinal or nominal? What is a "scale variable"? Do you mean a continuous/metric variable? Your predictor variables can have any [level of measurement](http://en.wikipedia.org/wiki/Level_of_measurement).
However, if you have a categorical (nominal) predictor variable, you will have to deal with dummy variables (see [Multiple Regression with Categorical Variables](http://www.psychstat.missouristate.edu/multibook/mlt08m.html)). Q8: How can I weight my effect size for sample size? As far as I know, all meta-regression approaches expect the weights to be the inverse study variance, i.e. $\frac{1}{v_i}=\frac{1}{SE_i^2}$. Again, you will need standard errors :-) Q9: What is the preferable level of significance? Is p<0.05 still acceptable for clinical research in such an analysis? I cannot answer your question. That really depends on your research question. In my (non-clinical) research I am happy with p < 0.10.
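For completeness, a minimal sketch of a mixed-effects meta-regression in R with the metafor package mentioned above; the data frame and moderator names are placeholders, `yi` are the effect sizes and `vi` their sampling variances (i.e. squared standard errors), which provide exactly the inverse-variance weights discussed under Q8.

```r
library(metafor)

# dat has one row per study: effect size yi, sampling variance vi = SE^2,
# and hypothetical moderators mod1, mod2, mod3
res <- rma(yi, vi, mods = ~ mod1 + mod2 + mod3, data = dat, method = "REML")
summary(res)   # moderator coefficients, QE (residual heterogeneity), QM (omnibus moderator test)
```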
null
CC BY-SA 2.5
null
2011-02-24T15:18:02.870
2011-02-24T15:18:02.870
2017-04-13T12:44:36.923
-1
307
null
7574
2
null
7308
3
null
The Hessian is indefinite at a saddle point. It’s possible that this may be the only stationary point in the interior of the parameter space. Update: Let me elaborate. First, let’s assume that the empirical Hessian exists everywhere. If $\hat{\theta}_n$ is a local (or even global) minimum of $\sum_i q(w_i, \cdot)$ and in the interior of the parameter space (assumed to be an open set), then necessarily the Hessian $(1/N) \sum_i H(w_i, \hat{\theta}_n)$ is positive semidefinite. If not, then $\hat{\theta}_n$ is not a local minimum. This follows from second-order optimality conditions: locally $\sum_i q(w_i, \cdot)$ must not decrease in any direction away from $\hat{\theta}_n$. One source of the confusion might be the "working" definition of an M-estimator. Although in principle an M-estimator should be defined as $\arg\min_\theta \sum_i q(w_i, \theta)$, it might also be defined as a solution to the equation $$0 = \sum_i \dot{q}(w_i, \theta)\,,$$ where $\dot{q}$ is the gradient of $q(w, \theta)$ with respect to $\theta$. This is sometimes called the $\Psi$-type. In the latter case a solution of that equation need not be a local minimum. It can be a saddle point, and in this case the Hessian would be indefinite. Practically speaking, even a positive definite Hessian that is nearly singular or ill-conditioned would suggest that the estimator is poor and you have more to worry about than estimating its variance.
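A tiny numerical illustration of the saddle-point case in R (not tied to any particular M-estimator): the gradient vanishes at the origin, yet the Hessian has eigenvalues of both signs, i.e. it is indefinite.

```r
# f(a, b) = a^2 - b^2 has a stationary point at (0, 0) that is a saddle, not a minimum
f <- function(th) th[1]^2 - th[2]^2
H <- matrix(c(2, 0, 0, -2), 2, 2)   # analytic Hessian at the origin
eigen(H)$values                     # 2 and -2: indefinite, so (0, 0) is not a local minimum
```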
null
CC BY-SA 2.5
null
2011-02-24T15:33:43.887
2011-02-27T20:06:09.503
2011-02-27T20:06:09.503
1670
1670
null
7575
2
null
7549
1
null
My first thought would be to regress education (using a proportional odds model or whatever is appropriate for your education variable) on person-level variables and a few simple transportation choice aggregates. The main variable that comes to mind is the proportion of train vs. bus rides (%train), but if you only have two event level variables -- distance and duration -- then another option would be %train-near, %train-far, %train-short, %train-long. If something simple like the above won't work because you have too many event level variables or you're not willing to categorize them, then your first thought of using a logistic regression with random effects for person-level variables (I presume) is the right idea. However, I would modify your suggestion by using a structural equation model (SEM) to regress education on transportation choice, which is in turn regressed on event and person level variables (except for education) and the random effects. Education can additionally be regressed directly on the event and person level variables. All regressions are estimated simultaneously. This can be done in Mplus, but currently is not possible in R, as far as I know, because none of the SEM packages (lavaan, sem, e.g.) allow for mixed effects like those offered by the lme4 package. It can probably be done in SAS with a lot of coding. No idea about other software. Is your second thought of regressing education on combinations of your predictors feasible given the number of combinations and amount of data? How many event and person level variables do you have? Latent class regression wouldn't make sense for your data because individual response patterns aren't comparable (e.g. person 1 might have chosen 00 for near-short, near-short and person 2 might have chosen 0000 for far-long, far-long, far-long, far long -- you could recode response vectors with a lot of missing values, but there are better approaches).
null
CC BY-SA 2.5
null
2011-02-24T16:39:05.287
2011-02-24T16:39:05.287
null
null
3408
null
7576
2
null
7535
5
null
My understanding is that zero-inflated distributions should be used when there is a rationale for certain items to produce counts of zeroes versus any other count. In other words, a zero-inflated distribution should be used if the zeroes are produced by a separate process than the one producing the other counts. If you have no rationale for this, given the overdispersion in your sample, I suggest using a negative binomial distribution because it accurately represents the abundance of zeroes and it represents unobserved heterogeneity by freely estimating this parameter. As mentioned above, Scott Long's book is a great reference.
null
CC BY-SA 2.5
null
2011-02-24T17:06:39.503
2011-02-24T17:06:39.503
null
null
2322
null
7577
2
null
6886
0
null
Based on your question and the comments you have given to answers, I think there's a fundamental misunderstanding in your logic/problem formulation. In order to optimize something, no matter the method, you need to have something to optimize over. In order to formulate a proper solution, you need to have a clear question. I suggest that instead of fiddling with the implementation of your NN, you go back to your model (you should have one) and try to define the problem you want to optimize in clearer terms. Once you have the problem defined, you can use the appropriate means to solve it. It's of course possible that you do have a function you want to optimize and I misunderstood your question. I apologize if that's the case, but I believe in that scenario more detail, and a clear definition of the problem, would certainly improve your chances of getting help.
null
CC BY-SA 2.5
null
2011-02-24T17:37:38.757
2011-02-24T17:37:38.757
null
null
3014
null
7578
2
null
7563
6
null
Not sure if this is really an answer to your question, but it is at least tangentially related. I maintain the [Statistics](http://www.maplesoft.com/support/help/Maple/view.aspx?path=Statistics) package in [Maple](http://www.maplesoft.com/products/Maple/index.aspx). An interesting example of difficult to test code is random sample generation according to different distributions; it is easy to test that no errors are generated, but it is trickier to determine whether the samples that are generated conform to the requested distribution "well enough". Since Maple has both symbolic and numerical features, you can use some of the symbolic features to test the (purely numerical) sample generation: - We have implemented a few types of statistical hypothesis testing, one of which is the chi square suitable model test - a chi square test of the numbers of samples in bins determined from the inverse CDF of the given probability distribution. So for example, to test Cauchy distribution sample generation, I run something like with(Statistics): infolevel[Statistics] := 1: distribution := CauchyDistribution(2, 3): sample := Sample(distribution, 10^6): ChiSquareSuitableModelTest(sample, distribution, 'bins' = 100, 'level' = 0.001); Because I can generate as large a sample as I like, I can make $\alpha$ pretty small. - For distributions with finite moments, I compute on the one hand a number of sample moments, and on the other hand, I symbolically compute the corresponding distribution moments and their standard error. So for e.g. the beta distribution: with(Statistics): distribution := BetaDistribution(2, 3): distributionMoments := Moment~(distribution, [seq(1 .. 10)]); standardErrors := StandardError[10^6]~(Moment, distribution, [seq(1..10)]); evalf(distributionMoments /~ standardErrors); This shows a decreasing list of numbers, the last of which is 255.1085766. So for even the 10th moment, the value of the moment is more than 250 times the value of the standard error of the sample moment for a sample of size $10^6$. This means I can implement a test that runs more or less as follows: with(Statistics): sample := Sample(BetaDistribution(2, 3), 10^6): sampleMoments := map2(Moment, sample, [seq(1 .. 10)]); distributionMoments := [2/5, 1/5, 4/35, 1/14, 1/21, 1/30, 4/165, 1/55, 2/143, 1/91]; standardErrors := [1/5000, 1/70000*154^(1/2), 1/210000*894^(1/2), 1/770000*7755^(1/2), 1/54600*26^(1/2), 1/210000*266^(1/2), 7/5610000*2771^(1/2), 1/1567500*7809^(1/2), 3/5005000*6685^(1/2), 1/9209200*157366^(1/2)]; deviations := abs~(sampleMoments - distributionMoments) /~ standardErrors; The numbers in distributionMoments and standardErrors come from the first run above. Now if the sample generation is correct, the numbers in deviations should be relatively small. I assume they are approximately normally distributed (which they aren't really, but it comes close enough - recall these are scaled versions of sample moments, not the samples themselves) and thus I can, for example, flag a case where a deviation is greater than 4 - corresponding to a sample moment that deviates more than four times the standard error from the distribution moment. This is very unlikely to occur at random if the sample generation is good. On the other hand, if the first 10 sample moments match the distribution moments to within less than half a percent, we have a fairly good approximation of the distribution. The key to why both of these methods work is that the sample generation code and the symbolic code are almost completely disjoint. 
If there were overlap between the two, then an error in that overlap could manifest itself both in the sample generation and in its verification, and thus not be caught.
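For readers working in R rather than Maple, here is a rough analogue of the moment-matching check for the Beta(2, 3) example; the exact moments are computed from the Beta function rather than taken from the list above, and the standard errors are estimated from the sample itself.

```r
set.seed(1)
n <- 1e6
smp <- rbeta(n, 2, 3)

k <- 1:10
sample_moments <- sapply(k, function(j) mean(smp^j))
exact_moments  <- sapply(k, function(j) beta(2 + j, 3) / beta(2, 3))   # E[X^j] for Beta(2, 3)

# standard error of each sample moment, estimated from the sample
se <- sapply(k, function(j) sd(smp^j) / sqrt(n))

deviations <- abs(sample_moments - exact_moments) / se
deviations   # flag the generator as suspect if any of these are, say, greater than 4
```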
null
CC BY-SA 2.5
null
2011-02-24T18:12:36.290
2011-02-24T18:12:36.290
null
null
2898
null
7579
1
null
null
8
3370
I am trying to model weekly disease counts in 25 different regions within 1 country over a ten-year period as influenced by temperature. The data are zero-inflated and overdispersed. I am most familiar with Stata, but I don't think that there is any option amongst the `gee`, `xtmixed`, `xtmepoisson`, etc. commands that allows me to account for the zero-inflation and overdispersion issues as well as the autocorrelation. I log-transformed the incidence data and used a SARIMA model, but the residuals are not quite normal. I think that there are versions of the ARIMA model for integer data like disease counts, but I can't find a program for them. I was also thinking that I could create a hierarchical model with random intercepts for each region and random effects of temperature in each region, while also accounting for the regular seasonal disease cycle. I believe that I could model this in R using a package like [glmm.admb](http://admb-project.org/examples/r-stuff/glmmadmb), but due to my limited statistical and R knowledge I am not entirely sure how to use it. I am mainly confused about accounting for the autocorrelation and seasonal cycle parts of the data using a program like this. Any advice on how best to do this?
How to model zero inflated, over dispersed poisson time series?
CC BY-SA 2.5
null
2011-02-24T18:16:42.123
2017-02-27T15:48:59.183
2017-02-27T15:48:59.183
11887
null
[ "time-series", "poisson-distribution", "autocorrelation", "overdispersion", "gamlss" ]
7581
1
null
null
22
47833
What is the relation between estimator and estimate?
What is the relation between estimator and estimate?
CC BY-SA 2.5
null
2011-02-24T18:57:13.633
2018-10-16T21:32:01.903
2018-05-11T08:18:22.113
28666
null
[ "estimation", "terminology", "estimators" ]
7582
2
null
7579
4
null
You may want to check out `hurdle()` from the `pscl` package in R. It specifies two-component models, one that handles the zero counts and one that handles the positive counts. Check out the `hurdle` help page [here.](http://rss.acs.unt.edu/Rdoc/library/pscl/html/hurdle.html) EDIT: I just found [this](http://r.789695.n4.nabble.com/Problems-using-gamlss-to-model-zero-inflated-and-overdispersed-count-data-quot-the-global-deviance-i-td2239925.html) post in R-help that describes the `zeroinfl()` function in R (also from the pscl package), as well as `gamlss` and `VGAM` options. However, I don't believe that the `VGAM` options will allow you to take into account non-independent correlation structures. Another option is the `zinb` command in Stata. Fitting a model using the negative binomial family will account for the overdispersion. I am not sure whether they allow for seasonality adjustments, however.
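A minimal sketch of the two pscl calls; the formula, variable names, and data frame are placeholders, and the part of the formula after the `|` specifies the zero component.

```r
library(pscl)

# hurdle model: a binary model for zero vs. positive, plus a truncated count model for positives
m_hurdle <- hurdle(cases ~ temperature + region | temperature, data = mydata, dist = "negbin")

# zero-inflated negative binomial: a mixture of a point mass at zero and a count process
m_zinb   <- zeroinfl(cases ~ temperature + region | temperature, data = mydata, dist = "negbin")

summary(m_hurdle)
```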
null
CC BY-SA 2.5
null
2011-02-24T19:21:15.930
2011-02-24T19:27:49.170
2011-02-24T19:27:49.170
3309
3309
null
7586
2
null
7581
3
null
It might be helpful to illustrate whuber's answer in the context of a linear regression model. Let's say you have some bivariate data and you use Ordinary Least Squares to come up with the following model: > Y = 6X + 1 At this point, you can take any value of X, plug it into the model and predict the outcome, Y. In this sense, you might think of the individual components of the generic form of the model (mX + B) as estimators. The sample data (which you presumably plugged into the generic model to calculate the specific values for m and B above) provided a basis on which you could come up with estimates for m and B respectively. Consistent with @whuber's points in our thread below, whatever values of Y a particular set of estimates generates for you are, in the context of linear regression, thought of as predicted values. (edited -- a few times -- to reflect the comments below)
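A small R sketch of the distinction using simulated data (the numbers are made up, chosen to echo the Y = 6X + 1 example): `lm()` embodies the OLS estimator, the numbers in `coef(fit)` are the estimates it produces for this particular sample, and `predict()` gives predicted values.

```r
set.seed(123)
x <- runif(100, 0, 10)
y <- 1 + 6 * x + rnorm(100, sd = 2)   # true intercept 1, true slope 6

fit <- lm(y ~ x)    # the OLS *estimator* applied to this sample
coef(fit)           # the *estimates* of the intercept and slope (close to 1 and 6)

predict(fit, newdata = data.frame(x = 2))   # a *predicted value* of Y at x = 2
```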
null
CC BY-SA 2.5
null
2011-02-24T21:18:57.810
2011-02-24T22:37:39.223
2011-02-24T22:37:39.223
3396
3396
null
7587
2
null
7554
1
null
Assuming your distribution remotely resembles a normal curve, you could convert the standard errors into a more intuitive percentage value pretty easily. For example, if the distribution is pretty normal, approximately 95% of your population falls within +/-1.96*SE of the mean. Building from SheldonCooper's sample values, you could say, "The average was 1000 and about 95% of the population was between 600 and 1400." Likewise, about 70% of the population falls within +/- 1*SE, etc. If your sample distribution tends to deviate from normal by a lot, don't despair, but try to provide more details so we can help.
null
CC BY-SA 2.5
null
2011-02-24T21:31:54.297
2011-02-24T21:31:54.297
null
null
3396
null
7589
2
null
7579
3
null
Another option for negative binomial regression in R is the excellent MASS package's `glm.nb()` function. UCLA's statistical consulting group has a [pretty clear vignette](http://www.ats.ucla.edu/stat/R/dae/nbreg.htm), which unfortunately does not seem to provide any obvious insights into your autocorrelation issues, but maybe searching these various nb-regression options on R-seek or elsewhere would help?
null
CC BY-SA 2.5
null
2011-02-24T22:21:30.493
2011-02-24T22:21:30.493
null
null
3396
null
7590
2
null
1099
2
null
If you were considering conducting Poisson or related regressions on this data (with your outcome variable as a rate), remember to include an offset term for the patient bed days as it technically becomes the "exposure" to your counts. However, in that case, you may also want to consider using just the infection count (not the rate) as your dependent variable, and include the patient bed days as a covariate. I am working on a data set with a similar count vs. rate decision and it seems like converting your dependent variable to a rate leads to a decrease in variability, an increase in skewness and a proportionally larger standard deviation. This makes it more difficult to detect any significant effects. Also watch out if your data is zero-truncated or zero-inflated, and make the appropriate adjustments.
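A hedged sketch of the two formulations mentioned above (all variable names are placeholders):

```r
# rate modelling via an offset: the raw count is the response, exposure enters as log(bed_days)
m_offset <- glm(infections ~ ward_type + year + offset(log(bed_days)),
                family = poisson, data = mydata)

# alternative: keep the raw count as the response and include the exposure as a covariate
m_covar  <- glm(infections ~ ward_type + year + log(bed_days),
                family = poisson, data = mydata)
```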
null
CC BY-SA 2.5
null
2011-02-24T22:29:52.487
2011-02-24T22:29:52.487
null
null
3309
null
7591
1
7592
null
10
473
Is there an R package, website, or command that will allow one to search for a specific statistical procedure they desire? For instance, if I wanted to find a package that had the Box-Cox Transformation, the website/package/command might return "MASS" and refer me to the `boxcox()` function. It is fairly straightforward with something like Box-Cox, but I was hoping it would allow me to find more difficult procedures or search by what the function does ("Concatenating columns to a data frame" might turn up `cbind()`). Does something like this exist?
How to search for a statistical procedure in R?
CC BY-SA 2.5
null
2011-02-24T22:54:17.743
2011-02-25T09:25:58.053
2011-02-25T09:25:58.053
930
1118
[ "r" ]
7592
2
null
7591
12
null
[rseek](http://rseek.org/) is pretty good. More abstract semantic queries along the lines of your second example are hard anywhere. Also, see this [SO thread](https://stackoverflow.com/questions/102056/how-to-search-for-r-materials) from the [R-faq listing](https://stackoverflow.com/tags/r/faq) there.
null
CC BY-SA 2.5
null
2011-02-24T23:02:05.017
2011-02-24T23:49:21.960
2017-05-23T12:39:26.203
-1
3396
null
7593
2
null
7591
4
null
I would try two things. One is the ?? help search in R, so for Box-Cox I would do ``` ??cox ``` which should list packages or functions containing that text. The other is to try the [http://www.rseek.org/](http://www.rseek.org/) site, which is like Google, just for R.
null
CC BY-SA 2.5
null
2011-02-24T23:02:57.447
2011-02-24T23:02:57.447
null
null
114
null
7594
1
7600
null
1
630
Let's say the number of hotel rooms in a city is X. We know the arrival rate of visitors every day. We don't know the current occupancy. Is it possible to - estimate the occupancy rate over all hotel rooms (assume 1 visitor / room) - estimate the distribution of stay duration (assume every visitor stays at least 1 day) On the flip side, if we knew just the occupancy rate, can we estimate the total number of hotel rooms from the arrival data?
Estimating occupancy rates from arrival rates
CC BY-SA 2.5
null
2011-02-24T23:03:07.367
2011-02-25T06:23:48.497
2011-02-25T06:23:48.497
919
null
[ "estimation", "stochastic-processes" ]
7595
1
null
null
4
1806
I've been using the `lm` function in R to do demand modeling (tons of steel to be predicted by various economic indicators). I used $R^2$ and $F$ to report on the strength of the model. However, when I use the R function `lqs` ("resistant regression") and then type in `summary(model_name)` I do not get any statistics that I can use to report on the strength of the regression model. Any suggestions? EDIT: Thanks for your quick response. I don't have a problem with lqs(). The problem is that when I type in summary(Model) I do not get any goodness-of-fit information (e.g., adjusted R squared) as I do when I enter summary(x) where x is a model created using the `lm` function. I'd like to have something to show the strength of the model. I'm using MASS. See below. ``` library(MASS) M10 = lqs(agri ~ p12 + p1 + p11 + p5 + p8 + p6 + p25 + p50 + p35, data = agri_data2) summary(M10) Length Class Mode crit 1 -none- numeric sing 1 -none- character coefficients 10 -none- numeric bestone 10 -none- numeric fitted.values 103 -none- numeric residuals 103 -none- numeric scale 2 -none- numeric terms 3 terms call call 3 -none- call xlevels 0 -none- list model 10 data.frame list ```
How to get summary statistics from "resistant regression" - lqs - in R?
CC BY-SA 4.0
null
2011-02-24T21:33:14.667
2021-02-20T16:11:13.173
2021-02-20T16:11:13.173
11887
null
[ "r", "regression", "robust" ]
7596
2
null
7591
5
null
Sometimes I just go to crantastic and search for keywords [Search for Box Cox on Crantastic](http://crantastic.org/search?q=Box+Cox)
null
CC BY-SA 2.5
null
2011-02-24T23:36:35.363
2011-02-24T23:36:35.363
null
null
569
null
7597
2
null
7591
13
null
The [sos package](http://cran.r-project.org/web/packages/sos) lets you search the help documentation for all cran packages from within R itself.
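For example (assuming the package is installed), a quick sketch of the search call:

```r
install.packages("sos")   # once
library(sos)
findFn("Box Cox")         # searches CRAN package help pages and opens the matches
```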
null
CC BY-SA 2.5
null
2011-02-24T23:45:54.360
2011-02-24T23:45:54.360
null
null
364
null
7599
2
null
7595
4
null
Try typing: ``` model_name ``` Based on a quick skim of the `lqs()` documentation in the `MASS` package this looks like it should work. If it doesn't work and you're not using `MASS`, please specify which library you're running `lqs()` from (and maybe even point to the documentation if you want to make everybody's life easier).
null
CC BY-SA 2.5
null
2011-02-25T00:02:22.710
2011-02-25T09:20:31.630
2011-02-25T09:20:31.630
930
3396
null
7600
2
null
7594
2
null
No. You need more information. In particular, given the number of rooms and the arrival rate, the average occupancy rate and the average length of stay are proportional to each other.
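A sketch of why, in steady state (this is essentially Little's law; $X$ is the known number of rooms, $\lambda$ the daily arrival rate, and $\bar W$ the mean length of stay): $$ \text{mean number of occupied rooms} = \lambda \bar W, \qquad \text{occupancy rate} = \frac{\lambda \bar W}{X}. $$ So from the arrival rate and $X$ alone, the occupancy rate and the mean stay duration are only identified up to their product; pinning down either one requires extra information.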
null
CC BY-SA 2.5
null
2011-02-25T00:54:02.860
2011-02-25T00:54:02.860
null
null
2958
null
7601
1
7603
null
6
1847
I have a multinomial logistic regression with dependent variable valued in {-1,0,1} (reference category is 0) and a number of continuous and discrete predictors. After running the regression a continuous predictor of interest ('size') has a Type 3 analysis of effects p-value of 0.0683, and the two coefficients (corresponding to outcomes of -1 and 1) have p-values of 0.8786 and 0.0220 respectively. I read somewhere that one should only look at the significance of the coefficients if the predictor itself is significant at the chosen level. Is this right? My naive sense is that the predictor is borderline (taking alpha=0.05 for argument's sake), and that 'size' has a significant relationship to outcome=1 but not to outcome = -1. I would say that the significance of the relationship to outcome=1 is not terribly strong, but that is ok for the application in mind (or at least, with the indirect data I am forced to use)
Interpreting significance of predictor vs significance of predictor coeffs in multinomial logistic regression
CC BY-SA 2.5
null
2011-02-25T01:05:59.623
2011-03-27T03:38:16.917
null
null
1144
[ "logistic", "statistical-significance" ]
7602
2
null
6524
4
null
OK, your question isn't perfectly clear but maybe I can help a little. A statistic $T(X)$ is sufficient for a parameter $\theta$ if $P(X|T(X), \theta) = P(X|T(X))$. In terms of likelihood functions you can verify that this implies $f(x;\theta) = h(x)g(T(x); \theta)$ for some $h$ and $g$, which is known by a few different monikers (the factorization theorem/lemma/criterion, sometimes with a name or two attached). This is where @probabilityislogic's comment comes from, although like I said it's just a property of the likelihood function. There are often a lot of different sufficient statistics (in particular, take $h=1$ and $g=f$, where $T(X)=X$ is just the entire dataset). Since the goal is to find a particular way to reduce the data without losing information, this leads into questions of minimal/complete sufficient statistics, etc. It's not clear what you need for your question, so I'll leave off there. In terms of the MLE, your notation is a little confusing to me so I'll make a couple of general comments. What problems can happen finding the MLE? It might not have a closed form, which is less a problem than a complication. It can fail to be unique, or occur at the edge of the parameter space, be infinite, etc. You need to at least define the parameter space, which you haven't done in your problem statement so far as I can tell.
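A standard textbook illustration of the factorization criterion (not tied to the original question): for $X_1,\dots,X_n$ iid Bernoulli($\theta$), $$ f(x;\theta)=\prod_{i=1}^n \theta^{x_i}(1-\theta)^{1-x_i}=\underbrace{1}_{h(x)}\cdot\underbrace{\theta^{T(x)}(1-\theta)^{n-T(x)}}_{g(T(x);\theta)},\qquad T(x)=\sum_{i=1}^n x_i, $$ so the number of successes $T(X)$ is sufficient for $\theta$.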
null
CC BY-SA 3.0
null
2011-02-25T02:10:06.680
2018-03-15T23:58:18.083
2018-03-15T23:58:18.083
1679
26
null
7603
2
null
7601
5
null
The p-value itself cannot tell you how strong the relationship is, because the p-value is so influenced by sample size, among other things. But assuming your N is something on the order of 100-150, I'd say there's a reasonably strong effect involving Size whereby, as Size increases, the log odds of Y being 1 rather than 0 changes notably. As you indicate, the same cannot be said of the comparison of Y values of -1 and 0. You are right in viewing all of this as somewhat invalidated by the overall nonsignificance of Size (depending on your alpha, or criterion for significance). You wouldn't get too many arguments if you simply declared Size a nonfactor due to its high p-value. But then again, if your N is sufficiently small--perhaps below 80 or 100--then your design affords low power for detecting effects, and you might make a case for taking seriously the specific effect that managed to show up anyway. A way around the problem of relying on p-values involves two steps. First, decide what range of odds ratios would constitute an effect worth bothering with, or worth calling substantial. (The trick there is in being facile enough with odds to recognize what they mean for the more intuitive metric of probability.) Then construct a confidence interval for the odds ratio associated with each coefficient and consider it in light of your hypothetical range. Regardless of statistical significance, does the effect have practical significance?
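A hedged sketch of the two-step approach in R, assuming the model was fit with `nnet::multinom`; the data frame, outcome, and predictor names (`size`, `other_var`) are hypothetical, and the intervals are simple Wald intervals built from the reported standard errors.

```r
library(nnet)
mydata$outcome <- relevel(factor(mydata$outcome), ref = "0")   # make 0 the reference category
fit <- multinom(outcome ~ size + other_var, data = mydata)
s   <- summary(fit)

est <- s$coefficients[, "size"]       # one row per non-reference outcome (-1 and 1)
se  <- s$standard.errors[, "size"]

# odds ratios with 95% Wald confidence intervals; compare these against the
# range of odds ratios you decided in advance is practically important
exp(cbind(OR = est, lower = est - 1.96 * se, upper = est + 1.96 * se))
```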
null
CC BY-SA 2.5
null
2011-02-25T02:48:40.657
2011-02-25T02:58:50.813
2011-02-25T02:58:50.813
2669
2669
null
7604
1
null
null
1
837
I have a single model (e.g. a generalized Pareto distribution) to test with different data sets (I have a set of increasing thresholds and fit the same model to the data above each threshold). I want to know which set of data will give me the best model fit. Can I use a likelihood ratio test in this case, or is there any other suggestion to get this? Many thanks in advance.
Single model for a different data set
CC BY-SA 2.5
null
2011-02-25T03:04:54.500
2011-02-25T19:51:48.713
2011-02-25T09:22:56.200
930
null
[ "model-selection", "likelihood-ratio" ]
7605
1
7614
null
2
564
I'm running an EFA using orthogonal/varimax rotation, and assigning variables to factors based on their maximum loading (so each variable gets only one factor). I then want to validate the model using SEM... since the rotation I used to determine the variable<->factor loadings was orthogonal, is it "wrong" to let the factors in my model have a covariance with one another? (e.g., using RAM: Factor1<->Factor2,theta,NA) I ask, as I get a much better model fit if I allow for this to occur. More explicitly, what does it actually mean for underlying factors to have a correlation between them? Thanks!
What does it mean if there is a correlation between underlying factors in factor analysis?
CC BY-SA 2.5
null
2011-02-25T03:42:55.343
2011-02-25T12:53:09.590
null
null
3424
[ "psychometrics" ]
7606
2
null
7604
2
null
You can't use the standard likelihood ratio test. It only works for comparing the likelihoods of different models on the same dataset. For the more general question of how to pick the set to which the model fits best, you need to define what "best" means. If your data is iid, the likelihood is obtained by multiplying a probability value (a number less than one) for each point. Just comparing likelihood will favor small sets (because fewer small numbers are multiplied). This is probably not what you want. GP has two parameters, $x_m$ and $\alpha$. I assume you fit both of them on each dataset (because if you hold $x_m$ fixed and change the threshold, then for larger thresholds there won't be any data near $x_m$). In this case, the best fit will be when your threshold is such that there is only one point left. But this fit only looks best because the problem of fitting one data point is very easy. Fitting many points is more difficult, so naturally the fit looks worse. So you need to account for the difficulty of the problem somehow. The problem seems complementary to the usual problem of accounting for the "model complexity". I'm not sure whether there is a standard way of doing this. This would certainly be interesting to learn about. Meanwhile, a way to sidestep this would be the following. Do you have a model for what the data below the threshold should look like? Maybe they come from a different process for which you know the distribution. If yes, what you can do is fit a mixture model to the data both above and below threshold. This mixture would always fit the same dataset (namely, all of your points). So for the mixture you could use the likelihood test or some other model selection method.
null
CC BY-SA 2.5
null
2011-02-25T03:43:01.500
2011-02-25T19:51:48.713
2011-02-25T19:51:48.713
3369
3369
null
7607
1
7954
null
9
1113
What are the methods to minimize the effect of boundaries in a wavelet decomposition? I use R and the package [waveslim](http://cran.r-project.org/web/packages/waveslim/index.html). I have found, for instance, the function ``` ?brick.wall ``` but - I am not too sure how to use it. - I am not sure the best solution is to remove some coefficients. I have read somewhere that there exist wavelets that are not the same everywhere and whose shape changes at the boundaries. Any ideas?
Boundary effect in a wavelet multi resolution analysis
CC BY-SA 2.5
null
2011-02-25T06:40:57.203
2013-04-23T05:16:07.760
2020-06-11T14:32:37.003
-1
1709
[ "r", "signal-processing", "wavelet" ]
7608
2
null
7581
15
null
E. L. Lehmann, in his classic Theory of Point Estimation, answers this question on pp 1-2. > The observations are now postulated to be the values taken on by random variables which are assumed to follow a joint probability distribution, $P$, belonging to some known class... ...let us now specialize to point estimation...suppose that $g$ is a real-valued function defined [on the stipulated class of distributions] and that we would like to know the value of $g$ [at whatever is the actual distribution in effect, $\theta$]. Unfortunately, $\theta$, and hence $g(\theta)$, is unknown. However, the data can be used to obtain an estimate of $g(\theta)$, a value that one hopes will be close to $g(\theta)$. In words: an estimator is a definite mathematical procedure that comes up with a number (the estimate) for any possible set of data that a particular problem could produce. That number is intended to represent some definite numerical property ($g(\theta)$) of the data-generation process; we might call this the "estimand." The estimator itself is not a random variable: it's just a mathematical function. However, the estimate it produces is based on data which themselves are modeled as random variables. This makes the estimate (thought of as depending on the data) into a random variable and a particular estimate for a particular set of data becomes a realization of that random variable. In one (conventional) ordinary least squares formulation, the data consist of ordered pairs $(x_i, y_i)$. The $x_i$ have been determined by the experimenter (they can be amounts of a drug administered, for example). Each $y_i$ (a response to the drug, for instance) is assumed to come from a probability distribution that is Normal but with unknown mean $\mu_i$ and common variance $\sigma^2$. Furthermore, it is assumed that the means are related to the $x_i$ via a formula $\mu_i = \beta_0 + \beta_1 x_i$. These three parameters--$\sigma$, $\beta_0$, and $\beta_1$--determine the underlying distribution of $y_i$ for any value of $x_i$. Therefore any property of that distribution can be thought of as a function of $(\sigma, \beta_0, \beta_1)$. Examples of such properties are the intercept $\beta_0$, the slope $\beta_1$, the value of $\cos(\sigma + \beta_0^2 - \beta_1)$, or even the mean at the value $x=2$, which (according to this formulation) must be $\beta_0 + 2 \beta_1$. In this OLS context, a non-example of an estimator would be a procedure to guess at the value of $y$ if $x$ were set equal to 2. This is not an estimator because this value of $y$ is random (in a way completely separate from the randomness of the data): it is not a (definite numerical) property of the distribution, even though it is related to that distribution. (As we just saw, though, the expectation of $y$ for $x=2$, equal to $\beta_0 + 2 \beta_1$, can be estimated.) In Lehmann's formulation, almost any formula can be an estimator of almost any property. There is no inherent mathematical link between an estimator and an estimand. However, we can assess--in advance--the chance that an estimator will be reasonably close to the quantity it is intended to estimate. Ways to do this, and how to exploit them, are the subject of estimation theory.
null
CC BY-SA 3.0
null
2011-02-25T07:03:35.063
2014-10-11T17:18:27.600
2020-06-11T14:32:37.003
-1
919
null
7609
2
null
6026
9
null
I am not a statistician but an MD, trying to sort things out in the world of statistics. The way to interpret this output is by looking at the $\exp(B)$ values. A value < 1 says that a one-unit increase in that particular variable decreases the probability of experiencing an end point during the observation period. By inverting (that is, $1/\exp(B)$) you find the "protective effect"; for example, if $\exp(B) = 0.407$ (as is the case for your "Gender" variable), the interpretation is that having gender = 1 decreases the probability of experiencing an end point by a factor of $1/0.407 = 2.46$ compared to gender = 0. For $\exp(B) > 1$ the interpretation is even easier: a value of, say, $\exp(B) = 1.259$ (as is the case for your "stenosis" variable) means that scoring "stenosis" = 1 results in a 25.9% increase in the probability of experiencing an end point compared to "stenosis" = 0. The confidence interval (CI) tells us within which range (with 95% probability) we could expect this value to vary if we were to repeat this survey an infinite number of times. If the 95% CI overlaps the value of 1, the result is not statistically significant (since $\exp(B) = 1$ means that there is no difference in the probability of experiencing an end point between variable values of "0" and "1"), and the P value will exceed 0.05. If the 95% CI excludes the value 1 (on either side), the $\exp(B)$ is statistically significant. From your analysis, it seems that none of your variables are significant predictors (at a significance level of 5%) of your endpoint, although being a "high risk" patient is of borderline significance. Reading the book "SPSS Survival Manual" by Julie Pallant will probably enlighten you further on this and other topics.
null
CC BY-SA 2.5
null
2011-02-25T07:52:29.567
2011-02-25T09:50:26.273
2011-02-25T09:50:26.273
930
null
null
7610
1
null
null
25
14285
Besides obvious classifier characteristics like - computational cost, - expected data types of features/labels and - suitability for certain sizes and dimensions of data sets, what are the top five (or 10, 20?) classifiers to try first on a new data set one does not know much about yet (e.g. semantics and correlation of individual features)? Usually I try Naive Bayes, Nearest Neighbor, Decision Tree and SVM - though I have no good reason for this selection other than I know them and mostly understand how they work. I guess one should choose classifiers which cover the most important general classification approaches. Which selection would you recommend, according to that criterion or for any other reason? --- UPDATE: An alternative formulation for this question could be: "Which general approaches to classification exist and which specific methods cover the most important/popular/promising ones?"
Top five classifiers to try first
CC BY-SA 4.0
null
2011-02-25T09:45:02.317
2018-07-03T07:23:49.697
2018-07-03T07:23:49.697
128677
2230
[ "machine-learning", "classification", "methodology" ]
7611
1
7612
null
3
1512
I am constructing Cox models that predict survival in a clinical trials cohort. After speaking to our statistician (who is away at the moment, hence this post), I was advised to take a forward likelihood ratio-test approach to building Cox survival models, starting with a base model and adding the term that improved the model, by computing a p-value from the difference between the -2 log-likelihood statistics of the base and extended models, as outlined in the R code below. I realise that Stata is probably a better fit for this sort of analysis, but i) I don't have easy access to Stata and ii) I am familiar with R (I also have access to SPSS), so with that caveat, here is the general format of the code I am using. Conceptually, this makes sense to me if I add binary covariates to the model, but I was wondering whether this approach is appropriate for adding a continuous variable, as outlined below? I'm not sure whether a degree of freedom equal to one is correct for this comparison. ``` y<-0:1 data<-data.frame(cbind(sample(y,100,replace=TRUE),runif(100,min=0,max=10),sample(y,100,replace=TRUE),runif(100,min=0,max=1))) colnames(data)<-c("EFS_Status","EFS_Time","var1","Contvar2") library(survival) base <- coxph(Surv(EFS_Time,EFS_Status) ~ var1, data=data) # Create base model lr1 <- -2*base$loglik[2] # -2 log-likelihood of base model extend <- coxph(Surv(EFS_Time,EFS_Status) ~ var1 + Contvar2, data=data) # Extended model lr2<- -2*extend$loglik[2] # -2 log-likelihood of extended model pchisq(q=lr1-lr2,df=1,lower.tail=FALSE) # 1 df correct for continuous variables? ``` Any guidance is most appreciated. Obviously, I could binarize the continuous variable, but I suppose you're losing information by taking that approach. Thanks for reading, Ed
Building Cox Model using forward likelihood-ratio testing - Appropriateness of adding continuous variables to model?
CC BY-SA 2.5
null
2011-02-25T09:49:28.517
2011-02-25T10:04:12.440
null
null
3429
[ "survival", "continuous-data", "clinical-trials" ]
7612
2
null
7611
3
null
Yes, a continuous variable added as a linear effect to the model formula as you have here adds one degree of freedom to the model, as there is one more model parameter to estimate. By the way, instead of using `pchisq` to compute the p-value it's easier and less error-prone to use `anova()`. This will automatically calculate the correct degrees of freedom for the test. See `?anova.coxph`.
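Using the objects defined in the question, the likelihood-ratio comparison would then be simply:

```r
# likelihood-ratio test of the two nested Cox models from the question
anova(base, extend)   # anova.coxph reports the chi-square statistic, its df, and the p-value
```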
null
CC BY-SA 2.5
null
2011-02-25T10:04:12.440
2011-02-25T10:04:12.440
null
null
449
null
7613
1
null
null
4
733
Do the magnitudes of the principal eigenvectors obtained by PCA have anything to do with the correlations of the original variables, and can we use PCA for clustering? Thanks!
Do correlations relate to PCA eigenvectors and can PCA be used for clustering?
CC BY-SA 2.5
null
2011-02-25T10:25:39.287
2011-02-26T06:11:02.380
2011-02-26T06:11:02.380
183
3430
[ "pca" ]
7614
2
null
7605
3
null
OK, in my experience cases like this mean that you should have allowed the factors to correlate in the first place. You should probably rerun a factor analysis using either oblimin or promax rotation and test the fit of your uncorrelated model against your correlated model. Please do note that SEM loses its utility as a method for testing theories once you start changing the model based on fit indices.
null
CC BY-SA 2.5
null
2011-02-25T11:42:09.717
2011-02-25T11:42:09.717
null
null
656
null
7615
2
null
7610
7
null
Gaussian process classifier (not using the Laplace approximation), preferably with marginalisation rather than optimisation of the hyper-parameters. Why? - because they give a probabilistic classification - you can use a kernel function that allows you to operate directly on non-vectorial data and/or incorporate expert knowledge - they deal with the uncertainty in fitting the model properly, and you can propagate that uncertainty through to the decision making process - generally very good predictive performance. Downsides - slow - requires a lot of memory - impractical for large scale problems. First choice though would be regularised logistic regression or ridge regression [without feature selection] - for most problems, very simple algorithms work rather well and are more difficult to get wrong (in practice the differences in performance between algorithms is smaller than the differences in performance between the operator driving them).
null
CC BY-SA 2.5
null
2011-02-25T12:01:31.843
2011-02-25T16:30:28.397
2011-02-25T16:30:28.397
930
887
null
7616
2
null
7610
1
null
Personally, when approaching a new data set, I start by looking at the whole problem. First of all, get the distribution of each categorical feature and the mean and standard deviation of each continuous feature. Then: - Delete features with more than X% missing values; - Delete categorical features where a single value accounts for more than 90-95% of the relative frequency; - Delete continuous features with CV = std/mean < 0.1; - Get a parameter ranking, e.g. ANOVA for continuous features and chi-square for categorical ones; - Get a significant subset of features. Then I usually split classification techniques into 2 sets: white-box and black-box techniques. If you need to know 'how the classifier works', choose from the first set, e.g. decision trees or rule-based classifiers. If you need to classify new records without building an explicit model, take a look at lazy learners, e.g. KNN. After that, I think it is better to trade off accuracy against speed: neural networks are a bit slower than SVMs. These are my top five classification techniques: - Decision trees; - Rule-based classifiers; - SMO (SVM); - Naive Bayes; - Neural networks.
null
CC BY-SA 2.5
null
2011-02-25T12:16:40.370
2011-02-25T12:16:40.370
null
null
2719
null
7617
2
null
6963
5
null
Just an initial remark: if you want computational speed you usually have to sacrifice accuracy. "More accuracy" = "more time" in general. Anyway, here is a second-order approximation that should improve on the "crude" approximation you suggested in your comment above: $$E\Bigg(\frac{X_{j}}{\sum_{i}X_{i}}\Bigg)\approx \frac{E[X_{j}]}{E[\sum_{i}X_{i}]} -\frac{cov[\sum_{i}X_{i},X_{j}]}{E[\sum_{i}X_{i}]^2} +\frac{E[X_{j}]}{E[\sum_{i}X_{i}]^3} Var[\sum_{i}X_{i}] $$ $$= \frac{\alpha_{j}}{\sum_{i} \frac{\beta_{j}}{\beta_{i}}\alpha_{i}}\times\Bigg[1 - \frac{1}{\Bigg(\sum_{i} \frac{\beta_{j}}{\beta_{i}}\alpha_{i}\Bigg)} + \frac{1}{\Bigg(\sum_{i} \frac{\alpha_{i}}{\beta_{i}}\Bigg)^2}\Bigg(\sum_{i} \frac{\alpha_{i}}{\beta_{i}^2}\Bigg)\Bigg] $$ EDIT An explanation for the above expansion was requested. The short answer is [wikipedia](http://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables). The long answer is given below. Write $f(x,y)=\frac{x}{y}$. Now we need all the "second order" derivatives of $f$. The first order derivatives will "cancel" because they will all involve multiples of $X-E(X)$ and $Y-E(Y)$, which are both zero when taking expectations. $$\frac{\partial^2 f}{\partial x^2}=0$$ $$\frac{\partial^2 f}{\partial x \partial y}=-\frac{1}{y^2}$$ $$\frac{\partial^2 f}{\partial y^2}=2\frac{x}{y^3}$$ And so the Taylor series up to second order is given by: $$\frac{x}{y} \approx \frac{\mu_x}{\mu_y}+\frac{1}{2}\Bigg(-\frac{1}{\mu_y^2}2(x-\mu_x)(y-\mu_y) + 2\frac{\mu_x}{\mu_y^3}(y-\mu_y)^2 \Bigg)$$ Taking expectations yields: $$E\Big[\frac{x}{y}\Big] \approx \frac{\mu_x}{\mu_y}-\frac{1}{\mu_y^2}E\Big[(x-\mu_x)(y-\mu_y)\Big] + \frac{\mu_x}{\mu_y^3}E\Big[(y-\mu_y)^2\Big]$$ This is the answer I gave (although I initially forgot the minus sign in the second term).
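To sanity-check the expansion numerically, here is a hedged sketch that assumes, purely for illustration (the $\alpha/\beta$ notation in the answer suggests it), independent Gamma variables with shapes $\alpha_i$ and rates $\beta_i$; it compares a Monte Carlo estimate of the expectation with the first second-order formula above.

```r
# a minimal sketch: Monte Carlo vs. the second-order approximation,
# assuming independent X_i ~ Gamma(shape = alpha_i, rate = beta_i) for illustration only
set.seed(1)
alpha <- c(2, 3, 5); beta <- c(1, 2, 1.5); j <- 1
n <- 1e6
X <- sapply(seq_along(alpha), function(i) rgamma(n, shape = alpha[i], rate = beta[i]))
S <- rowSums(X)
mc <- mean(X[, j] / S)                    # simulation estimate of E[X_j / sum_i X_i]
EX <- alpha / beta; VX <- alpha / beta^2  # moments of the gamma components
ES <- sum(EX); VS <- sum(VX)
covSXj <- VX[j]                           # independence: cov(sum_i X_i, X_j) = Var(X_j)
approx2 <- EX[j] / ES - covSXj / ES^2 + EX[j] * VS / ES^3
c(monte_carlo = mc, second_order = approx2)
```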
null
CC BY-SA 2.5
null
2011-02-25T12:25:12.067
2011-03-08T07:45:43.053
2011-03-08T07:45:43.053
2392
2392
null
7618
1
7634
null
4
432
Have you ever heard of the Child-Pugh cirrhosis score? There are five features, each discretized into three intervals. For each interval you get a number of points, e.g. 1 for the first, 2 for the second, etc. After computing all the points you sum them up and obtain a cirrhosis stage ranked from A to C. Each stage is associated with a particular probability of survival. [http://en.wikipedia.org/wiki/Child-Pugh_score](http://en.wikipedia.org/wiki/Child-Pugh_score) I have a DB with many features and a binary label: ill and good. I have chosen probability estimation trees to get a white-box model that estimates the probability of being 'ill'. The model shows the classification process fairly clearly, but I think a Child-Pugh-like score could be clearer than this for a medical doctor. Is there something similar in the literature?
White box machine learning probability estimator
CC BY-SA 3.0
null
2011-02-25T12:32:09.693
2017-01-10T12:07:27.100
2017-01-10T12:07:27.100
73527
2719
[ "probability", "machine-learning" ]
7619
2
null
7605
3
null
- The first thing to do is see if the structure changes when you use an oblique rotation that allows the factors to correlated. - If you truly want to validate your structure then use a confirmatory factor analysis (CFA, which is the measurement part of SEM) as you are already doing but with a different sample. - If you just have one sample and you are interested in whether the factors correlate, look into building a second order CFA that posits a higher order factor explaining the common variance among 2 or more of the lower order factors.
null
CC BY-SA 2.5
null
2011-02-25T12:53:09.590
2011-02-25T12:53:09.590
null
null
1916
null
7620
1
null
null
2
367
My problem is this: I have one dependent variable and 4 independent ones: one is age and the other three are temperament dimensions. I ran 3 sets of two-way ANOVAs. The first independent variable is always the same (age) and the second is always different - one of the temperament dimensions. In one case I get that age has a significant effect, temperament1 does not, and there is no interaction. In another case I get that age does not have a significant effect, still no effect of temperament2, and no significant interaction. A one-way ANOVA for age shows a significant effect. My question is how do I interpret the data? My plan was to report whether there is an effect of age, of temperament, and of their interaction, but now the effect of age is sometimes there and sometimes not?!
Main effect of the first independent variable in two-way ANOVA lost depending on the second independent variable
CC BY-SA 2.5
null
2011-02-25T13:57:09.637
2011-02-25T16:43:05.643
null
null
null
[ "anova" ]
7621
2
null
7613
1
null
@ q2: If you use all principal components, then this is only a rotation in the multidimensional space and the Euclidean(!) distances between data points are not affected. However, taxicab distances, for instance, are affected by rotation. (This can be seen by considering the unit square and the taxicab distance between the two ends of its diagonal: unrotated, you travel along two sides of length one, so the distance is 2, but if you rotate the square by 45 degrees the distance becomes sqrt(2).) Furthermore, once you employ PCA your goal is usually to reduce dimensionality, so you discard variance/covariance (corresponding to the ignored minor components), and this cannot be reproduced by any selection of sets of items in a "canned" cluster analysis, so the solutions must differ. [opinion] Well, the fact that the PCA-based and the unrotated clusters are usually not perfectly equal need not mean that the PCA-based clusters are worse/bad/meaningless. [/opinion]
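The point about distances can be checked with a short sketch (simulated data, base R only): a full set of principal-component scores is just an orthogonal rotation of the centred data, so Euclidean distances are unchanged while Manhattan/taxicab distances are not.

```r
# a minimal sketch: full PCA rotation preserves Euclidean but not taxicab distances
set.seed(42)
X  <- matrix(rnorm(50 * 4), ncol = 4)
Xc <- scale(X, center = TRUE, scale = FALSE)      # centred data
S  <- prcomp(X)$x                                 # scores = Xc %*% rotation (orthogonal)
max(abs(dist(Xc) - dist(S)))                      # ~ 0: Euclidean distances preserved
max(abs(dist(Xc, method = "manhattan") -
        dist(S,  method = "manhattan")))          # generally > 0: taxicab distances change
```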
null
CC BY-SA 2.5
null
2011-02-25T14:09:13.833
2011-02-25T14:09:13.833
null
null
1818
null
7622
2
null
7610
21
null
Random Forest Fast, robust, good accuracy, in most cases nothing to tune, requires no normalization, immune to collinearity, generates quite good error approximation and useful importance ranking as a side effect of training, trivially parallel, predicts in a blink of an eye. Drawbacks: slower than trivial methods like kNN or NB, works best with equal classes, worse accuracy than SVM for problems desperately requiring kernel trick, is a hard black-box, does not make coffee.
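A minimal sketch of those "side effects" with the randomForest package on the built-in iris data (an illustration added here, not part of the original answer):

```r
# a minimal sketch: OOB error estimate and importance ranking come with the fit
library(randomForest)
set.seed(1)
fit <- randomForest(Species ~ ., data = iris, ntree = 500, importance = TRUE)
fit                        # prints the OOB error estimate ("quite good error approximation")
importance(fit)            # variable importance ranking as a side effect of training
predict(fit, iris[1:5, ])  # prediction is essentially instantaneous
```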
null
CC BY-SA 2.5
null
2011-02-25T15:05:57.503
2011-02-25T15:05:57.503
null
null
null
null
7623
2
null
7620
2
null
Unfortunately there is no good short answer to your question--not one that is likely to help you understand these findings on more than a superficial level. What is required is for you to begin exploring the literature on statistical control and on partialling out (adjusting for, controlling for, or holding constant) extraneous variables. One might spend the better part of a semester on this topic, and there are sources at all levels of sophistication that you might read. James Davis' The Logic of Causal Order and Dana Keller's The Tao of Statistics are two very short, user-friendly, introductory books that come to mind. My short, very basic piece at [http://www.integrativestatistics.com/partial.htm](http://www.integrativestatistics.com/partial.htm) might also be of some use as a way of orienting you before you delve into more detailed treatments.
null
CC BY-SA 2.5
null
2011-02-25T15:45:06.957
2011-02-25T15:45:06.957
null
null
2669
null
7624
2
null
7579
0
null
If you have access to SAS 9.2 you could use PROC COUNTREG. It's a fairly new procedure and if you poke around the SAS site you can find out about it in the SAS/ETS(R) 9.2 User's Guide. COUNTREG does count modeling with or without zero inflation, has a "by" clause to split analyses, and allows both categorical and continuous variables.
null
CC BY-SA 2.5
null
2011-02-25T16:31:35.000
2011-02-25T16:31:35.000
null
null
3434
null
7625
1
null
null
4
2649
I have a zero-inflated negative binomial model (ZINB) for highly skewed, high zero-count data, $n=6800$. The null model does not reject the ZINB model, and Stata's `countfit` also indicated ZINB over other count models. Yet, when I run the model with more than the minimum of covariates, the inflation intercept has a coefficient but no standard error, no $t$, and no $p$-value. Does this mean the model is incorrect? Or just that the zero-inflation assumptions are violated? Any suggestions?
How to interpret results from a zero-inflated negative binomial model?
CC BY-SA 3.0
null
2011-02-25T16:36:28.977
2011-07-25T21:56:21.397
2011-07-25T21:56:21.397
930
3434
[ "regression", "nonparametric", "stata", "negative-binomial-distribution" ]
7626
2
null
7620
2
null
Sounds like age is correlated with one or more of your temperament measures, which means you're violating the assumptions of ANOVA/regression. You might want to instead look at path analysis to ascertain the relationships amongst your variables.
null
CC BY-SA 2.5
null
2011-02-25T16:43:05.643
2011-02-25T16:43:05.643
null
null
364
null
7627
2
null
7625
1
null
Are you using Stata's zinb command to run your regressions? When there are no standard error estimates, the things that come to mind as possible explanations are that the model may be too collinear, or that there may be zero cells for some variables.
null
CC BY-SA 2.5
null
2011-02-25T16:45:40.170
2011-02-25T16:45:40.170
null
null
3309
null
7629
2
null
7308
19
null
I think you're right. Let's distill your argument to its essence: - $\widehat \theta_N$ minimizes the function $Q$ defined as $Q(\theta) = {1 \over N}\sum_{i=1}^N q(w_i,\theta).$ - Let $H$ be the Hessian of $Q$, whence $H(\theta) = \frac{\partial^2 Q}{\partial \theta_i \partial \theta_j}$ by definition and this in turn, by linearity of differentiation, equals $\frac{1}{N}\sum_{i=1}^N H(w_i, \theta_n)$. - Assuming $\widehat \theta_N$ lies in the interior of the domain of $Q$, then $H(\widehat \theta_N)$ must be positive semi-definite. This is merely a statement about the function $Q$: how it is defined is merely a distraction, except insofar as the assumed second order differentiability of $q$ with respect to its second argument ($\theta$) assures the second order differentiability of $Q$. --- Finding M-estimators can be tricky. Consider these data provided by @mpiktas: ``` {1.168042, 0.3998378}, {1.807516, 0.5939584}, {1.384942, 3.6700205}, {1.327734, -3.3390724}, {1.602101, 4.1317608}, {1.604394, -1.9045958}, {1.124633, -3.0865249}, {1.294601, -1.8331763},{1.577610, 1.0865977}, { 1.630979, 0.7869717} ``` The R procedure to find the M-estimator with $q((x,y),\theta)=(y-c_1x^{c_2})^4$ produced the solution $(c_1, c_2)$ = $(-114.91316, -32.54386)$. The value of the objective function (the average of the $q$'s) at this point equals 62.3542. Here is a plot of the fit: ![Fit 1](https://i.stack.imgur.com/9NAmz.png) Here is a plot of the (log) objective function in a neighborhood of this fit: ![Objective 1](https://i.stack.imgur.com/lG5FZ.png) Something is fishy here: the parameters of the fit are extremely far from the parameters used to simulate the data (near $(0.3, 0.2)$) and we do not seem to be at a minimum: we are in an extremely shallow valley that is sloping towards larger values of both parameters: ![Objective 1, 3D view](https://i.stack.imgur.com/PUWRw.png) The negative determinant of the Hessian at this point confirms that this is not a local minimum! Nevertheless, when you look at the z-axis labels, you can see that this function is flat to five-digit precision within the entire region, because it equals a constant 4.1329 (the logarithm of 62.354). This probably led the R function minimizer (with its default tolerances) to conclude it was near a minimum. In fact, the solution is far from this point. To be sure of finding it, I employed the computationally expensive but highly effective "[Principal Axis](http://reference.wolfram.com/mathematica/tutorial/UnconstrainedOptimizationPrincipalAxisMethod.html)" method in Mathematica, using 50-digit precision (base 10) to avoid possible numerical problems. It finds a minimum near $(c_1, c_2) = (0.02506, 7.55973)$ where the objective function has the value 58.292655: about 6% smaller than the "minimum" found by R. This minimum occurs in an extremely flat-looking section, but I can make it look (just barely) like a true minimum, with elliptical contours, by exaggerating the $c_2$ direction in the plot: ![Objective 2](https://i.stack.imgur.com/HLnKR.png) The contours range from 58.29266 in the middle all the way up to 58.29284 in the corners(!). Here's the 3D view (again of the log objective): ![Objective 2, 3D view](https://i.stack.imgur.com/Exo3U.png) Here the Hessian is positive-definite: its eigenvalues are 55062.02 and 0.430978. Thus this point is a local minimum (and likely a global minimum). Here is the fit it corresponds to: ![Fit 2](https://i.stack.imgur.com/Qkfnt.png) I think it's better than the other one. 
The parameter values are certainly more realistic and it's clear we're not going to be able to do much better with this family of curves. There are useful lessons we can draw from this example: - Numerical optimization can be difficult, especially with nonlinear fitting and non-quadratic loss functions. Therefore: - Double-check results in as many ways as possible, including: - Graph the objective function whenever you can. - When numerical results appear to violate mathematical theorems, be extremely suspicious. - When statistical results are surprising--such as the surprising parameter values returned by the R code--be extra suspicious.
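To reproduce the kind of difficulty described, here is a hedged sketch (not whuber's actual code) of minimizing the average fourth-power loss with `optim()`; `dat` is assumed to be a two-column matrix holding the (x, y) pairs listed above, and a restart from the first solution may or may not reach the better minimum.

```r
# a minimal sketch: M-estimation by direct minimization of the average loss
# dat is assumed to be a 2-column matrix of (x, y) pairs as listed above
Q <- function(theta, dat) mean((dat[, 2] - theta[1] * dat[, 1]^theta[2])^4)
fit1 <- optim(c(1, 1), Q, dat = dat)                      # default Nelder-Mead
fit2 <- optim(fit1$par, Q, dat = dat, method = "BFGS",    # restart with tighter tolerance
              control = list(reltol = 1e-12))
rbind(NelderMead = fit1$par, BFGS = fit2$par)
c(fit1$value, fit2$value)                                 # compare the objective values
```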
null
CC BY-SA 2.5
null
2011-02-25T17:33:31.707
2011-02-26T22:16:44.043
2011-02-26T22:16:44.043
919
919
null
7630
1
7671
null
16
15529
I am clustering probability distributions using the [Affinity Propagation](http://www.psi.toronto.edu/index.php?q=affinity%20propagation) algorithm, and I plan to use Jensen-Shannon Divergence as my distance metric. Is it correct to use JSD itself as the distance, or JSD squared? Why? What differences would result from choosing one or the other?
Clustering: Should I use the Jensen-Shannon Divergence or its square?
CC BY-SA 2.5
null
2011-02-25T18:01:07.703
2011-02-27T16:51:14.127
2011-02-27T16:51:14.127
null
2759
[ "machine-learning", "clustering", "entropy", "distance-functions" ]
7631
1
7645
null
9
10656
Suppose we have a random variable $X_1$ distributed as $U[0,1]$ and $X_2$ distributed as $U[0,X_1]$, where $U[a,b]$ means the uniform distribution on the interval $[a,b]$. I was able to compute the joint pdf of $(X_1,X_2)$ and the marginal pdf of $X_1$. $$ p(x_1,x_2) = \frac{1}{x_1}, \text{ for }\quad 0\le x_1\le 1, \quad 0\le x_2 \le x_1,$$ $$ p(x_1)= 1, \text{ for } \quad 0\le x_1\le 1.$$ However, while computing the marginal pdf of $X_2$ I run into a problem with the limits. The antiderivative in the integral for the marginal of $X_2$ is $\log(x_1)$, and I took the limits to be from 0 to 1. As $\log(x_1)$ is not defined at $x_1=0$, I am facing a difficulty. Am I wrong somewhere? Thanks.
Problem calculating joint and marginal distribution of two uniform distributions
CC BY-SA 2.5
null
2011-02-25T19:29:22.413
2018-10-13T18:03:15.213
2013-06-30T11:36:21.707
22468
null
[ "density-function", "marginal-distribution", "joint-distribution" ]
7632
2
null
7571
3
null
Seems to me like a Kalman smoothing problem, which essentially computes the mean value of the state given past, present and future observations, with a computational effort similar to two Kalman filter passes. You might want to check any of the good books in existence which deal with the Kalman smoother, among them [Durbin-Koopman](http://rads.stackoverflow.com/amzn/click/0198523548) and [Anderson-Moore](http://rads.stackoverflow.com/amzn/click/0486439380).
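For a quick, hedged illustration of what the smoother buys you, base R's `StructTS`/`tsSmooth` fit and smooth a simple local-level state-space model on simulated data; a serious application would use a dedicated package or a custom model as in the books cited.

```r
# a minimal sketch: Kalman smoothing of a noisy local-level series with base R
set.seed(1)
truth <- cumsum(rnorm(200, sd = 0.2))      # latent state (random-walk level)
y     <- truth + rnorm(200, sd = 1)        # noisy observations
fit   <- StructTS(y, type = "level")       # local-level state-space model
sm    <- tsSmooth(fit)                     # smoothed state: uses past, present and future data
plot(y, type = "l", col = "grey")
lines(as.numeric(sm), col = "blue")        # smoothed estimate of the level
lines(truth, col = "red")                  # true latent level
```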
null
CC BY-SA 2.5
null
2011-02-25T19:48:31.010
2011-02-25T19:48:31.010
null
null
892
null
7633
2
null
7631
2
null
You should not have $X_1$ in the marginal distribution for $X_2$. I would expect you to get $P(X_2 \le x_2) = x_2 (1-\log(x_2))$ and so the derivative gives a marginal density of $-\log(x_2)$. This comes from $P(X_2 \le x_2 |X_1=x_1) = 1$ if $x_1 \le x_2$, and $ P(X_2 \le x_2 |X_1=x_1) = \frac{x_2}{x_1}$ if $x_2 \le x_1$, so the integral is $$P(X_2 \le x_2) = \int_{x_1=0}^{x_2} dx_1 + \int_{x_1=x_2}^{1} \frac{x_2}{x_1} dx_1$$ $$ = \left[ x_1 \right]_{x_1=0}^{x_1=x_2} + \left[x_2 \log(x_1)\right]_{x_1=x_2}^{x_1=1} $$ $$ = x_2 - 0 +x_2 \log(1) - x_2 \log(x_2) $$ $$ = x_2 (1-\log(x_2))$$
null
CC BY-SA 2.5
null
2011-02-25T19:54:44.657
2011-02-25T23:06:14.040
2011-02-25T23:06:14.040
2958
2958
null
7634
2
null
7618
1
null
One option is to take a single tree and make it small (e.g. 3-5 features). This will show a classification process that is easy to interpret. If you want something closer to a "score", then try using a linear classifier (such as Naive Bayes or SVM). Again, use few (3-5) features and discretize them into levels if you want. They will compute something like $$Score = w_1 \cdot f_1 + w_2 \cdot f_2 + ... ,$$ where $f_i$ are feature values (1, 2, or 3 in your example) and $w_i$ is the weight for that feature (i.e. how many points you get for that feature). The disadvantage of all this is that you need to use few features and small classifiers for them to be easy to interpret. But they may not work as well as your big classifier.
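A hedged sketch of the linear-score idea; the data frame `df`, the binary outcome `ill`, and the discretized features `f1`-`f3` are all hypothetical, mirroring the Child-Pugh example in the question.

```r
# a minimal sketch: turn a small logistic model into a points-style score
# df is a hypothetical data frame with a 0/1 outcome `ill` and features f1, f2, f3
# already discretized to the levels 1, 2, 3
fit    <- glm(ill ~ f1 + f2 + f3, data = df, family = binomial)
w      <- coef(fit)[-1]                        # the weights w_i from the model
points <- round(w / min(abs(w)))               # rescale weights to small integer "points"
score  <- as.matrix(df[, c("f1", "f2", "f3")]) %*% points
# a higher total score corresponds to a higher estimated probability of being 'ill'
```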
null
CC BY-SA 2.5
null
2011-02-25T20:57:11.160
2011-02-25T20:57:11.160
null
null
3369
null
7635
2
null
1099
0
null
One way of proceeding is to construct various null models each of which assume factors are independent of one another. The independence assumption often makes these easy to construct. Then the predicted joint densities are the products of the marginal densities. To the degree the actual data are consistent with these, you know factors are independent. If they are greater or lesser than the joint prediction, you may be able to infer they co-vary positively or negatively. Be careful to consider numbers of observations in each case, and you may be able to do that formally by treating populations as extended hypergeometrics. This is all in the spirit of the Fisher Exact Test, but Fisher actually formulated it so more general situations could be modeled. See, for example, Discrete Multivariate Analysis: Theory and Practice, by Yvonne M. Bishop, Stephen E. Fienberg, Paul W. Holland, R.J. Light, F. Mosteller, and The Analysis of Cross-Classified Categorical Data, by Stephen E. Fienberg.
null
CC BY-SA 2.5
null
2011-02-25T21:34:57.103
2011-02-25T21:34:57.103
null
null
3437
null
7636
2
null
3955
1
null
A word of caution: arrival rates of users at Web sites are nasty series and tend to be overdispersed (from a Poisson standpoint), so consider negative binomial distributions when looking at arrivals and fitting them. Also, you may want to examine the order statistics of the sites on each day rather than their raw counts.
null
CC BY-SA 2.5
null
2011-02-25T21:42:09.303
2011-02-25T21:42:09.303
null
null
3437
null
7637
1
7641
null
2
2091
Suppose I roll a 4-sided die 25 times to get a final sum between 25 and 100. How do I calculate the probability of each possible sum between 25 and 100?
Multinomial distribution for 4-sided dice rolls
CC BY-SA 3.0
null
2011-02-25T22:30:25.270
2017-08-20T11:18:22.667
2017-08-20T11:18:22.667
805
3438
[ "probability", "multinomial-distribution", "dice" ]
7638
1
null
null
6
4074
### Context: I am working with an ordinal logistic model and trying to interpret/present the results. The model has two continuous predictors of interests, and a mix of continuous and categorical controls. I was hoping to graph the predicted likelihood of the top outcome (being accepted into a school) across multiple levels of my IVs of interest. I am using R's predict() function to generate predicted likelihoods. For my IVs of interest, I chose a range of reasonable values (i.e. mean +- 1 SD). For the continuous predictors, I can use sensible baseline values (usually 0) because they are mean-centered or standardized. I am trying to work out how to approach the categorical predictors. I've explored my options by plugging in different values, and in most cases the result is just a small shift in the output curve. For one variable however, the differences are huge, so I need to find a way to present results that are general to the different levels of that variable. Perhaps an example would help clarify. In these two graphs, the two IVs of interest are plotted on the x-axis and as the 3 lines. Each graph shows the output given a single level of my troublesome categorical control, "Admitting School" (which has 4 levels total) ![enter image description here](https://i.stack.imgur.com/5rIXW.png) ![enter image description here](https://i.stack.imgur.com/gu1X8.png) [Other graphs and R syntax here if you're curious](http://swift.cbdr.cmu.edu/files/fsd/) ### Question: - How should I represent the model across all levels of the categorical variables in a single graph? ### Initial Thoughts: - Aggregate predicted values across each level of Admitting School with some sort of weighted average. - This post suggests using the proportion of cases of each type as the input for each variable. As in, if 32% of my cases came from School 1, I would use .32*B-school1 in the prediction formula. I don't know how to do that in R, since those variables are factors, but if it's an appropriate approach, I'm sure I could figure it out. Sorry for the verbosity and thanks in advance for any help.
Calculating predicted values from categorical predictors in logistic regression
CC BY-SA 2.5
null
2011-02-25T22:48:52.360
2011-02-26T18:54:20.857
2017-04-13T12:44:45.640
-1
3388
[ "data-visualization", "logistic", "categorical-data" ]
7639
1
null
null
2
596
I have 4301 lines of data from my science project. I have to do unorthodox statistical analysis for my results because they are 4-dimensional; I have three independent variables. With Excel I can do a lot of analysis, even multivariable linear regression, but what I need is a step up: Multivariable nonlinear regression. I have R. Is there a way I can do it with R? Also, is there a way to generate multiple 3-dimensional graphs with slices of the data automatically? EDIT: I've searched with no luck. Is there any way of doing multivariable nonlinear regression automatically?
Doing statistical analysis and charting with large data?
CC BY-SA 2.5
null
2011-02-25T22:50:28.983
2011-02-28T15:30:51.777
2011-02-26T00:16:22.893
null
null
[ "r", "multivariate-analysis", "large-data" ]
7640
2
null
7639
4
null
It's not real clear what you're asking. Is the following useful? I knocked it down from 4301 to 430 lines in case your machine is slow. You can move the little green box around to see how those points fit in all of the other graphs.

```
library(TeachingDemos)

#Build 3 independent variables (for speed reasons, I knocked it down to 430)
x <- rnorm(430)
y <- runif(430)
z <- rnorm(430)

#Build the dependent variable
dep <- 2 + (0.5 * x) + (0.9 * y) + (-0.8 * z)

#Put everything in a data frame
df <- data.frame(x=x, y=y, z=z, dep=dep)

#Plot it and look at chunks of the data
dra <- tkBrush(df)
```

![enter image description here](https://i.stack.imgur.com/hp5fy.jpg)
null
CC BY-SA 2.5
null
2011-02-26T00:23:13.350
2011-02-26T00:30:37.350
2011-02-26T00:30:37.350
2775
2775
null
7641
2
null
7637
4
null
There are several approaches: - Use the normal distribution as an approximation as described in whuber's link to the earlier question, or use some of the other suggestions there. The mean will be 62.5 while the variance will be 31.25. - Do a recurrence starting with $f(i,0)=0$ except when $i=0$ in which case $f(0,0)=1$, and then use $f(i,j+1)=f(i-4,j)/4+f(i-3,j)/4+f(i-2,j)/4+f(i-1,j)/4$ to find $f(i,25)$ - Expand $(x/4+x^2/4+x^3/4+x^4/4)^{25}$, for example by putting Expand[(x/4+x^2/4+x^3/4+x^4/4)^25] into Wolfram Alpha and press "=" then "show more terms" several times - Hope somebody has already done some recurrence calculations for you, which they have in rows 925-1000 of an OEIS table and then divide the results by $4^{25} = 1125899906842624$
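A hedged sketch of the recurrence/convolution approach (option 2) in R, for anyone who wants the exact probabilities without consulting the table:

```r
# a minimal sketch: exact pmf of the sum of 25 fair 4-sided dice by repeated convolution
pmf <- c(1, rep(0, 100))          # pmf[s + 1] = P(sum = s); start with P(sum = 0) = 1
for (k in 1:25) {
  new <- rep(0, 101)
  for (s in 0:99) {
    if (pmf[s + 1] > 0) {
      for (f in 1:4) {            # add one die with faces 1..4, each with probability 1/4
        if (s + f <= 100) new[s + f + 1] <- new[s + f + 1] + pmf[s + 1] / 4
      }
    }
  }
  pmf <- new
}
sum(pmf[26:101])                  # sanity check: all mass on sums 25..100
plot(25:100, pmf[26:101], type = "h",
     xlab = "sum of 25 rolls", ylab = "probability")
```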
null
CC BY-SA 2.5
null
2011-02-26T01:51:47.000
2011-02-26T01:51:47.000
2017-04-13T12:44:52.660
-1
2958
null
7642
2
null
7639
1
null
Save your table as a csv file, with one header row that contains the names of your 4 variables. Save the following code as an R script file in the same directory as your data, and then run it. Let's say your variables are called X1, X2, X3, and Y, where X1, X2, X3 are your independent variables and Y is your dependent variable.

```
MyData <- read.csv('path/to/MyData.csv')
model <- lm(Y~X1+X2+X3, data=MyData)
predictions <- predict(model)
model
plot(model)
```

Building a linear regression model and diagnosing it is relatively simple in R, even with 4000+ lines of data. What exactly are you trying to do with this dataset? You can build a non-linear model using the "loess" or "glm" commands, but specifying and diagnosing such models is more difficult than simple linear models. Again, if you can be a bit clearer about what exactly you need to do, we can help you more. Fitting a linear model with 3 independent variables and 4000 observations isn't exactly unorthodox, and you can do it quickly and easily in R.
null
CC BY-SA 2.5
null
2011-02-26T02:06:46.497
2011-02-28T15:30:51.777
2011-02-28T15:30:51.777
2817
2817
null
7643
2
null
7639
2
null
What sort of nonlinear model do you want? 4300 lines of data is not very large in today's world. Data sets containing millions of records exist, so 4300 will be no problem. This link to a work by John Fox may be helpful: [http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-nonlinear-regression.pdf](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-nonlinear-regression.pdf). It highlights the use of the nls function in R, which is part of the base stats package. Hope this helps.
null
CC BY-SA 2.5
null
2011-02-26T02:11:08.043
2011-02-26T02:11:08.043
null
null
686
null
7644
1
7666
null
14
876
I am looking for methods which can be used to estimate the "OLS" measurement error model. $$y_{i}=Y_{i}+e_{y,i}$$ $$x_{i}=X_{i}+e_{x,i}$$ $$Y_{i}=\alpha + \beta X_{i}$$ Where the errors are independent normal with unknown variances $\sigma_{y}^{2}$ and $\sigma_{x}^{2}$. "Standard" OLS won't work in this case. [Wikipedia](http://en.wikipedia.org/wiki/Measurement_error_model) has some unappealing solutions - the two given force you to assume that either the "variance ratio" $\delta=\frac{\sigma_{y}^{2}}{\sigma_{x}^{2}}$ or the "reliability ratio" $\lambda=\frac{\sigma_{X}^{2}}{\sigma_{x}^{2}+\sigma_{X}^{2}}$ is known, where $\sigma_{X}^2$ is the variance of the true regressor $X_i$. I am not satisfied by this, because how can someone who doesn't know the variances know their ratio? Anyways, are there any other solutions besides these two which don't require me to "know" anything about the parameters? Solutions for just the intercept and slope are fine.
Methods for fitting a "simple" measurement error model
CC BY-SA 2.5
null
2011-02-26T03:19:55.543
2011-03-29T17:38:35.623
2011-02-26T17:50:46.793
2970
2392
[ "regression", "estimation", "errors-in-variables" ]
7645
2
null
7631
4
null
In the "marginalisation" integral, the lower limit for $x_1$ is not $0$ but $x_2$ (because of the $0<x_2<x_1$ condition). So the integral should be: $$p(x_2)=\int p(x_1,x_2) dx_1=\int \frac{I(0\leq x_2\leq x_1\leq 1)}{x_1} dx_1=\int_{x_2}^{1} \frac{dx_1}{x_1}=log\big(\frac{1}{x_2}\big)$$ You have stumbled across, what I think is one of the hardest parts of statistical integrals - determining the limits of integration. NOTE: This is consistent with Henry's answer, mine is the PDF, and his is the CDF. Differentiating his answer gives you mine, which shows we are both right.
null
CC BY-SA 2.5
null
2011-02-26T03:46:46.097
2011-02-26T03:55:16.637
2011-02-26T03:55:16.637
2392
2392
null
7646
2
null
7534
3
null
The term "statistically significant" is a bit arbitrary, because it is defined relative to a null hypothesis. The basic way to test this is to use your "null hypothesis" to define a set of Expected values for each cell. The standard way to measure the error is $r_i^2=\frac{(O_i-E_i)^2}{E_i}$ (O=observed count, E=expected count under the null hypothesis), which is the "partition" of the chi-square. Another alternative is to use $d_i=O_i log(\frac{O_i}{E_i})$ ("entropy" statistic). Both are approximately equal in large tables, with big expected values. The "significant cells" have large values of these two values
null
CC BY-SA 2.5
null
2011-02-26T04:23:53.270
2011-02-26T04:23:53.270
null
null
2392
null
7648
1
null
null
5
6542
I'm looking at doing text classification/spam filtering using naive Bayesian classifiers with the e1071 or klaR package on R. Is there a good tutorial out there to describe this? I'm kind of stuck because I'm not sure what to use as the data to input into the NaiveBayes function. Some help very much appreciated, thanks!
Spam filtering using naive Bayesian classifiers with the e1071/klaR package on R
CC BY-SA 2.5
null
2011-02-26T08:32:17.770
2011-12-14T04:21:05.333
null
null
3442
[ "r", "classification", "naive-bayes", "e1071" ]
7649
2
null
7389
4
null
You would not use a paired sample t-test. The reason for this is that a particular random seed cannot be assumed to bias the outcome of both algorithms in the same way, even if that random seed is only used to generate the population and not for later operations such as mutation and selection. In other words, it's logically possible that, under one algorithm, a given population will evolve better than the average for that algorithm, but will perform in the opposite way under another. If you have reason to believe that there is a similar connection between seed and performance for both algorithms, you can test this using a Pearson correlation coefficient to compare each seed's performance on both tests. By default, however, I would assume that there is no connection, especially if you have reasonably large populations. As far as running more than 10 times, of course more samples are always better, though your computational resources obviously may be a limiting factor. It could be a good idea to generate a power curve, which will show you the relationship between the size of difference needed for statistical significance at your alpha level, and the SD and n. In other words, at a given n and SD, how big does the difference have to be? [http://moon.ouhsc.edu/dthompso/CDM/power/hypoth.htm](http://moon.ouhsc.edu/dthompso/CDM/power/hypoth.htm) <-- see bottom of page for power curve info. Finally, if you are running a genetic algorithm that actually has a defined stopping point, as yours does, you can just do a plain unpaired t-test on the number of generations needed to find the solution. Otherwise, quantifying algorithm performance tends to get a bit trickier. As far as pitfalls, and generalizability of algorithm efficiency to other problems, you really cannot take the effectiveness of your algorithm for granted when porting it to other problems. In my experience, genetic algorithms usually have to be tweaked quite a bit for each new problem that you apply them to. Having said that, depending on how diverse your set of 8 tests is, they may give you some indication of how generalizable your results are, and within which scope of applications they are generalizable.
null
CC BY-SA 2.5
null
2011-02-26T08:56:37.837
2011-03-03T08:31:27.563
2011-03-03T08:31:27.563
3443
3443
null
7650
2
null
7638
5
null
My initial thought would have been to display the probability of acceptance as a function of relative GPA for each of your four schools, using some kind of [trellis displays](http://cm.bell-labs.com/cm/ms/departments/sia/project/trellis/). In this case, facetting should do the job well as the number of schools is not so large. This is very easy to do with [lattice](http://cran.r-project.org/web/packages/lattice/index.html) (`y ~ gpa | school`) or [ggplot2](http://had.co.nz/ggplot2/) (`facet_grid(. ~ school)`). In fact, you can choose the conditioning variable you want: this can be school, but also situation at undergrad institution. In the latter case, you'll have 4 curves for each plot, and three plots of `Prob(admitting) ~ GPA`. Now, if you are looking for effective displays of effects in GLM, I would recommend the [effects](http://cran.r-project.org/web/packages/effects/index.html) package, from John Fox. Currently, it works with binomial and multinomial links, and ordinal logistic models. Marginalizing over other covariates is handled internally, so you don't have to bother with that. There are a lot of illustrations in the on-line help, see `help(effect)`. But, for a more thorough overview of effects displays in GLM, please refer to - Fox (2003). Effect Displays in R for Generalised Linear Models. JSS 8(15). - Fox and Andersen (2004). Effect displays for multinomial and proportional-odds logit models. ASA Methodology Conference -- Here is the corresponding JSS paper
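A hedged sketch of the effects-package route; the model, the data frame `dat`, and the variable names are all hypothetical, and the point is simply that `allEffects()` handles the marginalisation over the remaining covariates for a `MASS::polr` fit.

```r
# a minimal sketch, assuming an ordered-factor outcome `admit` in a hypothetical data frame `dat`
library(MASS)
library(effects)
fit <- polr(admit ~ gpa * situation + school + covariate1, data = dat, Hess = TRUE)
eff <- allEffects(fit)   # effect displays; other covariates are marginalised internally
plot(eff)                # displays of the fitted category probabilities
```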
null
CC BY-SA 2.5
null
2011-02-26T09:08:28.737
2011-02-26T09:08:28.737
null
null
930
null
7651
1
7654
null
1
118
I have collected a bunch of statistical results in the form of "YES" and "NO" strings. Now I would like to have a summary cell displaying "YES" if all these cells equal "YES" (or are empty).
How to assess whether a set of cells all have a specified value in Excel
CC BY-SA 2.5
null
2011-02-26T12:14:50.623
2011-02-27T10:49:04.070
2011-02-27T10:49:04.070
183
3446
[ "excel" ]
7653
1
7658
null
6
257
I read [Clinical Trials, a Methodologic Perspective](http://eu.wiley.com/WileyCDA/WileyTitle/productCd-0471727814.html) (S. Piantadosi) as I was suggested by one of you. According to the author: > A trialist must understand two different modes of thinking that support the science-clinical and statistical. They both underlie the re-emergence of therapeutics as a modern science. Each method of reasoning arose independently and must be combined skillfully if they are to serve therapeutic questions effectively. I cannot figure out what the author means by "clinical reasoning". Can you help me to understand that notion? Thank you in advance
In relation to clinical trials, what is clinical reasoning in contrast to statistical reasoning?
CC BY-SA 2.5
null
2011-02-26T13:52:56.903
2011-02-27T10:47:08.083
2011-02-27T10:47:08.083
183
3019
[ "clinical-trials" ]
7654
2
null
7651
2
null
Would you settle for a "TRUE" instead of a "YES" in case there are no "NO" cells? If so, in a nearby cell type `=countif([first cell in range]:[last cell in range],"NO")` Then in an adjoining cell type `=and([previous cell]="0")` As an example, if you had YES or NO strings in A1 through A20, then in A22 you would type `=countif(a1:a20,"NO")` and in A21 you would type `=and(a22="0")`
null
CC BY-SA 2.5
null
2011-02-26T14:31:51.987
2011-02-26T16:05:11.483
2011-02-26T16:05:11.483
582
2669
null
7655
2
null
7653
5
null
I have not read the book, but my best guess would be that the author wants to point out that critical reasoning is sometimes needed when applying statistics to biological and medical issues. The mere fact that, for instance, a treatment does not have a "statistically significant" effect does not imply that the treatment has no biological effect, and vice versa. Statistics can tell you whether a certain event is likely or unlikely to be happening, but it does not give you any hint as to whether something is biologically plausible.
null
CC BY-SA 2.5
null
2011-02-26T14:46:08.097
2011-02-26T14:46:08.097
null
null
582
null
7656
2
null
3955
2
null
There are definitely more and less complex ways to address this kind of problem. From the sound of things, you started out with a fairly simple solution (the formula you found on SO). With that kind of simplicity in mind, I thought I would revisit a few key points you make in (the current version of) your post. So far, you've said you want your measurement of "site activity" to capture: - Slope changes in visits/day over "the past few days" - Magnitude changes in visits/day over "the past few days" As @jan-galkowski points out, you also seem to be (at least tacitly) interested in the rank of the sites relative to each other along these dimensions. If that description is accurate, I would propose exploring the simplest possible solution that incorporates those three measures (change, magnitude, rank) as separate components. For example, you could grab: - The results of your SO solution to capture slope variation (although I would incorporate 3 or 4 days of data) - Magnitude of each site's most recent visits/day value (y2) divided by the mean visits/day for that site (Y): `y2 / mean(Y)` For W0, W1, and W2 respectively, that yields 0.16, 1.45, and 2.35. (For the sake of interpretation, consider that a site whose most recent visits-per-day value was equal to its mean visits-per-day would generate a result of 1). Note that you could also adjust this measure to capture the most recent 2 (or more) days: `(y2 + y1) / (2 * mean(Y))` That yields: 0.12, 1.33, 1.91 for your three sample sites. If you do, in fact, use the mean of each site's visit/day distribution for this kind of measure, I would also look at the distribution's standard deviation to get a sense of its relative volatility. The standard deviation for each site's visit/day distribution is: 12.69, 12.12, and 17.62. Thinking about the `y2/mean(Y)` measure relative to the standard deviation is helpful because it allows you to keep the recent magnitude of activity on site W2 in perspective (bigger standard deviation = less stable/consistent overall). Finally, if you're interested in ranks, you can extend these approaches in that direction too. For example, I would think that knowing a site's rank in terms of the most recent visits per day values as well as the rank of each site's mean visits per day (the rank of `mean(Y)` for each `W` in `Wn`) could be useful. Again, you can tailor to suit your needs. You could present the results of all these calculations as a table, or create a regularly-updated visualization to track them on a daily basis.
null
CC BY-SA 2.5
null
2011-02-26T14:46:45.880
2011-02-26T14:46:45.880
null
null
3396
null
7658
2
null
7653
9
null
I like @nico's response because it makes clear that statistical and pragmatic thinking shall come hand in hand; this also has the merit to bring out issues like statistical vs. clinical significance. But about your specific question, I would say this is clearly detailed in the two sections that directly follow your quote (p. 10). Rereading Piantadosi's textbook, it appears that the author means that clinical thinking applies to the situation where a physician has to interpret the results of RCTs or other studies in order to decide of the best treatment to apply to a new patient. This has to do with the extent to which (population-based) conclusions drawn from previous RCT might generalize to new, unobserved, samples. In a certain sense, such decision or judgment call for some form of clinical experience, which is not necessarily of the resort of a consistent statistical framework. Then, the author said that "the solution offered by statistical reasoning is to control the signal-to-noise ratio by design." In other words, this is a way to reduce uncertainty, and "the chance of drawing incorrect conclusions from either good or bad data." In sum, both lines of reasoning are required in order to draw valid conclusions from previous (and 'localized') studies, and choose the right treatment to administer to a new individual, given his history, his current medication, etc. -- treatment efficacy follows from a good balance between statistical facts and clinical experience. I like to think of a statistician as someone who is able to mark off the extent to which we can draw firm inferences from the observed data, whereas the clinician is the one that will have a more profound insight onto the implications or consequences of the results at the individual or population level.
null
CC BY-SA 2.5
null
2011-02-26T18:29:26.887
2011-02-26T18:29:26.887
null
null
930
null
7659
1
null
null
4
616
I have used an established 33-item empathy questionnaire. I have modified the scale from a 9 point to a 4 point scale in order to force an answer and reduce ambiguity. Do I need to retest the scale properties after this reduction in the number of response options?
Does a psychological scale need to be revalidated after reducing the number of response options from 9 to 4?
CC BY-SA 3.0
null
2011-02-26T19:28:19.357
2013-08-01T04:13:27.397
2013-08-01T04:13:27.397
183
3447
[ "likert", "scales", "psychology" ]
7660
2
null
6538
3
null
I have seen Statistical Inference, by Silvey, used by mathematicians who needed some workaday grasp of statistics. It's a small book, and should by rights be cheap. Looking at [http://www.amazon.com/Statistical-Inference-Monographs-Statistics-Probability/dp/0412138204/ref=sr_1_1?ie=UTF8&s=books&qid=1298750064&sr=1-1](http://rads.stackoverflow.com/amzn/click/0412138204), it seems to be cheap second hand. It's old and concentrates on classical statistics. While it's not highly abstract, it is intended for a reasonably mathematical audience - many of the exercises are from the Cambridge (UK) Diploma in Mathematical Statistics, which is basically an MSc.
null
CC BY-SA 2.5
null
2011-02-26T19:58:58.230
2011-02-26T19:58:58.230
null
null
1789
null
7662
2
null
7639
0
null
From your comment to my previous answer "....I also want to automatically generate 3-dimensional graphs of each slice of the three IVs (as in leaving one IV constant while using the other two as x and y, and the DV as z)....." Try the following (using the same data as above):

```
library(lattice)

#Build 3 independent variables
x <- rnorm(4300)
y <- runif(4300)
z <- rnorm(4300)

#Build the dependent variable
dep <- 2 + (0.5 * x) + (0.9 * y) + (-0.8 * z)

#Put everything in a data frame
df <- data.frame(x=x, y=y, z=z, dep=dep)

#Break up the x data into 4 chunks of equal counts
df$x4 <- equal.count(df$x, 4)
cloud(dep ~ y + z | x4, data = df, zlim = rev(range(df$dep)),
      screen = list(z = 130, x = -65), panel.aspect = 0.75,
      xlab = "y", ylab = "z", zlab = "dep")

#Break up the z data into 2 chunks of equal counts
df$z2 <- equal.count(df$z, 2)
cloud(dep ~ x + y | z2, data = df, zlim = rev(range(df$dep)),
      screen = list(z = 160, x = -80), panel.aspect = 0.75,
      xlab = "x", ylab = "y", zlab = "dep")
```

The above generates two plots. Below is the first plot. ![enter image description here](https://i.stack.imgur.com/QWOmX.jpg)
null
CC BY-SA 2.5
null
2011-02-26T20:49:23.040
2011-02-26T21:08:01.213
2011-02-26T21:08:01.213
2775
2775
null
7664
1
7667
null
1
75
In many cases, we do not actually have to explicitly compute the normalization constant of a certain distribution, if we know its kernel and it corresponds to a known family. Is there a name for this "trick" that I can look up and read more about? I don't think there is something very deep about it, but I figured it must be carrying some name. Thanks.
Is there a name for the "kernel principle"?
CC BY-SA 2.5
null
2011-02-26T21:28:48.913
2011-02-26T22:07:32.897
2011-02-26T21:45:46.313
null
3347
[ "distributions", "normalization" ]
7665
2
null
7659
5
null
Whether you need to retest your instrument's validity and reliability now that you have changed the number of response options is a matter for you and your dissertation committee to decide; there is no standard statistical knowledge that can definitively answer the question for you. You and/or your colleagues may find that, as you suggest, going from 9 to 4 reduces the arbitrariness with which respondents choose an option. On the other hand, when you call the instrument "established" you are implying that it already had good psychometric properties the way it was, and so changing the method of measurement entails some risk. Perhaps there are so few options now that the choice between, say, the first and the second has become more arbitrary. Alternatively, perhaps the lack of a neutral option would seem unreasonable to some; the topic might require it. If so, this would reduce reliability, and with it validity. The bottom line is, if there is good evidence for reliability and validity at 9 options, the burden could easily be placed on you to show that these properties have improved, or at least not worsened, at 4 options.
null
CC BY-SA 2.5
null
2011-02-26T21:31:05.807
2011-02-26T21:36:06.303
2011-02-26T21:36:06.303
2669
2669
null
7666
2
null
7644
7
null
There are a range of possibilities described by J.W. Gillard in [An Historical Overview of Linear Regression with Errors in both Variables](http://www.cardiff.ac.uk/maths/resources/Gillard_Tech_Report.pdf). If you are not interested in details or reasons for choosing one method over another, just go with the simplest, which is to draw the line through the centroid $(\bar{x},\bar{y})$ with slope $\hat{\beta}=s_y/s_x$, i.e. the ratio of the observed standard deviations (making the sign of the slope the same as the sign of the covariance of $x$ and $y$); as you can probably work out, this gives an intercept on the $y$-axis of $\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}.$ The merits of this particular approach are - it gives the same line comparing $x$ against $y$ as $y$ against $x$, - it is scale-invariant so you do not need to worry about units, - it lies between the two ordinary linear regression lines - it crosses them where they cross each other at the centroid of the observations, and - it is very easy to calculate. The slope is the geometric mean of the two ordinary linear regression slopes. It is also what you would get if you standardised the $x$ and $y$ observations, drew a line at 45° (or 135° if there is negative correlation) and then de-standardised the line. It could also be seen as equivalent to making an implicit assumption that the variances of the two sets of errors are proportional to the variances of the two sets of observations; as far as I can tell, you claim not to know which way this is wrong. Here is some R code to illustrate: the red line in the chart is OLS regression of $Y$ on $X$, the blue line is OLS regression of $X$ on $Y$, and the green line is this simple method. Note that the slope should be about 5.

```
X0 <- 1600:3600
Y0 <- 5*X0 + 700
X1 <- X0 + 400*rnorm(2001)
Y1 <- Y0 + 2000*rnorm(2001)
slopeOLSXY  <- lm(Y1 ~ X1)$coefficients[2]        #OLS slope of Y on X
slopeOLSYX  <- 1/lm(X1 ~ Y1)$coefficients[2]      #Inverse of OLS slope of X on Y
slopesimple <- sd(Y1)/sd(X1) * sign(cov(X1,Y1))   #Simple slope
c(slopeOLSXY, slopeOLSYX, slopesimple)            #Show the three slopes
plot(Y1~X1)
abline(mean(Y1) - slopeOLSXY * mean(X1), slopeOLSXY, col="red")
abline(mean(Y1) - slopeOLSYX * mean(X1), slopeOLSYX, col="blue")
abline(mean(Y1) - slopesimple * mean(X1), slopesimple, col="green")
```
null
CC BY-SA 2.5
null
2011-02-26T21:48:43.443
2011-02-27T17:23:09.987
2011-02-27T17:23:09.987
2958
2958
null
7667
2
null
7664
1
null
I was taught the stock phrase "recognising the density of a whatever distribution," to use as part of a mathematical proof. I agree this 'trick' is useful, but AFAIK it doesn't have a name.
null
CC BY-SA 2.5
null
2011-02-26T22:07:32.897
2011-02-26T22:07:32.897
null
null
449
null
7668
1
7669
null
3
582
Suppose I have data with p explanatory variables, and I want to use a LARS algorithm to build a model. Do I - Run LARS until all p variables have been added to my model, and the correlation between each variable and the residual is 0? - Or do I terminate LARS earlier, say a) by stopping LARS once k < p variables have been added to my model, or b) by stopping when the correlation between each variable and the residual crosses below some threshold t? (Where k or t could be determined by cross-validation.) In reading the original paper by Efron et al. again, it sounds like I do version 1 (build the full model). But then I'm confused -- how does this solution differ from the full maximum-likelihood OLS estimate? (The OLS estimate also produces zero correlation between each variable and the residual, and I thought there was only a single way to get this zero correlation, i.e., by orthogonal projection. Or am I mistaken?)
Stopping condition for least-angle regression
CC BY-SA 2.5
null
2011-02-26T22:43:50.250
2011-02-27T05:44:07.347
2011-02-26T23:20:04.150
null
1106
[ "regression", "regularization" ]
7669
2
null
7668
8
null
Certainly, if $p \leq n$ and you run LARS until you've included all $p$ variables in the model and the correlations are zero, then the solution will be exactly the OLS solution. You can view LARS as just another "regularized" least-squares estimate. Of course, it has a very close connection to both forward-stagewise regression and the lasso. My suspicion is that most people use the LARS algorithm primarily to compute the lasso solutions and that LARS itself has gotten fairly little use as an estimation method in its own right. Cross-validation to choose $k$ and $t$ should be feasible if your interest is on prediction. Since the maximal correlation decreases monotonically, then $k$ is completely determined by $t$ in LARS (see, e.g., the right pane of Fig. 3 of the paper). Hence, you only need optimize over the threshold $t$ in the cross-validation. This is cleaner in a couple of ways: (1) There is only one parameter to optimize over and (2) any sensible objective function you choose should be pretty much continuous in $t$, whereas $k$ is discrete. Discrete-valued parameters can often be less pleasant to deal with in CV. For more details regarding some of the properties of least angle regression, you can also see the solution to [this question](https://stats.stackexchange.com/questions/6795/least-angle-regression-keeps-the-correlations-monotonically-decreasing-and-tied).
null
CC BY-SA 2.5
null
2011-02-26T23:25:15.310
2011-02-27T05:44:07.347
2017-04-13T12:44:24.677
-1
2970
null
7670
1
null
null
13
529
I am designing a Hybrid Monte Carlo sampling algorithm for [PyMC](http://code.google.com/p/pymc/), and I am trying to make it as fuss-free and general as possible, so I am looking for good advice on designing an HMC algorithm. I have read [Radford's survey chapter](http://www.cs.utoronto.ca/~radford/ham-mcmc.abstract.html) and [Beskos et al.'s recent paper](http://arxiv.org/abs/1001.4460) on optimal (step size) tuning of HMC and I gathered the following tips: - Momentum variables should be distributed with covariance $C^{-1}$, where $C$ is generally something like the covariance matrix of the distribution (for simple distributions), but could conceivably be different (for funny shaped distributions). By default I am using the Hessian at the mode. - Trajectories should be calculated with the leapfrog method (other integrators don't seem to be worth it) - Optimal acceptance rate is .651 for really large problems and otherwise higher. - Step size should be scaled like $L\times d^{-1/4}$, where $L$ is a free variable and $d$ is the number of dimensions. - Step size should be smaller when there are light tails or otherwise regions with odd stability characteristics. Step size randomization can help with this. Are there other ideas that I should adopt or at least consider? Other papers I should read? For example, are there adaptive step size algorithms that are worth it? Is there good advice on trajectory length? Are there in fact better integrators? Someone please make this a community-wiki.
What should I know about designing a good Hybrid/Hamiltonian Monte Carlo algorithm?
CC BY-SA 2.5
null
2011-02-27T00:40:09.317
2014-06-23T15:04:53.173
2011-02-27T11:56:09.117
930
1146
[ "bayesian", "monte-carlo" ]
7671
2
null
7630
23
null
I think it depends on how it is to be used. Just for reference for other readers, if $P$ and $Q$ are probability measures, then the Jensen-Shannon Divergence is $$ J(P,Q) = \frac{1}{2} \big( D(P \mid\mid R) + D(Q\mid\mid R) \big) $$ where $R = \frac{1}{2} (P + Q)$ is the mid-point measure and $D(\cdot\mid\mid\cdot)$ is the Kullback-Leibler divergence. Now, I would be tempted to use the square root of the Jensen-Shannon Divergence since it is a metric, i.e. it satisfies all the "intuitive" properties of a distance measure. For more details on this, see > Endres and Schindelin, A new metric for probability distributions, IEEE Trans. on Info. Thy., vol. 49, no. 3, Jul. 2003, pp. 1858-1860. Of course, in some sense, it depends on what you need it for. If all you are using it for is to evaluate some pairwise measure, then any monotonic transformation of JSD would work. If you're looking for something that's closest to a "squared-distance", then the JSD itself is the analogous quantity. Incidentally, you might also be interested in [this previous question](https://stats.stackexchange.com/questions/6907/an-adaptation-of-the-kullback-leibler-distance) and the associated answers and discussions.
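For concreteness, a minimal sketch of computing the divergence and its square root for two discrete distributions given as probability vectors on a common support (not tied to any particular package):

```r
# a minimal sketch: Jensen-Shannon divergence (natural log) and its square root
jsd <- function(p, q) {
  m  <- (p + q) / 2
  kl <- function(a, b) sum(ifelse(a > 0, a * log(a / b), 0))   # Kullback-Leibler D(a || b)
  (kl(p, m) + kl(q, m)) / 2
}
p <- c(0.1, 0.4, 0.5); q <- c(0.3, 0.3, 0.4)
jsd(p, q)          # the divergence itself (the "squared-distance"-like quantity)
sqrt(jsd(p, q))    # the metric of Endres & Schindelin (2003)
```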
null
CC BY-SA 2.5
null
2011-02-27T03:32:38.977
2011-02-27T05:49:03.880
2017-04-13T12:44:33.357
-1
2970
null
7672
2
null
7648
8
null
The `NaiveBayes()` function in the [klaR](http://cran.r-project.org/web/packages/klaR/index.html) package obeys the classical `formula` R interface whereby you express your outcome as a function of its predictors, e.g. `spam ~ x1+x2+x3`. If your data are stored in a `data.frame`, you can input all predictors in the rhs of the formula using dot notation: `spam ~ ., data=df` means "`spam` as a function of all other variables present in the `data.frame` called `df`." Here is a toy example, using the `spam` dataset discussed in the [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (Hastie et al., Springer 2009, 2nd ed.), available on-line. This really is to get you started with the use of the R function, not the methodological aspects for using NB classifier.

```
data(spam, package="ElemStatLearn")
library(klaR)

# set up a training sample
train.ind <- sample(1:nrow(spam), ceiling(nrow(spam)*2/3), replace=FALSE)

# apply NB classifier
nb.res <- NaiveBayes(spam ~ ., data=spam[train.ind,])

# show the results
opar <- par(mfrow=c(2,4))
plot(nb.res)
par(opar)

# predict on holdout units
nb.pred <- predict(nb.res, spam[-train.ind,])

# raw accuracy
confusion.mat <- table(nb.pred$class, spam[-train.ind,"spam"])
sum(diag(confusion.mat))/sum(confusion.mat)
```

A recommended add-on package for such ML task is the [caret](http://caret.r-forge.r-project.org/Classification_and_Regression_Training.html) package. It offers a lot of useful tools for preprocessing data, handling training/test samples, running different classifiers on the same data, and summarizing the results. It is available from CRAN and has a lot of vignettes that describe common tasks.
null
CC BY-SA 2.5
null
2011-02-27T10:13:09.253
2011-02-28T21:16:27.870
2011-02-28T21:16:27.870
930
930
null
7673
2
null
7670
3
null
This paper is very interesting (although I haven't yet fully got my head around it), Girolami M. Calderhead B. (2011) Riemann manifold Langevin and Hamiltonian Monte Carlo methods. J. R. Statist. Soc. B. (with discussion). 73, Part 2. pp 1-37.
null
CC BY-SA 2.5
null
2011-02-27T10:33:26.677
2011-02-27T10:57:13.037
2011-02-27T10:57:13.037
887
887
null
7674
2
null
7419
16
null
I never used it directly, so I can only share some papers I had and general thoughts about that technique (which mainly address your questions 1 and 3). My general understanding of biclustering mainly comes from genetic studies (2-6) where we seek to account for clusters of genes and grouping of individuals: in short, we are looking to groups samples sharing similar profile of gene expression together (this might be related to disease state, for instance) and genes that contribute to this pattern of gene profiling. A survey of the state of the art for biological "massive" datasets is available in Pardalos's slides, [Biclustering](http://www.ise.ufl.edu/cao/DMinAgriculture/Lecture6.biclustering.pdf). Note that there is an R package, [biclust](http://cran.r-project.org/web/packages/biclust/index.html), with applications to microarray data. In fact, my initial idea was to apply this methodology to clinical diagnosis, because it allows to put features or variables in more than one cluster, which is interesting from a semeiological perpective because symptoms that cluster together allow to define syndrome, but some symptoms can overlap in different diseases. A good discussion may be found in Cramer et al., [Comorbidity: A network perspective](http://sites.google.com/site/borsboomdenny/CramerEtAl2010.pdf?attredirects=0) (Behavioral and Brain Sciences 2010, 33, 137-193). A somewhat related technique is [collaborative filtering](http://en.wikipedia.org/wiki/Collaborative_filtering). A good review was made available by Su and Khoshgoftaar (Advances in Artificial Intelligence, 2009): [A Survey of Collaborative Filtering Techniques](http://www.hindawi.com/journals/aai/2009/421425.html). Other references are listed at the end. Maybe analysis of [frequent itemset](http://www.albionresearch.com/data_mining/market_basket.php), as exemplified in the [market-basket problem](http://www.albionresearch.com/data_mining/market_basket.php), is also linked to it, but I never investigated this. Another example of co-clustering is when we want to simultaneously cluster words and documents, as in text mining, e.g. Dhillon (2001). [Co-clustering documents and words using bipartite spectral graph partitioning](http://www.cs.utexas.edu/users/inderjit/public_papers/kdd_bipartite.pdf). Proc. KDD, pp. 269–274. About some general references, here is a not very exhaustive list that I hope you may find useful: - Jain, A.K. (2010). Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 31, 651–666 - Carmona-Saez et al. (2006). Biclustering of gene expression data by non-smooth non-negative matrix factorization. BMC Bioinformatics, 7, 78. - Prelic et al. (2006). A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics, 22(9), 1122-1129. www.tik.ee.ethz.ch/sop/bimax - DiMaggio et al. (2008). Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies. BMC Bioinformatics, 9, 458. - Santamaria et al. (2008). BicOverlapper: A tool for bicluster visualization. Bioinformatics, 24(9), 1212-1213. - Madeira, S.C. and Oliveira, A.L. (2004) Bicluster algorithms for biological data analysis: a survey. IEEE Trans. Comput. Biol. Bioinform., 1, 24–45. - Badea, L. (2009). Generalized Clustergrams for Overlapping Biclusters. IJCAI - Symeonidis, P. (2006). Nearest-Biclusters Collaborative Filtering. WEBKDD
null
CC BY-SA 2.5
null
2011-02-27T11:51:49.200
2011-02-27T11:51:49.200
null
null
930
null
7675
1
7677
null
6
1566
An experiment I conducted recently used a 2 (between participants) x 3 (within participants) design. That is, participants were randomly allocated to one of two conditions and then each completed three similar tasks (in counterbalanced order). In each task, participants made binary choices (2AFC) across a number of trials, each of which had a normatively correct answer. Participants were also presented with a distractor, which was assumed to bias responses towards one of the alternatives; the tasks differed only in the presence and magnitude of this distractor (i.e. no distractor vs. a distractor of small or large magnitude).

I would like to examine the error rates (deviations from the normatively correct answer) across these conditions. I hypothesize that the error rate will increase when a distractor is present, but will not increase further when the magnitude of the distractor is increased. I also expect this increase to differ between the between-subjects conditions; this interaction is of central interest.

From [discussions here](https://stats.stackexchange.com/questions/3874/unbalanced-mixed-effect-anova-for-repeated-measures) and from the literature (Dixon, 2008; Jaeger, 2008), I gather that logit mixed models are the appropriate analysis method and that, in R, the [lme4 package](http://lme4.r-forge.r-project.org/) is the tool of choice. While I could compute some basic analyses (e.g. a random-intercept model, a random-effects ANCOVA) with lme4, I am stuck as to how to apply the models to the design in question -- I have the feeling that I am very much thinking in terms of HLMs, and have not yet quite understood mixed-effects models in their entirety. Therefore, I would be very grateful for your help. I have two basic questions:

- In a first analysis, I would like to consider the error rates in only those trials in which participants were biased towards the wrong answer, i.e. the first model would look only at trials in which the bias points away from the correct answer. If my observations were independent, I would probably use a model like `correct ~ condition + distractor + condition:distractor` ... but obviously they aren't: observations are grouped within a task (with a constant presence of the distractor) and within participants. My question, then, is this: how do I add the random effects to reflect this? (A tentative sketch of what I mean is given after the references.)

- (If I haven't lost you already :-) ) Would it be possible to include all trials (those where the bias points towards the wrong answer and those where it points towards the correct one), and include this difference (i.e. the direction of the bias) as a trial-level predictor? In my imagination of HLMs, such a predictor (at the level of the trial) would depend on the magnitude of the distractor present (at the level of the block), which again would depend on the condition of the participant (plus, possibly, a unique factor for each participant). The interactions would then emerge ›automatically‹ as cross-level interactions. How would such a model be specified in the ›flat‹ lme4 syntax? (Would such a model make sense at all?)

Ok, I hope all of this makes sense -- I will gladly elaborate otherwise. Again, I would be most grateful for any ideas and comments regarding this analysis, and I would like to thank you for taking the time and trouble to respond.

References

Dixon, P. (2008). Models of accuracy in repeated-measures designs. Journal of Memory and Language, 59(4), 447-456. doi: 10.1016/j.jml.2007.11.004

Jaeger, T. F. (2008). Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models. Journal of Memory and Language, 59(4), 434-446. doi: 10.1016/j.jml.2007.11.007
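
To make the first question concrete, the kind of call I would try might look like the sketch below; the data frame and column names are made up, and the random-effects structure is exactly what I am unsure about:

```
library(lme4)

## Tentative sketch for the first question: random intercepts for participants
## and tasks. 'biased_trials' and the column names are hypothetical.
m1 <- glmer(
  correct ~ condition * distractor + (1 | subject) + (1 | task),
  family = binomial,
  data = biased_trials   # only trials where the bias points away from the correct answer
)
summary(m1)
```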
Analyzing a 2x3 repeated measures design using a logit mixed model
CC BY-SA 2.5
null
2011-02-27T14:09:22.073
2011-04-18T10:22:26.577
2017-04-13T12:44:49.953
-1
3451
[ "r", "logistic", "mixed-model", "repeated-measures" ]
7676
1
null
null
5
175
I have data from 40 people, each measured once in each of two conditions. For reasons tangential to my question, I'm not comfortable using parametric analyses to compare the two conditions, so I'm employing bootstrapping.

However, when I run a bootstrap analysis (resampling people with replacement, computing the condition means, and repeating 1000 times), the condition means I obtain appear to be correlated across iterations. I noticed this because when I plot the 95% confidence interval for each condition, the intervals capture each other's means, but when I compute a difference score between the condition means within each iteration, the 95% confidence interval of this difference score excludes zero. Going back to the bootstrap results, I find that the correlation between the conditions across the bootstrap iterations is .7!

Should this affect my confidence in the interval generated for the difference score? Alternatively, does this tell me something else about my data (possibly about individual differences)?
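
For concreteness, here is a sketch of the resampling scheme I describe; the `scores` object is hypothetical (one row per person, one column per condition):

```
## Paired bootstrap: resample people (rows) with replacement so that each
## person's pair of condition scores stays together.
set.seed(1)
n_boot <- 1000
boot_means <- matrix(NA_real_, nrow = n_boot, ncol = 2)
for (b in seq_len(n_boot)) {
    idx <- sample(nrow(scores), replace = TRUE)
    boot_means[b, ] <- colMeans(scores[idx, , drop = FALSE])
}

## Per-condition percentile intervals vs. the interval for the difference score
apply(boot_means, 2, quantile, probs = c(0.025, 0.975))
quantile(boot_means[, 1] - boot_means[, 2], probs = c(0.025, 0.975))

## Correlation of the two condition means across iterations (the .7 I mention)
cor(boot_means[, 1], boot_means[, 2])
```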
Should I be concerned if the cells of values obtained from bootstrapping are correlated?
CC BY-SA 2.5
null
2011-02-27T15:01:27.410
2011-05-10T05:44:57.843
2011-02-28T18:21:04.023
364
364
[ "confidence-interval", "bootstrap" ]
7677
2
null
7675
3
null
I would definitely use all your data and add direction of intended bias induction as a variable. Since you already have variables in the model describing the difference between the tasks, I don't believe that adding task as a random effect is necessary. The model would be:

```
my_model = lmer(
    correct ~ (1|subject) + condition*distractor*direction
    , family = 'binomial'
    , data = my_data
)
```

Check out the `ezMixed` function from the [ez package](http://cran.r-project.org/web/packages/ez/index.html) for an automated way of evaluating evidence for each effect in the model.
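
If you want to reassure yourself that task really can be left out, one rough check is to fit a second model that adds a task intercept and compare the two by a likelihood-ratio test. A sketch, assuming a column in the data identifying the task, and using `glmer()` (the lme4 interface for binomial models, equivalent to passing `family` to `lmer()` as above):

```
## Rough check of whether a task intercept adds anything; 'task' is assumed to
## be a column in my_data identifying the three tasks.
m_subj      <- glmer(correct ~ condition * distractor * direction + (1 | subject),
                     family = binomial, data = my_data)
m_subj_task <- glmer(correct ~ condition * distractor * direction + (1 | subject) + (1 | task),
                     family = binomial, data = my_data)
anova(m_subj, m_subj_task)   # likelihood-ratio test for the task intercept
```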
null
CC BY-SA 2.5
null
2011-02-27T15:09:35.677
2011-02-27T15:09:35.677
null
null
364
null