Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
5901 | 2 | null | 5894 | 0 | null | Edit: I previously assumed the search would stop after 27 tries, which would have biased the results
```
# Number of repeats
iters = 10000
results = rep(0, iters)
# Ball to search for
searchNumber = 27
for(i in 1:iters) {
  # A long sequence of draws with replacement (not a permutation)
  balls = sample(1:27, iters, replace = TRUE)
  # Number of draws until the search number first appears
  tries = 1
  while(tries < iters) {
    if(balls[tries] == searchNumber) {
      results[i] = tries
      break
    }
    tries = tries + 1
  }
}
hist(results, breaks = 100, col = "blue")
mean(results)
```
Edit: The mean of the results is around 27.
Edit: In my example we search for 27, but the results are the same for any number between 1 and 27.
| null | CC BY-SA 2.5 | null | 2011-01-02T21:55:46.263 | 2011-01-03T00:10:55.017 | 2011-01-03T00:10:55.017 | null | null | null |
5902 | 2 | null | 5836 | 0 | null | Try the [directlabels](http://directlabels.r-forge.r-project.org/) package for some more control. As far as the caption goes, if you're automatically generating the graphs, you might simply use it as caption text (in LaTeX or HTML or what have you). Hope this helps.
| null | CC BY-SA 2.5 | null | 2011-01-02T22:56:50.777 | 2011-01-02T22:56:50.777 | null | null | 317 | null |
5903 | 1 | null | null | 16 | 6901 | Given two arrays x and y, both of length n, I fit a model y = a + b*x and want to calculate a 95% confidence interval for the slope. This is (b - delta, b + delta) where b is found in the usual way and
```
delta = qt(0.975,df=n-2)*se.slope
```
and se.slope is the standard error in the slope. One way to get the standard error of the slope from R is `summary(lm(y~x))$coef[2,2]`.
Now suppose I write the likelihood of the slope given x and y, multiply this by a "flat" prior and use a MCMC technique to draw a sample m from the posterior distribution. Define
```
lims = quantile(m,c(0.025,0.975))
```
My question: is `(lims[[2]]-lims[[1]])/2` approximately equal to delta as defined above?
Addendum: Below is a simple JAGS model where these two seem to be different.
```
model {
for (i in 1:N) {
y[i] ~ dnorm(mu[i], tau)
mu[i] <- a + b * x[i]
}
a ~ dnorm(0, .00001)
b ~ dnorm(0, .00001)
tau <- pow(sigma, -2)
sigma ~ dunif(0, 100)
}
```
I run the following in R:
```
N <- 10
x <- 1:10
y <- c(30.5,40.6,20.5,59.1,52.5,
96.0,121.4,78.9,112.1,128.4)
lin <- lm(y~x)
#Calculate delta for a 95% confidence interval on the slope
delta.lm <- qt(0.975,df=N-2)*summary(lin)$coef[2,2]
library('rjags')
jags <- jags.model('example.bug', data = list('x' = x,'y' = y,'N' = N),
n.chains = 4,n.adapt = 100)
update(jags, 1000)
params <- jags.samples(jags,c('a', 'b', 'sigma'),7500)
lims <- quantile(params$b,c(0.025,0.975))
delta.bayes <- (lims[[2]]-lims[[1]])/2
cat("Classical confidence region: +/-",round(delta.lm, digits=4),"\n")
cat("Bayesian confidence region: +/-",round(delta.bayes,digits=4),"\n")
```
And get:
Classical confidence region: +/- 4.6939
Bayesian confidence region: +/- 5.1605
Rerunning this multiple times, the Bayesian confidence region is consistently wider than the classical one. So is this due to the priors I've chosen?
| Confidence intervals for regression parameters: Bayesian vs. classical | CC BY-SA 2.5 | null | 2011-01-02T23:59:51.453 | 2012-06-22T14:09:32.070 | 2011-01-16T14:51:59.780 | 2116 | 2617 | [
"r",
"regression",
"bayesian",
"confidence-interval",
"frequentist"
]
|
5904 | 2 | null | 5903 | 6 | null | If you sample from the posterior of b | y and calculate lims (as you define) it should be the same as (b - delta, b + delta). Specifically, if you calculate the posterior distribution of b | y under a flat prior, it is the same as the classical sampling distribution of b.
For more details refer to: Gelman et al. (2003). Bayesian Data Analysis. CRC Press. Section 3.6
Edit:
Ringold, the behavior you observed is consistent with the Bayesian idea. The Bayesian credible interval (CI) is generally wider than the classical one. And the reason is, as you correctly guessed, that the hyperpriors take into account the variability due to the unknown parameters.
For simple scenarios like these (NOT IN GENERAL):
Bayesian CI > Empirical Bayesian CI > Classical CI ; > == wider
| null | CC BY-SA 2.5 | null | 2011-01-03T01:53:27.830 | 2011-01-03T13:56:55.263 | 2011-01-03T13:56:55.263 | 1307 | 1307 | null |
5905 | 2 | null | 5899 | 10 | null | To give some context, I don't view this as a "statistical" question so much as a "group preference" question. Economists and policy wonks do a lot of thinking about questions of how to convert individual preferences into a "will of the people." You will find lots of interesting reading if you search the web for "political economy" and "voting system" together.
Reasonable people will disagree on which candidate "wins" for the example you gave. There is no objectively correct voting system. Baked into the idea of a voting system are assumptions about fairness and representation. Each voting system has advantageous and disadvantageous properties. (Digging into the pros/cons of various voting systems is super interesting but would take many pages. I recommend starting with Wikipedia's entry on "Voting Systems". Also be sure to read about Arrow's Impossibility Theorem.)
Don't expect to find a universally accepted "best" voting system. Rather, pick a voting system that seems most reasonable for your domain (i.e. the upsides of the system outweigh the downsides). Make sure that your constituency buys-in to the voting system if you want the results to be taken seriously.
Resolving this described situation cannot be done with statistics alone. You will need to choose a voting system -- that choice will drive the ballot choice and how the ballots are converted into results (e.g. one or more winners, some sort of ranking, or some sort of scoring).
This is a big question. I think you'll enjoy digging into these topics.
Finally, for what it is worth, I have strong reservations about the described ballot (0 for no hire, +1 for maybe, +2 for hire) as a way for a board to pick a CEO.
| null | CC BY-SA 2.5 | null | 2011-01-03T01:59:54.670 | 2011-02-02T15:44:38.890 | 2011-02-02T15:44:38.890 | 660 | 660 | null |
5907 | 2 | null | 5893 | 1 | null | The amount of information that can be found varies wildly, from just race and gender, to all sorts of personal info. Your best bet at getting the information would be social network sites like Facebook, as they generally provide more information than census databases.
| null | CC BY-SA 2.5 | null | 2011-01-02T19:30:45.810 | 2011-01-03T11:13:42.550 | null | null | null | null |
5908 | 2 | null | 5893 | 1 | null | There's quite a wide range of information you can get depending on the sources you use. Census data is an obvious one. You can also get information from Facebook, MySpace and other social networking sites. You could also probably search public news archives for mentions of their name. Maybe even those unclaimed property sites that some states have.
If you want a real-world example of what can be done, take a look at pipl.com
| null | CC BY-SA 2.5 | null | 2011-01-02T19:41:43.203 | 2011-01-03T11:13:42.550 | null | null | null | null |
5909 | 2 | null | 5893 | 13 | null | This is not a serious answer, but I just remembered something from a book I read a year ago. There is a chapter in [Freakonomics](http://rads.stackoverflow.com/amzn/click/0060731338) devoted to what you can tell about a person from the name. The chapter is based on the author's research paper The causes and consequences of distinctively black names
I think I've found an excerpt or summary of it in [this article](http://www.slate.com/id/2116449/)
>
The data show that, on average, a person with a distinctively black name—whether it is a woman named Imani or a man named DeShawn—does have a worse life outcome than a woman named Molly or a man named Jake. But it isn't the fault of his or her name. If two black boys, Jake Williams and DeShawn Williams, are born in the same neighborhood and into the same familial and economic circumstances, they would likely have similar life outcomes. But the kind of parents who name their son Jake don't tend to live in the same neighborhoods or share economic circumstances with the kind of parents who name their son DeShawn. And that's why, on average, a boy named Jake will tend to earn more money and get more education than a boy named DeShawn. DeShawn's name is an indicator—but not a cause—of his life path.
| null | CC BY-SA 2.5 | null | 2011-01-02T20:43:28.110 | 2011-01-03T11:13:42.550 | null | null | null | null |
5910 | 2 | null | 5893 | 2 | null | You probably could find out:
- Profession and possibly job history, if one participates in any professional discussions (current job usually can be found out from either domain name in email or signature, search would reveal past ones too)
- Relatives, if one maintains profile on social networks.
- Current location, at least up to the city.
- Ethnic background, if one has distinct name (i.e., somebody named "Lubomir" is probably connected to one of the Slavic European countries, etc.).
- Birth date from social networks - people tend to congratulate a person on or around his birth date, and if you're lucky you also get the year: when one turns 25, 30, 35, etc., one of the people offering congratulations (if not the person in question) will probably mention it.
- Educational background - from LinkedIn, etc.
- Hobbies, favorite sports teams, etc.
- If one is a pet lover, he'd probably have all his pets on the social networks too.
Which btw means you should never ever use anything from the list above for your passwords, secret questions, etc.
| null | CC BY-SA 2.5 | null | 2011-01-03T01:47:42.210 | 2011-01-03T11:13:42.550 | null | null | null | null |
5911 | 2 | null | 5893 | 4 | null | Just to add in to other suggestions here, one of the largest sources for family data is the raft of genealogy sites out there. I think most western people are probably listed by some family member, distant or otherwise on a few of them and any such inclusion comes with a usually comprehensive family tree attached, complete with places, birth details, etc. Very informative.
If you cross-match that data with friend graphs in Facebook, as people tend to add siblings/cousins (and parents/children on occasion), then use the locational data with electoral rolls and directories, you can usually pinpoint people even with common names, and get a surprisingly large amount of data on them.
| null | CC BY-SA 2.5 | null | 2011-01-03T03:24:33.477 | 2011-01-03T11:13:42.550 | null | null | null | null |
5912 | 1 | 5920 | null | 10 | 1984 | A recent question about [alternatives to logistic regression in R](https://stats.stackexchange.com/questions/2234/alternatives-to-logistic-regression-in-r) yielded a variety of answers including randomForest, gbm, rpart, bayesglm, and generalized additive models. What are the practical and interpretation differences between these methods and logistic regression? What assumptions do they make (or not make) relative to logistic regression? Are they suitable for hypothesis testing? Etc.
| What are the practical & interpretation differences between alternatives and logistic regression? | CC BY-SA 2.5 | null | 2011-01-03T13:13:27.213 | 2022-08-15T19:16:51.353 | 2017-04-13T12:44:48.343 | -1 | 196 | [
"r",
"hypothesis-testing",
"logistic",
"random-forest"
]
|
5913 | 1 | 5921 | null | 7 | 2985 | I’m writing some code (JavaScript) to compare benchmark results. I’m using the [Welch T-test](http://frank.mtsu.edu/~dkfuller/notes302/welcht.pdf) because the variance and/or sample size between benchmarks is most likely different. The critical value is pulled from a T-distribution table at 95% confidence (two-sided).
The Welch formula is pretty straight-forward, but I am fuzzy on interpreting a significant result. I am not sure if the critical value should be divided by 2 or not. Help clearing that up is appreciated. Also should I be [rounding](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/round) the degrees of freedom, `df`, to lookup the critical value or would [Math.ceil](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/ceil) or [Math.floor](https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/floor) be more appropriate?
```
/**
* Determines if the benchmark's hertz is higher than another.
* @member Benchmark
* @param {Object} other The benchmark to compare.
* @returns {Number} Returns `1` if higher, `-1` if lower, and `0` if indeterminate.
*/
function compare(other) {
// use welch t-test
// http://frank.mtsu.edu/~dkfuller/notes302/welcht.pdf
// http://www.public.iastate.edu/~alicia/stat328/Regression%20inference-part2.pdf
var a = this.stats,
b = other.stats,
pow = Math.pow,
bitA = a.variance / a.size,
bitB = b.variance / b.size,
df = pow(bitA + bitB, 2) / ((pow(bitA, 2) / (a.size - 1)) + (pow(bitB, 2) / (b.size - 1))), // Welch-Satterthwaite: divide by (n - 1), not subtract 1 from the quotient
t = (a.mean - b.mean) / Math.sqrt(bitA + bitB),
c = getCriticalValue(Math.round(df));
// check if t-statistic is significant
return Math.abs(t) > c / 2 ? (t > 0 ? 1 : -1) : 0;
}
```
---
Update: Thanks for all the replies so far! My colleague [posted some more info here](https://stats.stackexchange.com/questions/5913/interpreting-two-sided-two-sample-welch-t-test/5961#5961), in case that affects the advice.
| Interpreting two-sided, two-sample, Welch T-Test | CC BY-SA 2.5 | null | 2011-01-03T13:17:49.960 | 2011-01-04T18:48:40.033 | 2017-04-13T12:44:52.277 | -1 | 2616 | [
"distributions",
"t-test",
"javascript"
]
|
5915 | 2 | null | 5913 | 6 | null | Dividing by 2 is for p-values. If you compare critical values the division by 2 is not necessary. The function `getCriticalValue` should be the [quantile function](http://en.wikipedia.org/wiki/Quantile_function) of Student's t distribution. Thus it should take 2 values, the probability and the degrees of freedom. If you want 2-sided hypothesis as indicated by your code, then you need 0.975 quantile.
For the rounding, since the degrees of freedom are positive `Math.round` looks good.
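To illustrate, here is a small sketch in R with made-up numbers (the JavaScript side would call its own t-quantile routine in the same way): the two-sided 95% critical value is the 0.975 quantile of the t distribution, compared with |t| directly and with no further division by 2.
```
df   <- 8.37                 # hypothetical Welch-Satterthwaite degrees of freedom
t    <- 2.5                  # hypothetical t statistic
crit <- qt(0.975, round(df)) # 0.975 quantile; rounding df is optional (see the next answer)
abs(t) > crit                # TRUE => significant at the two-sided 5% level
```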
| null | CC BY-SA 2.5 | null | 2011-01-03T13:56:58.953 | 2011-01-03T13:56:58.953 | null | null | 2116 | null |
5916 | 2 | null | 5913 | 6 | null | It's not absolutely necessary to round the degrees of freedom to an integer. [Student's t-distribution](http://en.wikipedia.org/wiki/Student%27s_t-distribution) can be defined for all positive real values of this parameter. Restricting it to a positive integer may make the critical value easier to calculate though, depending on how you're doing that. And it will make very little difference in practice with any reasonable sample sizes.
| null | CC BY-SA 2.5 | null | 2011-01-03T14:16:02.377 | 2011-01-03T14:16:02.377 | null | null | 449 | null |
5917 | 2 | null | 5899 | 2 | null | You're asking an intriguing question. I agree with the comments that are showing some apprehension at the "one-man-one-vote" system. I also agree that knowing the basic statistics (like standard deviation and mean) will not give you an insight into the will of the voters.
I would like to play off of David James's answer, keying in on stakeholders. Instead of a vote, perhaps you could give stakeholders a virtual account, which they must "spend" on the candidates.
If they had `$100` each, then perhaps one stakeholder would show a strong preference for candidate A by spending all `$100` on him or her. Another stakeholder might like candidate B slightly more than candidate A and spend `$60/$40`. A third stakeholder might find all three candidates equally (un)appealing and spend `$33/$33/$34.`
A variation would be to give different stakeholders accounts of different sizes. For example, perhaps the exiting CEO gets `$200` and a worker's representative gets `$150.`
You could even ask for an open vote, where each stakeholder explains his reasoning.
Highest earner wins the position. Or maybe the top two get the most careful look and a runoff.
This betting technique is an adaptation of what was done in Blind Man's Bluff: The Untold Story of American Submarine Espionage (1998, S. Sontag and C. Drew), in which a B-52 bomber collided with an air tanker and an H-bomb was lost.
>
Craven asked a group of submarine and salvage experts to place Las-Vegas-style bets on the probability of each of the different scenarios that might describe the bomb's loss.... Each scenario left the weapon in a different location.... He was relying on Bayes' theorem of subjective probability. (pp. 58-59)
Whatever you choose, please make sure that the rules are clear before you start voting on candidates. A perception that the rules changed will not help the transition.
| null | CC BY-SA 2.5 | null | 2011-01-03T15:06:52.347 | 2011-01-03T15:06:52.347 | null | null | 2591 | null |
5918 | 1 | 5988 | null | 24 | 8646 | Note: Case is n>>p
I am reading Elements of Statistical Learning and there are various mentions of the "right" way to do cross validation (e.g. page 60, page 245). Specifically, my question is how to evaluate the final model (without a separate test set) using k-fold CV or bootstrapping when there has been a model search? It seems that in most cases (ML algorithms without embedded feature selection) there will be
- A feature selection step
- A meta parameter selection step (e.g. the cost parameter in SVM).
My Questions:
- I have seen it suggested that feature selection can be done on the whole training set and the chosen features set aside. Then, using k-fold CV, the feature selection algorithm is run again in each fold (getting different features possibly chosen each time) and the error averaged. Then, you would use the features chosen using all the data (that were set aside) to train the final model, but use the error from the cross validation as an estimate of future performance of the model. IS THIS CORRECT?
- When you are using cross validation to select model parameters, then how to estimate model performance afterwards? IS IT THE SAME PROCESS AS #1 ABOVE OR SHOULD YOU USE NESTED CV LIKE SHOWN ON PAGE 54 (pdf) OR SOMETHING ELSE?
- When you are doing both steps (feature and parameter setting).....then what do you do? complex nested loops?
- If you have a separate hold out sample, does the concern go away and you can use cross validation to select features and parameters (without worry since your performance estimate will come from a hold out set)?
| Cross Validation (error generalization) after model selection | CC BY-SA 4.0 | null | 2011-01-03T15:08:29.897 | 2018-10-25T09:30:53.700 | 2018-10-25T09:30:53.700 | 128677 | 2040 | [
"machine-learning",
"model-selection",
"data-mining",
"cross-validation"
]
|
5920 | 2 | null | 5912 | 9 | null | Disclaimer: It is certainly far from being a full answer to the question!
I think there are at least two levels to consider before establishing a distinction between all such methods:
- whether a single model is fitted or not: This helps to contrast methods like logistic regression vs. RF or Gradient Boosting (or more generally Ensemble methods), and also puts emphasis on parameter estimation (with associated asymptotic or bootstrap confidence intervals) vs. classification or prediction accuracy computation;
- whether all variables are considered or not: This is the basis of feature selection, in the sense that penalization or regularization allows one to cope with "irregular" data sets (e.g., large $p$ and/or small $n$) and improve the generalizability of the findings.
Here are few other points that I think are relevant to the question.
In case we consider several models--the same model is fitted on different subsets (individuals and/or variables) of the available data, or different competitive models are fitted on the same data set--, [cross-validation](https://en.wikipedia.org/wiki/Cross-validation_(statistics)) can be used to avoid overfitting and perform model or feature selection, although CV is not limited to these particular cases (it can be used with [GAMs](https://en.wikipedia.org/wiki/Generalized_additive_model) or penalized GLMs, for instance). Also, there is the traditional interpretation issue: more complex models often imply more complex interpretation (more parameters, more stringent assumptions, etc.).
Gradient boosting and RFs overcome the limitations of a single decision tree, thanks to [Boosting](https://en.wikipedia.org/wiki/Boosting) whose main idea is to combine the output of several weak learning algorithms in order to build a more accurate and stable decision rule, and [Bagging](https://en.wikipedia.org/wiki/Bootstrap_aggregating) where we "average" results over resampled data sets. Altogether, they are often viewed as some kind of black boxes in comparison to more "classical" models where clear specifications for the model are provided (I can think of three classes of models: [parametric](https://en.wikipedia.org/wiki/Parametric_model), [semi-parametric](https://en.wikipedia.org/wiki/Semiparametric_model), [non-parametric](https://en.wikipedia.org/wiki/Non-parametric_model)), but I think the discussion held under this other thread [The Two Cultures: statistics vs. machine learning?](https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning) provides interesting viewpoints.
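As a concrete (if simplistic) illustration of the CV point above, here is a minimal sketch in R on simulated data; it assumes the randomForest package is installed and is only meant to show the mechanics of comparing predictive accuracy, not a recommended workflow.
```
library(randomForest)
set.seed(42)
n <- 500
dat <- data.frame(x1 = rnorm(n), x2 = rnorm(n))
dat$y <- factor(rbinom(n, 1, plogis(0.8 * dat$x1 - 0.5 * dat$x2)))
folds <- sample(rep(1:5, length.out = n))
err <- sapply(1:5, function(k) {
  train <- folds != k
  fit.glm <- glm(y ~ x1 + x2, data = dat, subset = train, family = binomial)
  fit.rf  <- randomForest(y ~ x1 + x2, data = dat, subset = which(train))
  pred.glm <- predict(fit.glm, dat[!train, ], type = "response") > 0.5
  pred.rf  <- predict(fit.rf,  dat[!train, ])
  c(logistic = mean(pred.glm != (dat$y[!train] == "1")),
    rf       = mean(pred.rf  != dat$y[!train]))
})
rowMeans(err)  # average 5-fold CV misclassification error of each model
```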
Here are a couple of papers about feature selection and some ML techniques:
- Saeys, Y, Inza, I, and Larrañaga, P. A review of feature selection techniques in bioinformatics, Bioinformatics (2007) 23(19): 2507-2517.
- Dougherty, ER, Hua J, and Sima, C. Performance of Feature Selection Methods, Current Genomics (2009) 10(6): 365–374.
- Boulesteix, A-L and Strobl, C. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction, BMC Medical Research Methodology (2009) 9:85.
- Caruana, R and Niculescu-Mizil, A. An Empirical Comparison of Supervised Learning Algorithms. Proceedings of the 23rd International Conference on Machine Learning (2006).
- Friedman, J, Hastie, T, and Tibshirani, R. Additive logistic regression: A statistical view of boosting, Ann. Statist. (2000) 28(2):337-407. (With discussion)
- Olden, JD, Lawler, JJ, and Poff, NL. Machine learning methods without tears: a primer for ecologists, Q Rev Biol. (2008) 83(2):171-93.
And of course, [The Elements of Statistical Learning](https://hastie.su.domains/ElemStatLearn/), by Hastie and coll., is full of illustrations and references. Also be sure to check the [Statistical Data Mining Tutorials](https://www.cs.cmu.edu/%7E./awm/tutorials/index.html), from Andrew Moore.
| null | CC BY-SA 4.0 | null | 2011-01-03T15:39:49.363 | 2022-08-15T19:16:51.353 | 2022-08-15T19:16:51.353 | 79696 | 930 | null |
5921 | 2 | null | 5913 | 9 | null | (1a) You don't need the Welch test to cope with different sample sizes. That's automatically handled by the Student t-test.
(1b) If you think there's a real chance the variances in the two populations are strongly different, then you are assuming a priori that the two populations differ. It might not be a difference of location--that's what a t-test evaluates--but it's still an important difference nonetheless. Don't paper it over by adopting a test that ignores this difference! (Differences in variance often arise where one sample is "contaminated" with a few extreme results, simultaneously shifting the location and increasing the variance. Because of the large variance it can be difficult to detect the shift in location (no matter how great it is) in a small to medium size sample, because the increase in variance is roughly proportional to the squared change in location. This form of "contamination" occurs, for instance, when only a fraction of an experimental group responds to the treatment.) Therefore you should consider a more appropriate test, such as a [slippage test](http://books.google.com/books?id=u97pzxRjaCQC&pg=PA384&lpg=PA384&dq=statistics+slippage+test&source=bl&ots=ixvyigvJR3&sig=Z-cjW8Bbvae-I5Q8SCZfw0P3qqA&hl=en&ei=K-4hTd2nAcH68AbU0YCqDg&sa=X&oi=book_result&ct=result&resnum=8&ved=0CFAQ6AEwBw#v=onepage&q=statistics%20slippage%20test&f=false). Even better would be a less automated graphical approach using exploratory data analysis techniques.
(2) Use a [two-sided test](http://en.wikipedia.org/wiki/Two-tailed_test) when a change of average in either direction (greater or lesser) is possible. Otherwise, when you are testing only for an increase or decrease in average, use a one-sided test.
(3) Rounding would be incorrect and you shouldn't have to do it: most algorithms for computing t distributions don't care whether the DoF is an integer. Rounding is not a big deal, but if you're using a t-test in the first place, you're concerned about small sample sizes (for otherwise the simpler [z-test](http://en.wikipedia.org/wiki/Z-test) will work fine) and even small changes in DoF can matter a little.
| null | CC BY-SA 2.5 | null | 2011-01-03T15:45:13.360 | 2011-01-03T17:38:42.127 | 2011-01-03T17:38:42.127 | 919 | 919 | null |
5922 | 1 | 6045 | null | 11 | 891 | A colleague in applied statistics sent me this:
>
"I was wondering if you know any way
to find out the true dimension of the
domain of a function. For example, a
circle is a one dimensional function
in a two dimensional space. If I do
not know how to draw, is there a
statistic that I can compute that
tells me that it is a one dimensional
object in a two dimensional space? I
have to do this in high dimensional
situations so cannot draw pictures.
Any help will be greatly appreciated."
The notion of dimension here is obviously ill-defined. I mean, I could run a curve through any finite collection of points in high dimensional space, and say that my data is one-dimensional. But, depending on the configuration, there may be an easier or more efficient way to describe the data as a higher dimensional set. Such issues must have been considered in the statistics literature, but I'm not familiar with it. Any pointers or suggestions? Thanks!
| Estimating the dimension of a data set | CC BY-SA 2.5 | null | 2011-01-03T15:47:12.577 | 2021-08-21T19:40:50.943 | 2021-08-21T19:40:50.943 | 11887 | null | [
"large-data",
"high-dimensional",
"dimensions"
]
|
5923 | 2 | null | 5903 | 10 | null | The 'problem' is in the prior on sigma. Try a less informative setting
```
tau ~ dgamma(1.0E-3,1.0E-3)
sigma <- pow(tau, -1/2)
```
in your jags file. Then update a bunch
```
update(jags, 10000)
```
grab the parameters, and summarise your quantity of interest. It should line up reasonably well with the classic version.
Clarification: The updating is just to make sure you get where you're going whatever choice of prior you decide on, although chains for models like this one with diffuse priors and random starting values do take longer to converge. In real problems you'd check convergence before summarising anything, but convergence is not the main issue in your example I don't think.
| null | CC BY-SA 3.0 | null | 2011-01-03T15:54:07.543 | 2012-06-22T14:09:32.070 | 2012-06-22T14:09:32.070 | 1739 | 1739 | null |
5924 | 2 | null | 5922 | 7 | null | [Principal Components Analysis](http://en.wikipedia.org/wiki/Principal_component_analysis) of local data is a good point of departure. We have to take some care, though, to distinguish local (intrinsic) from global (extrinsic) dimension. In the example of points on a circle, the local dimension is 1, but overall the points within the circle lie in a 2D space. To apply PCA to this, the trick is to localize: select one data point and extract only those that are close to it. Apply PCA to this subset. The number of large eigenvalues will suggest the intrinsic dimension. Repeating this at other data points will indicate whether the data exhibit a constant intrinsic dimension throughout. If so, each of the PCA results provides a partial [atlas](http://en.wikipedia.org/wiki/Differentiable_manifold#Atlases) of the manifold.
| null | CC BY-SA 2.5 | null | 2011-01-03T16:03:25.970 | 2011-01-03T16:03:25.970 | null | null | 919 | null |
5925 | 2 | null | 5859 | 10 | null | The choice of the survival model should be guided by the underlying phenomenon. In this case it appears to be continuous, even if the data is collected in a somewhat discrete manner. A resolution of one month would be just fine over a 5-year period.
However, the large number of ties at 6 and 12 months makes one wonder whether you really have a 1-month precision (the ties at 0 are expected - that's a special value where a relatively large number of deaths actually happen). I am not quite sure what you can do about that as this most likely reflects after-the-fact rounding rather than interval censoring.
| null | CC BY-SA 2.5 | null | 2011-01-03T16:07:14.337 | 2011-01-03T16:07:14.337 | null | null | 279 | null |
5926 | 1 | null | null | 11 | 24195 | The dependent variables in a MANOVA should not be "too strongly correlated". But how strong a correlation is too strong? It would be interesting to get people's opinions on this issue. For instance, would you proceed with MANOVA in the following situations?
- Y1 and Y2 are correlated with $r=0.3$ and $p<0.005$
- Y1 and Y2 are correlated with $r=0.7$ and $p=0.049$
Update
Some representative quotes in response to @onestop:
>
"MANOVA works well in situations where there are moderate correlations between DVs" (course notes from San Francisco State Uni)
"The dependent variables are correlated which is appropriate for Manova" (United States EPA Stats Primer)
"The dependent variables should be related conceptually, and they should
be correlated with one another at a low to moderate level." (Course notes from Northern Arizona University)
"DVs correlated from about .3 to about .7 are eligible" (Maxwell 2001, Journal of Consumer Psychology)
n.b. I'm not referring to the assumption that the intercorrelation between Y1 and Y2 should be the same across all levels of independent variables, simply to this apparent grey area about the actual magnitude of the intercorrelation.
| MANOVA and correlations between dependent variables: how strong is too strong? | CC BY-SA 3.0 | null | 2011-01-03T17:12:09.590 | 2019-10-15T17:51:53.403 | 2014-10-27T11:57:01.623 | 28666 | 266 | [
"correlation",
"anova",
"multivariate-analysis",
"rule-of-thumb",
"manova"
]
|
5927 | 1 | null | null | 2 | 555 | Are there meaningful ways to quantify how "flat" the log likelihood function is around the MLE when the parameter has more than one dimension? In particular is the determinant of the Hessian a reasonable measure?
| A measure of the "flatness" of log likelihood at the MLE | CC BY-SA 2.5 | null | 2011-01-03T18:01:37.370 | 2011-01-03T18:34:48.363 | null | null | 1004 | [
"maximum-likelihood"
]
|
5929 | 2 | null | 5927 | 4 | null | You might find that the [Fisher Information](http://en.wikipedia.org/wiki/Fisher_information) has some properties you like.
It's the expectation that gives Fisher Information its interpretation as the 'informativeness' of a measurement. But if you're just looking for something geometrically descriptive then your suggestion seems like a reasonable one to me.
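For what it's worth, here is a small sketch in R (my own toy normal model, not from the question) showing one way to get at this numerically: the observed information is the negative Hessian of the log likelihood at the MLE, which `optim` can return, and its determinant is the summary the question proposes.
```
set.seed(1)
x <- rnorm(200, mean = 2, sd = 3)
# negative log likelihood of a normal model, sd parameterised on the log scale
negll <- function(p) -sum(dnorm(x, mean = p[1], sd = exp(p[2]), log = TRUE))
fit <- optim(c(0, 0), negll, hessian = TRUE)
fit$hessian        # observed information matrix at the MLE
det(fit$hessian)   # larger determinant => more sharply curved (less flat) likelihood
```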
| null | CC BY-SA 2.5 | null | 2011-01-03T18:25:09.920 | 2011-01-03T18:34:48.363 | 2011-01-03T18:34:48.363 | 1739 | 1739 | null |
5931 | 2 | null | 4519 | 4 | null | I may, after all this time, finally have understood the question. The data, if I'm correct, are a set of tuples $(i, j, y(i,j))$ where $i$ is one player, $j \ne i$ is another player, and $y(i,j)$ is the number of attacks of $i$ on $j$. In this notation the objective is to relate $y(i,j)$ to $y(j,i)$. There are some natural ways to do this, including:
- Analyze the data set $\{(y(i,j), y(j,i))\}$ by means of a scatterplot or PCA (to find the principal eigenvalue). Note this is not a regression situation because both components of each ordered pair are observed: neither can be considered under the control of an experimenter nor observed without error. It is this scatterplot, I believe, that appears triangular. This already suggests that any attempt to describe it succinctly, such as by means of a principal direction, is doomed.
- Model $y(i,j)$ in terms of characteristics of $i$ and $j$. This is a classic regression situation. The solution provides an indirect, but possibly powerful, way to relate $y(i,j)$ to $y(j,i)$.
In this case, also consider re-expressing the data in terms of relative numbers of attacks. That is, instead of using $y(i,j)$ use $x(i,j) = y(i,j)/\sum_{j}{y(i,j)}$.
| null | CC BY-SA 2.5 | null | 2011-01-03T19:53:37.627 | 2011-01-03T19:53:37.627 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
5932 | 2 | null | 4844 | 6 | null | First, it's almost impossible to drive a car "randomly." Did you periodically consult a random number generator to determine what direction to head in next? I don't think so. This calls into question the use of any statistical procedure that assumes randomness (even if it isn't simple and could lead to dependence). "Arbitrariness" and "randomness" are fundamentally different: it's important not to confuse the two.
Second, a confidence interval applies to inferences about a population or process. In this case it seems to have something to do with temperature, but temperature where? When? Unless this is clearly defined you can develop no valid information relating your data to a target of inference.
Third, independence is not absolutely necessary. This problem sounds similar to standard environmental and [ecological sampling](http://findarticles.com/p/articles/mi_m2120/is_n4_v76/ai_17087662/) plans where something is observed along a transect: animals are counted from an airplane, water temperature is logged from a boat, and so on. When randomness is incorporated in the sampling plan ("design based"), you can use various [probability sampling](http://projecteuclid.org/DPubS/Repository/1.0/Disseminate?view=body&id=pdf_1&handle=euclid.lnms/1215458846) estimators and their confidence intervals. Otherwise, you need to use [geostatistical](https://en.wikipedia.org/wiki/Geostatistics) methods ("model based"). These view the temperatures as samples of a spatially continuous field of temperatures. They attempt to deduce some characteristics of this field from the sampled temperatures, characteristics such as the type, extent, and direction of spatial correlation. From this various inferences can be made about temperatures in unsampled locations. Geostatistical methods exist even for these binary (thresholded) data, where they are known as "[indicator Kriging](https://web.archive.org/web/20080413104327/http://www.spatialanalysisonline.com:80/output/html/IndicatorKriging.html)." Such analyses intimately involve the relative geometries of the sample locations and the field to be estimated. Thus, in this case, an accurate record of the car's location at each sample is needed.
| null | CC BY-SA 4.0 | null | 2011-01-03T20:11:46.050 | 2022-08-18T20:59:10.793 | 2022-08-18T20:59:10.793 | 79696 | 919 | null |
5933 | 2 | null | 5893 | 4 | null | The last chapter of Freakonomics (2005, Steven D. Levitt and Stephen J. Dubner) has a fascinating discussion about names, particularly as they relate to socio-economic status and race.
They have a list of first names that might or might not correlate well with FB's analysis of last names. They also describe how name choice is changing diachronically (across time).
Who knows -- the parents' choice of name might be more accurate than what people report on the census.
| null | CC BY-SA 2.5 | null | 2011-01-03T20:34:28.793 | 2011-01-03T20:34:28.793 | null | null | 2591 | null |
5934 | 1 | 5942 | null | 4 | 25090 | Over the holidays I played a dice game where each player had 3 to 7 d6 to roll each turn. The game gave certain advantages to doubles and triples. I wanted to know the odds of rolling doubles or triples given N dice to understand the importance of "upgrading" to more dice.
My question is really about how to reason about such problems: It's easy to see that the odds of rolling doubles with 2 d6 are 1/6 (first die can be anything, and 1/6 of the time the second will match). It's also easy to see that with 7 dice the odds are 1.0 (6 dice could be 1..6, but the 7th must match one of them). The middle cases are fuzzier to me.
I ended up writing a program to enumerate all of the possibilities (even 6^7 is only 279,936 cases to evaluate) because that was the only way I felt I could verify any closed formula I came up with. This also enabled me to distinguish cases of multiple doubles (or triples). I would like to know how to derive closed forms and how to validate such solutions without brute-force simulation.
| Odds of rolling doubles or triples given N 6-sided dice | CC BY-SA 2.5 | null | 2011-01-03T21:54:58.037 | 2011-01-04T01:04:18.533 | null | null | 2622 | [
"probability",
"games",
"dice"
]
|
5935 | 1 | 5936 | null | 41 | 44088 | I am analyzing an experimental data set. The data consists of a paired vector of treatment type and a binomial outcome:
```
Treatment Outcome
A 1
B 0
C 0
D 1
A 0
...
```
In the outcome column, 1 denotes a success and 0 denotes a failure. I'd like to figure out if the treatment significantly varies the outcome. There are 4 different treatments with each experiment repeated a large number of times (2000 for each treatment).
My question is, can I analyze the binary outcome using ANOVA? Or should I be using a chi-square test to check the binomial data? It seems like chi-square assumes the proportion would be evenly split, which isn't the case. Another idea would be to summarize the data using the proportion of successes versus failures for each treatment and then to use a proportion test.
I'm curious to hear your recommendations for tests that make sense for these sorts of binomial success/failure experiments.
| ANOVA on binomial data | CC BY-SA 2.5 | null | 2011-01-03T22:04:01.343 | 2020-02-09T15:23:30.847 | 2020-02-09T15:23:30.847 | 11887 | 2624 | [
"logistic",
"anova",
"data-transformation",
"binomial-distribution",
"experiment-design"
]
|
5936 | 2 | null | 5935 | 22 | null | No to ANOVA, which assumes a normally distributed outcome variable (among other things). There are "old school" transformations to consider, but I would prefer logistic regression (equivalent to a chi square when there is only one independent variable, as in your case). The advantage of using logistic regression over a chi square test is that you can easily use a linear contrast to compare specific levels of the treatment if you find a significant result to the overall test (type 3). For example A versus B, B versus C etc.
Update: Added for clarity:
Using the data at hand (the postdoc data set from [Allison](http://ftp.sas.com/samples/A55770)) and the variable cits as follows, this was my point:
```
postdocData$citsBin <- ifelse(postdocData$cits>2, 3, postdocData$cits)
postdocData$citsBin <- as.factor(postdocData$citsBin)
ordered(postdocData$citsBin, levels=c("0", "1", "2", "3"))
contrasts(postdocData$citsBin) <- contr.treatment(4, base=4) # set 4th level as reference
contrasts(postdocData$citsBin)
# 1 2 3
# 0 1 0 0
# 1 0 1 0
# 2 0 0 1
# 3 0 0 0
# fit the univariate logistic regression model
model.1 <- glm(pdoc~citsBin, data=postdocData, family=binomial(link="logit"))
library(car) # John Fox package
car::Anova(model.1, test="LR", type="III") # type 3 analysis (SAS verbiage)
# Response: pdoc
# LR Chisq Df Pr(>Chisq)
# citsBin 1.7977 3 0.6154
chisq.test(table(postdocData$citsBin, postdocData$pdoc))
# X-squared = 1.7957, df = 3, p-value = 0.6159
# then can test differences in levels, such as: contrast cits=0 minus cits=1 = 0
# Ho: Beta_1 - Beta_2 = 0
cVec <- c(0,1,-1,0)
car::linearHypothesis(model.1, cVec, verbose=TRUE)
```
| null | CC BY-SA 3.0 | null | 2011-01-03T22:21:28.357 | 2012-06-26T17:23:42.433 | 2012-06-26T17:23:42.433 | 7290 | 2040 | null |
5937 | 1 | 5940 | null | 44 | 38113 | When teaching an introductory level class, the teachers I know tend to invent some numbers and a story in order to exemplify the method they are teaching.
What I would prefer is to tell a real story with real numbers. However, these stories need to relate to a very tiny dataset, which enables manual calculations.
Any suggestions for such datasets will be very welcomed.
Some sample topics for the tiny datasets:
- correlation/regression (basic)
- ANOVA (1/2 ways)
- z/t tests - one/two un/paired samples
- comparisons of proportions - two/multi way tables
| Tiny (real) datasets for giving examples in class? | CC BY-SA 3.0 | null | 2011-01-03T22:23:41.990 | 2015-11-17T21:10:43.313 | 2015-11-17T21:10:43.313 | 22468 | 253 | [
"dataset",
"references",
"teaching"
]
|
5938 | 2 | null | 5937 | 13 | null | For two-way tables, I like the data on gender and survival of the titanic passengers:
```
| Alive Dead | Total
-------+-------------+------
Female | 308 154 | 462
Male | 142 709 | 851
-------+-------------+------
Total | 450 863 | 1313
```
With this data, one can discuss things like the chi-square test for independence and measure of assocation, such as the relative rate and the odds ratio. For example, female passengers were ~4 times more likely to survive than male passengers. At the same time, male passengers were ~2.5 times more likely to die than female passengers. The odds ratio for survival/dying is always 10 though.
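As a quick illustration, the test and the measures of association can be checked in R directly from the counts in the table above (a small sketch using base R only):
```
titanic <- matrix(c(308, 142, 154, 709), nrow = 2,
                  dimnames = list(Sex = c("Female", "Male"),
                                  Status = c("Alive", "Dead")))
chisq.test(titanic)         # test of independence
(308 / 462) / (142 / 851)   # relative chance of survival, roughly 4
(709 / 851) / (154 / 462)   # relative chance of dying, roughly 2.5
(308 * 709) / (154 * 142)   # odds ratio, roughly 10
```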
| null | CC BY-SA 2.5 | null | 2011-01-03T22:58:26.610 | 2011-01-03T22:58:26.610 | null | null | 1934 | null |
5939 | 2 | null | 1164 | 9 | null | Wooldridge "Introductory Econometrics - A Modern Approach" 2E p.261.
If Heteroskedasticity-robust standard errors are valid more often than the usual OLS standard errors, why do we bother with the usual standard errors at all?...One reason they are still used in cross sectional work is that, if the homoskedasticity assumption holds and the errors are normally distributed, then the usual t-statistics have exact t distributions, regardless of the sample size. The robust standard errors and robust t statistics are justified only as the sample size becomes large. With small sample sizes, the robust t statistics can have distributions that are not very close to the t distribution, and that could throw off our inference. In large sample sizes, we can make a case for always reporting only the Heteroskedasticity-robust standard errors in cross-sectional applications, and this practice is being followed more and more in applied work.
| null | CC BY-SA 2.5 | null | 2011-01-03T23:00:01.527 | 2011-01-03T23:00:01.527 | null | null | null | null |
5940 | 2 | null | 5937 | 27 | null | The [data and story library](http://lib.stat.cmu.edu/DASL/) is an " online library of datafiles and stories that illustrate the use of basic statistics methods".
This site seems to have what you need, and you can search it for particular data sets.
| null | CC BY-SA 3.0 | null | 2011-01-03T23:03:39.257 | 2011-10-28T19:49:33.923 | 2011-10-28T19:49:33.923 | 1381 | 1381 | null |
5941 | 2 | null | 5937 | 9 | null | The [Journal of Statistical Education](http://www.amstat.org/publications/jse/jse_data_archive.htm) has an archive of educational data sets.
| null | CC BY-SA 2.5 | null | 2011-01-03T23:07:43.137 | 2011-01-03T23:07:43.137 | null | null | 1381 | null |
5942 | 2 | null | 5934 | 6 | null | You are looking at a much more manageable version of the famous [birthday problem](http://en.wikipedia.org/wiki/Birthday_problem), which has 365 choices for each "die". The link describes the solution as well. In short, the trick for counting duplicates is rather to count the cases without a duplicate - that is much simpler.
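A minimal sketch of that complement trick in R, for N six-sided dice:
```
# P(at least one repeated face) = 1 - P(all N faces distinct)
p.repeat <- function(n) 1 - prod(6:(6 - n + 1)) / 6^n
sapply(2:7, p.repeat)
# 2 dice: 1/6 ... 7 dice: 1 (with only six faces, a repeat is forced)
```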
| null | CC BY-SA 2.5 | null | 2011-01-03T23:12:30.220 | 2011-01-03T23:12:30.220 | null | null | 279 | null |
5943 | 2 | null | 5937 | 23 | null | There's a book called "A Handbook of Small Datasets" by D.J. Hand, F. Daly, A.D. Lunn, K.J. McConway and E. Ostrowski. The Statistics department at NCSU have electronically posted the datasets from this book [here](http://www.stat.ncsu.edu/sas/sicl/data/).
The website above gives only the data; you would need to read the book to get the story behind the numbers, that is, any story beyond what you can glean from the data set's title. But, they are small, and they are real.
| null | CC BY-SA 2.5 | null | 2011-01-03T23:15:30.717 | 2011-01-03T23:15:30.717 | null | null | null | null |
5944 | 2 | null | 5935 | 3 | null | I would like to differ from what you think about the chi-squared test. It is applicable even if the data are not binomial. It's based on the asymptotic normality of the MLE (in most of the cases).
I would do a logistic regression like this:
$$\log \frac {\hat{\pi}} {1-\hat{\pi}} = \beta_0 + \beta_1 \times D_1 + \beta_2 \times D_2$$
where
$D_1$ and $D_2$ are dummy variables: $D_1 = D_2 = 0 \implies A$; $D_1 = 1, D_2 = 0 \implies B$; $D_1 = 1, D_2 = 1 \implies C$.
$$H_o : \beta_0 = \beta_1 = \beta_2 = 0$$
is the ANOVA-equivalent test of whether there is any relationship or not.
$$H_o : \beta_0 = 0$$
tests whether A has some effect.
$$H_o : \beta_1 - \beta_0 = 0$$
tests whether B has some effect.
$$H_o : \beta_2 - (\frac {\beta_0+\beta_1} {2}) = 0$$
tests whether C has some effect.
Now you can do further contrasts to find out what you are interested in. It is still a chi-squared test, but with different degrees of freedom (3, 1, 1, and 1, respectively).
| null | CC BY-SA 2.5 | null | 2011-01-03T23:27:03.747 | 2011-01-03T23:40:57.383 | 2011-01-03T23:40:57.383 | 1307 | 1307 | null |
5945 | 1 | 5950 | null | 13 | 6363 | SVMs for classification make intuitive sense to me: I understand how minimizing $||\theta||^2$ yields the maximum margin. However, I don't understand that objective in the context of regression. Various texts ([here](http://kernelsvm.tripod.com/) and [here](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.99.2073&rep=rep1&type=pdf)) describe this as maximizing "flatness." Why would we want to do that? What in regression is the equivalent to the concept of "margin"?
[Here](http://agbs.kyb.tuebingen.mpg.de/km/bb/showthread.php?tid=991) are a few attempted answers, but none that really helped my understanding.
| Understanding SVM regression: objective function and "flatness" | CC BY-SA 2.5 | null | 2011-01-03T23:31:11.083 | 2019-08-09T08:52:20.067 | null | null | 1720 | [
"regression",
"svm"
]
|
5946 | 2 | null | 5680 | 34 | null | Like other parametric tests, the analysis of variance assumes that the data fit the normal distribution. If your measurement variable is not normally distributed, you may be increasing your chance of a false positive result if you analyze the data with an anova or other test that assumes normality. Fortunately, an anova is not very sensitive to moderate deviations from normality; simulation studies, using a variety of non-normal distributions, have shown that the false positive rate is not affected very much by this violation of the assumption (Glass et al. 1972, Harwell et al. 1992, Lix et al. 1996). This is because when you take a large number of random samples from a population, the means of those samples are approximately normally distributed even when the population is not normal.
It is possible to test the goodness-of-fit of a data set to the normal distribution. I do not suggest that you do this, because many data sets that are significantly non-normal would be perfectly appropriate for an anova.
Instead, if you have a large enough data set, I suggest you just look at the frequency histogram. If it looks more-or-less normal, go ahead and perform an anova. If it looks like a normal distribution that has been pushed to one side, like the sulphate data above, you should try different data transformations and see if any of them make the histogram look more normal. If that doesn't work, and the data still look severely non-normal, it's probably still okay to analyze the data using an anova. However, you may want to analyze it using a non-parametric test. Just about every parametric statistical test has a non-parametric substitute, such as the Kruskal–Wallis test instead of a one-way anova, Wilcoxon signed-rank test instead of a paired t-test, and Spearman rank correlation instead of linear regression. These non-parametric tests do not assume that the data fit the normal distribution. They do assume that the data in different groups have the same distribution as each other, however; if different groups have different shaped distributions (for example, one is skewed to the left, another is skewed to the right), a non-parametric test may not be any better than a parametric one.
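As a small illustration of that robustness (my own quick simulation, not taken from the studies cited below), the type I error rate of a one-way anova stays close to the nominal 5% even when every group is drawn from a strongly skewed exponential distribution:
```
set.seed(1)
pvals <- replicate(5000, {
  g <- gl(3, 30)                    # three groups of 30
  y <- rexp(90)                     # same skewed distribution in every group
  anova(lm(y ~ g))[1, "Pr(>F)"]
})
mean(pvals < 0.05)                  # close to the nominal 0.05
```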
References
- Glass, G.V., P.D. Peckham, and J.R. Sanders. 1972. Consequences of failure to meet assumptions underlying fixed effects analyses of variance and covariance. Rev. Educ. Res. 42: 237-288.
- Harwell, M.R., E.N. Rubinstein, W.S. Hayes, and C.C. Olds. 1992. Summarizing Monte Carlo results in methodological research: the one- and two-factor fixed effects ANOVA cases. J. Educ. Stat. 17: 315-339.
- Lix, L.M., J.C. Keselman, and H.J. Keselman. 1996. Consequences of assumption violations revisited: A quantitative review of alternatives to the one-way analysis of variance F test. Rev. Educ. Res. 66: 579-619.
| null | CC BY-SA 3.0 | null | 2011-01-04T00:00:42.387 | 2013-02-26T10:54:57.397 | 2013-02-26T10:54:57.397 | 2669 | null | null |
5947 | 2 | null | 5934 | 5 | null | You're really asking for instruction in [combinatorics](http://en.wikipedia.org/wiki/Combinatorics), which is a vast field. For this particular problem, a good method, if somewhat abstract, is to use [generating functions](http://en.wikipedia.org/wiki/Probability-generating_function). The probability generating function (pgf) for a d$n$, where $n = 1, 2, \ldots$, is obtained by averaging $n$ variables. Their names don't matter, so let's call them $x_1, x_2, \ldots, x_n$. The pgf for $k$ rolls is just the $k^{\text{th}}$ power of the pgf for 1 roll. So, if you expand
$$\left(\frac{x_1 + x_2 + \cdots + x_n}{n}\right)^k$$
you can read the answer to any question by inspecting the coefficients.
For example, to find the probability of rolling a full house (three of one number, two of another) in Yahtzee ($n=6, k=5$) look at all the terms in $((x_1 + \cdots + x_6)/6)^5$ involving a third power and a second power. There are 30 of them (one for each ordered choice of which value appears three times and which appears twice), each with the coefficient 5/3888, and each counts distinctly different outcomes, so the chance of a full house is the sum of these chances, equal to 30*5/3888 = 25/648.
It's straightforward to write code that multiplies and adds polynomials of this sort. The number of calculations for seven rolls of your d6 is just a few thousand, far smaller than the hundreds of thousands needed to enumerate and tally all possible combinations.
Mathematical analysis of generating functions often gives closed-form formulas for the coefficients. Many examples are available; you can check some of them out on the [math site](https://math.stackexchange.com/questions/tagged/generating-functions).
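Here is a minimal sketch of that bookkeeping in R (base R only, my own illustration): enumerate the exponent vectors in the expansion of $((x_1 + \cdots + x_6)/6)^5$, attach the multinomial coefficients, and sum the terms matching the full-house pattern.
```
k <- 5; faces <- 6
# all exponent vectors (e1,...,e6) with e1+...+e6 = k
counts <- as.matrix(expand.grid(rep(list(0:k), faces)))
counts <- counts[rowSums(counts) == k, ]
# each term's probability: multinomial coefficient divided by 6^k
probs <- apply(counts, 1, function(e) factorial(k) / prod(factorial(e))) / faces^k
# full house: exactly one exponent equal to 3 and one equal to 2
is.fh <- apply(counts, 1, function(e) { nz <- sort(e[e > 0]); length(nz) == 2 && all(nz == c(2, 3)) })
sum(is.fh)          # 30 qualifying terms
sum(probs[is.fh])   # 25/648, about 0.0386
```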
| null | CC BY-SA 2.5 | null | 2011-01-04T00:35:45.310 | 2011-01-04T00:35:45.310 | 2017-04-13T12:19:38.800 | -1 | 919 | null |
5948 | 2 | null | 5934 | 6 | null | You have posed the kind of combinatorics question that profs will ask you as an undergrad.
Let's reduce the problem to doubles of two dice. Instead of asking about doubles, let's ask how not to roll doubles. There are 6 ways to roll the first die and 5 ways (excluding the first die's roll) to roll the second die. Because we are not rolling doubles, we could pair a 1 like this:
```
1 2
1 3
1 4
1 5
1 6
```
You could continue this pattern for 6 x 5 or 30 non-doubles. There are 6 * 6 = 36 ways to roll a pair of dice, so 36 - (6 * 5) gives you 6 ways to roll doubles.
That was trivial, but let's build up to 3 dice.
How many ways are there to not roll 3 dice that are the same? The answer is 6 * 5 * 4 = 120, because there are 6 ways to roll the first die, 5 ways to roll the second (and not duplicate the first), and 4 ways to roll the third die (and not duplicate the first or the second).
Here are the ways to not roll a double or triple with three dice, when the first die is a 1.
```
1 2 3
1 2 4
1 2 5
1 2 6
1 3 2
1 3 4
1 3 5
1 3 6
1 4 2
1 4 3
1 4 5
1 4 6
1 5 2
1 5 3
1 5 4
1 5 6
1 6 2
1 6 3
1 6 4
1 6 5
```
There are 6 * 6 * 6 = 216 ways to roll three dice, and 6 * 5 * 4 ways to roll non-duplicates, so there are 216 - 120 = 96 ways to roll doubles OR triples.
| null | CC BY-SA 2.5 | null | 2011-01-04T01:04:18.533 | 2011-01-04T01:04:18.533 | null | null | 2591 | null |
5949 | 2 | null | 5937 | 4 | null | Probably such an obvious answer that it does not really need to be mentioned, but for correlation or linear regression [Anscombe's quartet](http://en.wikipedia.org/wiki/Anscombe%27s_quartet) is a logical choice. Although it is not a real story with real data I think it is such a simple example it would reasonably fit into your criteria.
| null | CC BY-SA 2.5 | null | 2011-01-04T04:04:56.267 | 2011-01-04T04:04:56.267 | null | null | 1036 | null |
5950 | 2 | null | 5945 | 11 | null | One way that I think about the flatness is that it makes my predictions less sensitive to perturbations in the features. That is, if I am constructing a model of the form
$$y = x^\top \theta + \epsilon,$$
where my feature vector $x$ has already been normalized, then smaller values in $\theta$ mean my model is less sensitive to errors in measurement/random shocks/non-stationarity of the features, $x$. Given two models (i.e. two possible values of $\theta$) which explain the data equally well, I prefer the 'flatter' one.
You can also think of Ridge Regression as performing the same thing without the kernel trick or the SVM 'tube' regression formulation.
edit: In response to @Yang's comments, some more explanation:
- Consider the linear case: $y = x^\top \theta + \epsilon$. Suppose the $x$ are drawn i.i.d. from some distribution, independent of $\theta$. By the dot product identity, we have $y = ||x|| ||\theta|| \cos\psi + \epsilon$, where $\psi$ is the angle between $\theta$ and $x$, which is probably distributed under some spherically uniform distribution. Now note: the 'spread' (e.g. the sample standard deviation) of our predictions of $y$ is proportional to $||\theta||$. To get good MSE with the latent, noiseless versions of our observations, we want to shrink that $||\theta||$. c.f. James Stein estimator.
- Consider the linear case with lots of features. Consider the models $y = x^\top \theta_1 + \epsilon$, and $y = x^\top \theta_2 + \epsilon$. If $\theta_1$ has more zero elements in it than $\theta_2$, but about the same explanatory power, we would prefer it, based on Occam's razor, since it has dependencies on fewer variables (i.e. we have 'done feature selection' by setting some elements of $\theta_1$ to zero). Flatness is kind of a continuous version of this argument. If each marginal of $x$ has unit standard deviation, and $\theta_1$ has e.g. 2 elements which are 10, and the remaining $n-2$ are smaller than 0.0001, depending on your tolerance of noise, this is effectively 'selecting' the two features, and zeroing out the remaining ones.
- When the kernel trick is employed, you are performing a linear regression in a high (sometimes infinite) dimensional vector space. Each element of $\theta$ now corresponds to one of your samples, not your features. If $k$ elements of $\theta$ are non-zero, and the remaining $m-k$ are zero, the features corresponding to the $k$ non-zero elements of $\theta$ are called your 'support vectors'. To store your SVM model, say on disk, you need only keep those $k$ feature vectors, and you can throw the rest of them away. Now flatness really matters, because having $k$ small reduces storage and transmission, etc, requirements. Again, depending on your tolerance for noise, you can probably zero out all elements of $\theta$ but the $l$ largest, for some $l$, after performing an SVM regression. Flatness here is equivalent to parsimony with respect to the number of support vectors.
| null | CC BY-SA 2.5 | null | 2011-01-04T05:22:41.427 | 2011-01-14T17:48:48.753 | 2011-01-14T17:48:48.753 | 795 | 795 | null |
5951 | 2 | null | 5922 | 3 | null | I'm not sure about the 'domain of a function' part, but [Hausdorff Dimension](http://en.wikipedia.org/wiki/Hausdorff_dimension) seems to answer this question. It has the odd property of agreeing with simple examples (e.g. the circle has Hausdorff Dimension 1), but of giving non-integral results for some sets ('fractals').
| null | CC BY-SA 2.5 | null | 2011-01-04T05:28:07.677 | 2011-01-04T05:28:07.677 | null | null | 795 | null |
5952 | 1 | 5956 | null | 10 | 6213 | Is there a way of plotting the regression line of a piecewise model like this, other than using `lines` to plot each segment separately, or using `geom_smooth(aes(group=Ind), method="lm", fill=FALSE)` ?
```
m.sqft <- mean(sqft)
model <- lm(price~sqft+I((sqft-m.sqft)*Ind))
# sqft, price: continuous variables, Ind: if sqft>mean(sqft) then 1 else 0
plot(sqft,price)
abline(reg = model)
Warning message:
In abline(reg = model) :
  only using the first two of 3 regression coefficients
```
Thank you.
| Plotting a piecewise regression line | CC BY-SA 2.5 | null | 2011-01-04T08:14:55.153 | 2011-01-05T10:31:19.180 | 2011-01-05T10:31:19.180 | 339 | 339 | [
"r",
"data-visualization",
"regression"
]
|
5954 | 1 | null | null | 47 | 1507 | How can a regression model be any use if you don't know the function you are trying to get the parameters for?
I saw a piece of research that said that mothers who breast fed their children were less likely to suffer diabetes in later life. The research was from a survey of some 1000 mothers and controlled for miscellaneous factors and a loglinear model was used.
Now does this mean that they reckon all the factors that determine the likelihood of diabetes fit into a nice function (an exponential, presumably) that translates neatly into a linear model with logs, and that whether the woman breast fed turned out to be statistically significant?
I'm missing something I'm sure but, how the hell do they know the model?
| Understanding regressions - the role of the model | CC BY-SA 4.0 | null | 2011-01-04T09:29:01.153 | 2019-08-08T14:41:28.020 | 2019-08-08T14:41:28.020 | 11887 | 2629 | [
"regression",
"modeling",
"epidemiology",
"log-linear"
]
|
5955 | 2 | null | 5954 | 44 | null | It helps to view regression as a linear approximation of the true form. Suppose the true relationship is
$$y=f(x_1,...,x_k)$$
with $x_1,...,x_k$ the factors explaining $y$. Then the first-order Taylor approximation of $f$ around zero is:
$$f(x_1,...,x_k)=f(0,...,0)+\sum_{i=1}^{k}\frac{\partial f(0)}{\partial x_i}x_i+\varepsilon,$$
where $\varepsilon$ is the approximation error. Now denote $\alpha_0=f(0,...,0)$ and $\alpha_i=\frac{\partial{f}(0)}{\partial x_i}$ for $i=1,\dots,k$, and you have a regression:
$$y=\alpha_0+\alpha_1 x_1+...+\alpha_k x_k + \varepsilon$$
So although you do not know the true relationship, if $\varepsilon$ is small you get a good approximation, from which you can still deduce useful conclusions.
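To see this at work, here is a small simulated illustration in R (the nonlinear $f$ and the numbers are made up for the example):

```
# A nonlinear truth, observed near zero, is well captured by a linear regression
set.seed(42)
x1 <- rnorm(200, sd = 0.3)
x2 <- rnorm(200, sd = 0.3)
y  <- exp(0.5 * x1 - 0.3 * x2) + rnorm(200, sd = 0.05)  # true f is exp(0.5*x1 - 0.3*x2)
coef(lm(y ~ x1 + x2))
# intercept ~ f(0,0) = 1, slopes ~ the partial derivatives 0.5 and -0.3
```

The fitted coefficients approximate $f(0,\dots,0)$ and the partial derivatives, exactly as in the expansion above.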
| null | CC BY-SA 2.5 | null | 2011-01-04T09:45:18.260 | 2011-01-04T09:45:18.260 | null | null | 2116 | null |
5956 | 2 | null | 5952 | 6 | null | The only way I know how to do this easily is to predict from the model across the range of `sqft` and plot the predictions. There isn't a general way with `abline` or similar. You might also take a look at the [segmented](http://cran.r-project.org/web/packages/segmented/index.html) package which will fit these models and provide the plotting infrastructure for you.
Doing this via predictions and base graphics. First, some dummy data:
```
set.seed(1)
sqft <- runif(100)
sqft <- ifelse((tmp <- sqft > mean(sqft)), 1, 0) + rnorm(100, sd = 0.5)
price <- 2 + 2.5 * sqft
price <- ifelse(tmp, price, 0) + rnorm(100, sd = 0.6)
DF <- data.frame(sqft = sqft, price = price,
Ind = ifelse(sqft > mean(sqft), 1, 0))
rm(price, sqft)
plot(price ~ sqft, data = DF)
```
Fit the model:
```
mod <- lm(price~sqft+I((sqft-mean(sqft))*Ind), data = DF)
```
Generate some data to predict for and predict:
```
m.sqft <- with(DF, mean(sqft))
pDF <- with(DF, data.frame(sqft = seq(min(sqft), max(sqft), length = 200)))
pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0))
pDF <- within(pDF, price <- predict(mod, newdata = pDF))
```
Plot the regression lines:
```
ylim <- range(pDF$price, DF$price)
xlim <- range(pDF$sqft, DF$sqft)
plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim)
lines(price ~ sqft, data = pDF, subset = Ind > 0, col = "red", lwd = 2)
lines(price ~ sqft, data = pDF, subset = Ind < 1, col = "red", lwd = 2)
```
You could code this up into a simple function - you only need the steps in the two preceding code chunks - which you can use in place of `abline`:
```
myabline <- function(model, data, ...) {
m.sqft <- with(data, mean(sqft))
pDF <- with(data, data.frame(sqft = seq(min(sqft), max(sqft),
length = 200)))
pDF <- within(pDF, Ind <- ifelse(sqft > m.sqft, 1, 0))
pDF <- within(pDF, price <- predict(model, newdata = pDF))
lines(price ~ sqft, data = pDF, subset = Ind > 0, ...)
lines(price ~ sqft, data = pDF, subset = Ind < 1, ...)
invisible(model)
}
```
Then:
```
ylim <- range(pDF$price, DF$price)
xlim <- range(pDF$sqft, DF$sqft)
plot(price ~ sqft, data = DF, ylim = ylim, xlim = xlim)
myabline(mod, DF, col = "red", lwd = 2)
```
Via the segmented package
```
require(segmented)
mod2 <- lm(price ~ sqft, data = DF)
mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = 0.5,
control = seg.control(stop.if.error = FALSE))
plot(price ~ sqft, data = DF)
plot(mod.s, add = TRUE)
lines(mod.s, col = "red")
```
With these data it doesn't estimate the breakpoint at `mean(sqft)`, but the `plot` and `lines` methods in that package might help you implement something more generic than `myabline` to do this job for you directly from the fitted `lm()` model.
Edit: If you want segmented to estimate the location of the breakpoint, then set the `'psi'` argument to `NA`:
```
mod.s <- segmented(mod2, seg.Z = ~ sqft, psi = NA,
control = seg.control(stop.if.error = FALSE))
```
Then `segmented` will try `K = 10` quantiles of `sqft`, where `K` is set in `seg.control()` and defaults to `10`. See `?seg.control` for more.
| null | CC BY-SA 2.5 | null | 2011-01-04T09:58:53.000 | 2011-01-04T12:26:42.980 | 2011-01-04T12:26:42.980 | 1390 | 1390 | null |
5957 | 2 | null | 5954 | 15 | null | An excellent first question! I agree with mpiktas's answer, i.e. the short answer is "they don't, but they hope to have an approximation to the right model that gives approximately the right answer".
In the jargon of epidemiology, this model uncertainty is one source of what's known as 'residual confounding'. See [Steve Simon's page 'What is residual confounding?'](http://pmean.com/10/ResidualConfounding.html) for a good short description, or [Heiko Becher's 1992 paper in Statistics in Medicine](http://onlinelibrary.wiley.com/doi/10.1002/sim.4780111308/abstract) (subscription req'd) for a longer, more mathematical treatment, or [Fewell, Davey Smith & Sterne's more recent paper in the American Journal of Epidemiology](http://aje.oxfordjournals.org/content/166/6/646.abstract) (subscription req'd).
This is one reason that epidemiology of small effects is difficult and the findings often controversial - if the measured effect size is small, it's hard to rule out residual confounding or other sources of bias as the explanation.
| null | CC BY-SA 3.0 | null | 2011-01-04T10:38:37.040 | 2012-07-01T11:44:17.527 | 2012-07-01T11:44:17.527 | 449 | 449 | null |
5958 | 2 | null | 5954 | 18 | null | The other side of the answer, complementary to mpiktas's answer but not mentioned so far, is:
"They don't, but as soon as they assume some model structure, they can check it against the data".
The two basic things that could go wrong are: the form of the function, e.g. it's not even linear in logs. So you'd start by plotting an appropriate residual against the expected values. Or the choice of conditional distribution, e.g. the observed counts are overdispersed relative to Poisson. So you'd test against a Negative Binomial version of the same model, or see if extra covariates account for the extra variation.
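For instance, a rough sketch in R of those two checks on simulated count data (the data-generating model here is invented purely for illustration):

```
# Sketch: check the functional form and the Poisson vs. negative binomial choice
library(MASS)
set.seed(1)
x <- runif(200)
y <- rnbinom(200, mu = exp(1 + 2 * x), size = 2)   # counts more variable than Poisson
m.pois <- glm(y ~ x, family = poisson)
plot(fitted(m.pois), residuals(m.pois, type = "deviance"))  # look for structure
m.nb <- glm.nb(y ~ x)                               # negative binomial alternative
AIC(m.pois, m.nb)                                   # the NB model fits these data much better
```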
You'd also want to check for outliers, influential observations, and a host of other things. A reasonable place to read about checking these kinds of model problems is ch.5 of Cameron and Trivedi 1998. (There is surely a better place for epidemiologically oriented researchers to start - perhaps other folk can suggest it.)
If these diagnostics indicated the model failed to fit the data, you'd change the relevant aspect of the model and start the whole process again.
| null | CC BY-SA 2.5 | null | 2011-01-04T12:06:35.013 | 2011-01-04T12:06:35.013 | null | null | 1739 | null |
5959 | 2 | null | 5854 | 12 | null | You might also want to have a look at [beanplots](http://www.jstatsoft.org/v28/c01/paper).

[[Source](http://jnlnet.wordpress.com/2008/11/30/beanplots/)]
Implemented in [R package](http://cran.r-project.org/web/packages/beanplot/index.html) by Peter Kampstra.
| null | CC BY-SA 2.5 | null | 2011-01-04T12:37:08.117 | 2011-01-04T12:37:08.117 | null | null | 22 | null |
5960 | 1 | 5966 | null | 58 | 46833 | I understand that once we plot the values as a chart, we can identify a bimodal distribution by observing the twin-peaks, but how does one find it programmatically? (I am looking for an algorithm.)
| How to identify a bimodal distribution? | CC BY-SA 3.0 | null | 2011-01-04T13:03:59.157 | 2022-08-03T13:26:32.640 | 2011-12-21T17:31:11.937 | 930 | 2535 | [
"distributions"
]
|
5961 | 2 | null | 5913 | 2 | null | I'm working with the OP on the benchmarking project and wanted to thank you all for clearing some things up. Also I wanted to provide a bit more information in case that affects the advice.
The sample size ranges from 5 to 700+ (as many as can be completed in 8 seconds, or until the margin of error is at or below 1%). The critical values are pulled from an object for simplicity (because other calculations determine the degrees of freedom as sample size minus 1).
```
/**
* T-Distribution two-tailed critical values for 95% confidence
* http://www.itl.nist.gov/div898/handbook/eda/section3/eda3672.htm
*/
T_DISTRIBUTION = {
'1': 12.706, '2': 4.303, '3': 3.182, '4': 2.776 /* , ... */
}
```
---
## Update
I checked and the difference between variances seems rather high.
```
Variances:
4,474,400,141.236059
3,032,977,106.8208385
226,854,226,665.14194
24,612,581.169126578
```
We are testing the operations per second of various code snippets (some are slower, lower ops/sec, others faster, higher ops/sec).
Also, we used to simply compare the overlap between each mean `+-` its margin of error, but it was suggested that a t-test is better because it can find results with statistical significance.
| null | CC BY-SA 2.5 | null | 2011-01-04T13:04:57.233 | 2011-01-04T18:48:40.033 | 2020-06-11T14:32:37.003 | -1 | 2634 | null |
5962 | 1 | 5992 | null | 12 | 1696 | I have 12 positive training sets (cancer cells treated with drugs with each of 12 different mechanisms of action). For each of these positive training sets, I would like to train a support-vector machine to distinguish it from a negative set of equal size sampled from the experiment. Each set has between 1000 and 6000 cells, and there are 476 features (image features) of each cell, each scaled linearly to [0, 1].
I use [LIBSVM](http://www.csie.ntu.edu.tw/~cjlin/libsvm/) and the Gaussian RBF kernel. Using five-fold cross-validation, I have done a grid search for log₂ C ∈ [-5, 15] and log₂ ɣ ∈ [-15, 3]. The results are as follows:

I was disappointed that there is not a single set of parameters that give high accuracies for all 12 classification problems. I was also surprised that the grids do not generally show a high-accuracy region surrounded by lower accuracies. Does this just mean that I need to expand the search parameter space, or is the grid search an indication that something else is wrong?
| Should an SVM grid search show a high-accuracy region with low accuracies around? | CC BY-SA 2.5 | null | 2011-01-04T14:06:34.460 | 2011-01-05T00:57:32.817 | null | null | 220 | [
"svm"
]
|
5963 | 2 | null | 5960 | 0 | null | The definition in the wiki is slightly confusing to me. The probability of a continuous data set having just one mode is zero. A simple way to program a bimodal distribution is with two separate normal distributions centered differently. This creates two peaks, or what the wiki calls modes. You can actually use almost any two distributions, but one of the harder statistical problems is to recover how the data set was formed after combining the two random data distributions.
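For example, a quick R sketch of such a mixture (the component means are arbitrary):

```
# Simulate a bimodal sample by mixing two normals with different centres
set.seed(1)
n <- 1000
component <- rbinom(n, 1, 0.5)          # which component each draw comes from
x <- rnorm(n, mean = 4 * component)     # N(0,1) half the time, N(4,1) the other half
hist(x, breaks = 40)                    # shows the two peaks ("modes")
```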
| null | CC BY-SA 2.5 | null | 2011-01-04T14:58:43.317 | 2011-01-04T14:58:43.317 | null | null | 2539 | null |
5964 | 2 | null | 5954 | 13 | null | There is the famous quote "Essentially, all models are wrong, but some are useful" of [George Box](http://en.wikiquote.org/wiki/George_Box). When fitting models like this, we try to (or should) think about the data generation process and the physical, real world, relationships between the response and covariates. We try to express these relationships in a model that fits the data. Or to put it another way, is consistent with the data. As such an empirical model is produced.
Whether it is useful or not is determined later - does it give good, reliable predictions, for example, for women not used to fit the model? Are the model coefficients interpretable and of scientific use? Are the effect sizes meaningful?
| null | CC BY-SA 2.5 | null | 2011-01-04T15:34:50.540 | 2011-01-04T15:34:50.540 | null | null | 1390 | null |
5966 | 2 | null | 5960 | 34 | null | Identifying a mode for a continuous distribution requires smoothing or binning the data.
Binning is typically too procrustean: the results often depend on where you place the bin cutpoints.
[Kernel smoothing](http://en.wikipedia.org/wiki/Kernel_smoother) (specifically, in the form of [kernel density estimation](http://en.wikipedia.org/wiki/Kernel_density_estimation)) is a good choice. Although many kernel shapes are possible, typically the result does not depend much on the shape. It depends on the kernel bandwidth. Thus, people either use an [adaptive kernel smooth](http://www.math.nyu.edu/mfdd/riedel.save/cip3.pdf) or conduct a sequence of kernel smooths for varying fixed bandwidths in order to check the stability of the modes that are identified. Although using an adaptive or "optimal" smoother is attractive, be aware that most (all?) of these are designed to achieve a balance between precision and average accuracy: they are not designed to optimize estimation of the location of modes.
As far as implementation goes, kernel smoothers locally shift and scale a predetermined function to fit the data. Provided that this basic function is differentiable--Gaussians are a good choice because you can differentiate them as many times as you like--then all you have to do is replace it by its derivative to obtain the derivative of the smooth. Then it's simply a matter of applying a standard zero-finding procedure to detect and test the critical points. ([Brent's method](http://en.wikipedia.org/wiki/Brent%27s_method) works well.) Of course you can do the same trick with the second derivative to get a quick test of whether any critical point is a local maximum--that is, a mode.
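As a crude illustration of that idea, here is a minimal R sketch that uses `density()` and a simple sign-change test in place of the derivative-based root finding just described (it is a discrete stand-in, not the code at the link below):

```
set.seed(1)
x <- c(rnorm(300), rnorm(300, mean = 4))          # bimodal example data
d <- density(x, bw = "SJ")                        # Gaussian kernel smooth
peaks <- which(diff(sign(diff(d$y))) == -2) + 1   # local maxima of the estimate
d$x[peaks]                                        # approximate mode locations
```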
For more details and working code (in `R`) please see [https://stats.stackexchange.com/a/428083/919](https://stats.stackexchange.com/a/428083/919).
| null | CC BY-SA 4.0 | null | 2011-01-04T16:44:31.697 | 2022-08-03T13:26:32.640 | 2022-08-03T13:26:32.640 | 919 | 919 | null |
5967 | 1 | 6002 | null | 5 | 2341 | We are working on a multivariate linear regression model. Our objective is to forecast the quarterly % growth in mortgage loans outstanding.
The independent variables are:
1) Dow Jones level.
2) % change in Dow Jones over past quarter.
3) Case Shiller housing price index.
4) % change in Case Shiller housing price index over past quarter.
In a stepwise regression process, all of the above variables were selected. And, variables 1) and 3) were surprisingly significant. But, that is when used in combination. Somehow, there is something about the difference in those two indices that does partly explain the % change in mortgage loans outstanding.
For my part, I find variables 1) and 3) problematic. This is because I believe they are heteroskedastic. And, their respective coefficients' confidence intervals are therefore unreliable. I also think they may have multicollinearity issues with their related variables based on % change. They may also cause some autocorrelation issues.
However, at first it seems some of my concerns may be overstated. After graphing the residuals of the whole model, they seem OK. They don't trend upward. So, it appears the heteroskedasticity is not an issue for the whole model. Multicollinearity is not too bad. The variable with the highest VIF is around 5, much lower than the usual threshold of 10.
Nevertheless, I am still somewhat concerned that even though the whole model seems OK; the specific regression coefficients of the mentioned variables may not be (or more specifically the related confidence intervals).
| Can you use heteroskedastic time series variables within a regression model? | CC BY-SA 2.5 | null | 2011-01-04T16:55:36.283 | 2011-01-05T12:06:14.560 | null | null | 1329 | [
"regression",
"multicollinearity",
"stepwise-regression",
"heteroscedasticity"
]
|
5968 | 2 | null | 3140 | 2 | null | ```
load fisheriris
indices = crossvalind('Kfold',species,10);
cp = classperf(species); % initializes the CP object
for i = 1:10
test = (indices == i); train = ~test;
class = knnclassify(meas(test,:),meas(train,:),species(train));
% updates the CP object with the current classification results
classperf(cp,class,test);
end
cp.CorrectRate % queries for the correct classification rate
```
| null | CC BY-SA 2.5 | null | 2011-01-04T17:01:30.713 | 2011-01-04T19:11:00.380 | 2011-01-04T19:11:00.380 | 930 | null | null |
5969 | 1 | 6001 | null | 10 | 2360 | I do quasi-experimental individual differences psychology research. I examine how people who differ in a cognitive ability (that I measure) perform on another task that always at least involves within-subject manipulations (and sometimes between-subject)–DVs are usually response time and/or accuracy. For this question I'd like to focus on response times (let's assume they are normally distributed). I then infer from the ability-task relations what it means theoretically for the cognitive ability. The nature of this work is correlational and involves repeated measures where each subject completes many task trials(mostly I’m not interested in changes over time, just the overall difference).
Researchers in my field often create categorical variables from the cognitive ability scores and compare the upper and lower quartiles of the distribution with a repeated-measures ANOVA. Because the cognitive ability is measured continuously, I am looking for an analytic strategy that treats the cognitive ability in this way. I’ve recently been investigating mixed-models, thinking that I can treat the people as a random effect grouping variable and the cognitive ability score as a random effect nested within people. I would like to examine interactions between this nested random effect (cognitive ability) and the fixed effects of the experimental treatments by doing model comparisons.
Does this seem like a reasonable analytic strategy? Am I thinking about this right? What are some other ways (the simpler, the better) that I can take advantage of repeated measures -remove experimental subject variance- while also maintaining the cognitive ability measure as a continuous measure? Any citations or examples in R are appreciated.
In a typical experiment, I would expect there to be anywhere from 1-3 categorical IVs with 2-4 levels (measured by multiple trials), and 1 continuous IV (cognitive ability). The exact nature of the categorical variables changes from study to study. The designs are fully crossed.
| How should I analyze repeated-measures individual differences experiments? | CC BY-SA 2.5 | null | 2011-01-04T17:37:50.460 | 2011-01-06T22:33:36.143 | 2011-01-05T15:21:18.433 | 2322 | 2322 | [
"repeated-measures"
]
|
5970 | 2 | null | 5680 | 2 | null | Juan has offered a lot, although I'll echo others and repeat that for best accuracy the variables themselves can be nonnormal as long as their residuals aren't. Also, a simplified and slightly more structured answer (via an annotated flow chart) is available at [yellowbrickstats.com](http://yellowbrickstats.com).
| null | CC BY-SA 3.0 | null | 2011-01-04T17:44:03.453 | 2012-03-29T23:13:14.170 | 2012-03-29T23:13:14.170 | 2669 | 2669 | null |
5971 | 1 | null | null | 1 | 353 | I'm doing some software testing where we are measuring specific latencies. Generally we run the same test several times to just eyeball the results and make sure that they are consistent across runs. I usually do this by plotting the cumulative distributions for each run together on a graph, and look for any anomalies.
What more quantitative methods exist for doing this sort of thing? I would ultimately like to be able to run my tests multiple times, and determine some sort of error bound, so I can say something like "I change variable X and the median time changed Y, but that is not significant".
Apologies for any vague handwaving above, I am not a statistician :)
| What is a good measure (or set of measures) for the difference between two sample sets? | CC BY-SA 2.5 | null | 2011-01-04T17:45:49.020 | 2011-01-04T20:03:10.713 | null | null | 2638 | [
"statistical-significance"
]
|
5972 | 1 | 5986 | null | 7 | 2330 | Let's say we have two biased coins. The probability of tossing a head on the first coin is $\alpha$ and the probability of tossing a head on the second coin is $1-\alpha$. We toss both coins $n$ times and we say that there is a success when there is a head on both coins. If we denote this random variable by $X$ then
$$X\sim B(n,\alpha-\alpha^2).$$
The question is how to properly estimate $\alpha$. The problem with a 'standard' approach ($\alpha - \alpha^2=\frac{k}{n}$ where $k$ is number of successes) is that the number of successes might be greater than $0.25n$ and in such a case we obtain that $\alpha$ is a complex number. We can treat such cases as $\alpha=0.5$, but then this estimator is not unbiased anymore.
| Estimation of probability of a success in binomial distribution | CC BY-SA 2.5 | null | 2011-01-04T17:50:44.507 | 2011-01-13T23:11:34.813 | 2011-01-13T23:11:34.813 | 919 | 1643 | [
"estimation",
"binomial-distribution",
"unbiased-estimator"
]
|
5973 | 2 | null | 5918 | 5 | null | I have been doing an extensive cross-validation analysis on a data set that cost millions to acquire, and there is no external validation set available. In this case, I performed extensive nested cross validation to ensure validity. I selected features and optimized parameters only from the respective training sets. This is computationally expensive for large data sets, but it's what I had to do to maintain validity. However, there are complications that come with it...for example, different features are selected in each training set.
So my answer is that in cases where you don't have feasible access to an external data set, this is a reasonable way to go. When you do have an external data set, you can pretty much go to town however you want on the main data set and then test once on the external data set.
| null | CC BY-SA 2.5 | null | 2011-01-04T18:25:18.910 | 2011-01-04T18:25:18.910 | null | null | 2643 | null |
5974 | 2 | null | 5971 | 1 | null | What kind of output are you getting? Are you using regression? If you're trying to measure which set is best to use, you can use an F test using the sample errors, but I'm not sure what exactly you're talking about.
| null | CC BY-SA 2.5 | null | 2011-01-04T18:26:12.340 | 2011-01-04T18:26:12.340 | null | null | 2644 | null |
5975 | 1 | null | null | 3 | 1760 | This probably seems like a really strange question, but let me try to explain what I want to do; hopefully it will make sense.
I have a data set with a couple dozen variables, such as age, level of education, self-assessed (via a Likert scale) measures of technical ability, experience, willingness to share personal information, etc etc. Using this data, I want to estimate "likelihood of participation" in using a not-yet-developed social network. Since the site doesn't exist yet, I don't have a simple dependent variable with a "1" or "0" indicating whether they've used it or not, so I can't take the usual approach with regression and estimate what impact each variable had on their participation.
Is there some way to "create" a dependent variable that models "likely vs. not likely" (or something like that), and do the regression against that? I'm really lost ... would be open to any ideas.
| Regression with an unknown dependent variable - estimating "likelihood" to do something | CC BY-SA 4.0 | null | 2011-01-04T18:26:50.577 | 2019-07-25T10:13:07.933 | 2019-07-25T10:13:07.933 | 11887 | 2641 | [
"regression",
"multivariate-analysis",
"correspondence-analysis"
]
|
5976 | 2 | null | 5975 | 2 | null | The simplest approach would be to do a literature search and see if someone else developed a model with your variables, and then generate predictions based on those prior models. Another possibility would be to make your own model from another data source that does have the required information, and then use that to generate predictions.
Obviously you can't do much with your data besides generate predictions with my suggestion, although that may be all you want from the context of your question.
| null | CC BY-SA 2.5 | null | 2011-01-04T18:35:41.397 | 2011-01-04T18:35:41.397 | null | null | 1036 | null |
5977 | 2 | null | 5967 | 0 | null | Well, to try and help: with heteroskedasticity present, the least squares estimators are still unbiased but no longer most efficient; they no longer have the smallest variance, violating part of the Gauss-Markov theorem. I could be wrong, but I believe your standard errors are more affected by multicollinearity than by heteroskedasticity. Multicollinearity inflates your standard errors, so t values become smaller, leading to an increased likelihood of retaining false null hypotheses.
| null | CC BY-SA 2.5 | null | 2011-01-04T18:43:15.373 | 2011-01-04T18:43:15.373 | null | null | 2644 | null |
5979 | 2 | null | 5680 | 8 | null | Specifically regarding error rates as a DV, [Dixon (2008)](http://linkinghub.elsevier.com/retrieve/pii/S0749596X07001283) very cogently demonstrates that null hypothesis testing via ANOVA can cause both increased false alarm rates (calling effects "significant" when they're not) and increased miss rates (missing real effects). He also shows that mixed effects modelling, specifying binomially distributed error, is the more appropriate approach to analyzing rate data.
| null | CC BY-SA 2.5 | null | 2011-01-04T19:01:59.580 | 2011-01-04T19:01:59.580 | null | null | 364 | null |
5980 | 2 | null | 5967 | 1 | null | You are trying to mix level and flow data in one regression. Just notice that the percent change can be considered a certain transformation of the level variable (like returns and log-returns in financial econometrics). So since your dependent variable is a % change, when you include levels they try to "mimic" stationarity instead of stochastic movement, and they may well form a co-integrating relationship (try to test this anyway). Another "trick" to make level series stationary is to try ratios, if they make sense, or to consider differences (like the interest-rate term structure).
And I do agree that it is not the heteroscedasticity that forms the problem. Think carefully about the theory you are building your forecasting model on.
Good luck.
| null | CC BY-SA 2.5 | null | 2011-01-04T19:07:38.950 | 2011-01-04T20:24:38.007 | 2011-01-04T20:24:38.007 | 2645 | 2645 | null |
5982 | 1 | 5996 | null | 2 | 1103 | We are trying to make simulation experiment involving a common stochastic trend, that is described by the random walk (or $I(1)$ process) $Y_t = Y_{t-1} + \varepsilon_t$, where innovations $\varepsilon_t$ ~ $N(0,1)$. However when could we be sure that the past innovations are more or less reasonably included into the stochastic trend? Is there any good proxy for random-walk's burn-in period?
I've looked at R's burn-in default suggestion for the $AR(p)$ part: `ceiling(6/log(minroots))`, meaning that in the case of a unit root we get infinity here, so I roughly tried taking a close-to-unity root of 1.0001 (which is effectively the same as taking 60000 at once). So, any reasonable suggestions or rules of thumb that you use in practice?
| Burn-in period for random walk | CC BY-SA 2.5 | null | 2011-01-04T19:21:07.543 | 2011-01-21T08:46:29.903 | 2011-01-21T08:46:29.903 | 2645 | 2645 | [
"time-series",
"simulation"
]
|
5983 | 2 | null | 5971 | 1 | null | The general approach is to model your latencies in each condition with some distribution, like a Gaussian, and test whether the means of those two distributions are different, e.g. using a [t-test](http://en.wikipedia.org/wiki/Student%27s_t-test). This lets you pick a significance threshold (p-value) and test for it automatically.
The Gaussian isn't a perfect model for this case, because it's symmetric, and you know that your latencies are the sum of the actual time to run your code in isolation, plus some random latency due to the OS swapping, waiting for I/O, ... in other words, you know that your code will never randomly run faster than the time to run it in isolation. But, it probably won't matter. I'd just use a t-test.
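For example (made-up latency samples, in R):

```
before <- rnorm(30, mean = 120, sd = 15)   # hypothetical timings (ms), condition A
after  <- rnorm(30, mean = 112, sd = 15)   # hypothetical timings (ms), condition B
t.test(before, after)                      # Welch two-sample t-test by default
```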
| null | CC BY-SA 2.5 | null | 2011-01-04T19:30:05.887 | 2011-01-04T19:30:05.887 | null | null | 2489 | null |
5984 | 2 | null | 1164 | 5 | null | My knowledge of robust estimators is solely in regards to robust standard errors for regression parameters so my comment will only be in regards to those. I would suggest people read this article,
On The So-Called "Huber Sandwich Estimator" and "Robust Standard Errors"
by: Freedman, A. David
The American Statistician, Vol. 60, No. 4. (November 2006), pp. 299-302. [doi:10.1198/000313006X152207](http://dx.doi.org/10.1198/000313006X152207) ([PDF Version](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.119.135&rep=rep1&type=pdf))
In particular, what I am concerned about with these approaches is not that they are wrong, but that they are simply distracting from bigger problems. Thus I entirely agree with Robin Girard's answer and his mention of "no free lunch".
| null | CC BY-SA 2.5 | null | 2011-01-04T19:39:25.350 | 2011-01-04T19:39:25.350 | null | null | 1036 | null |
5985 | 2 | null | 5971 | 1 | null | I would take your latency responses from running the operation and perform some tests for normality. If these hold true, then you can create some confidence intervals using a standard normal table. If your tests for normality don't hold true, then a t-table would be more appropriate. You said that you normally plot your latency variables to find any anomalies. How does your plot normally look? Are these repeated operations being performed with the system being held constant? Do you want the system held constant during your experiment? Is the speed of the system being taken into account? These are all things that could be considered to reduce your error in modeling the time latency variables. These things will provide for a better confidence interval.
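For instance, a minimal sketch in R with made-up latencies:

```
set.seed(1)
lat <- rnorm(40, mean = 150, sd = 20)   # hypothetical latency measurements
shapiro.test(lat)                       # one common normality check
t.test(lat)$conf.int                    # 95% confidence interval for the mean latency
```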
| null | CC BY-SA 2.5 | null | 2011-01-04T20:03:10.713 | 2011-01-04T20:03:10.713 | null | null | 2539 | null |
5986 | 2 | null | 5972 | 8 | null | The estimator is biased, regardless.
Note first that $\alpha$ is not identifiable because you cannot distinguish between $\alpha$ and $1-\alpha$. Let's accommodate this problem by allowing that we don't care which coin is which and stipulating (arbitrarily, but with no loss of generality), that $0 \le \alpha \le 1/2$.
It's reasonable, and conventional, to fix the estimator $g$ as follows:
$$\eqalign{
g(k,n) =& \frac{1 - \sqrt{\delta}}{2} \cr
\delta =& \max(0, 1 - 4 k / n)
}$$
No matter what you do, though, this will be a nonlinear function of the outcome $k$ and therefore is certain to be biased for almost all $\alpha$.
A better approach is to search for estimators $h(k,n)$ among some functional class of estimators (such as ones that are linear in $k$) that minimize the expectation of some loss function. In many situations an estimator that works well for quadratic loss also works well for many reasonable types of losses, so let's look at this. What we're talking about, then, is (for each $n$) to minimize the expectation $\mathbb{E}[(h(k,n) - \alpha)^2]$ among all estimators $h$.
Let's look graphically at what's going on. The bias of any estimator $h$ of the parameter $\alpha$ is the difference between its expectation and the parameter, $\mathbb{E}[h(k,n) - \alpha]$. We can study any proposed estimator, then, by graphing its bias (if we really care about that) and its loss. For any value of $n$ they are functions of $\alpha$, which (of course) is unknown. That's why we have to look at the entire graph.
Here are the bias (blue, dashed) and square root of the expected quadratic loss (red) for $g$ when $n=16$:

(I use the root of the loss because this is directly comparable to the bias.)
For example, $g$ is unbiased for $\alpha \approx 1/3$ but otherwise is biased, with the size of the bias largest for $\alpha = 1/2$. The root expected loss is roughly between 0.15 and 0.2 provided $\alpha$ exceeds $1/6$, approximately.
As an alternative, consider linear estimators $h_\lambda(k,n)$ of the form $h_\lambda(k,n) = \lambda(n) k/n$. Here is a plot of $h_2$ also for $n=16$ (but please note the change in scale on the vertical axis):

For most $\alpha$ its bias exceeds that of $g$, but for some $\alpha$ (near 0.4) it actually has less bias. For a wide range of $\alpha$, though, its root expected loss is less than that of $g$. Provided $\alpha \gt 1/5$ or so, this simple estimator clearly outperforms the "obvious" one!
This is not necessarily "the best" linear estimator, however. To illustrate, here is a plot of $h_{4/3}$:

It outperforms both $g$ and $h_2$ for $1/8 \lt \alpha \lt 3/8$, approximately. Note, though, that $g$ outperforms the $h_{\lambda}$ for sufficiently small $\alpha$.
These considerations suggest there is value in knowing something about what $\alpha$ might be: that will tell you which portions of the loss graphs to focus on in selecting among alternative estimators. If, in addition, you have a prior distribution for $\alpha$ you can compute the expected loss (this is now a single number) and use that to compare estimators: your task becomes one of finding an estimator with lowest possible expected loss. This, of course, is a Bayesian estimator.
Regardless, using plots of expected loss is a standard and effective way to compare estimators and to choose ones that are appropriate for any particular problem.
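For reference, curves like these can be computed exactly by summing over the Binomial distribution of $k$. Here is a sketch for $g$ with $n=16$ (not the code used for the figures above, just an illustration of the computation):

```
n <- 16
g <- function(k, n) (1 - sqrt(pmax(0, 1 - 4 * k / n))) / 2
alpha <- seq(0, 0.5, length = 201)
k <- 0:n
risk <- sapply(alpha, function(a) {
  w <- dbinom(k, n, a - a^2)                    # distribution of the success count
  c(bias = sum(w * g(k, n)) - a,
    rmse = sqrt(sum(w * (g(k, n) - a)^2)))      # root expected quadratic loss
})
matplot(alpha, t(risk), type = "l", lty = c(2, 1), col = c("blue", "red"),
        xlab = expression(alpha), ylab = "bias / root expected loss")
```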
| null | CC BY-SA 2.5 | null | 2011-01-04T20:23:32.117 | 2011-01-13T22:10:04.197 | 2011-01-13T22:10:04.197 | 919 | 919 | null |
5987 | 1 | 5989 | null | 3 | 779 | What is the worst classifier, one that learns badly in practical problems?
Edit:
Especially bad on test data..
Thanks
| Worst classifier | CC BY-SA 2.5 | null | 2011-01-04T20:58:27.577 | 2011-01-05T13:27:22.290 | 2011-01-05T13:27:22.290 | 2599 | 2599 | [
"machine-learning",
"classification"
]
|
5988 | 2 | null | 5918 | 21 | null | The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate every step involved in fitting the model must also be performed independently in each fold of the cross-validation procedure. The best thing to do is to view feature selection, meta/hyper-parameter setting and optimising the parameters as integral parts of model fitting and never do any one of these steps without doing the other two.
The optimistic bias that can be introduced by departing from that recipe can be surprisingly large, as demonstrated by [Cawley and Talbot](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html), where the bias introduced by an apparently benign departure was larger than the difference in performance between competing classifiers. Worse still, biased protocols favour bad models most strongly, as they are more sensitive to the tuning of hyper-parameters and hence are more prone to over-fitting the model selection criterion!
Answers to specific questions:
The procedure in step 1 is valid because feature selection is performed separately in each fold, so what you are cross-validating is the whole procedure used to fit the final model. The cross-validation estimate will have a slightly pessimistic bias as the dataset for each fold is slightly smaller than the whole dataset used for the final model.
For 2, as cross-validation is used to select the model parameters, you need to repeat that procedure independently in each fold of the cross-validation used for performance estimation, so you end up with nested cross-validation.
For 3, essentially, yes you need to do nested-nested cross-validation. Essentially you need to repeat in each fold of the outermost cross-validation (used for performance estimation) everything you intend to do to fit the final model.
For 4 - yes, if you have a separate hold-out set, then that will give an unbiased estimate of performance without needing an additional cross-validation.
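To make the nesting in points 2 and 3 concrete, here is a toy sketch in R (tuning a polynomial degree rather than SVM hyper-parameters, purely for illustration):

```
# Nested CV: the tuning parameter is chosen by inner CV using only the outer-fold training data
set.seed(1)
dat <- data.frame(x = runif(100))
dat$y <- sin(2 * pi * dat$x) + rnorm(100, sd = 0.3)
outer <- sample(rep(1:5, length.out = nrow(dat)))
outer.err <- numeric(5)
for (i in 1:5) {
  train <- dat[outer != i, ]
  test  <- dat[outer == i, ]
  inner <- sample(rep(1:5, length.out = nrow(train)))
  inner.err <- sapply(1:5, function(d)            # candidate polynomial degrees 1..5
    mean(sapply(1:5, function(j) {
      fit <- lm(y ~ poly(x, d), data = train[inner != j, ])
      mean((predict(fit, train[inner == j, ]) - train$y[inner == j])^2)
    })))
  best.d <- which.min(inner.err)                  # tuned on the outer training set only
  fit <- lm(y ~ poly(x, best.d), data = train)
  outer.err[i] <- mean((predict(fit, test) - test$y)^2)
}
mean(outer.err)                                   # estimate for the whole fitting procedure
```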
| null | CC BY-SA 3.0 | null | 2011-01-04T21:04:30.177 | 2014-07-04T20:38:51.437 | 2014-07-04T20:38:51.437 | 17230 | 887 | null |
5989 | 2 | null | 5987 | 7 | null | It is usually the statistician using the classifier that is the problem (for picking the wrong tool for the particular problem at hand, or for using it incorrectly). The "no free lunch" theorems show there is no a-priori distinction between classifiers (which works best/worst depends on the data), so I'd say there isn't a worst classifier in a general sense. However, some classifiers are easier to shoot yourself in the foot with than others, in which case, I would nominate artificial neural networks (they over-fit very easily, need proper architecture selection, and none of the solutions to such problems are fully reliable).
| null | CC BY-SA 2.5 | null | 2011-01-04T21:15:11.963 | 2011-01-04T21:15:11.963 | null | null | 887 | null |
5990 | 2 | null | 5987 | 9 | null | Consider the binary case. If you don't know the proportions of the two classes, then the worst you can do is to flip a fair coin in each case: the expected error rate is $1/2$. If you do know the proportions, and the smaller of the two is $p$, say, then you should always classify objects in that category: the expected error rate is $1-p \gt 1/2$. (However, it could be argued that this procedure is not worse. After observing it sufficiently long, you would notice its error rate is significantly greater than $1/2$ and then you would have the option of negating it--that is, assigning the opposite classification--and that would have an error rate of only $p \lt 1/2$.)
| null | CC BY-SA 2.5 | null | 2011-01-04T21:17:03.833 | 2011-01-04T21:17:03.833 | null | null | 919 | null |
5991 | 2 | null | 5975 | 7 | null | >
"Data! Data! Data! I can't make bricks
without clay."
Sherlock Holmes
Sorry, you just can't do it. If you make up some measure that combines some of your variables, then guess what: those exact variables will be predictive of it.
| null | CC BY-SA 2.5 | null | 2011-01-04T22:49:00.363 | 2011-01-04T22:49:00.363 | null | null | 279 | null |
5992 | 2 | null | 5962 | 9 | null | The optimal values for the hyper-parameters will be different for different learning taks, you need to tune them separately for every problem.
The reason you don't get a single optimum is because both the kernel parameter and the regularisation parameter control the complexity of the model. If C is small you get a smooth model; likewise, if the kernel width is broad, you will get a smooth model (as the basis functions are not very local). This means that different combinations of C and the kernel width lead to similarly complex models, with similar performance (which is why you get the diagonal feature in many of the plots you have).
The optimum also depends on the particular sampling of the training set. It is possible to over-fit the cross-validation error, so choosing the hyper-parameters by cross-validation can actually make performance worse if you are unlucky. See [Cawley and Talbot](http://jmlr.csail.mit.edu/papers/v11/cawley10a.html) for some discussion of this.
The fact that there is a broad plateau of values for the hyper-parameters where you get similarly good values is actually a good feature of support vector machines as it suggests that they are not overly vulnerable to over-fitting in model selection. If you had a sharp peak at the optimal values, that would be a bad thing as the peak would be difficult to find using a finite dataset which would provide an unreliable indication of where that peak actually resides.
| null | CC BY-SA 2.5 | null | 2011-01-05T00:57:32.817 | 2011-01-05T00:57:32.817 | null | null | 887 | null |
5993 | 2 | null | 5797 | 1 | null | This answer may be way, way off base as I don't understand the medical context of your question and the nature of the medical test results you allude to, but it might be possible to estimate data mining bias by some sort of Monte Carlo permutation of your results. The type of approach I'm thinking of is taken from a book, Evidence Based Technical Analysis, which has a companion website [here](http://www.evidencebasedta.com/).
Although this is written with the financial markets in mind it is, in my opinion, a book about Monte Carlo techniques as much as it is about markets.
| null | CC BY-SA 2.5 | null | 2011-01-05T01:32:02.787 | 2011-01-05T01:32:02.787 | null | null | 226 | null |
5995 | 1 | 6014 | null | 3 | 3420 | I apologize in advance for the vague title, but I couldn't think of anything better.
I have two datasets, where one is a very small subset of the other. The percentage of people who have a specific attribute in the large dataset is x%. The percentage of people who have the same attribute in the subset is y%. The subset contains people most likely to do some action X.
For instance: I have a list of people who have bought my software. I have a list of people who have renewed. The attribute % for the bought is 20, for the renewed is 60%.
The buyer list size is say 100,000, the people who have renewed is 500.
My questions are:
- Is the likelihood of that attribute affecting the user's behaviour simply 3x?
- I know correlation is not causation, but at what point does this change? I.e., in the above case, there's a 3x difference, is that statistically significant enough?
- Does the huge difference between buyers/renewed introduce some sort of bias into the dataset that makes the attribute percentage useless as a form of determining people who will renew?
Thank you.
| Analyzing the difference between two datasets where one is a subset of the other | CC BY-SA 2.5 | null | 2011-01-05T05:27:36.953 | 2011-01-05T19:37:25.097 | 2011-01-05T07:40:00.260 | 930 | null | [
"statistical-significance",
"data-mining",
"unbiased-estimator"
]
|
5996 | 2 | null | 5982 | 4 | null | Burn-in doesn’t make sense here. The random walk you describe does not have a stationary distribution.
| null | CC BY-SA 2.5 | null | 2011-01-05T05:51:44.557 | 2011-01-05T05:51:44.557 | null | null | 1670 | null |
5997 | 1 | 5998 | null | 11 | 1095 | I'm a beginner to statistics (taken only one college course), but I have a background in programming.
I just started playing with a Bayesian classifier library for Ruby and I am looking for ideas for things to analyze.
Right now I'm messing around with Tweet categorization, but do you have any ideas?
More importantly, how can I learn about which types of data lend themselves to naive Bayesian classification?
Thanks.
| What kinds of things can I predict with a naive Bayesian classifier? | CC BY-SA 2.5 | null | 2011-01-05T05:55:56.113 | 2011-03-26T14:30:20.593 | null | null | 2649 | [
"bayesian",
"naive-bayes"
]
|
5998 | 2 | null | 5997 | 13 | null | [The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/), by Hastie et al. has a lot of illustrations of Machine Learning applications, and all [data sets](http://www-stat.stanford.edu/~tibs/ElemStatLearn/data.html) are available on the companion website, including data on spam as on the [Ruby Classifier](http://classifier.rubyforge.org/) webpage.
As for a gentle introduction to Bayes classifier, I would suggest to look at the following tutorial from Andrew Moore: [A Short Intro to Naive Bayesian Classifiers](http://www.autonlab.org/tutorials/naive.html) (many [other tutorials](http://www.autonlab.org/tutorials/) are also available).
| null | CC BY-SA 2.5 | null | 2011-01-05T07:35:46.257 | 2011-01-05T07:35:46.257 | null | null | 930 | null |
5999 | 2 | null | 5975 | 3 | null | I agree with @Aniko that you won't be able to "predict" anything without an outcome. Now @Andy suggestion makes sense, provided you find a users database sharing similar characteristics. As an example of related studies, I guess you might find interesting google hits on users' characteristics in studies on Twitter, Facebook or other social networks or community driven sites. Here is a couple of references that I just found: [Comparing community structure to characteristics in online collegiate social networks](http://www.amath.unc.edu/Faculty/mucha/Reprints/facebook.pdf) (Traud et al., [arXiv:0809.0690](http://arxiv.org/abs/0809.0690)); [Social Computing Privacy Concerns: Antecedents & Effects](http://faculty.poly.edu/~onov/Nov%20Wattal%20CHI%202009%20Social%20Computing%20and%20Privacy%20Concerns.pdf) (Nov and Wattal, CHI 2009).
Aside from users' characteristics, willingness to particpate or contribute to a social network is also a function of its characteristics (number of members, contact frequency, content quality, etc.).
Finally, you still can do some kind of exploratory analysis on your data set. Specifically, a cluster analysis would help to highlight groups of individuals sharing similar response profiles. Another approach would be to use [Multiple correspondence analysis](http://en.wikipedia.org/wiki/Multiple_correspondence_analysis) to uncover potentially interesting patterns in your data. Those features might be related to already published results in a further step. Obviously, you can't regress an outcome that is still to be observed from this, but you will certainly get a better idea of how structured your data are.
| null | CC BY-SA 2.5 | null | 2011-01-05T10:02:04.577 | 2011-01-05T10:02:04.577 | null | null | 930 | null |
6000 | 2 | null | 5995 | 1 | null | For question 2 (significance), could you simply use a randomised resampling procedure and calculate the percentage of subsets of n=500 where the attribute % is ≥60?
| null | CC BY-SA 2.5 | null | 2011-01-05T10:12:10.967 | 2011-01-05T10:12:10.967 | null | null | 266 | null |
6001 | 2 | null | 5969 | 7 | null | There were already some useful comments, that are probably waiting for some updates in the question, so I will just drop some general online references:
- Practical Data Analysis for the Language Sciences with R, Baayen (2008)
- Categorical data analysis: Away from ANOVAs (transformation or not) and towards logit mixed models, Jaeger (J. Mem. Language 2008 59(4))
Examples using R may be found on Doug Bates' [lme4 - Mixed-effects models project](http://lme4.r-forge.r-project.org/).
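To give a flavour of the kind of model these references discuss, here is a sketch with lme4 on invented data (the column names, design and effect sizes are hypothetical):

```
library(lme4)
set.seed(1)
# Invented data: 40 subjects x 2 conditions x 20 trials, one continuous ability score per subject
d <- expand.grid(subject = factor(1:40), condition = factor(c("A", "B")), trial = 1:20)
ability <- rnorm(40)
d$ability <- ability[d$subject]
d$rt <- 500 + 30 * (d$condition == "B") -
        20 * d$ability * (d$condition == "B") +
        10 * rnorm(40)[d$subject] + rnorm(nrow(d), sd = 50)
m0 <- lmer(rt ~ ability + condition + (1 | subject), data = d)
m1 <- lmer(rt ~ ability * condition + (1 | subject), data = d)
anova(m1, m0)   # does continuous ability moderate the within-subject condition effect?
```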
| null | CC BY-SA 2.5 | null | 2011-01-05T10:55:52.227 | 2011-01-05T10:55:52.227 | null | null | 930 | null |
6002 | 2 | null | 5967 | 4 | null | Your dependent variable is growth. For economic time-series data it is more likely that growth will be a stationary process. This means that it will have constant mean. The level data on the other hand is usually non-stationary. Since your model is linear regression you assume that the true data generating process is
$$Y_t=\alpha_0+X_{1t}\alpha_1+...+X_{kt}\alpha_k+u_t$$
where $u_t$ is a white noise, $Y_t$ is stationary and $X_{kt}$ are non-stationary. Now stationarity implies that
$$EY_t=const=\alpha_0+\alpha_1EX_{1t}+...+\alpha_kEX_{kt}$$.
Now $EX_{kt}$ are functions of time, which for non-stationary processes change with time. So you are implying that
$$\alpha_0+\alpha_1\mu_1(t)+...+\alpha_k\mu_k(t)=const$$
for some non-constant functions $\mu_k(t)$. This places quite severe restrictions on non-stationary processes $X_{kt}$. For example if we have only one independent variable this restriction becomes
$$\alpha_1\mu_1(t)=const-\alpha_0$$
so either $\mu_1$ is constant or $\alpha_1$ is zero. In the first case this contradicts the presumption that $X_1$ is non-stationary; in the second case the regression model is of no use.
So in general this is why it is not a good idea to mix levels and growths in regression, unless you are really sure that they are all stationary.
Another problem with time-series regression is that for a certain class [of non-stationary processes](http://en.wikipedia.org/wiki/Order_of_integration) the regression can be [spurious](http://en.wikipedia.org/wiki/Spurious_correlation). In this case you cannot trust the least squares estimates $\hat{\alpha}_k$, since in the spurious regression case their distribution is not normal and does not tend to normal, so the usual regression statistics do not apply. For example you can get that $\alpha_k$ is significantly non-zero, when it is actually zero. So before the regression it is always a good idea to test whether your variables are not integrated using some variant of the [Dickey-Fuller test](http://en.wikipedia.org/wiki/Dickey%E2%80%93Fuller_test). I strongly suspect that the Dow Jones index is an integrated process.
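For example, a quick check with simulated data (a sketch; `adf.test()` is in the tseries package, and the random walk here just stands in for a level series like the Dow Jones):

```
library(tseries)
set.seed(1)
dj <- cumsum(rnorm(200))    # simulated random walk (integrated of order 1)
adf.test(dj)                # fails to reject a unit root -> do not regress on the level
adf.test(diff(dj))          # the differenced series looks stationary
```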
Now as others pointed out, heteroscedasticity in the independent regression variable is harmless. The problems can arise if the regression errors are heteroscedastic. Then the least squares estimates will be consistent, but inefficient; also the standard errors should [be adjusted](http://en.wikipedia.org/wiki/White_standard_errors) for hypothesis testing.
| null | CC BY-SA 2.5 | null | 2011-01-05T12:06:14.560 | 2011-01-05T12:06:14.560 | null | null | 2116 | null |
6003 | 2 | null | 5995 | 2 | null | I interpret "has attribute" as "the value for this attribute is not missing". As you already pointed out, the question now is whether the value "missing" has any importance regarding the target "renewed" (to add another (fictional, but plausible) example: missing entries for attribute "income" in a customer database may occur with a higher probability when the corresponding customers have high income).
Let's say $q$ is the missing ratio and $p=1-q$ the not-missing ratio.
- No, not plainly 3. This is because the other value of the attribute may have some influence on the outcome ("renewed"), too. To measure the importance one can e.g. use InformationGainRatio. I recommend performing a discretization first (if the attribute we are talking about is numerical) and (most importantly) treating the value "missing" as an additional value; don't just ignore it. Now you can calculate the InformationGainRatio for the whole set and the subset to see the difference in importance. Another option is to apply a Logistic Regression and check the coefficient of the attribute (but I do not have that much experience in this area).
- See the answer of Freya Harrison. You can perform a statistical test using the normal distribution approximation to the binomial distribution, with the null hypothesis $H_0: p \le 0.2$ and the alternative hypothesis $H_1: p > 0.2$. The result will tell you whether the difference is statistically significant (see the sketch after this list).
- I don't think so, because a set of 500 is not small (besides, it is not necessarily true that the whole set would have less information if you restrict it to a sample of 20000). There is still some variance, but the attribute is definitely not useless (given that the difference is, even if not significant, quite big). In such cases a good old repeated cross-validation tells us (I'd choose k around 5 in your case) whether one can perform a prediction based on the attribute or not.
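A minimal sketch for point 2 in R, with hypothetical counts (300 of 500 renewers having the attribute versus a reference rate of 0.2):

```
prop.test(300, 500, p = 0.2, alternative = "greater")    # normal approximation
binom.test(300, 500, p = 0.2, alternative = "greater")   # exact version
```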
| null | CC BY-SA 2.5 | null | 2011-01-05T12:07:25.033 | 2011-01-05T12:07:25.033 | null | null | 264 | null |
6004 | 2 | null | 5746 | 0 | null | Also not an answer, but a back-of-the-envelope calculation for the simpler case of i.i.d. Xj from a known distribution, to help my (non-statistician's) intuition:
Example Monte Carlo, to show the wide variation in run lengths, payoffs and costs:
```
# sample exponential*100 until > the first 3
# c0 = break-even cost, peak / n
15 [ 54 127 0 36 16 10 21 42 51 77 54 116 23 210]
17 [24 31 68 5 85]
22 [345 38 118 209 225 9 4 19 210 10 55 317 76 118 38 116 180 2 139 450]
23 [ 3 111 54 82 15 22 161]
23 [ 35 34 14 2 114]
24 [ 16 89 120 11 53 119 53 5 77 109 72 289]
33 [ 88 234 15 15 165 51 18 262]
40 [138 33 156 11 59 239]
45 [ 98 139 43 31 226]
54 [ 43 139 129 215]
```
For Xj i.i.d. from a known distribution,
one can calculate the following:
Emax3 = Expected max( X1 .. X3 ) # or 10 or √ n ...
p = Prob { X > Emax3 }
Ntopeak ~ 1/p, the expected number of draws to the first X > Emax
Epeak = its expected value
then compare the break-even cost c0 = Epeak / (3 + Ntopeak)
to the ante you can afford.
For exponentials, as above, I get
```
expon 3:
Emax3 158
Ntopeak 5
Epeak quartiles [187 227 296]
c0 ~ [187 227 296] / (3 + 5) = [23 28 37]
# but Epeak dist / Ntopeak dist ?
```
For a given distribution and cost-per-sample c,
one could optimize the number of initial samples (here 3)
by brute force.
But the high variance in such results
makes them hard to communicate, even misleading.
| null | CC BY-SA 2.5 | null | 2011-01-05T12:29:37.580 | 2011-01-05T12:29:37.580 | null | null | 557 | null |
6005 | 1 | 6009 | null | 3 | 4231 | I am slightly new to writing functions in R.
Here I have a basic function that searches for a pattern and returns the indexes of the matches, given a list dataset:
```
# this function takes a pattern and returns the indexes of the matches
find_domain <- function(pattern,list) grep(pattern,list,ignore.case=T)
```
Sometimes during the analysis I found that I need to do that for a larger dataset. The idea is to look for the occurrence of a domain (pattern) in each of the d7_dataset rows (since each row represents a protein sequence and each column is a domain), such that I can then count the number of sequences that contain a given domain (the pattern in this case). So I wrote this:
```
seqs <- c()
for (i in 1:nrow(d7_dataset)){
pos <- find_domain("CIDRγ13",d7_dataset[i,])
if (!is.na(pos[1]))
seqs <- c(seqs,1)
}
total_seqs <- sum(seqs)
total_seqs
```
However, this seems like it can be written as a single function, so that I can eliminate duplication and simplify it such that it is generic enough to apply to multiple datasets.
Any ideas on how to condense it to a single function?
In a nutshell: given a pattern or list of patterns and a dataset (as a dataframe) to search, return the number of sequences (rows in the dataset) that contain the given pattern.
More or less like this
```
results <- search(a list of patterns to search, dataset)
```
results should look like
```
pattern count
pat1 40
pat2 3
pat3 0
. .
. .
```
It would be very good if the pattern argument would accept a list to search
Thank you
| An R function for performing searches | CC BY-SA 2.5 | null | 2011-01-05T13:36:36.637 | 2011-01-05T16:59:03.550 | null | null | 18462 | [
"r",
"dataset"
]
|
6007 | 2 | null | 6005 | 4 | null | You should first replace your for loop with something like `apply(d7_dataset, 1, foo)`, where `foo()` is either your function or something along those lines, e.g. `gregexpr()`. The result of `gregexpr()` is a list of numeric vectors with attributes similar to `regexpr()` but giving all matches in each element.
On a related point, there was another function that was proposed as a more user-friendly alternative to `gregexpr()`: [easyGregexpr](http://www.r-bloggers.com/developing-a-user-friendly-regular-expression-function-easygregexpr/).
The following example gives you the number of matches for each row when considering a list of three motifs (in a 10x100 matrix):
```
dd <- replicate(100, replicate(10, paste(sample(letters[1:4], 2),
collapse="")))
pat <- list("aa", "ab", "cd")
foo <- function(d, p) apply(d, 1, function(x) length(grep(p, x)))
lapply(pat, function(x) foo(dd, x))
```
| null | CC BY-SA 2.5 | null | 2011-01-05T13:53:05.480 | 2011-01-05T16:59:03.550 | 2011-01-05T16:59:03.550 | 930 | 930 | null |
6008 | 1 | 6011 | null | 4 | 6774 | I am trying to do Bayesian posterior predictive checking, whereby I calculate the DIC for my fitted model and compare it to the DIC from data simulated from the fitted model. I can get the DIC out of WinBUGS; however, I am not sure how to calculate the likelihood (for the DIC) outside of WinBUGS (i.e., without fitting new models). All of the literature is quite general with respect to discrepancy functions for model checking, and uses notation like p(D|theta), which I understand but which doesn't help me when I want to actually do the calculation.
Given a set of parameter values and a Y_rep data set, how do I calculate the likelihood?
| How to calculate likelihood for a bayesian model? | CC BY-SA 2.5 | null | 2011-01-05T13:56:46.507 | 2023-04-29T18:48:24.233 | 2011-01-05T15:59:50.490 | 930 | 2654 | [
"bayesian",
"likelihood"
]
|
6009 | 2 | null | 6005 | 6 | null | If your dataset is a matrix, then `grep` will work directly:
```
> set.seed(1)
> bb=matrix(letters[sample(1:20, 100, rep=TRUE)], nrow=20)
> grep("b", bb)
[1] 10 55 56 69 92
> bb[grep("b", bb)]
[1] "b" "b" "b" "b" "b"
```
If you have multiple patterns, then use `ldply` from package [plyr](http://cran.r-project.org/web/packages/plyr/index.html):
```
> ldply(c("a", "b", "z"), function(l) data.frame(pattern = l, count = length(grep(l, bb))))
pattern count
1 a 2
2 b 5
3 z 0
```
Your function `search` can then look like this:
```
search <- function(patlist, data) {
  # grep works on vectors/matrices, so coerce a data frame to a matrix first
  if (!is.matrix(data)) data <- as.matrix(data)
  ldply(patlist, function(l) data.frame(pattern = l, count = length(grep(l, data))))
}
```
Note that `grep` does not work directly on a `data.frame`, so if your data may come as one it needs to be converted to a matrix first, as the function above does.
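For instance, a quick usage sketch (assuming `plyr` is loaded and reusing the example matrix `bb` from above; the patterns are arbitrary):
```
library(plyr)

# Works on a matrix or, via the coercion above, on a data frame
search(list("a", "b", "z"), as.data.frame(bb))
```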
| null | CC BY-SA 2.5 | null | 2011-01-05T14:06:47.837 | 2011-01-05T14:06:47.837 | null | null | 2116 | null |
6011 | 2 | null | 6008 | 4 | null | The [likelihood](http://en.wikipedia.org/wiki/Likelihood_function) is numerically equal to $\operatorname{P}(D\vert\theta)$, where D is the data vector and theta the parameter vector. Well, strictly speaking it's only equal up to a multiplicative factor, but most software packages including WinBUGS define it as being equal - see [Q1 of their FAQ on DIC](http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/dicpage.shtml#q1). The distinction between likelihood and probability is made because instead of the probability of the data for given values of the parameters, the likelihood is considered to be a function of the parameters for a fixed set of data.
It's usually analytically simpler and more computationally stable to work with the [log-likelihood](http://en.wikipedia.org/wiki/Likelihood_function#Log-likelihood), and that's what you'll need to calculate the DIC anyway.
ADDED In response to user2654's comment below:
$\operatorname{P}(D\vert\theta)$ depends on the model for the data. The simplest case is when the data are [independent and identically distributed](http://en.wikipedia.org/wiki/Iid) (iid), in which case $\operatorname{P}(D\vert\theta)$ becomes a product over all the data points, and therefore the log-likelihood is a sum over all the data points. The probability for each data point is calculated from the [probability density function](http://en.wikipedia.org/wiki/Probability_density_function) or [probability mass function](http://en.wikipedia.org/wiki/Probability_mass_function) defined by your model.
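As a concrete illustration, here is a minimal R sketch for the iid normal case, assuming a simple model with mean a + b*x; the names `y`, `x`, `a`, `b` and `sigma` are placeholders for your own data and sampled parameter values:
```
# Log-likelihood of iid normal data for a given set of parameter values
loglik <- function(y, x, a, b, sigma) {
  mu <- a + b * x                                   # model-predicted means
  sum(dnorm(y, mean = mu, sd = sigma, log = TRUE))  # sum of pointwise log-densities
}

# The deviance that enters the DIC is then -2 * loglik(y, x, a, b, sigma)
```
For the DIC you would evaluate this at each posterior draw of the parameters; for the posterior predictive check, evaluate it with the simulated Y_rep data in place of y.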
| null | CC BY-SA 2.5 | null | 2011-01-05T15:33:28.930 | 2011-01-05T16:01:11.860 | 2011-01-05T16:01:11.860 | 449 | 449 | null |
6013 | 1 | 6015 | null | 8 | 1481 | I have a Python script that creates a list of lists of server uptime and performance data, where each sub-list (or 'row') contains a particular cluster's stats. For example, nicely formatted it looks something like this:
```
------- ------------- ------------ ---------- -------------------
Cluster %Availability Requests/Sec Errors/Sec %Memory_Utilization
------- ------------- ------------ ---------- -------------------
ams-a 98.099 1012 678 91
bos-a 98.099 1111 12 91
bos-b 55.123 1513 576 22
lax-a 99.110 988 10 89
pdx-a 98.123 1121 11 90
ord-b 75.005 1301 123 100
sjc-a 99.020 1000 10 88
...(so on)...
```
So in list form, it might look like:
```
[[ams-a,98.099,1012,678,91],[bos-a,98.099,1111,12,91],...]
```
### My question:
- What's the best way to determine the outliers in each column? Or are outliers not necessarily the best way to attack the problem of finding 'badness'?
In the data above, I'd definitely want to know about bos-b and ord-b, as well as ams-a since its error rate is so high, but the others can be discarded. Since, depending on the column, higher is not necessarily worse (nor is lower), I'm trying to figure out the most efficient way to do this. Seems like numpy gets mentioned a lot for this sort of thing, but I'm not sure where to even start with it (sadly, I'm more sysadmin than statistician...). When I asked over at Stack Overflow, someone mentioned using numpy's scoreatpercentile function and throwing out anything over the 99th percentile - does that seem like a good idea?
(Cross-posted from stackoverflow, [here](https://stackoverflow.com/questions/4606288))
| How to identify outliers in server uptime performance data? | CC BY-SA 4.0 | null | 2011-01-05T19:17:42.883 | 2023-01-04T16:08:29.513 | 2023-01-04T16:08:29.513 | 362671 | 2659 | [
"outliers",
"quantiles"
]
|