Columns: Id, PostTypeId, AcceptedAnswerId, ParentId, Score, ViewCount, Body, Title, ContentLicense, FavoriteCount, CreationDate, LastActivityDate, LastEditDate, LastEditorUserId, OwnerUserId, Tags
1517
1
1518
null
3
2621
I am a database developer working for a sales and manufacturing business. I am mostly ignorant about statistics. We need useful metrics. Our managers are tuned to accounting, and the vagaries of production tend to confound us. We do very little measuring of our production, and what we have is poorly formed. I should note we are a "job shop", not a "flow shop" -- we do a lot of engineer-to-order work, so the usual MRP standards are often hard to apply. Some traditional business metrics are known to us, for example "inventory turn-over rate", but we are unable to convert those to useful information. I believe our inability to qualify data statistically is a big reason why. Of course we perform averaging all the time. Rolling averages (smoothing data over time using a 3-week rolling average, for example) are a helpful extension. Recently I discovered how to apply standard deviation to labor costing, with wonderful benefits. Now that I understand (A) averaging, (B) rolling averages, and (C) standard deviation, what are the next useful functions or techniques I should seek to learn? I would love to have your insights on "business intelligence", by which I mean defining and using metrics. But it's the use of stats to get from raw data to usable information that I'm really after. No matter, give me whatever you've got.
Useful statistical functions for business -- for use by a newbie
CC BY-SA 2.5
null
2010-08-10T22:39:06.193
2011-10-15T12:48:16.130
2010-09-17T20:25:29.290
null
857
[ "business-intelligence" ]
1518
2
null
1517
2
null
There are lots of generic tools. You should probably start with these: - Tests to compare means - Linear Regression Foundational items to understand statistics: - Probability Distribution and List of Probability Distributions This should keep you busy, depending on how much background you already have.
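To make those two starting points concrete, here is a minimal R sketch; every number below is invented for illustration and is not from the post. It shows a two-sample t-test comparing mean daily output between two hypothetical production lines, and a simple linear regression of units produced on labour hours.

```
# Hypothetical example: compare mean daily output of two production lines,
# then relate units produced to labour hours with a simple linear regression.
set.seed(1)
line_a <- rnorm(30, mean = 100, sd = 10)    # invented daily output, line A
line_b <- rnorm(30, mean = 106, sd = 10)    # invented daily output, line B
t.test(line_a, line_b)                      # two-sample t-test comparing the means

hours <- runif(50, 4, 12)                   # invented labour hours
units <- 5 + 3 * hours + rnorm(50, sd = 2)  # invented units produced
summary(lm(units ~ hours))                  # slope, intercept, p-values, R-squared
```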
null
CC BY-SA 2.5
null
2010-08-10T23:36:25.607
2010-08-10T23:36:25.607
null
null
null
null
1519
1
null
null
8
10327
In (most of) the analytical chemistry literature, the standard test for detecting outliers in univariate data (e.g. a sequence of measurements of some parameter) is Dixon's Q test. Invariably, all the procedures listed in the textbooks have you compute some quantity from the data to be compared with a tabular value. By hand, this is not much of a concern; however, I am planning to write a computer program for Dixon's Q, and just caching values strikes me as inelegant. Which brings me to my first question: - How are the tabular values for Dixon's Q generated? Now, I have already looked into this [article](http://pubs.acs.org/doi/pdf/10.1021/ac00002a010), but I feel this is a bit of a cheat, in that the author merely constructs a spline that passes through the tabular values generated by Dixon. I have the feeling that a special function (e.g. the error function or the incomplete beta/gamma function) will be needed somewhere, but at least I have algorithms for those. Some background for my second question: ISO seems to be slowly recommending Grubbs's test over Dixon's Q nowadays, but judging from the textbooks it has yet to catch on. Grubbs's test, on the other hand, was relatively easy to implement since it only involves computing the inverse of the CDF of Student's t. Now for my second question: - Why would I want to use Grubbs's test instead of Dixon's? On the obvious front, in my case the algorithm is "neater", but I suspect there are deeper reasons. Would anyone care to enlighten me?
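To illustrate the remark that Grubbs's test only needs the inverse CDF of Student's t, here is a rough R sketch. The data vector and the 5% level are made up, and the critical-value formula is the usual two-sided Grubbs one, so treat this as a sketch rather than a validated implementation.

```
# Sketch of a two-sided Grubbs test for a single outlier, using only qt().
grubbs <- function(x, alpha = 0.05) {
  n <- length(x)
  G <- max(abs(x - mean(x))) / sd(x)        # test statistic
  t <- qt(1 - alpha / (2 * n), df = n - 2)  # Student t quantile
  Gcrit <- ((n - 1) / sqrt(n)) * sqrt(t^2 / (n - 2 + t^2))
  list(statistic = G, critical = Gcrit, outlier = G > Gcrit)
}
grubbs(c(9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 14.9))  # invented data with one obvious outlier
```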
On univariate outlier tests (or: Dixon Q versus Grubbs)
CC BY-SA 2.5
null
2010-08-11T02:33:03.993
2013-12-02T16:02:40.650
2010-08-11T10:09:50.527
8
830
[ "outliers", "hypothesis-testing" ]
1520
1
1537
null
8
4896
I came across an interesting problem today. You are given a coin and an amount of money $x$; on any toss you double your money if you get heads and lose half of it if you get tails. - What is the expected value of your money after $n$ tries? - What is the probability of getting more than the expected value in (1)? This is how I approached it. The probability of heads and of tails is the same (1/2). Expected value after the first toss = $1/2(2x) + 1/2(x/2) = 5x/4$, so the expected value is $5x/4$ after the first toss. Similarly, repeating this for the second toss starting from $5x/4$: expected value after the second toss = $1/2(2 \cdot 5x/4) + 1/2(1/2 \cdot 5x/4) = 25x/16$. So you get a sequence of expected values: $5x/4$, $25x/16$, $125x/64$, ... After $n$ tries, your expected value should be $(5/4)^n x$. If $n$ is large enough, your expected value should approach the mean of the distribution. So the probability that the value is greater than the expected value should be $0.5$. I am not sure about this one.
The expected value of random variable on tosses of a coin
CC BY-SA 3.0
null
2010-08-11T05:17:23.593
2013-06-12T21:38:15.070
2013-06-12T21:38:15.070
7290
862
[ "probability", "stochastic-processes" ]
1521
1
1532
null
21
17100
What is the difference between data mining and statistical analysis? For some background, my statistical education has been, I think, rather traditional. A specific question is posited, research is designed, and data are collected and analyzed to offer some insight on that question. As a result, I've always been skeptical of what I considered "data dredging", i.e. looking for patterns in a large dataset and using these patterns to draw conclusions. I tend to associate the latter with data-mining and have always considered this somewhat unprincipled (along with things like algorithmic variable selection routines). Nonetheless, there is a large and growing literature on data mining. Often, I see this label referring to specific techniques like clustering, tree-based classification, etc. Yet, at least from my perspective, these techniques can be "set loose" on a set of data or used in a structured way to address a question. I'd call the former data mining and the latter statistical analysis. I work in academic administration and have been asked to do some "data mining" to identify issues and opportunities. Consistent with my background, my first questions were: what do you want to learn, and what are the things that you think contribute to the issue? From their response, it was clear that the person asking the question and I had different ideas on the nature and value of data mining.
What is the difference between data mining and statistical analysis?
CC BY-SA 3.0
null
2010-08-11T05:31:50.527
2019-05-25T16:13:59.387
2019-05-25T16:13:59.387
22311
485
[ "data-mining", "terminology", "analysis" ]
1522
2
null
1521
7
null
I previously wrote a post where I made a few observations comparing data mining to psychology. I think these observations may capture some of the differences you are identifying: - "Data mining seems more concerned with prediction using observed variables than with understanding the causal system of latent variables; psychology is typically more concerned with the causal system of latent variables. - Data mining typically involves massive datasets (e.g. 10,000 + rows) collected for a purpose other than the purpose of the data mining. Psychological datasets are typically small (e.g., less than 1,000 or 100 rows) and collected explicitly to explore a research question. - Psychological analysis typically involves testing specific models. Automated model development approaches tend not to be theoretically interesting." - Data Mining and R
null
CC BY-SA 2.5
null
2010-08-11T06:25:03.157
2010-08-11T06:25:03.157
null
null
183
null
1523
2
null
1521
8
null
Data mining is statistics, with some minor differences. You can think of it as re-branding statistics, because statisticians are kinda weird. It is often associated with computational statistics, i.e. only stuff you can do with a computer. Data miners stole a significant proportion of multivariate statistics and called it their own. Check the table of contents of any 1990s multivariate book and compare it to a new data mining book. Very similar. Statistics is associated with testing hypotheses and with model building, whereas data mining is more associated with prediction and classification, regardless of whether there is an understandable model.
null
CC BY-SA 3.0
null
2010-08-11T06:29:22.680
2018-03-17T19:52:28.390
2018-03-17T19:52:28.390
null
74
null
1524
2
null
1517
4
null
I would suggest, if you can afford the time, following two online (taped) courses, one in probability and another one in statistics. I think it is the best way to get some basic knowledge that will help you move forward. - Probability course - Statistics course (This is lesson one. It goes up to Lesson 64. Unfortunately, I haven't found the links gathered together on one page. I also think that lessons 21, 26 and 39 are missing, but I don't remember having any problem whatsoever in following the course). You can also find the handouts for this class posted here.
null
CC BY-SA 2.5
null
2010-08-11T06:42:21.157
2010-08-11T07:22:46.477
2010-08-11T07:22:46.477
339
339
null
1525
1
4850
null
31
57988
Say I have eaten hamburgers every Tuesday for years. You could say that I eat hamburgers 14% of the time, or that the probability of me eating a hamburger in a given week is 14%. What are the main differences between probabilities and proportions? Is a probability an expected proportion? Are probabilities uncertain and proportions are guaranteed?
What's the difference between a probability and a proportion?
CC BY-SA 2.5
null
2010-08-11T07:24:02.763
2021-03-23T13:39:14.147
2017-11-10T09:30:50.997
11887
74
[ "probability", "intuition" ]
1526
2
null
1521
4
null
I don't think the distinction you make is really related to the difference between data mining and statistical analysis. You are talking about the difference between an exploratory analysis and a modelling/prediction approach. I think the tradition of statistics is built on all of these steps: exploratory analysis, then modelling, then estimation, then testing, then forecasting/inference. Statisticians do exploratory analysis to figure out what the data look like (the summary function under R!). I guess data mining is less structured and could be identified with exploratory analysis. However, it uses techniques from statistics such as estimation, forecasting and classification.
null
CC BY-SA 2.5
null
2010-08-11T07:27:27.137
2010-08-11T07:27:27.137
null
null
223
null
1527
2
null
1525
30
null
If you flip a fair coin 10 times and it comes up heads 3 times, the proportion of heads is .30 but the probability of a head on any one flip is .50.
null
CC BY-SA 2.5
null
2010-08-11T07:46:40.850
2010-08-11T07:46:40.850
null
null
183
null
1528
2
null
1525
7
null
A proportion implies it is a guaranteed event, whereas a probability is not. If you eat hamburgers 14% of the time, in a given (4-week) month (or over whatever interval you based your proportion on), you must have eaten 4 hamburgers; whereas with probability there is a possibility of having eaten no hamburgers at all or perhaps eaten a hamburger everyday. Probability is a measure of uncertainty, whereas proportion is a measure of certainty.
null
CC BY-SA 2.5
null
2010-08-11T07:50:04.400
2010-08-11T07:50:04.400
null
null
17
null
1529
1
null
null
10
1705
When carrying out OLS multiple linear regression, rather than plot the residuals against fitted values, I plot the (internal) Studentized residuals against fitted values (ditto for covariates). These residuals are defined as: \begin{equation} e^*_i = \frac{e_i}{\sqrt{s^2 (1-h_{ii})}} \end{equation} where $e_i$ is the residual and $h_{ii}$ are the diagonal elements of the hat matrix. To get these studentized residuals in R, you can use the `rstandard` command. What type of residuals do people routinely use in this context? For example, do you just stick with $e_i$ or do you use jackknife residuals, or something else entirely. Note: I'm not that interested in papers that define a new type of residual that no-one ever uses.
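For readers who want to see the definition above in action, here is a small R sketch (using the built-in `cars` data purely for illustration) that computes the internally studentized residuals by hand and checks them against `rstandard`:

```
fit <- lm(dist ~ speed, data = cars)   # illustrative model on a built-in dataset
e  <- resid(fit)
h  <- hatvalues(fit)                   # diagonal elements of the hat matrix
s2 <- summary(fit)$sigma^2             # residual variance estimate
manual <- e / sqrt(s2 * (1 - h))       # the formula given in the question
all.equal(unname(manual), unname(rstandard(fit)))  # should be TRUE

plot(fitted(fit), rstandard(fit))      # studentized residuals against fitted values
abline(h = c(-2, 0, 2), lty = 2)
```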
What type of post-fit analysis of residuals do you use?
CC BY-SA 3.0
null
2010-08-11T10:09:24.163
2013-01-31T03:06:21.320
2012-05-05T19:08:36.587
930
8
[ "regression", "residuals", "diagnostic" ]
1530
2
null
1395
2
null
In addition to the rmeta package there is also the meta package in R, which produces publication-quality plots.
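As a hedged sketch of the kind of call involved: the effect sizes, standard errors and study labels below are entirely made up, and the exact arguments may differ across versions of the meta package.

```
library(meta)
TE   <- c(0.12, 0.35, -0.04, 0.26)   # hypothetical study effect estimates
seTE <- c(0.10, 0.15,  0.12, 0.09)   # hypothetical standard errors
m <- metagen(TE = TE, seTE = seTE, studlab = paste("Study", 1:4))
forest(m)                            # forest plot of the pooled analysis
```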
null
CC BY-SA 2.5
null
2010-08-11T10:29:48.033
2010-08-11T10:29:48.033
null
null
609
null
1531
1
2233
null
8
14462
I am working on some MRSA data and need to calculate the relative risk of a group of hospitals compared with the remaining hospitals. My colleague sent me an Excel file with a formula inside to calculate the "exact confidence interval of relative risk". I can do the calculation without difficulty, but I have no idea how and why this formula is used for such a calculation. I have attached the Excel file [here](http://www.speedfile.org/987618) for your reference. Can anyone show me a reference on the rationale of the calculation? An article from a textbook will be fine with me. Thanks!
How to calculate the "exact confidence interval" for relative risk?
CC BY-SA 2.5
null
2010-08-11T10:30:31.527
2011-04-29T00:22:59.733
2011-04-29T00:22:59.733
3911
588
[ "confidence-interval", "epidemiology", "relative-risk" ]
1532
2
null
1521
21
null
Jerome Friedman wrote a paper a while back: [Data Mining and Statistics: What's the Connection?](http://www-stat.stanford.edu/~jhf/ftp/dm-stat.pdf), which I think you'll find interesting. Data mining was a largely commercial concern and driven by business needs (coupled with the "need" for vendors to sell software and hardware systems to businesses). One thing Friedman noted was that all the "features" being hyped originated outside of statistics -- from algorithms and methods like neural nets to GUI driven data analysis -- and none of the traditional statistical offerings seemed to be a part of any of these systems (regression, hypothesis testing, etc). "Our core methodology has largely been ignored." It was also sold as user driven along the lines of what you noted: here's my data, here's my "business question", give me an answer. I think Friedman was trying to provoke. He didn't think data mining had serious intellectual underpinnings where methodology was concerned, but that this would change and statisticians ought to play a part rather than ignoring it. My own impression is that this has more or less happened. The lines have been blurred. Statisticians now publish in data mining journals. Data miners these days seem to have some sort of statistical training. While data mining packages still don't hype generalized linear models, logistic regression is well known among the analysts -- in addition to clustering and neural nets. Optimal experimental design may not be part of the data mining core, but the software can be coaxed to spit out p-values. Progress!
null
CC BY-SA 2.5
null
2010-08-11T10:36:15.907
2010-08-11T10:36:15.907
null
null
251
null
1533
2
null
1531
4
null
This seems to be Fisher's Exact Test for Count Data. You can reproduce the results in R by giving:

```
data <- matrix(c(678, 4450547, 63, 2509451), 2, 2)
fisher.test(data)

# Output:
# data:  data
# p-value < 2.2e-16
# alternative hypothesis: true odds ratio is not equal to 1
# 95 percent confidence interval:
#  4.682723 7.986867
# sample estimates:
# odds ratio
#   6.068817
```
null
CC BY-SA 2.5
null
2010-08-11T11:53:51.930
2010-08-11T11:53:51.930
null
null
339
null
1534
1
null
null
2
138
I've been asked to give some advice for some clinicians who are comparing two different methods of blood pressure measurement. I suggested to them that we should proceed with a two-one-sided-test technique to determine equivalence of the two techniques. Unfortunately I have now learned that the clinicians have multiple measurements of blood pressure by each of the two methods and that the blood pressure can be quite variable within each patient during the period of observation (they are theatre cases). Is it possible to use some multiple regression technique to perform an equivalence test? Can I simply use confidence intervals to determine variability between the two techniques whilst accounting for inter-patient variability if I use the patients as factors in the regression model? Sorry it's an amateur question, but despite the Masters degree, I still feel like quite the amateur!
Best method for comparing multiple ranging measures
CC BY-SA 2.5
null
2010-08-11T12:12:35.573
2010-08-11T14:05:25.087
2010-08-11T13:09:20.933
5
867
[ "multiple-comparisons", "equivalence" ]
1536
1
null
null
2
2236
Does mutual information discriminate against fold change differences?
What are the advantages of using mutual information over Pearson correlation in network inference?
CC BY-SA 2.5
null
2010-08-11T12:51:51.797
2010-08-11T18:55:03.780
null
null
null
[ "correlation", "mutual-information" ]
1537
2
null
1520
9
null
> If $n$ is large enough, your expected value should approach the mean of the distribution.

Yes, that's correct.

> So the probability that the value is greater than the expected value should be 0.5.

This would only be correct if the distribution were symmetric - which in your game isn't the case. You can see this easily if you think about what the median value of your winnings should be after $n$ throws.

---

You can think of your problem as a [random walk](http://en.wikipedia.org/wiki/Random_walk). A basic one-dimensional random walk is a walk on the integers, where at each step we move $+1$ with probability $p$ and $-1$ with probability $1-p$. This is exactly what you have if we ignore the doubling/halving of money and set $p=0.5$. All we have to do is remap your coordinate system to this example. Let $x$ be your initial starting pot. Then we remap in the following way:

```
x*2^{-2}  ->  -2
x*2^{-1}  ->  -1
x         ->   0
x*2       ->   1
```

i.e. $x 2^k \mapsto k$. Let $S_n$ denote how much money we have after $n$ turns; then \begin{equation} Pr(S_n = 2^k x) = 2^{-n} \binom{n}{(n+k)/2} \end{equation} for $n \ge (n+k)/2 \ge 0$. When $(n+k)$ isn't a multiple of 2, then $Pr(S_n = 2^k x)=0$. To understand this, assume that we begin with £10. After $n=1$ turns, the only possible values are £5 or £20, i.e. $k=-1$ or $k=1$. The above result is a standard result for random walks; google random walks for more information. Also from random walk theory, we can calculate the median return to be $x$, which is not the same as the expected value. Note: I have assumed that you can always halve your money, i.e. 1 pence, 0.5 pence, 0.25 pence are all allowed. If you remove this assumption, then you have a random walk with an absorbing wall.

---

For completeness, here's a quick simulation in R of your process:

```
# Simulate 10 throws with a starting amount of x = money = 10
# n = 10
simulate = function(){
  # money won/lost in a single game
  money = 10
  for(i in 1:10){
    if(runif(1) < 0.5)
      money = money/2
    else
      money = 2*money
  }
  return(money)
}

# The Money vector keeps track of all the games
# N is the number of games we play
N = 1000
Money = numeric(N)
for(i in 1:N)
  Money[i] = simulate()

mean(Money); median(Money)

# Probabilities
# Simulated
table(Money)/1000
# Exact
2^{-10}*choose(10, 10/2)

# Plot the simulations
plot(Money)
```
null
CC BY-SA 2.5
null
2010-08-11T12:56:09.933
2010-09-10T08:41:42.590
2010-09-10T08:41:42.590
8
8
null
1538
1
1539
null
18
3854
The Kolmogorov–Smirnov distribution is known from the [Kolmogorov–Smirnov test](http://en.wikipedia.org/wiki/Kolmogorov_Smirnov). However, it is also the distribution of the supremum of the Brownian bridge. Since this is far from obvious (to me), I would like to ask you for an intuitive explanation of this coincidence. References are also welcome.
Why does the supremum of the Brownian bridge have the Kolmogorov–Smirnov distribution?
CC BY-SA 2.5
null
2010-08-11T13:15:24.397
2011-04-29T00:23:29.750
2011-04-29T00:23:29.750
3911
650
[ "distributions", "hypothesis-testing", "mathematical-statistics", "stochastic-processes" ]
1539
2
null
1538
14
null
$\sqrt{n}\sup_x|F_n-F|=\sup_x|\frac{1}{\sqrt{n}}\sum_{i=1}^nZ_i(x)|$ where $Z_i(x)=1_{X_i\leq x}-E[1_{X_i\leq x}]$. By the CLT you have $G_n(x)=\frac{1}{\sqrt{n}}\sum_{i=1}^nZ_i(x)\rightarrow \mathcal{N}(0,F(x)(1-F(x)))$. This is the intuition: a Brownian bridge $B(t)$ has variance $t(1-t)$ ([http://en.wikipedia.org/wiki/Brownian_bridge](http://en.wikipedia.org/wiki/Brownian_bridge)); replace $t$ by $F(x)$. This is for a single $x$. You also need to check the covariances, and it is still easy to show (CLT) that for $(x_1,\dots,x_k)$, $(G_n(x_1),\dots,G_n(x_k))\rightarrow (B_1,\dots,B_k)$ where $(B_1,\dots,B_k)$ is $\mathcal{N}(0,\Sigma)$ with $\Sigma=(\sigma_{ij})$, $\sigma_{ij}=\min(F(x_i),F(x_j))-F(x_i)F(x_j)$. The difficult part is to show that the supremum of $G_n$ converges in distribution to the supremum of the limiting process, i.e. that taking the supremum can be exchanged with taking the limit. Understanding why this holds requires some empirical process theory, covered in books such as van der Vaart and Wellner (not easy reading). The relevant result is Donsker's theorem: [http://en.wikipedia.org/wiki/Donsker%27s_theorem](http://en.wikipedia.org/wiki/Donsker%27s_theorem)
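A quick simulation can make the connection feel less mysterious (this is only an illustration, not a proof): compare the sampling distribution of $\sqrt{n}\sup_x|F_n-F|$ for uniform samples with the distribution of the supremum of an approximate Brownian bridge.

```
set.seed(42)
n <- 500; reps <- 2000

ks_stat <- replicate(reps, {
  x <- sort(runif(n))
  i <- 1:n
  sqrt(n) * max(i/n - x, x - (i - 1)/n)   # sqrt(n) * sup|F_n - F| for U(0,1) data
})

bridge_sup <- replicate(reps, {
  m <- 1000
  w <- cumsum(rnorm(m)) / sqrt(m)         # approximate Brownian motion on [0,1]
  t <- (1:m) / m
  max(abs(w - t * w[m]))                  # Brownian bridge: B(t) = W(t) - t*W(1)
})

qqplot(ks_stat, bridge_sup); abline(0, 1) # the two distributions roughly agree
```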
null
CC BY-SA 2.5
null
2010-08-11T13:34:07.253
2010-08-11T17:10:44.323
2010-08-11T17:10:44.323
223
223
null
1540
1
1543
null
8
9201
Assume you draw a uniformly distributed random number between 0 and 1 n times. How would one go about calculating the expected minimum number drawn after n trials? In addition, how would one go about calculating a confidence interval to state that the minimum number drawn is in the interval [a,b] with m% confidence?
What is the expected MINIMUM value drawn from a uniform distribution between 0 and 1 after n trials?
CC BY-SA 2.5
null
2010-08-11T13:38:19.647
2011-04-29T00:24:01.403
2011-04-29T00:24:01.403
3911
870
[ "uniform-distribution", "order-statistics", "extreme-value" ]
1541
2
null
1507
3
null
> It must be indicative of something besides the redistribution of wealth. Heads. A weaker man might be moved to re-examine his faith, for nothing else at least in the law of probability... Heads. Consider. One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within um... sub or supernatural forces. Discuss! What? Look at it this way. If six monkeys... If six monkeys... The law of averages, if I have got this right means... that if six monkeys were thrown up in the air long enough... they would land on their tails about as often as they would land on their... Heads, getting a bit of a bore, isn't it? – Tom Stoppard Rosencrantz and Guildenstern are Dead (1966) As John Christie pointed out, no matter how unlikely the student's result was, you can't infer anything from a single trial. A clever student might well have tried this gambit knowing it could not be refuted, in which case I might be inclined to commend her. Incidentally, Rosencrantz (or Guildenstern) tossed at least 157 consecutive heads and it was nothing to write home about.
null
CC BY-SA 2.5
null
2010-08-11T13:47:37.173
2010-08-11T13:47:37.173
2020-06-11T14:32:37.003
-1
869
null
1542
1
null
null
3
1268
I'm not sure how to google for this, as I am not very familiar with time series analysis. I have 500 websites, and I am measuring the number of visitors to each website each day. At some point, I turn on SEO (search engine optimization) for each of the websites; this happens on different days for different websites. The distribution of visitors by account for any given day has a long tail to the right. SEO may not have an immediate effect; it might take a few days/weeks to really start to see some results. I want to measure something like the "average" lift in the number of visitors, but an average is probably not going to do the trick because of the mix of websites. (A daily/weekly/monthly trend curve would be really cool, but the averaging problem will exist there, too.) I can probably average the number of visitors per day for any given account, but I can't do it across accounts. Do I simply need to segment the websites into "number of visitors" groups? What other kinds of approaches should I read about?
Analysis of multiple time series
CC BY-SA 2.5
null
2010-08-11T13:48:20.340
2010-09-30T21:21:25.593
2010-09-30T21:21:25.593
930
125
[ "time-series" ]
1543
2
null
1540
9
null
You are looking for [order statistics](http://en.wikipedia.org/wiki/Order_statistic). The wiki indicates that the distribution of the minimum draw from a uniform distribution between 0 and 1 after $n$ trials is a beta distribution (I have not checked it for correctness which you should probably do.). Specifically, let $U_{(1)}$ be the minimum order statistic. Then: $U_{(1)} \sim B(1,n)$ Therefore, the mean is $\frac{1}{1+n}$. You can use the beta distribution to identify $a$ and $b$ such that $Prob(a \le U_{(1)} \le b) = 0.95$. By the way, the use of the term confidence interval is not appropriate in this context as you are not performing inference. Update Calculating $a$ and $b$ such that $Prob(a \le U_{(1)} \le b) = 0.95$ is not straightforward. There are several possible ways in which you can calculate $a$ and $b$. One approach is to center the interval around the mean. In this approach, you would set: $a = \mu - \delta$ and $b = \mu + \delta$ where $\mu = \frac{1}{1+n}$. You would then calculate $\delta$ such that the required probability is 0.95. Do note that under this approach you may not be able to identify a symmetric interval around the mean for high $n$ but this is just my hunch.
null
CC BY-SA 2.5
null
2010-08-11T13:53:56.707
2010-08-11T15:10:55.150
2010-08-11T15:10:55.150
null
null
null
1544
2
null
1534
1
null
It looks to me like [mixed effects or multi-level modelling](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf) is what you want here.
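A minimal sketch of what that might look like with lme4, assuming a long-format data frame `bp_data` with columns `bp`, `method` (two levels) and `patient`; all of these names, and the coefficient name `methodB`, are hypothetical rather than taken from the question.

```
library(lme4)
# A random intercept per patient absorbs between-patient variability;
# the fixed 'method' effect is the systematic difference between the two techniques.
fit <- lmer(bp ~ method + (1 | patient), data = bp_data)
summary(fit)

# TOST-style equivalence check: is the 90% CI for the method effect contained
# within a pre-specified equivalence margin (e.g. +/- 5 mmHg)?
confint(fit, parm = "methodB", level = 0.90)
```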
null
CC BY-SA 2.5
null
2010-08-11T14:05:25.087
2010-08-11T14:05:25.087
null
null
601
null
1545
2
null
1507
5
null
How about a simulation-based approach? Here's some R code to generate 100000 students each trying the 40 tosses.

```
theSum = c()
for (i in 1:100000) {
  coin1 = rbinom(40, 1, .5)
  coin2 = rbinom(40, 1, .5)
  coin3 = rbinom(40, 1, .5)
  coin4 = rbinom(40, 1, .5)
  coin5 = rbinom(40, 1, .5)
  # count how many of the 40 attempts produced exactly one head across the 5 coins
  theSum[i] = sum(coin1 + coin2 + coin3 + coin4 + coin5 == 1)
}
summary(theSum)
hist(theSum, xlim = c(0,40), freq = F, main = "", xlab = "")
```

The range of times the HTTTT combination occurred (in any order): 0-18 (out of 40), with a mean of around 6. Below: a histogram of the 100000 attempts and how many times the magical combination occurred. You'd have to be very lucky indeed to get it 39 times out of 40 with fair coins. But stranger things have happened by chance (e.g., our evolution). ![histogram of simulated counts](http://img80.imageshack.us/img80/9268/coinflips.png)
null
CC BY-SA 2.5
null
2010-08-11T14:06:35.990
2010-08-11T14:06:35.990
null
null
702
null
1546
2
null
1542
6
null
Any study should start out with some conception of a goal. Are you interested in measuring the impact of your SEO? Or are you trying to model the visitor behavior? It isn't clear to me that you're making use of the "time series" aspect of this data. Are you also interested in the time of day or day of week of visits, for instance? Or visits around specific events? You could just as easily divide your # of visits/day into two groups -- with/without SEO -- and eliminate time. This would then be a categorical variable in your data. A basic next step could be to run a logistic regression to see the impact of SEO on your site traffic. In R, this could look something like this (where "seo" is a `factor`):

```
site.traffic.lg <- glm(num.visits ~ seo, family=binomial, data=your.data)
summary(site.traffic.lg)
```

If you want to use the fact that this is across a number of different kinds of sites, you could include this by adding it into the formula as another variable.
null
CC BY-SA 2.5
null
2010-08-11T14:10:21.613
2010-08-11T14:16:42.220
2010-08-11T14:16:42.220
5
5
null
1547
2
null
1521
9
null
Data mining is categorized as either Descriptive or Predictive. Descriptive data mining is to search massive data sets and discover the locations of unexpected structures or relationships, patterns, trends, clusters, and outliers in the data. On the other hand, Predictive is to build models and procedures for regression, classification, pattern recognition, or machine learning tasks, and assess the predictive accuracy of those models and procedures when applied to fresh data. The mechanism used to search for patterns or structure in high-dimensional data might be manual or automated; searching might require interactively querying a database management system, or it might entail using visualization software to spot anomalies in the data. In machine-learning terms, descriptive data mining is known as unsupervised learning, whereas predictive data mining is known as supervised learning. Most of the methods used in data mining are related to methods developed in statistics and machine learning. Foremost among those methods are the general topics of regression, classification, clustering, and visualization. Because of the enormous sizes of the data sets, many applications of data mining focus on dimensionality-reduction techniques (e.g., variable selection) and situations in which high-dimensional data are suspected of lying on lower-dimensional hyperplanes. Recent attention has been directed to methods of identifying high-dimensional data lying on nonlinear surfaces or manifolds. There are also situations in data mining when statistical inference — in its classical sense — either has no meaning or is of dubious validity: the former occurs when we have the entire population to search for answers, and the latter occurs when a data set is a “convenience” sample rather than being a random sample drawn from some large population. When data are collected through time (e.g., retail transactions, stock-market transactions, patient records, weather records), sampling also may not make sense; the time-ordering of the observations is crucial to understanding the phenomenon generating the data, and to treat the observations as independent when they may be highly correlated will provide biased results. The central components of data mining are — in addition to statistical theory and methods — computing and computational efficiency, automatic data processing, dynamic and interactive data visualization techniques, and algorithm development. One of the most important issues in data mining is the computational problem of scalability. Algorithms developed for computing standard exploratory and confirmatory statistical methods were designed to be fast and computationally efficient when applied to small and medium-sized data sets; yet, it has been shown that most of these algorithms are not up to the challenge of handling huge data sets. As data sets grow, many existing algorithms demonstrate a tendency to slow down dramatically (or even grind to a halt).
null
CC BY-SA 2.5
null
2010-08-11T14:37:06.930
2010-08-11T14:37:06.930
null
null
339
null
1548
2
null
1538
6
null
For Kolmogorov-Smirnov, consider the null hypothesis. It says that a sample is drawn from a particular distribution. So if you construct the empirical distribution function for $n$ samples, $f(x) = \frac{1}{n} \sum_i \chi_{(-\infty, x]}(X_i)$, in the limit of infinite data it will converge to the underlying distribution function. For finite information, it will be off. If one of the measurements is $q$, then at $x=q$ the empirical distribution function takes a step up. We can look at it as a random walk which is constrained to begin and end on the true distribution function. Once you know that, you go ransack the literature for the huge amount of information known about random walks to find out what the largest expected deviation of such a walk is. You can do the same trick with any $p$-norm of the difference between the empirical and underlying distribution functions. For $p=2$, it's called the Cramer-von Mises test. I don't know whether the set of all such tests for arbitrary real, positive $p$ forms a complete class of any kind, but it might be an interesting thing to look at.
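Here is a small R sketch of the two discrepancies mentioned above, computed against a hypothesized N(0,1) distribution; the sample is simulated, and the $W^2$ line uses the standard computational formula for the Cramer-von Mises statistic.

```
set.seed(1)
x <- sort(rnorm(100))      # sample to test against a N(0,1) null
n <- length(x)
Fx <- pnorm(x)             # hypothesized CDF evaluated at the ordered data
i  <- 1:n

D  <- max(i/n - Fx, Fx - (i - 1)/n)                   # sup-norm (Kolmogorov-Smirnov)
W2 <- 1/(12 * n) + sum((Fx - (2 * i - 1)/(2 * n))^2)  # p = 2 (Cramer-von Mises W^2)

c(D = D, W2 = W2)
ks.test(x, "pnorm")        # built-in check for the sup-norm statistic
```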
null
CC BY-SA 2.5
null
2010-08-11T15:02:36.447
2010-08-11T15:02:36.447
null
null
873
null
1549
2
null
1540
4
null
As Srikant suggests, you need to look at [order statistics](http://en.wikipedia.org/wiki/Order_statistic). To add to Srikant's answer, you can simulate this process easily in R:

```
n = 10
N = 1000; sims = numeric(N)
for(i in 1:N)
  sims[i] = min(runif(n))
hist(sims, freq=FALSE)
x = seq(0, 1, 0.01)
lines(x, dbeta(x, 1, n), col=2)
```

to get: ![histogram of simulated minima with the Beta(1, n) density overlaid](http://img441.imageshack.us/img441/6826/tmpe.jpg)

---

Slight digression: this question is related to one of my favourite statistics problems, the [German tank problem](http://en.wikipedia.org/wiki/German_tank_problem). This problem is about the maximum of uniform distributions, and can be summarised as:

> Suppose one is an Allied intelligence analyst during World War II, and one has some serial numbers of captured German tanks. Further, assume that the tanks are numbered sequentially from 1 to N. How does one estimate the total number of tanks? Taken from wikipedia.

Check out the wikipedia page for more details.
null
CC BY-SA 2.5
null
2010-08-11T15:16:55.993
2010-08-11T15:16:55.993
2020-06-11T14:32:37.003
-1
8
null
1550
2
null
1469
0
null
"Efficient" usually just means that in the class of estimators considered and for a given loss function, you choose one which is optimal. If you look up "admissible" instead, you'll find a huge amount of information, though it's a slightly weaker criterion.
null
CC BY-SA 2.5
null
2010-08-11T15:20:19.787
2010-08-11T15:20:19.787
null
null
873
null
1551
2
null
1521
10
null
The difference between statistics and data mining is largely a historical one, since they came from different traditions: statistics and computer science. Data mining grew in parallel out of work in the area of artificial intelligence and statistics. Section 1.4 from [Witten & Frank](http://www.cs.waikato.ac.nz/~ml/weka/book.html) summarizes my viewpoint so I'm going to quote it at length: > What's the difference between machine learning and statistics? Cynics, looking wryly at the explosion of commercial interest (and hype) in this area, equate data mining to statistics plus marketing. In truth, you should not look for a dividing line between machine learning and statistics because there is a continuum--and a multidimensional one at that--of data analysis techniques. Some derive from the skills taught in standard statistics courses, and others are more closely associated with the kind of machine learning that has arisen out of computer science. Historically, the two sides have had rather different traditions. If forced to point to a single difference of emphasis, it might be that statistics has been more concerned with testing hypotheses, whereas machine learning has been more concerned with formulating the process of generalization as a search through possible hypotheses... In the past, very similar methods have developed in parallel in machine learning and statistics... But now the two perspectives have converged. N.B.1 IMO, data mining and machine learning are very closely related terms. In one sense, machine learning techniques are used in data mining. I regularly see these terms as interchangeable, and in so far as they are different, they usually go together. I would suggest looking through ["The Two Cultures" paper](https://stats.stackexchange.com/questions/6/the-two-cultures-statistics-vs-machine-learning) as well as the other threads from my original question. N.B.2 The term "data mining" can have a negative connotation when used colloquially to mean letting some algorithm loose on the data without any conceptual understanding. The sense is that data mining will lead to spurious results and over-fitting. I typically avoid using the term when talking to non-experts as a result, and instead use machine learning or statistical learning as a synonym.
null
CC BY-SA 2.5
null
2010-08-11T15:20:38.757
2010-08-11T15:42:21.273
2017-04-13T12:44:28.813
-1
5
null
1552
2
null
1337
68
null
A statistician confidently tried to cross a river that was 1 meter deep on average. He drowned.
null
CC BY-SA 2.5
null
2010-08-11T15:56:33.000
2010-08-11T15:56:33.000
null
null
null
null
1553
2
null
1536
3
null
See this question for the differences/advantages of using mutual information versus Pearson correlation or Spearman's rank: - What is the major difference between correlation and mutual information? > Does mutual information discriminate against fold change differences? If that's the variation measure you use for your correlation values, then that is assumed for the hypothesized network.
null
CC BY-SA 2.5
null
2010-08-11T18:55:03.780
2010-08-11T18:55:03.780
2017-04-13T12:44:33.310
-1
251
null
1554
2
null
1540
1
null
Following @Srikant, one can compute the CDF of the beta distribution and find conditions on $a, b$ such that the interval $[a,b]$ contains the minimum of $n$ draws from a uniform distribution with 95% probability. The condition is: $(1-a)^n - (1-b)^n = 0.95$. One attractive choice would then be the interval $[0, 1 - 0.05^{1/n}]$. This is also the smallest interval with the desired property.
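A quick numerical check of that choice in R: the 95th percentile of the Beta(1, n) minimum coincides with $1 - 0.05^{1/n}$ (n = 10 here is arbitrary).

```
n <- 10
c(qbeta(0.95, shape1 = 1, shape2 = n),  # 95th percentile of the minimum of n uniform draws
  1 - 0.05^(1/n))                       # the closed-form endpoint given above
```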
null
CC BY-SA 2.5
null
2010-08-11T19:52:29.877
2010-08-11T19:52:29.877
null
null
795
null
1555
1
1608
null
5
12897
I have the following frequency table:

```
35  0   4   3   7   6   5   4
39  1   9   6   7   7   6   8
36  0   7  10  11  11  10  16
41  0   9   8   8   7   6   7
41  0   8   9  10   9  12  11
55  2  12   9  11  12  11  13
55  1  10  10  11  10  12  11
47  1  14   8  12  15  12  12
45  1  10  11  10  10   9  18
56  0  13  16  12  12  12  11
```

The Kruskal-Wallis ANOVA test returns:

```
Source      SS        df   MS        Chi-sq   Prob>Chi-sq
Columns     25306.8    7   3615.26   47.16    5.18783e-008
Error       17083.2   72    237.27
Total       42390     79
```

According to a multiple comparison of mean ranks: - Six group means are significantly different from group 1 (column 1) - Six group means are significantly different from group 2 (column 2) --- Now the Kruskal-Wallis and multiple comparison tests make sense; however, the chi-square test returns a chi-square value of 31.377 and a p-value of 0.9997, which leads us to accept the null hypothesis that the frequencies are independent. I understand that an assumption of ANOVA is independence, but... I want to test whether the frequencies are statistically independent; were the Kruskal-Wallis and multiple comparison tests the correct methodology? Note: I am not trying to be subjective, but for a given set of frequencies, how do you test that the differences between groups are significant?
Test if differences between frequencies are significant
CC BY-SA 2.5
null
2010-08-11T20:31:25.710
2011-07-02T01:10:51.207
null
null
559
[ "multiple-comparisons", "statistical-significance", "anova", "chi-squared-test" ]
1556
1
1558
null
11
4467
What is the difference between having something be statistically significant (such as a difference between two samples) and stating whether a group of numbers is independent or dependent?
Statistically significant vs. independent/dependent
CC BY-SA 3.0
null
2010-08-11T20:55:41.393
2012-03-24T22:50:53.610
2012-03-24T10:26:09.273
930
559
[ "statistical-significance", "independence" ]
1557
1
1560
null
15
2155
I am taken by the idea of James-Stein shrinkage (i.e. that a nonlinear function of a single observation of a vector of possibly independent normals can be a better estimator of the means of the random variables, where 'better' is measured by squared error). However, I have never seen it in applied work. Clearly I am not well enough read. Are there any classic examples of where James-Stein has improved estimation in an applied setting? If not, is this kind of shrinkage just an intellectual curiosity?
James-Stein shrinkage 'in the wild'?
CC BY-SA 2.5
null
2010-08-11T20:57:31.327
2014-11-03T14:54:15.627
2014-11-03T14:54:15.627
28666
795
[ "estimation", "error", "regularization", "application", "steins-phenomenon" ]
1558
2
null
1556
10
null
Significance in an independent-samples t test just means that the probability (if the null were true) of sampling a mean difference as extreme as the mean difference you actually sampled is less than .05. This is totally unrelated to dependent/independent. "Dependent" means the distribution of some individual observations is connected to the distribution of others, for example A) they are the same person taking the same test a second time, B) people in each group are matched on some pre-test variable, C) people in the two groups are related (i.e. family). "Independent" means there is no such connection.
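A small R illustration of the distinction; the numbers are simulated, and 'before'/'after' stand in for the same people measured twice. The same data analysed as independent samples and as dependent (paired) observations give different tests.

```
set.seed(2)
before <- rnorm(20, mean = 100, sd = 15)
after  <- before + rnorm(20, mean = 3, sd = 5)  # correlated with 'before': dependent data

t.test(before, after)                  # treats the two groups as independent
t.test(before, after, paired = TRUE)   # uses the within-person differences
```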
null
CC BY-SA 3.0
null
2010-08-11T21:12:16.003
2012-03-24T20:29:05.633
2012-03-24T20:29:05.633
7497
880
null
1559
2
null
1525
-5
null
I don't know if there is a difference, but probabilities are not percentages; they range from 0 to 1. If you multiply a probability by 100 you get a percentage. If your question were about the difference between a probability and a percentage, then this would be my answer, but that is not your question. The definition of probability assumes an infinite number of sampling experiments, so we can never truly get a probability because we can never truly conduct an infinite number of sampling experiments.
null
CC BY-SA 2.5
null
2010-08-11T21:19:03.187
2010-08-11T21:19:03.187
null
null
880
null
1560
2
null
1557
14
null
The James-Stein estimator is not widely used, but it has inspired soft thresholding and hard thresholding, which are really widely used. Wavelet shrinkage estimation (see the R package wavethresh) is used a lot in signal processing, and the shrunken centroid method (package pamr under R) for classification is used for DNA microarrays; there are a lot of examples of the practical efficiency of shrinkage... For theoretical purposes, see the section of Candès's review about shrinkage estimation (p. 20 onwards: James-Stein, and the section after that one deals with soft and hard thresholding): [http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.8881&rep=rep1&type=pdf](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.8881&rep=rep1&type=pdf) EDIT from the comments: why is James-Stein shrinkage less used than soft/hard thresholding? James-Stein is more difficult to manipulate (practically and theoretically) and to understand intuitively than hard thresholding, but the why question is a good one!
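For concreteness, here is a rough R sketch of the positive-part James-Stein estimator alongside the soft- and hard-thresholding rules it inspired; the sparse mean vector and noise level are invented, so treat this as a sketch rather than a tuned implementation.

```
james_stein <- function(x, sigma2 = 1) {   # positive-part James-Stein, needs length(x) >= 3
  p <- length(x)
  shrink <- max(0, 1 - (p - 2) * sigma2 / sum(x^2))
  shrink * x
}
soft_threshold <- function(x, lambda) sign(x) * pmax(abs(x) - lambda, 0)
hard_threshold <- function(x, lambda) x * (abs(x) > lambda)

set.seed(3)
theta <- c(rep(0, 7), 3, -4, 5)            # sparse true means
x <- theta + rnorm(10)                     # one noisy observation of each mean
rbind(x, js = james_stein(x),
      soft = soft_threshold(x, 1), hard = hard_threshold(x, 1))
```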
null
CC BY-SA 2.5
null
2010-08-11T21:20:47.680
2010-09-27T05:01:01.460
2010-09-27T05:01:01.460
223
223
null
1561
1
1567
null
1
625
I have a different sample size `n` for every month; for example, I have `13890` one month, then `17756`, then `21425`. The data for each month, for example the `13890`, are broken down into: ``` 48 chairs, 12 tables, 2 couches etc... ``` The next month we have similar metrics, like ``` 3 chairs, 23 tables, 4 couches etc.. ``` I would like to know how I am doing with the furniture in relation to the total per month.
Normalizing data
CC BY-SA 2.5
0
2010-08-11T21:28:10.213
2010-08-20T05:54:14.797
2010-08-20T05:54:14.797
183
876
[ "proportion" ]
1562
1
null
null
4
1372
I have a question about the chi-squared test for independence. Suppose I have two events, A and B. If the chi-squared test is not passed (i.e. independence is rejected): is A dependent on B (A|B), or B on A (B|A)? Or are both valid (A|B and B|A)? Thank you in advance.
What dependence is implied by a chi square test for independence?
CC BY-SA 2.5
null
2010-08-11T21:49:08.720
2020-05-21T11:55:14.897
2010-08-11T23:36:13.873
159
null
[ "statistical-significance", "chi-squared-test" ]
1563
2
null
124
0
null
Naive Bayes is usually the starting point for text classification, here's an [article](http://www.drdobbs.com/184406064;jsessionid=A35OB1KFVVLTTQE1GHPCKH4ATMY32JVN) from Dr. Dobbs on how to implement one. It's also often the ending point for text classification because it's so efficient and parallelizes well, SpamAssassin and POPFile use it.
null
CC BY-SA 2.5
null
2010-08-11T22:10:15.810
2010-08-11T22:17:23.683
2010-08-11T22:17:23.683
511
511
null
1564
1
null
null
20
74123
Could you please tell me how I can calculate the conditional probability of several events? For example: P(A | B, C, D) - ? I know that P(A | B) = P(A $\cap$ B) / P(B). But, unfortunately, I can't find any formula for the case where an event A is conditioned on several events. Thanks in advance.
How can I calculate the conditional probability of several events?
CC BY-SA 3.0
null
2010-08-11T22:14:39.753
2022-05-12T13:52:16.437
2015-03-10T02:44:39.170
61366
null
[ "conditional-probability" ]
1565
2
null
1562
0
null
A chi-square test is not motivated by your description thus far. Give many more details. Chi-square tests for independence are used to see if the probability or counts of each kind of event in a given variable are independent of other variables.
null
CC BY-SA 2.5
null
2010-08-11T22:17:04.820
2010-08-11T22:17:04.820
null
null
601
null
1566
2
null
1485
2
null
I'm not really sure if an implementation exists to address all your needs. For (1), you can use any of the online implementations of SVM such as Pegasos or LASVM. If you want something simpler, you may use Perceptron or Kernel Perceptron. Basically, in all these algorithms, given an already learned weight vector (say w0), you can update w0 incrementally, given a fresh set of new examples. For (2) and (3), I'm not sure if the above approaches would straightaway allow but you can probably borrow some ideas from the literature dealing with unknown classes. I'd suggest taking a look at [this](http://homepage.tudelft.nl/a9p19/papers/prl_08_reject.pdf).
null
CC BY-SA 2.5
null
2010-08-11T22:48:19.673
2010-08-11T22:48:19.673
null
null
881
null
1567
2
null
1561
3
null
Do you mean you want the percentage of the n in that month that belongs to each furniture category? If so, can't you take the N for each month and divide all of the values from that month by N? For example, your first case would be 0.0034557235, 0.0008639309, 0.0001439885, and your second case (where N = 17756) would be 0.0001689570, 0.0012953368, 0.0002252760. Or do you want a comparison of your observed values to your expected values? If that is the case, you can construct a table with furniture type as columns and months as rows. For each cell you can take the (sum of values in the row to which it belongs) * (sum of values in the column to which it belongs) and divide by the total number of values you have. That will give you the expected value for the cell. If you subtract that from your initial value, it will tell you how many more or fewer of each type of furniture you had than expected, given the type of furniture it is and the month in which you made your observation. For example, consider this source data:

```
Month   Chairs   Tables   Couches   Other   Total
1           48       12         2   13828   13890
2            3       24         4   17725   17756
Total       51       36         6   31553   31646
```

It would be calculated like so (with the ?? marking values that still need to be calculated):

```
Month   Chairs                   Tables   Couches   Other   Total
1       (51*13890)/31646=22.38   ??       ??        ??      13890
2       (51*17756)/31646=28.61   ??       ??        ??      17756
Total   51                       ??       ??        ??      31646
```

This lets you know that in Month 1 there were 48-22.38=25.62 more chairs than expected, and that in Month 2 there were 3-28.61=-25.61 more chairs than expected - but to make more sense we can flip the sign and the terminology and say there were 25.61 fewer chairs than expected. For more details consider looking [here](http://davidmlane.com/hyperstat/B143466.html).
null
CC BY-SA 2.5
null
2010-08-11T22:49:25.943
2010-08-12T15:21:12.473
2010-08-12T15:21:12.473
196
196
null
1568
2
null
1564
11
null
Take the intersection of B, C and D and call it U. Then compute P(A|U).
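Spelling the intersection trick out, with $U = B \cap C \cap D$: \begin{equation} P(A \mid B, C, D) = P(A \mid U) = \frac{P(A \cap B \cap C \cap D)}{P(B \cap C \cap D)}, \end{equation} provided $P(B \cap C \cap D) > 0$.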
null
CC BY-SA 2.5
null
2010-08-11T22:49:28.470
2010-08-11T22:49:28.470
null
null
572
null
1569
2
null
1028
11
null
Suppose you are given n IID samples generated by either p or by q. You want to identify which distribution generated them. Take as null hypothesis that they were generated by q. Let a indicate the probability of Type I error, mistakenly rejecting the null hypothesis, and b indicate the probability of Type II error. Then for large n, the probability of Type I error is at least $\exp(-n \text{KL}(p,q))$. In other words, for an "optimal" decision procedure, the probability of Type I error falls at most by a factor of $\exp(\text{KL}(p,q))$ with each datapoint. Type II error falls by a factor of $\exp(\text{KL}(q,p))$ at most. For arbitrary n, a and b are related as follows: $b \log \frac{b}{1-a}+(1-b)\log \frac{1-b}{a} \le n \text{KL}(p,q)$ and $a \log \frac{a}{1-b}+(1-a)\log \frac{1-a}{b} \le n \text{KL}(q,p)$. If we express the bound above as a lower bound on a in terms of b and KL and decrease b to 0, the result [seems](http://yaroslavvb.com/upload/lower-bounds.png) to approach the "exp(-n KL(q,p))" bound even for small n. More details on page 10 [here](http://arxiv.org/abs/adap-org/9601001), and pages 74-77 of Kullback's "Information Theory and Statistics" (1978). As a side note, this interpretation can be used to [motivate](http://www.pnas.org/content/97/21/11170.abstract) the Fisher information metric, since for any pair of distributions p, q at Fisher's distance k from each other (small k) you need the same number of observations to tell them apart.
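A tiny R sketch of these quantities for a made-up pair of discrete distributions (p, q and n below are arbitrary; the last two lines only show the scale of the exponential bounds discussed above):

```
kl <- function(p, q) sum(p * log(p / q))   # assumes p and q are strictly positive on the same support

p <- c(0.5, 0.3, 0.2)
q <- c(0.4, 0.4, 0.2)
n <- 100

c(KL_pq = kl(p, q), KL_qp = kl(q, p),
  bound_pq = exp(-n * kl(p, q)),           # scale of the exp(-n KL(p,q)) bound
  bound_qp = exp(-n * kl(q, p)))           # scale of the exp(-n KL(q,p)) bound
```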
null
CC BY-SA 2.5
null
2010-08-11T23:09:01.983
2011-02-03T03:22:11.430
2011-02-03T03:22:11.430
511
511
null
1570
2
null
1562
6
null
I assume A and B are both random variables taking discrete values and you are thinking of a chi-squared test on the two-way frequency table formed by the counts of observations on the two variables. In that case, a significant result indicates both directions of dependence: A|B and B|A. If you think about Bayes' theorem, it is clear that one always implies the other: P(A|B) = P(B|A) P(A) / P(B) So P(A|B) = P(A) if and only if P(B|A)=P(B).
null
CC BY-SA 2.5
null
2010-08-11T23:33:59.727
2010-08-13T06:32:21.300
2010-08-13T06:32:21.300
159
159
null
1571
1
1573
null
5
1332
I am trying to recreate (in R) a frequentist hypothesis test in Bayesian form, by calculating Bayes factors of the null (H0) and alternative (H1) models. The model is simply a simple linear regression that tries to detect a trend in global temperature data from 1995 to 2009 ([here](http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt)). Therefore, H0 is no trend (i.e. slope = 0); equivalently, the H0 model is a linear model with only the intercept. So I calculated the `lm()` of both models to arrive at negative log likelihood values that are significantly different. The p-value for the H1 `lm()` model is 0.0877. I also calculated this in a Bayesian way by using [MCMCpack](http://cran.r-project.org/web/packages/MCMCpack/index.html), and I get negative log likelihood values that are super duper uber different. Log likelihood values of 13.7 and 4.3 are about a 10000-fold difference in their likelihood ratios (where [>100 is considered to be "decisive"](http://en.wikipedia.org/wiki/Bayes_factor)). The means and sds of the estimates are very similar, so why am I getting such different likelihood values? (particularly for the Bayesian H0 model) I feel like there is a gap in my understanding of marginal likelihoods, but I can't pinpoint the problem. Thanks

```
library(MCMCpack)

## data: http://www.cru.uea.ac.uk/cru/data/temperature/hadcrut3gl.txt
head(hadcru, 2)
##   Year      1      2      3      4      5      6      7      8      9     10
## 1 1850 -0.691 -0.357 -0.816 -0.586 -0.385 -0.311 -0.237 -0.340 -0.510 -0.504
## 2 1851 -0.345 -0.394 -0.503 -0.480 -0.391 -0.264 -0.279 -0.175 -0.211 -0.123
##       11     12    Avg
## 1 -0.259 -0.318 -0.443
## 2 -0.141 -0.151 -0.288

hadcru.lm <- lm(Avg ~ 1 + Year,
                data = subset(hadcru, (Year <= 2009 & Year >= 1995)))
hadcru.lm.zero <- lm(Avg ~ 1,
                     data = subset(hadcru, (Year <= 2009 & Year >= 1995)))

hadcru.mcmc <- MCMCregress(Avg ~ 1 + Year,
                           data = subset(hadcru, (Year <= 2009 & Year >= 1995)),
                           thin = 100, mcmc = 100000,
                           b0 = c(-20, 0), B0 = c(.00001, .00001),
                           marginal = "Laplace")
hadcru.mcmc.zero <- MCMCregress(Avg ~ 1,
                                data = subset(hadcru, (Year <= 2009 & Year >= 1995)),
                                thin = 100, mcmc = 100000,
                                b0 = c(0), B0 = c(.00001),
                                marginal = "Laplace")

-logLik(hadcru.lm)
## 'log Lik.' -14.55338 (df=3)
-logLik(hadcru.lm.zero)
## 'log Lik.' -12.80723 (df=2)

attr(hadcru.mcmc, "logmarglike")
##           [,1]
## [1,] -13.65188
attr(hadcru.mcmc.zero, "logmarglike")
##           [,1]
## [1,] -4.310564
```

![alt text](https://www.skepticalscience.com/images/HadCRUT_1995_2009.gif)
Recreating traditional null hypothesis testing with Bayesian methods
CC BY-SA 2.5
null
2010-08-11T23:47:41.980
2010-08-18T17:19:16.027
2017-03-09T17:30:36.347
-1
291
[ "r", "bayesian", "maximum-likelihood", "markov-chain-montecarlo" ]
1572
2
null
1571
1
null
I do not know the packages you are using or their internal workings, but perhaps the choice of priors matters? Perhaps you should consider using different prior structures to see how sensitive the MCMC marginal likelihoods are to your choice of priors. In particular, I suspect that the MCMC and the traditional likelihoods are likely to converge better as the priors become more diffuse. Note that in MCMC the marginal likelihoods are computed by integrating the likelihood function with respect to the priors. Thus, I have a feeling that the 'diffuseness' of the priors may matter (I could be wrong on this issue, but it is worth checking out).
null
CC BY-SA 2.5
null
2010-08-12T00:43:27.360
2010-08-12T00:54:34.060
2010-08-12T00:54:34.060
null
null
null
1573
2
null
1571
4
null
When you're computing Bayes factors, the priors matter. The influence of the priors can persist even if you have a large amount of data. When you're doing posterior inference, the effect of the prior goes away as you collect more data, but not so with Bayes factors. Also, you'll get faster convergence if your null and alternative priors have disjoint support. Details [here](http://www.bepress.com/mdandersonbiostat/paper47/).
null
CC BY-SA 2.5
null
2010-08-12T01:11:05.383
2010-08-12T01:11:05.383
null
null
319
null
1576
1
1579
null
261
287922
It seems that a number of the statistical packages that I use wrap these two concepts together. However, I'm wondering if there are different assumptions or data 'formalities' that must be true to use one over the other. A real example would be incredibly useful.
What are the differences between Factor Analysis and Principal Component Analysis?
CC BY-SA 3.0
1
2010-08-12T03:46:05.987
2022-07-27T12:21:58.353
2015-01-17T00:33:00.820
28666
776
[ "pca", "factor-analysis" ]
1577
2
null
1576
35
null
There are numerous suggested definitions on the web. Here is one from an [on-line glossary on statistical learning](http://alumni.media.mit.edu/~tpminka/statlearn/glossary/glossary.html): > Principal Component Analysis Constructing new features which are the principal components of a data set. The principal components are random variables of maximal variance constructed from linear combinations of the input features. Equivalently, they are the projections onto the principal component axes, which are lines that minimize the average squared distance to each point in the data set. To ensure uniqueness, all of the principal component axes must be orthogonal. PCA is a maximum-likelihood technique for linear regression in the presence of Gaussian noise on both inputs and outputs. In some cases, PCA corresponds to a Fourier transform, such as the DCT used in JPEG image compression. See "Eigenfaces for recognition" (Turk & Pentland, J Cognitive Neuroscience 3(1), 1991), Bishop, "Probabilistic Principal Component Analysis", and "Automatic choice of dimensionality for PCA". Factor analysis A generalization of PCA which is based explicitly on maximum-likelihood. Like PCA, each data point is assumed to arise from sampling a point in a subspace and then perturbing it with full-dimensional Gaussian noise. The difference is that factor analysis allows the noise to have an arbitrary diagonal covariance matrix, while PCA assumes the noise is spherical. In addition to estimating the subspace, factor analysis estimates the noise covariance matrix. See "The EM Algorithm for Mixtures of Factor Analyzers".
null
CC BY-SA 2.5
null
2010-08-12T03:55:36.327
2010-08-12T03:55:36.327
null
null
334
null
1579
2
null
1576
182
null
Principal component analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors. In psychology these two techniques are often applied in the construction of multi-scale tests to determine which items load on which scales. They typically yield similar substantive conclusions (for a discussion see Comrey (1988) Factor-Analytic Methods of Scale Development in Personality and Clinical Psychology). This helps to explain why some statistics packages seem to bundle them together. I have also seen situations where "principal component analysis" is incorrectly labelled "factor analysis". In terms of a simple rule of thumb, I'd suggest that you: - Run factor analysis if you assume or wish to test a theoretical model of latent factors causing observed variables. - Run principal component analysis if you want to simply reduce your correlated observed variables to a smaller set of important independent composite variables.
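A small side-by-side sketch in R, using a built-in dataset purely to show the differing outputs; the choice of variables and of two factors is arbitrary.

```
X <- mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")]

pca <- prcomp(X, scale. = TRUE)   # data reduction: orthogonal composites of the observed variables
summary(pca)                      # variance explained by each component

fa <- factanal(X, factors = 2)    # latent-variable model, fitted by maximum likelihood
print(fa, cutoff = 0.3)           # loadings, uniquenesses, and a goodness-of-fit test
```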
null
CC BY-SA 3.0
null
2010-08-12T04:44:12.307
2013-06-25T00:20:29.910
2013-06-25T00:20:29.910
22047
183
null
1580
1
1582
null
41
72211
Imagine - You run a linear regression with four numeric predictors (IV1, ..., IV4) - When only IV1 is included as a predictor the standardised beta is +.20 - When you also include IV2 to IV4 the sign of the standardised regression coefficient of IV1 flips to -.25 (i.e., it's become negative). This gives rise to a few questions: - With regards to terminology, do you call this a "suppressor effect"? - What strategies would you use to explain and understand this effect? - Do you have any examples of such effects in practice and how did you explain and understand these effects?
Regression coefficients that flip sign after including other predictors
CC BY-SA 3.0
null
2010-08-12T05:03:22.717
2018-01-24T22:17:46.770
2013-11-25T11:40:27.387
22047
183
[ "regression", "predictor" ]
1581
2
null
1580
18
null
I believe effects like these are frequently caused by collinearity (see [this question](https://stats.stackexchange.com/questions/1149/is-there-an-intuitive-explanation-why-multicollinearity-is-a-problem-in-linear-re)). I think the book on multilevel modeling by Gelman and Hill talks about it. The problem is that `IV1` is correlated with one or more of the other predictors, and when they are all included in the model, their estimation becomes erratic. If the coefficient flipping is due to collinearity, then it's not really interesting to report, because it's not due to the relationship of your predictors to the outcome, but really due to the relationship between predictors. What I've seen suggested to resolve this problem is residualization. First, you fit a model for `IV2 ~ IV1`, then take the residuals of that model as `rIV2`. If all of your variables are correlated, you should really residualize all of them. You may choose to do so like this ``` rIV2 <- resid(lm(IV2 ~ IV1)) rIV3 <- resid(lm(IV3 ~ IV1 + rIV2)) rIV4 <- resid(lm(IV4 ~ IV1 + rIV2 + rIV3)) ``` Now, fit the final model with ``` lm(DV ~ IV1 + rIV2 + rIV3 + rIV4) ``` Now, the coefficient for `rIV2` represents the independent effect of `IV2` given its correlation with `IV1`. I've heard you won't get the same result if you residualize in a different order, and that choosing the residualization order is really a judgment call within your research.
null
CC BY-SA 3.0
null
2010-08-12T05:31:08.567
2013-09-21T22:19:40.683
2017-04-13T12:44:41.967
-1
287
null
1582
2
null
1580
35
null
Multicollinearity is the usual suspect as JoFrhwld mentioned. Basically, if your variables are positively correlated, then the coefficients will be negatively correlated, which can lead to a wrong sign on one of the coefficients. One check would be to perform a principal components regression or ridge regression. This reduces the dimensionality of the regression space, handling the multicollinearity. You end up with biased estimates but a possibly lower MSE and corrected signs. Whether you go with those particular results or not, it's a good diagnostic check. If you still get sign changes, it may be theoretically interesting. UPDATE Following from the comment in John Christie's answer, this might be interesting. Reversals in association (in magnitude or direction) are examples of Simpson's Paradox, Lord's Paradox and Suppression Effects. The differences essentially relate to the type of variable. It's more useful to understand the underlying phenomenon rather than think in terms of a particular "paradox" or effect. For a causal perspective, the paper below does a good job of explaining why, and I'll quote their introduction and conclusion at length to whet your appetite. - The role of causal reasoning in understanding Simpson's paradox, Lord's paradox, and the suppression effect: covariate selection in the analysis of observational studies > Tu et al present an analysis of the equivalence of three paradoxes, concluding that all three simply reiterate the unsurprising change in the association of any two variables when a third variable is statistically controlled for. I call this unsurprising because reversal or change in magnitude is common in conditional analysis. To avoid either, we must avoid conditional analysis altogether. What is it about Simpson's and Lord's paradoxes or the suppression effect, beyond their pointing out the obvious, that attracts the intermittent and sometimes alarmist interests seen in the literature? [...] In conclusion, it cannot be overemphasized that although Simpson's and related paradoxes reveal the perils of using statistical criteria to guide causal analysis, they hold neither the explanations of the phenomenon they purport to depict nor the pointers on how to avoid them. The explanations and solutions lie in causal reasoning which relies on background knowledge, not statistical criteria. It is high time we stopped treating misinterpreted signs and symptoms ('paradoxes'), and got on with the business of handling the disease ('causality'). We should rightly turn our attention to the perennial problem of covariate selection for causal analysis using non-experimental data.
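As a sketch of the suggested diagnostic (my addition; the collinear data are simulated purely for illustration, and `lm.ridge` comes from the `MASS` package that ships with R):

```
library(MASS)  # for lm.ridge

set.seed(42)
n  <- 100
x1 <- rnorm(n)
x2 <- x1 + rnorm(n, sd = 0.05)       # strongly collinear with x1
y  <- 1 + 2 * x1 + 2 * x2 + rnorm(n)

summary(lm(y ~ x1 + x2))             # OLS: erratic estimates, inflated standard errors

ridge <- lm.ridge(y ~ x1 + x2, lambda = seq(0, 10, by = 0.1))
select(ridge)                        # GCV-based suggestion for lambda
coef(ridge)[which.min(ridge$GCV), ]  # shrunken, more stable coefficients
```

If the coefficient signs stabilise under shrinkage, multicollinearity rather than a genuinely interesting suppression effect is the likely culprit.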
null
CC BY-SA 2.5
null
2010-08-12T06:39:13.927
2010-08-12T17:31:16.623
2020-06-11T14:32:37.003
-1
251
null
1583
1
1589
null
7
666
Contrary to [some here](https://stats.stackexchange.com/questions/1412/consequences-of-an-improper-link-function-in-n-alternative-forced-choice-procedur), others (e.g. [Brian Ripley](https://stat.ethz.ch/pipermail/r-help/2006-December/122353.html), the authors of [sensR](http://cran.r-project.org/web/packages/sensR/), and the authors of [psyphy](http://cran.r-project.org/web/packages/psyphy/index.html)) appear to think that using a standard binomial link function when analyzing two alternative forced choice data in which the minimum expected proportion correct is .5 is incorrect. However, their approach as to what the link function should be varies. 1.The sensR library uses: ``` function (mu) { tres <- mu for (i in 1:length(mu)) { if (mu[i] > 0.5) tres[i] <- sqrt(2) * qnorm(mu[i]) if (mu[i] <= 0.5) tres[i] <- 0 } tres } ``` 2.The psyphy library uses: ``` function (mu) { m <- 2 mu <- pmax(mu, 1/m + .Machine$double.eps) qlogis((m * mu - 1)/(m - 1)) } ``` 3.[Gabriel Baud-Bovy](https://stat.ethz.ch/pipermail/r-help/2006-December/122351.html) implicitly recommends (1+exp(x)/(1+exp(x)))/2. The approach selected seems like it may have some consequences for the result. Is there a "correct" link function to be using with these sorts of problems, or so long as the link, inverse link, mu.eta, and variance functions all agree is everything going to be all right? Is there a single source material that provides any authoritative guidance on this issue? Following John's advice I plotted these functions... [alt text http://psychlab2.variablesolutions.org/~russell/ForInternet/2AFCFunctionPlots.jpg](http://psychlab2.variablesolutions.org/~russell/ForInternet/2AFCFunctionPlots.jpg) The black line is a standard logistic function. The red line is the function from sensR. The blue line is from psyphy and the cyan line is from Gabriel Baud-Bovy, but given the oddness of the shape it provides, perhaps I misinterpreted him. The psyphy function line looks like what I'd expect a logistic function to look like in a psychophysics 2AFC experiment.
Appropriate link function for 2AFC data?
CC BY-SA 3.0
null
2010-08-12T06:46:44.597
2012-11-07T19:36:52.387
2017-04-13T12:44:51.217
-1
196
[ "logistic", "logit", "link-function" ]
1584
2
null
1576
24
null
Differences between factor analysis and principal component analysis are: • In factor analysis there is a structured model and some assumptions. In this respect it is a statistical technique, which does not apply to principal component analysis, which is a purely mathematical transformation. • The aim of principal component analysis is to explain the variance, while factor analysis explains the covariance between the variables. One of the biggest reasons for the confusion between the two has to do with the fact that one of the factor extraction methods in Factor Analysis is called the "method of principal components". However, it's one thing to use PCA and another thing to use the method of principal components in FA. The names may be similar, but there are significant differences. The former is an independent analytical method while the latter is merely a tool for factor extraction.
null
CC BY-SA 3.0
null
2010-08-12T06:49:58.880
2013-06-25T00:16:27.687
2013-06-25T00:16:27.687
22047
339
null
1585
2
null
1557
13
null
Ridge regression is a form of shrinkage. See [Draper & Van Nostrand (1979)](http://www.jstor.org/pss/1268284). Shrinkage has also proved useful in estimating seasonal factors for time series. See [Miller and Williams (IJF, 2003)](http://www.forecasters.org/ijf/journal-issue/273/article/5847).
null
CC BY-SA 2.5
null
2010-08-12T07:27:47.233
2010-08-12T07:27:47.233
null
null
159
null
1586
2
null
1580
6
null
See [Simpson's Paradox](http://en.wikipedia.org/wiki/Simpson's_paradox). In short, the main effect observed can reverse when an interaction is added to a model. At the linked page most of the examples are categorical, but there is a figure at the top of the page that one could imagine in continuous terms. For example, if you have a categorical and a continuous predictor, then the continuous predictor could easily flip sign when the categorical one is added, if within each category the sign is different from the sign for the overall scores.
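A small simulation of the continuous-plus-categorical case (my addition): within each group the slope of y on x is negative, but because the groups differ in level the pooled slope comes out positive until the group indicator enters the model.

```
set.seed(1)
g <- rep(c(0, 1), each = 50)       # two groups
x <- rnorm(100, mean = 5 * g)      # group 1 sits higher on x
y <- 10 * g - x + rnorm(100)       # within each group, y falls as x rises

coef(lm(y ~ x))                    # pooled slope: positive
coef(lm(y ~ x + factor(g)))        # slope flips to about -1 once group is included
```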
null
CC BY-SA 3.0
null
2010-08-12T07:30:47.867
2018-01-24T22:17:46.770
2018-01-24T22:17:46.770
601
601
null
1587
2
null
1571
4
null
I'm not sure I follow the R code, as I have only used R once or twice, but it looks to me as if you are comparing the marginal likelihood of a model with only an intercept and no slope (hadcru.mcmc.zero) and the marginal likelihood of a model with a slope and an intercept (hadcru.mcmc). However, while hadcru.mcmc.zero seems to be the correct model for H0, hadcru.mcmc does not seem to me to correctly represent H1, as there is nothing as far as I can see that constrains the slope to be positive. Is there something in the prior for the slope that makes it strictly positive (I don't know enough about MCMC in R to know)? If not, that may be where your problem lies, as the marginal likelihood would then have a component representing the likelihood of the data for all of the negative values of the slope permitted under the prior (and zero) as well as the positive ones. It is debatable whether the H0 for this question should be that the slope is exactly zero; nobody would believe that to be plausible a priori. Perhaps a better test would use the Bayes factor for a model where the slope is strictly positive (H1) against a model where it is zero or negative (H0). HTH (and I am not just confusing things)
null
CC BY-SA 2.5
null
2010-08-12T07:45:05.587
2010-08-12T07:45:05.587
null
null
887
null
1588
2
null
1557
5
null
[Korbinian Strimmer](http://strimmerlab.org/index.html) uses the James-Stein estimator for [infering gene networks](http://jmlr.csail.mit.edu/papers/v10/hausser09a.html). I've used his R packages a few times and it seems to provide a very good and quick answer.
null
CC BY-SA 2.5
null
2010-08-12T08:11:37.910
2010-08-12T08:11:37.910
null
null
8
null
1589
2
null
1583
5
null
It doesn't just seem like it will have consequences, it will have large consequences. Fit that second function to the data you put in your last question on this. It goes dramatically negative as it approaches 0.5. Perhaps more importantly, you also need to consider what the different equations mean for how one interprets the functioning of the mind. There is no known function that is just best for all 2AFC*. Such a function would be tantamount to proving a universal law of the operations of the mind. You have to model your data if you want the very best fit. *OK, some models like splines will just fit most anything but you'd have to justify why you have all the extra parameters theoretically. (ASIDE: you were opposed to clipping when difficulty achieved maximum (or minimum). Consider, if you were modelling a robotic arm at the maximum point of travel you would just clip the results at the maximum point of travel (something your first equation does). Just because you didn't know what that point was before you found it doesn't mean anything. You found it when performance reached chance.)
null
CC BY-SA 2.5
null
2010-08-12T08:37:45.023
2010-08-12T08:37:45.023
null
null
601
null
1590
1
1596
null
1
6003
If one has an r value of 0.60, can one state that an increase in one variable is 60% likely to mean an increase in the other variable?
Does an r value of 0.60 mean that an increase in one variable is 60% likely to mean an increase in the other variable?
CC BY-SA 3.0
null
2010-08-12T08:50:26.350
2011-09-21T23:22:24.123
2011-09-21T23:22:24.123
183
888
[ "correlation" ]
1593
2
null
1564
15
null
Another approach would be: ``` P(A| B, C, D) = P(A, B, C, D)/P(B, C, D) = P(B| A, C, D).P(A, C, D)/P(B, C, D) = P(B| A, C, D).P(C| A, D).P(A, D)/{P(C| B, D).P(B, D)} = P(B| A, C, D).P(C| A, D).P(D| A).P(A)/{P(C| B, D).P(D| B).P(B)} ``` Note the similarity to: ``` P(A| B) = P(A, B)/P(B) = P(B| A).P(A)/P(B) ``` And there are many equivalent forms. Taking U = (B, C, D) gives: ``` P(A| B, C, D) = P(A, U)/P(U) = P(U| A).P(A)/P(U) = P(B, C, D| A).P(A)/P(B, C, D) ``` I'm sure they're equivalent, but do you want the joint probability of B, C & D given A?
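The algebra is easy to check numerically (my addition): build an arbitrary joint distribution over four binary variables and compare the direct computation of P(A | B, C, D) with the factorised form above.

```
set.seed(7)
tab   <- expand.grid(A = 0:1, B = 0:1, C = 0:1, D = 0:1)
tab$p <- runif(16)
tab$p <- tab$p / sum(tab$p)              # an arbitrary joint distribution

pr <- function(cond) sum(tab$p[cond])    # probability of an event

# direct: P(A = 1 | B = 1, C = 1, D = 1)
direct <- with(tab, pr(A == 1 & B == 1 & C == 1 & D == 1) /
                    pr(B == 1 & C == 1 & D == 1))

# factorised: P(B|A,C,D) P(C|A,D) P(D|A) P(A) / { P(C|B,D) P(D|B) P(B) }
fact <- with(tab,
  (pr(B == 1 & A == 1 & C == 1 & D == 1) / pr(A == 1 & C == 1 & D == 1)) *
  (pr(C == 1 & A == 1 & D == 1)          / pr(A == 1 & D == 1)) *
  (pr(D == 1 & A == 1)                   / pr(A == 1)) *
  pr(A == 1) /
  ((pr(C == 1 & B == 1 & D == 1) / pr(B == 1 & D == 1)) *
   (pr(D == 1 & B == 1)          / pr(B == 1)) *
   pr(B == 1)))

all.equal(direct, fact)   # TRUE: both reduce to P(A, B, C, D) / P(B, C, D)
```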
null
CC BY-SA 2.5
null
2010-08-12T10:19:39.270
2010-08-13T06:44:05.597
2010-08-13T06:44:05.597
521
521
null
1594
2
null
1383
2
null
Interesting question! All statistical models can be viewed as performing lossy data compression. For instance, simple linear regression with one predictor replaces $N$ points (where $N$ can be massive, e.g., in the 1000s) with two parameters: a slope and an intercept. The parameters may then be used to reconstruct the data, with degree of success depending on how good the original fit was. Your specific example concerns predicting binary time series data (Bernoulli distributed data, which is a specific case of the binomial distribution). Binary data can encode a lot: coin flips, pictures, sounds, the digits of $\pi$, statistical programming languages... As you can imagine, and as a quick search around Google will confirm, there are a lot of statistical models which could apply to binary data. One is logistic regression, or (to express the same model in a more general framework) a Generalized Linear Model with a binomial distribution and a logit link function. The function fit is of the following form: $\mbox{logit}[P(Y)] = \beta X + \epsilon$, where $X$ (predictors), $Y$ (probability of a 1), and $\epsilon$ (residuals) are vectors. Okay. Now a little demonstration. Suppose data are generated so that the probability of a 1 correlates with the sine of time (represented as black points in the graph below). You don't know this, however. You get data for time points from 0 to 359 (blue points). [alt text http://img196.imageshack.us/img196/589/cointimepredict2.png](http://img196.imageshack.us/img196/589/cointimepredict2.png) With the available data points, I fitted the function $\mbox{logit}[P(Y)] = \beta_0 + \beta_1 t + \beta_2 t^2 + \beta_3 t^3$, which popped out as $\mbox{logit}[P(Y)] = -0.2 -30.9 t -3.1 t^2 + 22.2 t^3$. (The probability predictions are plotted in red.) It's a good fit to the data (between 0 and 359). However, as you can see, when extrapolating, it does a rather poor job: beyond a certain point it says "just guess 1!" Take-home message: to do the analysis correctly, you need to have some idea of the likely processes generating the data. If I knew a sine process were doing the job, then I'd be able to do a wonderful job predicting. Thinking about this is where a statistician would start. The appropriate model is always going to be domain specific, which is why, for example, compression techniques working well for images don't automatically apply to sounds.
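A rough reconstruction of the demonstration (my addition; the original simulation settings aren't given, so the data-generating numbers here are guesses rather than the ones behind the figures):

```
set.seed(1)
t_all  <- 0:719
p_true <- plogis(3 * sin(2 * pi * t_all / 360))  # assumed sine-driven probability
y_all  <- rbinom(length(t_all), 1, p_true)

train <- t_all <= 359                            # observe only the first cycle
fit <- glm(y ~ poly(t, 3), family = binomial,
           data = data.frame(y = y_all[train], t = t_all[train]))

p_hat <- predict(fit, newdata = data.frame(t = t_all), type = "response")

plot(t_all, p_true, type = "l", xlab = "time", ylab = "P(Y = 1)")
lines(t_all, p_hat, col = "red")   # good fit up to t = 359, poor extrapolation after
abline(v = 359, lty = 2)
```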
null
CC BY-SA 2.5
null
2010-08-12T10:20:31.550
2010-08-14T11:17:52.653
2010-08-14T11:17:52.653
702
702
null
1595
1
1632
null
378
135537
Lots of people use a main tool like Excel or another spreadsheet, SPSS, Stata, or R for their statistics needs. They might turn to some specific package for very special needs, but a lot of things can be done with a simple spreadsheet or a general stats package or stats programming environment. I've always liked Python as a programming language, and for simple needs, it's easy to write a short program that calculates what I need. Matplotlib allows me to plot it. Has anyone switched completely from, say R, to Python? R (or any other statistics package) has a lot of functionality specific to statistics, and it has data structures that allow you to think about the statistics you want to perform and less about the internal representation of your data. Python (or some other dynamic language) has the benefit of allowing me to program in a familiar, high-level language, and it lets me programmatically interact with real-world systems in which the data resides or from which I can take measurements. But I haven't found any Python package that would allow me to express things with "statistical terminology" – from simple descriptive statistics to more complicated multivariate methods. What can you recommend if I wanted to use Python as a "statistics workbench" to replace R, SPSS, etc.? What would I gain and lose, based on your experience?
Python as a statistics workbench
CC BY-SA 3.0
null
2010-08-12T10:46:45.407
2020-12-29T19:34:40.127
2013-07-04T17:03:39.747
22047
890
[ "r", "spss", "stata", "python" ]
1596
2
null
1590
8
null
No, your interpretation is incorrect. Common interpretations of a correlation between X and Y equal to .60 include: - X explains the following proportion of variance in Y: $0.60^2=.36$ . I.e., 36% of variance. - A value one standard deviation larger on X is associated with a value .60 of a standard deviation larger on Y. Both these statements are bidirectional. I.e., you could switch the words X and Y in the above two statements and the statements would still be true.
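Both statements are easy to verify by simulation (my addition):

```
set.seed(1)
n <- 1e5
x <- rnorm(n)
y <- 0.6 * x + sqrt(1 - 0.6^2) * rnorm(n)  # construct data with correlation ~ .60

cor(x, y)^2                        # ~ 0.36: shared (explained) variance
coef(lm(scale(y) ~ scale(x)))[2]   # ~ 0.60: SDs of Y per SD of X
coef(lm(scale(x) ~ scale(y)))[2]   # ~ 0.60: and the same the other way round
```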
null
CC BY-SA 2.5
null
2010-08-12T10:53:26.677
2010-08-12T10:53:26.677
null
null
183
null
1597
2
null
1595
42
null
The following StackOverflow discussions might be useful - R versus Python - SciPy versus R - Psychology researcher choosing between R, Python, and Matlab
null
CC BY-SA 3.0
null
2010-08-12T10:58:37.137
2013-03-20T05:53:51.310
2017-05-23T12:39:26.203
-1
183
null
1598
2
null
1595
63
null
I don't think there's any argument that the range of statistical packages in [cran](http://cran.r-project.org/) and [Bioconductor](http://bioconductor.org/) far exceeds anything on offer from other languages; however, that isn't the only thing to consider. In my research, I use R when I can, but sometimes R is just too slow. For example, a large MCMC run. Recently, I combined python and C to tackle this problem. Brief summary: fitting a large stochastic population model with ~60 parameters and inferring around 150 latent states using MCMC. - Read in the data in python - Construct the C data structures in python using ctypes. - Using a python for loop, call the C functions that update parameters and calculate the likelihood. A quick calculation showed that the programme spent 95% of its time in C functions. However, I didn't have to write painful C code to read in data or construct C data structures. --- I know there's also [rpy](http://rpy.sourceforge.net/), where python can call R functions. This can be useful, but if you're "just" doing statistics then I would use R.
null
CC BY-SA 3.0
null
2010-08-12T10:59:17.060
2011-10-13T16:36:41.867
2011-10-13T16:36:41.867
46
8
null
1599
2
null
118
10
null
I think the contrast between using absolute deviations and squared deviations becomes clearer once you move beyond a single variable and think about linear regression. There's a nice discussion at [http://en.wikipedia.org/wiki/Least_absolute_deviations](http://en.wikipedia.org/wiki/Least_absolute_deviations), particularly the section "Contrasting Least Squares with Least Absolute Deviations", which links to some student exercises with a neat set of applets at [http://www.math.wpi.edu/Course_Materials/SAS/lablets/7.3/73_choices.html](http://www.math.wpi.edu/Course_Materials/SAS/lablets/7.3/73_choices.html). To summarise, least absolute deviations is more robust to outliers than ordinary least squares, but it can be unstable (a small change in even a single datum can give a big change in the fitted line) and doesn't always have a unique solution - there can be a whole range of fitted lines. Also, least absolute deviations requires iterative methods, while ordinary least squares has a simple closed-form solution, though that's not such a big deal now as it was in the days of Gauss and Legendre, of course.
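For a hands-on comparison (my addition; it assumes the `quantreg` package, whose median regression `rq(..., tau = 0.5)` minimises the sum of absolute deviations):

```
library(quantreg)

set.seed(1)
x <- 1:50
y <- 2 + 0.5 * x + rnorm(50)
y[50] <- 100                  # one gross outlier

coef(lm(y ~ x))               # least squares: intercept and slope pulled by the outlier
coef(rq(y ~ x, tau = 0.5))    # least absolute deviations: barely affected
```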
null
CC BY-SA 2.5
null
2010-08-12T12:00:55.580
2010-08-12T12:00:55.580
null
null
449
null
1600
2
null
1595
31
null
One benefit of moving to Python is the possibility to do more work in one language. Python is a reasonable choice for number crunching, writing web sites, administrative scripting, etc. So if you do your statistics in Python, you wouldn't have to switch languages to do other programming tasks. Update: On January 26, 2011 Microsoft Research announced [Sho](https://docs.microsoft.com/en-us/archive/blogs/the_blog_of_sho/introducing-sho), a new Python-based environment for data analysis. I haven't had a chance to try it yet, but it sounds like an interesting possibility if you want to run Python and also interact with .NET libraries.
null
CC BY-SA 4.0
null
2010-08-12T12:43:41.063
2020-12-29T19:34:40.127
2020-12-29T19:34:40.127
79696
319
null
1601
1
1607
null
10
15769
In the analysis of test scores (e.g., in Education or Psychology), common analysis techniques often assume that data are normally distributed. However, perhaps more often than not, scores tend to deviate sometimes wildly from normal. I am familiar with some basic normalizing transformations, like: square roots, logarithms, reciprocal transformations for reducing positive skew, reflected versions of the above for reducing negative skew, squaring for leptokurtic distributions. I have heard of arcsine transformations and power transformations, though I am not really knowledgeable about them. So, I am curious as to what other transformations are commonly used by analysts?
What other normalizing transformations are commonly used beyond the common ones like square root, log, etc.?
CC BY-SA 3.0
null
2010-08-12T13:20:13.170
2016-01-09T14:31:22.677
2015-06-24T20:09:32.290
11887
445
[ "data-transformation", "normality-assumption", "variance-stabilizing" ]
1602
2
null
22
139
null
Tongue firmly in cheek: A Bayesian defines a "probability" in exactly the same way that most non-statisticians do - namely an indication of the plausibility of a proposition or a situation. If you ask them a question about a particular proposition or situation, they will give you a direct answer assigning probabilities describing the plausibilities of the possible outcomes for the particular situation (and state their prior assumptions). A Frequentist is someone that believes probabilities represent long run frequencies with which events occur; if needs be, they will invent a fictitious population from which your particular situation could be considered a random sample so that they can meaningfully talk about long run frequencies. If you ask them a question about a particular situation, they will not give a direct answer, but instead make a statement about this (possibly imaginary) population. Many non-frequentist statisticians will be easily confused by the answer and interpret it as Bayesian probability about the particular situation. However, it is important to note that most Frequentist methods have a Bayesian equivalent that in most circumstances will give essentially the same result, the difference is largely a matter of philosophy, and in practice it is a matter of "horses for courses". As you may have guessed, I am a Bayesian and an engineer. ;o)
null
CC BY-SA 4.0
null
2010-08-12T14:53:36.410
2021-05-02T10:53:40.393
2021-05-02T10:53:40.393
887
887
null
1603
2
null
1249
1
null
This answer is inspired by shabbychef's answer using the median. By definition: $E[exp(Z^2)] = \sum_{z=1}^{z=n} exp(z^2) P(z;n,n^{-\beta})$ where, $P(z;n,n^{-\beta})$ is the binomial probability. Denote the [mode of this binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution#Mode_and_median) by: $m(n,n^{-\beta})$. Thus, by definition we have: $P(z;n,n^{-\beta}) \le P(m(n,n^{-\beta});n,n^{-\beta}) \ \ \forall z$ Let, $\bar{P} = P(m(n,n^{-\beta});n,n^{-\beta})$ Thus, $E[exp(Z^2)] \le \sum_{z=1}^{z=n} exp(z^2) \bar{P}$ This upper bound is a function of $n$ and $\beta$ as desired. Hopefully, this is on the right track, unlike my previous attempt. This approach is technically not ok as $Z$ is a discrete variable but can be justified if we take the normal approximation to the binomial. I am not sure to what extent this is a better bound than the trivial bound, but here is one approach. Take the [Taylor series expansion](http://en.wikipedia.org/wiki/Taylor_series) of $exp(z^2)$ and, ignoring terms higher than the second term, you get: $\int e^{z^2} f(z) dz < \int (1 + z^2) f(z) dz$ Now, $\int (1 + z^2) f(z) dz = 1 + \int z^2 f(z) dz$ But, we know that: $\int z^2 f(z) dz = Var(z) + E(z)^2$ Substituting for the variance and mean of the [binomial distribution](http://en.wikipedia.org/wiki/Binomial_distribution) and simplifying, we get: $\int e^{z^2} f(z) dz < 1 + n^{1-\beta} (1-n^{-\beta} + n^{1-\beta})$ PS: Please check the math as I corrected one error.
null
CC BY-SA 2.5
null
2010-08-12T15:11:07.163
2010-08-12T17:08:46.207
2010-08-12T17:08:46.207
null
null
null
1604
1
1609
null
8
2215
Can someone provide me with a book or online reference on how to construct smoothing splines with cross-validation? I have a programming and undergraduate level mathematics background. I would also appreciate an overview of whether this smoothing technique is a good one for smoothing data and whether there are any disadvantages of which a non-statistician needs to be aware.
Constructing smoothing splines with cross-validation
CC BY-SA 2.5
null
2010-08-12T15:16:55.543
2010-08-13T02:46:22.033
2010-08-12T20:57:22.143
847
847
[ "cross-validation", "smoothing", "splines" ]
1605
2
null
1601
2
null
A simple option is to use sums of scores instead of the scores themselves. The sum of several (roughly independent) scores tends towards normality, by the central limit theorem. For example, in Education you could add a student's scores over a series of tests. Another option, of course, is to use techniques that do not assume normality, which are underestimated and underused.
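A quick illustration of why summing helps (my addition): the individual scores below are strongly skewed, but the total over 20 of them is already much closer to normal.

```
set.seed(1)
n_students <- 1e4
single <- rexp(n_students)                      # one skewed "score" per student
total  <- replicate(n_students, sum(rexp(20)))  # sum of 20 such scores

par(mfrow = c(1, 2))
hist(single, main = "single score",     xlab = "")
hist(total,  main = "sum of 20 scores", xlab = "")
```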
null
CC BY-SA 2.5
null
2010-08-12T16:03:50.133
2010-08-12T16:03:50.133
null
null
666
null
1607
2
null
1601
5
null
The [Box-Cox](http://en.wikipedia.org/wiki/Box-Cox_transformation) transformation includes many of the ones you cited. See this answer for some details: - How should I transform non-negative data including zeros? UPDATE: These [slides](http://www.stat.uconn.edu/~studentjournal/index_files/pengfi_s05.pdf) provide a pretty good overview of Box-Cox transformations.
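In R, the profile likelihood for the Box-Cox parameter is one call away (my addition; `boxcox` lives in the `MASS` package that ships with R, and the skewed outcome is simulated for illustration):

```
library(MASS)

set.seed(1)
x <- rnorm(200)
y <- rlnorm(200, meanlog = 1, sdlog = 0.5)      # positively skewed outcome

bc <- boxcox(y ~ x, lambda = seq(-2, 2, 0.1))   # plots the profile log-likelihood
(lambda_hat <- bc$x[which.max(bc$y)])           # here close to 0, i.e. a log transform

y_trans <- if (abs(lambda_hat) < 1e-8) log(y) else (y^lambda_hat - 1) / lambda_hat
```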
null
CC BY-SA 2.5
null
2010-08-12T16:54:18.413
2010-08-13T05:49:53.540
2017-04-13T12:44:35.347
-1
251
null
1608
2
null
1555
4
null
Based on your additional explanation in the comments, it appears that you have 8 groups (each corresponding to a column) and a continuous outcome variable that you grouped into 10 bins (each bin corresponding to a row). Note that it also implies that the rows are ordered, with later rows implying larger values. First of all, if you do have the underlying continuous variable, then do not bin it - just use Kruskal-Wallis or ANOVA to compare the groups. Assuming that the binning is unavoidable, you can still use a Kruskal-Wallis test, but not on the frequencies as you have apparently done. Your current KW inference just tells you that you have more data in some groups as compared to others. The actual observations in this case are the row numbers (1 through 10), and the values in the table are just the frequencies of occurrences. Most statistical software has an option of specifying these as "weights" or "frequencies". The chi-square test can be used on the frequencies; however, if the rows are ordered it might have much lower power compared to the Kruskal-Wallis test to actually detect differences, since it completely ignores the ordering of the rows. Thus even though its results are valid, I would not recommend using it, due to the loss of power.
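If your software will not take frequency weights directly, the table can simply be expanded back into individual observations (my addition; the counts below are invented for illustration):

```
set.seed(1)
counts <- matrix(rpois(10 * 8, lambda = 5), nrow = 10, ncol = 8)  # rows = bins, cols = groups

bin   <- rep(rep(1:10, times = 8), times = as.vector(counts))  # ordinal response
group <- rep(rep(1:8, each = 10),  times = as.vector(counts))  # group labels

kruskal.test(bin ~ factor(group))
```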
null
CC BY-SA 2.5
null
2010-08-12T17:54:22.010
2010-08-12T17:54:22.010
null
null
279
null
1609
2
null
1604
7
null
[Nonparametric Regression and Spline Smoothing](http://rads.stackoverflow.com/amzn/click/0824793374) by Eubank is a good book. You probably want to start with Chapters 2 and 5 which cover goodness of fit and the theory and construction of smoothing splines. I've heard good things about [Generalized Additive Models: An Introduction with R](http://rads.stackoverflow.com/amzn/click/1584884746), which might be better if you're looking for examples in R. For a quick introduction, a google search turns up a course on [Nonparametric function estimation](http://www.stat.osu.edu/~yklee/763/note.html) where you can peruse the slides and see examples in R. The general problem with splines is overfitting your data, but this is where cross validation comes in.
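For a first hands-on look (my addition), base R's `smooth.spline` chooses its smoothing parameter by cross-validation out of the box, so you can experiment before committing to a textbook:

```
set.seed(1)
x <- seq(0, 10, length.out = 200)
y <- sin(x) + rnorm(200, sd = 0.4)

fit_cv  <- smooth.spline(x, y, cv = TRUE)   # ordinary leave-one-out cross-validation
fit_gcv <- smooth.spline(x, y, cv = FALSE)  # generalized cross-validation (the default)

plot(x, y, col = "grey")
lines(fit_cv,  col = "blue")
lines(fit_gcv, col = "red", lty = 2)
fit_cv$spar                                 # the smoothing parameter that was chosen
```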
null
CC BY-SA 2.5
null
2010-08-12T19:52:35.463
2010-08-13T02:46:22.033
2010-08-13T02:46:22.033
251
251
null
1610
1
1616
null
102
54848
I'm not a statistician by education, I'm a software engineer. Yet statistics comes up a lot. In fact, questions specifically about Type I and Type II error are coming up a lot in the course of my studying for the Certified Software Development Associate exam (mathematics and statistics are 10% of the exam). I'm having trouble always coming up with the right definitions for Type I and Type II error - although I'm memorizing them now (and can remember them most of the time), I really don't want to freeze up on this exam trying to remember what the difference is. I know that Type I Error is a false positive, or when you reject the null hypothesis and it's actually true and a Type II error is a false negative, or when you accept the null hypothesis and it's actually false. Is there an easy way to remember what the difference is, such as a mnemonic? How do professional statisticians do it - is it just something that they know from using or discussing it often? (Side Note: This question can probably use some better tags. One that I wanted to create was "terminology", but I don't have enough reputation to do it. If someone could add that, it would be great. Thanks.)
Is there a way to remember the definitions of Type I and Type II Errors?
CC BY-SA 2.5
null
2010-08-12T19:55:02.167
2022-11-17T05:49:07.317
2012-05-15T11:34:07.880
686
110
[ "terminology", "type-i-and-ii-errors" ]
1611
1
null
null
40
2952
As an outsider, it appears that there are two competing views on how one should perform statistical inference. Are the two different methods both considered valid by working statisticians? Is choosing one considered more of a philosophical question? Or is the current situation considered problematic and attempts are being made to somehow unify the different approaches?
Do working statisticians care about the difference between frequentist and Bayesian inference?
CC BY-SA 3.0
null
2010-08-12T20:09:33.550
2013-08-06T10:43:09.903
2013-08-06T10:36:35.717
6029
572
[ "bayesian", "frequentist" ]
1612
2
null
1610
4
null
I used to think of it in terms of the usual [picture](http://intuitor.com/statistics/T1T2Errors.html) of two Normal distributions (or bell curves). Going left to right, distribution 1 is the Null, and distribution 2 is the Alternative. Type I (erroneously) rejects the first (Null) and Type II "rejects" the second (Alternative). (Now you just need to remember that you're not actually rejecting the alternative, but erroneously accepting (or failing to reject) the Null -- i.e. restate everything in the form of the Null. Hey, it worked for me!)
null
CC BY-SA 2.5
null
2010-08-12T20:10:13.347
2010-08-12T20:10:13.347
null
null
251
null
1613
2
null
1611
5
null
While this is subjective, I would say: It is called the Bayesian/frequentist "debate" for a reason. There is a clear philosophical difference between the two approaches. But as with most things, it's a spectrum. Some people are very much in one camp or the other and completely reject the alternative. Most people probably fall somewhere in the middle. I myself would use either method depending on the circumstances.
null
CC BY-SA 3.0
null
2010-08-12T20:25:12.937
2013-08-06T10:43:09.903
2013-08-06T10:43:09.903
6029
5
null
1614
2
null
1610
24
null
Here's a handy way that happens to have some truth to it. Young scientists commit Type-I errors because they want to find effects and jump the gun, while old scientists commit Type-II errors because they refuse to change their beliefs. (someone comment with a funnier version of that :) )
null
CC BY-SA 2.5
null
2010-08-12T20:27:49.107
2010-08-12T20:27:49.107
null
null
601
null
1615
2
null
1611
15
null
Adding to what Shane says, I think the continuum comprises: - Firm philosophical standing in the Bayes camp - Both considered valid, with one approach more or less preferable for a given problem - I'd use a Bayesian approach (at all or more often) but I don't have the time. - Firm philosophical standing in the frequentist camp - I do it like I learned in class. What's Bayes? And yes, I know working statisticians and analysts at all of these points. Most of the time I'm living at #3, striving to spend more time at #2.
null
CC BY-SA 2.5
null
2010-08-12T20:51:36.930
2010-08-12T20:51:36.930
null
null
394
null
1616
2
null
1610
126
null
Since type two means "False negative" or sort of "false false", I remember it as the number of falses. - Type I: "I falsely think the alternate hypothesis is true" (one false) - Type II: "I falsely think the alternate hypothesis is false" (two falses)
null
CC BY-SA 4.0
null
2010-08-12T20:52:04.730
2019-05-07T18:11:23.010
2019-05-07T18:11:23.010
237592
900
null
1617
2
null
1610
1
null
I remember it by thinking: What's the first thing I do when I do a null-hypothesis significance test? I set the criterion for the probability that I will make a false rejection. Thus, type 1 is this criterion and type 2 is the other probability of interest: the probability that I will fail to reject the null when the null is false. So, 1=first probability I set, 2=the other one.
null
CC BY-SA 2.5
null
2010-08-12T21:21:19.430
2010-08-12T21:21:19.430
null
null
364
null
1618
2
null
1611
5
null
I would imagine that in applied fields the divide is not paid that much attention, as researchers/practitioners tend to be pragmatic in applied work. You choose the tool that works given the context. However, the debate is alive and well among those who care about the philosophical issues underlying these two approaches. See for example the following blog posts of [Andrew Gelman](http://stat.columbia.edu/~gelman): - Ya don't know Bayes, Jack - Philosophy and the practice of Bayesian statistics
null
CC BY-SA 2.5
null
2010-08-12T21:48:24.977
2010-08-12T21:48:24.977
null
null
null
null
1619
2
null
1610
8
null
I use the "judicial" approach for remembering the difference between type I and type II: a judge committing a type I error sends an innocent man to jail, while a judge committing a type II error lets a guilty man walk free.
null
CC BY-SA 2.5
null
2010-08-12T23:02:11.910
2010-08-12T23:02:11.910
null
null
830
null
1620
2
null
1610
22
null
I was talking to a friend of mine about this and he kicked me a link to [the Wikipedia article on type I and type II errors](http://en.wikipedia.org/wiki/Type_I_and_type_II_errors), where they apparently now provide a (somewhat unhelpful, in my opinion) mnemonic. I did, however, want to add it here just for the sake of completion. Although I didn't think it helped me, it might help someone else: > For those experiencing difficulty correctly identifying the two error types, the following mnemonic is based on the fact that (a) an "error" is false, and (b) the Initial letters of "Positive" and "Negative" are written with a different number of vertical lines: A Type I error is a false POSITIVE; and P has a single vertical line. A Type II error is a false NEGATIVE; and N has two vertical lines. With this, you need to remember that a false positive means rejecting a true null hypothesis and a false negative is failing to reject a false null hypothesis. This is by no means the best answer here, but I did want to throw it out there in the event someone finds this question and this can help them.
null
CC BY-SA 2.5
null
2010-08-12T23:38:47.220
2010-08-12T23:38:47.220
null
null
110
null
1621
1
null
null
13
736
I came across a [large](http://www.citeulike.org/user/yaroslavvb/tag/information-geometry) body of literature which advocates using Fisher's Information metric as a natural local metric in the space of probability distributions and then integrating over it to define distances and volumes. But are these "integrated" quantities actually useful for anything? I found no theoretical justifications and very few practical applications. One is Guy Lebanon's [work](http://www.cs.cmu.edu/~lafferty/pub/hyperplane.pdf) where he uses "Fisher's distance" to classify documents, and another one is Rodriguez' [ABC of Model Selection…](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.68.9259&rep=rep1&type=pdf) where "Fisher's volume" is used for model selection. Apparently, using "information volume" gives "orders of magnitude" improvement over AIC and BIC for model selection, but I haven't seen any follow up on that work. A theoretical justification might be a generalization bound which uses this measure of distance or volume and is better than bounds derived from MDL or asymptotic arguments, or a method relying on one of these quantities that is provably better in some reasonably practical situation. Are there any results of this kind?
Using information geometry to define distances and volumes…useful?
CC BY-SA 2.5
null
2010-08-12T23:53:12.230
2014-08-09T12:28:55.610
2014-08-09T12:28:55.610
6961
511
[ "model-selection", "information-geometry" ]
1622
1
1635
null
3
652
One estimate of the 'quality' of a portfolio of stocks is the Sharpe ratio, which is defined as the mean of the returns divided by the standard deviation of the returns (modulo adjustments for risk free rate, etc). The sample Sharpe ratio is the sample mean divided by the sample standard deviation. Up to a constant factor ($\sqrt{n}$, where $n$ is the number of observations), this is distributed as a (possibly non-central) $t$-statistic. Are there known techniques for comparing the mean of independent variables distributed as non-central $t$-statistics? Of course, there are non-parametric tests of mean, but is there something specific to the case of noncentral $t$? (I'm not sure what I meant by that.) edit: the original question is somewhat ambiguous (well, it's actually not what I want). Is there a way to test the null hypothesis: population Sharpe ratio of $X$ equals population Sharpe ratio of $Y$, given independent collections of observations drawn from $X$ and $Y$? Here Sharpe ratio is mean divided by standard deviation. edit: given $n_x, n_y$ observations of $X, Y$, construct sample means, standard deviations, to get sample Sharpe ratios: $\hat{S}_x = \frac{\hat{\mu}_x}{\hat{\sigma}_x}, \hat{S}_y = \frac{\hat{\mu}_y}{\hat{\sigma}_y}$. Then $t_x = \sqrt{n_x}\hat{S}_x$, and $t_y = \sqrt{n_y}\hat{S}_y$ are distributed as non-central t-statistics with noncentrality parameters $\sqrt{n_x}S_x$ and $\sqrt{n_y}S_y$, where $S_x, S_y$ are the population Sharpe ratios of $X, Y$. Given these independent observations, I wish to test the null hypothesis $H_0: S_x = S_y$. In one form of the problem, one only has the summary statistics $n_x, n_y, \hat{\mu}_x, \hat{\mu}_y, \hat{\sigma}_x, \hat{\sigma}_y$. For large sample sizes, $t_x, t_y$ are approximately normal, I believe but the small sample size case is also of interest (funds often quote performance based on monthly returns).
Comparing 2 independent non-central t statistics
CC BY-SA 2.5
null
2010-08-13T00:18:34.380
2010-09-30T21:20:25.937
2010-09-30T21:20:25.937
930
795
[ "finance", "hypothesis-testing", "mean" ]
1623
2
null
507
11
null
Rather than using the Gelman-Rubin statistic, which is a nice aid but not perfect (as with all convergence diagnostics), I simply use the same idea and plot the results for a visual graphical assessment. In almost all cases I have considered (which is a very large number), graphing the trace plots of multiple MCMC chains started from widely varied starting positions is sufficient to show or assess whether the same posterior is being converged to or not in each case. I use this method to: - Assess whether the MCMC chain (ever) converges - Decide how long the burn-in period should be - Calculate Gelman's R statistic (see Gelman, Carlin, Stern and Rubin, Bayesian Data Analysis) to measure the efficiency and speed of mixing in the MCMC sampler. Efficiency and convergence are slightly different issues: e.g. you can have convergence with very low efficiency (i.e. requiring long chains to converge). I have used this graphical method to successfully diagnose (and later correct) lack of convergence problems in specific and general situations.
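For completeness, here is a minimal sketch of that workflow with the `coda` package (my addition; the three 'chains' are faked with an autoregressive simulation purely so there is something to plot — substitute your real sampler output):

```
library(coda)

set.seed(1)
fake_chain <- function(start) {
  # stand-in for real MCMC output: an AR(1) path drifting towards zero
  x <- numeric(2000)
  x[1] <- start
  for (i in 2:2000) x[i] <- 0.95 * x[i - 1] + rnorm(1, sd = 0.3)
  mcmc(cbind(theta = x))
}

chains <- do.call(mcmc.list, lapply(c(-10, 0, 10), fake_chain))  # widely varied starts

plot(chains)          # trace plots: do the chains overlap after burn-in?
gelman.diag(chains)   # potential scale reduction factor (should be near 1)
gelman.plot(chains)   # how the diagnostic evolves as the chains grow
```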
null
CC BY-SA 3.0
null
2010-08-13T00:37:26.680
2013-11-08T08:17:57.653
2013-11-08T08:17:57.653
8
null
null