Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
6794 | 1 | null | null | 12 | 1737 | A few days ago I saw a post on how to set up Sweave with R, which would allow a user to export things like tables, graphs, etc. directly into LaTeX. I couldn't quite follow the directions.
Can anyone give step-by-step instructions on how to do it on both Mac and Windows?
| Setting up Sweave, R, Latex, Eclipse StatET | CC BY-SA 3.0 | null | 2011-02-02T03:16:45.343 | 2017-05-18T21:15:53.950 | 2017-05-18T21:15:53.950 | 28666 | 3008 | [
"r"
]
|
6795 | 1 | 6933 | null | 10 | 1928 | I'm trying to solve a problem on least angle regression (LAR). This is problem 3.23 on page 97 of [Hastie et al., Elements of Statistical Learning, 2nd ed. (5th printing)](http://www.stanford.edu/~hastie/local.ftp/Springer/ESLII_print5.pdf#page=115).
Consider a regression problem with all variables and response having mean zero and standard deviation one. Suppose also that each variable has identical absolute correlation with the response:
$
\frac{1}{N} | \left \langle \bf{x}_j, \bf{y} \right \rangle | = \lambda, j = 1, ..., p
$
Let $\hat{\beta}$ be the least squares coefficient of $\mathbf{y}$ on $\mathbf{X}$ and let $\mathbf{u}(\alpha)=\alpha \bf{X} \hat{\beta}$ for $\alpha\in[0,1]$.
I am asked to show that
$$
\frac{1}{N} | \left \langle \bf{x}_j, \bf{y}-u(\alpha) \right \rangle | = (1 - \alpha) \lambda, j = 1, ..., p
$$
and I am having problems with that. Note that this basically says that the correlation of each $x_j$ with the residuals remains equal in magnitude as we progress toward $\mathbf{u}$.
I also do not know how to show that the correlations are equal to:
$\lambda(\alpha) = \frac{(1-\alpha)}{\sqrt{(1-\alpha)^2 + \frac{\alpha (2-\alpha)}{N} \cdot RSS}} \cdot \lambda$
Any pointers would be greatly appreciated!
| Least angle regression keeps the correlations monotonically decreasing and tied? | CC BY-SA 2.5 | null | 2011-02-02T03:46:06.413 | 2012-01-29T21:18:55.347 | 2011-02-10T07:55:06.183 | 2116 | 988 | [
"regression",
"machine-learning",
"correlation",
"self-study"
]
|
6796 | 2 | null | 6791 | 4 | null | [Technically you've got survey items, not Likert scales; the latter are fashioned from multiple items. See, for example, Paul Spector's Summated Rating Scale Construction {Sage}.]
The steps you take will need to depend on the audience for which you're reporting. If it's academic and rigorous, like a dissertation committee, you may face special challenges. If it's not, and if it's comfortable with the common 1-5 format, why not rescale to fit that and then report means and standard deviations (especially since shapes, skew, and kurtosis are no different from year to year. I presume the distributions are normal enough that means accurately express central tendency?).
Why am I treating your variables as interval-level ones? Purists may say that ordinal-level variables should not be reported via means or s.d. Well, your comments suggest, despite your use of "categorical/ordinal," that you are dealing with an ordinal level of measurement which you actually feel comfortable treating as interval-level. After all, why otherwise would you assess skewness or kurtosis? I'm guessing that your audience, too, will be OK with, and will be able to relate to, interval-level statistics such as means.
It sounds good that you have already explored the data graphically. If you want to go beyond assessing the magnitude of the difference and conduct a hypothesis test, why not do a t-test (independent or correlated, depending on your data) comparing the 1-5 scores pre and the 1-5 scores post, yielding a confidence interval for the mean difference. Here I'm assuming you've got random samples from a population.
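A minimal sketch of that comparison in R (the data frame `survey` and its column names are hypothetical):
```
# Independent-groups version: Welch t-test plus a 95% CI for the mean difference
t.test(score ~ year, data = survey)
# Correlated version, if the same respondents answered in both years
t.test(survey$score_year2, survey$score_year1, paired = TRUE)
```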
| null | CC BY-SA 2.5 | null | 2011-02-02T04:21:28.807 | 2011-03-31T15:24:12.113 | 2011-03-31T15:24:12.113 | 2669 | 2669 | null |
6797 | 2 | null | 6791 | 5 | null | Apply empirically-based rescaling formula: If you can administer both versions of the scale to a subsample, you could estimate what the corresponding scores are on the two response formats. Then you could apply a conversion formula that is empirically justified.
There are several ways that you could do this. For instance, you could get 100 or so participants (more is better) to answer the set of questions twice (perhaps counterbalanced for order) using one response format and then the other. You could then experiment with different weightings of the old scale that yield identical means and standard deviations (and any other characteristics of interest) on the new scale. This could potentially be set up as an optimisation problem.
Apply a "common-sense" conversion:
Alternatively, you could apply a "common-sense" conversion.
- One naive conversion involves rescaling so that the min and max of the two scales are aligned. So for your 5-point to 9-point conversion, it would be 1 = 1; 2 = 3; 3 = 5; 4 = 7; 5 = 9 (a small sketch follows this list).
- A more psychologically plausible conversion might consider 1 on 9-point scale to be more extreme than a 1 on a 5-point scale. You could also consider the words used for the response options and how they align across the response formats. So for instance, you might choose something like 1 = 1.5, 2 = 3, 3 = 5, 4 = 7, 5 = 8.5, with a final decision based on some expert judgements.
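As a minimal illustration of the naive linear rescaling in R (`x5` is just a made-up vector of 1-5 responses):
```
x5 <- c(1, 2, 3, 4, 5)                   # hypothetical 5-point responses
x9 <- 1 + (x5 - 1) * (9 - 1) / (5 - 1)   # linear map of [1, 5] onto [1, 9]
x9                                       # 1 3 5 7 9
```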
Importantly, these "common sense conversions" are only approximate. In some contexts, an approximate conversion is fine. But if you're interested in subtle longitudinal changes (e.g., were employees less satisfied in Year 2 than in Year 1?), then approximate conversions are generally not adequate. As a broad statement (at least within my experience in organisational settings), changes in item wording and changes in scale options are likely to have a greater effect on responses than any actual change in the attribute of interest.
| null | CC BY-SA 4.0 | null | 2011-02-02T04:48:17.157 | 2023-04-13T11:04:09.670 | 2023-04-13T11:04:09.670 | 183 | 183 | null |
6798 | 1 | 6799 | null | 2 | 327 | I have several time series (generated from a numerical model) that go through an initial stage of spinup, followed by a period of dynamic equilibrium, that presumably exists for all times beyond the spinup period. [See this link for an example.](http://26.media.tumblr.com/tumblr_lfyplfPp1J1qgj7hyo1_500.png)
While I can usually estimate the spinup time and the equilibrium by inspection of the curves, what I would like to know is whether there is some standard method from time series analysis for determining whether a time series has passed through some early deterministic stage and has approached a later stage of noise around some mean value (something like an AR(1) process, if I am using the terminology correctly).
Summary:
Is there an objective method to determine the point at which a time series has approached equilibrium (e.g. phrased as a probability from a hypothesis test for each point), and is there similarly an objective way to state the mean value during this equilibrium period?
| Objective measure of relaxation time towards equilibrium for a time series | CC BY-SA 2.5 | null | 2011-02-02T05:18:40.943 | 2011-02-02T09:04:46.853 | null | null | 3010 | [
"time-series",
"mean"
]
|
6799 | 2 | null | 6798 | 1 | null | Since it seems that you have a time trend and then a constant mean, the [KPSS test](http://en.wikipedia.org/wiki/KPSS_test) is appropriate. Here is an example with R code.
```
library(tseries)   # provides kpss.test() used below
x<-rnorm(1000,sd=10)
y<-c(rep(0.3,200),rep(180,800))+c(rep(0,20),(1:180),rep(0,800))+x
plot(1:1000,y,type="l")
```

Test for stationarity
```
> kpss.test(y)
KPSS Test for Level Stationarity
data: y
KPSS Level = 5.5237, Truncation lag parameter = 7, p-value = 0.01
Warning message:
In kpss.test(y) : p-value smaller than printed p-value
```
The null hypothesis that the series is stationary is rejected. Now test the series after the spinup period.
```
> kpss.test(y[300:1000])
KPSS Test for Level Stationarity
data: y[300:1000]
KPSS Level = 0.1797, Truncation lag parameter = 6, p-value = 0.1
Warning message:
In kpss.test(y[300:1000]) : p-value greater than printed p-value
```
The null hypothesis of stationarity is not rejected.
It is also possible to detect when the change occurs by using the `breakpoints` function from the package strucchange. It is rather slow for large data sets.
```
> library(strucchange)
> df <- data.frame(y = y, t = 1:1000)
> breakpoints(y ~ t, data = df)
Optimal 2-segment partition:
Call:
breakpoints.formula(formula = y ~ t, data = df)
Breakpoints at observation number:
204
Corresponding to breakdates:
204
```
In general, testing for stationarity is a complicated procedure, so my approach is not guaranteed to work in every case. The main difficulty is that different tests have power against different alternatives, so the most appropriate test should be chosen based on what the likely deviation from stationarity might be.
| null | CC BY-SA 2.5 | null | 2011-02-02T09:04:46.853 | 2011-02-02T09:04:46.853 | null | null | 2116 | null |
6800 | 2 | null | 6759 | 115 | null | `unique()` indeed answers your question, but another related and interesting function to achieve the same end is `duplicated()`.
It gives you the possibility to look up which rows are duplicated.
```
a <- c(rep("A", 3), rep("B", 3), rep("C",2))
b <- c(1,1,2,4,1,1,2,2)
df <-data.frame(a,b)
duplicated(df)
[1] FALSE TRUE FALSE FALSE FALSE TRUE FALSE TRUE
> df[duplicated(df), ]
a b
2 A 1
6 B 1
8 C 2
> df[!duplicated(df), ]
a b
1 A 1
3 A 2
4 B 4
5 B 1
7 C 2
```
| null | CC BY-SA 2.5 | null | 2011-02-02T09:27:56.653 | 2011-02-02T09:27:56.653 | null | null | 679 | null |
6801 | 2 | null | 6652 | 30 | null | I found this thought experiment helpful when thinking about confidence intervals. It also answers your question 3.
Let $X\sim U(0,1)$ and $Y=X+a-\frac{1}{2}$. Consider two observations of $Y$ taking the values $y_1$ and $y_2$ corresponding to observations $x_1$ and $x_2$ of $X$, and let $y_l=\min(y_1,y_2)$ and $y_u=\max(y_1,y_2)$. Then $[y_l,y_u]$ is a 50% confidence interval for $a$ (since the interval includes $a$ if $x_1<\frac12<x_2$ or $x_1>\frac12>x_2$, each of which has probability $\frac14$).
However, if $y_u-y_l>\frac12$ then we know that the probability that the interval contains $a$ is $1$, not $\frac12$. The subtlety is that a $z\%$ confidence interval for a parameter means that the endpoints of the interval (which are random variables) lie either side of the parameter with probability $z\%$ before you calculate the interval, not that the probability of the parameter lying within the interval is $z\%$ after you have calculated the interval.
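A quick Monte Carlo check of both claims (a rough sketch; the value of $a$ is an arbitrary choice):
```
set.seed(1)
a  <- 3                                   # arbitrary true parameter
x1 <- runif(1e5); x2 <- runif(1e5)
y1 <- x1 + a - 0.5; y2 <- x2 + a - 0.5
yl <- pmin(y1, y2); yu <- pmax(y1, y2)
mean(yl <= a & a <= yu)                   # close to 0.50: unconditional coverage
wide <- (yu - yl) > 0.5
mean(yl[wide] <= a & a <= yu[wide])       # equals 1: coverage given a wide interval
```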
| null | CC BY-SA 3.0 | null | 2011-02-02T09:31:07.053 | 2012-03-14T00:21:30.513 | 2012-03-14T00:21:30.513 | 858 | 2425 | null |
6802 | 1 | 6805 | null | 5 | 268 | I want to compare a few methods of variable selection. I want to do it using simulations. I'm aware of the fact that it won't be an ultimate answer to the question 'Which method is the best one?', but I'm looking just for some hints. To do such simulations I need a method to draw a 'random linear model'. Is there any well-accepted algorithm for drawing a 'random linear model'? By well accepted I mean a method that was used, for example, in some scientific paper.
I was thinking about the following simple approach (a rough R sketch follows the list):
1) Choose $n$ and $k$, which denotes number of observations and number of variables.
2) Generate random matrix $X$ by drawing each element using uniform distribution $(0,1)$.
3) Generate the parameter vector $\beta$ by drawing each element using the uniform distribution $(0,1)$.
4) Generate residuals using Normal Distribution $(0,\sigma^2)$ for some fixed and arbitrary chosen $\sigma^2$
5) Calculate $Y=X\beta + \epsilon$
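Something like this minimal R sketch of the steps above (all the sizes and $\sigma$ here are arbitrary choices):
```
set.seed(42)
n <- 100; k <- 5; sigma <- 0.5                      # step 1 (arbitrary)
X    <- matrix(runif(n * k), nrow = n, ncol = k)    # step 2
beta <- runif(k)                                    # step 3
eps  <- rnorm(n, sd = sigma)                        # step 4
y    <- X %*% beta + eps                            # step 5
# y and X can now be fed to the variable-selection methods being compared
```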
| Concept of a random linear model | CC BY-SA 2.5 | null | 2011-02-02T10:24:07.393 | 2011-02-02T16:36:59.593 | null | null | 1643 | [
"random-generation",
"linear-model"
]
|
6803 | 2 | null | 6794 | 3 | null | For me, I found that Eclipse was overkill for creation of scientific papers.
So, for Windows, what I did was the following:
Install MiKTeX 2.8 (? not sure of version). Make sure that you install MiKTeX into a directory such as C:\Miktex, as LaTeX hates file paths with spaces in them. Make sure to select the option to install packages on the fly.
Also make sure that R is installed somewhere that LaTeX can find it, i.e. in a path with no spaces.
I installed TeXnicCenter as my program to write documents in, but there are many others such as WinEdt, Eclipse, Texmaker, or indeed Emacs.
Now, make sure that you have \usepackage{Sweave} and \usepackage{graphicx} in your preamble.
As I'm sure you know, you need to put `<<>>=` at the start of your R chunk, and end it with `@`.
You will need either the package xtable or Hmisc to convert R objects to a latex format.
I like xtable, but you will probably need to do quite a bit of juggling of objects to get it into a form that xtable will accept (lm outputs, data frames, matrices).
When inserting a table, make sure to put the results=tex option in the chunk options, and if you need a figure, ensure that the fig=TRUE option is also there.
You can also only generate one figure per chunk, so just bear that in mind.
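To make the chunk syntax concrete, here is a minimal .Rnw skeleton along those lines (a sketch only; it assumes the xtable package is installed, uses the built-in cars data, and the chunk names are my own):
```
\documentclass{article}
\usepackage{Sweave}
\usepackage{graphicx}
\begin{document}

<<mytable, results=tex>>=
library(xtable)
xtable(lm(dist ~ speed, data = cars))   # prints a LaTeX table of the fit
@

<<myfigure, fig=TRUE>>=
plot(cars)
@

\end{document}
```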
Something to be very careful about is that the R code must be at the extreme left of the page: if it is enclosed in an environment, it will be ignored (this took me a long time to figure out).
You need to save the file as .Rnw - make sure that whatever tex program you use does not append a .tex after this, as this will cause problems.
Then either run R CMD Sweave foo.Rnw from the command line, or from within R run Sweave("foo.Rnw"). Inevitably it will fail at some point (especially if you haven't done this before) so just debug your .Rnw file, rinse and repeat.
If it is the first time you have done this, it may prove easier to code all the R analyses from within R, and then use print statements to insert the results into LaTeX. I wouldn't recommend this, though: if you discover that your data file has errors at the end of this procedure (as I did last weekend), then you will need to rerun all of your analyses, which can be avoided if you code properly from within LaTeX from the beginning.
Also, Sweave computations can take some time, so you may wish to use the R package cacheSweave to save rerunning analyses. Apparently the R package highlight allows for colour coding of R code in documents, but I have not used it.
I've never used LaTeX or R on a Mac, so I will leave that explanation to someone else. Hope this helps.
| null | CC BY-SA 2.5 | null | 2011-02-02T10:47:15.130 | 2011-02-02T10:47:15.130 | null | null | 656 | null |
6804 | 2 | null | 6794 | 7 | null | I use Eclipse / StatET to produce documents with Sweave and LaTeX, and find Eclipse perfect as an editing environment. I can recommend the following guides:
- Longhow Lam's pdf guide
- Jeromy Anglim's post (Note this includes info on installing the RJ package required for the latest versions of StatEt.)
I also use [MiKTeX](http://miktex.org/) on Windows and find everything works really well once it's set up. There are a [few good questions and answers on Stack Overflow](https://stackoverflow.com/search?q=%5Br%5D+statet) as well.
| null | CC BY-SA 3.0 | null | 2011-02-02T12:04:48.830 | 2014-05-07T22:46:34.473 | 2017-05-23T12:39:32.527 | -1 | 114 | null |
6805 | 2 | null | 6802 | 10 | null | One problem with the approach you outline is that the regressors $x_i$ will (on average) be uncorrelated, and one situation in which variable selection methods have difficulty is highly correlated regressors.
I'm not sure the concept of a 'random' linear model is very useful here, as you have to decide on a probability distribution over your model space, which seems arbitrary. I'd rather think of it as an experiment, and apply the principles of good experimental design.
Postscript: Here's one reference, but I'm sure there are others:
Andrea Burton, Douglas G. Altman, Patrick Royston, and Roger L. Holder. The design of simulation studies in medical statistics. Statistics in Medicine 25(24):4279-4292, 2006. [DOI:10.1002/sim.2673](http://dx.doi.org/10.1002/sim.2673)
See also this related letter:
Hakan Demirtas. Statistics in Medicine 26(20):3818-3821, 2007. [DOI:10.1002/sim.2876](http://dx.doi.org/10.1002/sim.2876)
Just found a commentary on a similar topic:
G. Maldonado and S. Greenland. The importance of critically interpreting simulation studies. Epidemiology 8 (4):453-456, 1997. [http://www.jstor.org/stable/3702591](http://www.jstor.org/stable/3702591)
| null | CC BY-SA 2.5 | null | 2011-02-02T13:02:11.713 | 2011-02-02T16:28:59.487 | 2011-02-02T16:28:59.487 | 449 | 449 | null |
6806 | 1 | 6812 | null | 5 | 19247 | I have a dataframe df as shown below
```
name position
1 HLA 1:1-15
2 HLA 1:2-16
3 HLA 1:3-17
```
I would like to split the position column into two more columns based on the ":" character such that I get
```
name seq position
1 HLA 1 1-15
2 HLA 1 2-16
3 HLA 1 3-17
```
So I thought this would do the trick,
```
df <- transform(df,pos = as.character(position))
df_split<- strsplit(df$pos, split=":")
#found this hack from an old mailing list post
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
```
however I get an error
```
Error in strsplit(df$pos, split = ":") : non-character argument
```
What could be wrong?
How do you achieve this in R? I have simplified my case here; in actuality the dataframe runs to over a hundred thousand rows.
| Splitting a numeric column for a dataframe | CC BY-SA 2.5 | null | 2011-02-02T14:02:57.167 | 2015-03-30T20:38:35.407 | null | null | 18462 | [
"r",
"dataset"
]
|
6807 | 1 | 6808 | null | 13 | 1179 | I have recently graduated with my master's degree in medical and biological modeling, with engineering mathematics as a background. Even though my education program included a significant number of courses on mathematical statistics (see below for a list), which I managed with pretty high grades, I frequently end up completely lost staring down both the theory and applications of statistics. I have to say, compared to "pure" mathematics, statistics really makes little sense to me. In particular, the notation and language used by most statisticians (including my past lecturers) are annoyingly convoluted, and almost none of the resources I have seen so far (including Wikipedia) had simple examples that one could easily relate to and associate with the theory given...
This being the background, I also realize the bitter reality that I cannot have a career as a researcher/engineer without a firm grip on statistics, especially within the field of bioinformatics.
I was hoping that I could get some tips from more experienced statisticians/mathematicians. How can I overcome this problem I have mentioned above? Do you know of any good resources, such as books, e-books, open courses (via iTunes or OpenCourseWare, for example), etc.?
EDIT: As I have mentioned I am quite biased (negatively) towards a majority of the literature under general title of statistics, and since I can't buy a number of large (and expensive) coursebooks per branch of statistics, what I would need in terms of a book is something similar to what [Tipler & Mosca](https://rads.stackoverflow.com/amzn/click/com/0716783398)
is for Physics, but instead for statistics.
For those who don't know about Tipler; it's a large textbook that covers a wide majority of the subjects that one might encounter during higher studies, and presents them each from basic introduction to slightly deeper in detail. Basically a perfect reference book, bought it during my first year in uni, still use it every once in a while.
---
The courses I have taken on statistics:
- a large introduction course,
- stationary stochastic processes,
- Markov processes,
- Monte Carlo methods
- Survival analysis
| Making sense out of statistics theory and applications | CC BY-SA 4.0 | null | 2011-02-02T14:26:45.833 | 2019-02-19T14:37:07.653 | 2019-02-19T14:37:07.653 | 128677 | 3014 | [
"mathematical-statistics",
"bioinformatics",
"computational-statistics"
]
|
6808 | 2 | null | 6807 | 4 | null | I can completely understand your situation. Even though I am a PhD student, I sometimes find it hard to relate theory and application. If you are willing to immerse yourself in understanding the theory, it is definitely rewarding when you think about real-world problems. But the process may be frustrating.
One of the many references that I like is Gelman and Hill's [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://www.stat.columbia.edu/%7Egelman/arm/). They avoid the theory where they can, expressing the underlying concepts using simulations. It will definitely benefit you, as you have experience in MCMC etc. Since you say you are working in bioinformatics, Harrell's [Regression Modeling Strategies](https://rads.stackoverflow.com/amzn/click/com/0387952322) is probably a great reference too.
I will make this a community wiki and let others add to it.
| null | CC BY-SA 2.5 | null | 2011-02-02T14:41:39.990 | 2011-02-02T23:37:15.957 | 2020-06-11T14:32:37.003 | -1 | 1307 | null |
6809 | 1 | 6811 | null | 11 | 2303 | I'm considering building MATLAB and R interfaces to [Ross Quinlan](http://en.wikipedia.org/wiki/Ross_Quinlan)'s [C5.0](http://rulequest.com/download.html) (for those not familiar with it, C5.0 is a decision tree algorithm and software package; an extension of [C4.5](http://en.wikipedia.org/wiki/C4.5)), and I am trying to get a sense of the components I would need to write.
The only documentation I found for C5.0 is [here](http://www.rulequest.com/see5-win.html), which is a tutorial for See5 (a Windows interface to C5.0?) . The [tar](http://rulequest.com/GPL/C50.tgz) file comes with a Makefile, but no Readme files or any additional documentation.
From what I read in the tutorial above, C5.0 uses an ASCII-based representation to handle inputs and outputs, and I am also considering building an interface that passes binary data directly between MATLAB or R and C5.0. Is C5.0's data representation used by any other machine-learning/classification software?
Has anybody tried building a MATLAB or an R interface to ID3, C4.5 or C5.0 before?
Thanks
| Building MATLAB and R interfaces to Ross Quinlan's C5.0 | CC BY-SA 2.5 | null | 2011-02-02T14:42:08.477 | 2012-08-30T18:48:56.600 | 2011-02-02T19:14:18.960 | 930 | 2798 | [
"r",
"machine-learning",
"matlab"
]
|
6810 | 2 | null | 6806 | 2 | null | The "trick" is to use `do.call`.
```
> a <- data.frame(x = c("1:1-15", "1:2-16", "1:3-17"))
> a
x
1 1:1-15
2 1:2-16
3 1:3-17
> a$x <- as.character(a$x)
> a.split <- strsplit(a$x, split = ":")
> tmp <-do.call(rbind, a.split)
> data.frame(a, tmp)
x X1 X2
1 1:1-15 1 1-15
2 1:2-16 1 2-16
3 1:3-17 1 3-17
```
| null | CC BY-SA 2.5 | null | 2011-02-02T14:44:34.900 | 2011-02-02T14:44:34.900 | null | null | 307 | null |
6811 | 2 | null | 6809 | 12 | null | That sounds like a great idea, especially as the page you link to shows that C5.0 is now under GPL.
I have some experience wrapping C/C++ software to R using [Rcpp](http://dirk.eddelbuettel.com/code/rcpp.html); I would be happy to help.
| null | CC BY-SA 2.5 | null | 2011-02-02T14:54:16.900 | 2011-02-02T14:54:16.900 | null | null | 334 | null |
6812 | 2 | null | 6806 | 10 | null | ```
df_split<- strsplit(as.character(df$position), split=":")
df <- transform(df, seq_name= sapply(df_split, "[[", 1),pos2= sapply(df_split, "[[", 2))
>
> df
name position pos seq_name pos2
1 HLA 1:1-15 1:1-15 1 1-15
2 HLA 1:2-16 1:2-16 1 2-16
3 HLA 1:3-17 1:3-17 1 3-17
```
| null | CC BY-SA 2.5 | null | 2011-02-02T14:57:24.317 | 2011-02-02T14:57:24.317 | null | null | 2129 | null |
6813 | 2 | null | 6807 | 1 | null | As an alternative to Regression Modeling Strategies, and for a more practical approach, [Applied Linear Statistical Models](http://rads.stackoverflow.com/amzn/click/007310874X) is very good from my point of view.
| null | CC BY-SA 2.5 | null | 2011-02-02T15:01:24.270 | 2011-02-02T15:01:24.270 | null | null | null | null |
6814 | 1 | null | null | 19 | 10390 | I have a question about the Kullback-Leibler divergence.
Can someone explain why the "distance" between the blue density and the "red" density is smaller than the distance between the "green" curve and the "red" one?

| Kullback-Leibler divergence - interpretation | CC BY-SA 3.0 | null | 2011-02-02T15:07:42.593 | 2018-05-13T20:00:57.547 | 2016-11-16T20:48:03.920 | 11887 | null | [
"kullback-leibler"
]
|
6815 | 2 | null | 6807 | 2 | null | Are you familiar with Bayesian Data Analysis (by Gelman, Carlin, Stern, and Rubin)? Maybe that's what you need a dose of.
| null | CC BY-SA 2.5 | null | 2011-02-02T15:52:41.277 | 2011-02-02T15:52:41.277 | null | null | 1047 | null |
6816 | 2 | null | 6809 | 5 | null | Interfacing C/C++ code to MATLAB is pretty straightforward: all you have to do is create a MEX gateway function to handle the conversion of the input and return parameters. I have experience in making MEX files to do this sort of thing and would be happy to help.
| null | CC BY-SA 2.5 | null | 2011-02-02T15:54:30.040 | 2011-02-02T15:54:30.040 | null | null | 887 | null |
6817 | 1 | 6825 | null | 2 | 302 | My intuition says that the third equation must be "the length of the gradient squared less than epsilon".
$x_{k+1} = x_k - f(x_k)$
$x_{k+1} = x_k + 1$
$|f(x_k)|^2 < \epsilon$
However, I am not sure whether it is the standard form.
How would you write the standard form of the gradient method, and particularly its ending rule?
| Optimizing the ending rule of gradient method | CC BY-SA 2.5 | null | 2011-02-02T16:20:15.653 | 2011-02-03T05:07:19.070 | 2011-02-03T05:07:19.070 | 919 | 3017 | [
"optimization"
]
|
6818 | 2 | null | 6802 | 2 | null | To address @onestop's objection to non-correlated regressors, you could do the following (a rough R sketch follows the list):
- Choose $n, k, l$, where $l$ is the number of latent factors.
- Choose $\sigma_i$, the amount of 'idiosyncratic' volatility in the regressors.
- Draw a $k \times l$ matrix, $F$, of exposures, uniformly on $(0,1)$. (you may want to normalize to sum 1 across rows of $F$.)
- Draw a $n \times l$ matrix, $W$, of latent regressors as standard normals.
- Let $X = W F^\top + \sigma_i E$ be the regressors, where $E$ is an $n\times k$ matrix drawn from a standard normal.
- Proceed as before: draw $k$ vector $\beta$ uniformly on $(0,1)$.
- draw $n$ vector $\epsilon$ as a normal with variance $\sigma^2$.
- Let $y = X\beta + \epsilon$.
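A minimal R sketch of these steps (all the sizes and scales are arbitrary choices of mine; `Fmat` plays the role of $F$ to avoid masking R's built-in `F`):
```
set.seed(1)
n <- 200; k <- 10; l <- 3
sigma_i <- 0.2; sigma <- 1
Fmat <- matrix(runif(k * l), nrow = k, ncol = l)   # exposures
Fmat <- Fmat / rowSums(Fmat)                       # optional row normalisation
W    <- matrix(rnorm(n * l), nrow = n, ncol = l)   # latent regressors
E    <- matrix(rnorm(n * k), nrow = n, ncol = k)
X    <- W %*% t(Fmat) + sigma_i * E                # regressors share latent factors
beta <- runif(k)
y    <- X %*% beta + rnorm(n, sd = sigma)
round(cor(X)[1:4, 1:4], 2)                         # off-diagonal correlations are now sizeable
```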
| null | CC BY-SA 2.5 | null | 2011-02-02T16:36:59.593 | 2011-02-02T16:36:59.593 | null | null | 795 | null |
6819 | 2 | null | 6814 | 12 | null | KL divergence measures how difficult it is to fake one distribution with another one. Assume that you draw an i.i.d. sample of size $n$ from the red distribution and that $n$ is large. It may happen that the empirical distribution of this sample mimics the blue distribution. This is rare, but it may happen... albeit with a probability which is vanishingly small, and which behaves like $\mathrm{e}^{-nH}$. The exponent $H$ is the KL divergence of the blue distribution with respect to the red one.
Having said that, I wonder why your KL divergences are ranked in the order you say they are.
| null | CC BY-SA 2.5 | null | 2011-02-02T17:23:44.250 | 2011-02-02T17:23:44.250 | null | null | 2592 | null |
6820 | 1 | null | null | 4 | 162 | Let's say that there is a table like the one below, where for each mutant (A, B, C, etc.) protein or peptide we have a binding and a functional information (true/false) column, and for each amino acid in that protein, we have some kind of amino acid property like hydrophobicity:
```
Protein Binding Functional 1 2 3 4 5 6 7 8 9 10
A 0 0 13 96 39 77 70 94 96 29 22 82
B 0 1 94 45 2 2 11 46 50 77 7 99
C 0 1 66 71 97 37 14 77 89 92 12 72
D 1 1 11 8 94 73 16 53 2 27 54 97
E 1 1 31 62 49 51 2 86 91 49 61 7
F 1 0 2 42 65 42 54 41 45 9 71 20
G 0 0 26 44 56 65 61 43 56 90 70 86
H 0 1 54 99 68 64 94 81 85 0 50 84
I 1 1 27 52 76 12 46 38 24 74 11 90
J 1 1 1 58 77 50 72 51 87 99 47 67
```
What statistical tests would you recommend to:
- See if the difference between means for each mutant is statistically significant.
- See if the amino acid property can somehow be related to the binding and functional properties.
Thanks in advance.
| Statistical analysis of mutant proteins sequences | CC BY-SA 2.5 | null | 2011-02-02T18:05:08.037 | 2012-01-17T22:37:14.117 | 2011-02-02T19:06:48.170 | 930 | 2909 | [
"anova",
"multiple-comparisons",
"genetics"
]
|
6821 | 2 | null | 6814 | 29 | null | Because I compute slightly different values of the KL divergence than reported here, let's start with my attempt at reproducing the graphs of these PDFs:

The KL distance from $F$ to $G$ is the expectation, under the probability law $F$, of the difference in logarithms of their PDFs. Let us therefore look closely at the log PDFs. The values near 0 matter a lot, so let's examine them. The next figure plots the log PDFs in the region from $x=0$ to $x=0.10$:

Mathematica computes that KL(red, blue) = 0.574461 and KL(red, green) = 0.641924. In the graph it is clear that between 0 and 0.02, approximately, log(green) differs far more from log(red) than does log(blue). Moreover, in this range there is still substantially large probability density for red: its logarithm is greater than -1 (so the density is greater than about 1/2).
Take a look at the differences in logarithms. Now the blue curve is the difference log(red) - log(blue) and the green curve is log(red) - log(green). The KL divergences (w.r.t. red) are the expectations (according to the red pdf) of these functions.

(Note the change in horizontal scale, which now focuses more closely near 0.)
Very roughly, it looks like a typical vertical distance between these curves is around 10 over the interval from 0 to 0.02, while a typical value for the red pdf is about 1/2. Thus, this interval alone should add about 10 * 0.02 /2 = 0.1 to the KL divergences. This just about explains the difference of .067. Yes, it's true that the blue logarithms are further away than the green logs for larger horizontal values, but the differences are not as extreme and the red PDF decays quickly.
In brief, extreme differences in the left tails of the blue and green distributions, for values between 0 and 0.02, explain why KL(red, green) exceeds KL(red, blue).
Incidentally, KL(blue, red) = 0.454776 and KL(green, red) = 0.254469.
---
### Code
Specify the distributions
```
red = GammaDistribution[1/.85, 1];
green = InverseGaussianDistribution[1, 1/3.];
blue = InverseGaussianDistribution[1, 1/5.];
```
Compute KL
```
Clear[kl];
(* Numeric integation between specified endpoints. *)
kl[pF_, qF_, l_, u_] := Module[{p, q},
p[x_] := PDF[pF, x];
q[x_] := PDF[qF, x];
NIntegrate[p[x] (Log[p[x]] - Log[q[x]]), {x, l, u},
Method -> "LocalAdaptive"]
];
(* Integration over the entire domain. *)
kl[pF_, qF_] := Module[{p, q},
p[x_] := PDF[pF, x];
q[x_] := PDF[qF, x];
Integrate[p[x] (Log[p[x]] - Log[q[x]]), {x, 0, \[Infinity]}]
];
kl[red, blue]
kl[red, green]
kl[blue, red, 0, \[Infinity]]
kl[green, red, 0, \[Infinity]]
```
Make the plots
```
Clear[plot];
plot[{f_, u_, r_}] :=
Plot[Evaluate[f[#, x] & /@ {blue, red, green}], {x, 0, u},
PlotStyle -> {{Thick, Darker[Blue]}, {Thick, Darker[Red]},
{Thick, Darker[Green]}},
PlotRange -> r,
Exclusions -> {0},
ImageSize -> 400
];
Table[
plot[f], {f, {{PDF, 4, {Full, {0, 3}}}, {Log[PDF[##]] &,
0.1, {Full, Automatic}}}}
] // TableForm
Plot[{Log[PDF[red, x]] - Log[PDF[blue, x]],
Log[PDF[red, x]] - Log[PDF[green, x]]}, {x, 0, 0.04},
PlotRange -> {Full, Automatic},
PlotStyle -> {{Thick, Darker[Blue]}, {Thick, Darker[Green]}}]
```
| null | CC BY-SA 4.0 | null | 2011-02-02T18:11:19.850 | 2018-05-13T20:00:57.547 | 2020-06-11T14:32:37.003 | -1 | 919 | null |
6822 | 2 | null | 6807 | 1 | null | Everyone learns differently, but I think it's safe to say that examples, examples, examples, help a lot in statistics. My suggestion would be to learn R (just the basics are enough to help a lot) and then you can try any and every example until your eyes bleed. You can sort it, fit it, plot it, you name it. And, since R is geared toward statistics, as you learn R, you'll be learning statistics. Those books that you listed can then be attacked from a "show me" point of view.
Since R is free, and a lot of source material is free, all you need to invest is your time.
[http://www.mayin.org/ajayshah/KB/R/index.html](http://www.mayin.org/ajayshah/KB/R/index.html)
[http://math.illinoisstate.edu/dhkim/rstuff/rtutor.html](http://math.illinoisstate.edu/dhkim/rstuff/rtutor.html)
[http://www.cyclismo.org/tutorial/R/](http://www.cyclismo.org/tutorial/R/)
[http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm](http://www.stat.pitt.edu/stoffer/tsa2/R_time_series_quick_fix.htm)
[http://www.statmethods.net/about/books.html](http://www.statmethods.net/about/books.html)
There are many good books on R that you can buy, here's one that I've used:
[http://www.amazon.com/Introductory-Statistics-R-Peter-Dalgaard/dp/0387954759](http://rads.stackoverflow.com/amzn/click/0387954759)
Edit============
I forgot to add a couple of links. If you're using Windows, a good editor to feed R is Tinn-R (someone else can add links for editors on a Mac, or Linux).
[http://www.sciviews.org/Tinn-R/](http://www.sciviews.org/Tinn-R/)
[http://cran.r-project.org/web/packages/TinnR/](http://cran.r-project.org/web/packages/TinnR/)
| null | CC BY-SA 2.5 | null | 2011-02-02T19:13:18.257 | 2011-02-02T20:50:36.100 | 2011-02-02T20:50:36.100 | 2775 | 2775 | null |
6823 | 2 | null | 6809 | 2 | null | The C5.0 (Linux) documentation is at [http://rulequest.com/see5-unix.html](http://rulequest.com/see5-unix.html)
| null | CC BY-SA 2.5 | null | 2011-02-02T20:05:04.330 | 2011-02-02T20:05:04.330 | null | null | null | null |
6824 | 1 | 6826 | null | 9 | 14266 | Can an effect size be calculated for an interaction effect in general and more specifically using the F-statistic and its associated degrees of freedom? If yes, should this be done and what is the appropriate interpretation of the effect size given the nature of interaction effects? Initially, I assumed the answer is "no." However, a quick Google search turned up statements about computing effect sizes for interaction terms. Any assistance in clarifying this issue will be greatly appreciated!
| Calculating and interpreting effect sizes for interaction terms | CC BY-SA 2.5 | null | 2011-02-02T20:21:45.820 | 2021-01-21T18:18:17.583 | 2021-01-21T18:18:17.583 | 11887 | null | [
"interaction",
"effect-size"
]
|
6825 | 2 | null | 6817 | 3 | null | Interpreting your function $f(x)$ as a (scaled) version of the gradient, your termination rule is equivalent to $|f(x)| < \sqrt{\epsilon}$, i.e. "terminate when you've taken a step that is too small." This seems perfectly reasonable.
| null | CC BY-SA 2.5 | null | 2011-02-02T20:59:15.827 | 2011-02-02T20:59:15.827 | null | null | 795 | null |
6826 | 2 | null | 6824 | 3 | null | Looking into classic old texts (like Geoffrey Keppel's Design and Analysis: A Researcher's Handbook and Fredric Wolf's Meta-Analysis: Quantitative Methods for Research Synthesis), I've seen several options, including omega, phi, and the square of each. But most widely interpretable and simplest to obtain from most software packages' output is the incremental contribution that the interaction makes to r-squared. Partial eta squared (explained variance not shared with any other predictor in the model) is another option, and in fact for an interaction tested in a sequential model, it should be the same as the increment in r-squared. I realize I'm not answering your question about specifically using F and df; if that is that essential to you maybe others can address it. Wolf shows how to convert F to r for the 2-group case only, and I'm not the strongest when it comes to formulas.
| null | CC BY-SA 2.5 | null | 2011-02-02T23:06:25.757 | 2011-02-02T23:06:25.757 | null | null | 2669 | null |
6827 | 1 | 6831 | null | 8 | 43021 | Given a table comprising three columns,
```
1 2 0.05
1 3 0.04
2 3 0.001
```
and a matrix with defined dimensions,
```
mat <- matrix(NA, nrow = 3, ncol = 3)
[,1] [,2] [,3]
[1,] NA NA NA
[2,] NA NA NA
[3,] NA NA NA
```
Is there an efficient way to populate the matrix with the entries in the third column of the table with R, without having to iterate over the table, isolate the indices and insert value in a for loop?
Thanks.
| Efficient way to populate matrix in R? | CC BY-SA 2.5 | null | 2011-02-02T23:37:14.350 | 2015-05-13T00:40:31.243 | null | null | 2842 | [
"r",
"matrix"
]
|
6828 | 2 | null | 6817 | 2 | null | A simplified version of the gradient descent algorithm is as follows:
You begin with $k=0$, $x_k$, $\alpha_k$ and a backtracking parameter $c \in (0,1)$.
The descent direction is given by $p_k = - \nabla f(x_k)$.
Each iterate is computed as follows: $x_{k+1} = x_k + \alpha_k p_k$.
Remember that $\alpha_k$ is interpreted as a step length and must satisfy the sufficient decrease condition, $f(x_k + \alpha_k p_k) \leq f(x_k) + c\,\alpha_k \nabla f(x_k)^T p_k$. The value of $\alpha_k$ must be calculated using the backtracking procedure.
The algorithm is as follows:
```
data: k=0; x; c; tol
begin:
compute the gradient at x;
while( norm(grad) > tol ){
    compute alpha using backtracking;
    p = -grad;
    x = x + alpha*p;
    recompute the gradient at x;
    k = k + 1;
}
```
The parameter `tol` is the tolerance for the stopping rule; it basically ensures that your gradient is close enough to zero. If the gradient is close to zero, you can prove that your current iterate is close to a local minimum.
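For concreteness, here is a small runnable R version of the same scheme (just a sketch; the objective below is an arbitrary example of mine, not something from the question):
```
# Gradient descent with a backtracking line search (sufficient decrease condition)
grad_descent <- function(f, grad_f, x, c = 1e-4, tol = 1e-6, max_iter = 1000) {
  for (k in seq_len(max_iter)) {
    g <- grad_f(x)
    if (sqrt(sum(g^2)) <= tol) break            # stopping rule: ||grad|| <= tol
    p <- -g                                     # steepest descent direction
    alpha <- 1
    while (f(x + alpha * p) > f(x) + c * alpha * sum(g * p)) {
      alpha <- alpha / 2                        # backtracking
    }
    x <- x + alpha * p
  }
  x
}

# Example: minimise f(x, y) = (x - 1)^2 + 2 (y + 3)^2, whose minimum is at (1, -3)
f      <- function(x) (x[1] - 1)^2 + 2 * (x[2] + 3)^2
grad_f <- function(x) c(2 * (x[1] - 1), 4 * (x[2] + 3))
grad_descent(f, grad_f, x = c(10, 10))
```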
I hope this answers your question.
For further reference you may check Numerical Optimization by Nocedal & Wright.
| null | CC BY-SA 2.5 | null | 2011-02-02T23:58:42.023 | 2011-02-03T00:07:26.593 | 2011-02-03T00:07:26.593 | 2902 | 2902 | null |
6829 | 2 | null | 6824 | 6 | null | Yes, an effect size for an interaction can be computed, though I don't think I know any measures of effect size that you can compute simply from the F and df values; usually you need various sums-of-squares values to do the computations. If you have the raw data, the "ezANOVA" function in the ["ez" package](http://cran.r-project.org/web/packages/ez/index.html) for R will give you [generalized eta square](http://www.ncbi.nlm.nih.gov/pubmed/14664681), a measure of effect size that, unlike partial-eta square, generalizes across design types (eg. within-Ss designs vs between-Ss designs).
| null | CC BY-SA 2.5 | null | 2011-02-03T00:02:26.337 | 2011-02-03T00:02:26.337 | null | null | 364 | null |
6830 | 2 | null | 6827 | 1 | null | ```
mat <- matrix(c(1, 1, 2, 2, 3, 3, 0.05, 0.04, 0.001), nrow = 3, ncol = 3)
mat
mat1 <- matrix(c(rep(NA,6), 0.05, 0.04, 0.001), nrow = 3, ncol = 3)
mat1
mat2 <- matrix(NA, nrow = 3, ncol = 3)
mat2
mat2[,3] <- c(0.05, 0.04, 0.001)
mat2
df <- data.frame(x=c(1, 1, 2), y=c(2, 3, 3), z=c(0.05, 0.04, 0.001))
df
mat3 <- matrix(NA, nrow = 3, ncol =3)
mat3
mat3[,3] <- df$z
mat3
mat4 <- matrix(NA, nrow = 3, ncol =3)
mat4
mat4[,3] <- unlist(df[3])
mat4
```
| null | CC BY-SA 2.5 | null | 2011-02-03T00:54:32.690 | 2011-02-03T02:21:42.227 | 2011-02-03T02:21:42.227 | 2775 | 2775 | null |
6831 | 2 | null | 6827 | 8 | null | Generally, tables are handled as matrices or arrays, and matrix indexing allows a two-column matrix argument for (i,j)-indexing (if the object being indexed has higher dimensions, matrices with more columns are used), so:
```
> inp <- data.frame(i = c(1, 1, 2), j = c(2, 3, 3), v = c(0.05, 0.04, 0.001))
> mat <- matrix(NA, nrow = 3, ncol = 3)
> inp.mtx <- as.matrix(inp)
> mat[inp.mtx[, 1:2]] <- inp.mtx[, 3]
> mat
     [,1] [,2]  [,3]
[1,]   NA 0.05 0.040
[2,]   NA   NA 0.001
[3,]   NA   NA    NA
```
| null | CC BY-SA 3.0 | null | 2011-02-03T01:38:16.820 | 2015-05-13T00:40:31.243 | 2015-05-13T00:40:31.243 | 2129 | 2129 | null |
6832 | 1 | 6836 | null | 3 | 896 | I'm building a database of economic and political indicators provided by the World Bank and other similar sources. I would like to run uni- and multivariate analysis of these indicators over the past 20 years of data. I'll be using JMP to analyze the data.
Here's an excerpt of the database structure:
```
ID | Pop. | GDP | (20ish other columns)
------------------------------------------------
Afghanistan | xx | xx | xx
Albania | xx | xx | xx
Algeria | xx | xx | xx
Andorra | xx | xx | xx
etc. | xx | xx | xx
```
What would be the best way to input this data so that JMP can deal with all 20 years of data? Should I have 20 separate tables, one per year? Or should I have a column for each year all in one table (`GDP_1990`, `GDP_1991`, `GDP_1992`, etc.)?
| How should I design a database for multi-year data analysis with JMP? | CC BY-SA 2.5 | null | 2011-02-03T02:24:22.080 | 2011-02-03T05:54:20.473 | 2011-02-03T05:54:20.473 | 919 | 3025 | [
"jmp",
"dataset"
]
|
6833 | 2 | null | 6794 | 1 | null | I installed this suite quite recently and followed the instructions [here](http://www.r-bloggers.com/getting-started-with-sweave-r-latex-eclipse-statet-texlipse/).
There are links to all the required software components. I use MiKTeX for all LaTeX components.
There are a few pitfalls if you are planning to use 64-bit Windows, as you will need the additional 64-bit Java runtime. This is quite easy to overcome: if you go to java.com in a 64-bit IE and verify your installation, it will point you to the 64-bit installer, which is otherwise difficult to find.
To avoid mucking around with path variables I simply extracted the Eclipse folder into C:\Program Files, as this is where Java and 64-bit R live. From here the configuration options in Eclipse can run automatically and find the appropriate parameters.
I hope this helps.
| null | CC BY-SA 2.5 | null | 2011-02-03T02:39:36.663 | 2011-02-03T02:39:36.663 | null | null | 1834 | null |
6834 | 1 | null | null | 7 | 350 | What are the major machine learning theories that may be used by Twitter for suggesting followers?
| What are the major machine learning theories that may be used by Twitter for suggesting followers? | CC BY-SA 2.5 | null | 2011-02-03T03:44:59.920 | 2011-02-03T15:22:11.077 | null | null | 3026 | [
"machine-learning"
]
|
6835 | 1 | null | null | 0 | 1065 | What are the best quantitative models for trend detection?
I.e. market trend.
| Trend detection quantitative models | CC BY-SA 2.5 | null | 2011-02-03T04:02:57.213 | 2011-03-03T15:32:02.253 | 2011-02-03T10:40:11.013 | null | null | [
"time-series",
"trend"
]
|
6836 | 2 | null | 6832 | 3 | null | 20 separate tables means 20 times as much work for everything you do: forget that!
I notice your "database structure" does not seem to include an attribute for year. How is that information going to get into JMP?
There are two useful structures for these data: "long" and "wide". The long structure consists of tuples of (ID, Pop., GDP, ..., Year). The wide structure lists all years at once in each row: (ID, Pop_1990, ..., Pop_2009, GDP_1990, ..., ...). Depending on the software, some procedures are easiest (or feasible only) with one structure and other procedures need the other structure. Typically, tasks that query or sort the data by year (e.g., do separate regressions by year) benefit from the long structure and tasks that make pairwise comparisons (e.g., a scatterplot matrix of GDP over all years) benefit from a wide structure. You will need both structures.
Good stats software supports interconversion between long and wide structures. It's usually easiest to convert long to wide, and there are powerful reasons on the database management side in favor of long, so as a rule, I start with the long format in all projects. I can't remember what JMP does--I recall it's somewhat limited--so you might need to enlist the help of a real database management system, which is not a bad idea anyway.
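Purely as an illustration of the two shapes (in R rather than JMP, with made-up numbers, just to make the structures concrete):
```
long <- data.frame(ID   = rep(c("Albania", "Algeria"), each = 2),
                   Year = rep(c(1990, 1991), times = 2),
                   GDP  = c(2.1, 1.2, 61.9, 45.7))          # made-up values
wide <- reshape(long, idvar = "ID", timevar = "Year", direction = "wide")
wide   # one row per country, with columns ID, GDP.1990, GDP.1991
```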
I notice your question is specifically about inputting data. That issue is really separate from how the data are to be maintained. For example, if you are manually transcribing tables, by far the best procedure is to create computer-readable copies of those tables. This makes checking the input easier and more reliable. Then write little scripts, if you have to, to assemble, restructure, and clean the input as needed. In particular, do not let the structure most convenient for getting the data into JMP determine how you will manage the data throughout the project! By the same token, do not let the format of typical summaries or reports determine the database structure, either.
| null | CC BY-SA 2.5 | null | 2011-02-03T05:54:02.283 | 2011-02-03T05:54:02.283 | null | null | 919 | null |
6837 | 2 | null | 6807 | 2 | null | All statistics problems essentially boil down to the following 4 steps (which I borrowed from @whuber's [answer to another question](https://stats.stackexchange.com/questions/6780/zipfs-law-coefficient/6786#6786)):
- Estimate the parameter.
- Assess the quality of that estimate.
- Explore the data.
- Evaluate the fit.
You can exchange the word parameter with the word model.
Statistics books usually present the first two points for various situations. The problem is that each real-world application requires a different approach, hence a different model, so a large part of these books ends up cataloguing the different models. This has the undesired effect that it is easy to lose yourself in the details and miss the big picture.
The big-picture book which I heartily recommend is [Asymptotic statistics](http://books.google.com/books?id=UEuQEM5RjWgC&lpg=PR1&dq=asymptotic%20statistics&hl=fr&pg=PR1#v=onepage&q&f=false). It gives a rigorous treatment of the topic and is mathematically "pure". Though its title mentions asymptotic statistics, the big untold secret is that the majority of classical statistical methods are in essence based on asymptotic results.
| null | CC BY-SA 2.5 | null | 2011-02-03T08:54:35.680 | 2011-02-03T08:54:35.680 | 2017-04-13T12:44:41.607 | -1 | 2116 | null |
6838 | 1 | 24241 | null | 13 | 501 | The data: I have worked recently on analysing the stochastic properties of a spatio-temporal field of wind power production forecast errors. Formally, it can be said to be a process $$ \left (\epsilon^p_{t+h|t} \right )_{t=1,\dots,T;\; h=1,\dots,H;\; p=p_1,\dots,p_n}$$
indexed twice in time (with $t$ and $h$) and once in space ($p$), with $H$ being the number of look-ahead times (equal to something around $24$, regularly sampled), $T$ being the number of "forecast times" (i.e. times at which the forecast is issued, around 30000 in my case, regularly sampled), and $n$ being the number of spatial positions (not gridded, around 300 in my case). Since this is a weather-related process, I also have plenty of weather forecasts, analyses, and meteorological measurements that can be used.
Question: Can you describe the exploratory analysis that you would perform on this type of data to understand the nature of the interdependence structure of the process (which might not be linear), in order to propose a fine model of it?
| Exploratory analysis of spatio-temporal forecast errors | CC BY-SA 2.5 | null | 2011-02-03T08:58:53.177 | 2012-03-30T11:02:20.380 | 2011-03-13T20:33:09.153 | 223 | 223 | [
"forecasting",
"data-mining",
"stochastic-processes",
"spatial",
"spatio-temporal"
]
|
6839 | 2 | null | 6653 | 9 | null | Do a logistic regression with covariates "play time" and "goals(home team) - goals(away team)". You will need an interaction effect of these terms since a 2 goal lead at half-time will have a much smaller effect than a 2 goal lead with only 1 minute left. Your response is "victory (home team)".
Don't just assume linearity for this, fit a smoothly varying coefficient model for the effect of "goals(home team) - goals(away team)", e.g. in R you could use `mgcv`'s `gam` function with a model formula like `win_home ~ s(time_remaining, by=lead_home)`. Make
`lead_home` into a factor, so that you get a different effect of `time_remaining` for every value of `lead_home`.
I would create multiple observations per game, one for every slice of time you are interested in.
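A hedged sketch of that model in R (the data frame `games` and its column names are hypothetical; one row per game-time slice):
```
library(mgcv)
games$lead_home <- factor(games$lead_home)   # goals(home team) - goals(away team)
fit <- gam(win_home ~ lead_home + s(time_remaining, by = lead_home),
           family = binomial, data = games)
plot(fit, pages = 1)                         # one smooth per value of the lead
```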
| null | CC BY-SA 2.5 | null | 2011-02-03T09:30:48.723 | 2011-02-03T09:30:48.723 | null | null | 1979 | null |
6840 | 1 | 6850 | null | 6 | 4701 | I would like to be sure I am able to compute the KL divergence based on a sample.
Assume the data come from a Gamma distribution with shape=1/.85 and scale=.85.
```
set.seed(937)
theta <- .85
x <- rgamma(1000, shape=1/theta, scale=theta)
```
Based on that sample, I would like to compute the KL divergence from the real underlying distribution to an inverse Gaussian distribution with mean 1 (mu=1) and precision 0.832 (lambda=0.832).
I obtain KL = 1.286916.
Can you confirm I compute it well?
| Estimate the Kullback-Leibler divergence | CC BY-SA 3.0 | null | 2011-02-03T09:31:02.100 | 2017-04-11T17:30:51.927 | 2020-06-11T14:32:37.003 | -1 | 3019 | [
"kullback-leibler"
]
|
6841 | 1 | null | null | 4 | 1106 | I have fitted a dynamic panel data model with Arellano-Bond estimator in gretl, here is the output:
```
Model 5: 2-step dynamic panel, using 2332 observations
Included 106 cross-sectional units
H-matrix as per Ox/DPD
Dependent variable: trvr
coefficient std. error z p-value
---------------------------------------------------------
Dtrvr(-1) 0.895381 0.0248490 36.03 2.55e-284 ***
const 0.0230952 0.00226823 10.18 2.39e-024 ***
x1 -0.0263556 0.00836633 -3.150 0.0016 ***
x2 0.127888 0.0171532 7.456 8.94e-014 ***
Sum squared resid 605.9396 S.E. of regression 0.510180
Number of instruments = 256
Test for AR(1) errors: z = -4.29161 [0.0000]
Test for AR(2) errors: z = 1.62503 [0.1042]
Sargan over-identification test: Chi-square(252) = 105.139 [1.0000]
Wald (joint) test: Chi-square(3) = 2333.35 [0.0000]
```
I have 2 questions about the results:
- How do I assess the fit?
- How can I simulate from the model?
| Simulate Arellano-Bond | CC BY-SA 4.0 | null | 2011-02-03T10:18:25.350 | 2018-08-15T12:51:00.363 | 2018-08-15T12:51:00.363 | 11887 | 1443 | [
"panel-data",
"simulation"
]
|
6842 | 2 | null | 6835 | 2 | null | Without more detail it's hard to give you a comprehensive response, but you might for example look at the Hurst exponent to detect if a series displays trending characteristics. There are many R packages which compute the Hurst exponent - in my opinion the best collection can be found in the package fArma.
There are many methods you could use to detect when a specific series is trending. A simple and on-line method is to take an exponential moving average of lagged returns.
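A toy sketch of that idea in R (the series and the smoothing constant are arbitrary choices of mine, not a recommendation):
```
set.seed(1)
price  <- cumsum(rnorm(500, mean = 0.02))    # made-up series with a mild upward drift
ret    <- diff(price)
lambda <- 0.1                                # smoothing constant (arbitrary)
ema    <- stats::filter(lambda * ret, 1 - lambda, method = "recursive")
tail(ema, 1) > 0                             # crude "currently trending up?" signal
```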
| null | CC BY-SA 2.5 | null | 2011-02-03T11:44:39.530 | 2011-02-03T11:44:39.530 | null | null | 2425 | null |
6843 | 2 | null | 5899 | 1 | null | A little OT, but one of my favourite nuggets of science is Arrow's theorem, so in case you're not familiar here's the wikipedia page:
[Arrow's impossibility theorem](http://en.wikipedia.org/wiki/Arrow%27s_impossibility_theorem)
And all from a PhD thesis, too. Quite inspiring really. Mine was rubbish.
| null | CC BY-SA 2.5 | null | 2011-02-03T12:51:24.553 | 2011-02-03T12:51:24.553 | null | null | 199 | null |
6844 | 2 | null | 6807 | 1 | null | I personally loved [this](http://statwww.epfl.ch/davison/SM/), which had a really good mix of theory and application (with lots of examples). It was a good match with Casella and Berger for a more theory-oriented approach. And for a broad-brush overview, [this](http://books.google.co.uk/books?id=th3fbFI1DaMC&lpg=PP1&ots=eMrRH_Smn2&dq=wasserman%20all%20of%20statistics&pg=PP1#v=onepage&q&f=false).
| null | CC BY-SA 2.5 | null | 2011-02-03T13:00:51.660 | 2011-02-03T13:00:51.660 | null | null | 3033 | null |
6845 | 2 | null | 672 | 3 | null | Bayes' theorem relates two ideas: probability and likelihood. Probability says: given this model, these are the outcomes. So: given a fair coin, I'll get heads 50% of the time. Likelihood says: given these outcomes, this is what we can say about the model. So: if you toss a coin 100 times and get 88 heads (to pick up on a previous example and make it more extreme), then the likelihood that the fair coin model is correct is not so high.
One of the standard examples used to illustrate Bayes' theorem is the idea of testing for a disease: if you take a test that's 95% accurate for a disease that 1 in 10000 of the population have, and you test positive, what are the chances that you have the disease?
The naive answer is 95%, but this ignores the issue that 5% of the tests on 9999 out of 10000 people will give a false positive. So your odds of having the disease are far lower than 95%.
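To make that concrete (assuming "95% accurate" means both a 95% true-positive rate and a 5% false-positive rate), Bayes' theorem gives
$$P(\text{disease} \mid \text{positive}) = \frac{0.95 \times 0.0001}{0.95 \times 0.0001 + 0.05 \times 0.9999} \approx 0.0019,$$
i.e. roughly a 1-in-500 chance rather than 95%.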
My use of the vague phrase "what are the chances" is deliberate. To use the probability/likelihood language: the probability that the test is accurate is 95%, but what you want to know is the likelihood that you have the disease.
Slightly off topic: The other classic example which Bayes theorem is used to solve in all the textbooks is the Monty Hall problem: You're on a quiz show. There is a prize behind one of three doors. You choose door one. The host opens door three to reveal no prize. Should you change to door two given the chance?
I like the rewording of the question (courtesy of the reference below): you're on a quiz show. There is a prize behind one of a million doors. You choose door one. The host opens all the other doors except door 104632 to reveal no prize. Should you change to door 104632?
My favourite book which discusses Bayes' theorem, very much from the Bayesian perspective, is "Information Theory, Inference and Learning Algorithms ", by David J. C. MacKay. It's a Cambridge University Press book, ISBN-13: 9780521642989. My answer is (I hope) a distillation of the kind of discussions made in the book. (Usual rules apply: I have no affiliations with the author, I just like the book).
| null | CC BY-SA 2.5 | null | 2011-02-03T13:13:45.523 | 2011-02-03T13:13:45.523 | null | null | null | null |
6847 | 2 | null | 672 | 2 | null | I really like Kevin Murphy's intro the to Bayes Theorem
[http://www.cs.ubc.ca/~murphyk/Bayes/bayesrule.html](http://www.cs.ubc.ca/~murphyk/Bayes/bayesrule.html)
The quote here is from an economist article:
[http://www.cs.ubc.ca/~murphyk/Bayes/economist.html](http://www.cs.ubc.ca/~murphyk/Bayes/economist.html)
>
The essence of the Bayesian approach is to provide a mathematical rule explaining how you should change your existing beliefs in the light of new evidence. In other words, it allows scientists to combine new data with their existing knowledge or expertise. The canonical example is to imagine that a precocious newborn observes his first sunset, and wonders whether the sun will rise again or not. He assigns equal prior probabilities to both possible outcomes, and represents this by placing one white and one black marble into a bag. The following day, when the sun rises, the child places another white marble in the bag. The probability that a marble plucked randomly from the bag will be white (ie, the child's degree of belief in future sunrises) has thus gone from a half to two-thirds. After sunrise the next day, the child adds another white marble, and the probability (and thus the degree of belief) goes from two-thirds to three-quarters. And so on. Gradually, the initial belief that the sun is just as likely as not to rise each morning is modified to become a near-certainty that the sun will always rise.
| null | CC BY-SA 2.5 | null | 2011-02-03T15:12:13.333 | 2011-02-03T15:12:13.333 | null | null | 2904 | null |
6848 | 2 | null | 2296 | 5 | null | Here are a few suggestions, more from the statistics literature, with an eye toward applications in finance:
- Geweke, J., & Zhou, G. (1996). Measuring the pricing error of the arbitrage pricing theory. Review of Financial Studies, 9(2), 557. Soc Financial Studies. Retrieved January 29, 2011, from http://rfs.oxfordjournals.org/content/9/2/557.abstract.
You might start here - a detailed discussion of identifiability issues (related to and including the sign indeterminacy you describe)
- Aguilar, O., & West, M. (2000). Bayesian Dynamic Factor Models and Portfolio Allocation. Journal of Business & Economic Statistics, 18(3), 338. doi: 10.2307/1392266.
- Lopes, H. F., & West, M. (2004). Bayesian model assessment in factor analysis. Statistica Sinica, 14(1), 41–68. Citeseer. Retrieved September 19, 2010, from here.
Good luck!
| null | CC BY-SA 4.0 | null | 2011-02-03T15:18:48.797 | 2023-01-06T04:40:28.030 | 2023-01-06T04:40:28.030 | 362671 | 26 | null |
6849 | 2 | null | 6834 | 7 | null | Three different approaches come to my mind
- decisions based solely on the following / follower lists. If a lot of the people you are following follow a particular person, the chances are high that you might be interested in that person's tweets (a toy scoring sketch follows this list).
- using links and hashtags. If you very often link to specific websites or use specific hashtags, you might be interested in people doing the same.
- doing some kind of document clustering on the tweets, to figure out who is writing similar things, and suggesting those people.
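As a toy sketch of the first idea (all account names below are made up), one could score a candidate account by the overlap between its followers and the accounts you already follow:
```
# Jaccard similarity between my follow list and each candidate's follower list
my_follows <- c("alice", "bob", "carol", "dave")
candidate_followers <- list(
  eve   = c("alice", "bob", "carol", "zed"),
  frank = c("zed", "quux")
)
jaccard <- function(a, b) length(intersect(a, b)) / length(union(a, b))
sort(sapply(candidate_followers, jaccard, b = my_follows), decreasing = TRUE)
```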
The problem with recommender systems, to me, is that most machine learning algorithms just use some kind of similarity measure to generate suggestions.
Check out "recommender systems" and "document clustering" as search keywords to get some more ideas.
| null | CC BY-SA 2.5 | null | 2011-02-03T15:22:11.077 | 2011-02-03T15:22:11.077 | null | null | 2904 | null |
6850 | 2 | null | 6840 | 9 | null | Mathematica, using symbolic integration (not an approximation!), reports a value that equals 1.6534640367102553437 to 20 decimal digits.
```
red = GammaDistribution[20/17, 17/20];
gray = InverseGaussianDistribution[1, 832/1000];
kl[pF_, qF_] := Module[{p, q},
p[x_] := PDF[pF, x];
q[x_] := PDF[qF, x];
Integrate[p[x] (Log[p[x]] - Log[q[x]]), {x, 0, \[Infinity]}]
];
kl[red, gray]
```
In general, using a small Monte-Carlo sample is inadequate for computing these integrals. As we saw in [another thread](https://stats.stackexchange.com/q/6814/919), the value of the KL divergence can be dominated by the integral in short intervals where the base PDF is nonzero and the other PDF is close to zero. A small sample can miss such intervals entirely and it can take a large sample to hit them enough times to obtain an accurate result.
Take a look at the Gamma PDF (red, dashed) and the log PDF (gray) for the Inverse Gaussian near 0:


In this case, the Gamma PDF stays high near 0 while the log of the Inverse Gaussian PDF diverges there. Virtually all of the KL divergence is contributed by values in the interval [0, 0.05], which has a probability of 3.2% under the Gamma distribution. The number of elements in a sample of $N = 1000$ Gamma variates which fall in this interval therefore has a Binomial(.032, $N$) distribution; as a proportion of the sample, its standard deviation equals $\sqrt{.032(1 - .032)/N} \approx 0.55\%$. Thus, as a rough estimate, we can't expect your integral to have a relative accuracy much better than $2 \times 0.55\% / 3.2\%$ -- give or take around 40% -- because the number of times it samples this critical interval has that amount of relative error. That accounts for the difference between your result and Mathematica's. To get this error down to 1%--a mere two decimal places of precision--you would need to multiply your sample approximately by $40^2 = 1600$: that is, you would need a few million values.
Here is a histogram of the natural logarithms of 1000 independent estimates of the KL divergence. Each estimate averages 1000 values randomly obtained from the Gamma distribution. The correct value is shown as the dashed red line.

```
Histogram[Log[Table[
Mean[Log[PDF[red, #]/PDF[gray, #]] & /@ RandomReal[red, 1000]], {i,1,1000}]]]
```
Although on the average this Monte-Carlo method is unbiased, most of the time (87% in this simulation) the estimate is too low. To make up for this, on occasion the overestimate can be gross: the largest of these estimates is 18.98. (The wide spread of values shows that the estimate of 1.286916 actually has no reliable decimal digits!) Because of this huge skewness in the distribution, the situation is actually much worse than I previously estimated with the simple binomial thought experiment. The average of these simulations (comprising 1000*1000 values total) is just 1.21, still about 25% less than the true value.
For computing the KL divergence in general, you need to use [adaptive quadrature](http://en.wikipedia.org/wiki/Adaptive_quadrature) or exact methods.
| null | CC BY-SA 2.5 | null | 2011-02-03T15:32:19.690 | 2011-02-03T17:44:15.207 | 2017-04-13T12:44:25.283 | -1 | 919 | null |
6851 | 1 | null | null | 3 | 1221 | Can the KS3D2 test as suggested by [Fasano and Franceschini (1987)](http://adsabs.harvard.edu/abs/1987MNRAS.225..155F) be used when one of the three variables takes discrete values between 0 and 40? The other two variables are continuous.
| 3d Kolmogorov-Smirnov test | CC BY-SA 2.5 | null | 2011-02-03T15:45:53.743 | 2019-10-08T18:43:29.583 | 2011-02-03T16:14:17.177 | 449 | null | [
"kolmogorov-smirnov-test"
]
|
6852 | 1 | 6879 | null | 1 | 2277 | I have a folder with a hundred comma-separated files (CSVs) whose filenames consist of a company's stock exchange followed by its symbol, delimited with a "_", e.g.: NASDAQ_MSFT.csv
Each file contains historical daily stock information, eg. a csv file looks like:
```
Date,Open,High,Low,Close,Volume
29-Dec-00,21.97,22.91,21.31,21.69,93999000
28-Dec-00,22.56,23.12,21.94,22.28,75565800
27-Dec-00,23.06,23.41,22.50,23.22,66881000
...
5-Jan-00,55.56,58.19,54.69,56.91,62712600
4-Jan-00,56.78,58.56,55.94,56.31,52866600
3-Jan-00,58.69,59.31,56.00,58.34,51680600
```
Now, I want to analyse this information (analyse the companies as if the company name is a column field rather than a new list of fields). But there are a couple of issues:
- Each company has its own file, with the symbol and exchange as the filename
- Some companies start at different dates than others. Eg. some might have a range of 30 days while others have a range of 30 months (every step is still 1 day difference though).
I am using SPSS as my analysis tool. My question is, how can I import these files to perform sensible analytical operations on them?
For example, I wish to see the average slope of the open price of all companies together, etc.
| Strategy for analysing stock history data from multiple files with SPSS | CC BY-SA 2.5 | null | 2011-02-03T16:40:05.237 | 2018-07-13T03:16:24.943 | null | null | 3040 | [
"distributions",
"spss"
]
|
6853 | 1 | 6858 | null | 18 | 20204 | Given $X_1$ and $X_2$ normal random variables with correlation coefficient $\rho$, how do I find the correlation between following lognormal random variables $Y_1$ and $Y_2$?
$Y_1 = a_1 \exp(\mu_1 T + \sqrt{T}X_1)$
$Y_2 = a_2 \exp(\mu_2 T + \sqrt{T}X_2)$
Now, if $X_1 = \sigma_1 Z_1$ and $X_2 = \sigma_2 Z_2$, where $Z_1$ and $Z_2$ are standard normals, from the linear transformation property, we get:
$Y_1 = a_1 \exp(\mu_1 T + \sqrt{T}\sigma_1 Z_1)$
$Y_2 = a_2 \exp(\mu_2 T + \sqrt{T}\sigma_2 (\rho Z_1 + \sqrt{1-\rho^2}Z_2))$
Now, how to go from here to compute correlation between $Y_1$ and $Y_2$?
| Correlation of log-normal random variables | CC BY-SA 2.5 | null | 2011-02-03T18:14:11.723 | 2017-08-24T09:32:38.873 | 2011-02-03T18:28:40.687 | null | 862 | [
"correlation",
"random-variable",
"lognormal-distribution"
]
|
6854 | 2 | null | 6807 | 2 | null | I think the most important thing here is to develop an intuition about statistics and some general statistical concepts. Perhaps the best way to do this is to have some domain that you can "own." This can provide a positive feedback loop where understanding about the domain helps you to understand more about the underlying statistics, which helps you to understand more about the domain, etc.
For me that domain was baseball stats. I understood that a batter that goes 3 for 4 in a game is not a "true" .750 hitter. This helps to understand the more general point that the sample data is not the same as the underlying distribution. I also know he is probably closer to an average player than to a .750 hitter, so this helps to understand concepts like regression to the mean. From there I can get to full-blown Bayesian inference where my prior probability distribution had a mean of that of the mean baseball player, and I now have 4 new samples with which to update my posterior distribution.
I don't know what that domain is for you, but I would guess it would be more helpful than a mere textbook. Examples help to understand the theory, which helps to understand the examples. A textbook with examples is nice, but unless you can make those examples "yours" then I wonder if you will get enough from them.
| null | CC BY-SA 2.5 | null | 2011-02-03T18:24:47.533 | 2011-02-03T18:24:47.533 | null | null | 2485 | null |
6855 | 1 | null | null | 7 | 5437 | I used cforest and randomForest for a 300 rows and 9 columns dataset and received good (almost overfitted - error equal to zero) results for randomForest and big prediction errors for cforest classifiers. What is the main difference between these two procedures?
I admit that for cforest I tried every possible combination of input parameters; e.g. the best one, which still gave large classification errors, was `cforest_control(savesplitstats = TRUE, ntree=100, mtry=8, mincriterion=0, maxdepth=400, maxsurrogate = 1)`.
For very big datasets (about 10000 rows and 192 columns) randomForest and cforest have almost the same errors (the former slightly better, on the same level as radial-kernel SVMs), but for the small dataset mentioned above, to my surprise, there seems to be no way to improve cforest's prediction accuracy...
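For reference, a minimal sketch of the kind of side-by-side fit I mean (with `mydata` and the response `y` standing in for my 300 x 9 dataset):
```
library(randomForest)
library(party)
rf <- randomForest(y ~ ., data = mydata, ntree = 500)
cf <- cforest(y ~ ., data = mydata,
              controls = cforest_unbiased(ntree = 500, mtry = 3))
# errors compared via out-of-bag predictions, e.g. rf$confusion and
# table(predict(cf, OOB = TRUE), mydata$y)
```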
| cforest and randomForest classification prediction error | CC BY-SA 2.5 | null | 2011-02-03T18:55:04.247 | 2015-06-16T15:10:02.920 | 2011-02-04T06:56:45.530 | 2116 | 3041 | [
"r",
"machine-learning",
"classification",
"random-forest"
]
|
6856 | 1 | 6859 | null | 17 | 8786 | Since regression modeling is often more "art" than science, I often find myself testing many iterations of a regression structure. What are some efficient ways to summarize the information from these multiple model runs in an attempt to find the "best" model? One approach I've used is to put all the models into a list and run `summary()` across that list, but I imagine there are more efficient ways to compare?
Sample code & models:
```
ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14)
trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69)
group <- gl(2,10,20, labels=c("Ctl","Trt"))
weight <- c(ctl, trt)
lm1 <- lm(weight ~ group)
lm2 <- lm(weight ~ group - 1)
lm3 <- lm(log(weight) ~ group - 1)
#Draw comparisions between models 1 - 3?
models <- list(lm1, lm2, lm3)
lapply(models, summary)
```
| Aggregating results from linear model runs R | CC BY-SA 2.5 | null | 2011-02-03T19:00:03.880 | 2011-02-03T20:23:39.190 | null | null | 696 | [
"r",
"regression"
]
|
6858 | 2 | null | 6853 | 21 | null | I assume that $X_1\sim N(0,\sigma_1^2)$ and $X_2\sim N(0,\sigma_2^2)$. Denote $Z_i=\exp(\sqrt{T}X_i)$. Then
\begin{align}
\log(Z_i)\sim N(0,T\sigma_i^2)
\end{align}
so $Z_i$ are [log-normal](http://en.wikipedia.org/wiki/Lognormal_distribution). Thus
\begin{align}
EZ_i&=\exp\left(\frac{T\sigma_i^2}{2}\right)\\
var(Z_i)&=(\exp(T\sigma_i^2)-1)\exp(T\sigma_i^2)
\end{align}
and
\begin{align}
EY_i&=a_i\exp(\mu_iT)EZ_i\\
var(Y_i)&=a_i^2\exp(2\mu_iT)var(Z_i)
\end{align}
Then using the formula [for m.g.f of multivariate normal](http://en.wikipedia.org/wiki/Multivariate_normal_distribution) we have
\begin{align}
EY_1Y_2&=a_1a_2\exp((\mu_1+\mu_2)T)E\exp(\sqrt{T}X_1+\sqrt{T}X_2)\\
&=a_1a_2\exp((\mu_1+\mu_2)T)\exp\left(\frac{1}{2}T(\sigma_1^2+2\rho\sigma_1\sigma_2+\sigma_2^2)\right)
\end{align}
So
\begin{align}
cov(Y_1,Y_2)&=EY_1Y_2-EY_1EY_2\\
&=a_1a_2\exp((\mu_1+\mu_2)T)\exp\left(\frac{T}{2}(\sigma_1^2+\sigma_2^2)\right)(\exp(\rho\sigma_1\sigma_2T)-1)
\end{align}
Now the correlation of $Y_1$ and $Y_2$ is covariance divided by square roots of variances:
\begin{align}
\rho_{Y_1Y_2}=\frac{\exp(\rho\sigma_1\sigma_2T)-1}{\sqrt{\left(\exp(\sigma_1^2T)-1\right)\left(\exp(\sigma_2^2T)-1\right)}}
\end{align}
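A quick Monte-Carlo check of this formula in R (arbitrary parameter values; the constants $a_i$ and $\mu_i T$ drop out of the correlation, and `tt` plays the role of $T$):
```
set.seed(1)
tt <- 1; s1 <- 0.4; s2 <- 0.7; rho <- 0.5
z1 <- rnorm(1e6)
z2 <- rho * z1 + sqrt(1 - rho^2) * rnorm(1e6)
y1 <- exp(sqrt(tt) * s1 * z1)
y2 <- exp(sqrt(tt) * s2 * z2)
cor(y1, y2)                                          # simulated correlation
(exp(rho * s1 * s2 * tt) - 1) /
  sqrt((exp(s1^2 * tt) - 1) * (exp(s2^2 * tt) - 1))  # closed form above
```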
| null | CC BY-SA 2.5 | null | 2011-02-03T19:27:54.193 | 2011-02-03T20:17:19.980 | 2011-02-03T20:17:19.980 | 2116 | 2116 | null |
6859 | 2 | null | 6856 | 20 | null | [Plot them!](http://tables2graphs.com/doku.php?id=04_regression_coefficients)
[http://svn.cluelessresearch.com/tables2graphs/longley.png](http://svn.cluelessresearch.com/tables2graphs/longley.png)
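If you want to build such a plot by hand, here is a minimal base-R sketch (assuming `lm1`, `lm2`, `lm3` from the question are in the workspace):
```
models <- list(lm1 = lm1, lm2 = lm2, lm3 = lm3)
est <- lapply(models, function(m) cbind(coef(m), confint(m)))  # estimate, 2.5%, 97.5%
op <- par(mfrow = c(1, length(est)), mar = c(4, 9, 2, 1))
for (nm in names(est)) {
  e <- est[[nm]]
  k <- nrow(e)
  plot(e[, 1], seq_len(k), xlim = range(e), yaxt = "n", pch = 19,
       xlab = "estimate", ylab = "", main = nm)
  segments(e[, 2], seq_len(k), e[, 3], seq_len(k))   # 95% confidence intervals
  axis(2, at = seq_len(k), labels = rownames(e), las = 2)
  abline(v = 0, lty = 2)
}
par(op)
```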
Or, if you must, use tables:
The [apsrtable](http://cran.r-project.org/web/packages/apsrtable/index.html) package or the `mtable` function in the [memisc](http://cran.r-project.org/web/packages/memisc/index.html) package.
Using `mtable`
```
mtable123 <- mtable("Model 1"=lm1,"Model 2"=lm2,"Model 3"=lm3,
summary.stats=c("sigma","R-squared","F","p","N"))
> mtable123
Calls:
Model 1: lm(formula = weight ~ group)
Model 2: lm(formula = weight ~ group - 1)
Model 3: lm(formula = log(weight) ~ group - 1)
=============================================
Model 1 Model 2 Model 3
---------------------------------------------
(Intercept) 5.032***
(0.220)
group: Trt/Ctl -0.371
(0.311)
group: Ctl 5.032*** 1.610***
(0.220) (0.045)
group: Trt 4.661*** 1.527***
(0.220) (0.045)
---------------------------------------------
sigma 0.696 0.696 0.143
R-squared 0.073 0.982 0.993
F 1.419 485.051 1200.388
p 0.249 0.000 0.000
N 20 20 20
=============================================
```
| null | CC BY-SA 2.5 | null | 2011-02-03T19:32:10.660 | 2011-02-03T19:47:52.257 | 2011-02-03T19:47:52.257 | 375 | 375 | null |
6860 | 1 | 6864 | null | 2 | 3798 | I am doing some histogram rowstacked plots using the gnuplot command
```
plot 'file.dat' using (100.*$2/$6) title column(2), for [i=3:5] '' using (100.*column(i)/column(6)) title column(i)
```
I want to change all the colors used to fill the bars in the histogram. To change the first box color, I simply add `lt rgb "color"` before the comma, but I don't know how to modify the colors in the loop so that each box gets a different color.
Modifying the line's style does nothing.
| Fill colors in Gnuplot | CC BY-SA 2.5 | null | 2011-02-03T19:51:29.630 | 2011-02-03T23:52:06.230 | 2011-02-03T22:35:53.690 | null | 3043 | [
"data-visualization",
"gnuplot"
]
|
6861 | 2 | null | 2296 | 2 | null | This article deals with Bayesian estimation of a dynamic hierarchical factor model:
E. Moench, S. Ng, S. Potter. Dynamic Hierarchical Factor Models, Federal Reserve Bank of New York, 2009, Report No. 412. [link](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1523787).
Naturally it can be adapted for non-hierarchical case. As usual you will find more references on topic by perusing the references in the article.
| null | CC BY-SA 2.5 | null | 2011-02-03T20:06:33.490 | 2011-02-03T20:06:33.490 | null | null | 2116 | null |
6862 | 2 | null | 6856 | 12 | null | The following doesn't answer exactly the question. It may give you some ideas, though. It's something I recently did in order to assess the fit of several regression models using one to four independent variables (the dependent variable was in the first column of the df1 dataframe).
```
# create the combinations of the 4 independent variables
library(foreach)
xcomb <- foreach(i=1:4, .combine=c) %do% {combn(names(df1)[-1], i, simplify=FALSE) }
# create formulas
formlist <- lapply(xcomb, function(l) formula(paste(names(df1)[1], paste(l, collapse="+"), sep="~")))
```
The contents of as.character(formlist) was
```
[1] "price ~ sqft" "price ~ age"
[3] "price ~ feats" "price ~ tax"
[5] "price ~ sqft + age" "price ~ sqft + feats"
[7] "price ~ sqft + tax" "price ~ age + feats"
[9] "price ~ age + tax" "price ~ feats + tax"
[11] "price ~ sqft + age + feats" "price ~ sqft + age + tax"
[13] "price ~ sqft + feats + tax" "price ~ age + feats + tax"
[15] "price ~ sqft + age + feats + tax"
```
Then I collected some useful indices
```
# R squared
models.r.sq <- sapply(formlist, function(i) summary(lm(i))$r.squared)
# adjusted R squared
models.adj.r.sq <- sapply(formlist, function(i) summary(lm(i))$adj.r.squared)
# MSEp
models.MSEp <- sapply(formlist, function(i) anova(lm(i))['Mean Sq']['Residuals',])
# Full model MSE
MSE <- anova(lm(formlist[[length(formlist)]]))['Mean Sq']['Residuals',]
# Mallow's Cp
models.Cp <- sapply(formlist, function(i) {
SSEp <- anova(lm(i))['Sum Sq']['Residuals',]
mod.mat <- model.matrix(lm(i))
n <- dim(mod.mat)[1]
p <- dim(mod.mat)[2]
c(p,SSEp / MSE - (n - 2*p))
})
df.model.eval <- data.frame(model=as.character(formlist), p=models.Cp[1,],
r.sq=models.r.sq, adj.r.sq=models.adj.r.sq, MSEp=models.MSEp, Cp=models.Cp[2,])
```
The final dataframe was
```
model p r.sq adj.r.sq MSEp Cp
1 price~sqft 2 0.71390776 0.71139818 42044.46 49.260620
2 price~age 2 0.02847477 0.01352823 162541.84 292.462049
3 price~feats 2 0.17858447 0.17137907 120716.21 351.004441
4 price~tax 2 0.76641940 0.76417343 35035.94 20.591913
5 price~sqft+age 3 0.80348960 0.79734865 33391.05 10.899307
6 price~sqft+feats 3 0.72245824 0.71754599 41148.82 46.441002
7 price~sqft+tax 3 0.79837622 0.79446120 30536.19 5.819766
8 price~age+feats 3 0.16146638 0.13526220 142483.62 245.803026
9 price~age+tax 3 0.77886989 0.77173666 37884.71 20.026075
10 price~feats+tax 3 0.76941242 0.76493500 34922.80 21.021060
11 price~sqft+age+feats 4 0.80454221 0.79523470 33739.36 12.514175
12 price~sqft+age+tax 4 0.82977846 0.82140691 29640.97 3.832692
13 price~sqft+feats+tax 4 0.80068220 0.79481991 30482.90 6.609502
14 price~age+feats+tax 4 0.79186713 0.78163109 36242.54 17.381201
15 price~sqft+age+feats+tax 5 0.83210849 0.82091573 29722.50 5.000000
```
Finally, a Cp plot (using library wle)

| null | CC BY-SA 2.5 | null | 2011-02-03T20:10:28.257 | 2011-02-03T20:23:39.190 | 2011-02-03T20:23:39.190 | 339 | 339 | null |
6863 | 2 | null | 6851 | 3 | null | The 3d Kolmogorov-Smirnov test applies when you have a sample of 3d vectors. The idea is to compare the sample distribution to a model distribution, so the main question is what the model distribution looks like.
Now the KS test compares the cumulative distributions of the sample and the model distribution, and the 3d test does the same. If you worry about discrete values, look at the bottom of page 6 of your reference. The authors test the behaviour of their statistic when the model distribution (they call it the parent distribution) is constant in certain cubes. This means that the tested distribution is discrete. So the answer to your question seems to be yes; discreteness is not a problem.
To make sure, you can always run some Monte-Carlo simulations in which you control the model distribution, choose it to be similar to the one you want to test, and see how the statistic performs.
| null | CC BY-SA 2.5 | null | 2011-02-03T20:34:22.620 | 2011-02-03T20:34:22.620 | null | null | 2116 | null |
6864 | 2 | null | 6860 | 2 | null | I was able to find the answer by myself. It's pretty easy, in fact, if you know about the ternary operator in gnuplot.
The first thing you should do is create a function to set the color you want for each box. A valid function could be:
```
colorfunc(x) = x == 3 ? "#C17D11" : x == 4 ? "#73d216" : "#3465A4"
```
Then, in the for-loop part of the plot command you simply add `lt rgb colorfunc(i)`. The plot command should be:
```
plot 'file.dat' using (100.*$2/$6) title column(2), \
     for [i=3:5] '' using (100.*column(i)/column(6)) title column(i) lt rgb colorfunc(i)
```
and each box will be filled with the desired color.
I prefer to use HTML color codes in the color function, but you can use any of the valid ways to specify a color in gnuplot.
| null | CC BY-SA 2.5 | null | 2011-02-03T23:52:06.230 | 2011-02-03T23:52:06.230 | null | null | 3043 | null |
6865 | 1 | 14207 | null | 12 | 6997 | How can I test effects in a Split-Plot ANOVA using suitable model comparisons for use with the `X` and `M` arguments of `anova.mlm()` in R? I'm familiar with `?anova.mlm` and Dalgaard (2007)[1]. Unfortunately it only brushes Split-Plot Designs. Doing this in a fully randomized design with two within-subjects factors:
```
N <- 20 # 20 subjects total
P <- 3 # levels within-factor 1
Q <- 3 # levels within-factor 2
DV <- matrix(rnorm(N* P*Q), ncol=P*Q) # random data in wide format
id <- expand.grid(IVw1=gl(P, 1), IVw2=gl(Q, 1)) # intra-subjects layout of data matrix
library(car) # for Anova()
fitA <- lm(DV ~ 1) # between-subjects design: here no between factor
resA <- Anova(fitA, idata=id, idesign=~IVw1*IVw2)
summary(resA, multivariate=FALSE, univariate=TRUE) # all tests ...
```
The following model comparisons lead to the same results. The restricted model doesn't include the effect in question but all other effects of the same order or lower; the full model adds the effect in question.
```
anova(fitA, idata=id, M=~IVw1 + IVw2, X=~IVw2, test="Spherical") # IVw1
anova(fitA, idata=id, M=~IVw1 + IVw2, X=~IVw1, test="Spherical") # IVw2
anova(fitA, idata=id, M=~IVw1 + IVw2 + IVw1:IVw2,
X=~IVw1 + IVw2, test="Spherical") # IVw1:IVw2
```
A Split-Plot design with one within- and one between-subjects factor:
```
idB <- subset(id, IVw2==1, select="IVw1") # use only first within factor
IVb <- gl(2, 10, labels=c("A", "B")) # between-subjects factor
fitB <- lm(DV[ , 1:P] ~ IVb) # between-subjects design
resB <- Anova(fitB, idata=idB, idesign=~IVw1)
summary(resB, multivariate=FALSE, univariate=TRUE) # all tests ...
```
These are the `anova()` commands to replicate the tests, but I don't know why they work. Why do the tests of the following model comparisons lead to the same results?
```
anova(fitB, idata=idB, X=~1, test="Spherical") # IVw1, IVw1:IVb
anova(fitB, idata=idB, M=~1, test="Spherical") # IVb
```
Two within-subjects factors and one between-subjects factor:
```
fitC <- lm(DV ~ IVb) # between-subjects design
resC <- Anova(fitC, idata=id, idesign=~IVw1*IVw2)
summary(resC, multivariate=FALSE, univariate=TRUE) # all tests ...
```
How do I replicate the results given above with the corresponding model comparisons for use with the `X` and `M` arguments of `anova.mlm()`? What is the logic behind these model comparisons?
EDIT: suncoolsu pointed out that for all practical purposes, data from these designs should be analyzed using mixed models. However, I'd still like to understand how to replicate the results of `summary(Anova())` with `anova.mlm(..., X=?, M=?)`.
[1]: [Dalgaard, P. 2007. New Functions for Multivariate Analysis. R News, 7(2), 2-7.](http://lib.stat.cmu.edu/R/CRAN/doc/Rnews/Rnews_2007-2.pdf)
| Split-Plot ANOVA: model comparison tests in R | CC BY-SA 2.5 | null | 2011-02-03T23:57:29.870 | 2011-08-15T13:30:21.613 | 2011-02-07T20:12:30.183 | 1909 | 1909 | [
"r",
"anova",
"multivariate-analysis",
"repeated-measures",
"split-plot"
]
|
6866 | 2 | null | 1637 | 7 | null | One obvious point that everyone's overlooked: with ANOVA you're testing the null hypothesis that the mean is identical regardless of the values of your explanatory variables. With a t-test you can also test the one-sided case, namely that the mean is greater given one value of your explanatory variable than given the other.
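For example, with toy data in R:
```
set.seed(1)
x <- rnorm(20, mean = 1)   # group given one value of the explanatory variable
y <- rnorm(20, mean = 0)   # group given the other value
t.test(x, y, alternative = "greater")   # H1: mean of x is greater than mean of y
```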
| null | CC BY-SA 2.5 | null | 2011-02-04T01:29:59.663 | 2011-02-04T01:29:59.663 | null | null | 1347 | null |
6867 | 1 | null | null | 1 | 1515 | Hey everyone, thanks for taking the time to look at this question. It's pretty simple, and I understand statistics quite well, but I don't think I'm wrapping my head around the wording; it seems to me as if something is missing. Here's the question.
>
The manager of the aerospace division of General Aeronautics has estimated the price it can charge for providing satellite launching services to commercial firms. Her most optimistic estimate (a price not expected to be exceeded more than 10 percent of the time) is 2 million. Her most pessimistic estimate (a lower price than this one is not expected more than 10 percent of the time) is 1 million. The expected value estimate is 1.5 million. The price distribution is believed to be approximately normal.
What is the expected price? What is the std dev of the launch price? What is the probability of receiving a price less than 1.2 million?
Thanks for your suggestions.
| Fitting normal distribution to given quantiles | CC BY-SA 3.0 | null | 2011-02-04T01:39:21.263 | 2011-06-20T21:01:58.190 | 2011-04-21T19:59:06.003 | null | 2644 | [
"self-study"
]
|
6868 | 2 | null | 6867 | 1 | null | Actually, I think I answered it; I was overcomplicating it. The probability under the standard normal curve of a value greater than 2 million is 10%, so the z value associated with 90% (to the left of 2 million) is about 1.28. So:
1.28 = (value - mean)/std dev. The mean is given as 1.5, which is also the expected price, right? So the std dev works out to about 0.39. This means the probability of a price less than 1.2 million is about 22%.
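A quick check of this arithmetic in R:
```
z <- qnorm(0.90)                    # ~ 1.28
sigma <- (2 - 1.5) / z              # ~ 0.39 (in millions)
pnorm(1.2, mean = 1.5, sd = sigma)  # ~ 0.22
```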
Sound right to anyone?
Thanks!
| null | CC BY-SA 2.5 | null | 2011-02-04T02:13:30.190 | 2011-02-04T02:13:30.190 | null | null | 2644 | null |
6869 | 2 | null | 6865 | 9 | null | Split-plot designs originated in agriculture, hence the name. But they occur frequently and are, I would say, the workhorse of most clinical trials. The main plot is treated with a level of one factor while the levels of some other factor are allowed to vary with the subplots. The design arises as a result of restrictions on a full randomization. For example: a field may be divided into four subplots. It may be possible to plant different varieties in the subplots, but only one type of irrigation may be used for the whole field. Note the distinction between splits and blocks. Blocks are features of the experimental units which we have the option to take advantage of in the experimental design, because we know they are there. Splits, on the other hand, impose restrictions on what assignments of factors are possible. They impose requirements on the design that prevent a complete randomization.
They are used a lot in clinical trials where one factor is easy to change while another factor takes much more time to change. If the experimenter must do all runs for each level of the hard-to-change factor consecutively, a split-plot design results, with the hard-to-change factor representing the whole-plot factor.
Here is an example: In an agricultural field trial, the objective was to determine the effects of two crop varieties and four different irrigation methods. Eight fields were available, but only one type of irrigation may be applied to each field. The fields may be divided into two parts with a different variety in each part. The whole plot factor is the irrigation, which should be randomly assigned to fields. Within each field, the variety is assigned.
This is how you do this in `R`:
```
install.packages("faraway")
data(irrigation)
summary(irrigation)
library(lme4)
R> (lmer(yield ~ irrigation * variety + (1|field), data = irrigation))
Linear mixed model fit by REML
Formula: yield ~ irrigation * variety + (1 | field)
Data: irrigation
AIC BIC logLik deviance REMLdev
65.4 73.1 -22.7 68.6 45.4
Random effects:
Groups Name Variance Std.Dev.
field (Intercept) 16.20 4.02
Residual 2.11 1.45
Number of obs: 16, groups: field, 8
Fixed effects:
Estimate Std. Error t value
(Intercept) 38.50 3.02 12.73
irrigationi2 1.20 4.28 0.28
irrigationi3 0.70 4.28 0.16
irrigationi4 3.50 4.28 0.82
varietyv2 0.60 1.45 0.41
irrigationi2:varietyv2 -0.40 2.05 -0.19
irrigationi3:varietyv2 -0.20 2.05 -0.10
irrigationi4:varietyv2 1.20 2.05 0.58
Correlation of Fixed Effects:
(Intr) irrgt2 irrgt3 irrgt4 vrtyv2 irr2:2 irr3:2
irrigation2 -0.707
irrigation3 -0.707 0.500
irrigation4 -0.707 0.500 0.500
varietyv2 -0.240 0.170 0.170 0.170
irrgtn2:vr2 0.170 -0.240 -0.120 -0.120 -0.707
irrgtn3:vr2 0.170 -0.120 -0.240 -0.120 -0.707 0.500
irrgtn4:vr2 0.170 -0.120 -0.120 -0.240 -0.707 0.500 0.500
```
Basically, what this model says is, irrigation and variety are fixed effects and variety is nested within irrigation. The fields are the random effects and pictorially will be something like
>
I_1 | I_2 | I_3 | I_4
V_1 V_2 | V_1 V_2 | V_1 V_2 | V_1 V_2
But this was a special variant with a fixed whole-plot effect and a fixed subplot effect. There can be variants in which one or more are random, and more complicated designs like split-split-... plot designs. Basically, you can go wild and crazy. But as long as the underlying structure and distribution (i.e. fixed or random, nested or crossed, ...) are clearly understood, an `lmer`-Ninja will have no trouble modeling them. Maybe the interpretation will be a mess.
Regarding comparisons, say you have `lmer1` and `lmer2`:
```
anova(lmer1, lmer2)
```
will give you the appropriate test based on the chi-squared test statistic, with degrees of freedom equal to the difference in the number of parameters.
cf:
Faraway, J., Extending Linear Models with R.
Casella, G., Statistical Design
| null | CC BY-SA 2.5 | null | 2011-02-04T02:36:57.687 | 2011-02-04T05:53:02.263 | 2011-02-04T05:53:02.263 | 1307 | 1307 | null |
6870 | 1 | 7461 | null | 12 | 6169 | I have a question on how to fit a censoring problem in JAGS.
I observe a bivariate mixture normal where the X values have measurement error. I would like to model the true underlying 'means' of the observed censored values.
\begin{align*}
\lceil x_{true}+\epsilon \rceil &= x_{observed} \\
\epsilon &\sim N(0, \mathrm{sd}=.5)
\end{align*}
Here is what I have now:
```
for (i in 1:n){
x[i,1:2]~dmnorm(mu[z[i],1:2], tau[z[i],1:2,1:2])
z[i]~dcat(prob[ ])
}
```
Y also has measurement error. What I want to do is something like this:
```
for (i in 1:n){
x_obs[i] ~ dnorm(x_true[i],prec_x)I(x_true[i],)
y_obs[i] ~ dnorm(y_true[i],prec_y)
c(x_true[i]:y_true[i])~dmnorm(mu[ z [ i ],1:2], tau[z[i],1:2,1:2])
z[i]~dcat(prob[ ])
}
#priors for measurement error
e_x~dunif(.1,.9)
prec_x<-1/pow(e_x,2)
e_y~dunif(2,4)
prec_y<-1/pow(e_y,2)
```
Obviously the c command is not valid in JAGS.
Thanks in advance.
| Censoring/Truncation in JAGS | CC BY-SA 2.5 | null | 2011-02-04T02:47:18.727 | 2018-04-23T20:44:09.343 | 2011-02-15T18:24:29.840 | 8 | 2310 | [
"markov-chain-montecarlo",
"censoring",
"truncation",
"jags"
]
|
6871 | 2 | null | 6870 | 11 | null | Perhaps this is what you are looking for:
```
x_obs[i] ~ dnorm(x_true[i],prec_x)T(x_true[i], )
```
JAGS has options for both censoring and truncation. It sounds like you want truncation, since you know a-priori that the observation lies within a particular range
Read the [user's manual](http://sourceforge.net/projects/mcmc-jags/files/Manuals/2.x/jags_user_manual.pdf/download) for more details about how jags uses truncation and censoring.
| null | CC BY-SA 2.5 | null | 2011-02-04T04:14:18.500 | 2011-02-21T18:21:36.643 | 2011-02-21T18:21:36.643 | 1381 | 1381 | null |
6872 | 2 | null | 6773 | 2 | null | I have worked this out myself.
The lower bound for phi can be estimated from
```
-ln(0.5)/(max separating distance between points)
```
To find the max separating distance I used the following code in R. My data are in a flat file with x and y coords renamed to long and lat respectively:
```
data <- read.csv(file="file.csv", header=T, sep=",")
coords <- data.frame(data$long,data$lat)
library(sp)
pointDist <- apply(coords, 1, function(eachPoint) spDistsN1(as.matrix(coords), eachPoint, longlat=TRUE))
distances <- as.vector(pointDist)
max(distances)
```
| null | CC BY-SA 2.5 | null | 2011-02-04T04:27:12.527 | 2011-02-04T04:27:12.527 | null | null | 1834 | null |
6873 | 2 | null | 6838 | 6 | null | It seems to me that you have enough data to model the dependence on space-time and meteorological influences of both the bias of forecast errors (i.e. tendency to systematically over-/underestimate [first moment]) and their variance [second moment].
For exploration of the bias, I'd just do a lot of scatterplots, heatmaps or hexbin plots.
For exploration of the variability, I'd just square the original errors and then again do a lot of scatterplots, heatmaps or hexbin plots. This is of course not entirely unproblematic if you have lots of bias, but it may still help to see patterns of covariate-influenced heteroskedasticity.
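For instance, with the errors in a data frame `d` with (made-up) columns `error`, `lead` and `hour`, a quick first pass in base R could be:
```
# first moment (bias) and second moment (variability) against one covariate
smoothScatter(d$lead, d$error,   xlab = "lead time t", ylab = "forecast error")
smoothScatter(d$lead, d$error^2, xlab = "lead time t", ylab = "squared error")
# binned means make systematic patterns easier to see
plot(tapply(d$error,   d$hour, mean), type = "b", xlab = "hour h", ylab = "mean error")
plot(tapply(d$error^2, d$hour, mean), type = "b", xlab = "hour h", ylab = "mean squared error")
```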
Colleagues of mine did a nice techreport that details a very flexible method for fitting these kind of models (also allows for modelling of higher moments, if necessary) that also has a good `R`-implementation [gamboostLSS](https://r-forge.r-project.org/projects/gamboostlss/) based on `mboost`: [Mayr, Andreas; Fenske, Nora; Hofner, Benjamin; Kneib, Thomas and Schmid, Matthias (2010): GAMLSS for high-dimensional data – a flexible approach based on boosting.](http://epub.ub.uni-muenchen.de/11938/). Assuming you have access to machines with a lot of RAM (your datasets seems to be BIG), you can estimate all kinds of semiparametric effects (like smooth surface estimators for spatial effects or the joint effect of $t$ and $h$, tensor product splines for tempo-spatial effects or smooth interactions of meteorological effects etc..) for the different moments and perform term selection at the same time in order to get a parsimonious and interpretable model. The hope would be that the terms in this model are sufficient to account for the spatio-temporal autocorrelation structure of the forecast errors, but you should probably check the residuals of these models for autocorrelation (i.e. look at some variograms and ACFs).
| null | CC BY-SA 2.5 | null | 2011-02-04T10:48:43.283 | 2011-02-04T10:48:43.283 | null | null | 1979 | null |
6874 | 1 | 6884 | null | 16 | 6160 | There are well-known on-line formulas for computing exponentially weighted moving averages and standard deviations of a process $(x_n)_{n=0,1,2,\dots}$. For the mean,
$\mu_n = (1-\alpha) \mu_{n-1} + \alpha x_n$
and for the variance
$\sigma_n^2 = (1-\alpha) \sigma_{n-1}^2 + \alpha(x_n - \mu_{n-1})(x_n - \mu_n)$
from which you can compute the standard deviation.
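For reference, a direct R implementation of these two updates (a sketch, with an arbitrary initialization):
```
ewma_moments <- function(x, alpha, mu0 = x[1], var0 = 0) {
  mu <- mu0; v <- var0
  for (xn in x) {
    mu_prev <- mu
    mu <- (1 - alpha) * mu + alpha * xn                         # updated mean
    v  <- (1 - alpha) * v + alpha * (xn - mu_prev) * (xn - mu)  # updated variance
  }
  c(mean = mu, sd = sqrt(v))
}
ewma_moments(rnorm(10000, mean = 2, sd = 3), alpha = 0.05)
```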
Are there similar formulas for on-line computation of exponential weighted third- and fourth-central moments? My intuition is that they should take the form
$M_{3,n} = (1-\alpha) M_{3,n-1} + \alpha f(x_n,\mu_n,\mu_{n-1},S_n,S_{n-1})$
and
$M_{4,n} = (1-\alpha) M_{4,n-1} + \alpha g(x_n,\mu_n,\mu_{n-1},S_n,S_{n-1},M_{3,n},M_{3,n-1})$
from which you could compute the skewness $\gamma_n = M_{3,n} / \sigma_n^3$ and the kurtosis $k_n = M_{4,n}/\sigma_n^4$ but I've not been able to find simple, closed-form expression for the functions $f$ and $g$.
---
Edit: Some more information. The updating formula for moving variance is a special case of the formula for the exponential weighted moving covariance, which can be computed via
$C_n(x,y) = (1-\alpha) C_{n-1}(x,y) + \alpha (x_n - \bar{x}_n) (y_n - \bar{y}_{n-1})$
where $\bar{x}_n$ and $\bar{y}_n$ are the exponential moving means of $x$ and $y$. The asymmetry between $x$ and $y$ is illusory, and disappears when you notice that $y_n-\bar{y}_n = (1-\alpha) (y_n-\bar{y}_{n-1})$.
Formulas like this can be computed by writing the central moment as an expectation $E_n(\cdot)$, where weights in the expectation are understood to be exponential, and using the fact that for any function $f(x)$ we have
$E_n(f(x)) = \alpha f(x_n) + (1-\alpha) E_{n-1}(f(x))$
It's easy to derive the updating formulas for the mean and variance using this relation, but it's proving to be more tricky for the third and fourth central moments.
| Exponential weighted moving skewness/kurtosis | CC BY-SA 2.5 | null | 2011-02-04T12:01:50.607 | 2013-10-17T13:28:59.293 | 2011-02-04T12:49:50.233 | 2425 | 2425 | [
"moments",
"online-algorithms",
"kurtosis"
]
|
6876 | 1 | 6877 | null | 6 | 1514 | I've recently been looking for top-of-the-line statisticians in a recruiting process for our company. Myself, I'm a Physics Engineering major. I gather that great mathematical statisticians have studied somewhat different courses, and in much more depth.
When evaluating a candidate, are courses a good indicator of this person being excellent?
Preferably we're talking graduate or post-graduate level.
---
We're looking to fill roles of data miners, statistical modeling and data visualization. Thanks Chris, for the suggestion to clarify.
| What university level statistics courses are considered advanced/hard? | CC BY-SA 3.0 | null | 2011-02-04T13:09:58.190 | 2017-04-25T07:39:19.733 | 2017-04-25T07:39:19.733 | 28666 | 3048 | [
"careers",
"academia"
]
|
6877 | 2 | null | 6876 | 10 | null | It really depends what your company is doing. Are you looking for machine learning experts? Data visualisation experts? Data mining experts?
When I interview statistics PhDs I like to ask them questions about linear regression, as I feel that anyone claiming to be an expert in statistics should at the very minimum be able to explain linear regression to me, and it's surprising how many can't.
Apart from that I'd consider it to be a good sign if they can have a good discussion about model selection/validation procedures, the concept of training and validation sets, cross-validation etc. If they know about classification algorithms (k-NN, SVM, decision trees etc) and can discuss their strengths/weaknesses that's even better.
I find that the particular courses they've studied are rarely a good indicator, and are only really useful for steering the discussion in the interview. If they're claiming to have studied something on their CV, I expect them to be able to discuss it at length.
| null | CC BY-SA 2.5 | null | 2011-02-04T13:24:32.357 | 2011-02-04T13:24:32.357 | null | null | 2425 | null |
6878 | 2 | null | 6874 | 0 | null | I think that the following updating formula works for the third moment, although I'd be glad to have someone check it:
$M_{3,n} = (1-\alpha)M_{3,n-1} + \alpha \Big[ x_n(x_n-\mu_n)(x_n-2\mu_n) - x_n\mu_{n-1}(\mu_{n-1}-2\mu_n) - \dots$
$\dots - \mu_{n-1}(\mu_n-\mu_{n-1})^2 - 3(x_n-\mu_n) \sigma_{n-1}^2 \Big]$
Updating formula for the kurtosis still open...
| null | CC BY-SA 2.5 | null | 2011-02-04T13:28:52.210 | 2011-02-04T13:28:52.210 | null | null | 2425 | null |
6879 | 2 | null | 6852 | 2 | null | Getting the data into SPSS
Typically when you organize your data, the rows in the data matrix correspond to the units of analysis, and the columns correspond to attributes of those units. Given the description of your data, you have basically three possible units of analysis, companies, days, and companydays. Given the format of your CSV files it will be easiest to either import the data as "days" or "companydays". I'm going to suggest you import it as "company*days" as it is easier to transform the data to the other units of analysis later on if needed in that format.
So to import the data, you can use this syntax (just replace with the right variable names and formats). I just generated this example through the GUI.
```
GET DATA
/TYPE=TXT
/FILE='H:\NASDAQ_MSFT.csv'
/DELCASE=LINE
/DELIMITERS=","
/ARRANGEMENT=DELIMITED
/FIRSTCASE=2
/IMPORTCASE=ALL
/VARIABLES=
time F3.0
panel F1.0
price F16.15.
EXECUTE.
Dataset Name NASDAQ_MSFT.
```
I would then make a variable that identifies what company the dataset corresponds to.
```
string company (A100).
compute company = "NASDAQ_MSFT".
execute.
```
After you have loaded the csv files into SPSS datasets, you can then concatenate them together with the add files command. Note this adds new rows to the dataset, not columns, hence the need to make a variable that identifies each company.
```
ADD FILES file = 'NASDAQ_MSFT'
/FILE = 'OTHER_DATASET'
/FILE = 'OTHER_DATASET'.
```
This will obviously be tedious, but if it only needs to be done once I would just write a macro to semi-automate the job. If this needs to be done periodically I would suggest posting the question to one of the other SPSS forums (links provided in the [SPSS tag wiki](https://stats.stackexchange.com/tags/spss/info)), as I wouldn't be surprised if someone has already made a tool via Python to accomplish this (i.e. getting all the files in one directory and doing whatever you want with them).
---
Analyzing the data
To be able to give useful advice you need to be able to formulate more specific questions. Different hypotheses would require different answers to your second question. Below I have posted a simulated example getting the average slope against time for all companies (via the split file command) and two plotting examples I cooked up with the GUI. If they are not clear enough just say so in a comment and I will add further explanation. For documentation on SPSS's graph language and other syntax see [this page](http://support.spss.com/productsext/statistics/documentation/19/clientindex.html).
```
*SIMULATING DATA.
input program.
loop #company = 1 to 10.
loop time = 1 to 100.
compute company = #company.
compute price = RV.NORMAL(#company,.10).
variable level company (NOMINAL).
end case.
end loop.
end loop.
end file.
end input program.
execute.
*USING split file to get the average slope against time for every company.
sort cases by company.
split file by company.
REGRESSION
/MISSING LISTWISE
/STATISTICS COEFF OUTS R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT price
/METHOD=ENTER time.
split file off.
*Multiple lines on one chart via Chart Builder GUI, note the panel variable needs to be coded as nominal.
GGRAPH
/GRAPHDATASET NAME="graphdataset" VARIABLES=time price company MISSING=LISTWISE REPORTMISSING=NO
/GRAPHSPEC SOURCE=INLINE.
BEGIN GPL
SOURCE: s=userSource(id("graphdataset"))
DATA: time=col(source(s), name("time"))
DATA: price=col(source(s), name("price"))
DATA: panel=col(source(s), name("company"), unit.category())
GUIDE: axis(dim(1), label("time"))
GUIDE: axis(dim(2), label("price"))
GUIDE: legend(aesthetic(aesthetic.color.interior), label("company"))
ELEMENT: line(position(time*price), color.interior(panel), missing.wings())
END GPL.
*Small multiples via the IGRAPH GUI.
IGRAPH
/VIEWNAME='Line Chart'
/X1=VAR(time) TYPE=SCALE
/Y=VAR(price) TYPE=SCALE
/PANEL=VAR(company)
/COORDINATE=VERTICAL
/YLENGTH=5.2
/X1LENGTH=6.5
/CHARTLOOK='NONE'
/CATORDER VAR(company) (ASCENDING VALUES OMITEMPTY)
/LINE(MEAN) KEY=ON STYLE=LINE DROPLINE=OFF INTERPOLATE=STRAIGHT BREAK=MISSING.
```
| null | CC BY-SA 2.5 | null | 2011-02-04T14:35:35.567 | 2011-02-04T14:35:35.567 | 2017-04-13T12:44:33.310 | -1 | 1036 | null |
6880 | 1 | null | null | 0 | 356 | There are a few things that are heavily used in the practice of statistics, data analysis, etc. and that form the common canon of education, yet everybody knows they are not exactly true, useful, well-behaved, or empirically supported.
So here is the question: Which is the single worst idea still actively propagated?
Please make it one suggestion per post.
| What are the most dangerous concepts in the practice of data analysis? | CC BY-SA 2.5 | 0 | 2011-02-04T14:40:58.553 | 2016-09-19T18:21:06.180 | 2016-09-19T18:21:06.180 | 919 | 3044 | [
"fallacy"
]
|
6881 | 2 | null | 4551 | 10 | null | That the p-value is the probability that the null hypothesis is true and (1-p) is the probability that the alternative hypothesis is true, or that failing to reject the null hypothesis means the alternative hypothesis is false, etc.
| null | CC BY-SA 2.5 | null | 2011-02-04T14:54:21.060 | 2011-02-04T14:54:21.060 | null | null | 887 | null |
6882 | 2 | null | 4551 | 9 | null | Using pie charts to illustrate relative frequencies. More [here](http://psychology.wikia.com/wiki/Pie_chart).
| null | CC BY-SA 2.5 | null | 2011-02-04T15:06:00.230 | 2011-02-04T15:06:00.230 | null | null | 609 | null |
6883 | 1 | 18266 | null | 1 | 3954 | Along the same lines as [this question](https://stats.stackexchange.com/questions/6856/aggregating-results-from-linear-model-runs-r), is there a nice way to display regression results in MATLAB from a single or many regressions in table or graph form?
| Displaying regression results in MATLAB | CC BY-SA 2.5 | null | 2011-02-04T15:33:38.487 | 2012-07-03T13:01:01.357 | 2017-04-13T12:44:46.680 | -1 | 2864 | [
"regression",
"matlab"
]
|
6884 | 2 | null | 6874 | 6 | null | The formulas are straightforward but they are not as simple as intimated in the question.
Let $Y$ be the previous EWMA and let $X = x_n$, which is presumed independent of $Y$. By [definition](http://en.wikipedia.org/wiki/Moving_average), the new weighted average is $Z = \alpha X + (1 - \alpha)Y$ for a constant value $\alpha$. For notational convenience, set $\beta = 1-\alpha$. Let $F$ denote the CDF of a random variable and $\phi$ denote its [moment generating function](http://en.wikipedia.org/wiki/Moment-generating_function), so that
$$\phi_X(t) = \mathbb{E}_F[\exp(t X)] = \int_\mathbb{R}{\exp(t x) dF_X(x)}.$$
With [Kendall and Stuart](http://rads.stackoverflow.com/amzn/click/0340614307), let $\mu_k^{'}(Z)$ denote the non-central moment of order $k$ for the random variable $Z$; that is, $\mu_k^{'}(Z) = \mathbb{E}[Z^k]$. The [skewness](http://en.wikipedia.org/wiki/Skewness) and [kurtosis](http://en.wikipedia.org/wiki/Kurtosis) are expressible in terms of the $\mu_k^{'}$ for $k = 1,2,3,4$; for example, the skewness is defined as $\mu_3 / \mu_2^{3/2}$ where
$$\mu_3 = \mu_3^{'} - 3 \mu_2^{'}\mu_1^{'} + 2{\mu_1^{'}}^3 \text{ and }\mu_2 = \mu_2^{'} - {\mu_1^{'}}^2$$
are the third and second central moments, respectively.
By standard elementary results,
$$\eqalign{
&1 + \mu_1^{'}(Z) t + \frac{1}{2!} \mu_2^{'}(Z) t^2 + \frac{1}{3!} \mu_3^{'}(Z) t^3 + \frac{1}{4!} \mu_4^{'}(Z) t^4 +O(t^5) \cr
&= \phi_Z(t) \cr
&= \phi_{\alpha X}(t) \phi_{\beta Y}(t) \cr
&= \phi_X(\alpha t) \phi_Y(\beta t) \cr
&= (1 + \mu_1^{'}(X) \alpha t + \frac{1}{2!} \mu_2^{'}(X) \alpha^2 t^2 + \cdots)
(1 + \mu_1^{'}(Y) \beta t + \frac{1}{2!} \mu_2^{'}(Y) \beta^2 t^2 + \cdots).
}
$$
To obtain the desired non-central moments, multiply the latter power series through fourth order in $t$ and equate the result term-by-term with the terms in $\phi_Z(t)$.
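A quick numerical check of this term-by-term matching (equivalently, the binomial expansion of $\mathbb{E}[(\alpha X + \beta Y)^k]$ for independent $X$ and $Y$), sketched in R:
```
set.seed(1)
alpha <- 0.3; beta <- 1 - alpha
x <- rexp(2e6)                   # arbitrary independent draws for X
y <- rgamma(2e6, shape = 2)      # ... and for Y
z <- alpha * x + beta * y
mom <- function(v, k) mean(v^k)  # empirical non-central moment
k <- 4
sum(sapply(0:k, function(j)
  choose(k, j) * alpha^j * beta^(k - j) * mom(x, j) * mom(y, k - j)))
mom(z, k)   # should agree with the previous number up to Monte-Carlo error
```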
| null | CC BY-SA 3.0 | null | 2011-02-04T15:38:21.347 | 2013-10-17T13:28:59.293 | 2013-10-17T13:28:59.293 | 919 | 919 | null |
6885 | 2 | null | 4551 | 75 | null | The most dangerous trap I encountered when working on a predictive model is not to reserve a test dataset early on so as to dedicate it to the "final" performance evaluation.
It's really easy to overestimate the predictive accuracy of your model if you have a chance to somehow use the testing data when tweaking the parameters, selecting the prior, selecting the learning algorithm stopping criterion...
To avoid this issue, before starting your work on a new dataset you should split your data as:
- development set
- evaluation set
Then split your development set into a "training development set" and a "testing development set", where you use the training development set to train various models with different parameters and select the best according to their performance on the testing development set. You can also do grid search with cross validation, but only on the development set. Never use the evaluation set until model selection is 100% done.
Once you are confident with the model selection and parameters, perform a 10-fold cross-validation on the evaluation set to get an idea of the "real" predictive accuracy of the selected model.
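A minimal R sketch of such a split (purely illustrative; `mydata` stands for the full dataset):
```
set.seed(42)
n        <- nrow(mydata)
eval_idx <- sample(n, size = round(0.2 * n))  # held-out evaluation set
dev        <- mydata[-eval_idx, ]   # used for all tuning and model selection
evaluation <- mydata[ eval_idx, ]   # touched only once, at the very end
# within the development set: CV folds for the training/testing development split
fold <- sample(rep(1:10, length.out = nrow(dev)))
```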
Also if your data is temporal, it is best to choose the development / evaluation split on a time code: "It's hard to make predictions - especially about the future."
| null | CC BY-SA 3.0 | null | 2011-02-04T15:38:59.070 | 2016-10-18T20:44:42.540 | 2016-10-18T20:44:42.540 | 2150 | 2150 | null |
6886 | 1 | null | null | 1 | 328 | Suppose I have a neural network, with input variables $a, b, c, d, f, g$ and output variables $m, n, o, p, q$.
Given different input values, the neural network will output corresponding $m, n, o, p, q$.
Now I want to find the best input values, which maximize $m, n$ while minimizing $o, p, q$, with different weights as well. So how can I find the best $a, b, c, d, f, g$?
Currently I use a simple approach: I calculate $x= w_1 m + w_2n+w_3 \frac{1}{o}+w_4 \frac{1}{p}+w_5 \frac{1}{q}$ and then find the input that maximizes $x$. However, this simple method assumes $m, n, o, p, q$ are independent, which is not the case.
So how should I solve this problem?
Many thanks.
| How to find the best input value for this simple problem? | CC BY-SA 2.5 | null | 2011-02-04T16:07:50.967 | 2011-02-24T17:37:38.757 | 2011-02-04T18:10:34.560 | null | 2454 | [
"optimization",
"neural-networks"
]
|
6887 | 2 | null | 4551 | 8 | null | Perhaps the poor teaching of statistics to end consumers. The fact is that most courses serve up a medieval menu: they leave out new theoretical developments, computational methods and best practices, and they give insufficient training in the modern, complete analysis of real data sets, at least in poor and developing countries. What is the situation in developed countries?
| null | CC BY-SA 2.5 | null | 2011-02-04T16:40:59.897 | 2011-02-04T16:40:59.897 | null | null | 523 | null |
6888 | 2 | null | 6876 | 1 | null | Chris really nailed the data mining stuff. If you need someone who can also look at experimental data, you can stop all but the most versatile of statisticians dead in their tracks by asking them to explain a split-plot experiment.
| null | CC BY-SA 2.5 | null | 2011-02-04T17:00:11.510 | 2011-02-04T17:00:11.510 | null | null | 5792 | null |
6889 | 2 | null | 6876 | 5 | null | I agree with Chris on most of what he says. Additionally, I'd like to add that without knowing the institutions or universities in detail, just looking at grades would be very misleading. I could easily give [a relevant example](https://stats.stackexchange.com/questions/6807/making-sense-out-of-statistics-theory-and-applications); I have recently graduated with a masters in engineering mathematics and taken a variety of statistics courses (with good grades), but I couldn't work in any statistics-intensive job right now. That doesn't mean that my uni sucks, but mostly that I didn't manage to learn much out of my statistics courses during university...
Apart from the candidate's knowledge of statistics, I'd also highly value good communication skills, as any cross-disciplinary project eventually boils down to communication problems between experts from different fields. Any test of how well the candidate can share his expertise with others should be a good measure of that.
Furthermore, good computer/programming skills (and no, just R is not enough, IMHO) are surely a big plus. If the person has some background in mathematical modeling, that'd be the cherry on the cake :)
| null | CC BY-SA 2.5 | null | 2011-02-04T17:22:27.417 | 2011-02-04T17:22:27.417 | 2017-04-13T12:44:39.283 | -1 | 3014 | null |
6890 | 1 | 6893 | null | 6 | 17143 | I have a dendrogram and a distance matrix. I wish to compute a heatmap -- without re-doing the distance matrix and the clustering.
Is there a function in R that permits this?
| Plotting a heatmap given a dendrogram and a distance matrix in R | CC BY-SA 3.0 | null | 2011-02-04T17:50:56.660 | 2011-05-02T09:07:10.713 | 2011-05-02T09:07:10.713 | 183 | 2842 | [
"r",
"data-visualization",
"clustering"
]
|
6891 | 1 | null | null | 3 | 562 | I have a series of measurements in which something is sampled and placed into either category 1 or category 2. I have 3x2x3 factors for each set of measurements, and each set of measurements is done independently 3 times.
What is a good statistical test to examine the combined effects of each factor, both independently and in interaction with each other?
| What is a good statistical test for independent replicates? | CC BY-SA 2.5 | null | 2011-02-04T18:50:12.683 | 2011-03-21T04:22:44.343 | 2011-02-04T19:32:26.510 | 930 | 1327 | [
"anova",
"repeated-measures",
"statistical-significance"
]
|
6892 | 2 | null | 6890 | 1 | null | You might try looking in the maptree or ape packages. What are you trying to do?
| null | CC BY-SA 2.5 | null | 2011-02-04T19:23:34.453 | 2011-02-04T19:23:34.453 | null | null | 1475 | null |
6893 | 2 | null | 6890 | 9 | null | I don't know a specific function for that. The ones I used generally take raw data or a distance matrix. However, it would not be very difficult to hack already existing code, without knowing more than basic R. Look at the source code for the `cim()` function in the [mixOmics](http://cran.r-project.org/web/packages/mixOmics/index.html) package for example (I choose this one because source code is very easy to read; you will find other functions on the [Bioconductor](http://www.bioconductor.org) project). The interesting parts of the code are l. 92-113, where they assign the result of HC to `ddc`, and around l. 193-246 where they devised the plotting regions (you should input the values of your distance matrix in place of mat when they call `image()`). HTH
Edit
A recent Google search on a related subject led me to `dendrogramGrob()` from the [latticeExtra](http://latticeextra.r-forge.r-project.org/) package. Assuming you already have your sorted dendrogram object, you can skip the first lines of the example code from the on-line help and get something like this (here, with the `mtcars` dataset):

| null | CC BY-SA 3.0 | null | 2011-02-04T19:29:58.010 | 2011-04-30T08:25:17.213 | 2011-04-30T08:25:17.213 | 930 | 930 | null |
6894 | 2 | null | 6886 | 1 | null | Your question is not properly thought out. There are many different neural network models, but all I know of require a real-valued objective function to optimize for. That means you can't ask that specific question.
To maximize $\sum w_i x_i$, simply set $w_i=\infty\cdot sign(x_i)$ or something comparable.
A basic neural network training algorithm for the multilayer perceptron network is gradient descent for the objective function, averaged over the training set.
| null | CC BY-SA 2.5 | null | 2011-02-04T19:34:43.890 | 2011-02-04T19:42:33.363 | 2011-02-04T19:42:33.363 | 2456 | 2456 | null |
6895 | 2 | null | 3176 | 2 | null | It seems that you are fitting a [simultaneous equation model](http://en.wikipedia.org/wiki/Simultaneous_equations_model) in a [panel data](http://en.wikipedia.org/wiki/Panel_data) setting. In this case using different dummies for different equations is entirely appropriate from a statistical point of view. The justification should come from the economic model.
The thing you should worry about most is finding appropriate instrumental variables for your endogenous variables. Simultaneous equation estimates for panel data are covered [in Baltagi's book "Econometric analysis of panel data"](http://books.google.com/books?id=yTVSqmufge8C&lpg=PP1&dq=baltagi%20panel%20data&hl=fr&pg=PA113#v=onepage&q&f=false). You can also use Arellano-Bond type estimates, which are covered in the same book.
| null | CC BY-SA 2.5 | null | 2011-02-04T20:01:34.703 | 2011-02-04T20:01:34.703 | null | null | 2116 | null |
6896 | 1 | null | null | 15 | 2464 |
### Context
I have two sets of data that I want to compare. Each data element in both sets is a vector containing 22 angles (all between $-\pi$ and $\pi$). The angles relate to a given human pose configuration, so a pose is defined by 22 joint angles.
What I am ultimately trying to do is determine the "closeness" of the two sets of data. So for each pose (22D vector) in one set, I want to find its nearest neighbour in the other set, and create a distance plot for each of the closest pairs.
### Questions
- Can I simply use Euclidean distance?
To be meaningful, I assume that the distance metric would need to be defined as: $\theta = |\theta_1 - \theta_2| \quad mod \quad \pi$, where $|...|$ is absolute value and mod is modulo. Then using the resulting 22 thetas, I can perform the standard Euclidean distance calculation, $\sqrt{t_1^2 + t_2^2 + \ldots + t_{22}^2}$.
Is this correct?
- Would another distance metric be more useful, such as chi-square, or Bhattacharyya, or some other metric? If so, could you please provide some insight as to why.
| Best distance measure to use to compare vectors of angles | CC BY-SA 4.0 | null | 2011-02-04T21:33:28.603 | 2021-12-09T14:10:01.090 | 2021-12-09T14:10:01.090 | 22311 | 3052 | [
"measurement",
"distance-functions",
"circular-statistics"
]
|