Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4490 | 2 | null | 4489 | 21 | null | Add `bty="n"` in both plot commands.
For time series, add `frame.plot=FALSE` for the same effect.
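For example, a minimal sketch of both options with made-up data:
```
## Sketch (made-up data): suppress the box around the plot region
x <- cumsum(rnorm(100))
plot(x, type = "l", bty = "n")      # ordinary plot without the surrounding box
plot(ts(x), frame.plot = FALSE)     # same effect for a time-series plot
```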
For fancier Tufte axes, see [http://www.cl.cam.ac.uk/~sjm217/projects/graphics/](http://www.cl.cam.ac.uk/~sjm217/projects/graphics/)
| null | CC BY-SA 3.0 | null | 2010-11-13T02:04:40.637 | 2013-07-24T01:23:45.453 | 2013-07-24T01:23:45.453 | 159 | 159 | null |
4491 | 1 | 4494 | null | 2 | 1570 | I'm using the following code to insert vertical lines in a timeseries plot:
```
abline(v=as.Date("2007-09-19"),col="red",lty=2)
abline(v=as.Date("2008-03-10"),col="red",lty=2)
abline(v=as.Date("2008-04-28"),col="red",lty=2)
abline(v=as.Date("2009-08-07"),col="red",lty=2)
```
But I would prefer a small triangle pointing up on the bottom axis, along with a label. Hints?
| Inserting small arrows (triangles) in bottom axis pointing up with R | CC BY-SA 2.5 | null | 2010-11-13T03:08:04.377 | 2010-11-13T13:32:38.697 | 2010-11-13T09:21:23.217 | 930 | 990 | [
"r",
"data-visualization"
]
|
4492 | 2 | null | 4469 | 3 | null | 1) You can always find out exactly how an algorithm in SPSS is defined mathematically from the Algorithms link on the Help menu.
2) SPSS also has a Weighted Least Square procedure that allows you to model the error variance and correct for heteroscedasticity.
3) In order to run R within SPSS (version 16 or later), you just need to install the R plugin or Essentials (depending on version) from SPSS Developer Central, www.spss.com/devcentral. The integration is free. You can also get SPSS syntax and dialog boxes for a number of useful R packages from the same site.
HTH
| null | CC BY-SA 2.5 | null | 2010-11-13T03:18:29.467 | 2010-11-13T03:18:29.467 | null | null | 1971 | null |
4493 | 2 | null | 4489 | 4 | null | If you use
```
par(bty = 'n')
```
before calling plot, that will fix it for zoo. It might also fix it in a variety of situations where the option can't be passed directly to the plotting command.
(Check out the bty option in the par() help for other kinds of frames for the plot.)
| null | CC BY-SA 2.5 | null | 2010-11-13T03:19:10.860 | 2010-11-13T03:19:10.860 | null | null | 601 | null |
4494 | 2 | null | 4491 | 3 | null | There's a variety of ways. You could use `pch = '^'` and put that near the bottom, or look up an arrow symbol character. Also, you could do what you're doing but with the `arrows()` command and draw short arrows instead of using `abline()`. With arrows you would need to specify both ends instead of just the horizontal position, e.g.
```
arrows(as.Date("2009-08-07"),0, as.Date("2009-08-07"), 0.5, col="red",lty=2)
```
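To get the label the question asks for, something along these lines (coordinates are made up) could follow the `arrows()` call:
```
## Hypothetical label just above the arrow head
text(as.Date("2009-08-07"), 0.55, "event", col = "red", cex = 0.8, pos = 3)
```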
| null | CC BY-SA 2.5 | null | 2010-11-13T03:28:41.470 | 2010-11-13T13:32:38.697 | 2010-11-13T13:32:38.697 | 601 | 601 | null |
4496 | 1 | 4509 | null | 3 | 2370 | Consider the following code:
```
require(zoo)
data <- read.csv(file="summary.csv",sep=",",head=TRUE)
data = zoo(data$compressed, as.Date(data$date))
data <- aggregate(data, identity, tail, 1)
days = seq(start(data), end(data), "day")
data2 = na.locf(merge(data, zoo(,days)))
par(bty = 'n')
plot(data2,xlab='',ylab='entropy (bytes)')
```
How can one:
- Adjust both the horizontal and vertical limits of the drawn axis to match the start and end of data (as an example, while the y value may vary between 20 and 1525, the axis shows 0 and 1500).
- Increase the horizontal resolution (at least add some minor ticks) when the time series spans more years than are mentioned.
Using:
```
plot(data2,xlab='',ylab='entropy (bytes)', xaxs = 'i', yaxs = 'i')
```
I get:

which is not exactly what I had in mind. The vertical scale doesn't start with the exact minimum (around 25), nor does it end with the exact maximum. The horizontal scale starts and ends in the middle. Hints?
| Adjusting axis properties in a time series graph | CC BY-SA 3.0 | null | 2010-11-13T04:08:55.587 | 2016-12-30T08:28:47.977 | 2016-12-30T08:28:47.977 | 22047 | 990 | [
"r",
"data-visualization"
]
|
4497 | 2 | null | 3542 | 12 | null | If you are interested in table design, I would definitely recommend two papers on the subject by Andrew Gelman:
A necessary preface to the paper on table design is Gelman et al, 2002 [Let's practice what we preach: Turning Tables Into Graphs](http://www.stat.columbia.edu/~gelman/research/published/dodhia.pdf)
Gelman argues that graphs are better than tables in the above paper. Then his satire piece provides a look at elements commonly found in tables that make them particularly difficult to interpret. [Why Tables are Really Much Better than Graphs](https://www.jstor.org/stable/23113375?seq=1#metadata_info_tab_contents) suggests the following (interpreted as satire, these are actually what not to do):
- lots of numbers
- don't obsess about clarity
- exact numbers, minimum of four significant digits
- default table design provided by your favorite software
Both are great reads.
---
[Gelman, Pasarica, and Dodhia. The American Statistician, 56(2): 121-130](http://www.stat.columbia.edu/~gelman/research/published/dodhia.pdf)
[Gelman, 2011. Journal of Computational and Graphical Statistics, Vol. 20, No. 1: 3–7.](https://www.jstor.org/stable/23113375?seq=1#metadata_info_tab_contents)
| null | CC BY-SA 4.0 | null | 2010-11-13T05:24:16.083 | 2019-09-11T05:16:40.350 | 2019-09-11T05:16:40.350 | 1381 | 1381 | null |
4498 | 1 | 4502 | null | 39 | 9495 | They all seem to represent random variables by the nodes and (in)dependence via the (possibly directed) edges. I'm esp interested in a bayesian's point-of-view.
| What's the relation between hierarchical models, neural networks, graphical models, bayesian networks? | CC BY-SA 2.5 | null | 2010-11-13T05:43:15.703 | 2015-12-08T14:19:58.223 | 2010-11-13T20:33:21.760 | 930 | 1795 | [
"causality",
"neural-networks",
"multilevel-analysis",
"graphical-model"
]
|
4499 | 2 | null | 4491 | 0 | null | You could manually construct the triangles using the 'polygon' command. There might be a nice way to automate the construction of these but depending on how many you want to make it probably wouldn't be too bad to figure out the coordinates by hand.
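A rough sketch of the idea (all coordinates are made up, for a plot with a Date x-axis):
```
## Sketch: a small upward-pointing triangle at a given date, plus a label
x0  <- as.Date("2008-03-10")
usr <- par("usr")                        # current plot limits
y0  <- usr[3]                            # bottom of the plot region
dx  <- 5                                 # half-width, in days
dy  <- 0.03 * (usr[4] - usr[3])          # height, as a fraction of the y range
polygon(c(x0 - dx, x0 + dx, x0), c(y0, y0, y0 + dy), col = "red", border = NA)
text(x0, y0 + dy, "event", pos = 3, cex = 0.8)
```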
| null | CC BY-SA 2.5 | null | 2010-11-13T06:03:11.923 | 2010-11-13T06:03:11.923 | null | null | 1028 | null |
4500 | 1 | 4506 | null | 7 | 790 | Context:
I've recently adopted version control as part of my data analysis work (finally I may hear you saying: see my [earlier question on SO](https://stackoverflow.com/questions/2712421/r-and-version-control-for-the-solo-data-analyst)). This prompted me to think more about repositories and the directory structure I use for my projects.
My typical research work involves one or more studies (i.e., data that I have collected) which gets written up as one or more publications (journal articles, book chapters, presentations, reports, etc.). Analyses and reports are typically produced using a combination of R, LaTeX, Sweave, textual data files and so on. I really like the idea of being able to upload a single self-contained repository that can be used to analyse data and reproduce a publication.
In particular, I've been thinking about publications, studies, data, and common code, and how these entities map on to repositories. For example, is it better to have a separate repository for each publication or is it better to have each publication as an individual folder within the larger repository. I'm evolving a few thoughts on this, but I was keen to hear other options.
Question:
- What strategies do people use to map studies, publications, and analyses onto repositories?
- When should related entities (e.g., publications, studies, etc.) be split into multiple repositories?
| Repositories and data analysis projects | CC BY-SA 2.5 | null | 2010-11-13T06:26:43.493 | 2010-11-14T23:06:00.820 | 2017-05-23T12:39:26.593 | -1 | 183 | [
"reproducible-research",
"project-management"
]
|
4501 | 1 | 4503 | null | 3 | 503 | I've been thinking recently about conventions for figure width in data analysis reports that lead to PDFs (e.g., in A4 or Letter size). My usual context is R, Sweave, and LaTeX.
The default figure width in Sweave is 80% of the text-width (i.e., the width of a paragraph of text).
```
\setkeys{Gin}{width=0.80\textwidth}
```
Questions:
- Is it reasonable for the width of figures to be wider than the text-width (i.e., to spill into the margins)?
- Is the 80% rule of text-width a good one or would, for example 100%, or some other value be better?
- Should the figure widths be consistent throughout a document (or perhaps with two sizes for small and large figures) or should the width be adapted completely to the content?
Any references or thoughts on best practice in this regards would be most welcome.
| Typographic conventions for width of figures in LaTeX data analysis reports | CC BY-SA 2.5 | null | 2010-11-13T07:02:26.113 | 2017-05-18T21:19:05.927 | 2017-05-18T21:19:05.927 | 28666 | 183 | [
"presentation"
]
|
4502 | 2 | null | 4498 | 34 | null | A Bayesian network is a type of graphical model. The other "big" type of graphical model is a Markov Random Field (MRF). Graphical models are used for inference, estimation and in general, to model the world.
The term hierarchical model is used to mean many things in different areas.
While neural networks come with "graphs", they generally don't encode dependence information, and the nodes don't represent random variables. NNs are different because they are discriminative. Popular neural networks are used for classification and regression.
Kevin Murphy has an excellent introduction to these topics [available here](http://www.cs.ubc.ca/~murphyk/Bayes/bnintro.html).
| null | CC BY-SA 2.5 | null | 2010-11-13T08:37:26.060 | 2010-11-13T08:37:26.060 | null | null | 1540 | null |
4503 | 2 | null | 4501 | 7 | null | I'll second @onestop comment about the fact that this question seems marginally related to statistical analysis or reporting.
That being said, I can't refrain from thinking of Ed. Tufte's work on the display of quantitative information, especially the design of his books which mixes different graphics layouts: some figures or tables are put in the margin, other in the body with caption in the margin, and large figures may extend beyond the body (full page width). The [tufte-latex](http://code.google.com/p/tufte-latex/) project offers $\LaTeX$ classes for articles/handouts and books in the spirit of Ed. Tufte's design. Some examples are included on the project page; I particularly like the [example handout](http://tufte-latex.googlecode.com/files/sample-handout-3.5.0.pdf).
On a related point, I also like the [tutorial](http://cc.oulu.fi/~jarioksa/opetus/metodi/vegantutor.pdf) from the [vegan](http://cran.r-project.org/web/packages/vegan/index.html) R package.
My personal approach is to use 80% or 100% of text width (and keep it consistent throughout the document), but I often play with the `width`, `height`, and `cex` arguments of `pdf()` when exporting figures so as to get the cleanest and most readable figure. I also sometimes rely on a different layout--figure 60% and caption 40%, side by side, 100% of text width--for small illustrations or graphics.
| null | CC BY-SA 2.5 | null | 2010-11-13T09:06:44.280 | 2010-11-13T10:18:47.823 | 2010-11-13T10:18:47.823 | 930 | 930 | null |
4504 | 2 | null | 4491 | 4 | null | I prefer to use dedicated symbol for that purpose. For example, use `points()` with `pch=17` (filled triangle) like in the example below:
```
dd <- dotplot(m~i|p,data=res.EQ.long,subset=i %in% eq5d.items.names[1:5],
ylab="Score",ylim=c(0.5,3.5),
scales=list(x=list(rot=45,at=1:5,
labels=eq5d.items.names[1:5],cex=c(.8,.8))),
main="EQ-5D",
panel=function(x,y,subscripts,...){
panel.dotplot(x,y,...)
panel.lines(x=c(1,5),y=3,lwd=1.5, col="gray50")
panel.segments(x0=c(1,3,5),y0=rep(2.95,3),
x1=c(1,3,5),y1=rep(3.05,5),
lwd=1.5, col="gray50")
panel.text(x=c(1,3,5),y=3,c(0,50,100),cex=.6,pos=3)
panel.text(x=3,y=3.2,"Health State",cex=.6,pos=3)
panel.points(x=(4*ref[packet.number()])/100+1,y=2.9,
pch=17)
panel.points(x=1:5,y=res.EQ2[1:5,packet.number()],
col="red",pch=19,cex=.6)
},
key=list(text=list(c("Mean","Median")),x=0.7,y=0.9,
lines=list(col=c("blue","red"),type="p",
pch=19,cex=.8)))
print(dd)
```

The complete list of symbols can be obtained with `show.pch()` from the [Hmisc](http://cran.r-project.org/web/packages/Hmisc/index.html) package.
| null | CC BY-SA 2.5 | null | 2010-11-13T09:19:10.647 | 2010-11-13T09:19:10.647 | null | null | 930 | null |
4505 | 2 | null | 4498 | 10 | null | As [@carlosdc](https://stats.stackexchange.com/questions/4498/whats-the-relation-between-hierarchical-models-neural-networks-graphical-model/4502#4502) said, a bayesian network is a type of Graphical Model (i.e., a directed acyclic graph (DAG) whose structure defines a set of conditional independence properties). [Hierarchical Bayes Models](http://en.wikipedia.org/wiki/Hierarchical_Bayes_model) can also be represented as DAGs; [Hierarchical Naive Bayes Classifiers for uncertain data](http://www.labmedinfo.org/download/lmi339.pdf), by Bellazzi et al., provides a good introduction to classification with such models. About hierarchical models, I think many articles can be retrieved by googling with appropriate keywords; for example, I found this one:
> C. H. Jackson, N. G. Best and S. Richardson. Bayesian graphical models for regression on multiple data sets with different variables. Biostatistics (2008) 10(2): 335-351.
Michael I. Jordan has a nice tutorial on [Graphical Models](http://www.cs.berkeley.edu/~jordan/papers/statsci.ps), with various applications based on the factorial [Hidden Markov model](http://en.wikipedia.org/wiki/Hidden_Markov_model) in bioinformatics or natural language processing. His book, [Learning in Graphical Models](http://mitpress.mit.edu/catalog/item/default.asp?tid=8141&ttype=2) (MIT Press, 1998), is also worth reading (there's an application of GMs to structural modeling with [BUGS](http://www.mrc-bsu.cam.ac.uk/bugs/) code, pp. 575-598)
| null | CC BY-SA 2.5 | null | 2010-11-13T10:05:45.920 | 2010-11-13T10:05:45.920 | 2017-04-13T12:44:36.923 | -1 | 930 | null |
4506 | 2 | null | 4500 | 5 | null | Regarding your first question:
>
What strategies do people use to map studies, publications, and analyses onto repositories?
One year ago or so, I decided to have one repository for each publication, presentation or semester/class. My typical directory looks like this:
.git
.gitignore
README.org
ana
dat
doc
org
The underlying idea is (hopefully) obvious: Each publication, presentation, class is an "autarchic entity" which I could easily share with others.
By the way, you are not the first to ask this question: [One repository/multiple projects without getting mixed up?](https://stackoverflow.com/questions/2475520/one-repository-multiple-projects-without-getting-mixed-up)
However, I also started using git for managing projects which might result in some publications (the directory structure follows roughly John Myles White's [ProjectTemplate](https://github.com/johnmyleswhite/ProjectTemplate), without using it, though).
.git
.gitignore
README.org
ana
data
docs
graphs
lib
org
reports
tests
Regarding your second question:
>
When should related entities (e.g., publications, studies, etc.) be split into multiple repositories?
I cannot think of any reason to split a publication, conference etc. related repository into multiple repositories. But I would be interested in other opinions...
| null | CC BY-SA 2.5 | null | 2010-11-13T11:37:51.957 | 2010-11-13T11:37:51.957 | 2017-05-23T12:39:26.203 | -1 | 307 | null |
4507 | 1 | 4514 | null | 5 | 956 | Let $G = (V,E)$ be a directed acyclic graph.
Let $i \rightarrow j$ be an edge such that the parents of $j$ are exactly:
- the parents of $i$,
- and $i$.
Let $L\left(G \right)$ be the set defined by {a vertex and its parents, for all vertices in $G$}
Let $G' = (V,F)$ where $F$ is $E$ with $j \rightarrow i$ instead of $i \rightarrow j$.
Given a set of random variables $(X_{v})_{v \in V}$ for which the joint density can be factorized over $L\left(G \right)$, I would like to show that the joint density could also be factorized over $L\left(G' \right)$.
---
Let $P$ be the parents of $i$ in the graph $G$, thus the parents of $j$ in the graph $G'$.
So I would like to show that there exists $g_{i}$ and $g_{j}$ such that: $g_{i}(x_{i},x_{j},x_{P}) \times g_{j}(x_{j},x_{P}) = f_{i}(x_{i},x_{P}) \times f_{j}(x_{i},x_{j},x_{P})$.
I am unsure whether I could just consider that there was a swap of $i$ and $j$, and I could take $g_{i}=f_{j}$ and $g_{j}=f_{i}$. Or maybe, there is something to do with conditional probabilities and Bayes formula.
| Factorization of a joint density over a directed graph | CC BY-SA 2.5 | null | 2010-11-13T13:41:20.420 | 2019-04-29T20:08:37.633 | 2010-11-13T21:21:50.660 | 1351 | 1351 | [
"distributions",
"data-visualization",
"self-study",
"graphical-model"
]
|
4508 | 2 | null | 4496 | 2 | null | For the first question try:
```
plot(data2,xlab='',ylab='entropy (bytes)', xaxs = 'i', yaxs = 'i')
```
| null | CC BY-SA 2.5 | null | 2010-11-13T16:04:13.600 | 2010-11-13T16:54:51.440 | 2010-11-13T16:54:51.440 | 1050 | 1050 | null |
4509 | 2 | null | 4496 | 2 | null | For the second Q, you can use `axis.Date()` with argument `labels = FALSE` to add minor tick marks at locations defined by argument `at`.
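For instance, assuming the `days` sequence from the question is still in scope, unlabelled monthly minor ticks could be added with:
```
## Sketch: monthly minor ticks without labels, slightly shorter than the default
axis.Date(side = 1, days, at = seq(min(days), max(days), by = "month"),
          labels = FALSE, tcl = -0.2)
```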
| null | CC BY-SA 2.5 | null | 2010-11-13T17:28:45.037 | 2010-11-13T17:28:45.037 | null | null | 1390 | null |
4510 | 1 | 4511 | null | 2 | 18651 | I have fit a linear model using the `lm` function in R...
```
model <- lm(trans.baseline.CD4 ~ hiv$Julian.Date)
```
... and I would like to assess the quality of the model's fit. Is there a function in R that will do this? Alternatively, I found a formula for goodness-of-fit involving the sum of squared residuals given the null and alternative hypotheses, but I don't know how to get these values either. Any pointers?
Thanks!
| How to compute goodness of fit for a linear model in R | CC BY-SA 2.5 | null | 2010-11-13T17:47:12.277 | 2010-11-13T18:17:03.820 | 2010-11-13T18:17:03.820 | 930 | 1973 | [
"r",
"regression",
"goodness-of-fit"
]
|
4511 | 2 | null | 4510 | 7 | null | It all starts with
```
summary(model)
```
after your fit. There are numerous commands to assess the fit, test commands, compare alternative models, ... in base R as well as in add-on packages on CRAN.
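For instance, a few of the usual follow-ups on an `lm` fit:
```
summary(model)$r.squared   # R-squared of the fit
confint(model)             # confidence intervals for the coefficients
anova(model)               # sequential ANOVA table
AIC(model)                 # useful when comparing alternative models
plot(model)                # residual and influence diagnostics
```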
But you may want to do some reading, for example with Dalgaard's book or another introduction to statistics with R.
| null | CC BY-SA 2.5 | null | 2010-11-13T17:56:39.150 | 2010-11-13T17:56:39.150 | null | null | 334 | null |
4512 | 2 | null | 4510 | 1 | null | You should probably take a look at this:
[https://stackoverflow.com/questions/1181025/goodness-of-fit-functions-in-r](https://stackoverflow.com/questions/1181025/goodness-of-fit-functions-in-r)
| null | CC BY-SA 2.5 | null | 2010-11-13T17:59:01.260 | 2010-11-13T17:59:01.260 | 2017-05-23T12:39:26.150 | -1 | 1540 | null |
4513 | 1 | null | null | 5 | 327 | Can you give me examples of machine learning algorithms which learn from the statistical properties of the dataset not the individual observations itself i.e. employ the statistical query model?
| Statistical query model algorithms? | CC BY-SA 2.5 | null | 2010-11-13T21:43:52.613 | 2017-01-04T13:16:06.610 | null | null | 1961 | [
"machine-learning",
"algorithms"
]
|
4514 | 2 | null | 4507 | 5 | null | What you are looking for is termed a covered edge reversal, which preserves Markov equivalence of directed acyclic graphs. A proof of equivalence is given in Lemma 1 of [A Transformational Characterization of Equivalent Bayesian Network Structures](http://research.microsoft.com/pubs/65664/uai95.pdf), by Max Chickering. The relationship between Markov properties and factorisation is well established in the theory of graphical models, for example [Theorem 5.14](http://books.google.co.uk/books?id=G_4E_w_wJzcC&lpg=PA74&vq=DF&pg=PA74#v=onepage&q=Theorem%205.14&f=false) of Probabilistic Networks and Expert Systems.
| null | CC BY-SA 2.5 | null | 2010-11-13T22:15:43.690 | 2010-11-13T22:15:43.690 | null | null | 495 | null |
4515 | 1 | null | null | 4 | 534 | Lets say we have $T$ tags and $N$ articles and lets say that for each tag $t_{i}$ we know that it has tagged $n_{i}$ articles. Meaning that, frequency($t_{i}$)=$n_{i}$.
Given the above information, how could I compute the probability P($t_{i}$ tags a new article) or, more generally, how could I go about scoring the "importance" of each tag in the tag set?
| How to measure/weight the importance of tags? | CC BY-SA 3.0 | null | 2010-11-13T22:59:12.890 | 2017-11-09T14:10:58.733 | 2017-11-09T14:10:58.733 | 101426 | null | [
"probability",
"predictive-models",
"data-mining"
]
|
4516 | 2 | null | 4396 | 2 | null | Actually, if you want ranks, try the RANK command. Or use the menus: Transform>Rank Cases. It can generate ranks within groups.
If you prefer the AGGREGATE approach, note that it does NOT require sorting first.
HTH,
Jon Peck
| null | CC BY-SA 2.5 | null | 2010-11-13T23:21:12.727 | 2010-11-13T23:21:12.727 | null | null | 1971 | null |
4517 | 1 | 4518 | null | 70 | 201451 | Is it possible to have a (multiple) regression equation with two or more dependent variables? Sure, you could run two separate regression equations, one for each DV, but that doesn't seem like it would capture any relationship between the two DVs?
| Regression with multiple dependent variables? | CC BY-SA 2.5 | null | 2010-11-14T02:50:03.993 | 2022-02-24T14:36:28.833 | null | null | 1977 | [
"regression"
]
|
4518 | 2 | null | 4517 | 40 | null | Yes, it is possible. What you're interested in is called "Multivariate Multiple Regression" or just "Multivariate Regression". I don't know what software you are using, but you can do this in R.
[Here's a link that provides examples](https://web.archive.org/web/20181024155136/http://www.public.iastate.edu/%7Emaitra/stat501/lectures/MultivariateRegression.pdf).
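A minimal sketch with made-up data (base R fits this directly when the left-hand side of the formula is a matrix):
```
## Sketch: two responses regressed on the same predictors
y1 <- rnorm(50); y2 <- rnorm(50); x1 <- rnorm(50); x2 <- rnorm(50)
fit <- lm(cbind(y1, y2) ~ x1 + x2)   # multivariate multiple regression
summary(fit)                         # one coefficient table per response
anova(fit)                           # multivariate (MANOVA-type) tests
```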
| null | CC BY-SA 4.0 | null | 2010-11-14T03:32:24.503 | 2022-02-24T14:28:42.420 | 2022-02-24T14:28:42.420 | 11887 | 485 | null |
4519 | 1 | 5931 | null | 3 | 1545 | I plotted a set of about 200,000 points and got a triangular shaped region. The shape is roughly like the triangle made by the points $(1,0)$, $(0,1)$ and $(0,0)$. My points have the property that if $(a,b)$ is a member of the set, then $(b,a)$ is also a member. I wanted to do a regression on the variable associated with the second coordinate y on the variable associated with the first coordinate x. I did a linear regression but I'm bothered by the fact that the region is shaped nothing like a line. Is there a better way to do this?
Edit: The points represent a type of game. For each pair of players i and j, i can attack j or j can attack i. The points are produced by looking at (the number of times i attacks j, the number of times j attacks i) for all pairs i and j. This is the reason that we get (a,b) being a member of my set if (b,a) is a member. I felt that the triangular region suggests that players who don't attack enough have a greater chance of being attacked. Just to emphasize, my points fill the whole region bounded by a triangle similar to (1,0), (0,1) and (0,0), not necessarily uniformly.
Second Edit: The region is shaped like the triangle I mentioned but bigger. Let's say, for argument's sake, it roughly looks like the triangle (1000,0), (0,1000), (0,0). There are a few scattered points outside the triangle, and the triangle itself is not completely filled out.
My goal is to characterize or relate the number of attacks that i makes on j to the number of attacks that j makes on i.
| Regression on a triangular shaped region of points representing a symmetric relation | CC BY-SA 2.5 | null | 2010-11-14T05:28:34.397 | 2011-01-03T19:53:37.627 | 2010-11-14T20:50:03.263 | 847 | 847 | [
"regression"
]
|
4520 | 2 | null | 4515 | 1 | null | First, to calculate the importance of each tag in a new article, I would store the corpus plus the new article (as the last row) in bag-of-words format, compute the tf-idf of the resulting sparse matrix (rows = articles, columns = words), and keep only the columns corresponding to tags. I would then take the tf-idf values of the new article's tags, sum them, and divide each tag's tf-idf by that total, multiplied by 100. This gives the percentage that each tag represents within the article.
To calculate the importance of a tag within the tag set, I would sum each tag column, add those column totals to get a grand total, and then divide each tag's total by the grand total, multiplied by 100. This gives the percentage, or overall importance, of each tag.
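For illustration, a sketch of that second calculation, with a made-up article-by-tag count matrix standing in for the tf-idf matrix described above:
```
## Made-up article-by-tag matrix: rows are articles, columns are tags
tags <- matrix(c(3, 0, 1,
                 1, 2, 0,
                 0, 1, 4),
               nrow = 3, byrow = TRUE,
               dimnames = list(NULL, c("r", "regression", "plot")))
tag_totals <- colSums(tags)
importance <- 100 * tag_totals / sum(tag_totals)  # percentage per tag
importance
```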
| null | CC BY-SA 2.5 | null | 2010-11-14T09:40:39.110 | 2010-11-14T09:40:39.110 | null | null | 1808 | null |
4521 | 2 | null | 4517 | 2 | null | I would do this by first transforming the regression variables to PCA-calculated variables, and then I would do the regression with the PCA-calculated variables. Of course, I would store the eigenvectors to be able to calculate the corresponding PCA values when I have a new instance I want to classify.
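One reading of this (reduce the dependent variables to principal-component scores, regress those on the predictors, and keep the rotation for new cases) might look like:
```
## Sketch with made-up data; which block to reduce is a modelling choice
Y <- cbind(y1 = rnorm(50), y2 = rnorm(50), y3 = rnorm(50))   # dependent variables
x <- rnorm(50)                                               # predictor
pca <- prcomp(Y, scale. = TRUE)
fit <- lm(pca$x[, 1] ~ x)       # regress the first component score on x
## pca$rotation holds the eigenvectors needed to score new observations
```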
| null | CC BY-SA 2.5 | null | 2010-11-14T09:43:19.713 | 2010-11-14T09:43:19.713 | null | null | 1808 | null |
4522 | 2 | null | 4517 | -4 | null | It's called structural equation model or simultaneous equation model.
| null | CC BY-SA 2.5 | null | 2010-11-14T10:44:06.097 | 2010-11-14T10:44:06.097 | null | null | 1966 | null |
4523 | 1 | 4524 | null | 1 | 742 | The following code:
```
require(zoo)
data <- read.csv(file="summary.csv",sep=",",head=TRUE)
cum = zoo(data$dcomp, as.Date(data$date))
data = zoo(data$compressed, as.Date(data$date))
data <- aggregate(data, identity, tail, 1)
cum <- aggregate(cum, identity, sum, 1)
days = seq(start(data), end(data), "day")
data2 = na.locf(merge(data, zoo(,days)))
pdf(file='timeseries.pdf',width=9,height=5)
par(bty = 'n')
plot(data2,xlab='',ylab='entropy (bytes)',axes=FALSE)
axis(side = 2, at=c(991, 20000, 40000, 53048))
axis(side = 2, at=c(10000, 30000),labels=FALSE)
lines(cum,type="h",lwd=0.3,col=rgb(0.64,0.08,0.00))
axis.Date(side = 1, days, at=c("2007-07-25", "2008-01-01", "2009-01-01", "2010-01-01", "2010-06-21"))
```
Yields the following Graph:

The questions (all related to the bottom axis) are:
- How can I make it display '2007/07' and '2010/06' in the beginning and end labels?
- How can I make it to automatically add minor ticks to the months (without labels)?
| Tweaking axis properties in a timeseries graph | CC BY-SA 2.5 | null | 2010-11-14T16:21:47.483 | 2010-11-14T17:28:03.050 | null | null | 990 | [
"r",
"data-visualization"
]
|
4524 | 2 | null | 4523 | 3 | null | Two quick suggestions:
- axis.Date() has a format field; set it to `"%Y/%m"` to get '2007/07' (but then for all labels); see the sketch after this list
- See help(axTicks) and help(rug) which should help you.
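A sketch of the first suggestion, reusing the `days` vector from the question, with unlabelled monthly minor ticks added in the same style as the question's own `axis.Date()` call:
```
## '2007/07' and '2010/06' style labels at the end points
axis.Date(side = 1, days, at = as.Date(c("2007-07-25", "2010-06-21")),
          format = "%Y/%m")
## unlabelled monthly minor ticks
axis.Date(side = 1, days, at = seq(min(days), max(days), by = "month"),
          labels = FALSE, tcl = -0.2)
```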
Also try the [xts](http://packages.r-project.org/package=xts) package which extends the [zoo](http://packages.r-project.org/package=zoo) package. You may find reading the source for the plotting functions in both packages helpful.
| null | CC BY-SA 2.5 | null | 2010-11-14T17:28:03.050 | 2010-11-14T17:28:03.050 | null | null | 334 | null |
4525 | 2 | null | 4500 | 2 | null | I keep a separate repository for each project, with a project being centered around a particular data set or question being addressed. The repo contains the data, code, and Sweave documents/plots that explain and express the results.
I maintain a separate repo for each discrete publication or presentation because
- A single project may result in multiple publications or presentations. Once a publication is out or you've given the presentation, you are essentially "done" with the contents of that repo so they don't need to be dragged around with the project.
- An output (publication/presentation/chapter) may contain data from more than one project.
- Not all of the results from a project will end up in a particular piece of output.
Code that is reusable across projects gets its own repo, as well. If I come up with a new & discrete question using data that is already in one repo, I'll copy that data to a new repo.
If you want to be really strict about it, many version control systems offer the idea of "subprojects", but I've found that to be overkill.
| null | CC BY-SA 2.5 | null | 2010-11-14T17:50:08.343 | 2010-11-14T17:50:08.343 | null | null | 1916 | null |
4526 | 2 | null | 4453 | 5 | null | To avoid the ambiguity of a terminology that refers to everything as an "item," let's suppose the set $S$ consists of $n$ distinct "letters." And, because this is a pretty generalization of the Birthday Paradox, let's generalize it a little further and set $P$ to be the set of all $r$-letter words having all distinct letters. The question as stated corresponds to $r=2$. We ask more generally
>
What is the expected number of distinct letters found in the $r k$-letter word obtained by concatenating $k$ random $r$-letter words of distinct letters?
We can compute this in three stages:
- For any $m$, $r \le m \le n$, find the proportion of the $k$-fold $r$-letter words having $m$ or fewer distinct letters.
- From this obtain the probability $p_{n,k,r}(m)$ of exactly $m$ distinct letters.
- Compute the expected number of letters by summing $m p_{n,k,r}(m)$ over $m$ (from $r$ to $n$).
We find the proportions in #1 with a polynomial expansion. Let $f_r(x_1, x_2, \ldots, x_n)$ be the average of all words of length $r$ in which the letters are distinct. For example,
$$f_2(x_1, x_2, x_3) = \left( x_1 x_2 + x_2 x_1 + x_2 x_3 + x_3 x_2 + x_1 x_3 + x_3 x_1 \right)/6 \text{.}$$
The $k^\text{th}$ power of this (computed as a polynomial) displays the probabilities with which words can be formed by a $k$-fold concatenation. For example, inspection of the coefficients of
$$f_2(x_1, x_2, x_3)^2 = \left( x_1^2 x_2^2 + x_2^2 x_3^2 + x_1^2 x_3^2 + 2 x_1^2 x_2 x_3 + 2 x_1 x_2^2 x_3 + 2 x_1 x_2 x_3^2 \right) / 9 $$
shows that the chance of forming a word with two $x_1$'s and two $x_2$'s (which can occur in four ways out of the 36 possible ways: $x_1x_2x_1x_2$, $x_1x_2x_2x_1$, $x_2x_1x_1x_2$, and $x_2x_1x_2x_1$) is 1/9.
When we set some of the $x_i$ equal to zero, they drop out of $f_r^k$ altogether, and when we set the remaining $x_i$ equal to one, what remains is the sum of their coefficients. Thus we could hope to find the probability of having $m$ distinct letters by setting exactly $m$ of the $x_i$ to one (and the remainder to zero) and summing the value of $f_r^k$ over all such possibilities. (This is easily done because the symmetry of $f_r$ implies all these values are equal, so we need only compute $f_r^k(1,\ldots,1,0,\ldots,0)$ and multiply that by ${n}\choose{m}$.) Unfortunately, this also over-counts words having fewer than $m$ distinct letters, so we have to apply the [Principle of Inclusion-Exclusion](http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle) to extract the desired value. The punchline is that $f_r$ is easy to evaluate; when we set $n-m$ of its arguments to $0$, there remain $m(m-1)\cdots(m-r+1)$ of its $n(n-1)\cdots(n-r+1)$ terms, all of which evaluate to $1$, whence
$$f_r^k(1,\ldots,1,0,\ldots,0) = \left({{m}\choose{r}} \Big/ {{n}\choose{r}}\right)^{k}.$$
Putting this together gives
$$p_{n,k,r}(m) = {{n}\choose{m}} \sum_{j=0}^{m-r}{(-1)^j {{m}\choose{m-j}} {{m-j}\choose{r}}^k } / {{n}\choose{r}}^k \text{.}$$
The expectation equals
$$\mathbb{E}[\text{# distinct letters}] = \sum_{m=r}^n{p_{n,k,r}(m) m}.$$
This looks like an expression that is unlikely to simplify. Its calculation requires $O(n^2)$ effort. When $k$ is small, this can be reduced to $O(n r k)$ by dynamic programming using a recursive relationship between the probability distribution for $k$ and the probability distribution for $k+1$. The recursion is obtained by considering how $m+r$ distinct letters can arise in a $k+1$-fold concatenation: either $r$ distinct letters are adjoined to a word of $m$ distinct letters, or $r-1$ distinct letters are adjoined to a word of $m+1$ distinct letters, ..., or no distinct letters are introduced to a word of $m+r$ distinct letters. Rather than write down this recursion generally, I'll display the case $r=2$:
$$\eqalign{
p_{n,k+1,2}(m+2) &= \Bigl[ {{n-m}\choose{2}} p_{n,k,2}(m) \cr
&+ (m+1)(n-m-1)p_{n,k,2}(m+1) \cr
&+ {{m+2}\choose{2}} p_{n,k,2}(m+2) \Bigr] / {{n}\choose{2}}
}\text{.}$$
The dynamic program computes the entire distribution for the case $k=1$ and then uses this recursion to compute it for $k=2, \ldots$ up to the desired value.
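A short R sketch (not part of the original derivation) that evaluates the closed-form expression for $p_{n,k,r}(m)$ above and checks the expectation by simulation:
```
## Expected number of distinct letters among k random r-letter words
## (letters distinct within each word) drawn from an alphabet of n letters
expected_distinct <- function(n, k, r) {
  p <- sapply(r:n, function(m) {
    j <- 0:(m - r)
    choose(n, m) * sum((-1)^j * choose(m, m - j) * choose(m - j, r)^k) /
      choose(n, r)^k
  })
  sum((r:n) * p)
}
expected_distinct(10, 3, 2)   # n = 10 letters, k = 3 words, r = 2 letters each
## Monte Carlo check of the same quantity
mean(replicate(5000, length(unique(c(replicate(3, sample(10, 2)))))))
```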
| null | CC BY-SA 2.5 | null | 2010-11-14T19:02:45.520 | 2010-11-30T18:21:49.360 | 2010-11-30T18:21:49.360 | 919 | 919 | null |
4527 | 1 | 4588 | null | 0 | 1535 | I am exploring the use of the `survreg` function in R to analyze my current experiment.
Does anyone know what the "Value" column in the output of the function stands for?
| What is the "Value" output of Survreg in R? | CC BY-SA 2.5 | null | 2010-11-14T19:55:14.707 | 2010-11-16T11:37:33.347 | 2010-11-14T19:59:53.407 | 930 | 1862 | [
"r",
"survival",
"interpretation"
]
|
4528 | 1 | 4534 | null | 18 | 30326 | I have been trying to discern what exactly the "coef" and "(exp)coef" output of coxph signify. It seems that the "(exp)coef" are comparisons of the first variable in the model according to the group assigned in the command.
How does the coxph function arrive at the values for "coef" and "(exp)coef"?
Additionally, how does coxph determine these values when there is censoring involved?
| What is the difference between the "coef" and "(exp)coef" output of coxph in R? | CC BY-SA 2.5 | null | 2010-11-14T20:39:07.227 | 2010-11-15T22:25:52.023 | null | null | 1862 | [
"r",
"survival",
"interpretation"
]
|
4529 | 2 | null | 4528 | 8 | null | To quote the documentation for the print method for a coxph object, obtained in R by typing `?survival::print.coxph`:
>
coefficients the coefficients of the linear predictor, which multiply the columns of the model matrix.
That's all the documentation the author of the package provides. The package contains no user guide or package vignette. R is not designed to be user-friendly, and the documentation assumes you already understand the statistical methods involved.
I'd assume that the `coef` column gives the above `coefficients`, and the `exp(coef)` column is the exponential of these. As Cox regression involves a log link function, the coefficients are the log hazard ratios. Exponentiating them therefore gives you back hazard ratios.
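For instance, using the built-in `lung` data from the survival package:
```
library(survival)
fit <- coxph(Surv(time, status) ~ sex + age, data = lung)
summary(fit)       # 'coef' = log hazard ratios, 'exp(coef)' = hazard ratios
exp(confint(fit))  # confidence intervals on the hazard-ratio scale
```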
| null | CC BY-SA 2.5 | null | 2010-11-14T21:10:23.387 | 2010-11-14T21:10:23.387 | null | null | 449 | null |
4530 | 1 | null | null | 4 | 12457 | Yesterday, I ran a repeated-measures ANCOVA. The purpose was to determine the usability of two computer systems, call them 1 and 2. Each subject completed three (conceptually different) tasks, call them A B and C, on each system. (The covariate was whether the subject had prior experience with the current system, system 1). The dependent variable was time taken to complete each task.
A GLM in SPSS prints out both the standard and multivariate results. The standard, within-subjects results indicated a main effect of system and an interaction of system*task. Now why would I need to use the multivariate results? They were significant as well, and I've heard multivariate tests have more power. But this seems to provide too much wiggle room for researchers-- if the within subjects test isn't significant, they can just look to see if the multivariate results are significant.
Also, since there was an interaction system*task, I ran simple main effects on each task. Turned out, A and B were significant in one direction, C was significant in the other. This seems to suggest to me that doing a MANOVA is a bad idea here. But how would I even know that if I was only looking at the multivariate results?
In short, I guess I just don't understand when to use MANOVA vs ANOVA if I have more than one DV.
| When to interpret multivariate tests when performing repeated-measures ANCOVA? | CC BY-SA 3.0 | null | 2010-11-14T21:14:33.743 | 2011-10-03T05:24:24.107 | 2011-10-03T05:24:24.107 | 183 | 1977 | [
"anova",
"spss",
"multivariate-analysis",
"manova"
]
|
4531 | 2 | null | 4530 | 8 | null | This is not a complete answer as ANOVA with repeated-measures is a complex topic. You can look at your design from a multivariate point of view if you regard your data not as representing realisations of one DV in different conditions, but of (ultimately) different DVs which are to be analysed simultaneously. (Note that the multiple DVs will not correspond to the 2*3 cells themselves but will represent all linearly independent differences between the levels of a factor for the test of its main effect. For the "system" factor, there is just 1 difference variable, hence the multivariate test for this factor should be equivalent to the univariate approach.)
One consideration when choosing between univariate und multivariate tests are the test's assumptions. The univariate approach assumes (among others) sphericity of the error variance-covariance matrix for each of the three tests (factors A, B, interaction A*B). This may be doubtful in many cases - which is why the epsilon-correction schemes were developed. But these are not universally regarded as the best solution. (Note that the sphericity assumption here automatically holds for the "system" factor as it only has 2 levels.)
I suggest chapters 11-14 in Maxwell & Delaney (2004). Designing Experiments and Analyzing Data. NJ: Lawrence Erlbaum Associates. It is a long read but worth it - the chapters provide an in-depth explanation of the univariate and multivariate approach to one-way and multi-way repeated measures ANOVA.
| null | CC BY-SA 2.5 | null | 2010-11-14T22:29:08.370 | 2010-11-14T22:54:11.630 | 2010-11-14T22:54:11.630 | 1909 | 1909 | null |
4534 | 2 | null | 4528 | 25 | null | If you have a single explanatory variable, say treatment group, a Cox's regression model is fitted with `coxph()`; the coefficient (`coef`) reads as a regression coefficient (in the context of the Cox model, described hereafter) and its exponential gives you the hazard ratio for the treatment group (compared to the control or placebo group). For example, if $\hat\beta=-1.80$, then the hazard ratio is $\exp(-1.80)=0.165$; that is, the hazard in the treatment group is 16.5% of that in the control group.
As you may know, the hazard function is modeled as
$$
h(t)=h_0(t)\exp(\beta'x)
$$
where $h_0(t)$ is the baseline hazard. The hazards depend multiplicatively on the covariates, and $\exp(\beta_1)$ is the ratio of the hazards between two individuals whose values of $x_1$ differ by one unit when all other covariates are held constant. The ratio of the hazards of any two individuals $i$ and $j$ is $\exp\big(\beta'(x_i-x_j)\big)$, and is called the hazard ratio (or incidence rate ratio). This ratio is assumed to be constant over time, hence the name of proportional hazard.
To echo your preceding question about `survreg`, here the form of $h_0(t)$ is left unspecified; more precisely, this is a semi-parametric model in that only the effects of covariates are parametrized, and not the hazard function. In other words, we don't make any distribution assumption about survival times.
The regression parameters are estimated by maximizing the partial log-likelihood defined by
$$
\ell=\sum_f\log\left(\frac{\exp(\beta'x_f)}{\sum_{r(f)}\exp(\beta'x_r)}\right)
$$
where the first summation is over all deaths or failures $f$, and the second summation is over all subjects $r(f)$ still alive (but at risk) at the time of failure -- this is known as the risk set. In other words, $\ell$ can be interpreted as the log profile likelihood for $\beta$ after eliminating $h_0(t)$ (or in other words, the LL where the $h_0(t)$ have been replaced by functions of $\beta$ that maximize the likelihood with respect to $h_0(t)$ for a fixed vector $\beta$).
About censoring, it is not clear whether you refer to left censoring (as might be the case if we consider an origin for the time scale that is earlier than the time when observation began, also called delayed entry), or right-censoring. In any case, more details about the computation of the regression coefficients and how the [survival](http://cran.r-project.org/web/packages/survival/index.html) package handles censoring can be found in Therneau and Grambsch, [Modeling Survival Data](http://www.springer.com/statistics/life+sciences,+medicine+%26+health/book/978-0-387-98784-2) (Springer, 2000). [Terry Therneau](http://mayoresearch.mayo.edu/mayo/research/biostat/therneau-book.cfm) is the author of the former S package. An [online tutorial](http://www.mayo.edu/hsr/people/therneau/survival.ps) is available.
[Survival Analysis in R](http://www.ddiez.com/teac/surv/R_survival.pdf), by David Diez, provides a good introduction to Survival Analysis in R. A brief overview of $\chi^2$ tests for regression parameters is given p. 10. Hopefully, this should help clarifying the on-line help quoted by [@onestop](https://stats.stackexchange.com/questions/4528/what-is-the-difference-between-the-coef-and-expcoef-output-of-coxph-in-r/4529#4529), "coefficients the coefficients of the linear predictor, which multiply the columns of the model matrix." For an applied textbook, I recommend [Analyzing Medical Data Using S-PLUS](http://www.springer.com/statistics/life+sciences,+medicine+%26+health/book/978-0-387-98862-7), by Everitt and Rabe-Hesketh (Springer, 2001, chap. 16 and 17), from which most of the above comes from.
Another useful reference is John Fox's appendix on [Cox Proportional-Hazards Regression for Survival Data](http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-cox-regression.pdf).
| null | CC BY-SA 2.5 | null | 2010-11-15T00:02:10.313 | 2010-11-15T22:25:52.023 | 2017-04-13T12:44:45.640 | -1 | 930 | null |
4535 | 2 | null | 4530 | 5 | null | Your question brings up a substantial problem with SPSS... too much automatic output. You are absolutely correct in your implication that the automatic generation of multiple tests is problematic. Some people take the fact that their computer program does this as evidence that it is good procedure. It isn't. Trust your statistical instincts on that point.
| null | CC BY-SA 2.5 | null | 2010-11-15T00:28:09.360 | 2010-11-15T00:28:09.360 | null | null | 601 | null |
4536 | 2 | null | 4517 | 10 | null | @Brett's response is fine.
If you are interested in describing your two-block structure, you could also use [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression). Basically, it is a regression framework which relies on the idea of building successive (orthogonal) linear combinations of the variables belonging to each block such that their covariance is maximal. Here we consider that one block $X$ contains explanatory variables, and the other block $Y$ responses variables, as shown below:

We seek "latent variables" who account for a maximum of information (in a linear fashion) included in the $X$ block while allowing to predict the $Y$ block with minimal error. The $u_j$ and $v_j$ are the loadings (i.e., linear combinations) associated to each dimension. The optimization criteria reads
$$
\max_{\mid u_h\mid =1,\mid v_h\mid =1}\text{cov}(X_{h-1}u_h,Yv_h)\quad \big(\equiv \max\text{cov}(\xi_h,\omega_h)\big)
$$
where $X_{h-1}$ stands for the deflated (i.e., residualized) $X$ block, after the $h^\text{th}$ regression.
The correlation between factorial scores on the first dimension ($\xi_1$ and $\omega_1$) reflects the magnitude of the $X$-$Y$ link.
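A minimal sketch with the [pls](http://cran.r-project.org/web/packages/pls/index.html) package and made-up two-block data (the package names and accessor functions below are one possible implementation, not the only one):
```
## Sketch: PLS regression of a two-column Y block on a six-column X block
library(pls)
X <- matrix(rnorm(100 * 6), 100, 6)
Y <- matrix(rnorm(100 * 2), 100, 2)
fit <- plsr(Y ~ X, ncomp = 3, validation = "CV")
summary(fit)        # % variance explained in X and Y per component
loadings(fit)       # X loadings (the u_h); Yloadings(fit) gives the v_h
scores(fit)[, 1]    # xi_1, the first X-block latent variable
```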
| null | CC BY-SA 2.5 | null | 2010-11-15T00:34:37.603 | 2010-11-15T00:34:37.603 | null | null | 930 | null |
4537 | 2 | null | 3242 | 3 | null | If you're interested in bivariate power-law functions (as opposed to univariate power-law distributions), then
Warton et al. "[Bivariate line-fitting methods for allometry](http://www.maths.unsw.edu.au/statistics/files/preprint-2005-02.pdf)." Biol. Rev. 81, 259-201 (2006)
is an excellent reference. In this case, regression is the right thing to do, although there can be some corrections (OLS vs. RMA, etc.) depending on what you want the results of the regression to mean.
| null | CC BY-SA 2.5 | null | 2010-11-15T06:44:49.957 | 2010-11-15T06:44:49.957 | null | null | null | null |
4538 | 2 | null | 4364 | 9 | null | Let $\eta$ and $\xi$ be two independent random variables with a common symmetric distribution such that
$$ P\left ( \left |\frac{\xi+\eta}{\sqrt{2}}\right | \geq t \right )\leq P(|\xi|\geq t).$$
Then these random variables are gaussian. (Obviously, if the $\xi$ and $\eta$ are centered gaussian, it is true.)
This is the [Bobkov-Houdre Theorem](http://www.math.ethz.ch/EMIS/journals/EJP-ECP/EcpVol1/paper2.pdf)
| null | CC BY-SA 2.5 | null | 2010-11-15T07:06:37.643 | 2010-11-15T15:07:17.930 | 2010-11-15T15:07:17.930 | 919 | 223 | null |
4539 | 1 | 4541 | null | 5 | 10987 | I am somewhat puzzled by what I have heard from others about the calculation of incidence rates.
From Kenneth Rothman's Modern Epidemiology, the incidence rate is calculated as `number of cases in a fixed period of time` divided by `person-time at risk during that period`, which means that if a patient develops the disease in the middle of the month, only the first half of that patient's days is included in the denominator, not the second half. Am I correct?
The problem arises because I saw someone use the following formula for the calculation:
`number of cases in a fixed period of time` / `total person-time during that period`
If this is used, then the incidence rate will be underestimated, because the denominator is now bigger than it should be. But this formula seems to be used often, so I wonder what it actually measures. Thanks!
| Calculation of incidence rate for epidemiological study in hospital | CC BY-SA 2.5 | null | 2010-11-15T10:23:55.267 | 2011-08-15T13:31:32.683 | 2010-11-15T11:57:43.427 | 449 | 588 | [
"epidemiology",
"definition"
]
|
4540 | 2 | null | 4539 | 1 | null | I think Rothman's definition should be used. In the second definition, all patients seem to be given the same duration (is this correct?), so that the result will not be the incidence rate but will be proportional to the cumulative incidence (cases / total in the given time frame).
| null | CC BY-SA 2.5 | null | 2010-11-15T11:16:20.133 | 2010-11-15T11:16:20.133 | null | null | 1573 | null |
4541 | 2 | null | 4539 | 4 | null | It is commonly admitted that the denominator for IRs is the "population at risk" (i.e., all individuals in which the studied event(s) may occur). Although your first formula is generally used, I found in The new public health, by Tulchinsky and Varavikova (Elsevier, 2009, 2nd. ed., [p. 84](http://books.google.fr/books?id=T2yQwTxRRfcC&pg=PA84&lpg=PA84&dq=incidence+rate+denominator&source=bl&ots=A3HGM26jbQ&sig=4-yAB3WHPLcMQHmiLRcseKASEsU&hl=fr&ei=FRjhTNegA4SGhQf-kYCLDQ&sa=X&oi=book_result&ct=result&resnum=2&ved=0CCYQ6AEwATgo#v=onepage&q=incidence%20rate%20denominator&f=false)) that a distinction is made between ordinary incidence rate, where the average size of the population in the fixed period of time is used in the denominator, and person-time incidence rate, with PT at risk in the denominator.
Obviously, when individuals not at risk of the disease are included in the denominator, the resultant measure of disease frequency will underestimate the true incidence of disease in the population under investigation, but see [Numerators, denominators and populations at risk](http://www.healthknowledge.org.uk/public-health-textbook/research-methods/1a-epidemiology/numerators-denominators-populations).
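As a toy illustration of that underestimation (numbers made up): suppose 10 patients are followed over a 30-day month and 2 of them develop the disease on day 15.
- Person-time incidence rate: $2/(8\times 30 + 2\times 15) = 2/270 \approx 0.0074$ cases per patient-day.
- Using total person-time, $10\times 30 = 300$ patient-days, gives $2/300 \approx 0.0067$, a slightly smaller rate, which is the underestimation suspected in the question.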
| null | CC BY-SA 2.5 | null | 2010-11-15T11:33:03.200 | 2010-11-15T11:33:03.200 | null | null | 930 | null |
4542 | 1 | 4593 | null | 4 | 198 | After running a study based on interactive genetic algorithms, I have a univariate data file containing multiple participants each doing multiple generations (blocks) of multiple trials.
Is there an established approach used to analyse this?
I've looked at using a grand mean on each of the component variables. I've also looked at the modal values of these variables after filtering out all but the final generation.
Neither approach seems to properly capture the richness of the data, so I'd like to know how other people would handle this.
| Are there any recommended approaches for analysing data from genetic algorithms? | CC BY-SA 2.5 | null | 2010-11-15T16:23:13.670 | 2010-11-16T17:23:45.307 | null | null | 1950 | [
"genetic-algorithms"
]
|
4543 | 1 | 4554 | null | 11 | 4636 | I am fitting a linear model in R using `glmnet`. The original (non-regularized) model was fitted using `lm` and did not have a constant term (i.e. it was in the form `lm(y~0+x1+x2,data)`).
`glmnet` takes a matrix of predictors and a vector of responses. I've been reading `glmnet` documentation, and can find no mention of the constant term.
So, is there a way to ask `glmnet` to force the linear fit through the origin?
| In R, does "glmnet" fit an intercept? | CC BY-SA 2.5 | null | 2010-11-15T16:35:22.907 | 2010-11-15T20:54:46.520 | 2010-11-15T20:54:46.520 | 439 | 439 | [
"r",
"regression",
"lasso"
]
|
4544 | 1 | 4549 | null | 29 | 47692 | Please provide R code which allows one to conduct a between-subjects ANOVA with -3, -1, 1, 3 contrasts. I understand there is a debate regarding the appropriate Sum of Squares (SS) type for such an analysis. However, the default type of SS used in SAS and SPSS (Type III) is considered the standard in my area, so I would like the results of this analysis to match exactly what is generated by those statistics programs. To be accepted, an answer must directly call aov(), but other answers may be voted up (especially if they are easy to understand/use).
```
sample.data <- data.frame(IV=rep(1:4,each=20),DV=rep(c(-3,-3,1,3),each=20)+rnorm(80))
```
Edit: Please note, the contrast I am requesting is not a simple linear or polynomial contrast but is a contrast derived by a theoretical prediction, i.e. the type of contrasts discussed by Rosenthal and Rosnow.
| How does one do a Type-III SS ANOVA in R with contrast codes? | CC BY-SA 2.5 | null | 2010-11-15T17:01:36.680 | 2018-05-06T10:35:31.813 | 2011-09-13T20:14:59.197 | 442 | 196 | [
"r",
"anova",
"contrasts",
"sums-of-squares"
]
|
4545 | 2 | null | 4544 | 10 | null | You may want to have a look at this blog post:
[Obtaining the same ANOVA results in R as in SPSS - the difficulties with Type II and Type III sums of squares](http://myowelt.blogspot.com/2008/05/obtaining-same-anova-results-in-r-as-in.html)
(Spoiler: add `options(contrasts=c("contr.sum", "contr.poly"))` at the beginning of your script)
| null | CC BY-SA 2.5 | null | 2010-11-15T17:30:52.187 | 2010-11-15T17:30:52.187 | null | null | 582 | null |
4546 | 2 | null | 4288 | 1 | null | I'm not sure I understand your question, but you have no other answers, so I thought I'd give it a shot and revive this topic: If I understand you correctly, for a number of different sampling units (say, people) you have estimates that they have a given attribute, e.g. there is a probability that they have eaten turkey today - Pr(Turkey). You also have an estimate of the error in your record of Pr(Turkey). Thus, you expect that Joe has a Pr(Turkey) of 50% +/- 10%. Given information of this sort, I think you are interested in the expected number of your sampling units that will have eaten Turkey.
I believe you can simply sum the propensity scores and you'll get the expected number of sampling units with the attribute so long as the error of the estimate is symmetrical. In short, I'm proposing that the errors of the estimate become unimportant when you aggregate. The only time the variance is going to matter is if you want the expected variance of your aggregate score.
Edit: Given the edit to the question, I think the approach outlined above may still be salvageable, especially for binary observations. Suppose we have 5 persons, 3 of whom had a value of 1 for an attribute and 2 of whom had a value of 0, with probabilities .80, .81, .82, .83, and .84, respectively, of being observed correctly. We want to find the expected value of p(having that attribute). We can invert the probability of having that attribute for those who were judged not to have it, so that each of the 5 observations is phrased in terms of its probability of having the attribute: .80, .81, .82, (1 - .83) .17, and (1 - .84) .16, respectively. Thus the average Pr of having the attribute should be (.80+.81+.82+.17+.16)/5 = .552. Two problems exist with this approach.
- It assumes that probabilities can be averaged on a linear scale. That is, that a certainty of .95 and .75 means that the average probability of having the attribute is .85. It may be the case that averaging on a linear scale isn't the right sort of thing to do. Perhaps these probabilities should be converted to logits and then back, in which case the average probability of .95 (logit: 2.94) and .75 (logit: 1.10) is .88 (logit: 2.02); see the short check after this list.
- It has little to say about the case of non-binary classifications. Should one judge that if the categorization is .90 likely to be correct and there are two other options that there is a .05 chance it is one and .05 chance it is the other? Should base rates for the other options be considered? Etc.
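A quick numerical check of the two averaging scales in point 1 (a sketch, base R only):
```
p <- c(0.80, 0.81, 0.82, 1 - 0.83, 1 - 0.84)
mean(p)                              # linear average: 0.552
plogis(mean(qlogis(c(0.95, 0.75))))  # logit-scale average of .95 and .75: ~0.88
```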
| null | CC BY-SA 2.5 | null | 2010-11-15T17:36:43.753 | 2010-11-17T05:28:47.827 | 2010-11-17T05:28:47.827 | 196 | 196 | null |
4547 | 2 | null | 4462 | 11 | null | The median of medians is not the same as the median of the raw scores. A simple case of this is that when you have an odd number of sales, the median is the middle value; when you have an even number of sales, the median is commonly taken as the average between those two values. A more "real world" challenge to this is that states will sell differing numbers of houses, and thus the median of their medians is a poor guess as to the median of all home sales. Though it, too, will not be exactly right, a good first-pass estimate would be to find the median of values in which each state's median is repeated a number of times proportional to the number of sales in that state. Thus, I am essentially suggesting a weighted median.
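A sketch of that weighted-median calculation with made-up numbers:
```
state_median <- c(210, 150, 480)            # median sale price per state (made up)
n_sales      <- c(500, 120, 60)             # number of sales per state (made up)
median(rep(state_median, times = n_sales))  # each state's median repeated by its sales
```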
| null | CC BY-SA 2.5 | null | 2010-11-15T17:50:20.730 | 2010-11-15T17:50:20.730 | null | null | 196 | null |
4548 | 2 | null | 4451 | 1 | null | A good resource for finding datasets is:
[/r/datasets](http://datasets.reddit.com) on Reddit.
A quick glance at that page reveals [this source](http://networkdata.ics.uci.edu/resources.php), which might contain something useful for you.
| null | CC BY-SA 2.5 | null | 2010-11-15T18:07:57.280 | 2010-12-10T05:46:27.450 | 2010-12-10T05:46:27.450 | 115 | 115 | null |
4549 | 2 | null | 4544 | 25 | null | Type III sum of squares for ANOVA are readily available through the `Anova()` function from the [car](http://cran.r-project.org/web/packages/car/index.html) package.
Contrast coding can be done in several ways, using `C()`, the `contr.*` family (as indicated by @nico), or directly the `contrasts()` function/argument. This is detailed in §6.2 (pp. 144-151) of [Modern Applied Statistics with S](http://www.stats.ox.ac.uk/pub/MASS4/) (Springer, 2002, 4th ed.). Note that `aov()` is just a wrapper function for the `lm()` function. It is interesting when one wants to control the error term of the model (like in a within-subject design), but otherwise they both yield the same results (and whatever the way you fit your model, you still can output ANOVA or LM-like summaries with `summary.aov` or `summary.lm`).
I don't have SPSS to compare the two outputs, but something like
```
> library(car)
> sample.data <- data.frame(IV=factor(rep(1:4,each=20)),
DV=rep(c(-3,-3,1,3),each=20)+rnorm(80))
> Anova(lm1 <- lm(DV ~ IV, data=sample.data,
contrasts=list(IV=contr.poly)), type="III")
Anova Table (Type III tests)
Response: DV
Sum Sq Df F value Pr(>F)
(Intercept) 18.08 1 21.815 1.27e-05 ***
IV 567.05 3 228.046 < 2.2e-16 ***
Residuals 62.99 76
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
is worth to try in first instance.
About factor coding in R vs. SAS: R considers the baseline or reference level as the first level in lexicographic order, whereas SAS considers the last one. So, to get comparable results, either you have to use `contr.SAS()` or to `relevel()` your R factor.
| null | CC BY-SA 2.5 | null | 2010-11-15T18:26:07.910 | 2010-11-15T18:26:07.910 | null | null | 930 | null |
4550 | 2 | null | 4544 | 3 | null | Try the Anova command in the car library. Use the type="III" argument, as it defaults to type II. For example:
```
library(car)
mod <- lm(conformity ~ fcategory*partner.status, data=Moore, contrasts=list(fcategory=contr.sum, partner.status=contr.sum))
Anova(mod, type="III")
```
| null | CC BY-SA 2.5 | null | 2010-11-15T18:31:59.447 | 2010-11-15T18:31:59.447 | null | null | 101 | null |
4551 | 1 | null | null | 254 | 27187 | I'm a grad student in psychology, and as I pursue more and more independent studies in statistics, I am increasingly amazed by the inadequacy of my formal training. Both personal and second hand experience suggests that the paucity of statistical rigor in undergraduate and graduate training is rather ubiquitous within psychology. As such, I thought it would be useful for independent learners like myself to create a list of "Statistical Sins", tabulating statistical practices taught to grad students as standard practice that are in fact either superseded by superior (more powerful, or flexible, or robust, etc.) modern methods or shown to be frankly invalid. Anticipating that other fields might also experience a similar state of affairs, I propose a community wiki where we can collect a list of statistical sins across disciplines. Please, submit one "sin" per answer.
| What are common statistical sins? | CC BY-SA 3.0 | null | 2010-11-15T18:46:37.113 | 2022-12-07T12:54:23.950 | 2016-08-11T17:17:48.537 | 22468 | 364 | [
"fallacy"
]
|
4552 | 2 | null | 4551 | 25 | null | Assuming that correlation implies causation, which is not as bad as accepting the null hypothesis.
| null | CC BY-SA 2.5 | null | 2010-11-15T18:56:19.547 | 2010-11-15T18:56:19.547 | null | null | 1307 | null |
4553 | 2 | null | 4551 | 42 | null | Dichotomization of a continuous predictor variable to either "simplify" analysis or to solve for the "problem" of non-linearity in the effect of the continuous predictor.
| null | CC BY-SA 2.5 | null | 2010-11-15T18:57:56.170 | 2010-11-15T21:25:05.107 | 2010-11-15T21:25:05.107 | 364 | 364 | null |
4554 | 2 | null | 4543 | 13 | null | Yes, an intercept is included in a [glmnet](http://cran.r-project.org/web/packages/glmnet/index.html) model, but it is not regularized (cf. [Regularization Paths for Generalized Linear Models via Coordinate Descent](http://www-stat.stanford.edu/~jhf/ftp/glmnet.pdf), p. 13). More details about the implementation could certainly be obtained by carefully looking at the code (for a gaussian family, it is the `elnet()` function that is called by `glmnet()`), but it is in Fortran.
You could try the [penalized](http://cran.r-project.org/web/packages/penalized/index.html) package, which allows you to remove the intercept by passing `unpenalized = ~0` to `penalized()`.
```
> library(penalized)  # provides the penalized() function used below
> x <- matrix(rnorm(100*20),100,20)
> y <- rnorm(100)
> fit1 <- penalized(y, penalized=x, unpenalized=~0,
standardize=TRUE)
> fit2 <- lm(y ~ 0+x)
> plot((coef(fit1) + coef(fit2))/2, coef(fit2)-coef(fit1))
```
To get Lasso regularization, you might try something like
```
> fit1b <- penalized(y, penalized=x, unpenalized=~0,
standardize=TRUE, lambda1=1, steps=20)
> show(fit1b)
> plotpath(fit1b)
```
As can be seen in the next figure, there is little difference between the regression parameters computed with the two methods (left), and you can plot the Lasso path solution very easily (right).

| null | CC BY-SA 2.5 | null | 2010-11-15T19:02:36.613 | 2010-11-15T19:02:36.613 | null | null | 930 | null |
4555 | 2 | null | 4551 | 23 | null | Analysis of rate data (accuracy, etc.) using ANOVA, thereby assuming that rate data have Gaussian-distributed error when they are actually binomially distributed.
[Dixon (2008)](http://dx.doi.org/10.1016/j.jml.2007.11.004) provides a discussion of the consequences of this sin and exploration of more appropriate analysis approaches.
| null | CC BY-SA 2.5 | null | 2010-11-15T19:12:03.287 | 2010-11-16T15:47:31.527 | 2010-11-16T15:47:31.527 | 364 | 364 | null |
4556 | 1 | 6724 | null | 8 | 117711 | Having the Transfer Function of a discrete system as such:
$$H(z) = \frac{0.8}{z(z-0.8)}$$
I am asked to find the Steady State Gain of the system.
I have the solution and it simply states:
Steady State Gain:
$$H(1) = \frac{0.8}{1-0.8} = 4$$
I have no idea why this is the case, and the book, the slides, the solutions, my (limited) notes and the internet are seemingly empty on the subject. Is the steady-state gain of a system always the value of the transfer function evaluated at $z = 1$? That just sounds ridiculous, especially since I'm not finding any references to it online.
I was chased out of MathOverflow with this question; those guys really hate homework... Then again, who doesn't.
| How to compute the steady state gain from the transfer function of a discrete time system? | CC BY-SA 2.5 | null | 2010-11-15T19:12:46.883 | 2018-01-12T15:20:02.297 | 2010-11-15T22:57:21.427 | 159 | 1994 | [
"time-series",
"self-study"
]
|
4557 | 2 | null | 4551 | 4 | null | Application of least-squares minimization when maximum-likelihood procedures exist.
| null | CC BY-SA 2.5 | null | 2010-11-15T19:16:20.850 | 2010-11-15T19:16:20.850 | null | null | 364 | null |
4558 | 2 | null | 4551 | 18 | null | A current popular one is plotting 95% confidence intervals around the raw performance values in repeated measures designs when they only relate to the variance of an effect. For example, a plot of reaction times in a repeated measures design with confidence intervals where the error term is derived from the MSE of a repeated measures ANOVA. These confidence intervals don't represent anything sensible. They certainly don't represent anything about the absolute reaction time. You could use the error term to generate confidence intervals around the effect but that is rarely done.
| null | CC BY-SA 3.0 | null | 2010-11-15T19:26:09.143 | 2012-03-09T13:33:02.260 | 2012-03-09T13:33:02.260 | 601 | 601 | null |
4559 | 2 | null | 4517 | 0 | null | Have you already come across the term "canonical correlation"? There you have sets of variables on the independent as well as on the dependent side. But maybe there are more modern concepts available; the descriptions I have are all from the eighties/nineties...
| null | CC BY-SA 2.5 | null | 2010-11-15T19:59:37.720 | 2010-11-15T19:59:37.720 | null | null | 1818 | null |
4560 | 2 | null | 4551 | 14 | null | My intro psychometrics course in undergrad spent at least two weeks teaching how to perform a stepwise regression. Is there any situation where stepwise regression is a good idea?
| null | CC BY-SA 2.5 | null | 2010-11-15T20:01:39.920 | 2010-11-15T20:01:39.920 | null | null | 1118 | null |
4561 | 1 | null | null | 1 | 2927 | For a university paper, I have to compare whether two program versions are statistically significantly different. The comparison in terms of runtime performance is straightforward -- I determine whether the confidence intervals for the two sets of measurements overlap. However, in the case of memory footprint, I have no sets of measurements, because the memory footprint does not vary from measurement to measurement (e.g., version 1 has a constant footprint of 920 kb and version 2 consumes 1040 kb). Is there any way to compare both values in a statistically sound manner?
| Comparison of two values in a statistically sound manner | CC BY-SA 2.5 | null | 2010-11-15T20:14:11.647 | 2010-11-16T00:13:45.617 | 2010-11-15T22:58:10.100 | 159 | 1992 | [
"hypothesis-testing",
"statistical-significance",
"self-study",
"inference"
]
|
4562 | 2 | null | 4561 | 4 | null | It is quite simple: the memory footprint is constant, which means it has no variation, and thus any difference is significant.
| null | CC BY-SA 2.5 | null | 2010-11-15T20:32:16.927 | 2010-11-15T20:32:16.927 | null | null | null | null |
4563 | 2 | null | 4551 | 14 | null | Failing to test the assumption that the error is normally distributed and has constant variance between treatments. These assumptions aren't always tested, so least-squares model fitting is probably often used when it is actually inappropriate.
| null | CC BY-SA 2.5 | null | 2010-11-15T21:39:05.163 | 2010-11-17T16:29:42.977 | 2010-11-17T16:29:42.977 | 319 | 101 | null |
4564 | 1 | 4763 | null | 7 | 446 | It is known that [belief propagation](http://en.wikipedia.org/wiki/Belief_propagation) gives exact results on trees. Are there interesting examples where [Generalized Belief Propagation](http://www.merl.com/publications/TR2001-22/) is exact? (Edit: the junction tree is not interesting because it is exactly solvable without GBP.)
On the surface, Belief Propagation passes messages between cliques and separators, whereas GBP allows a more general region hierarchy. That helps with the convergence rate, but I wonder if this also extends the class of exactly solvable inference problems.
Edit: as Thomas Minka points out, the junction tree algorithm can be viewed as a version of generalized belief propagation. But it can also be viewed as a version of (cluster) belief propagation. What I'm wondering specifically is whether GBP can give an exact solution for any problem for which BP can't. The motivation is that, with an exact solution, GBP gives a result in a finite number of steps, and you can view that result as a kind of algebraic factorization of the problem, in the spirit of the [Generalized Distributive Law](http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.125.8954) paper.
| When is Generalized Belief Propagation exact? | CC BY-SA 2.5 | null | 2010-11-15T21:47:29.903 | 2010-11-20T23:51:23.590 | 2010-11-20T22:30:54.403 | 511 | 511 | [
"graphical-model"
]
|
4566 | 2 | null | 4561 | 3 | null | You're incorrect about the confidence intervals. Assuming they are 95% confidence intervals, they can overlap quite a bit and the two means can still be statistically significantly different. What you need is for the confidence interval of the difference between the two not to overlap 0. The standard error of the difference is sqrt(2 *((var1+var2)/(2(n-1)))/n) and the df for the confidence interval is 2(n-1) (assuming you ran the test an equal number of times for both programs).
Alternatively you could average the w (one side of the confidence interval) from each condition and then multiply that by sqrt(2). That will give you a minimum difference for statistical significance.
If the program can run quickly just run it 1e5 times or more (lots and lots) and call the resulting means and standard deviations approximate population values. Your standard error will be so small that a confidence interval will be almost 0 and any difference would be significant. You could even report overlayed histogram plots and CDFs.
(while I would endorse this last method you might need to do the CIs anyway)
| null | CC BY-SA 2.5 | null | 2010-11-16T00:13:45.617 | 2010-11-16T00:13:45.617 | null | null | 601 | null |
4567 | 2 | null | 4551 | 118 | null | Most interpretations of p-values are sinful! The conventional usage of p-values is badly flawed; a fact that, in my opinion, calls into question the standard approaches to the teaching of hypothesis tests and tests of significance.
Haller and Krauss have found that statistical instructors are almost as likely as students to misinterpret p-values. (Take the test in their paper and see how you do.) Steve Goodman makes a good case for discarding the conventional (mis-)use of the p-value in favor of likelihoods. The Hubbard paper is also worth a look.
Haller and Krauss. [Misinterpretations of significance: A problem students share with their teachers](http://www.dgps.de/fachgruppen/methoden/mpr-online/issue16/art1/article.html). Methods of Psychological Research (2002) vol. 7 (1) pp. 1-20 ([PDF](https://web.archive.org/web/20160310003951/http://www.dgps.de/fachgruppen/methoden/mpr-online/issue16/art1/haller.pdf))
Hubbard and Bayarri. [Confusion over Measures of Evidence (p's) versus Errors (α's) in Classical Statistical Testing](http://pubs.amstat.org/doi/abs/10.1198/0003130031856). The American Statistician (2003) vol. 57 (3)
Goodman. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med (1999) vol. 130 (12) pp. 995-1004 ([PDF](https://web.archive.org/web/20111130055621/http://psg-mac43.ucsf.edu/ticr/syllabus/courses/4/2003/11/13/Lecture/readings/Steven%20Goodman.pdf))
Also see:
Wagenmakers, E-J. A practical solution to the pervasive problems of p values. Psychonomic Bulletin & Review, 14(5), 779-804.
for some clear cut cases where even the nominally "correct" interpretation of a p-value has been made incorrect due to the choices made by the experimenter.
Update (2016): In 2016, American Statistical Association issued a statement on p-values, see [here](https://web.archive.org/web/20160321211044/https://www.amstat.org/newsroom/pressreleases/P-ValueStatement.pdf). This was, in a way, a response to the "ban on p-values" issued by [a psychology journal](http://www.tandfonline.com/doi/pdf/10.1080/01973533.2015.1012991) about a year earlier.
| null | CC BY-SA 4.0 | null | 2010-11-16T00:15:32.840 | 2022-11-28T14:19:15.210 | 2022-11-28T14:19:15.210 | 362671 | 1679 | null |
4568 | 1 | null | null | 5 | 2880 | Thanks for all the answers to the question [Calculation of incidence rate for epidemiological study in hospital](https://stats.stackexchange.com/questions/4539/calculation-of-incidence-rate-for-epidemiological-study-in-hospital). And here comes the second part of the question:
What about the prevalence rate then? I have read `The new public health` as suggested by Chi; the book says that prevalence is usually not available when using the ordinary incidence rate, but I saw another formula here:
`total case count in that period of time`/`total patient bed days during that period of time`
It puzzles me again: what is it? I have never heard of prevalence calculated with `patient-bed days` as the denominator.
Thanks!
| Calculation of incidence rate for epidemiological study -- prevalence rate this time | CC BY-SA 3.0 | null | 2010-11-16T01:24:41.520 | 2012-12-28T12:39:49.547 | 2017-04-13T12:44:40.883 | -1 | 588 | [
"epidemiology",
"definition"
]
|
4569 | 1 | 4577 | null | 9 | 11649 | I would like to gather input from people in the field about the Yates continuity correction for 2 x 2 contingency tables. [The Wikipedia article](http://en.wikipedia.org/wiki/Yates%27_correction_for_continuity) mentions it may adjust too far, and is thus only used in a limited sense. The [related post here](https://stats.stackexchange.com/questions/939/yates-correction-for-continuity-only-for-2x2) doesn't offer much further insight.
So to the people who use these tests regularly, what are your thoughts? Is it better to use the correction or not?
And here is a real-world example which yields different results at the 95% confidence level. Note this was a homework problem, but our class does not deal with the Yates continuity correction at all, so sleep easy knowing you aren't doing my homework for me.
```
samp <- matrix(c(13, 12, 15, 3), byrow = TRUE, ncol = 2)
colnames(samp) <- c("No", "Yes")
rownames(samp) <- c("Female", "Male")
chisq.test(samp, correct = TRUE)
chisq.test(samp, correct = FALSE)
```
| Yates continuity correction for 2 x 2 contingency tables | CC BY-SA 3.0 | null | 2010-11-16T01:27:37.710 | 2017-11-12T21:25:54.187 | 2017-11-12T21:25:54.187 | 11887 | 696 | [
"categorical-data",
"chi-squared-test",
"yates-correction"
]
|
4570 | 2 | null | 4551 | 5 | null | I would say: doing tests and regressions on a small set of data.
Edit: Without looking at the confidence intervals, or when the confidence intervals/error bars are not easy to calculate.
| null | CC BY-SA 2.5 | null | 2010-11-16T02:30:58.087 | 2010-11-19T09:35:48.883 | 2010-11-19T09:35:48.883 | 1709 | 1709 | null |
4571 | 2 | null | 4569 | 3 | null | If you have counts low enough that the Yates Correction is a worry (as in your example), you probably should be using Fisher's exact test. Otherwise, I recommend that after you use the chi-square test on a 2x2 table, you confirm your test with a log odds-ratio z-test.
| null | CC BY-SA 2.5 | null | 2010-11-16T02:48:09.623 | 2010-11-16T02:48:09.623 | null | null | 5792 | null |
4572 | 2 | null | 4551 | 26 | null | The one that I see quite often and always grinds my gears is the assumption that a statistically significant main effect in one group and a non-statistically significant main effect in another group implies a significant effect x group interaction.
| null | CC BY-SA 3.0 | null | 2010-11-16T02:49:20.940 | 2014-07-01T16:47:32.577 | 2014-07-01T16:47:32.577 | 196 | 196 | null |
4573 | 2 | null | 4451 | 3 | null | Visit the Max Planck Institute. They have also collected several datasets for online social networks (OSNs).
| null | CC BY-SA 2.5 | null | 2010-11-16T07:16:21.443 | 2010-11-16T07:16:21.443 | null | null | null | null |
4574 | 2 | null | 4551 | 8 | null | (With a bit of luck this will be controversial.)
Using a Neyman-Pearson approach to statistical analysis of scientific experiments. Or, worse, using an ill-defined hybrid of Neyman-Pearson and Fisher.
| null | CC BY-SA 2.5 | null | 2010-11-16T07:52:02.757 | 2010-11-16T07:52:02.757 | null | null | 1679 | null |
4575 | 1 | 4576 | null | 15 | 4291 | Are there good tutorials on object-oriented programming in R?
It would be great if it included the following:
- how to define a class;
- differences between S3 and S4 classes;
- operator overloading (I'd like to be able to write a+b where a and b are instances of the class I have in mind).
| Tutorials on object-oriented programming in R | CC BY-SA 2.5 | null | 2010-11-16T08:10:48.690 | 2015-08-02T21:11:59.127 | null | null | 439 | [
"r"
]
|
4576 | 2 | null | 4575 | 10 | null | Hadley Wickham's wiki on [devtools](https://github.com/hadley/devtools/wiki/) is a great resource for the necessary information in a concise form. However, if you want an exhaustive resource, the R language manual's [OOP](http://cran.r-project.org/doc/manuals/R-lang.html#Object_002doriented-programming) section may be helpful. I am sure more experienced members will have better suggestions.
| null | CC BY-SA 3.0 | null | 2010-11-16T08:20:20.087 | 2015-08-02T21:11:59.127 | 2015-08-02T21:11:59.127 | 30351 | 1307 | null |
4577 | 2 | null | 4569 | 6 | null | Yates' correction results in tests that are more conservative, much as with Fisher's "exact" tests.
Here is an online tutorial on the use of [Yates’s continuity correction](http://faculty.london.edu/cstefanescu/Yates.pdf), by Stefanescu et al, which clearly points to various flaws of systematic correction for continuity (pp. 4-6). Quoting Agresti ([CDA](http://www.stat.ufl.edu/~aa/cda/cda.html) 2002), "Yates (1934) mentioned that Fisher suggested the hypergeometric to him for an exact test", which led to the continuity-corrected version of the $\chi^2$. Agresti also indicated that Fisher's test is a good alternative now that computers can do it even for large samples (p. 103). Now, the point is that choosing a test really depends on the question that is asked and the assumptions that are made by each of them (e.g., in the case of the Fisher's test we assume that margins are fixed).
In your case, Fisher's test and the corrected $\chi^2$ agree and yield a $p$-value above 5%. For the ordinary $\chi^2$, if the $p$-value is computed using a Monte Carlo approach (see `simulate.p.value`), it fails to reach significance too.
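You can check this with something along the following lines (a quick sketch reusing the `samp` matrix defined in the question):

```
> fisher.test(samp)                   # exact test
> chisq.test(samp, correct=TRUE)      # Yates-corrected chi-square
> chisq.test(samp, correct=FALSE, simulate.p.value=TRUE, B=10000)
```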
Other useful references dealing with small sample size issues and the overuse of Fisher's test include:
- I. Campbell, Chi-squared and Fisher–Irwin tests of two-by-two tables with small sample recommendations, Statistics in Medicine 26(19): 3661–3675, 2007.
- Mark G. Haviland, Yates's correction for continuity and the analysis of 2 × 2 contingency tables, Statistics in Medicine 9(4): 363–367, 1990.
| null | CC BY-SA 2.5 | null | 2010-11-16T08:22:44.013 | 2010-11-16T08:22:44.013 | null | null | 930 | null |
4578 | 2 | null | 4515 | 2 | null | I am not absolutely sure what you want to achieve, but I will give it a try.
Measuring Importance
The concept of importance without any target is rather ambiguous. Suppose you have the tags "football", "crops" and "funny". Now you want to create a classification algorithm to determine whether a certain article is in the category "agriculture" or "sport". In this case the tags "football" and "crops" are more important than "funny". On the other hand, if you want to classify documents by whether they are funny or not, "football" and "crops" may be less helpful. I think you get the idea.
Sooo ... in relation to a certain label, you can measure the importance of tags by calculating the [information gain ratio](http://en.wikipedia.org/wiki/Information_gain_ratio) or related measures.
Calculating a probability
Without such a target, you can "only" calculate the probability that a certain tag occurs. In detail: you can calculate this probability either a priori (|Tag=ti|/|tags|) or a posteriori, i.e. the probability that a given article is tagged with a certain tag ti (prob(Tag=ti|Article=art)).
The latter can be achieved by constructing a Bayes model (e.g. naive Bayes) for each tag. In this case a binary label is constructed with value "1" if the tag is present, else "0".
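A minimal sketch of that per-tag model (my own toy example with invented word-count features, using the e1071 package):

```
library(e1071)
## toy data: two word-count features per article and a 0/1 label for the tag "football"
articles <- data.frame(words_sport  = c(5, 0, 2, 1, 7),
                       words_farm   = c(0, 4, 2, 3, 1),
                       has_football = factor(c(1, 0, 1, 0, 1)))
fit <- naiveBayes(has_football ~ words_sport + words_farm, data = articles)
predict(fit, articles, type = "raw")   # estimated prob(Tag = football | article)
```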
| null | CC BY-SA 2.5 | null | 2010-11-16T08:25:35.340 | 2010-11-16T08:25:35.340 | null | null | 264 | null |
4579 | 1 | null | null | 2 | 9871 | My question is: plus or minus?
Is the Lagrange function:
- the objective PLUS lambda times constraints, or
- the objective function MINUS lambda times constraints?
Example:
I want to maximize A = xy
subject to g(x,y)=2x+y-400=0
is F(x,y,lambda):
- xy + lambda (2x+y-400), or
- xy - lambda (2x+y-400)
I found both notations. Does that mean one can use them interchangeably (i.e. they are the same)?
Thanks for your help
| Is the Lagrange function objective plus lambda times constraints or objective minus lambda times constraints? | CC BY-SA 3.0 | null | 2010-11-16T08:27:45.800 | 2013-01-04T06:11:03.597 | 2012-11-10T02:07:00.883 | 183 | null | [
"optimization"
]
|
4580 | 1 | null | null | 16 | 3983 | Who of you in this forum uses R with the [multicore](http://cran.r-project.org/web/packages/multicore/index.html), [snow](http://cran.r-project.org/web/packages/snow/index.html) packages, or [CUDA](http://www.nvidia.com/object/cuda_home_new.html), for advanced calculations that need more power than a workstation CPU? On which hardware do you compute these scripts? At home/work, or do you have data center access somewhere?
The background of these questions is the following: I am currently writing my M.Sc. thesis about R and High Performance Computing, and I need solid knowledge about who actually uses R. I read that R had 1 million users in 2008, but that's more or less the only user statistic I could find on this topic - so I hope for your answers!
Sincerely Heinrich
| Who uses R with multicore, SNOW or CUDA package for resource intense computing? | CC BY-SA 2.5 | null | 2010-11-16T08:28:42.500 | 2012-10-19T01:57:54.343 | 2010-11-16T21:05:51.823 | 930 | 2006 | [
"r",
"mathematical-statistics",
"computational-statistics",
"parallel-computing"
]
|
4581 | 2 | null | 4575 | 7 | null | In addition to [@suncoolsu](https://stats.stackexchange.com/questions/4575/tutorials-on-object-oriented-programming-in-r/4576#4576)'s excellent response, there is [A (Not So) Short Introduction to S4](http://cran.r-project.org/doc/contrib/Genolini-S4tutorialV0-5en.pdf), by Christophe Genolini. It is available on the CRAN website.
| null | CC BY-SA 2.5 | null | 2010-11-16T08:30:14.563 | 2010-11-16T08:30:14.563 | 2017-04-13T12:44:55.360 | -1 | 930 | null |
4582 | 2 | null | 4579 | 1 | null | It is exactly the same!
You want the constraint to be respected, and you don't care about the sign of g(x,y)
| null | CC BY-SA 2.5 | null | 2010-11-16T09:27:03.307 | 2010-11-16T09:27:03.307 | null | null | 1709 | null |
4584 | 2 | null | 4580 | 4 | null | Since you ask, I am using the [foreach](http://cran.r-project.org/web/packages/foreach/index.html) package with the [multicore](http://cran.r-project.org/web/packages/multicore/index.html) backend. I use it to split an [embarrassingly parallel](http://en.wikipedia.org/wiki/Embarrassingly_parallel) workload across multiple cores on a single Nehalem box with lots of RAM. This works pretty well for the task at hand.
| null | CC BY-SA 2.5 | null | 2010-11-16T09:49:15.153 | 2010-11-16T09:49:15.153 | null | null | 439 | null |
4585 | 1 | 4586 | null | 6 | 3053 | I was reading about [exponential smoothing](http://en.wikipedia.org/wiki/Exponential_smoothing) on Wikipedia. I saw a sentence saying that I could find the optimal parameter $\alpha$ by using the method of least squares. How do I go about doing that? Are there other ways I can find the right $\alpha$?
Edit: Also, if possible, I'd like a way to handle missing values.
| Optimal parameter $\alpha$ for exponential smoothing using least squares | CC BY-SA 2.5 | null | 2010-11-16T09:52:30.103 | 2010-11-16T10:28:32.583 | 2010-11-16T10:28:32.583 | 159 | 847 | [
"time-series",
"estimation",
"optimization"
]
|
4586 | 2 | null | 4585 | 7 | null | Minimize the sum of squared one-step forecast errors. If $\hat{Y}_t$ is the prediction of $Y_t$ given $Y_1,\dots,Y_{t-1}$, then $e_t=Y_t-\hat{Y}_t$ is the one-step forecast error. So minimize $e_2^2+\cdots+e_n^2$.
You can also use maximum likelihood estimation as discussed in [my Springer book](http://www.exponentialsmoothing.net).
If you're just using simple exponential smoothing, and are happy to assume normal errors with constant variance, then an ARIMA(0,1,1) model is equivalent.
When you use a state space representation (such as in the innovations state space form, or by writing the ARIMA model in state space form), then handling missing values is easy. For example, the R function `arima()` will handle missing values without complaint.
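For the least-squares approach in the first paragraph, a minimal R sketch (my own illustration, not code from the book) could look like this:

```
## choose alpha for simple exponential smoothing by minimising the
## sum of squared one-step forecast errors
sse <- function(alpha, y) {
  level <- y[1]
  err <- numeric(length(y) - 1)
  for (t in 2:length(y)) {
    err[t - 1] <- y[t] - level          # one-step forecast error e_t
    level <- level + alpha * err[t - 1] # update the smoothed level
  }
  sum(err^2)
}
y <- as.numeric(LakeHuron)                        # any univariate series
optimize(sse, interval = c(0, 1), y = y)$minimum  # estimated alpha
```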
| null | CC BY-SA 2.5 | null | 2010-11-16T10:04:36.643 | 2010-11-16T10:18:31.483 | 2010-11-16T10:18:31.483 | 159 | 159 | null |
4587 | 2 | null | 4580 | 3 | null | I work in academia, and I'm using multicore for some heavy benchmarks of machine learning algorithms, mostly on our Opteron-based Sun Constellation and some smaller clusters; those are also rather embarrassingly parallel problems, so multicore's main role is to spread computation across a node's cores without multiplying memory usage.
| null | CC BY-SA 2.5 | null | 2010-11-16T10:23:55.127 | 2010-11-16T10:23:55.127 | null | null | null | null |
4588 | 2 | null | 4527 | 1 | null | The value column simply gives the values for the coefficient estimates.
| null | CC BY-SA 2.5 | null | 2010-11-16T11:37:33.347 | 2010-11-16T11:37:33.347 | null | null | 1806 | null |
4589 | 2 | null | 4551 | 24 | null | Especially in epidemiology and public health - using an arithmetic instead of a logarithmic scale when reporting graphs of relative measures of association (hazard ratio, odds ratio or risk ratio).
More information [here](http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2810%2960541-7/fulltext).
| null | CC BY-SA 2.5 | null | 2010-11-16T12:19:21.080 | 2010-11-16T12:19:21.080 | null | null | 22 | null |
4590 | 2 | null | 4551 | 47 | null | Testing the hypotheses $H_0: \mu=0$ versus $H_1: \mu\neq 0$
(for example in a Gaussian setting)
to justify that $\mu=0$ in a model (i.e. mixing "$H_0$ is not rejected" with "$H_0$ is true").
A very good example of that type of (very bad) reasoning is when you test whether the variances of two Gaussians are equal (or not) before testing whether their means are equal or not under the assumption of equal variances.
Another example occurs when you test normality (versus non-normality) to justify normality. Every statistician has done that in their life? It is baaad :) (and it should push people to check robustness to non-Gaussianity).
| null | CC BY-SA 3.0 | null | 2010-11-16T13:30:48.163 | 2011-09-22T15:00:43.493 | 2011-09-22T15:00:43.493 | 6432 | 223 | null |
4591 | 2 | null | 4341 | 5 | null | Another option using Excel that gives you a fair amount of flexibility over the number & size of bins is the `=frequency(data_array, bins_array)` function. It is an array function that expects two arguments. The first argument is your data, the second is your bins that you define.
Let's assume your data are in cells A1:A1000 and your bins are defined in cells B1:B20. You would want to highlight cells C1:C21 (one more cell than the number of bins, since the last entry counts values above the highest bin) and then type `=FREQUENCY(A1:A1000, B1:B20)`. Unlike normal functions, array functions must be entered with the key combination `SHIFT + CTRL + ENTER`. You should see the counts fill down for all bins; if not, you most likely hit only Enter and just the first cell was calculated.
There are plenty of good tutorials online explaining this in more detail; here's a decent [one](http://www.exceluser.com/solutions/q0001/frequency.htm).
| null | CC BY-SA 2.5 | null | 2010-11-16T14:27:29.113 | 2010-11-16T14:27:29.113 | null | null | 696 | null |
4592 | 2 | null | 4551 | 43 | null | Not really answering the question, but there's an entire book on this subject:
Phillip I. Good, James William Hardin (2003). Common errors in statistics (and how to avoid them). Wiley. ISBN [9780471460688](http://en.wikipedia.org/w/index.php?title=Special%3ABookSources&isbn=9780471460688)
| null | CC BY-SA 2.5 | null | 2010-11-16T16:24:50.713 | 2010-11-16T16:24:50.713 | null | null | 449 | null |
4593 | 2 | null | 4542 | 3 | null | The following approaches might be of interest. In (1) [http://www.chemoton.org/ref31.html](http://www.chemoton.org/ref31.html) a GA was used on image analysis but data was first treated. In order to treat the data, the GA was hybridized with K-means clustering, a well-know data classification procedure. Among classification the GA was also able to perform data feature extraction. Other approaches are also available from (2) ref 39 in chemoton org (3) ref 51 in chemoton org and (4) ref 42 in chemoton org among many other works. Hope it helps. Best, v.
(1) Vitorino Ramos, Fernando Muge, Map Segmentation by Colour Cube Genetic K-Mean Clustering, Proc. of ECDL´2000 - 4th European Conference on Research and Advanced Technology for Digital Libraries, J. Borbinha and T. Baker (Eds.), ISBN 3-540-41023-6, Lecture Notes in Computer Science, Vol. 1923, pp. 319-323, Springer-Verlag -Heidelberg, Lisbon, Portugal, 18-20 Sep. 2000.
(2) Vitorino Ramos, Fernando Muge, Pedro Pina, Self-Organized Data and Image Retrieval as a Consequence of Inter-Dynamic Synergistic Relationships in Artificial Ant Colonies, in Javier Ruiz-del-Solar, Ajith Abraham and Mario Köppen (Eds.), Frontiers in Artificial Intelligence and Applications, Soft Computing Systems - Design, Management and Applications, 2nd Int. Conf. on Hybrid Intelligent Systems, IOS Press, Vol. 87, ISBN 1 5860 32976, pp. 500-509, Santiago, Chile, Dec. 2002.
(3) Vitorino Ramos, Ajith Abraham, Evolving a Stigmergic Self-Organized Data-Mining, in ISDA-04, 4th Int. Conf. on Intelligent Systems, Design and Applications, Budapest, Hungary, ISBN 963-7154-30-2, pp. 725-730, August 26-28, 2004.
(4) Vitorino Ramos, Juan J. Merelo, Self-Organized Stigmergic Document Maps: Environment as a Mechanism for Context Learning, in AEB´2002 – 1st Spanish Conference on Evolutionary and Bio-Inspired Algorithms, E. Alba, F. Herrera, J.J. Merelo et al. (Eds.), pp. 284-293, Centro Univ. de Mérida, Mérida, Spain, 6-8 Feb. 2002.
| null | CC BY-SA 2.5 | null | 2010-11-16T17:23:45.307 | 2010-11-16T17:23:45.307 | null | null | null | null |
4595 | 1 | null | null | 5 | 342 | Genetic algorithms are avoided in the econometrics literature as often as possible, but sometimes they are still inevitable. The question is: which well-known models are the most difficult to estimate using conventional algorithms? (By conventional algorithms I mean the Gauss-Newton method, the Levenberg–Marquardt algorithm, and so on.)
Motivation: I want to test some heuristic methods, and I need a benchmark to be sure that a particular model is really hard to estimate.
| Nonlinear models which are hard to estimate | CC BY-SA 2.5 | null | 2010-11-16T17:25:53.533 | 2021-07-22T13:45:46.400 | 2010-11-16T22:59:21.257 | 159 | 1643 | [
"estimation",
"genetic-algorithms",
"nonlinear-regression",
"optimization"
]
|