Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
4383 | 1 | 4389 | null | 11 | 3655 | [edits made in response to feedback- thanks :-) ]
Doh! More edits! Sorry!
Hello-
I am doing some rather rough and ready data collection with a survey sent out to healthcare staff using a published scale about morale and other such issues.
The only thing is that the scale is rather long with all the other things in the survey and I would like to reduce its size by cutting each subscale in half and only using half the items. My intuition is that this is fine, since the subscales are inter-correlated, and while it's not ideal for publication-standard research, it's okay just for a bit of intra-organisational fact finding.
I wondered if anyone had any thoughts on the validity of doing this, pitfalls, or anything else. References particularly are gratefully received because my colleagues will need some convincing!
Many thanks,
Chris B
edits-
Yes it is a validated scale with known psychometric properties.
It's unidimensional and it has subscales, if that's the right way to put it.
I'll be working at the subscale and total, not the item, level.
30 items, probably about 40-60 individuals.
Cheers!
| Can one validly reduce the numbers of items in a published Likert-scale? | CC BY-SA 2.5 | null | 2010-11-10T12:32:49.143 | 2011-03-30T18:43:46.680 | 2011-03-30T18:43:46.680 | 930 | 199 | [
"psychometrics",
"scales",
"reliability",
"likert"
]
|
4384 | 2 | null | 4341 | 1 | null | In Excel, a simple way to group numeric data into bins is via a Pivot Table. Pull the numeric variable into the "row labels" area. Now right-click on any of the values in that column and choose "Group". You can set the min and max of the overall range and the bin size (equal bin widths for all data).
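If you need the same equal-width grouping outside Excel, here is a minimal R sketch (a hypothetical numeric vector `x`; the choice of 10 bins is arbitrary):
```
# cut a numeric vector into 10 equal-width bins, then tabulate the counts per bin
x <- c(2.3, 5.1, 7.8, 1.2, 9.4, 4.4, 6.0, 3.3)   # hypothetical data
bins <- cut(x, breaks = seq(min(x), max(x), length.out = 11), include.lowest = TRUE)
table(bins)
```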
| null | CC BY-SA 2.5 | null | 2010-11-10T14:07:28.267 | 2010-11-10T14:07:28.267 | null | null | 1945 | null |
4385 | 2 | null | 4316 | 2 | null | One place where I have seen this come up is in discussions of using "intention to treat" analysis versus an analysis that tries to get at the "efficacy" of a treatment in experiments with imperfect compliance. See the Wikipedia article on "intention to treat" ([link](http://en.wikipedia.org/wiki/Intention_to_treat_analysis)), which includes some references.
In a run-of-the-mill randomized control trial with noncompliance, the intention to treat estimate examines only the difference between those assigned to treatment and control. However, noncompliance means that some people who were assigned to treatment may not have actually taken it, and some assigned to be in the control group may have actually received the treatment. If so, the intention to treat estimate may understate the average treatment effect that would obtain were all members of the population under study to actually take up the treatment.
When this kind of noncompliance is present, the analyst has a decision to make. She could decide to simply do the intention to treat analysis, justifying it by saying that in the real world, we cannot control compliance, and so the intention to treat analysis is more "realistic" as an estimate of what would happen were this treatment approved for use clinically. I have seen this referred to as an analysis of a treatment's "effectiveness." Or, she may use some kind of adjustment method to try to get at how people who actually took up the treatment differed from those who didn't. She could justify this by saying that what we are really interested in knowing is the biological (in the case of a medical trial) "efficacy" of the treatment, and to do so, we need to make the comparison between those who actually took the treatment and those who didn't.
The issue for an analysis of biological efficacy is, what "kind of adjustment method" is valid? The current state of the art, as I understand it, is to view an experiment with noncompliance as an instrumental variables problem, a la Angrist, Imbens and Rubin (1996) ([gated link](http://www.jstor.org/stable/2291629)), or, more generally, to view the problem in terms of "principal stratification", a la Frangakis and Rubin (2002) ([gated link](http://www.jstor.org/stable/3068286)). As such, the randomization serves as an instrument that nonparametrically identifies "efficacy" effects for at least certain subpopulations---namely, those who would comply with their treatment or control assignment. Beyond this, one could impose a more stringent model in order to identify efficacy effects, but then one may wonder, why did you bother to do a randomized experiment in the first place?
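To make the instrumental-variables idea concrete, here is a minimal R sketch of the Wald estimator under noncompliance. The variable names (`z` for random assignment, `d` for treatment actually received, `y` for the outcome) are hypothetical, not from the papers cited above:
```
# intention-to-treat effect: difference in mean outcomes by assignment
itt <- mean(y[z == 1]) - mean(y[z == 0])
# first stage: effect of assignment on actual take-up of the treatment
takeup <- mean(d[z == 1]) - mean(d[z == 0])
# Wald / IV estimate of the complier average causal effect ("efficacy" among compliers)
cace <- itt / takeup
```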
| null | CC BY-SA 2.5 | null | 2010-11-10T14:16:39.320 | 2010-11-10T20:24:26.790 | 2010-11-10T20:24:26.790 | 96 | 96 | null |
4386 | 2 | null | 2419 | 1 | null | I have the same issue right now: I code in Python daily, use R once in a while, and need a good boosted regression tree algorithm. While there are lots of great Python packages for advanced analytics, my searching has not found a good offering for this particular algorithm. So, the route I think I'll be taking in coming weeks is to use the [GBM package in R](http://cran.r-project.org/web/packages/gbm/). There is a good paper showing practical issues with using it that can be found [here](http://www-stat.stanford.edu/~hastie/Papers/Ecology/Elith%20Leathwick%20Hastie_2008submitted.pdf). Importantly, the GBM package was basically used "off the shelf" to w[in the 2009 KDD Cup](http://www.cybaea.net/Blogs/Data/How-to-win-the-KDD-Cup-Challenge-with-R-and-gbm.html). So, I'll probably do all of my pre and post modeling in Python and use RPy to go back and forth with R/GBM.
| null | CC BY-SA 2.5 | null | 2010-11-10T14:36:25.517 | 2010-11-10T14:36:25.517 | null | null | 1080 | null |
4387 | 2 | null | 4259 | 6 | null | In Statistics, like in Data Mining, you start with data and a goal. In statistics there is a lot of focus on inference, that is, answering population-level questions using a sample. In data mining the focus is usually prediction: you create a model from your sample (training data) in order to predict test data.
The process in statistics is then:
- Explore the data using summaries and graphs. Depending on how data-driven the statistician is, some will be more open-minded, looking at the data from all angles, while others (especially social scientists) will look at the data through the lens of the question of interest (e.g., plot mainly the variables of interest and not others)
- Choose an appropriate statistical model family (e.g., linear regression for a continuous Y, logistic regression for a binary Y, or Poisson regression for count data), and perform model selection
- Estimate the final model
- Test model assumptions to make sure they are reasonably met (different from testing for predictive accuracy in data mining)
- Use the model for inference -- this is the main step that differs from data mining. The word "p-value" arrives here... (steps 2-5 are sketched in code after this list)
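A minimal R sketch of steps 2-5 for a continuous outcome (hypothetical data frame `dat` with outcome `y` and predictors `x1`, `x2`):
```
fit <- lm(y ~ x1 + x2, data = dat)   # choose a model family and estimate it
plot(fit)                            # residual diagnostics: check the model assumptions
summary(fit)                         # coefficients, standard errors, p-values
confint(fit)                         # confidence intervals for inference
```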
Take a look at any basic stats textbook and you'll find a chapter on Exploratory Data Analysis followed by some distributions (that will help choose reasonable approximating models), then inference (confidence intervals and hypothesis tests) and regression models.
I described to you the classic statistical process. However, I have many issues with it. The focus on inference has completely dominated the fields, while prediction (which is extremely important and useful) has been nearly neglected. Moreover, if you look at how social scientists use statistics for inference, you'll find that they use it quite differently! You can check out more about this [here](http://www.rhsmith.umd.edu/faculty/gshmueli/web/html/explain-predict.html)
| null | CC BY-SA 2.5 | null | 2010-11-10T14:38:26.370 | 2010-11-10T14:38:26.370 | null | null | 1945 | null |
4388 | 2 | null | 4337 | 2 | null | How about comparing the before and after periods across the two groups? Say you assign half the cinemas to the cash option (treatment) and half continue with no-cash (control). Now, you can compare how sales changed in the treatment group following the introduction of the cash option, and also how sales changed in the control group over the same period. If indeed the cash option is effective, then the change in the treatment group will be bigger than the change in the control group.
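This is the classic difference-in-differences comparison; a minimal R sketch (hypothetical data frame `cinemas` with columns `sales`, `treated`, and `post` indicating the period after the introduction):
```
fit <- lm(sales ~ treated * post, data = cinemas)
summary(fit)   # the treated:post coefficient is the difference-in-differences estimate
```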
I recall reading an interesting statistical analysis done by Prof Ayala Cohen at the Technion's statistical lab for assessing the effect of removing advertising boards from a major highway in Israel on accidents in a similar fashion: to control for other factors that changed during this period, they compared the reduction in accidents before/after to a parallel highway where advertising boards remained there throughout the period.
| null | CC BY-SA 2.5 | null | 2010-11-10T14:57:03.523 | 2010-11-10T14:57:03.523 | null | null | 1945 | null |
4389 | 2 | null | 4383 | 11 | null | Although there is still some information lacking (No. individuals and items per subscale), here are some general hints about scale reduction. Also, since you are working at the questionnaire level, I don't see why its length matters so much (after all, you will just give summary statistics, like total or mean scores).
I shall assume that (a) you have a set of K items measuring some construct related to morale, (b) your "unidimensional" scale is a second-order factor that might be subdivided into different facets, (c) you would like to reduce your scale to k < K items so as to summarize with sufficient accuracy subjects' totalled scale scores while preserving the content validity of the scale.
About content/construct validity of this validated scale: The number of items has certainly been chosen so as to best reflect the construct of interest. By shortening the questionnaire, you are actually reducing construct coverage. It would be good to check that the factor structure remains the same when considering only half of the items (which could also impact the way you select them, after all). This can be done using traditional FA techniques. You hold the responsibility for interpreting the scale in a spirit similar to that of the authors.
About scores reliability: Although it is a sample-dependent measure, scores reliability decreases when decreasing the number of items (cf. [Spearman-Brown formula](http://en.wikipedia.org/wiki/Spearman-Brown_prediction_formula)); another way to see that is that the standard error of measurement (SEM) will increase, but see [An NCME Instructional Module on Standard Error of Measurement](http://www.ncme.org/pubs/items/16.pdf), by Leo M Harvill.
Needless to say, it applies to every indicator that depends on the number of items (e.g., Cronbach's alpha which can be used to estimate one form of reliability, namely the internal consistency). Hopefully, this will not impact any between-group comparisons based on raw scores.
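As a quick illustration of the Spearman-Brown formula linked above, the predicted reliability after changing the test length by a factor $k$ is $\rho^* = k\rho/(1+(k-1)\rho)$; a tiny R sketch (the reliability value 0.85 is a hypothetical example, not taken from the question):
```
sb <- function(rho, k) k * rho / (1 + (k - 1) * rho)
sb(rho = 0.85, k = 0.5)   # halving a scale with reliability 0.85 gives about 0.74
```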
So, my recommendations (the easiest way) would be:
- Select your items so as to maximise construct coverage; check the dimensionality with FA and coverage with univariate responses distributions;
- Compare average interitem correlations to previously reported ones;
- Compute internal consistency for the full scale and your composites; check that they are in agreement with published statistics on the original scale (no need to test anything, these are sample-dependent measures);
- Test the linear (or polychoric, or rank) correlations between original and reduced (sub)scores, to ensure that they are comparable (i.e., that individuals' locations on the latent trait do not vary to a great extent, as reflected in the raw scores);
- If you have an external subject-specific variable (e.g., gender, age, or best a measure related to morale), compare known-group validity between the two forms.
The hard way would be to rely on [Item Response Theory](http://en.wikipedia.org/wiki/Item_response_theory) to select those items that carry the maximum of information on the latent trait -- scale reduction is actually one of its best applications. Models for polytomous items were partly described in this thread, [Validating questionnaires](https://stats.stackexchange.com/questions/4239/validating-questionnaires).
Update after your 2nd update
- Forget about any IRT models for polytomous items with so few subjects.
- Factor Analysis will also suffer from such a low sample size; you will get unreliable factor loadings estimates.
- 30 items divided by 2 = 15 items (it's easy to get an idea of the increase in the corresponding SEM for the total score), but it will definitely get worse if you consider subscales (this was actually my 2nd question--No. items per subscale, if any)
| null | CC BY-SA 2.5 | null | 2010-11-10T15:32:50.900 | 2010-11-11T05:39:58.823 | 2017-04-13T12:44:45.783 | -1 | 930 | null |
4390 | 2 | null | 4383 | 8 | null | I guess there's no clear-cut "yes/no" answer to your question. If you arbitrarily drop items from sub-scales to create a short form of the original questionnaire, you lose the long form's psychometric validation. Things that can change are the factorial structure of the questionnaire, reliability of sub-scales, item-total correlations, etc. (you'll note I'm used to classical test theory thinking, not IRT). Plus, you can't use any standardization of the original questionnaire. That's why short forms of established questionnaires have to undergo a separate validation phase.
Depending on your requirements, all is not lost, however. You may not need standardization because you may only want to compare results within your sample without making "absolute" judgements with respect to a reference population. IMHO, it would be a plus if you had the chance to validate the short form against the original form at least for a sub-sample of your group. This would allow you to see whether results are similar.
In general though, results to a questionnaire can be surprisingly sensitive to its item composition. People do not robotically fill out questionnaires but make all sorts of tacit assumptions and cognitive inferences: "what is this really about?", "what am I expected to report here?", "what do they actually want to know?". This can be heavily influenced by the given context of items, cf. Schwarz, N. 1996. Cognition and Communication: Judgmental Biases, Research Methods, and the Logic of Conversation. Mahwah, NJ: Lawrence Erlbaum.
| null | CC BY-SA 2.5 | null | 2010-11-10T15:35:46.517 | 2010-11-10T16:26:03.803 | 2010-11-10T16:26:03.803 | 1909 | 1909 | null |
4391 | 2 | null | 4383 | 2 | null | One reference that you may find useful for this:
[Stanton, J. M., Sinar, E. F., Balzer, W. K. and Smith, P. C. (2002), Issues and strategies for reducing the length of self-report scales. Personnel Psychology, 55: 167–194.](http://dx.doi.org/10.1111/j.1744-6570.2002.tb00108.x)
| null | CC BY-SA 2.5 | null | 2010-11-10T15:36:31.583 | 2010-11-10T22:49:43.327 | 2010-11-10T22:49:43.327 | 159 | 1871 | null |
4392 | 1 | 4395 | null | 1 | 2434 | Let $Z=(X+Y)/2$, where $X$ and $Y$ are independent normally-distributed random variables with known variances $\sigma^2_X$ and $\sigma^2_Y$ and unknown (and possibly different) means. Given a sample $x_1$ from $X$ and $y_1$ from $Y$, what is the maximum likelihood estimator of the mean of $Z$?
| What is the maximum likelihood estimator of the mean of two normally-distributed variables? | CC BY-SA 2.5 | null | 2010-11-10T17:13:54.410 | 2010-11-11T01:30:38.347 | 2010-11-11T01:30:38.347 | null | null | [
"estimation",
"maximum-likelihood",
"normal-distribution"
]
|
4393 | 2 | null | 4347 | 4 | null | I have no experience with fuzzy things (well, apart from [Fuzzy Felt](http://en.wikipedia.org/wiki/Fuzzy_Felt)) but this book looks interesting:
Buckley, James J. Fuzzy probability and statistics. Springer, 2006. ISBN [9783540308416](http://en.wikipedia.org/w/index.php?title=Special%3ABookSources&isbn=9783540308416).
| null | CC BY-SA 2.5 | null | 2010-11-10T17:18:08.687 | 2010-11-10T17:18:08.687 | null | null | 449 | null |
4394 | 1 | null | null | 18 | 42106 | It occurred to me that, while I've pieced together some ideas over the years about the differences between statistics and biostatistics, I've never heard a formal explanation. What is the distinction between these two disciplines (currently)? And why did this distinction begin in the first place?
EDIT: I've not been specific enough in my original question. I understand that biostatistics is the application and development of statistics in the biomedical field. But what are some specific examples of the distinctions? For example, what distinguishes graduate education in the two fields? What is the purpose of having distinct academic departments for the two disciplines (a distinction I see in no other field)?
| What is the difference between statistics and biostatistics? | CC BY-SA 2.5 | null | 2010-11-10T17:37:00.063 | 2019-06-24T14:10:12.680 | 2010-11-10T18:51:24.437 | 930 | 71 | [
"terminology",
"biostatistics"
]
|
4395 | 2 | null | 4392 | 3 | null | If $Z = \frac{X+Y}{2}$ then it must be that:
$Z \sim N(\frac{\mu_X + \mu_Y}{2} , \frac{\sigma_X^2 + \sigma_Y^2}{4})$
Since each mean is estimated from a single observation, the MLE of $\mu_X$ is $x_1$ and the MLE of $\mu_Y$ is $y_1$; by the invariance property of maximum likelihood, the MLE of the mean of $Z$, given that we observe $z=(x_1 + y_1)/2$, is therefore:
$\frac{x_1 + y_1}{2}$
| null | CC BY-SA 2.5 | null | 2010-11-10T17:58:49.740 | 2010-11-10T18:16:36.003 | 2010-11-10T18:16:36.003 | null | null | null |
4396 | 1 | 4400 | null | 2 | 2952 | I have a data file in which each participant is tested many times in several blocks. The data is formatted so that each trial contains a participant id, a block number and a score. What I want to do is group the data by participant and block, then rank the scores for each group, so I can use the top 50%.
For example:
```
p_id, block, score
------------------
1 1 35
1 1 84
1 1 12
1 1 76
1 2 51
1 2 67
1 2 54
1 2 18
```
With this data I would have 2 groups. Of the first (p_id=1,block=1) the second and fourth scores would be used (score=84,score=76). Of the second group, the second and third would be used (score=67,score=54).
How would I go about doing this in SPSS?
(I'm using v16, but have the option of upgrading if necessary.)
| In SPSS, is there any way to define and use groups of data based on combinations of variables? | CC BY-SA 2.5 | null | 2010-11-10T18:13:38.233 | 2010-12-26T13:37:32.393 | 2010-11-10T18:54:34.407 | 1950 | 1950 | [
"spss"
]
|
4397 | 2 | null | 4394 | 13 | null | When I look at the Wikipedia entry for [biostatistics](http://en.wikipedia.org/wiki/Biostatistics), the relation to biometrics doesn't seem so obvious to me since, historically, biometrics was more concerned with characterizing individuals by some phenotypes of interest, with large applications in population genetics (as exemplified by the work of Fisher), whereas part of this discipline now focuses on biometric systems (whose objectives are the "recognition or identification of individuals based on some physical or behavioral characteristics that are intrinsically unique for each individual", according to Boulgouris et al., Biometrics, 2010). Anyway, there still are journals like [Biometrika](http://biomet.oxfordjournals.org/) and [Biometrics](http://www.biometrics.tibs.org/); although I read the latter on an irregular basis, most articles focus on "biostatistical" theoretical or applied work. The same applies to [Biostatistics](http://biostatistics.oxfordjournals.org/). By "biostatistical" applications, I mean that it has to do with applications or models related to the biomedical domain, in a wide sense (biology, health science, genetics, etc.).
According to the Encyclopedia of Biostatistics (2005, 2nd ed.),
> (...) As is clear from the above examples, biostatistics is problem oriented. It is specifically directed to questions that arise in biomedical science. The methods of biostatistics are the methods of statistics -- concepts directed at variation in observations and methods for extracting information from observations in the face of variation from various sources, but notably from variation in the responses of living organisms and particularly human beings under study. Biostatistical activity spans a broad range of scientific inquiry, from the basic structure and functions of human beings, through the interactions of human beings with their environment, including problems of environmental toxicities and sanitation, health enhancement and education, disease prevention and therapy, the organization of health care systems and health care financing.
In sum, I think that Biostatistics is part of a super-family--Statistics--, and share most of its methods, but has a more focused area of interest (hence, an historical background, specific designs, and a general theoretical framework) and dedicated modeling strategies.
| null | CC BY-SA 2.5 | null | 2010-11-10T18:26:30.110 | 2010-11-10T18:48:10.007 | 2010-11-10T18:48:10.007 | 930 | 930 | null |
4398 | 2 | null | 4394 | 7 | null | To quote the "Encyclopedic dictionary of mathematics" by Kiyosi Itô (ed.):
> In many applied fields there exist systems of statistical methods which have been developed specifically for the respective fields, and although all of them are based essentially on the same general principles of statistical inference, each has its own special techniques and procedures. Specific names have been invented, such as biometrics, econometrics, psychometrics, technometrics, sociometrics, etc.
| null | CC BY-SA 2.5 | null | 2010-11-10T18:33:59.103 | 2010-11-10T18:33:59.103 | null | null | 439 | null |
4399 | 1 | 4415 | null | 5 | 3211 | I'm attempting to capture the regularity of a time series of events, one measurement per day, with a year's worth of data, and there can be at most one event a day. Say, for example, the day you do laundry. What I want to capture is a measurement of regularity. Capturing irregularity is straightforward: goodness of fit of the times between consecutive events to a Poisson distribution. But this doesn't distinguish between poorly fitting series very well.
So I want to measure the other end: how good am I at sticking to a weekly or bi-weekly schedule? My instinct is that I want to fit an autoregressive model and take the variance of epsilon. Does this sound right? And how do I normalize for frequency? That is, I don't want the epsilon to reflect whether I do laundry 50 times a year or 100 times, just the regularity.
Or is there a conventional way of doing this that I'm overlooking? In this case I'm biased towards fast conventional techniques rather than creative techniques.
| How do I capture regularity of a time series in a normalized way? | CC BY-SA 4.0 | null | 2010-11-10T19:06:41.087 | 2022-01-11T16:51:47.820 | 2021-09-10T15:33:39.483 | 11887 | 1951 | [
"time-series",
"pattern-recognition"
]
|
4400 | 2 | null | 4396 | 4 | null | Ignore my initial post and take Jon's advice; here's some sample code for the RANK command, where X1 is the variable to rank and GROUPVAR is the variable identifying groups:
```
RANK VARIABLES=X1 (A) BY GROUPVAR
/RANK
/PRINT=YES
/TIES=MEAN.
```
You can either go through the GUI to see all of its options or look up the RANK command in the help file. I can't see any reason to prefer my prior code over this simpler command, and it has options to compute fractional ranks or the ntiles the ranks fall within (which is the number you would need in a subsequent select statement to keep only the top half of the scores).
Finally, I would note that the [Google group forum](http://groups.google.com/group/comp.soft-sys.stat.spss/topics) and the UGA [SPSS listserv](http://spssx-discussion.1045642.n5.nabble.com/) are both good places to ask SPSS questions.
| null | CC BY-SA 2.5 | null | 2010-11-10T19:20:20.963 | 2010-12-22T20:28:59.727 | 2010-12-22T20:28:59.727 | 1036 | 1036 | null |
4401 | 1 | null | null | 1 | 2630 | In our usage of R for a non-trivial data analysis and estimation project, we've been repeatedly burnt by how tolerant R is toward misspelled or missing columns in a data frame. Typical example is calculating the weighted mean of a variable MYVAR in a data frame using another variable WEIGHT for weights:
```
m <- weighted.mean(tbl$MYVAR, w = tbl$WEIGHT, na.rm = TRUE)
```
Suppose I make a typo in the WEIGHT name in the operation above. What will happen is that R will expand my misspelled column into NULL and use it when computing the weighted mean, resulting in a non-weighted one.
Therefore, the question: is there any way to make R treat attempts to "read" a non-existent variable in a data frame as an error?
| Make R report error on using non-existent column name in a data frame | CC BY-SA 2.5 | null | 2010-11-10T19:25:32.353 | 2010-11-10T19:50:12.800 | 2010-11-10T19:35:48.507 | 930 | 1330 | [
"r"
]
|
4402 | 2 | null | 4401 | 1 | null | Maybe you can enclose your code in try/catch blocks; see `?try` and the associated examples. It is easy to test the class of the result ("try-error") in turn, e.g.
```
> res <- try(log("A"), silent=TRUE)
> class(res)
[1] "try-error"
```
You can also test directly for the correct spelling, by first listing the variables of interest--in your case, `MYVAR` and `WEIGHT`--and testing that they are part of the data.frame `df`, e.g.
```
df <- data.frame(x=rnorm(100), g1=gl(2, 50), g2=gl(5,20))
sel.vars <- c("x","g2")
if (all(sel.vars %in% colnames(df))) {
  ## compute things here, e.g. the weighted mean using the selected variables
} else {
  stop("fail")  # abort if any variable is misspelled or missing
}
```
| null | CC BY-SA 2.5 | null | 2010-11-10T19:35:20.923 | 2010-11-10T19:43:46.667 | 2010-11-10T19:43:46.667 | 930 | 930 | null |
4403 | 1 | null | null | 7 | 302 | We need to estimate the probability of a person getting a question right for a given content area given his history of getting questions in the same content area right in the past. We would also presumably have records on how others have done on this question and in this content area.
Is there a good way or ways of doing this?
(This is sort of an education theory question, but I couldn't find a better place to post this question.)
| Estimating the probability of a person getting a question right | CC BY-SA 2.5 | null | 2010-11-10T19:36:52.677 | 2010-11-22T22:28:42.057 | 2010-11-22T22:28:42.057 | 930 | 1618 | [
"probability",
"psychometrics",
"hypothesis-testing"
]
|
4404 | 2 | null | 4401 | 3 | null | Hmm... when I tried out your example with some fake data, `weighted.mean()` actually failed:
```
#Some fake data
dat <- data.frame(x = rnorm(100), weight = rnorm(100))
#The right weight var
weighted.mean(x = dat$x, w = dat$weight)
[1] 0.6161606
#Misspelled weight var
weighted.mean(x = dat$x, w = dat$wieght)
Error in weighted.mean.default(x = dat$x, w = dat$wieght) :
'x' and 'w' must have the same length
```
But anyway, another way to cope with this problem is to access your variables via indexing - it returns an error if you try to pick non-existent columns:
```
dat$wieght
NULL
dat[ , "wieght"]
Error in `[.data.frame`(dat, , "wieght") : undefined columns selected
weighted.mean(x = dat[ , "x"], w = dat[ , "wieght"], na.rm = TRUE)
Error in `[.data.frame`(dat, , "wieght") : undefined columns selected
```
| null | CC BY-SA 2.5 | null | 2010-11-10T19:50:12.800 | 2010-11-10T19:50:12.800 | null | null | 71 | null |
4405 | 2 | null | 4403 | 7 | null | If I understand your question correctly, you have a set of items (pass-fail) and you want to assess the probability of endorsing the $k$th item given its preceding responses? If that's the case, what is usually done in psychometrics for educational assessment is to rely on [Item Response Model](http://en.wikipedia.org/wiki/Item_response_theory), like the [Rasch Model](http://en.wikipedia.org/wiki/Rasch_model). In short, you model the probability of endorsing an item as a function of item difficulty and person ability (the more proficient an individual is, the more likely his response to an easy item will be correct). This assumes that the content you are assessing is unidimensional, and that the items can be ordered by difficulty on that scale. A [Guttman model](http://en.wikipedia.org/wiki/Guttman_scale) is rarely applicable, so we may allow for some "imperfect" response patterns (e.g., 111011101110000, the 4th and 8th items were failed although the examinee reached the 11th item before giving up), but the sum score is a sufficient statistic for the Rasch Model. Under this approach, you need to have responses from other individuals on the same set of items. To get an idea, look at the `LSAT` data set and the way it is analysed in the [ltm](http://cran.r-project.org/web/packages/ltm/index.html) R package.
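A minimal sketch along these lines with the ltm package (my own example, using the bundled LSAT data mentioned above; check the package documentation for the exact interface):
```
library(ltm)
fit <- rasch(LSAT)              # fit a Rasch (1-PL) model to the 5 LSAT items
summary(fit)
coef(fit, prob = TRUE)          # item difficulties and P(correct) at the average ability
factor.scores(fit)              # ability estimates for each observed response pattern
```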
I have described a psychometrical model, not a purely probabilistic framework for estimating the probability of failing after the $k$th item, with all other items right (this would follow a geometric law).
| null | CC BY-SA 2.5 | null | 2010-11-10T19:58:04.940 | 2010-11-10T20:05:36.893 | 2010-11-10T20:05:36.893 | 930 | 930 | null |
4406 | 2 | null | 4394 | 4 | null | Biostatistics, biometrics and biometry are synonyms. Medical statistics (sometimes called 'clinical biostatistics' for no clear reason) is a subset of these.
| null | CC BY-SA 2.5 | null | 2010-11-10T20:05:47.163 | 2010-11-10T20:05:47.163 | null | null | 449 | null |
4408 | 1 | null | null | 2 | 367 | Could anybody provide me with recent material related to cross-validation, especially an R package?
| Latest article or new development in cross validation? | CC BY-SA 3.0 | null | 2010-11-10T20:13:27.987 | 2022-08-09T21:01:36.430 | 2012-12-10T06:26:13.847 | 2116 | null | [
"r",
"machine-learning",
"cross-validation"
]
|
4410 | 2 | null | 4408 | 7 | null | Your question is not really precise, but I think the [caret](http://caret.r-forge.r-project.org/) package and its associated vignettes may be a good start. Quoting the website, it is
> a set of functions that attempt to streamline the process for creating predictive models.
In fact, it depends on a lot of other R packages dedicated to ML (see the list of suggested packages on [CRAN](http://cran.r-project.org/web/packages/caret/index.html)), but it definitely simplifies the management of cross-validation schemes (k-fold, leave-one-out).
[The Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) (Hastie et al.) is available on-line in its second edition, and all illustrations are done in R. Cross-validation is described at length in Chapter 7.
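A minimal sketch of a cross-validated fit with caret (hypothetical data frame `dat` with outcome `y`; "glm" is just a placeholder for whichever method you choose):
```
library(caret)
ctrl <- trainControl(method = "cv", number = 10)            # 10-fold cross-validation
fit  <- train(y ~ ., data = dat, method = "glm", trControl = ctrl)
fit$results                                                 # resampled performance estimates
```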
| null | CC BY-SA 2.5 | null | 2010-11-10T20:22:27.040 | 2010-11-10T20:22:27.040 | null | null | 930 | null |
4411 | 2 | null | 4408 | 8 | null | The following is a recent survey: "A survey of cross-validation procedures for model selection" by Sylvain Arlot and Alain Celisse (Statistics Surveys, Volume 4 (2010), 40-79.)
The full paper can be downloaded by following the PDF link on [this page](https://projecteuclid.org/journals/statistics-surveys/volume-4/issue-none/A-survey-of-cross-validation-procedures-for-model-selection/10.1214/09-SS054.full).
> Abstract: Used to estimate the risk of an estimator or to perform model selection, cross-validation is a widespread strategy because of its simplicity and its (apparent) universality. Many results exist on model selection performances of cross-validation procedures. This survey intends to relate these results to the most recent advances of model selection theory, with a particular emphasis on distinguishing empirical statements from rigorous theoretical results. As a conclusion, guidelines are provided for choosing the best cross-validation procedure according to the particular features of the problem in hand.
| null | CC BY-SA 4.0 | null | 2010-11-10T20:51:05.220 | 2022-08-09T21:01:36.430 | 2022-08-09T21:01:36.430 | 79696 | 439 | null |
4412 | 2 | null | 1780 | 1 | null | I would recommend using Association Rule Learning for this. It allows you to find words that often co-occur.
If you have a lot of data, it will be much faster than calculating a correlation matrix.
See my video series on text mining [here](http://vancouverdata.blogspot.com/2010/11/text-analytics-with-rapidminer-part-3.html). Includes a tutorial on Association Rules for text.
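A minimal sketch with the arules package (my own illustration; `docs` is a hypothetical list of character vectors, one vector of terms per document, and the support/confidence thresholds are placeholders):
```
library(arules)
trans <- as(docs, "transactions")                 # one "basket" of words per document
rules <- apriori(trans, parameter = list(supp = 0.01, conf = 0.5, minlen = 2))
inspect(sort(rules, by = "lift"))                 # strongest word co-occurrence rules first
```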
| null | CC BY-SA 2.5 | null | 2010-11-10T20:53:29.547 | 2010-11-10T20:53:29.547 | null | null | 74 | null |
4413 | 2 | null | 4259 | 2 | null | As far as books go, "The Elements of Statistical Learning" by Hastie, Tibshirani and Friedman is very good.
The full book is available on the [authors' web site](https://hastie.su.domains/ElemStatLearn/); you may want to take a look to see if it is at all suitable for your needs.
| null | CC BY-SA 4.0 | null | 2010-11-10T20:58:12.680 | 2022-12-03T04:29:36.237 | 2022-12-03T04:29:36.237 | 362671 | 439 | null |
4414 | 2 | null | 4259 | 2 | null | As for (on-line) references, I would recommend looking at Andrew Moore's tutorial slides on [Statistical Data Mining](https://web.archive.org/web/20100306025005/http://www.autonlab.org/tutorials/).
There are many textbooks on data mining and machine learning; maybe a good starting point is [Principles of Data Mining](http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3520), by Hand et al., and [Introduction to Machine Learning](http://mitpress.mit.edu/catalog/item/default.asp?tid=10341&ttype=2), by Alpaydin.
| null | CC BY-SA 4.0 | null | 2010-11-10T21:11:31.560 | 2022-12-03T04:30:54.830 | 2022-12-03T04:30:54.830 | 362671 | 930 | null |
4415 | 2 | null | 4399 | 5 | null | Let $X_t$ be the time between events. Then for very regular events, $X_t$ will be approximately constant. e.g., if you do your laundry every Monday, then $X_t=7$ for all $t$. So you could just use the variance of $X_t$, where the small variance corresponds to highly regular and large variance corresponds to low regularity.
Update:
If outliers are a problem, or you do not want occasional "misses" to affect the result, use the interquartile range (IQR) instead of the variance. If that removes too many observations, try the difference of the 90th and 10th percentiles.
To normalize, divide by the median (safer than assuming you know the frequency). Thus, one possible measure which should work for you is:
$$
(q_{0.9} - q_{0.1})/q_{0.5}
$$
where $q_{\alpha}$ denotes the $\alpha$-quantile.
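A minimal R sketch of this measure (hypothetical vector `event_days` holding the day numbers on which the event occurred):
```
gaps <- diff(sort(event_days))                 # times between consecutive events
reg  <- unname((quantile(gaps, 0.9) - quantile(gaps, 0.1)) / median(gaps))
reg                                            # near 0 for a very regular schedule
```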
| null | CC BY-SA 2.5 | null | 2010-11-10T22:42:02.583 | 2010-11-11T06:02:57.460 | 2010-11-11T06:02:57.460 | 159 | 159 | null |
4417 | 1 | null | null | 50 | 21180 | In Bayesian statistics, it is often mentioned that the posterior distribution is intractable and thus approximate inference must be applied. What are the factors that cause this intractability?
| What are the factors that cause the posterior distributions to be intractable? | CC BY-SA 4.0 | null | 2010-11-11T00:33:28.430 | 2019-11-16T03:13:43.170 | 2019-11-16T03:13:43.170 | null | 1913 | [
"bayesian",
"approximation",
"inference"
]
|
4418 | 1 | null | null | 0 | 1092 | Let $Z=(X+Y)/2$, where $X$ and $Y$ are independent normally-distributed random variables with known variances $\sigma^2_X$ and $\sigma^2_Y$ and unknown (and possibly different) means. Given a sample $x_1$ from $X$ and $y_1$ from $Y$, what is the minimum mean squared error estimator of the mean of $Z$? Is there a biased estimator with an MSE improved over the maximum likelihood estimator $\frac{x_1+y_1}{2}$? Can we generalize to $n$ mutually-independent random variables?
| What is the minimum mean squared error estimator of the mean of two normally-distributed variables? | CC BY-SA 2.5 | null | 2010-11-11T01:35:14.477 | 2010-11-11T06:18:01.563 | 2010-11-11T06:18:01.563 | null | null | [
"estimation",
"mean",
"normal-distribution"
]
|
4419 | 2 | null | 898 | 0 | null | I'm not sure what sort of data you need to process, but statistical algorithms are used quite frequently in robotics, media art, digital signal processing etc. It may be worthwhile to borrow from these domains.
| null | CC BY-SA 2.5 | null | 2010-11-11T02:16:48.133 | 2010-11-11T02:16:48.133 | null | null | 162 | null |
4420 | 2 | null | 4418 | 1 | null | Minimize $\text{SSE} = \sum_i (z_i - a)^2$ with respect to $a$ and you get the average of the two sample means, so it looks like your MLE is also your minimum MSE estimator, whether you have one $(x, y)$ pair or several. Not surprising, since the expression for the SSE and the log-likelihood function are almost identical.
| null | CC BY-SA 2.5 | null | 2010-11-11T03:30:24.730 | 2010-11-11T03:30:24.730 | null | null | 5792 | null |
4421 | 2 | null | 898 | 0 | null | You have PostRank:
[http://data.postrank.com/content](http://data.postrank.com/content)
How do you sort what content is current and meaningful? We do it for you. PostRank gathers original content, as well as information at the feed and story level. This includes title, author, language, tags, and related links.
We analyze that data to deliver timely, socially relevant information from millions of online sources. Then we deliver it how you want it. Real-time, custom, filtered access to every piece of content we index, along with its rich metadata
| null | CC BY-SA 2.5 | null | 2010-11-11T04:28:54.030 | 2010-11-11T04:28:54.030 | null | null | 1808 | null |
4422 | 1 | 4436 | null | 6 | 1933 | FULL DISCLOSURE: This is homework.
I have been provided with a small data set (n=21). The data are messy; looking at them in a scatterplot matrix provides me with little to no insight. I've been provided with 8 variables that are metrics created from a longitudinal study (BI, CONS, CL, CR, ..., VOBI). The other measurements are of mutual fund sales, returns, asset levels, market share, share of sales, and proportion of sales to assets.

Correlations are everywhere.
```
BI CONS CL CR QT COM CONV VOBI s r a ms ss share share2
BI 1.0000000 0.7620445 0.639830594 0.70384322 0.7741463 0.8451500 0.84704440 0.85003686 0.2106773 -0.238431047 0.36184548 0.40007830 0.4076563 0.31643802 -0.28283564
CONS 0.7620445 1.0000000 0.933595967 0.96979599 0.9892533 0.9069803 0.96781703 0.93416972 0.2316209 -0.074351798 0.31952292 0.40259511 0.4442877 0.24783884 -0.14788906
CL 0.6398306 0.9335960 1.000000000 0.88297431 0.8993748 0.8133169 0.89922684 0.81132166 0.1200420 -0.001107093 0.22132116 0.26729067 0.3033221 0.07650924 -0.25595278
CR 0.7038432 0.9697960 0.882974312 1.00000000 0.9788150 0.8965754 0.92335363 0.90848199 0.2934774 -0.119340914 0.35973640 0.46409570 0.5012178 0.32832247 -0.09005985
QT 0.7741463 0.9892533 0.899374782 0.97881497 1.0000000 0.9216887 0.95458369 0.94848419 0.2826278 -0.108430256 0.35520090 0.43290221 0.4823314 0.31761015 -0.12903075
COM 0.8451500 0.9069803 0.813316918 0.89657544 0.9216887 1.0000000 0.90302002 0.89682825 0.4305866 -0.255581594 0.50724121 0.55718441 0.5773171 0.40378679 -0.12085524
CONV 0.8470444 0.9678170 0.899226843 0.92335363 0.9545837 0.9030200 1.00000000 0.96097892 0.1993837 -0.065237725 0.32010735 0.41843335 0.4531298 0.28873934 -0.19668858
VOBI 0.8500369 0.9341697 0.811321664 0.90848199 0.9484842 0.8968283 0.96097892 1.00000000 0.2424889 -0.087126942 0.30390489 0.40390750 0.4845432 0.36588655 -0.07137107
s 0.2106773 0.2316209 0.120041993 0.29347742 0.2826278 0.4305866 0.19938371 0.24248894 1.0000000 -0.173034217 0.91766914 0.84673519 0.8596887 0.61299987 0.32072790
r -0.2384310 -0.0743518 -0.001107093 -0.11934091 -0.1084303 -0.2555816 -0.06523773 -0.08712694 -0.1730342 1.000000000 -0.22512978 -0.18337773 -0.1030943 -0.17650579 0.51768144
a 0.3618455 0.3195229 0.221321163 0.35973640 0.3552009 0.5072412 0.32010735 0.30390489 0.9176691 -0.225129778 1.00000000 0.92445370 0.8656139 0.63049461 0.03876774
ms 0.4000783 0.4025951 0.267290668 0.46409570 0.4329022 0.5571844 0.41843335 0.40390750 0.8467352 -0.183377734 0.92445370 1.00000000 0.9572730 0.77582501 0.08435813
ss 0.4076563 0.4442877 0.303322147 0.50121775 0.4823314 0.5773171 0.45312978 0.48454322 0.8596887 -0.103094325 0.86561394 0.95727301 1.0000000 0.83931302 0.24371447
share 0.3164380 0.2478388 0.076509240 0.32832247 0.3176102 0.4037868 0.28873934 0.36588655 0.6129999 -0.176505786 0.63049461 0.77582501 0.8393130 1.00000000 0.20313930
share2 -0.2828356 -0.1478891 -0.255952782 -0.09005985 -0.1290307 -0.1208552 -0.19668858 -0.07137107 0.3207279 0.517681444 0.03876774 0.08435813 0.2437145 0.20313930 1.00000000
```
Now, I've tried running a number of "tests", for example:
```
summary(lm(share2 ~ BI + ...))
```
However, none of them provide any reasonable result (mostly negative adjusted R^2).
I'm wondering: if you had data where it looked like there were no relationships (linear, at least), what would your next steps be?
P.S.: I did try a number of model formulas that contained interaction effects and received much better results ($R^2$ and adjusted $R^2$ > 80% and significant F-tests), but not all of the interaction effects were significant.
| Small sample linear regression: Where to start | CC BY-SA 2.5 | null | 2010-11-11T04:33:28.400 | 2013-10-07T10:11:18.717 | 2010-11-11T06:06:59.780 | 159 | 776 | [
"regression",
"self-study",
"methodology"
]
|
4423 | 2 | null | 4347 | 8 | null | Here is the best book I would recommend on the subject:
[http://www.amazon.com/Fuzzy-Sets-Logic-Theory-Applications/dp/0131011715](http://rads.stackoverflow.com/amzn/click/0131011715)
Here is an easy to read book:
[http://www.amazon.com/Fuzzy-Logic-Revolutionary-Computer-Technology/dp/0671875353](http://rads.stackoverflow.com/amzn/click/0671875353)
Besides here are a list of links that might help:
[http://www.seattlerobotics.org/encoder/mar98/fuz/flindex.html](http://www.seattlerobotics.org/encoder/mar98/fuz/flindex.html)
http://www.fuzzy-logic.com/
[http://videolectures.net/acai05_berthold_fl/](http://videolectures.net/acai05_berthold_fl/)
| null | CC BY-SA 2.5 | null | 2010-11-11T04:45:20.507 | 2010-11-11T04:45:20.507 | null | null | 1808 | null |
4424 | 2 | null | 898 | 1 | null | [Bayesian networks](http://en.wikipedia.org/wiki/Bayesian_network) are perfect for online estimation, and offer a great diversity of models.
| null | CC BY-SA 2.5 | null | 2010-11-11T05:30:32.227 | 2010-11-11T05:30:32.227 | null | null | 1709 | null |
4425 | 2 | null | 4422 | 2 | null | If you're frustrated with too many correlations, and since you already have your covariance matrix (well almost) you could do a principal components analysis. You'll end up with fewer dimensions, which is probably fine considering your data set size, and what you end up with won't be intercorrelated anymore.
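A minimal sketch (assuming the eight intercorrelated metrics sit in a hypothetical data frame `dat`):
```
pc <- prcomp(dat, scale. = TRUE)   # PCA on the standardized variables
summary(pc)                        # proportion of variance explained per component
scores <- pc$x[, 1:2]              # first few uncorrelated scores, usable as predictors
```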
| null | CC BY-SA 2.5 | null | 2010-11-11T05:35:52.333 | 2010-11-11T05:35:52.333 | null | null | 1951 | null |
4426 | 2 | null | 4383 | 4 | null | I'd add one point.
Be aware of the distinction between group (e.g., comparing group means over time) and individual level measurement (e.g., correlating scores on the scale with other scales at the individual-level).
Reliability applies differently to the two levels.
Perhaps the following simplification helps:
- Reliability of group-level measurement is heavily influenced by the number of participants you have and the degree to which there is true variability at the group-level.
- Reliability of individual-level measurement is heavily influenced by the number of items you have and the degree to which individuals truly vary.
| null | CC BY-SA 2.5 | null | 2010-11-11T05:57:00.530 | 2010-11-11T05:57:00.530 | null | null | 183 | null |
4427 | 2 | null | 1980 | 9 | null | [Koenker and Zeileis](http://www.econ.uiuc.edu/~roger/research/repro/) provide a webpage with a relatively complete example.
They share:
- Rnw (Sweave code)
- R analysis code
- Final PDF
- Discussion of version control issues
| null | CC BY-SA 2.5 | null | 2010-11-11T06:22:29.180 | 2010-11-11T06:22:29.180 | null | null | 183 | null |
4428 | 2 | null | 4417 | 26 | null | The issue is mainly that Bayesian analysis involves integrals, often multidimensional ones in realistic problems, and it's these integrals that are typically intractable analytically (except in a few special cases requiring the use of conjugate priors).
By contrast, much of non-Bayesian statistics is based on [maximum likelihood](http://en.wikipedia.org/wiki/Maximum_likelihood) -- finding the maximum of a (usually multidimensional) function, which involves knowledge of its [derivatives](http://en.wikipedia.org/wiki/Derivative), i.e. differentiation. Even so numerical methods are used in many more complex problems, but it's possible to get further more often without them, and the numerical methods can be simpler (even if less simple ones may perform better in practice).
So I'd say it comes down to the fact that differentiation is more tractable than integration.
| null | CC BY-SA 2.5 | null | 2010-11-11T06:53:21.377 | 2010-11-11T06:53:21.377 | null | null | 449 | null |
4429 | 1 | 4430 | null | 30 | 3347 | It is helpful to study the data analysis code of experts.
I've recently been perusing [github](https://github.com/) and there are a number of people sharing data analysis code there. This includes a few R Packages (which of course are available directly from CRAN), but also several examples of reproducible research, particularly using R ([see this R list on github](https://github.com/languages/R)).
- Who are good people to follow on github to learn about best practice in data analysis?
- Optionally, what kind of code do they share and why is this useful?
| Who to follow on github to learn about best practice in data analysis? | CC BY-SA 3.0 | null | 2010-11-11T06:59:46.883 | 2022-06-27T12:09:21.960 | 2018-09-21T22:34:07.653 | 11887 | 183 | [
"r",
"reproducible-research"
]
|
4430 | 2 | null | 4429 | 19 | null | [Hadley Wickham](https://github.com/hadley). He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but admittedly blind) trust in his best practices, particularly with respect to his own packages.
Plus, you get an early heads up on other projects he's working on!
| null | CC BY-SA 2.5 | null | 2010-11-11T07:35:57.207 | 2010-11-11T07:35:57.207 | null | null | 1106 | null |
4431 | 1 | 4486 | null | 1 | 821 | Way back when, I used to work in finance, and I remember helping a coworker use some kind of block bootstrap. (I believe the application was: we had weekly data on some financial indicator X, along with weekly data on some stock, and we wanted to measure how well X could be used to predict the stock's movements. And I believe we needed to bring in the bootstrap, because we only had a couple months of weekly data, so we didn't really have many datapoints. I might be misremembering all this, though.)
In any case, I totally forget now how the block bootstrap worked, and I want to remember/review/learn more, so can anyone suggest a good tutorial on it? I tried googling, but all I found were some random research papers. I also tried looking in my copy of Efron & Tibshirani's book "An Introduction to the Bootstrap", but didn't find anything (unless it's under a name other than "block bootstrap").
| Resources to learn about block bootstrap in time series analysis | CC BY-SA 2.5 | null | 2010-11-11T07:50:43.590 | 2013-04-19T00:22:25.903 | null | null | 1106 | [
"time-series",
"regression",
"bootstrap"
]
|
4432 | 2 | null | 4429 | 8 | null | [Diego Valle Jones](http://www.diegovalle.net/). His [Github](https://github.com/diegovalle), especially his [analysis of homicides in Mexico](https://github.com/diegovalle/Homicide-MX-Drug-War), is really interesting.
| null | CC BY-SA 2.5 | null | 2010-11-11T07:59:55.433 | 2010-11-11T07:59:55.433 | null | null | 22 | null |
4433 | 2 | null | 4431 | 4 | null | Try the [Handbook of Computational Statistics, Part III, section 2.4](http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/ebooks/html/csa/node132.html).
| null | CC BY-SA 3.0 | null | 2010-11-11T08:21:47.770 | 2013-04-19T00:22:25.903 | 2013-04-19T00:22:25.903 | 159 | 159 | null |
4434 | 2 | null | 4431 | 1 | null | The textbook by [Shumway and Stoffer](http://rads.stackoverflow.com/amzn/click/0387293175) has a short section on bootstrapping time series (state-space) models. Also you may look to:
Pfeffermann, D. and Tiller, R. (2005) Bootstrap approximation to prediction MSE for state-space models with estimated parameters, Journal of Time Series Analysis, 26, 893-916,
and references therein.
| null | CC BY-SA 2.5 | null | 2010-11-11T08:24:39.063 | 2010-11-11T08:24:39.063 | null | null | 892 | null |
4435 | 2 | null | 4429 | 10 | null | I also follow [John Myles White](https://stats.stackexchange.com/users/303/john-myles-white)'s GitHub [repository](https://github.com/johnmyleswhite). There are several data-oriented projects, but also interesting stuff for R developers:
- ProjectTemplate, a template system for building R project;
- log4r, a logging system.
| null | CC BY-SA 2.5 | null | 2010-11-11T08:57:25.920 | 2010-11-12T10:35:00.423 | 2017-04-13T12:44:41.980 | -1 | 930 | null |
4436 | 2 | null | 4422 | 6 | null | I'd probably take a look at a ridge regression or, better, the lasso. These techniques are often used when there is multicollinearity. There are several options for doing this in R: See the Regularized and Shrinkage Methods section of the [Machine Learning & Statistical Learning](http://cran.r-project.org/web/views/MachineLearning.html) Task View on CRAN.
You don't have enough data to start thinking about some of the techniques listed in other sections of that Task View.
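For instance, a minimal sketch of a cross-validated lasso/ridge fit with the glmnet package (my own example; hypothetical predictor matrix `X` and response `y`):
```
library(glmnet)
cvfit <- cv.glmnet(X, y, alpha = 1)      # alpha = 1 for the lasso, alpha = 0 for ridge
coef(cvfit, s = "lambda.min")            # coefficients at the CV-chosen penalty
```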
| null | CC BY-SA 3.0 | null | 2010-11-11T09:20:18.010 | 2013-10-07T10:11:18.717 | 2013-10-07T10:11:18.717 | 17230 | 1390 | null |
4437 | 1 | 4438 | null | -1 | 6856 | So we have `a0`:
```
>a0[1,1:2]
l11 l12
-0.921 0.389593
```
Then;
```
> is.numeric(a0[1,1:2])
[1] FALSE
```
Ok, the text file containing them is a bit of a mess. Then:
```
> as.numeric(a0[1,1:2])
[1] 131 3
```
I know there was a trick to solve that. I just can't remember what it was...
EDIT: sample file:
```
-0.921 0.389593 0.99998742210431 -0.00501553917135373 0.999984216926007 -0.00561835375178836 1 2.36 10 2 0.05 1 1
0.923842 0.382812 0.999998286086073 -0.00185143860707771 0.999999995689246 9.28520798418186e-05 0 2.03 10 2 0.05 2 1
-0.82904 0.559261 0.905909593351804 -0.423471142668741 0.899659482046882 -0.436592277031025 1 1.51 10 2 0.05 3 1
-0.796621 0.604487 0.852012998984401 -0.523520629547306 0.882445298428051 -0.470415024507325 0 2.53 10 2 0.05 4 1
0.836046 0.54875 0.9211906384523 0.389111561930306 0.926385125966013 0.376577479928014 1 2.66 10 2 0.05 5 1
-0.873104 -0.487579 0.942147732871282 0.335197925777448 0.957896161639222 0.287114861191206 0 2.5 10 2 0.05 6 1
-0.728606 0.684948 0.838364634070403 -0.545109842453794 0.770224459340949 -0.637772908042465 1 3.97 10 2 0.05 7 1
0.759379 -0.650765 0.842087096982091 -0.539341562552223 0.765143566165036 -0.643859707666391 0 1.13 10 2 0.05 8 1
-0.497911 -0.867269 0.480229829449393 0.877142697003747 0.225403470821452 0.974265505569012 1 0.62 10 2 0.05 9 1
0.465581 0.885042 0.173457702239973 0.98484131997679 0.159038890626454 0.987272318698497 0 0.79 10 2 0.05 10 1
-0.772559 -0.634978 0.866527018491628 0.499130169619119 0.859460127819 0.511202786269155 1 19.53 10 2 0.05 11 1
0.81446 0.580278 0.943553667636478 0.331219679804432 0.895967382559323 0.444119859260758 0 1.73 10 2 0.05 12 1
-0.792377 0.610095 0.865518659616982 -0.500876681284748 0.877697131009408 -0.479215761654241 1 1.24 10 2 0.05 13 1
0.844213 -0.536081 0.899692596172177 -0.436524034152723 0.906010964105467 -0.423254217841573 0 2.02 10 2 0.05 14 1
0.421542 0.906835 0.103422987236111 0.99463746446188 0.102255903720432 0.994758126458044 1 0.47 10 2 0.05 15 1
0.409305 0.912408 0.0729480114866381 0.997335744681873 0.0838704194992194 0.996476669437386 0 0.45 10 2 0.05 16 1
-0.664573 -0.747275 0.58764731149828 0.809117196263214 0.603464234608522 0.797390066120936 1 0.85 10 2 0.05 17 1
0.599777 0.800191 0.524086235602366 0.851665202795172 0.567148160033905 0.823615786984536 0 1.32 10 2 0.05 18 1
-0.397025 -0.917846 0.030139624142486 0.999545698333273 0.0311215453692834 0.999515607388813 1 0.46 10 2 0.05 19 1
-0.393222 -0.919468 0.0278258557873373 0.999612785907474 0.0265821201726647 0.999646633009448 0 0.42 10 2 0.05 20 1
0.772559 0.634978 0.924006045897534 0.38237785911949 0.908075241643739 0.418807062397072 1 8.12 20 2 0.05 1 2
-0.807518 -0.589907 0.947419893700908 0.319993039017663 0.96499482496042 0.262268922672146 0 5.8 20 2 0.05 2 2
V1 V2 V3 V4 V5 V6 NA NA NA NA NA NA NA
-0.526762 0.850092 -0.311928648717808 0.95010552998553 -0.316412744506619 0.94862161851488 0 3.25 20 2 0.05 4 2
-0.5161 -0.856543 0.309033921455991 0.951051016186583 0.248027721702134 0.96875293510123 1 1.82 20 2 0.05 5 2
0.491867 0.870721 0.268019541588267 0.96341347578639 0.167808915882198 0.98581954116889 0 2.69 20 2 0.05 6 2
0.58991 0.807579 0.51533762936385 0.856987238972464 0.421665311535527 0.906751545379244 1 1.91 20 2 0.05 7 2
-0.549902 -0.835295 0.442022781946357 0.897003823983155 0.390584116545383 0.920567242466547 0 2.06 20 2 0.05 8 2
0.800218 -0.599709 0.852476180705526 -0.522766067500292 0.833708092721497 -0.552205411174757 1 47.26 20 2 0.05 9 2
0.837387 -0.546652 0.910736035285173 -0.41298895146607 0.863018073213623 -0.505173044912974 0 5.36 20 2 0.05 10 2
0.46345 0.886149 0.179828914086857 0.98369790162343 0.176294786076906 0.984337415931193 1 1.84 20 2 0.05 11 2
V1 V2 V3 V4 V5 V6 NA NA NA NA NA NA NA
0.759379 0.650765 0.817799312011448 0.575503505874294 0.824911507994246 0.565261889727814 1 7.7 20 2 0.05 13 2
0.793956 0.608013 0.822124323319108 0.56930799836916 0.812590005961236 0.582835724893317 0 3.8 20 2 0.05 14 2
0.706746 0.707516 0.70298536741427 0.711204311855196 0.695840302062382 0.718196542755348 1 5.1 20 2 0.05 15 2
0.663932 0.747827 0.717210127099939 0.696856967809958 0.711000892677057 0.70319110532801 0 4.17 20 2 0.05 16 2
0.813568 -0.581494 0.904931790793546 -0.425556640191628 0.912655929947496 -0.408728704071137 1 4.24 20 2 0.05 17 2
-0.836046 0.54875 0.924852673074549 -0.380325561995065 0.92915112499605 -0.369700131077304 0 3.78 20 2 0.05 18 2
V1 V2 V3 V4 V5 V6 NA NA NA NA NA NA NA
0.586824 -0.809738 -0.610122851736008 0.792306825535108 -0.588615757355873 0.80841294533943 0 3.07 20 2 0.05 20 2
```
| R and as.numeric() | CC BY-SA 2.5 | null | 2010-11-11T09:26:39.780 | 2010-11-11T10:19:23.073 | 2010-11-11T09:45:06.437 | 603 | 603 | [
"r"
]
|
4438 | 2 | null | 4437 | 4 | null | This is because you have read the numbers in as factors; if you use `read.table`, try `header=T` or restructure the data before reading. A sample of the file would be helpful to resolve it.
A workaround would be to first convert the factors to strings using `as.character` and then back to numbers with `as.numeric`.
Edit: Code working for provided example:
```
z <- readLines('<filename>')                  # read the raw lines
read.table(textConnection(z[-grep('V', z)])) # drop the stray header rows, then parse
```
| null | CC BY-SA 2.5 | null | 2010-11-11T09:32:00.040 | 2010-11-11T10:11:26.683 | 2010-11-11T10:11:26.683 | null | null | null |
4439 | 2 | null | 4437 | 3 | null | Would that help?
```
> a <- as.data.frame(matrix(scan("1.txt", what="character",
na.strings=c("NA",paste("V",1:6,sep=""))),
nc=13, byrow=T))
> class(a[,1])
[1] "factor"
> for (i in 1:ncol(a)) a[,i] <- as.numeric(as.character(a[,i]))
> class(a[,1])
[1] "numeric"
> summary(a) # should work here
```
The way you import the data doesn't matter so much; I think the critical part is to convert values to character and then to numeric (this allows you to convert levels of a factor to their numerical counterparts).
| null | CC BY-SA 2.5 | null | 2010-11-11T10:11:18.593 | 2010-11-11T10:19:23.073 | 2010-11-11T10:19:23.073 | 930 | 930 | null |
4440 | 2 | null | 4422 | 5 | null | I find @ucfagls's idea most appropriate here, since you have very few observations and a lot of variables. Ridge regression should do its job for prediction purpose.
Another way to analyse the data would be to rely on [PLS regression](http://en.wikipedia.org/wiki/Partial_least_squares_regression) (in this case, PLS1), which shares some ideas with regression on PCA scores but seems more interesting in your case. As multicollinearity might be an issue there, you can look at sparse solutions (see, e.g., the [spls](http://cran.r-project.org/web/packages/spls/index.html) or the [mixOmics](http://cran.r-project.org/web/packages/mixOmics/index.html) R packages).
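For a plain (non-sparse) PLS1 fit, a minimal sketch with the pls package (my own illustration; hypothetical data frame `dat` with response `y` and the collinear predictors):
```
library(pls)
fit <- plsr(y ~ ., data = dat, validation = "LOO")   # leave-one-out CV suits n = 21
summary(fit)                                         # CV error by number of components
```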
| null | CC BY-SA 2.5 | null | 2010-11-11T10:40:28.693 | 2010-11-11T10:40:28.693 | null | null | 930 | null |
4441 | 1 | 4444 | null | 5 | 2700 | I am collecting longitudinal data over 4 time waves. Although the survey is administered to the same population, different individuals may decide to complete it at each time point. As a result there are a number of individuals that only completed it once, others that completed it twice, some that completed it three times and others who participated in all four waves. For example, at the moment there are about 2000 participants in time 1 and 1900 for time 2 but only 1200 that participated in both time 1 and time 2 (at the moment I am still collecting data for time 3 so I don't know yet what the final matched sample will be).
The data are from different organizations so I would like to model this with using mixed effects with the lmer in R. e.g.
`
lmer(outcome~"some repeated variables"+"organization level variables"+timewave+(timewave|subject)+(1|organizations))
`
My questions are
- Do I need to remove individuals who completed it only once or twice to use a random slope for time?
- Is it meaningful to also try to fit a quadratic effect for time given that there are only 4 waves? (And would I need to remove subjects that have not participated in all four?)
Many thanks,
George
| How many data points do we need for mixed effects longitudinal data? | CC BY-SA 2.5 | null | 2010-11-11T10:41:58.623 | 2010-11-11T13:17:09.457 | null | null | 1871 | [
"r",
"mixed-model",
"panel-data"
]
|
4442 | 2 | null | 4403 | -1 | null | Sounds like a classic data mining task:
Y_i = whether person i gets the question right,
X_i (vector) = set of past performance of person i.
Using a set of past data on n people (i=1,...,n), you can fit a predictive model of the sort Y = function of X.
A variety of models can be used to predict the performance of a new person on this question. If you are short of data, then logistic regression and discriminant analysis would be good choices. If you have data on lots of people, then you could try more data-driven methods such as classification trees, k-nearest neighbors, or neural nets.
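For instance, with the training data in a data frame (variable names below are hypothetical), a logistic regression could be fit and used to score a new person along these lines:

```
# 'correct' is 0/1 for this question; x1 and x2 summarize past performance
fit <- glm(correct ~ x1 + x2, family = binomial, data = train)
predict(fit, newdata = new_person, type = "response")  # estimated probability of answering correctly
```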
| null | CC BY-SA 2.5 | null | 2010-11-11T12:58:08.970 | 2010-11-11T12:58:08.970 | null | null | 1945 | null |
4443 | 2 | null | 4337 | 0 | null | Aside from my practical statistical suggestion, I wanted to raise a slightly different issue: I realize that the cinema's goal is to maximize revenues, and of course the analysis (and strategy) can be geared towards that goal. However, I would like to suggest a broader, holistic view that companies as well as analysts should consider: the overall benefit. In this case, we can consider the value of the gaming addition to cinema goers. Are they happier or more satisfied with the overall experience? (this can be evaluated, e.g., via a quick questionnaire). Or, if the gaming is educational, for instance, then perhaps there is added benefit to those playing? I recall that in several cinemas in the U.S. there are word games on the screen before a movie starts. These can be perceived as fun and educational and could therefore be value added. In fact, if movie goers perceive the gaming service as value added, then they will likely choose this cinema over others and perhaps even visit it more frequently.
What I am trying to say is that it is useful to define "success" in a broad manner and to think big. In the end, success will depend also on the wellness of the "customers" and the impact of "treatments" on society, culture, the environment, etc.
Sorry if this is too philosophical, but I have had too many MBA students maximizing short-term financial gains and too few thinking of issues that are not monetary. Yet, data mining and statistics can be used for broader causes.
| null | CC BY-SA 2.5 | null | 2010-11-11T13:12:53.023 | 2010-11-11T13:12:53.023 | null | null | 1945 | null |
4444 | 2 | null | 4441 | 9 | null |
- No, you don't need to remove individuals with data for only one (or only a limited number) of timepoints. You're right to think that individuals with only one timepoint contribute nothing to estimation of the slope, but they contribute to estimation of the intercept, and you want to estimate both jointly. The maths and the algorithm deal with this so you don't need to worry about it, and you're more likely to make errors than the programmers of lmer if you try to second-guess things by dropping observations you don't think will contribute.
- Yes, you could fit a quadratic effect for time with 4 timepoints. In fact, if you designed an experiment to look for a quadratic effect, you might well choose to have 4 timepoints. (3 would in principle maximise your power, but 4 allows 1 d.f. to test for fit of the quadratic curve). Clearly you need at least some people to participate in at least 3 waves. But as above, don't remove subjects that didn't.
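Using the variable names from the question, the quadratic-time model might look something like this (just a sketch, assuming `timewave` is numeric and `d` is your long-format data frame):

```
library(lme4)
m <- lmer(outcome ~ poly(timewave, 2) + (timewave | subject) + (1 | organizations),
          data = d)
```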
| null | CC BY-SA 2.5 | null | 2010-11-11T13:17:09.457 | 2010-11-11T13:17:09.457 | null | null | 449 | null |
4445 | 1 | 4449 | null | 14 | 1360 | A database of (population, area, shape) can be used to map population density by assigning a constant value of population/area to each shape (which is a polygon such as a Census block, tract, county, state, whatever). Populations are usually not uniformly distributed within their polygons, however. [Dasymetric mapping](https://en.wikipedia.org/wiki/Dasymetric_map) is the process of refining these density estimates by means of auxiliary data. It is an important problem in the social sciences as [this recent review](https://link.springer.com/article/10.1007/s11113-007-9046-5?from=SL) indicates.
Suppose, then, that we have available an auxiliary map of land cover (or any other discrete factor). In the simplest case we can use obviously uninhabitable areas like waterbodies to delineate where the population isn't and, accordingly, assign all the population to the remaining areas. More generally, each Census unit $j$ is carved into $k$ portions having surface areas $x_{ji}$, $i = 1, 2, \ldots, k$. Our dataset is thereby augmented to a list of tuples
$$(y_{j}, x_{j1}, x_{j2}, \ldots, x_{jk})$$
where $y_{j}$ is the population (assumed measured without error) in unit $j$ and--although this is not strictly the case--we may assume every $x_{ji}$ is also exactly measured. In these terms, the objective is to partition each $y_{j}$ into a sum
$$ y_j = z_{j1} + z_{j2} + \cdots + z_{jk} $$
where each $z_{ji} \ge 0$ and $z_{ji}$ estimates the population within unit $j$ residing in land cover class $i$. The estimates need to be unbiased. This partition refines the population density map by assigning the density $z_{ji}/x_{ji}$ to the intersection of the $j^{\text{th}}$ Census polygon and the $i^{\text{th}}$ land cover class.
This problem differs from standard regression settings in salient ways:
- The partitioning of each $y_{j}$ must be exact.
- The components of every partition must be non-negative.
- There is (by assumption) no error in any of the data: all population counts $y_{j}$ and all areas $x_{ji}$ are correct.
There are many approaches to a solution, such as the "[intelligent dasymetric mapping](https://web.archive.org/web/20110506133809/http://astro.temple.edu/%7Ejmennis/pubs/mennis_cagis06.pdf)" method, but all those I have read about have ad hoc elements and an obvious potential for bias. I am seeking answers that suggest creative, computationally tractable statistical methods. The immediate application concerns a collection of c. $10^{5}$ - $10^{6}$ Census units averaging 40 people apiece (although a sizable fraction have 0 people) and about a dozen land cover classes.
| Model for population density estimation | CC BY-SA 4.0 | null | 2010-11-11T14:38:22.373 | 2022-11-23T09:46:24.493 | 2022-11-23T09:38:00.987 | 362671 | 919 | [
"modeling",
"unbiased-estimator",
"spatial"
]
|
4446 | 1 | 4448 | null | 3 | 358 | The standard sum of squares as I know it is:
$$
\sum(X-m)^2
$$
where $m$ is the mean. I ran into a different one which can be written two ways:
$$
\sum(X^2) - \frac{(\sum X)^2}{n} = \sum(X^2) - m\sum X
$$
I believe the latter is called the "correction term for the mean" (e.g. [here](http://www.itl.nist.gov/div898/handbook/prc/section4/prc431.htm)). My algebra seems to be inadequate to show these are equivalent, so I was looking for a derivation.
| Sum of squares two ways, how are they connected? | CC BY-SA 3.0 | null | 2010-11-11T14:59:10.677 | 2017-11-12T17:22:55.250 | 2017-11-12T17:22:55.250 | 11887 | 1959 | [
"mathematical-statistics",
"sums-of-squares"
]
|
4448 | 2 | null | 4446 | 5 | null | Expanding the square we get:
$\sum_i(X_i-m)^2 = \sum_i(X_i^2 + m^2 - 2 X_i m)$
Thus,
$\sum_i(X_i-m)^2 = \sum_i{X_i^2} + \sum_i{m^2} - 2 \sum_i{X_i m}$
Since $m$ is a constant, we have:
$\sum_i(X_i-m)^2 = \sum_i{X_i^2} + n m^2 - 2 m \sum_i{X_i}$
But,
$\sum_i{X_i} = n m$.
Thus,
$\sum_i(X_i-m)^2 = \sum_i{X_i^2} + n m^2 - 2 n m^2$
Which on simplifying gets us:
$\sum_i(X_i-m)^2 = \sum_i{X_i^2} - n m^2$
Thus, we can rewrite the RHS of the above in two ways:
$\sum_i{X_i^2} - m (n m) = \sum_i{X_i^2} - m \sum_i{X_i}$
(as $n m = \sum_i{X_i}$)
and
$\sum_i{X_i^2} - n (m)^2 = \sum_i{X_i^2} - \frac{(\sum_i{X_i})^2}{n}$
(as $m = \frac{\sum_i{X_i}}{n}$)
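A quick numerical check of the identity in R:

```
X <- c(2, 5, 7, 11)          # arbitrary numbers
m <- mean(X); n <- length(X)
sum((X - m)^2)               # 42.75
sum(X^2) - sum(X)^2 / n      # 42.75 again
sum(X^2) - m * sum(X)        # and again
```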
| null | CC BY-SA 2.5 | null | 2010-11-11T15:15:36.757 | 2010-11-11T15:15:36.757 | null | null | null | null |
4449 | 2 | null | 4445 | 5 | null | You might want to check the [work](http://www.informatik.uni-trier.de/%7Eley/db/indices/a-tree/l/Langford:Mitchel.html) of Mitchel Langford on dasymetric mapping.
He built rasters representing the population distribution of Wales, and some of his methodological approaches might be useful here.
Update: You might also have a look at the work of [Jeremy Mennis](https://web.archive.org/web/20120121011421/http://www.temple.edu/gus/mennis/index.htm) (especially [these](http://www.blackwell-compass.com/subject/geography/article_view?article_id=geco_articles_bpl220)$^\dagger$ [two](http://onlinelibrary.wiley.com/doi/10.1111/0033-0124.10042/abstract) articles).
---
$\dagger$ This page doesn't exist anymore.
| null | CC BY-SA 4.0 | null | 2010-11-11T16:13:57.317 | 2022-11-23T09:44:38.793 | 2022-11-23T09:44:38.793 | 362671 | 22 | null |
4450 | 2 | null | 4445 | 2 | null | Interesting question. Here is a tentative stab at approaching this from a statistical angle. Suppose that we come up with a way to assign a population count to each area $x_{ji}$. Denote this relationship as below:
$$z_{ji} = f(x_{ji},\beta)$$
Clearly, whatever functional form we impose on $f(.)$ will be at best an approximation to the real relationship and thus the need to incorporate error into the above equation. Thus, the above becomes:
$$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$$
where,
$$\epsilon_{ji} \sim N(0,\sigma^2)$$
The distributional error assumption on the error term is for illustrative purposes. If necessary we can change it as appropriate.
However, we need an exact decomposition of $y_{j}$. Thus, we need to impose a constraint on the error terms and the function $f(.)$ as below:
$$\sum_i{\epsilon_{ji}} = 0$$
$$\sum_i{f(x_{ji},\beta)} = y_j$$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$$z_j \sim N(f_j,\sigma^2 I) I({f_j}' e = y_j) I((z_j-f_j)' e = 0)$$
where,
$e$ is a vector of ones of appropriate dimension.
The first indicator constraint captures the idea that the sum of the deterministic terms should sum to $y_j$ and the second one captures the idea that the error residuals should sum to 0.
Model selection is trickier as we are decomposing the observed $y_j$ exactly. Perhaps, a way to approach model selection is to choose the model that yields the lowest error variance i.e., the one that yields the lowest estimate of $\sigma^2$.
Edit 1
Thinking about it some more, the above formulation can be simplified, as it has more constraints than needed.
$$z_{ji} = f(x_{ji},\beta) + \epsilon_{ji}$$
where,
$$\epsilon_{ji} \sim N(0,\sigma^2)$$
Denote the stacked vector of ${z_{ji}}$ by $z_j$ and the stacked deterministic terms of ${f(x_{ji},\beta)}$ by $f_j$. Thus, we have:
$$z_j \sim N(f_j,\sigma^2 I) I({z_j}' e = y_j)$$
where,
$e$ is a vector of ones of appropriate dimension.
The constraint on $z_j$ ensures an exact decomposition.
| null | CC BY-SA 4.0 | null | 2010-11-11T16:42:34.450 | 2022-11-23T09:46:24.493 | 2022-11-23T09:46:24.493 | 362671 | null | null |
4451 | 1 | null | null | 18 | 13551 | I am looking for social network datasets (twitter, friendfeed, facebook, lastfm, etc.) for classification tasks, preferably in arff format.
My searches via UCI and Google haven't been successful so far... any suggestions?
| Social network datasets | CC BY-SA 3.0 | null | 2010-11-11T17:50:04.680 | 2015-09-29T07:10:33.623 | 2014-01-27T17:39:33.047 | 7290 | null | [
"classification",
"dataset"
]
|
4452 | 2 | null | 4422 | 6 | null | It seems to me that the only thing worth doing here is testing a very focussed hypothesis, if you have one. But it seems like you don't.
With so few cases and so many variables, anything else would (in my opinion) be a fishing expedition. That could be a bit useful, perhaps, to generate a hypothesis to test with new data. But any results from an unfocussed multivariate analysis of these data are likely to be false-positive, coincidental findings that probably won't hold up with new data.
| null | CC BY-SA 2.5 | null | 2010-11-11T18:07:36.833 | 2010-11-11T18:07:36.833 | null | null | 25 | null |
4453 | 1 | 4526 | null | 4 | 892 | Suppose I have a set $\mathcal{S}$ of $N$ distinct items. Now consider the set $\mathcal{P}$ of all possible pairs that I can draw from $\mathcal{S}$. Naturally, $|\mathcal{P}| = \binom{N}{2}$. Now when I draw $k$ items (pairs) from $\mathcal{P}$ with a uniform distribution, what is the expected number of distinct items from $\mathcal{S}$ in those $k$ pairs?
| How to calculate the expected number of distinct items when drawing pairs? | CC BY-SA 2.5 | null | 2010-11-11T19:09:57.167 | 2010-11-30T18:21:49.360 | null | null | 977 | [
"binomial-distribution",
"expected-value"
]
|
4454 | 1 | 4678 | null | 3 | 407 | Are there any decent tools for writing/designing questionnaires before handing them to programmers? Currently Microsoft Word is being used, and tracking changes and keeping it standardized has become a headache.
Update: I think I'm being a little misunderstood here. Here's a scenario: A client speaks to a statistician/expert in survey design so they can design a survey to receive feedback from their customers. Once that survey is designed (not in the fashion sense) they might give it to an artist and programmer to actually create the survey (for the web or whatever). I am wondering what some good ways to communicate the logical design of the survey to the programmer might be.
| What is a good tool and format for representing and communicating the design content of a survey? | CC BY-SA 3.0 | null | 2010-11-11T19:24:19.787 | 2012-05-01T06:58:11.560 | 2012-05-01T06:58:11.560 | 9007 | 305 | [
"survey",
"software",
"communication"
]
|
4455 | 2 | null | 4454 | 1 | null | A combination of Perl + CGI is generally interesting for small surveys/questionnaires (because I hate PHP + MySQL). A gentle introduction can be found in [How to Conduct Behavioral Research over the Internet: A Beginner's Guide to Html and Cgi/Perl](http://www.web-research-design.net/).
Now, I think that Ruby and Rails should provide very handy tools for that particular purpose. I can think of [surveyor](https://github.com/breakpointer/surveyor), for example. I'm quite sure there are similar tools in Python.
As for an all-in-one system (no need to program anything, multiple and linked form available, automatic mailing, etc.), there's [Lime Survey](http://www.limesurvey.org/).
For off-line questionnaires, I would prefer $\LaTeX$ or Docbook.
| null | CC BY-SA 2.5 | null | 2010-11-11T20:18:09.713 | 2010-11-11T20:38:22.387 | 2010-11-11T20:38:22.387 | 930 | 930 | null |
4456 | 2 | null | 4334 | 5 | null | You can build this model with AD Model Builder's random effects package.
This is free software available at [http://admb-project.org](http://admb-project.org). What you will
get is full information maximum likelihood solutions with the ability to try
MCMC methods afterwards if you wish. The idea is to regard this as a random
effects problem and integrate over the random effects via the Laplace approximation. The trick is to parameterize it properly so that the Hessian
with respect to the random effects is sparse. I built the model and a simulator. It seems to work well. To give you an idea I have included the
ADMB source for the model.
```
DATA_SECTION
init_int nobs
init_vector Y(1,nobs)
vector resids(2,nobs)
PARAMETER_SECTION
init_number Phi1
init_number Phi2
init_bounded_number log_sigma(-5.0,5.0,2);
init_bounded_number log_p1(-5.0,5.0,2);
init_bounded_number log_p2(-5.0,5.0,2);
objective_function_value f
init_number A2
init_number B2
random_effects_vector A(3,nobs)
random_effects_vector B(3,nobs)
vector nu_1(3,nobs);
vector nu_2(3,nobs);
vector pred_Y(2,nobs);
sdreport_number sigma
sdreport_number p1
sdreport_number p2
PROCEDURE_SECTION
f0(log_sigma,log_p1,log_p2,A2,B2);
f2(3,log_sigma,log_p1,log_p2,A(3),B(3),A2,B2,Phi1,Phi2);
for (int i=4;i<=nobs;i++)
{
f2(i,log_sigma,log_p1,log_p2,A(i),B(i),A(i-1),B(i-1),Phi1,Phi2);
}
if (sd_phase())
{
sigma=exp(log_sigma);
p1=exp(log_p1);
p2=exp(log_p2);
}
SEPARABLE_FUNCTION void f0( const prevariable& log_sigma, const prevariable& log_p1, const prevariable& log_p2, const prevariable& A2, const prevariable& B2)
dvariable sigma=exp(log_sigma);
dvariable p1=exp(log_p1);
dvariable p2=exp(log_p2);
f+=square(A2)+square(B2);
  dvariable r=Y(2)-A2-B2*Y(1);  // residual for the i=2 case (assumed; mirrors the pattern in f2 below)
  resids(2)=value(r);
f+=log_sigma+0.5*square(r/sigma);
SEPARABLE_FUNCTION void f2(int i, const prevariable& log_sigma, const prevariable& log_p1, const prevariable& log_p2, const prevariable& Ai, const prevariable& Bi, const prevariable& Ai1, const prevariable& Bi1, const prevariable& Phi1,const prevariable& Phi2)
dvariable sigma=exp(log_sigma);
dvariable p1=exp(log_p1);
dvariable p2=exp(log_p2);
dvariable r=Y(i)-Ai-Bi*Y(i-1);
resids(i)=value(r);
f+=log_sigma+0.5*square(r/sigma);
dvariable nu_1i=(Ai-Phi1*Ai1);
dvariable nu_2i=(Bi-Phi2*Bi1);
f+=log_p1+0.5*square(nu_1i/p1);
f+=log_p2+0.5*square(nu_2i/p2);
```
There is a "gotcha" with this approach.
Note that one can set A(i) such that
```
Y(i)-A(i)-B(i)*Y(i-1)=0
```
Then if you let sigma->0 the log-likelihood -> infinity.
So the "real" answer is a local maximum. You can stabilize the
estimation by putting a lower bound on sigma or keeping sigma fixed
at a reasonable value for a while.
| null | CC BY-SA 2.5 | null | 2010-11-11T20:45:50.343 | 2010-11-12T18:10:48.460 | 2010-11-12T18:10:48.460 | 1585 | 1585 | null |
4457 | 2 | null | 4451 | 3 | null | A large index of facebook pages was created and is available as a torrent (It is ~2.8Gb) [http://btjunkie.org/torrent/Facebook-directory-personal-details-for-100-million-users/3979e54c73099d291605e7579b90838c2cd86a8e9575](http://btjunkie.org/torrent/Facebook-directory-personal-details-for-100-million-users/3979e54c73099d291605e7579b90838c2cd86a8e9575)
Twitter datasets are tagged on Infochimps: [http://infochimps.com/tags/twitter](http://infochimps.com/tags/twitter)
A lastfm dataset is available at [http://mtg.upf.edu/node/1671](http://mtg.upf.edu/node/1671)
| null | CC BY-SA 2.5 | null | 2010-11-11T20:53:17.367 | 2010-11-13T16:30:53.093 | 2010-11-13T16:30:53.093 | 1874 | 1874 | null |
4459 | 2 | null | 1980 | 10 | null | I have a few such examples [on my research papers page](http://jakebowers.org/papers.html). (I am not allowed to post more than one hyperlink as a new member. So I'll just describe the papers on that site.)
(1) "Making Effects Manifest in Randomized Experiments" uses R's vignette system.
(2) "Attributing Effects to a Cluster Randomized Get-Out-The-Vote Campaign" was a more complex paper involving some time consuming simulations. We used a Makefile based system and posted it to the Dataverse
(3) "EDA for HLM" was my earliest attempt. Here I just put the data and associated Sweave files in a tarball.
One problem we discovered when creating our JASA archive was that versions and defaults of CRAN packages changed. So, in that archive, we also include the versions of the packages that we used. The vignette based system will probably break as folks change their packages (not sure how to include extra packages within the package that is the Compendium).
Finally, I wonder about what to do when R itself changes. Are there ways to produce, say, a virtual machine that reproduces the entire computational environment used for a paper such that the virtual machine is not enormous?
Anyway, I hope that these examples help. At least they show some of my own experiments in this area.
(Here are some plain text hyperlinks.)
[2]: http://jakebowers.org/manifesteffects-compendium-howto.txt
[3]: http://hdl.handle.net/1902.1/12174
[4]: http://hdl.handle.net/1902.1/13376
| null | CC BY-SA 2.5 | null | 2010-11-11T21:30:55.937 | 2010-11-11T21:37:18.953 | 2010-11-11T21:37:18.953 | 930 | 909 | null |
4461 | 2 | null | 4454 | 1 | null | For mocking up what you want the survey to look like for a programmer, I would definitely just code it up in HTML.
- HTML is easy in general; for things like this, it will be dead simple. I don't think you'd need to fuss with any CSS to make something useful for your programmer. Conversely...
- You or your programmer can use CSS to drastically change the appearance of the survey without needing to edit the individual elements.
- Version control becomes very easy - just use something like git, SVN, or Fossil that will let you easily distribute the most recent version of the survey and will let you track exactly which changes were made, when, and by whom.
- It's free!
| null | CC BY-SA 2.5 | null | 2010-11-11T22:02:31.417 | 2010-11-11T22:02:31.417 | null | null | 71 | null |
4462 | 1 | 4547 | null | 19 | 28761 | I do a bunch of real estate reporting and the median price is often reported, particularly by the NAR (National Association of Realtors). As best I can tell, they only get the medians of real estate prices from each area. My question is, how should the national median be calculated, given the data restrictions? As a median of medians, a simple average of medians, a weighted average of medians, or something else entirely? Second, how valid would these estimates be? I know that the NAR is not getting the total transaction table, so can a reasonably accurate representation of the median still be estimated at the national level? I ask, in particular, because regional density and prices and market variances are so large.
| Median of Medians calculation | CC BY-SA 3.0 | null | 2010-11-11T22:21:09.420 | 2015-09-18T22:50:01.793 | 2015-09-18T22:50:01.793 | 40230 | 1963 | [
"median"
]
|
4463 | 2 | null | 4451 | 6 | null | Check out the Stanford large network dataset collection: [SNAP](http://snap.stanford.edu/data/).
| null | CC BY-SA 2.5 | null | 2010-11-11T22:33:57.397 | 2010-11-11T22:33:57.397 | null | null | 1913 | null |
4464 | 2 | null | 4451 | 6 | null |
- A huge twitter dataset that includes followers, not just tweets
- large collection of twitter datasets here
| null | CC BY-SA 2.5 | null | 2010-11-11T23:06:48.887 | 2010-11-14T11:22:27.570 | 2010-11-14T11:22:27.570 | 183 | 1808 | null |
4465 | 1 | 4479 | null | 9 | 1620 | I am a student, working with a team on a large-scale ecological experiment. We want to analyze survival data which has been derived from an experimental design with some pseudo-replication. This pseudo-replication was not discovered, unfortunately, until the middle of the experiment, at which point the design could not be altered.
The experimental design involves comparing the survival data of several treatment groups, each comprised of 10 replicates (aquaria) with 10 individuals in each tank. We have measured mortality as a response variable, in response to different environmental stressors. The trouble is that we cannot say that the deaths occurred independently, since each tank holds many individuals. We would like to acknowledge this problem and address it in our analysis.
All of the survival analysis tools we are aware of, assume independence between replicates. We are considering using Kaplan-Meier curves, Cox proportional hazard, or even a glm with Gamma error distributions.
Any ideas about how we can properly address this problem in order to detect a difference in survivorship between the treatment groups?
| Any ideas about how to analyze survival data with pseudo-replication (dependent data)? | CC BY-SA 2.5 | null | 2010-11-11T23:10:10.583 | 2010-11-12T13:22:26.720 | 2010-11-11T23:48:57.587 | 449 | 1862 | [
"modeling",
"survival",
"experiment-design",
"independence",
"frailty"
]
|
4466 | 1 | null | null | 31 | 1691 | Context:
In response to an earlier question about reproducible research [Jake wrote](https://stats.stackexchange.com/questions/1980/complete-substantive-examples-of-reproducible-research-using-r/4459#4459)
> One problem we discovered when creating our JASA archive was that versions and defaults of CRAN packages changed. So, in that archive, we also include the versions of the packages that we used. The vignette based system will probably break as folks change their packages (not sure how to include extra packages within the package that is the Compendium). Finally, I wonder about what to do when R itself changes. Are there ways to produce, say, a virtual machine that reproduces the entire computational environment used for a paper such that the virtual machine is not enormous?
Question:
- What are good strategies for ensuring that reproducible data analysis is reproducible in the future (say, five, ten, or twenty years after publication)?
- Specifically, what are good strategies for maximising ongoing reproducibility when using Sweave and R?
This seems to be related to the issue of ensuring that a reproducible data analysis project will run on someone else's machine with slightly different defaults, packages, etc.
| How to increase longer term reproducibility of research (particularly using R and Sweave) | CC BY-SA 2.5 | null | 2010-11-12T01:05:10.823 | 2017-05-18T21:16:39.290 | 2017-05-18T21:16:39.290 | 28666 | 183 | [
"r",
"reproducible-research",
"project-management"
]
|
4467 | 2 | null | 4466 | 11 | null | One strategy involves using the `cacher` package.
- Peng RD, Eckel SP (2009). "Distributed reproducible research using cached computations," IEEE Computing in Science and Engineering, 11 (1), 28–34. (PDF online)
- also see more articles on Roger Peng's website
Further discussion and examples can be found in the book:
- Statistical Methods for Environmental Epidemiology with R
However, I don't have first hand experience of its effectiveness in ensuring ongoing reproducibility.
| null | CC BY-SA 2.5 | null | 2010-11-12T01:14:44.813 | 2010-11-12T03:31:43.277 | 2010-11-12T03:31:43.277 | 183 | 183 | null |
4468 | 2 | null | 4466 | 18 | null | At some level, this becomes impossible. Consider the case of the famous Pentium floating point bug: you need to conserve not only your models, your data, your parameters, your packages, all external packages, the host system or language (say, R), and the OS, but potentially also the hardware it all ran on. Now consider that some results may be simulation based and require a particular cluster of machines...
That's just a bit much for being practical.
With that said, I think more pragmatic solutions may be a "good enough" compromise: versioning your code (and maybe also your data) in revision control, storing the versions of all relevant software, and making it possible to reproduce the results by running a single top-level script.
Your mileage may vary. This also differs across disciplines or industry. But remember the old saw about the impossibility of foolproof systems: you merely create smarter fools.
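One small, concrete step in that direction for R is to store a record of the computational environment next to the results, for example:

```
# save the R version and the loaded package versions alongside the analysis output
writeLines(capture.output(sessionInfo()), "sessionInfo.txt")
```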
| null | CC BY-SA 2.5 | null | 2010-11-12T02:07:06.107 | 2010-11-12T03:11:01.320 | 2010-11-12T03:11:01.320 | 334 | 334 | null |
4469 | 1 | 4472 | null | 1 | 591 | Is there a way to type in a regression formula in SPSS in the same manner as R, with weights?
For example, in R, I would write something like:
```
lm(y ~ B1 + B2 + B1*B2, data=df, weights=x)
```
How would I go about recreating this in SPSS/PASW?
| Using R's formula style lm() with SPSS | CC BY-SA 2.5 | null | 2010-11-12T03:06:08.610 | 2012-06-12T14:18:11.463 | 2010-11-12T05:42:30.897 | 159 | 776 | [
"r",
"spss"
]
|
4470 | 2 | null | 4469 | 7 | null | In SPSS you can have the GUI write the syntax for you via the paste button. If you go through the regression command in the window there is an option to include weights.
Here is some sample code I just had the program output for me:
```
REGRESSION
/MISSING LISTWISE
/REGWGT=VAR3
/STATISTICS COEFF OUTS R ANOVA
/CRITERIA=PIN(.05) POUT(.10)
/NOORIGIN
/DEPENDENT VAR1
/METHOD=ENTER VAR2.
```
As with any program, I would suggest you check the documentation on how SPSS implements weights in OLS (I personally have no idea).
A comment by Wolfgang below points out that R and SPSS implement weights in the same manner (although I would still suggest checking out the documentation of how they implement weights.)
| null | CC BY-SA 3.0 | null | 2010-11-12T03:48:13.660 | 2012-06-12T14:18:11.463 | 2012-06-12T14:18:11.463 | 1036 | 1036 | null |
4471 | 1 | null | null | 2 | 9247 | For the chi-squared distribution, how would I find quantiles in the following three cases:
A) $P(X^2> X^2_{\alpha})=0.01$ when $v = 21$
B) $P(X^2 < X^2_{\alpha})=0.95$ when $v =6$
C) $P(X^2_{\alpha} < X^2 <23.209) = 0.015$ when $v = 10$
Here $X^2_{\alpha}$ is the $\alpha$-quantile of the $\chi^2_{v}$ distribution.
| Calculating quantiles for chi squared distribution | CC BY-SA 2.5 | null | 2010-11-12T04:41:13.047 | 2021-02-02T02:49:17.333 | 2021-02-02T02:49:17.333 | 11887 | null | [
"self-study",
"quantiles",
"chi-squared-distribution"
]
|
4472 | 2 | null | 4469 | 8 | null | In recent versions of SPSS you can run R code directly in SPSS on your SPSS datasets.
| null | CC BY-SA 2.5 | null | 2010-11-12T04:42:32.533 | 2010-11-12T04:42:32.533 | null | null | 183 | null |
4473 | 1 | 4474 | null | 4 | 2465 | I've created a model (cue ominous music) in R based on previous months data using lm(). Now, I would like to see how well it predicts the current months data.
For example, my model predicts sales figures. I have figures for January, February and March. My model is based on figures from January and February and I would like to estimate how good of a predictor the model is of March's sales.
Is there a method for doing this without having to manually apply all the coeficients and weights?
| Validating a linear model with R, lm() | CC BY-SA 3.0 | null | 2010-11-12T06:50:57.487 | 2011-04-13T15:01:29.147 | 2011-04-13T10:34:22.903 | 449 | 776 | [
"r",
"regression"
]
|
4474 | 2 | null | 4473 | 8 | null | See `?predict` (which for an `lm` fit dispatches to `?predict.lm`).
Make sure you put into "newdata" a data.frame with the exact same variable names.
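A minimal sketch, with made-up variable names for the sales example in the question:

```
fit <- lm(sales ~ advertising + price, data = jan_feb)  # model fit on January/February data
pred <- predict(fit, newdata = march)                   # 'march' must contain the same column names
sqrt(mean((march$sales - pred)^2))                      # e.g. root mean squared prediction error
```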
| null | CC BY-SA 2.5 | null | 2010-11-12T07:08:19.540 | 2010-11-12T07:08:19.540 | null | null | 253 | null |
4476 | 2 | null | 4446 | 1 | null | Although the formulas are equivalent, the practical difference is ease of calculation if you're doing it by hand. If all I had was a piece of paper and a pencil, I'd prefer the second formula: $\sum X^2$ and $\sum X$ together take less time and are less error-prone to calculate than $\sum (X - m)^2$.
| null | CC BY-SA 2.5 | null | 2010-11-12T12:37:14.680 | 2010-11-12T12:37:14.680 | null | null | 1916 | null |
4477 | 2 | null | 4471 | 7 | null | By hand, you need to refer to a tabulated distribution of the $\chi^2$, which should be easy to find on the web (e.g., [this one](http://www.unc.edu/~farkouh/usefull/chi.html)). Let x denote the quantile of interest, and v the degrees of freedom of the chi-square distribution.
You just have to know that the total area under the curve (i.e., the density) equals 1 (this will help you to work through the third case), and that such tables generally give the upper-tail probability P(X > x) = p, for a certain v. Knowing x (resp. p), you can find the approximate value of p (resp. x). In the aforementioned table, the first cell reads: P(X > 1.32) = 0.25, for a 1-df chi-square.
If you have R, the `qchisq()` function gives you the requested quantiles (look at the on-line help to be sure of what is returned, esp. the `lower.tail` argument). For the preceding example, we would use `qchisq(0.25, 1, lower.tail=FALSE)`.
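For instance, a sketch of how the third case in the question could be handled with `pchisq()` and `qchisq()`:

```
# P(q < X < 23.209) = 0.015 with 10 df, so P(X < q) = P(X < 23.209) - 0.015
p_upper <- pchisq(23.209, df = 10)
qchisq(p_upper - 0.015, df = 10)   # the requested quantile
```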
It is always a good idea to draw the corresponding density curve, as illustrated below. Note that p3 is also 1-P(X < q2).

| null | CC BY-SA 2.5 | null | 2010-11-12T12:47:56.360 | 2010-11-12T13:12:37.960 | 2010-11-12T13:12:37.960 | 930 | 930 | null |
4478 | 2 | null | 4466 | 13 | null | The first step in reproducibility is making sure the data are in a format that is easy for future researchers to read. Flat files are the clear choice here (Fairbairn in press).
To make the code useful over the long term, perhaps the best thing to do is write clear documentation that explains both what the code does and also how it works, so that if your tool chain disappears, your analysis can be reimplemented in some future system.
- Fairbairn (in press) The advent of mandatory data archiving. Evolution DOI: 10.1111/j.1558-5646.2010.01182.x
| null | CC BY-SA 2.5 | null | 2010-11-12T12:54:03.257 | 2010-11-12T12:54:03.257 | null | null | 1916 | null |
4479 | 2 | null | 4465 | 3 | null | Here's some thoughts on what I'd do using relatively simple methods (i.e. avoiding frailty models, which I admit I've never used and don't really understand, so someone else may like to provide an answer involving them). I'm assuming you don't have other forms of censoring apart from the end of the experiment and that there are no time-dependent exposures (i.e. the treatment is either constant or applied only at the start before any deaths have taken place)
- Do some descriptive statistics and Kaplan-Meier plots ignoring the issue of dependence and therefore without reporting or displaying standard errors, confidence intervals or p-values.
- Ignore the time component and just count the number of deaths out of the total starting number in each aquarium. Fit a generalized linear model with a binomial distribution to these counts, using either a logistic link function (to give odds ratios) or a log link function (giving easier-to-interpret risk ratios at the price of potential problems with fitting the model). I think this is along the same lines as the analysis you said you're considering in the first comment to the question. As your mortalities are reasonably low, the loss in power over a full survival analysis with frailty modelling will probably be modest. This overcomes the dependence issue as you are using each aquarium as the unit of analysis instead of each individual.
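A sketch of the second option in R, with one row per aquarium (the column names are hypothetical):

```
fit <- glm(cbind(deaths, n_start - deaths) ~ treatment,
           family = binomial(link = "logit"), data = aquaria)
summary(fit)
exp(coef(fit))   # odds ratios for the treatment groups
```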
| null | CC BY-SA 2.5 | null | 2010-11-12T13:22:26.720 | 2010-11-12T13:22:26.720 | null | null | 449 | null |
4480 | 1 | 4481 | null | 2 | 1516 | Or more specifically, why would the index of a dataframe not be in ascending numerical order?
When I display the dataframe, the first row has column names, the data begins on second row, but the leftmost column (which doesn't have a header), is not in order, instead it shows as:
2
1
4
6
5
3
| What order are the rows in a dataframe in R displayed in? | CC BY-SA 2.5 | null | 2010-11-12T16:26:13.343 | 2010-11-12T16:35:15.587 | null | null | 1965 | [
"r"
]
|
4481 | 2 | null | 4480 | 5 | null | Data frames have a `rownames` attribute, which is shown as the captionless leftmost column; you've probably altered the order of the rows in the data frame, and it just shows the original indices. Try
```
rownames(df)<-NULL
```
to recreate it to 1:N.
| null | CC BY-SA 2.5 | null | 2010-11-12T16:32:03.157 | 2010-11-12T16:32:03.157 | null | null | null | null |
4482 | 2 | null | 4480 | 5 | null | If you reorder a data frame the row.names with be reordered to match. Note that if you haven't supplied rownames, R makes them up for you as characters `as.character(1:nrow(obj))`, where `obj` is your data frame. Hence these are row names.
```
> df <- data.frame(A = 5:1, B = LETTERS[1:5])
> df
A B
1 5 A
2 4 B
3 3 C
4 2 D
5 1 E
> df[order(df$A),]
A B
5 1 E
4 2 D
3 3 C
2 4 B
1 5 A
> df[sample(5),]
A B
3 3 C
2 4 B
4 2 D
1 5 A
5 1 E
```
But I could just as well have applied some rownames:
```
> rownames(df) <- letters[1:5]
> df
A B
a 5 A
b 4 B
c 3 C
d 2 D
e 1 E
> df[sample(5),]
A B
d 2 D
c 3 C
a 5 A
b 4 B
e 1 E
```
You would need to show us how your data were read in, example data, and what code you are using if the above doesn't answer your Q.
If the rownames are not important, ignore them or do:
```
rownames(df) <- NULL
```
| null | CC BY-SA 2.5 | null | 2010-11-12T16:35:15.587 | 2010-11-12T16:35:15.587 | null | null | 1390 | null |
4483 | 2 | null | 4187 | 5 | null | I would use the "logit.mixed" function in [Zelig](http://cran.at.r-project.org/web/packages/Zelig/index.html), which is a wrapper for lme4 and makes it very convenient to do prediction and simulation.
| null | CC BY-SA 2.5 | null | 2010-11-12T16:50:03.513 | 2010-11-12T16:50:03.513 | null | null | 1966 | null |
4484 | 2 | null | 4466 | 7 | null | If you are interested in the virtual machine route, I think it would be doable via a small Linux distribution with the specific versions of R and the packages installed. Include the data along with the scripts, and package the whole thing in a [virtual box](http://www.virtualbox.org/) file.
This does not get around hardware problems mentioned earlier such as the Intel CPU bug.
| null | CC BY-SA 2.5 | null | 2010-11-12T17:35:23.330 | 2010-11-12T17:35:23.330 | null | null | 1965 | null |
4485 | 1 | 4488 | null | 4 | 993 | As described in a few of my previous questions ([here](https://stats.stackexchange.com/q/3207/1381) and [there](https://stats.stackexchange.com/q/2917/1381)), I am interested in deriving summary statistics from statistics reported in the literature.
I would very much appreciate any advice on the validity of, or errors in, the following calculations.
---
To solve for $MSE$ given $F$, $df_{\text{group}}$, and $SS$.
This is required when a partial anova table is provided.
Given:
\begin{equation}\label{eq:f}
F = MS_g/MS_e
\end{equation}
Where $g$ indicates the group, or treatment.
Rearranging this equation gives:
$$MS_e=MS_g/F$$
Given
$$MS_x = SS_x/df_x$$
Substitute $SS_g/df_g$ for $MS_g$ in the first equation
$$F=\frac{SS_g/df_g}{MS_e}$$
Then solve for $MS_e$
\begin{equation}\label{eq:mse}
MS_e = \frac{SS_g}{df_g\times F}
\end{equation}
---
Example from table 3 in [Starr 2008](http://instaar.metapress.com/index/L5L27V837P506226.pdf).
The results are from one (two?) factor ANOVA with repeated measures, with treatment and week as the factors and no replication. "Effects of treatment on individual species were analyzed using Repeated Measures ANOVA in the statistical package Superanova (Abacus Concepts, Berkeley)."

We will calculate $MS_e$ from the $SS_{\text{treatment}}$, $df_{\text{treatment}}$, and $F$-value given in the table; these are $109.58$, $2$, and $0.570$, respectively; $df_{\text{weeks}}$ is given as $10$.
For the 1997 *Eriophorum vaginatum* data, the mean $A_{max}$ in table 4 is $13.49$.
Calculate $MS_e$:
$$MS_e = \frac{109.58}{0.57 \times 2} = 96.12$$
---
If this is the correct way, then shouldn't $MS_e$ be the same whether it is calculated from factor A or factor B?
---
This is only a first draft, and my first attempt at presenting a mathematical derivation in this much detail. Feedback on the writing and presentation would be much appreciated.
Thanks!
| Is this the correct way to calculate $MSE$ from $SS_{\text{treatment(s)}}$, $df_{\text{treatment(s)}}$, $F$? | CC BY-SA 2.5 | null | 2010-11-12T18:35:44.143 | 2010-11-14T04:50:42.380 | 2017-04-13T12:44:33.237 | -1 | 1381 | [
"anova",
"meta-analysis",
"error",
"descriptive-statistics",
"degrees-of-freedom"
]
|
4486 | 2 | null | 4431 | 4 | null | I have relied on [Resampling Methods for Dependent Data](http://books.google.com/books?id=e4f8sqm439UC&printsec=frontcover&dq=resampling+methods+for+dependent+data&source=bl&ots=hwangs1qCA&sig=WzmEiF0W2sRXlSm5MTvK6ESkt7k&hl=en&ei=_pPdTJnWIcGYnwfaqpCzDw&sa=X&oi=book_result&ct=result&resnum=1&ved=0CBIQ6AEwAA#v=onepage&q&f=false) by S.N. Lahiri and found it quite helpful. Once you determine some flavors you want to look at more closely (e.g. Circular Block Bootstrap, Stationary Block Bootstrap) it will be easier to find on-line resources discussing actual use cases and implementation details.
| null | CC BY-SA 2.5 | null | 2010-11-12T19:28:49.717 | 2010-11-12T19:28:49.717 | null | null | 1080 | null |
4487 | 2 | null | 4187 | 1 | null | Stephen Raudenbush has a book chapter in the Handbook of Multilevel Analysis on "[Many Small Groups](https://link.springer.com/book/10.1007/978-0-387-73186-5?from=SL)". If you are only interested in the effects of x on y and have no interest in higher level effects, his suggestion is simply to estimate a fixed effects model (i.e. a dummy variable for all possible higher level groupings).
I don't know how applicable that is towards prediction, but I would imagine some of what he writes is applicable to what you are trying to accomplish.
| null | CC BY-SA 4.0 | null | 2010-11-12T20:09:03.653 | 2022-05-15T12:13:42.687 | 2022-05-15T12:13:42.687 | 79696 | 1036 | null |
4488 | 2 | null | 4485 | 6 | null | Your derivation is perfectly fine for regular (non-repeated-measures) ANOVA. For repeated measures ANOVA the $F$-statistic does not always equal $MS_g/MS_e$. Assuming weeks is the repeated measure, this formula is correct for the weeks and weeks*treatment terms (note that they will give the same MSE), but not for the treatment term. There is another term that is not shown in this table: subjects within treatment, and the MSE you found actually corresponds to that term.
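As a rough illustration in R (long-format data with hypothetical column names, `week` and `subject` coded as factors), the error strata of such a design can be inspected with:

```
fit <- aov(Amax ~ treatment * week + Error(subject), data = d)
summary(fit)
# the 'Error: subject' stratum tests treatment against subjects-within-treatment,
# which is the mean square that the back-calculation in the question actually recovers
```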
If you are interested in complicated ANOVA tables, I would recommend a rather old book, that has a lot of these formulas worked out:
O. J. Dunn and V.A. Clark: Applied Statistics: Analysis of Variance and Regression, Wiley
| null | CC BY-SA 2.5 | null | 2010-11-12T21:06:36.833 | 2010-11-12T21:06:36.833 | null | null | 279 | null |
4489 | 1 | 4490 | null | 18 | 119822 | Consider the following graph:
```
x <- 1:100
y1 <- rnorm(100)
y2 <- rnorm(100)+100
par(mar=c(5,5,5,5))
plot(x,y1,pch=0,type="b",col="red",yaxt="n",ylim=c(-8,2),ylab="")
axis(side=2, at=c(-2,0,2))
mtext("red line", side = 2, line=2.5, at=0)
par(new=T)
plot(x,y2,pch=1,type="b",col="blue",yaxt="n",ylim=c(98,108), ylab="")
axis(side=4, at=c(98,100,102), labels=c("98%","100%","102%"))
mtext("blue line", side=4, line=2.5, at=100)
```
How can I remove the automatically generated borders and preserve just the axis lines, to achieve Tufte's style?
| Removing borders in R plots for achieving Tufte's axis | CC BY-SA 2.5 | null | 2010-11-13T01:46:15.850 | 2014-09-01T14:10:42.850 | null | null | 990 | [
"r",
"data-visualization"
]
|