Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7679
|
1
|
7682
| null |
9
|
1201
|
I've scoured Google and this site and I am still confused about the lmer function in the lme4 library.
I have some data collected from different psychiatric wards, which have a multilevel structure. To simplify, I'll pick two level 2 and two level 1 variables, although I actually have a few more.
Level 2: WardSize [the number of people on the ward] & WAS [a measure of how "nice" the ward is]
The grouping variable that tells R who's in which ward is called "Ward".
Level 1: Gender [this is gender, obviously] & BSITotal [a measure of symptom severity]
Outcome is Selfreject, which again is what it sounds like.
I have this formula:
help=lmer(formula=Selfreject~WardSize+WAS+Gender+BSITotal+(1|Ward))
I'm hoping this means "each individual has a score related to their own Gender and symptom severity, and also a ward-level effect relating to the size of the ward and how "nice" it is"
Is this correct? The thing that's confusing me is that I can't see how R can tell which are level 1 and which level 2 variables, except for the ward level intercept given at the end.
If anyone could explain the notation so an idiot like me can understand that would be even better.
Many thanks!
|
Am I specifying my lmer model correctly?
|
CC BY-SA 2.5
| null |
2011-02-27T16:16:32.887
|
2013-04-04T03:55:09.250
| null | null |
199
|
[
"r",
"mixed-model",
"lme4-nlme"
] |
7680
|
2
| null |
2213
|
18
| null |
What George Dontas writes is correct, however the use of RNNs in practice today is restricted to a simpler class of problems: time series / sequential tasks.
While feedforward networks are used to learn datasets like $(i, t)$ where $i$ and $t$ are vectors, e.g. $i \in \mathcal{R}^n$, for recurrent networks $i$ will always be a sequence, e.g. $i \in (\mathcal{R}^n)^*$.
Hammer has shown that RNNs can represent any measurable sequence-to-sequence mapping.
Thus, RNNs are being used nowadays for all kinds of sequential tasks: time series prediction, sequence labeling, sequence classification etc. A good overview can be found on [Schmidhuber's page on RNNs](http://www.idsia.ch/~juergen/rnn.html).
| null |
CC BY-SA 4.0
| null |
2011-02-27T16:29:58.647
|
2020-01-07T17:45:25.723
|
2020-01-07T17:45:25.723
|
156277
|
2860
| null |
7681
|
1
| null | null |
8
|
608
|
Say we have the following cities, each with the same population:
- CityA with 20% each of 5 ethnicities
- CityB with 99% of one ethnicity, but 100 different ethnicities in the remaining 1%
- CityC with 40% of one ethnicity and the remaining 60% distributed evenly over 10 different ethnicities
How can one measure their relative diversities?
|
Is there a way to compute diversity in a population?
|
CC BY-SA 2.5
| null |
2011-02-27T19:05:14.440
|
2018-10-13T08:57:24.400
|
2018-10-13T08:57:24.400
|
11887
|
276
|
[
"distributions",
"population",
"diversity"
] |
7682
|
2
| null |
7679
|
12
| null |
Your model specification is fine.
The varying intercept for Ward, specified in lmer as you've done with (1 | Ward), is saying that subjects within each ward might be more similar to each other on Selfreject for reasons other than WardSize or Gender, so you are controlling for between-ward heterogeneity.
You can think of the "1" as a column of 1s (i.e., a constant) in the data to which an intercept is fit. Usually the "1" is implied automatically in lm, for instance
```
lm(Y ~ X1 + X2)
```
actually specifies
```
lm(Y ~ 1 + X1 + X2)
```
Now that you have your basic model, you can start asking further questions like "Does the relationship between BSItotal and Selfreject differ between wards?"
```
lmer(formula=Selfreject ~ WardSize + WAS + Gender + BSITotal + (1 + BSITotal | Ward))
```
That is, both the intercept and the slope of BSITotal can differ between wards.
If you haven't picked it up yet, Gelman & Hill's [Data Analysis Using Regression and Multilevel/Hierarchical Models](http://www.stat.columbia.edu/~gelman/arm/) is a great book that explains fitting models like this with lmer.
| null |
CC BY-SA 2.5
| null |
2011-02-27T19:38:14.600
|
2011-02-27T19:38:14.600
| null | null |
1916
| null |
7683
|
1
|
7685
| null |
15
|
6046
|
I'm struggling to find a method for reducing the number of categories in nominal or ordinal data.
For example, let's say that I want to build a regression model on a dataset that has a number of nominal and ordinal factors. While I have no problems with this step, I often run into situations where a level of a nominal feature has no observations in the training set but does appear in the validation dataset. This naturally leads to an error when the model is presented with (so far) unseen cases.
Another situation where I would like to combine categories is simply when there are too many categories with few observations.
So my questions are:
- While I realize it might be best to combine many nominal (and ordinal) categories based on the prior real-world background information they represent, are there systematic methods (R packages preferably) available?
- What guidelines and suggestions would you make regarding cut-off thresholds and so on?
- What are the most popular solutions in the literature?
- Are there other strategies than combining small nominal categories into a new "OTHERS" category?
Please feel free to chime in if you have other suggestions also.
|
Methods for merging / reducing categories in ordinal or nominal data?
|
CC BY-SA 3.0
| null |
2011-02-27T21:02:23.573
|
2017-02-09T13:36:51.180
|
2017-02-09T13:36:51.180
|
11887
|
3401
|
[
"r",
"categorical-data",
"dimensionality-reduction",
"many-categories"
] |
7684
|
2
| null |
213
|
12
| null |
A novel approach I saw was in I.T. Jolliffe's Principal Component Analysis. You run a PCA on your data (note: PCA can be quite a useful data-exploration tool in its own right), but instead of looking at the first few principal components (PCs), you plot the last few PCs. These PCs are the linear relationships between your variables with the smallest variance possible. Thus they detect "exact" or close-to-exact multivariate relationships in your data.
A plot of the PC scores for the last PC will show outliers not easily detectable by looking individually at each variable. One example is height and weight: someone who has "above average" height and "below average" weight would be detected by the last PC of height and weight (assuming these are positively correlated), even if their height and weight were not "extreme" individually (e.g. someone who was 180cm and 60kg).
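As a minimal sketch of this idea (my own illustration, with simulated height/weight data and an artificially injected multivariate outlier, not anything from the original answer), the last component from `prcomp` can be plotted directly:
```
set.seed(42)
n <- 200
height <- rnorm(n, mean = 170, sd = 10)
weight <- 0.9 * (height - 170) + 65 + rnorm(n, sd = 4)   # positively correlated with height
height[1] <- 180; weight[1] <- 60                        # tall but light: unremarkable marginally

pca <- prcomp(cbind(height, weight), scale. = TRUE)
last_pc <- pca$x[, ncol(pca$x)]          # scores on the smallest-variance component
plot(last_pc, ylab = "Score on last PC") # the injected point stands out
which.max(abs(last_pc))                  # index of the largest |score| on the last PC
```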
| null |
CC BY-SA 3.0
| null |
2011-02-27T21:22:37.633
|
2013-11-21T23:21:08.853
|
2013-11-21T23:21:08.853
|
17230
|
2392
| null |
7685
|
2
| null |
7683
|
12
| null |
This is a response to your second question.
I suspect the correct approach to these kinds of decisions will be determined largely by disciplinary norms and the expectations of the intended audience of your work. As a social scientist, I often work with survey (or survey-like) data and I always try to balance substantive and data-driven logics when I collapse ordinal scales or categorical variables. In other words, I'll do my best to consider what combinations of items "hang together" in terms of their substance as well as the distribution of responses before I collapse the items.
Here's a recent example of a specific (ordinal) survey question that involved a five-point frequency scale:
>
How often do you attend the meetings of a club or organization in your community?
Never
A few times a year
Once a month
A few times a month
Once a week or more
I don't have the data available to me at the moment, but the results were strongly skewed towards the "never" end of the scale. As a result, my co-author and I chose to pool responses into two groups: "Once a month or more" and "Less than once a month." The resulting (binary) variable was more evenly distributed and reflected a meaningful distinction in practical terms: since many clubs and organizations don't meet more than once a month, there are good reasons to believe that people who attend meetings at least that often are "active" members of such groups whereas those who attend less frequently (or never) are "inactive."
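A minimal base-R sketch of that pooling step (with made-up attendance data; the level names follow the scale above, and the proportions are purely illustrative):
```
levs <- c("Never", "A few times a year", "Once a month",
          "A few times a month", "Once a week or more")
set.seed(1)
attend <- factor(sample(levs, 200, replace = TRUE,
                        prob = c(0.50, 0.25, 0.10, 0.10, 0.05)),
                 levels = levs, ordered = TRUE)

# Collapse to a binary indicator: "active" = attends once a month or more
active <- factor(ifelse(attend %in% levs[3:5],
                        "Once a month or more", "Less than once a month"))
table(attend)
table(active)
```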
So in my experience, these decisions are at least as much art as science. That said, I also usually try to do this before fitting any models, since I work in a discipline where anything else is viewed (negatively) as data mining and highly unscientific (fun times!).
With that in mind, it might help if you could say a little bit more about what sort of audience you have in mind for this work. It would also be in your best interests to review a few prominent methodology textbooks in your field as they can often clarify what passes for "normal" behavior among a given research community.
| null |
CC BY-SA 2.5
| null |
2011-02-27T21:52:51.347
|
2011-03-01T03:12:29.883
|
2020-06-11T14:32:37.003
|
-1
|
3396
| null |
7686
|
1
|
7696
| null |
1
|
1829
|
If the value returned by a Gaussian PDF is not a probability, then how come it is used in Bayes' rule in the same way as a PMF?
|
Bayes rule and gaussian PDF
|
CC BY-SA 2.5
| null |
2011-02-27T22:04:16.123
|
2011-02-28T16:23:11.257
|
2011-02-27T22:18:27.367
| null |
3456
|
[
"bayesian",
"normal-distribution"
] |
7687
|
2
| null |
7681
|
8
| null |
How about the [Shannon index](http://en.wikipedia.org/wiki/Shannon_index)?
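A minimal sketch (mine, not part of the original answer) of computing the Shannon index for the hypothetical cities in the question; the exact proportions for CityB and CityC are assumptions read off the question's description:
```
# Shannon index H = -sum(p * log(p)), taken over nonzero proportions
shannon <- function(p) {
  p <- p[p > 0]
  -sum(p * log(p))
}

cityA <- rep(0.20, 5)                   # five ethnicities at 20% each
cityB <- c(0.99, rep(0.01 / 100, 100))  # 99% one group, remaining 1% split over 100 groups
cityC <- c(0.40, rep(0.60 / 10, 10))    # 40% one group, 60% spread evenly over 10 groups

sapply(list(CityA = cityA, CityB = cityB, CityC = cityC), shannon)
```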
| null |
CC BY-SA 2.5
| null |
2011-02-28T00:42:13.393
|
2011-02-28T00:42:13.393
| null | null |
1050
| null |
7688
|
1
|
7693
| null |
2
|
535
|
I searched on the internet for books on statistics (particularly 4shared.com), and most of the books I found do not cover multivariate statistics in detail. Are there any good books which cover these topics in detail and with sufficient examples?
|
Books with good coverage of joint distributions, multivariate statistics, etc?
|
CC BY-SA 3.0
| null |
2011-02-28T01:24:26.660
|
2013-07-03T15:05:26.567
|
2013-07-03T15:05:26.567
|
17230
| null |
[
"multivariate-analysis",
"references",
"joint-distribution"
] |
7689
|
2
| null |
7681
|
2
| null |
You may be interested in [this paper](http://arxiv.org/abs/1101.5305): "A new axiomatic approach to diversity" from Chris Dowden.
| null |
CC BY-SA 2.5
| null |
2011-02-28T03:15:48.423
|
2011-02-28T03:15:48.423
| null | null |
3459
| null |
7690
|
2
| null |
7686
|
1
| null |
The posterior distribution derived using continuous distributions in Bayes' theorem can always be integrated (although maybe not by hand) to give a probability. If you want to convince yourself "caveman style," run the desired probabilities through Bayes' theorem using a Gaussian CDF, then take the derivative to get the posterior PDF. Sounds like a great homework problem to torture undergrads with. Thanks for the idea.
If you're not masochistic, and are amenable to a "handwaving argument," just think of $f(t) dt$ as an infinitesimal probability and plug it in to the theorem.
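To spell out that hand-waving argument a little (my own sketch, not part of the original answer): writing the prior density as $\pi(t)$ and the likelihood as $f(x \mid t)$, the infinitesimal $dt$ cancels between numerator and denominator, leaving Bayes' rule stated directly in terms of densities:
$$
P(\theta \in [t, t+dt] \mid x) \approx \frac{f(x \mid t)\,\pi(t)\,dt}{\int f(x \mid s)\,\pi(s)\,ds}
\qquad\Longrightarrow\qquad
\pi(t \mid x) = \frac{f(x \mid t)\,\pi(t)}{\int f(x \mid s)\,\pi(s)\,ds}.
$$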
| null |
CC BY-SA 2.5
| null |
2011-02-28T03:36:47.303
|
2011-02-28T03:36:47.303
| null | null |
5792
| null |
7691
|
2
| null |
6155
|
8
| null |
Aside from what has been said, for the visualization task alone (and outside of R), you might be interested in checking out [Gephi](http://gephi.org).
| null |
CC BY-SA 2.5
| null |
2011-02-28T06:14:53.413
|
2011-02-28T06:14:53.413
| null | null |
892
| null |
7692
|
2
| null |
7688
|
1
| null |
Despite @whuber's sound comment (covering all advances in MV analysis over the last 30 years is also outside the scope of, e.g., the famous [Handbook of Statistics](http://www.elsevier.com/locate/inca/BS_HS) series), I would like to recommend
>
Izenman, Modern Multivariate
Statistical Techniques, Springer
2008.
Although it has pretty much the same coverage as the [Elements of Statistical Learning](http://www-stat.stanford.edu/~tibs/ElemStatLearn/) by Hastie and colleagues, it has some different applications and covers extra topics, like Correspondence Analysis. There is a [short review](http://www.jstatsoft.org/v29/b11/paper) by John Maindonald in the JSS.
| null |
CC BY-SA 2.5
| null |
2011-02-28T07:39:54.400
|
2011-02-28T07:39:54.400
| null | null |
930
| null |
7693
|
2
| null |
7688
|
2
| null |
Last year, I spent every lunchtime for a week going to the Waterstones University bookshop in London looking for a good book on multivariate statistics (sad, I know!). I also endorse Izenman, Modern Multivariate Statistical Techniques, Springer 2008, as it really was the stand-out book. It starts every chapter with an easy-to-understand outline and gradually progresses through the very detailed theory, with lots of real-world examples, data sources and visualisations. It's actually quite a good read for the basic principles, and you can come back to it later when the going gets heavy.
| null |
CC BY-SA 2.5
| null |
2011-02-28T07:57:23.627
|
2011-02-28T07:57:23.627
| null | null | null | null |
7694
|
1
| null | null |
16
|
693
|
I am doing time series data analysis by state space methods. With my data, the stochastic local level model totally outperformed the deterministic one, but the deterministic level and slope model gives better results than a stochastic level with either a stochastic or a deterministic slope. Is this usual?
All methods in R require initial values, and I read somewhere that fitting an ARIMA model first and taking values from there as initial values for the state space analysis is one way; is that reasonable? Are there any other suggestions?
I should confess here that I'm totally new to state space analysis.
|
How to check which model is better in state space time series analysis?
|
CC BY-SA 3.0
|
0
|
2011-02-28T08:38:16.633
|
2013-08-18T22:49:08.440
|
2013-01-17T12:27:44.190
|
17230
| null |
[
"time-series",
"state-space-models"
] |
7695
|
1
| null | null |
2
|
345
|
The Pareto distribution can be used to give a pdf for the wealth of a person chosen randomly from a population. (In fact, this was its origin. See, for instance, [http://en.wikipedia.org/wiki/Pareto_principle](http://en.wikipedia.org/wiki/Pareto_principle) ).
I would like to explore the reciprocal question: given the total amount of wealth in a population, what is the pdf of the portion that a randomly chosen person has? I conjecture that this is simply a constant times the Pareto distribution.
More interestingly: what is the shape of the distribution curve if the richest person is at the 0 point on the x axis, the next richest person to the right, and so on? We would see a monotonically decreasing curve, but what is its shape? What is its derivative?
It's quite likely that I'm not phrasing that question properly. Let me ask a more basic question: what is the appropriate terminology to explore the question? Given a probability distribution applied many times over, what is the shape of the resultant allocation curve?
|
Understanding the Pareto distribution as applied to wealth
|
CC BY-SA 2.5
| null |
2011-02-28T08:55:17.277
|
2011-02-28T09:26:52.597
|
2011-02-28T09:26:52.597
| null | null |
[
"distributions",
"modeling",
"predictive-models",
"application",
"pareto-distribution"
] |
7696
|
2
| null |
7686
|
3
| null |
@Ahmed - you are definitely correct in thinking that something is not quite right here.
Conditioning on "point values" which have probability/measure 0 can be "dangerous" and can lead to what is called a [Borel and Kolmogorov Paradox](http://en.wikipedia.org/wiki/Borel%E2%80%93Kolmogorov_paradox). The lesson from this is that in order to be certain that we are defining a conditional probability unambiguously, it is necessary to:
- Specify a definite limiting process towards the null point.
- Ensure that the limit exists and is "well behaved."
- Don't change limits halfway through your calculations!
This is a rather tedious and often unnecessary process, but if you find yourself getting weird answers, this is the "insurance policy," so to speak. You can pretty much never get absurdities if you follow the above rules. But like all insurance, there is a price: the painful process of working out a limit.
I like Edwin Jaynes in this case:
>
there is no right choice or wrong choice because it is for us to say which limit we want to take; i.e., which problem we want to solve.
This is Sermon #1 on mathematical limits; although it was given long ago by Kolmogorov, many who try to do probability calculations still fail to heed it and get themselves into trouble. The moral is that, unless they are defined in the statement of a problem, probabilities conditional on point values of a parameter, have no meaning until the specific limiting process is stated. More generally, probabilities conditional on any propositions of probability zero, are undefined.
| null |
CC BY-SA 2.5
| null |
2011-02-28T09:26:48.403
|
2011-02-28T16:23:11.257
|
2011-02-28T16:23:11.257
|
919
|
2392
| null |
7697
|
2
| null |
672
|
3
| null |
Bayes theorem in its most obvious form is simply a re-statement of two things:
- the joint probability is symmetric in its arguments $P(HD|I)=P(DH|I)$
- the product rule $P(HD|I)=P(H|I)P(D|HI)$
So by using the symmetry:
$$P(HD|I)=P(H|I)P(D|HI)=P(D|I)P(H|DI)$$
Now if $P(D|I) \neq 0$ you can divide both sides by $P(D|I)$ to get:
$$P(H|DI)=P(H|I)\frac{P(D|HI)}{P(D|I)}$$
So this is it? How can something so simple be so awesome? As with most things, "it's the journey that's more important than the destination". Bayes theorem rocks because of the arguments that lead to it.
What is missing from this is that the product rule and sum rule $P(H|I)=1-P(\overline{H}|I)$ can be derived using deductive logic based on axioms of consistent reasoning.
Now the "rule" in deductive logic is that if you have a relationship "A implies B" then you also have "Not B implies Not A". So we have "consistent reasoning implies Bayes theorem". This means "Not Bayes theorem implies Not consistent reasoning". i.e. if your result isn't equivalent to a Bayesian result for some prior and likelihood then you are reasoning inconsistently.
This result is called Cox's theorem and was proved in "Algebra of Probable Inference" in the 1940s. A more recent derivation is given in Probability Theory: The Logic of Science.
| null |
CC BY-SA 2.5
| null |
2011-02-28T09:55:19.267
|
2011-02-28T09:55:19.267
| null | null |
2392
| null |
7698
|
1
| null | null |
6
|
9472
|
I want to approximate a non-linear function with a limited value range by an artificial neural network (feed forward, back propagation).
Most tools and literature available suggest linear functions for the output neurons when doing regression. However, I know a priori that my goal function has a limited range, so is it reasonable to use a function with a limited value range for the output neurons, too? To be more explicit: my target function's values are in the range between 0 and 1, but the neural net does occasionally predict values that exceed this range (e.g. -1.3). Can I prevent the net from doing so, and is it reasonable?
|
Output layer of artificial neural networks when learning non-linear functions with limited value range
|
CC BY-SA 4.0
| null |
2011-02-28T10:53:14.850
|
2018-10-11T10:59:28.927
|
2018-10-11T10:58:59.127
|
128677
|
3465
|
[
"neural-networks"
] |
7699
|
1
|
7700
| null |
1
|
2744
|
I have just done a Chow test on a regression in order to see whether there is a structural break. I am a bit stumped however as my Chow test returns a negative number. Now what do I do?
More specifically, this expression (from the Wikipedia [entry](http://en.wikipedia.org/wiki/Chow_test) for the Chow test):
$\frac{(S_c-(S_1+S_2))/(k)}{(S_1+S_2)/(N_1+N_2-2k)}$
turns out negative for me.
EDIT: It turns out it was a simple programming error.
|
What to do with negative Chow test?
|
CC BY-SA 3.0
| null |
2011-02-28T11:36:50.210
|
2017-11-01T11:31:04.367
|
2017-11-01T11:31:04.367
|
28666
|
3086
|
[
"change-point",
"chow-test"
] |
7700
|
2
| null |
7699
|
4
| null |
You've made a mistake somewhere in your calculations. It's not possible for the sum of the squared residuals from a single regression using the combined data to be less than the sum of the sums of squared residuals from the regressions using the two separate sets of data.
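A minimal sketch (simulated data and illustrative names, not taken from the question) of computing the Chow statistic from the three residual sums of squares; a negative value can only arise from computing one of these sums from the wrong regression:
```
set.seed(1)
n1 <- 60; n2 <- 60; k <- 2                     # k = number of estimated coefficients
x <- rnorm(n1 + n2)
group <- rep(1:2, c(n1, n2))
y <- ifelse(group == 1, 1 + 2 * x, 3 + 0.5 * x) + rnorm(n1 + n2)

S_c <- sum(resid(lm(y ~ x))^2)                       # pooled regression
S_1 <- sum(resid(lm(y ~ x, subset = group == 1))^2)  # first sub-sample
S_2 <- sum(resid(lm(y ~ x, subset = group == 2))^2)  # second sub-sample

F_chow <- ((S_c - (S_1 + S_2)) / k) / ((S_1 + S_2) / (n1 + n2 - 2 * k))
F_chow                                               # always >= 0
pf(F_chow, k, n1 + n2 - 2 * k, lower.tail = FALSE)   # p-value for a structural break
```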
| null |
CC BY-SA 2.5
| null |
2011-02-28T12:09:40.833
|
2011-02-28T12:09:40.833
| null | null |
449
| null |
7701
|
1
| null | null |
3
|
8633
|
I have data from human participants in a study. There are more females in the study (60%) and males are older. I have a binary categorical variable $O$. If those who are $True$ for $O$ are older, do I need to correct for sex and/or age? Maybe those $True$ for $O$ contain more men. What concepts or methods should I use to determine this? I'm using R.
|
Adjusting for Confounding Variables
|
CC BY-SA 2.5
| null |
2011-02-28T12:10:44.647
|
2011-02-28T14:33:46.443
|
2011-02-28T14:12:06.410
|
2116
|
2824
|
[
"r",
"regression",
"categorical-data"
] |
7702
|
2
| null |
7681
|
3
| null |
[Tree diversity analysis](http://www.worldagroforestry.org/units/library/books/PDFs/Kindt%20b2005.pdf) book will get you up to speed with common diversity indices, along with some useful packages in R and their usage. While the book talks about trees, it can be used with marine fauna (which I did for my thesis) or even people.
| null |
CC BY-SA 2.5
| null |
2011-02-28T12:32:27.213
|
2011-02-28T12:32:27.213
| null | null |
144
| null |
7703
|
2
| null |
7698
|
2
| null |
If you use a logistic activation function in the output layer, it will restrict the output to the range 0-1, as you require.
However, if you have a regression problem with a restricted output range, the sum-of-squares error metric may not be ideal, and a beta noise model might be more appropriate (cf. beta regression, which IIRC is implemented in an R package, but I have never used it myself).
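A minimal sketch of both suggestions (my own; the data are simulated, and I am assuming the `nnet` and `betareg` packages as convenient implementations rather than anything the answer specifically had in mind):
```
library(nnet)      # single-hidden-layer network with an optional logistic output unit
library(betareg)   # beta regression for responses in (0, 1)

set.seed(1)
x <- runif(300)
y <- plogis(4 * x - 2) + rnorm(300, sd = 0.05)
y <- pmin(pmax(y, 0.001), 0.999)   # keep strictly inside (0, 1) for the beta model
dat <- data.frame(x = x, y = y)

# linout = FALSE (the default) keeps a logistic output unit, so predictions stay in (0, 1)
fit_nn <- nnet(y ~ x, data = dat, size = 5, maxit = 500, linout = FALSE, trace = FALSE)
range(predict(fit_nn, dat))

# Beta regression as an alternative noise model for a bounded response
fit_beta <- betareg(y ~ x, data = dat)
summary(fit_beta)
```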
| null |
CC BY-SA 4.0
| null |
2011-02-28T13:01:10.830
|
2018-10-11T10:59:28.927
|
2018-10-11T10:59:28.927
|
128677
|
887
| null |
7704
|
2
| null |
7681
|
4
| null |
This paper by [Massey and Denton 1988](http://dx.doi.org/10.2307/2579183) is a fairly comprehensive overview of commonly used indices in sociology/demography. It is also useful for picking up other key terms to use when searching for articles. In sociology the indices are frequently labelled with names such as "heterogeneity" and "segregation" as well as "diversity".
Part of the reason no absolutely right answer to your question exists is that people frequently use only epistemic arguments for why one index is the preferred measurement, and those arguments are rarely so strong that one should entirely discount the other suggested measures. The work of Massey and Denton is useful for highlighting what many of these indices theoretically measure and when they differ to a substantively noticeable extent (in large cities in the US).
| null |
CC BY-SA 3.0
| null |
2011-02-28T13:07:11.443
|
2013-06-27T13:15:47.193
|
2013-06-27T13:15:47.193
|
22047
|
1036
| null |
7705
|
2
| null |
7698
|
0
| null |
If you know an absolute range for the output, but there is no reason to expect it to have the non-linear characteristic of the typical logistic activation function (i.e. a value in the middle is just as likely as a value near 0 or 1), then you can simply transform the output by dividing by the absolute maximum. If the minimum were not 0, you could subtract the absolute minimum before dividing by (maximum - minimum).
So basically don't try to train the neural network to the raw value, train it to the percentile value (0 for minimum, 1 for maximum).
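A minimal sketch of that rescaling (the range 10-50 is purely illustrative):
```
# Known absolute range of the target
y_min <- 10
y_max <- 50

to_unit   <- function(y) (y - y_min) / (y_max - y_min)  # map [y_min, y_max] -> [0, 1]
from_unit <- function(u) u * (y_max - y_min) + y_min    # map network outputs back

y <- c(12, 25, 48)
u <- to_unit(y)   # 0.05 0.375 0.95 -- train the network on these
from_unit(u)      # recovers 12 25 48
```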
| null |
CC BY-SA 2.5
| null |
2011-02-28T13:45:26.013
|
2011-02-28T13:45:26.013
| null | null |
2917
| null |
7706
|
1
|
7710
| null |
3
|
1063
|
If my dataset comprises few censored variables (<1%) and I fit the OLS regression using a heteroscedastic resistant estimator (the residuals are not terribly heteroscedastic to begin with)- are the results valid?
|
What is the magnitude of bias in censored regression when OLS is applied?
|
CC BY-SA 2.5
| null |
2011-02-28T14:03:40.473
|
2011-02-28T19:50:18.567
|
2011-02-28T14:41:16.510
|
2116
|
1291
|
[
"survival",
"least-squares",
"censoring"
] |
7707
|
2
| null |
7701
|
1
| null |
Edited several times to reflect comments
I realized I should give an example of what I meant by "what your model looks like now." From what you've said, I'm assuming that $Variant$ or $O$ is your dependent or outcome variable and that you're starting with something like the following:
$Variant = \beta_0 + \beta_1(Female) + \beta_2(Age) + \beta_3(Treatment) + \varepsilon$
Where I've inserted "Treatment" you can also think of any other sort of indicator that may be relevant to you (and in fact, if you're really dealing with experimental data, I would recommend considering something other than regression!)
If that's the case (and assuming you're confident in your sampling methods, data collection procedures, decision to use regression, etc.), then the answers to your questions are in the results of your regression model -- post the results and maybe we can help you interpret them.
If $Variant$ is on the other side of the equation (i.e. if it's a covariate or an independent variable), then you might want to think about [interaction terms](http://en.wikipedia.org/wiki/Interaction_%28statistics%29). They can help you test for associations in your data that are not simply additive.
From your response to my comment above (and below), it sounds like the model might look more like this (I've now replaced all the occurrences of $O$ with $Variant$ just to distinguish $O$ from $\theta$):
$Y = \beta_0 + \beta_1(Female) + \beta_2(Age) + \beta_3(Variant) + \varepsilon$
In that case, I would definitely consider introducing interaction terms between $Age$ and $Variant$ as well as between $Gender$ and $Variant$.
| null |
CC BY-SA 2.5
| null |
2011-02-28T14:05:58.000
|
2011-02-28T14:33:46.443
|
2011-02-28T14:33:46.443
|
3396
|
3396
| null |
7709
|
2
| null |
6809
|
5
| null |
UPDATE:
Now on CRAN:
[http://cran.r-project.org/web/packages/C50/index.html](http://cran.r-project.org/web/packages/C50/index.html)
ORIGINAL:
We've been working on this for a bit now (starting with Cubist then working on C5.0).
If you'd like to contribute:
[https://r-forge.r-project.org/projects/rulebasedmodels/](https://r-forge.r-project.org/projects/rulebasedmodels/)
was created recently and we should be checking the initial code in.
We've had access to the Cubist sources for a while now (but there was an explicit agreement not to link it to other sw) and been debating the different options for incorporating the code, but I thin
| null |
CC BY-SA 3.0
| null |
2011-02-28T14:09:51.923
|
2012-08-30T18:48:56.600
|
2012-08-30T18:48:56.600
|
3468
|
3468
| null |
7710
|
2
| null |
7706
|
5
| null |
Suppose you observe $(y_i,x_i)$, which come from a censored regression model:
\begin{align}
y^*_i&=x_i\beta+u_i \\
y_i&= \max(y_i^*,0)
\end{align}
with $u_i|x_i\sim N(0,\sigma^2)$,
Then this model is equivalent to:
\begin{align}
y_i=x_i\beta+\sigma\lambda(x_i\beta/\sigma)+e_i,
\end{align}
where $E(e_i|x_i,y_i>0)=0$ and $\lambda$ is [inverse Mills ratio](http://en.wikipedia.org/wiki/Mills_ratio).
If you apply OLS ignoring the censoring, you omit the term with the Mills ratio, so it does not matter that you account for heteroscedasticity; your coefficients will still be biased. If, as you say, you have only a few censored observations, then the bias might be small, but it is better to come to this conclusion by comparing both models.
I've lifted most of the material from [Wooldridge's book](http://books.google.com/books?id=cdBPOJUP4VsC&lpg=PP1&dq=wooldridge%20econometrics&hl=fr&pg=PP1#v=onepage&q=wooldridge%20econometrics&f=false), page 524, so you can read more about it there. You should also check whether you observe truncation or censoring, since these are usually confused and different results apply.
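A minimal simulation sketch (mine) of that comparison; I am assuming `AER::tobit` (a wrapper around `survival::survreg`) as one convenient way to fit the censored model:
```
library(AER)   # provides tobit()

set.seed(1)
n <- 500
x <- rnorm(n)
y_star <- 1 + 2 * x + rnorm(n)   # latent outcome
y <- pmax(y_star, 0)             # observed outcome, left-censored at zero

coef(lm(y ~ x))              # OLS on the censored response: typically attenuated
coef(tobit(y ~ x, left = 0)) # censored (tobit) regression: close to the true (1, 2)
```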
| null |
CC BY-SA 2.5
| null |
2011-02-28T14:38:32.627
|
2011-02-28T19:50:18.567
|
2011-02-28T19:50:18.567
|
930
|
2116
| null |
7712
|
1
|
7713
| null |
2
|
211
|
I am not sure if this is an instance of vectorizing the operations in R, but this is where I am stuck:
I want to get:
```
dpois(1, 0.1)
dpois(2, 0.2)
dpois(3, 0.3)
```
and I tried:
```
dpois(1:3, 0.1:0.3)
```
and
```
do.call(dpois, list(x = 1:3, lambda = 0.1:0.3))
```
both do not work.
Is there an R-ish way of doing this?
|
How do I "vectorize" calls to dpois?
|
CC BY-SA 2.5
| null |
2011-02-28T16:30:13.000
|
2018-10-19T02:12:21.990
| null | null |
1307
|
[
"r"
] |
7713
|
2
| null |
7712
|
6
| null |
From `help(dpois)` it looks like you need `x` and `lambda` to be vectors (read more about object classes in the R Intro or any other R documentation to understand what this means).
The following works:
`dpois(1:3, c(seq(0.1, 0.3, .1)))`
Your first attempt fails because `0.1:0.3` does not build the vector you expect: the `:` operator steps by 1, so `0.1:0.3` evaluates to just `0.1`, and that single lambda is recycled for all three values of `x`. Calling `seq()` in the manner above (or `c(0.1, 0.2, 0.3)`; see `help(c)`) builds the sequence from 0.1 to 0.3 in steps of 0.1.
Your second attempt is pretty far into the weeds. There's no way you need a power-tool like `do.call` for this kind of thing.
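For completeness, a small sketch (mine, not part of the original answer) showing that the vectorised call and an explicit element-wise mapping agree; `mapply` is the more general tool for pairing arguments of functions that are not already vectorised:
```
dpois(1:3, seq(0.1, 0.3, by = 0.1))                        # vectorised: pairs x[i] with lambda[i]
mapply(dpois, x = 1:3, lambda = seq(0.1, 0.3, by = 0.1))   # explicit element-wise mapping
```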
| null |
CC BY-SA 2.5
| null |
2011-02-28T16:48:15.783
|
2011-02-28T17:16:06.103
|
2011-02-28T17:16:06.103
|
3396
|
3396
| null |
7714
|
1
| null | null |
3
|
870
|
What inter-rater reliability test is best for continuous data? I am doing a study with one continuous variable, and the measurement is done by two people. I would like to do an inter-rater reliability test on the data. So far I have collected a few samples; sample data are given below. What test should I use?
Rater A 23.1 22.0 21.8 24.1 20.2 22.1 23.8
Rater B 23.0 21.5 22.0 23.9 19.8 21.9 22.9
|
Interater reliability
|
CC BY-SA 2.5
| null |
2011-02-28T17:00:48.593
|
2011-02-28T19:08:43.810
|
2011-02-28T18:59:08.360
|
3472
|
3472
|
[
"reliability",
"agreement-statistics"
] |
7716
|
2
| null |
2957
|
20
| null |
Unbiased estimates are typical in introductory statistics courses because they are: 1) classic, 2) easy to analyze mathematically. The Cramer-Rao lower bound is one of the main tools for 2). Moving away from unbiased estimates, there is possible improvement. The bias-variance trade-off is an important concept in statistics for understanding how biased estimates can be better than unbiased estimates.
Unfortunately, biased estimators are typically harder to analyze. In regression, much of the research in the past 40 years has been about biased estimation. This began with ridge regression (Hoerl and Kennard, 1970). See [Frank and Friedman (1996)](http://www.jstor.org/stable/1269656) and [Burr and Fry (2005)](http://www.jstor.org/stable/25471022) for some review and insights.
The bias-variance tradeoff becomes more important in high-dimensions, where the number of variables is large. Charles Stein surprised everyone when he proved that in the Normal means problem the sample mean is no longer admissible if $p \geq 3$ (see Stein, 1956). The James-Stein estimator (James and Stein 1961) was the first example of an estimator that dominates the sample mean. However, it is also inadmissible.
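A quick simulation sketch (my own illustration of the James-Stein result mentioned above, for a single observation per coordinate with known unit variance):
```
set.seed(1)
p <- 10
theta <- rnorm(p, sd = 0.5)   # true mean vector
reps <- 5000

risk_mle <- risk_js <- numeric(reps)
for (r in 1:reps) {
  x  <- rnorm(p, mean = theta, sd = 1)    # one observation per coordinate
  js <- (1 - (p - 2) / sum(x^2)) * x      # James-Stein shrinkage towards the origin
  risk_mle[r] <- sum((x  - theta)^2)
  risk_js[r]  <- sum((js - theta)^2)
}
c(MLE = mean(risk_mle), JamesStein = mean(risk_js))   # JS has the smaller total squared error
```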
An important part of the bias-variance problem is determining how bias should be traded off. There is no single “best” estimator. Sparsity has been an important part of research in the past decade. See [Hesterberg et al. (2008)](http://arxiv.org/pdf/0802.0964) for a partial review.
Most of the estimators referenced above are non-linear in $Y$. Even ridge regression is non-linear once the data is used to determine the ridge parameter.
| null |
CC BY-SA 2.5
| null |
2011-02-28T18:06:45.623
|
2011-03-01T01:07:08.683
|
2011-03-01T01:07:08.683
|
1670
|
1670
| null |
7717
|
2
| null |
7714
|
1
| null |
I'd suggest you [plot the difference against the mean](http://en.wikipedia.org/wiki/Bland-Altman_plot), then quantify things using the mean difference and the standard deviation of the difference. Seven samples is rather few, though.
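A minimal sketch of that plot using the seven pairs given in the question:
```
a <- c(23.1, 22.0, 21.8, 24.1, 20.2, 22.1, 23.8)   # Rater A
b <- c(23.0, 21.5, 22.0, 23.9, 19.8, 21.9, 22.9)   # Rater B

avg <- (a + b) / 2
d   <- a - b

plot(avg, d, xlab = "Mean of the two raters", ylab = "Difference (A - B)",
     main = "Bland-Altman plot")
abline(h = mean(d), lty = 2)                       # mean difference (bias)
abline(h = mean(d) + c(-2, 2) * sd(d), lty = 3)    # approximate limits of agreement

c(mean_diff = mean(d), sd_diff = sd(d))
```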
| null |
CC BY-SA 2.5
| null |
2011-02-28T19:08:43.810
|
2011-02-28T19:08:43.810
| null | null |
449
| null |
7718
|
1
|
7725
| null |
12
|
7232
|
Is there any standard method to determine an "optimal" operating point on a [precision recall](http://en.wikipedia.org/wiki/Precision_and_recall) curve? (i.e., determining the point on the curve that offers a good trade-off between precision and recall)
Thanks
|
How to choose a good operating point from precision recall curves?
|
CC BY-SA 2.5
| null |
2011-02-28T19:56:26.907
|
2020-12-26T19:35:38.027
|
2011-02-28T21:22:53.433
|
930
|
2798
|
[
"machine-learning",
"precision-recall"
] |
7719
|
1
| null | null |
3
|
890
|
The Data:
The observed probabilities (proportions) of three mutually exclusive events for five species.
What is the best way to plot these data in R along with their standard errors? I'd like to avoid a "beside" bar plot with error bars (3 bars for each species). I was hoping to use a stacked bar plot, but I'm unsure how to incorporate standard error!
Thanks!
|
Plotting Multiple Proportions With Standard Error
|
CC BY-SA 2.5
| null |
2011-02-28T20:21:29.907
|
2011-02-28T20:21:29.907
| null | null |
3474
|
[
"r",
"data-visualization",
"proportion"
] |
7720
|
1
| null | null |
31
|
48740
|
I am new to R, ordered logistic regression, and `polr`.
The "Examples" section at the bottom of the help page for [polr](http://stat.ethz.ch/R-manual/R-patched/library/MASS/html/polr.html) (that fits a logistic or probit regression model to an ordered factor response) shows
```
options(contrasts = c("contr.treatment", "contr.poly"))
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
pr <- profile(house.plr)
plot(pr)
pairs(pr)
```
- What information does pr contain? The help page on profile is
generic, and gives no guidance for polr.
- What is plot(pr) showing? I see six graphs. Each has an X axis that is
numeric, although the label is an indicator variable (looks like an input variable that is an indicator for an ordinal value). Then the Y axis
is "tau" which is completely unexplained.
- What is pairs(pr) showing? It looks like a plot for each pair of input
variables, but again I see no explanation of the X or Y axes.
- How can one understand if the model gave a good fit?
summary(house.plr) shows Residual Deviance 3479.149 and AIC (Akaike
Information Criterion?) of 3495.149. Is that good? In the case those
are only useful as relative measures (i.e. to compare to another model
fit), what is a good absolute measure? Is the residual deviance approximately chi-squared distributed? Can one use "% correctly predicted" on the original data or some cross-validation? What is the easiest way to do that?
- How does one apply and interpret anova on this model? The docs say "There are methods for the standard model-fitting functions, including predict, summary, vcov, anova." However, running anova(house.plr) results in anova is not implemented for a single "polr" object
- How does one interpret the t values for each coefficient? Unlike some
model fits, there are no P values here.
I realize this is a lot of questions, but it makes sense to me to ask as one bundle ("how do I use this thing?") rather than 7 different questions. Any information appreciated.
|
How to understand output from R's polr function (ordered logistic regression)?
|
CC BY-SA 2.5
| null |
2011-02-28T20:51:28.700
|
2018-01-08T16:24:10.643
|
2011-03-01T21:22:49.080
|
2849
|
2849
|
[
"r",
"logistic"
] |
7721
|
1
|
7770
| null |
8
|
412
|
I need to do a high dimensional biological data analysis. My data consists of hundreds of thousands of dimensions. I am looking for an implementation of multinomial logistic regression that will scale well to data of this size.
Ideally, it should also allow me to do ridge and lasso regression. Which software should I be using?
|
Scalable multinomial regression implementation
|
CC BY-SA 3.0
| null |
2011-02-28T21:24:23.323
|
2017-07-26T13:50:10.523
|
2017-07-26T13:50:10.523
|
128677
|
3301
|
[
"regression",
"lasso",
"ridge-regression",
"multinomial-logit"
] |
7723
|
1
| null | null |
6
|
8453
|
We are learning pivot functions, test statistics, and hypothesis testing at university but it makes no sense. I've tried reading my text book/notes, going through examples, etc., but the concepts seem like a random guess and I'm clueless about how to even start guessing what the answer could be.
### 1st part
Can you please explain how to calculate a pivot function? E.g $X_{1},\dots,X_{n} \sim N(\mu, \sigma^2)$. Pivot function for $\sigma^2$ when $\mu$ is known and when $\mu$ is unknown. Why does $\mu$ being un/known matter?
Also how would you calculate the pivot function for the ratio of two variances ($\sigma_{x}^2$ and $\sigma_{y}^2$)? Is it an F distribution? Assume $\mu_x$ and $\mu_y$ are known and $X_1,\dots,X_n \sim N(\mu_x,\sigma_x^2)$ and $Y_1,\dots,Y_n \sim N(\mu_y, \sigma_y^2)$.
### 2nd part
Can you please explain how to calculate a test statistic (I get how to show it's a test statistic but don't know how to form one from scratch).
Last, I have a few questions about hypothesis testing. I don't really understand how to calculate the power of a test or even what it means, to be honest. There is a whole bunch of theory and many definitions out there but they are rather abstract so I don't get it at all... I don't understand the notation or how to calculate the size/power of a test (generic form - not just with numbers).
Example: $X \sim N(\mu, \sigma^2)$. $H_0: \mu \geq \mu_0$ and $H_1: \mu < \mu_0$. $\mu$ and $\sigma$ unknown. Calculate the power and size of this test. How do I even start? I'm so confused. :(
I'm really stuck with all of this and I hope you can help me! :) if there's a better resource out there to help please do let me know.
EDIT1:
Thanks for your reply.
I did ask my lecturer to clarify… but ended up even more confused. He agrees the notes are unclear but will not rectify them because everyone else seems to get them! :( I also went to my class teacher and read through the Statistical Inference chapter several times – I get the basics but still don’t really understand most of it. I have googled around – and read Wikipedia – but it’s just more and more theory with no step by step examples explaining what to do. Everything seems randomly chosen and guesswork and hence my massive confusion.
One thing though – I can’t read this:
$$T_{X}=\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2} \sim \chi^{2}(N)$$
Is it supposed to be in mathematical notation? How do I view it properly?
Yes- you are correct – the pivot function is used to calculate the confidence interval. The thing is – once I have the pivot it’s quite straightforward to calculate the CI. But it’s the pivot that’s causing the problems.
I still don’t get the following:
Pivot function for sigma^2 when mu is known and when mu is unknown. Why does mu being un/known matter?
How would you calculate the pivot function for the ratio of 2 variances (sigmax^2 and sigmay^2)? Is it an F distribution? Assume mu x and mu y are known and X1....Xn - N(mu x,sigmax^2) and Y1...Yn -N(mu y, sigmay^2). Is it 1/Fn-1, m-1 = Fm-1,n-1?
And the hypothesis testing questions above please…. Can you shed some light on this please?
|
Pivotal quantities, test statistics and hypothesis tests
|
CC BY-SA 2.5
| null |
2011-02-28T21:58:04.160
|
2011-03-08T20:04:13.470
|
2011-03-08T20:04:13.470
| null | null |
[
"hypothesis-testing",
"pivot"
] |
7725
|
2
| null |
7718
|
14
| null |
The definition of "optimal" will of course depend on your specific goals, but here are a few relatively "standard" methods:
- Equal error rate (EER) point: the point where precision equals recall. This feels to some people like a "natural" operating point.
- A refined and more principled version of the above is to specify cost of the different kind of errors and optimize that cost. Say misclassifying an item (an error in precision) is twice as expensive as missing an item completely (error in recall). Then the best operating point is that where (1 - recall) = 2*(1 - precision).
- In some problems people have a natural minimal acceptable rate of either precision or recall. Say you know that if more than 20% of retrieved data is incorrect, the users will stop using your application. Then it is natural to set precision to 80% (or a bit lower) and accept whatever recall you have at that point.
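A minimal sketch (made-up precision/recall values; the 2:1 cost weighting follows the second bullet above) of picking the operating point numerically:
```
# Hypothetical precision/recall pairs along a PR curve, one per decision threshold
thresholds <- seq(0.1, 0.9, by = 0.1)
precision  <- c(0.55, 0.62, 0.70, 0.76, 0.81, 0.86, 0.90, 0.94, 0.97)
recall     <- c(0.98, 0.95, 0.90, 0.84, 0.76, 0.66, 0.53, 0.38, 0.20)

# Pick the point where (1 - recall) is closest to 2 * (1 - precision)
best <- which.min(abs((1 - recall) - 2 * (1 - precision)))
c(threshold = thresholds[best], precision = precision[best], recall = recall[best])
```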
| null |
CC BY-SA 4.0
| null |
2011-02-28T22:11:04.873
|
2019-11-08T15:10:57.790
|
2019-11-08T15:10:57.790
|
11032
|
3369
| null |
7726
|
2
| null |
7554
|
1
| null |
As @rolando2 mentioned, a histogram might be a tool for displaying the variation; and also, as @ashaw stated, you might need to find where 95% of the numbers fall. Then you can probably just use a box plot to summarise the basic features of your dataset, and put those summaries on the dashboard that is shown in the company without the box plot itself. For box plots, you might check here:
[http://en.wikipedia.org/wiki/Box_plot](http://en.wikipedia.org/wiki/Box_plot)
Hope this helps, at least for inspiration...
| null |
CC BY-SA 2.5
| null |
2011-02-28T23:04:16.287
|
2011-02-28T23:04:16.287
| null | null |
3296
| null |
7727
|
1
| null | null |
11
|
11332
|
I asked [this question](https://stackoverflow.com/questions/5130808/how-to-correlate-two-time-series-with-gaps-and-different-time-bases) over on StackOverflow, and was recommended to ask it here.
---
I have two time series of 3D accelerometer data that have different time bases (clocks started at different times, with some very slight creep during the sampling time), as well as containing many gaps of different size (due to delays associated with writing to separate flash devices).
The accelerometers I'm using are the inexpensive [GCDC X250-2](http://www.gcdataconcepts.com/x250-2.html). I'm running the accelerometers at their highest gain, so the data has a significant noise floor.
The time series each have about 2 million data points (over an hour at 512 samples/sec), and contain about 500 events of interest, where a typical event spans 100-150 samples (200-300 ms each). Many of these events are affected by data outages during flash writes.
So, the data isn't pristine, and isn't even very pretty. But my eyeball inspection shows it clearly contains the information I'm interested in. (I can post plots, if needed.)
The accelerometers are in similar environments but are only moderately coupled, meaning that I can tell by eye which events match from each accelerometer, but I have been unsuccessful so far doing so in software. Due to physical limitations, the devices are also mounted in different orientations, where the axes don't match, but they are as close to orthogonal as I could make them. So, for example, for 3-axis accelerometers A & B, +Ax maps to -By (up-down), +Az maps to -Bx (left-right), and +Ay maps to -Bz (front-back).
My initial goal is to correlate shock events on the vertical axis, though I would eventually like to a) automatically discover the axis mapping, b) correlate activity on the mapped axes, and c) extract behavior differences between the two accelerometers (such as twisting or flexing).
The nature of the time series data makes Python's numpy.correlate() unusable. I've also looked at R's Zoo package, but have made no headway with it. I've looked to different fields of signal analysis for help, but I've made no progress.
Anyone have any clues for what I can do, or approaches I should research?
Update 28 Feb 2011: Added some plots [here](https://picasaweb.google.com/FlyMyPG/VibData?authkey=Gv1sRgCLPo0u-7jafQjwE#) showing examples of the data.
|
How to correlate two time series with gaps and different time bases?
|
CC BY-SA 3.0
| null |
2011-03-01T01:13:29.410
|
2019-07-23T22:34:49.777
|
2019-07-23T22:34:49.777
|
11887
|
3479
|
[
"time-series",
"correlation",
"unevenly-spaced-time-series"
] |
7728
|
2
| null |
7683
|
6
| null |
The kinds of approaches ashaw discusses can lead to a relatively more systematic methodology. But I also think that by "systematic" you mean algorithmic. Here data mining tools may fill a gap. For one, there's the Chi-squared Automatic Interaction Detection (CHAID) procedure built into SPSS's Decision Tree module; it can, according to rules set by the user, collapse ordinal or nominal categories of predictor variables when they show similar values on the outcome variable (whether it's continuous or nominal). These rules might depend on the size of the groups being collapsed or created by collapsing, or on the p-values of related statistical tests. I believe some classification and regression tree (CART) programs can do the same things. Other respondents should be able to speak about similar functions performed by neural networks or other applications provided through various data mining packages.
| null |
CC BY-SA 2.5
| null |
2011-03-01T01:29:23.287
|
2011-03-01T01:29:23.287
| null | null |
2669
| null |
7730
|
1
|
7731
| null |
8
|
2095
|
If I have a dependent variable and $N$ predictor variables and wanted my stats software to examine all the possible models, there would be $2^N$ possible resulting equations.
I am curious to find out what the limitations are with regard to $N$ for major/popular statistic software since as $N$ gets large there is a combinatorial explosion.
I've poked around the various web pages for the packages but have not been able to find this information. I would suspect a limit of around 10-20 for $N$?
If anyone knows (and has links) I would be grateful for this information.
Aside from R and Minitab, I can think of SAS, SPSS, Stata, Matlab, and Excel(?). Are there any other packages I should consider?
|
What are the software limitations in all possible subsets selection in regression?
|
CC BY-SA 2.5
| null |
2011-03-01T03:09:39.890
|
2011-03-02T14:06:57.093
|
2011-03-01T12:53:58.627
|
10633
|
10633
|
[
"regression",
"model-selection",
"multivariable"
] |
7731
|
2
| null |
7730
|
12
| null |
I suspect 30--60 is about the best you'll get. The standard approach is the leaps-and-bounds algorithm which doesn't require fitting every possible model. In $R$, the [leaps](http://cran.r-project.org/web/packages/leaps/index.html) package is one implementation.
The documentation for the `regsubsets` function in the leaps package states that it will handle up to 50 variables without complaining. It can be "forced" to do more than 50 by setting the appropriate boolean flag.
You might do a bit better with some parallelization technique, but the number of total models you can consider will (almost undoubtedly) only scale linearly with the number of CPU cores available to you. So, if 50 variables is the upper limit for a single core, and you have 1000 cores at your disposal, you could bump that to about 60 variables.
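A minimal sketch (simulated data) of the `regsubsets` usage described above; `really.big` is the boolean flag mentioned for going beyond 50 variables:
```
library(leaps)

set.seed(1)
n <- 200; p <- 10
X <- matrix(rnorm(n * p), n, p)
colnames(X) <- paste0("x", 1:p)
y <- 2 * X[, 1] - 1.5 * X[, 3] + rnorm(n)
dat <- data.frame(y = y, X)

fit <- regsubsets(y ~ ., data = dat, nvmax = p)   # exhaustive best-subset search
summary(fit)$bic                                  # BIC of the best model of each size
# For more than ~50 predictors the flag must be set explicitly:
# regsubsets(y ~ ., data = dat, nvmax = p, really.big = TRUE)
```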
| null |
CC BY-SA 2.5
| null |
2011-03-01T03:19:54.343
|
2011-03-01T04:09:02.487
|
2011-03-01T04:09:02.487
|
2970
|
2970
| null |
7732
|
1
| null | null |
1
|
2962
|
I have 2 acceleration vectors, each represented by a matrix whose first column corresponds to the magnitude of acceleration and whose second column corresponds to the time (in ms). They both represent the same data, but one sensor is started a little later than the other, so I'm trying to remove the time lag using correlation.
How do I find the correlation between these 2 matrices? I tried `xcorr2` but it doesn't seem right, and `xcorr` would probably calculate the correlation taking only the values into consideration, whereas I also want to account for the time. The idea is that I should finally get a correlation vector with respect to time, and then when I calculate the peak of the correlation vector, its index would correspond to the time lag.
I'm really confused here; will an FFT help? How would you suggest I go about it?
|
Calculate Cross Correlation of two matrices of the 'Values Vs. Time' representation
|
CC BY-SA 2.5
| null |
2011-03-01T03:21:16.320
|
2011-03-01T04:27:36.000
|
2011-03-01T04:27:36.000
|
2116
| null |
[
"correlation",
"matlab",
"autocorrelation",
"cross-correlation",
"fourier-transform"
] |
7734
|
1
| null | null |
23
|
1365
|
### Context:
In an effort to structure the centerpieces that I have come across in probability theory and statistics, I created a reference document focusing on the mathematical essentials (available [here](https://github.com/mavam/stat-cookbook)).
By sharing this document, I hope to give statistics students a comprehensive summary of the core material taught in graduate courses about these topics. While mainly intended as a teaching resource, folks might also find it helpful as a personal reference, e.g., to look up distribution relationships or illustrations of common PDFs. I also maintain a
[page](http://matthias.vallentin.net/probability-and-statistics-cookbook/) with updates and fixes. Feedback is always much appreciated.
### Question:
- What are your favorite statistics cheat sheets, references, or cookbooks that I could use for inspiration?
- What helped you to structure your knowledge in this domain?
- In the long term, my plan is to enrich this document (or create a separate one) with R examples to bridge the gap between theory and practice. Would you deem this a valuable extension?
|
Suggestions for improving a probability and statistics cheat sheet
|
CC BY-SA 3.0
| null |
2011-03-01T06:45:55.910
|
2012-11-28T06:50:32.550
|
2012-11-28T06:50:32.550
|
1537
|
1537
|
[
"teaching"
] |
7735
|
2
| null |
7734
|
4
| null |
My favorite is the [R Inferno](http://www.burns-stat.com/pages/Tutor/R_inferno.pdf) by Patrick Burns.
| null |
CC BY-SA 2.5
| null |
2011-03-01T07:40:34.397
|
2011-03-01T07:40:34.397
| null | null |
3309
| null |
7736
|
2
| null |
7734
|
6
| null |
[Tom Short's R Reference Card](http://cran.r-project.org/doc/contrib/Short-refcard.pdf) is excellent.
| null |
CC BY-SA 2.5
| null |
2011-03-01T08:01:54.057
|
2011-03-01T08:01:54.057
| null | null |
183
| null |
7737
|
2
| null |
7723
|
5
| null |
The first thing you should do is challenge your lecturer to explain these things clearly. If anything whatsoever seems counter-intuitive or backwards, then demand that he/she explain why it is intuitive. Statistics always makes sense if you think about it in the "right" way.
Calculating pivotal quantities is a very tricky business - I completely understand your bewilderment in "where should I start?"
For normal variance parameters, the "pivotal quantity" is the sum of squares divided by the variance parameter:
$$T_{X}=\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2} \sim \chi^{2}(N)$$
And a similar expression for $T_{Y}$. Note that the distribution only depends on $N$, which is known (if $\mu_{X}$ is unknown, replace by $\overline{X}$ and you lose one degree of freedom in the chi-square distribution). Thus $\frac{T_{X}}{T_{Y}}$ is a pivotal quantity, which has a value of:
$$\frac{\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2}}{\sum_{i=1}^{N}\Big(\frac{Y_{i}-\mu_{Y}}{\sigma_{Y}}\Big)^{2}}
$$
Note that because it is a pivotal quantity, we can create an exact confidence interval using the pivot as a starting point, and then substituting in our statistic. Now because the degrees of freedom are the same for each chi-square, we do indeed have an F distribution. So you can write:
$$1-\alpha=Pr(L < F < U)=Pr(L < \frac{T_{X}}{T_{Y}} < U)$$
$$1-\alpha=Pr(L < \frac{\sum_{i=1}^{N}\Big(\frac{X_{i}-\mu_{X}}{\sigma_{X}}\Big)^{2}}{\sum_{i=1}^{N}\Big(\frac{Y_{i}-\mu_{Y}}{\sigma_{Y}}\Big)^{2}} < U)$$
$$1-\alpha=Pr(L < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}}\frac{\sum_{i=1}^{N}(X_{i}-\mu_{X})^{2}}{\sum_{i=1}^{N}(Y_{i}-\mu_{Y})^{2}} < U)$$
Writing the observed ratio of the sum of squares as $R$ we get:
$$1-\alpha=Pr(L < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}} R < U)$$
$$1-\alpha=Pr(\frac{L}{R} < \frac{\sigma_{Y}^{2}}{\sigma_{X}^{2}} < \frac{U}{R})$$
As for how this solution comes about, I have absolutely no idea. What "principles" were followed (apart from being good at re-arranging statistical expression)?
One thing that I can think of is that you need to find some way to "standardise" your sampling distribution. So for example, normals you subtract mean and divide by standard deviation. For gamma you multiply by the scale parameter. I don't know many pivotal quantities that exist outside of the normal and gamma families.
I think this is one reason why ordinary "sampling statistics" is more an art than a science: you have to use your intuition about what statistics to try, and then you have to figure out whether you can standardise your data.
I am almost certain your lecturer will bring up the subject of confidence intervals - be sure to ask him/her what you should do when you only have one sample, or when you have 2 or more nuisance parameters. :)
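A quick simulation sketch (my own check, not part of the original answer) that the ratio above is pivotal, i.e. its distribution is F(N, N) regardless of the true means and variances chosen:
```
set.seed(1)
N <- 8; reps <- 20000
mu_x <- 5;  sigma_x <- 2     # arbitrary "true" values; the pivot should not depend on them
mu_y <- -1; sigma_y <- 0.5

ratio <- replicate(reps, {
  Tx <- sum(((rnorm(N, mu_x, sigma_x) - mu_x) / sigma_x)^2)
  Ty <- sum(((rnorm(N, mu_y, sigma_y) - mu_y) / sigma_y)^2)
  Tx / Ty
})

# Equal degrees of freedom in numerator and denominator, so T_X / T_Y ~ F(N, N)
qqplot(qf(ppoints(reps), N, N), ratio,
       xlab = "F(N, N) quantiles", ylab = "Simulated T_X / T_Y")
abline(0, 1)
```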
| null |
CC BY-SA 2.5
| null |
2011-03-01T09:34:17.290
|
2011-03-06T12:13:16.900
|
2011-03-06T12:13:16.900
|
2392
|
2392
| null |
7739
|
2
| null |
7730
|
3
| null |
As $N$ gets big, your ability to use maths becomes absolutely crucial. "Inefficient" mathematics will cost you at the PC. The upper limit depends on what equation you are solving. Avoiding matrix inverse or determinant calculations is a big advantage.
One way to help with increasing the limit is to use theorems for decomposing a large matrix inverse into smaller matrix inverses. This can often mean the difference between feasible and not feasible. But this involves some hard work, and often quite complicated mathematical manipulations! It is usually worth the time, though. Do the maths or do the time!
Bayesian methods might be able to give an alternative way to get your result - it might be quicker, which means your "upper limit" will increase (if only because it gives you two alternative ways of calculating the same answer - the smaller of the two will always be smaller than one of them!).
If you can calculate a regression coefficient without inverting a matrix, then you will probably save a lot of time. This may be particularly useful in the Bayesian case, because "inside" a normal marginalisation integral, the $X^{T}X$ matrix does not need to be inverted; you just calculate a sum of squares. Further, the determinant of the matrix will form part of the normalising constant. So "in theory" you could use sampling techniques to numerically evaluate the integral (even though it has an analytic expression), which will be eons faster than trying to evaluate the "combinatorial explosion" of matrix inverses and determinants. (It will still be a "combinatorial explosion" of numerical integrations, but this may be quicker.)
This suggestion above is a bit of a "thought bubble" of mine. I want to actually test it out, see if it's any good. I think it would be (5,000 simulations + calculate exp(sum of squares) + calculate weighted average beta should be faster than matrix inversion for a big enough matrix.)
The cost is approximate rather than exact estimates. There is nothing to stop you from using the same set of pseudo random numbers to numerically evaluate the integral, which will again, save you a great deal of time.
There is also nothing stopping you from using a combination of the two techniques: use exact calculations when the matrices are small and simulation when they are big. This is because, in this part of the analysis, they are just different numerical techniques - pick whichever is quickest!
Of course this is all just a bit of a "hand wavy" argument; I don't know exactly which software packages are best to use - and worse, trying to figure out which algorithms they actually use.
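As a small illustration of the "avoid the explicit inverse" point - my own R sketch, not part of the original answer - the normal equations can be solved directly, or via a QR decomposition as `lm()` does internally, with all three routes agreeing:
```
# Sketch only: simulated data, no claim about which route a given package uses
set.seed(1)
n <- 1000; p <- 50
X <- cbind(1, matrix(rnorm(n * (p - 1)), n, p - 1))
y <- X %*% rnorm(p) + rnorm(n)

b_inv <- solve(t(X) %*% X) %*% t(X) %*% y        # explicit inverse of X'X
b_sol <- solve(crossprod(X), crossprod(X, y))    # solve the normal equations directly
b_qr  <- qr.coef(qr(X), y)                       # QR decomposition, as lm() does

max(abs(b_inv - b_sol)); max(abs(b_inv - b_qr))  # all agree up to rounding error
```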
| null |
CC BY-SA 2.5
| null |
2011-03-01T10:17:03.143
|
2011-03-01T10:17:03.143
| null | null |
2392
| null |
7741
|
2
| null |
7698
|
0
| null |
"Would it work to use the linear function and simply cut all values below 0 to 0, and values above 1 to 1?"
I believe in many cases the cut-off value should be the percentage split of the training data. E.g. if your training data has 13% 0's and 87% 1's, then the cut-off would be 0.13: anything 0.13 and below on the output is classed as 0, and anything above is classed as 1. Obviously there is more uncertainty the closer the output is to the cut-off. It may also help to adjust the cut-off limits, especially where the cost of a mis-classification is high. This link may help a little [http://timmanns.blogspot.com/2009/11/building-neural-networks-on-unbalanced.html](http://timmanns.blogspot.com/2009/11/building-neural-networks-on-unbalanced.html)
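A tiny illustration of applying that cut-off (my own sketch, with made-up numbers for the network outputs):
```
# Hypothetical outputs in [0, 1]; 0.13 is the cut-off quoted in the example above
cutoff <- 0.13
p_hat  <- c(0.05, 0.12, 0.14, 0.60, 0.95)
as.integer(p_hat > cutoff)   # 0 0 1 1 1
```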
| null |
CC BY-SA 2.5
| null |
2011-03-01T10:48:20.840
|
2011-03-01T10:48:20.840
| null | null | null | null |
7742
|
1
| null | null |
10
|
1925
|
Suppose that the quantity which we want to infer is a probability distribution. All we know is that the distribution comes from a set $E$ determined, say, by some of its moments and we have a prior $Q$.
The maximum entropy principle(MEP) says that the $P^{\star}\in E$ which has least relative entropy from $Q$ (i.e., $P^{\star}=\displaystyle \text{argmin}_{P\in E}D(P\|Q)$) is the best one to select. Whereas the Bayesian rule of selection has a process of selecting the posterior given the prior which is supported by Bayes' theorem.
My question is whether there is any connection between these two inference methods (i. e., whether the two methods apply to the same problem and have something in common)? Or whether in Bayesian inference the setting is completely different from the above mentioned setting? Or am I not making sense?!
|
Bayesian vs Maximum entropy
|
CC BY-SA 2.5
| null |
2011-03-01T12:01:31.540
|
2017-06-05T23:00:38.103
|
2017-06-05T23:00:38.103
|
11887
|
3485
|
[
"bayesian",
"estimation",
"maximum-entropy"
] |
7743
|
2
| null |
7730
|
10
| null |
Just a caveat: feature selection is a risky business. The more features you have, the more degrees of freedom you have with which to optimise the feature selection criterion, and hence the greater the risk of over-fitting that criterion and, in doing so, obtaining a model with poor generalisation ability. It is possible that with an efficient algorithm and careful coding you can perform all-subsets selection with a large number of features; that doesn't mean it is a good idea to do it, especially if you have relatively few observations. If you do use all-subsets selection, it is vital to properly cross-validate the whole model fitting procedure (so that all-subsets selection is performed independently in each fold of the cross-validation). In practice, ridge regression with no feature selection often out-performs linear regression with feature selection (this advice is given in Miller's monograph on feature selection).
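A minimal sketch (mine, not the answerer's) of what "cross-validate the whole procedure" means in practice, using simulated data and stepwise AIC as a stand-in for all-subsets selection:
```
# Selection is redone inside every fold, so the CV error reflects the whole
# "select + fit" pipeline, not just the final fitted model.
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 0.5 * X[, 2] + rnorm(n)
dat <- data.frame(y = y, X)

k <- 5
folds <- sample(rep(1:k, length.out = n))
cv_err <- numeric(k)
for (i in 1:k) {
  train <- dat[folds != i, ]
  test  <- dat[folds == i, ]
  sel <- step(lm(y ~ ., data = train), trace = 0)   # selection on training fold only
  cv_err[i] <- mean((predict(sel, newdata = test) - test$y)^2)
}
mean(cv_err)   # honest estimate of the error of the whole procedure
```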
| null |
CC BY-SA 2.5
| null |
2011-03-01T13:01:35.607
|
2011-03-01T13:01:35.607
| null | null |
887
| null |
7744
|
1
| null | null |
2
|
493
|
Can anyone give some advice on how to start proving this algebraically?
Define the residual from a regression (one independent variable) algebraically and show that:
- the mean of the residuals is zero
- the correlation of the residuals and the independent variable is zero
|
Algebraic definition of a residual from a regression
|
CC BY-SA 2.5
| null |
2011-03-01T13:42:42.827
|
2011-03-01T14:35:34.517
|
2011-03-01T13:51:30.480
|
8
| null |
[
"regression",
"self-study",
"residuals"
] |
7745
|
1
|
7752
| null |
2
|
131
|
Hopefully you can help me with the meaning of the following; I don't really understand the terminology:
"regression of a vector of ones on the matrix $W$",
where $W$ is something like $(W_t)' = (w_{1t},w_{2t},w_{3t}, w_{4t})$.
I don't understand, which regression I actually have to compute. If it is of help for you, I'm trying to understand the Godfrey/Wickens test according to this article: Ghosh, S., Gilbert, C. L., & Hallett, A. J. H. (1983). Empirical Economics, 8, 63–69. [DOI: 10.1007/BF01973190](http://dx.doi.org/10.1007/BF01973190)
Greetings, Julius
|
Terminology question concerning regression
|
CC BY-SA 2.5
| null |
2011-03-01T14:04:45.603
|
2011-03-01T15:05:06.350
|
2011-03-01T14:34:47.363
|
449
|
3104
|
[
"regression",
"hypothesis-testing",
"terminology"
] |
7746
|
2
| null |
7744
|
2
| null |
Suppose you have the following regression model:
$$
y_i=\alpha+\beta x_i+\varepsilon_i
$$
The least squares problem looks for the $\alpha$ and $\beta$ which minimize the following function:
$$g(\alpha,\beta)=\sum_{i=1}^n(y_i-\alpha-\beta x_i)^2$$
The solution to this problem will satisfy
$$\frac{\partial g}{\partial \alpha}=0, \quad \frac{\partial g}{\partial \beta}=0.$$
Try differentiating and look hard at the resulting expressions. You will see that this answers your problem.
Note that @cardinal is right, if you do not include $\alpha$, your first statement is false.
Update This might be considered a non-algebraic solution, so please state more clearly what you mean by algebraic. If this is not helpful, I will retract my answer, which really is a long comment.
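For what it's worth, a quick numerical check of the two claims (my own sketch, using simulated data and R's `lm()`, which includes an intercept by default):
```
set.seed(1)
x <- rnorm(50)
y <- 2 + 3 * x + rnorm(50)
fit <- lm(y ~ x)
mean(resid(fit))      # ~ 0, up to floating-point error
cor(resid(fit), x)    # ~ 0
```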
| null |
CC BY-SA 2.5
| null |
2011-03-01T14:10:29.967
|
2011-03-01T14:35:34.517
|
2011-03-01T14:35:34.517
|
2116
|
2116
| null |
7747
|
2
| null |
5292
|
119
| null |
You can also try the brand-new [RStudio](http://www.rstudio.org/). Reasonably full-featured IDE with easy set-up. I played with it yesterday and it seems nice.
Update
I now like RStudio even more. They actively implement feature requests, and it shows in the little things getting better and better. It also includes Git support (including remote syncing so Github integration is seamless).
A bunch of big names just joined so hopefully things will continue getting even better.
Update again
And indeed things have only gotten better, in rapid fashion. Package build-check cycles are now point-and-click, and the little stuff continues to improve as well. It now comes with an integrated [debugging environment](http://www.rstudio.com/ide/docs/debugging/overview), too.
| null |
CC BY-SA 3.0
| null |
2011-03-01T14:19:56.700
|
2014-03-06T17:03:55.387
|
2014-03-06T17:03:55.387
|
36515
|
3488
| null |
7748
|
2
| null |
7698
|
2
| null |
I am opposed to cutting values off, since this will lead to a non-differentiable transfer function and your gradient-based training algorithm might screw up.
The sigmoid function at the output layer is fine: $\sigma(x) = \frac{1}{1 + e^{-x}}$. It will squash any output to lie within $(0, 1)$. So you can get arbitrarily close to the targets.
However, if you use the squared error you will lose the property of a "matching loss function". When using linear outputs for a squared error, the derivatives of the error reduce to $y - t$ where $y$ is the output and $t$ the corresponding target value. So you have to check your gradients.
I have personally had good results with sigmoids as outputs when I have targets in that range and using sum of squares error anyway.
| null |
CC BY-SA 2.5
| null |
2011-03-01T14:31:43.597
|
2011-03-01T14:31:43.597
| null | null |
2860
| null |
7749
|
1
| null | null |
2
|
284
|
"Under what condition (or conditions if you think it necessary) would one observe no change in the regression coefficient (e.g., b-hat Y on X1) for some variable when another variable is added to the regression equation?"
I think the answer is when the exogenous variables are perfectly uncorrelated - is that correct?
|
Changes in the regression coefficient
|
CC BY-SA 2.5
| null |
2011-03-01T14:36:24.950
|
2011-03-01T17:15:36.960
| null | null | null |
[
"regression"
] |
7750
|
2
| null |
7749
|
5
| null |
Basically yes. This follows from the [omitted variable bias](http://en.wikipedia.org/wiki/Omitted_variable_bias) problem. As you can see, the bias depends on the cross-product of the variables in the regression (in this case the intercept and your variable of interest) and the omitted variable. If the sample correlation of the variables is zero and the sample mean of the omitted variable is zero, then the cross-product is zero, hence there will be no change in the coefficient value. If the true correlation and the expectation of the omitted variable are zero, then the expectation of this cross-product is zero and the bias is zero.
| null |
CC BY-SA 2.5
| null |
2011-03-01T14:55:17.340
|
2011-03-01T17:15:36.960
|
2011-03-01T17:15:36.960
|
2116
|
2116
| null |
7751
|
2
| null |
7730
|
5
| null |
I was able to generate all possible subsets using 50 variables in SAS. I do not believe there is any hard limitation other than memory and CPU speed.
### Edit
I generated the 2 best models for N=1 to 50 variables for 5000 observations.
@levon9 - No, this ran in under 10 seconds. I generated 50 random variables from (0,1)
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-03-01T14:56:19.553
|
2011-03-02T14:06:57.093
|
2020-06-11T14:32:37.003
|
-1
|
3489
| null |
7752
|
2
| null |
7745
|
4
| null |
Ordinary least squares regression of $y$ on $X$ involves solving the normal equations
$$X'X\hat{\beta} = X'y$$
for $\hat{\beta}$, so I'd assume OLS regression of a vector of ones on $W$ implies solving
$$W'W\hat{\beta} = W'\bf{1},$$
where $\bf{1}$ is a vector of ones. If the matrix $X$ itself contained a column of ones, i.e. if the RHS of the regression included a constant, then the solution would be trivial, so I assume none of the $w_{it}$'s are constant in your case.
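A small sketch of how this looks in R (my own illustration; `W` is simulated, and the intercept is suppressed because the vector of ones is the response, not a constant regressor):
```
set.seed(1)
W <- matrix(rnorm(100 * 4), 100, 4)
ones <- rep(1, nrow(W))
solve(crossprod(W), crossprod(W, ones))   # solves W'W b = W'1
lm(ones ~ W - 1)$coefficients             # same numbers via lm()
```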
| null |
CC BY-SA 2.5
| null |
2011-03-01T15:05:06.350
|
2011-03-01T15:05:06.350
| null | null |
449
| null |
7753
|
2
| null |
5292
|
2
| null |
Despite all of the good recommendations, I've not found anything radically better than the default Mac GUI. R-Studio shows promise, but it's not currently that much more customizable or featureful than R and, say, BBEdit to edit.
| null |
CC BY-SA 2.5
| null |
2011-03-01T16:16:25.957
|
2011-03-01T16:16:25.957
| null | null |
1764
| null |
7754
|
1
|
7755
| null |
7
|
41787
|
## Background
I have two estimates of variance and their associated standard errors, calculated from sample sizes of $n=500$ and $n=10,000$; the results are $\hat{\sigma^2} (sd_{\hat{\sigma^2}})$:
$$\hat{\sigma^2}_{n=500}=69 (6.4)$$
$$\hat{\sigma^2}_{n=10,000}=72 (1.5)$$
## Question
If I say that variance increased by 3, what is the standard deviation around this estimate?
## Notes
- $SD$ of var calculated using $SD_{\hat{\sigma^2}}=\sqrt{s^4(2/(n-1) + k/n)}$
- I suspect that the fact that I am estimating the sd of a variance is not relevant to the calculation, but may help in the interpretation of what I am doing.
|
How to calculate the difference of two standard deviations?
|
CC BY-SA 2.5
| null |
2011-03-01T16:37:19.447
|
2011-03-01T19:10:57.737
|
2020-06-11T14:32:37.003
|
-1
|
1381
|
[
"standard-deviation",
"variance"
] |
7755
|
2
| null |
7754
|
8
| null |
The standard deviation of the difference between two independent random variables is the square root of the sum of the squares of their individual standard deviations (easier to express as variances) so in this case
$$\sqrt{6.4^2 + 1.5^2} \approx 6.6$$
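For completeness, the arithmetic with the numbers from the question:
```
sqrt(6.4^2 + 1.5^2)   # 6.573..., i.e. roughly 6.6
```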
| null |
CC BY-SA 2.5
| null |
2011-03-01T17:00:15.870
|
2011-03-01T17:00:15.870
| null | null |
2958
| null |
7757
|
1
|
7759
| null |
67
|
140962
|
I am trying to predict the outcome of a complex system using neural networks (ANN's). The outcome (dependent) values range between 0 and 10,000. The different input variables have different ranges. All the variables have roughly normal distributions.
I consider different options to scale the data before training. One option is to scale the input (independent) and output (dependent) variables to [0, 1] by [computing the cumulative distribution function](http://en.wikipedia.org/wiki/Cumulative_distribution_function) using the mean and standard deviation values of each variable, independently. The problem with this method is that if I use the sigmoid activation function at the output, I will very likely miss extreme data, especially data not seen in the training set.
Another option is to use a z-score. In that case I don't have the extreme data problem; however, I'm limited to a linear activation function at the output.
What are other accepted normalization techniques that are in use with ANN's? I tried to look for reviews on this topic, but failed to find anything useful.
|
Data normalization and standardization in neural networks
|
CC BY-SA 2.5
| null |
2011-03-01T18:53:04.537
|
2020-11-10T12:35:37.593
|
2019-11-05T12:36:28.050
|
219619
|
1496
|
[
"machine-learning",
"neural-networks",
"normalization",
"standardization"
] |
7758
|
1
|
7767
| null |
4
|
1348
|
I play a lot with [PyBrain](http://pybrain.org) -- Artificial Neural Network implementation in Python. I have noticed that in all the models that I receive the weights of the connections are roughly normally distributed around zero with a pretty low standard deviation (~0.3), which means that they are effectively limited within the [-1, 1] range. What does this mean? Is it a requirement of ANN? An outcome of backpropagaion learning? A sign for network health? Or just a random observation?

|
On connection weights in an Artificial Neural Network
|
CC BY-SA 2.5
| null |
2011-03-01T19:27:34.413
|
2012-02-09T04:00:27.067
| null | null |
1496
|
[
"neural-networks"
] |
7759
|
2
| null |
7757
|
50
| null |
A standard approach is to scale the inputs to have mean 0 and a variance of 1. Also linear decorrelation/whitening/pca helps a lot.
If you are interested in the tricks of the trade, I can recommend [LeCun's efficient backprop paper.](http://yann.lecun.com/exdb/publis/pdf/lecun-98b.pdf)
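A minimal sketch of both steps in base R (my own illustration, not from the cited paper): column-wise standardisation followed by PCA-based decorrelation/whitening.
```
set.seed(1)
X  <- matrix(rnorm(200 * 3, mean = 5, sd = c(1, 10, 100)), 200, 3, byrow = TRUE)
Xs <- scale(X)                      # each column: mean 0, variance 1
pc <- prcomp(Xs)                    # principal components of the scaled data
Xw <- pc$x %*% diag(1 / pc$sdev)    # whitened: (approximately) identity covariance
round(colMeans(Xs), 3); round(cov(Xw), 3)
```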
| null |
CC BY-SA 2.5
| null |
2011-03-01T20:27:31.693
|
2011-03-01T20:27:31.693
| null | null |
2860
| null |
7763
|
1
|
7764
| null |
4
|
163
|
I'm examining correlations in a data set with a large number of variables but small sample sizes. To get a feel for how these quantities behave, I generated some random data and looked at the distribution of correlations:
```
n = 4
y = matrix(rnorm(1000 * n), 1000, n)
x = matrix(rnorm(1000 * n), 1000, n)
p = as.numeric(cor(t(x),t(y)))
hist(p)
```
To my surprise, the distribution is almost perfectly uniform:

Does anyone have an explanation for this phenomenon? It makes some sense in that for n=2 we have either p=1 or p=-1, and as n->infinity the distribution becomes normal, so this distribution falls somewhere in between. But why uniform? I'm stumped.
|
Curious Sample Correlation Property
|
CC BY-SA 2.5
| null |
2011-03-01T21:10:20.670
|
2011-03-01T21:37:34.823
| null | null |
2111
|
[
"distributions",
"correlation"
] |
7764
|
2
| null |
7763
|
12
| null |
For independent Normal variates, the distribution of the correlation coefficient $r$ is proportional to $(1 - r^2)^{{1\over2} (n-4)}dr$. When $n=4$, that's uniform.
### Reference
R. A. Fisher, [Frequency-distribution of the values of the correlation coefficient in samples from an indefinitely large population](http://www.stat.duke.edu/courses/Spring05/sta215/lec/Fish1915.pdf). Biometrika, 10, 507. See Section 3. (Quoted in Kendall's Advanced Theory of Statistics, 5th Ed., section 16.24.)
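A quick simulation check (my own addition, not part of the original answer): for $n=4$ the sample correlation of independent normals should sit on the uniform density of height $1/2$ on $(-1,1)$.
```
set.seed(1)
r <- replicate(20000, cor(rnorm(4), rnorm(4)))
hist(r, breaks = 40, freq = FALSE, xlab = "r", main = "n = 4")
abline(h = 0.5, col = "red")   # uniform density on (-1, 1)
```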
| null |
CC BY-SA 2.5
| null |
2011-03-01T21:37:34.823
|
2011-03-01T21:37:34.823
|
2020-06-11T14:32:37.003
|
-1
|
919
| null |
7766
|
1
|
7773
| null |
11
|
2988
|
What syntax should I use for the `rma` function from the [metafor](http://cran.r-project.org/web/packages/metafor/index.html) package in order to get results for the following real-life example of a small meta-analysis?
(random-effect, summary statistic SMD)
```
study, mean1, sd1, n1, mean2, sd2, n2
Foo2000, 0.78, 0.05, 20, 0.82, 0.07, 25
Sun2003, 0.74, 0.08, 30, 0.72, 0.05, 19
Pric2005, 0.75, 0.12, 20, 0.74, 0.09, 29
Rota2008, 0.62, 0.05, 24, 0.66, 0.03, 24
Pete2008, 0.68, 0.03, 10, 0.68, 0.02, 10
```
|
Meta-analysis in R using metafor package
|
CC BY-SA 3.0
| null |
2011-03-01T22:38:49.620
|
2018-02-02T16:11:34.790
|
2018-02-02T16:11:34.790
|
101426
|
3333
|
[
"r",
"meta-analysis"
] |
7767
|
2
| null |
7758
|
6
| null |
I just took a look at some of my neural networks; the weights in those look normally distributed.
One possible argument is that each weight is the sum of IID delta values during backpropagation, so they will be Gaussian (due to the central limit theorem). This argument involves making some simplifications; for example the summed deltas are probably not independent of each other during backpropagation.
| null |
CC BY-SA 2.5
| null |
2011-03-01T23:01:28.537
|
2011-03-02T01:00:12.740
|
2011-03-02T01:00:12.740
|
2965
|
2965
| null |
7768
|
1
|
7804
| null |
12
|
1178
|
I've never really found any good text or examples on how to handle 'non-existent' data for inputs to any sort of classifier. I've read a lot on missing data, but what can be done about data that cannot exist, or doesn't exist, in relation to multivariate inputs? I understand this is a very complex question and will vary depending on the training methods used...
Eg if trying to predict laptime for several runners with good accurate data. Amongst many inputs, possible variables are:
- Input Variable - First time runner (Y/N)
- Input Variable - Previous laptime ( 0 - 500 seconds)
- Input Variable - Age
- Input Variable - Height
... many more input variables, etc.
& Output Predictor - Predicted Laptime (0 - 500 seconds)
A 'missing value' for '2. Previous laptime' could be computed in several ways, but '1. First time runner' would always equal N. But for 'NON-EXISTENT DATA' for a first-time runner (where '1. First time runner' = Y), what value/treatment should I give for '2. Previous laptime'?
For example assigning '2. Previous laptime' as -99 or 0 can skew the distribution dramatically and make it look like a new runner has performed well.
My current training methods have been using Logistic regression, SVM, NN & Decision trees
|
How to handle non existent (not missing) data?
|
CC BY-SA 2.5
| null |
2011-03-01T23:04:01.467
|
2012-03-08T21:46:38.083
|
2011-03-02T11:37:02.397
|
930
| null |
[
"missing-data"
] |
7769
|
2
| null |
7718
|
2
| null |
Following up on SheldonCooper's second and third bullet points: The ideal choice is to have somebody else make the choice, either in the form of a threshold (point 3) or a cost benefit tradeoff (point 2). And perhaps the nicest way to offer them the choice is with an [ROC curve](http://en.wikipedia.org/wiki/Receiver_operating_characteristic).
| null |
CC BY-SA 2.5
| null |
2011-03-01T23:35:48.853
|
2011-03-01T23:35:48.853
| null | null |
1739
| null |
7770
|
2
| null |
7721
|
4
| null |
I've had good experiences with Madigan's and Lewis's [BMR and BBR](http://www.bayesianregression.org) packages for multiple category dependent variables, lasso or ridge priors on parameters, and high dimensional input data. Not quite as high as yours, but it might still be worth a look. Instructions are here: [http://bayesianregression.com/bmr.html](http://bayesianregression.com/bmr.html)
| null |
CC BY-SA 2.5
| null |
2011-03-01T23:49:23.417
|
2011-03-02T20:44:34.913
|
2011-03-02T20:44:34.913
|
1739
|
1739
| null |
7771
|
1
|
7849
| null |
7
|
6483
|
Is there a way of calculating an effect size for the Kolmogorov-Smirnov Z statistic (in SPSS or by hand)? Or should I stick to the Mann-Whitney test, even though my group sizes are less than n=25?
|
How do I calculate the effect size for the Kolmogorov-Smirnov Z statistic?
|
CC BY-SA 2.5
| null |
2011-03-02T00:54:11.730
|
2017-11-19T13:13:10.180
|
2011-04-28T20:23:56.830
|
919
|
2025
|
[
"effect-size",
"kolmogorov-smirnov-test"
] |
7772
|
1
|
7819
| null |
6
|
521
|
I'm trying to fit a multivariate multiple regression model where the independent variable X is latent but I don't know where to start (I have prior information about the coefficient matrix so I can use some iterative method).
The dependent variable Y is an NxM matrix denoting N observations each from M variables. The latent variable X is an NxP matrix about which we know nothing except the dimensions. In addition to these, we have an initial estimate of the coefficient matrix beta based on prior knowledge. My goal is to find estimates of the latent X and the coefficient matrix beta by using both the data matrix Y and the initial coefficient matrix. I thought of constructing an EM algorithm, but because of the complexity of multivariate data and the latent variable concept, I am totally confused.
Thank you.
|
Is it possible to fit a multivariate regression model where the independent variable is latent?
|
CC BY-SA 2.5
| null |
2011-03-02T03:05:21.403
|
2011-03-02T20:42:51.933
|
2011-03-02T20:20:02.473
|
3499
|
3499
|
[
"regression",
"multivariate-analysis",
"latent-variable"
] |
7773
|
2
| null |
7766
|
11
| null |
Create a proper `data.frame`:
```
df <- structure(list(study = structure(c(1L, 5L, 3L, 4L, 2L), .Label = c("Foo2000",
"Pete2008", "Pric2005", "Rota2008", "Sun2003"), class = "factor"),
mean1 = c(0.78, 0.74, 0.75, 0.62, 0.68), sd1 = c(0.05, 0.08,
0.12, 0.05, 0.03), n1 = c(20L, 30L, 20L, 24L, 10L), mean2 = c(0.82,
0.72, 0.74, 0.66, 0.68), sd2 = c(0.07, 0.05, 0.09, 0.03,
0.02), n2 = c(25L, 19L, 29L, 24L, 10L)), .Names = c("study",
"mean1", "sd1", "n1", "mean2", "sd2", "n2"), class = "data.frame", row.names = c(NA,
-5L))
```
Run the `rma`-function:
```
library(metafor)
rma(measure = "SMD", m1i = mean1, m2i = mean2,
sd1i = sd1, sd2i = sd2, n1i = n1, n2i = n2,
method = "REML", data = df)
```
Please be aware that `rma` assumes `(m1i-m2i)`. This results in the following univariate random effects model meta-analysis:
```
> rma(measure = "SMD", m1i = mean1, m2i = mean2,
+ sd1i = sd1, sd2i = sd2, n1i = n1, n2i = n2,
+ method = "REML", data = df)
Random-Effects Model (k = 5; tau^2 estimator: REML)
tau^2 (estimate of total amount of heterogeneity): 0.1951 (SE = 0.2127)
tau (sqrt of the estimate of total heterogeneity): 0.4416
I^2 (% of total variability due to heterogeneity): 65.61%
H^2 (total variability / within-study variance): 2.91
Test for Heterogeneity:
Q(df = 4) = 11.8763, p-val = 0.0183
Model Results:
estimate se zval pval ci.lb ci.ub
-0.2513 0.2456 -1.0233 0.3061 -0.7326 0.2300
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```
You might want to change the estimation `method`, e.g. `method = "DL"` (but I would stick with `REML`).
| null |
CC BY-SA 2.5
| null |
2011-03-02T03:34:49.143
|
2011-03-02T04:38:14.870
|
2011-03-02T04:38:14.870
|
307
|
307
| null |
7774
|
1
|
7917
| null |
22
|
4284
|
In the classic [Coupon Collector's problem](http://en.wikipedia.org/wiki/Coupon_collector%27s_problem), it is well known that the time $T$ necessary to complete a set of $n$ randomly-picked coupons satisfies $E[T] \sim n \ln n$, $Var(T) \sim n^2$, and $\Pr(T > n \ln n + cn) < e^{-c}$.
This upper bound is better than the one given by the Chebyshev inequality, which would be roughly $1/c^2$.
My question is: is there a corresponding better-than-Chebyshev lower bound for $T$? (e.g., something like $\Pr(T < n \ln n - cn) < e^{-c}$ ) ?
|
What is a tight lower bound on the coupon collector time?
|
CC BY-SA 2.5
| null |
2011-03-02T03:58:17.613
|
2021-12-31T16:14:38.127
|
2016-02-27T20:19:54.317
|
919
|
3500
|
[
"probability",
"probability-inequalities",
"coupon-collector-problem"
] |
7775
|
1
|
9708
| null |
10
|
9891
|
Does anyone have suggestions or packages that will calculate the coefficient of partial determination?
The coefficient of partial determination can be defined as the percent of variation that cannot be explained in a reduced model, but can be explained by the predictors specified in a full(er) model. This coefficient is used to provide insight into whether or not one or more additional predictors may be useful in a more fully specified regression model.
The calculation for the partial r^2 is relatively straightforward after estimating your two models and generating the ANOVA tables for them. The calculation for the partial r^2 is:
(SSEreduced - SSEfull) / SSEreduced
I've written this relatively simple function that will calculate this for a multiple linear regression model. I'm unfamiliar with other model structures in R where this function may not perform as well:
```
partialR2 <- function(model.full, model.reduced){
anova.full <- anova(model.full)
anova.reduced <- anova(model.reduced)
sse.full <- tail(anova.full$"Sum Sq", 1)
sse.reduced <- tail(anova.reduced$"Sum Sq", 1)
pR2 <- (sse.reduced - sse.full) / sse.reduced
return(pR2)
}
```
Any suggestions or tips on more robust functions to accomplish this task and/or more efficient implementations of the above code would be much appreciated.
|
R implementation of coefficient of partial determination
|
CC BY-SA 2.5
| null |
2011-03-02T04:13:28.443
|
2021-08-09T20:43:58.457
| null | null |
696
|
[
"r",
"regression",
"anova"
] |
7776
|
2
| null |
5292
|
9
| null |
At least on linux, [RKWard](http://rkward.sourceforge.net/) offers the best functionality. The new [RStudio](https://www.rstudio.com/) appears quite promising as well.
| null |
CC BY-SA 3.0
| null |
2011-03-02T05:24:33.013
|
2016-08-13T09:35:40.303
|
2016-08-13T09:35:40.303
|
2461
| null | null |
7777
|
2
| null |
7720
|
24
| null |
I would suggest you look at books on categorical data analysis (cf. Alan Agresti's Categorical Data Analysis, 2002) for better explanation and understanding of ordered logistic regression. All the questions that you ask are basically answered by a few chapters in such books. If you are only interested in `R` related examples, Extending Linear Models in R by Julian Faraway (CRC Press, 2008) is a great reference.
Before I answer your questions, ordered logistic regression is a case of multinomial logit models in which the categories are ordered. Suppose we have $J$ ordered categories and that for individual $i$, with ordinal response $Y_i$,
$p_{ij}=P(Y_i=j)$ for $j=1,..., J$. With an ordered response, it is often easier to work with the cumulative probabilities, $\gamma_{ij}=P(Y_i \le j)$. The cumulative probabilities are increasing and invariant to combining adjacent categories. Furthermore, $\gamma_{iJ}=1$, so we need only model $J-1$ probabilities.
Now we want to link $\gamma_{ij}$s to covariates $x$. In your case, `Sat` has 3 ordered levels: `low`, `medium`, `high`. It makes more sense to treat them as ordered rather than unordered. The remaining variables are your covariates. The specific model that you are considering is the proportional odds model and is mathematically equivalent to:
$$\mbox{logit } \gamma_j(x_i) = \theta_j - \beta^T x_i, j = 1 \ldots J-1$$
$$\mbox{where }\gamma_j(x_i)=P(Y_i \le j | x_i)$$
It is so called because the relative odds for $Y \le j$ comparing $x_1$ and $x_2$ are:
$$\left(\frac {\gamma_j(x_1)}{1-\gamma_j(x_1)}\right) / \left(\frac {\gamma_j(x_2)}{1-\gamma_j(x_2)}\right)=\exp(-\beta^T (x_1-x_2))$$
Notice, the above expression does not depend on $j$. Of course, the assumption of proportional odds does need to be checked for a given dataset.
Now, I will answer some (1, 2, 4) questions.
>
How can one understand if the model
gave a good fit? summary(house.plr)
shows Residual Deviance 3479.149 and
AIC (Akaike Information Criterion?) of
3495.149. Is that good? In the case those are only useful as relative
measures (i.e. to compare to another
model fit), what is a good absolute
measure? Is the residual deviance
approximately chi-squared distributed?
Can one use "% correctly predicted" on
the original data or some
cross-validation? What is the easiest
way to do that?
A model fit by `polr` is a special `glm`, so all the assumptions that hold for a traditional `glm` hold here. If you take care of the parameters properly, you can figure out the distribution. Specifically, to test whether the model is a good fit or not, you may want to do a goodness-of-fit test, which tests the following null hypothesis (notice this is subtle: mostly you want to reject the null, but here you don't want to reject it, in order to claim a good fit):
$$H_o: \mbox{ current model is good enough }$$
You would use the chi-square test for this. The p-value is obtained as:
```
1-pchisq(deviance(house.plr),df.residual(house.plr))
```
Most of the time you'd hope to obtain a p-value greater than 0.05, so that you don't reject the null and can conclude that the model is a good fit (philosophical correctness is ignored here).
A lower AIC indicates a better fit, while at the same time penalising a large number of parameters. `stepAIC` is a good way to check this.
Yes, you can definitely use cross validation to see if the predictions hold. See `predict` function (option: `type = "probs"`) in `?polr`. All you need to take care of is the covariates.
>
What information does pr contain? The
help page on profile is generic, and
gives no guidance for polr
As pointed out by @chl and others, `pr` contains all the information needed for obtaining CIs and other likelihood-related information about the `polr` fit. All `glm`s are fit using an iteratively reweighted least squares method for the log-likelihood. In this optimization you obtain a lot of information (please see the references) which will be needed for calculating the variance-covariance matrix, CIs, t-values etc. It includes all of it.
>
How does one interpret the t values for each coefficient? Unlike some model fits, there are no P values here.
Unlike the normal linear model (a special `glm`), other `glm`s don't have the nice t-distribution for the regression coefficients. Therefore all you can get is the parameter estimates and their asymptotic variance-covariance matrix using maximum-likelihood theory. Therefore:
$$\text{Variance}(\hat \beta) = (X^T W X)^{-1}\hat \phi$$
The estimate divided by its standard error is what BDR and WV call the t-value (I am assuming the `MASS` convention here). It is analogous to the t-value from normal linear regression but does not follow a t-distribution. Using the CLT, it is asymptotically normally distributed, but they prefer not to use this approximation (I guess), hence no p-values. (I hope I am not wrong, and if I am, I hope BDR is not on this forum. I further hope someone will correct me if I am wrong.)
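If approximate p-values are nevertheless wanted, the usual normal approximation can be bolted on by hand. A sketch using the standard `MASS` housing example (which I am assuming is the `house.plr` fit the question refers to):
```
library(MASS)
house.plr <- polr(Sat ~ Infl + Type + Cont, weights = Freq, data = housing)
ctable <- coef(summary(house.plr))
# two-sided p-values from the asymptotic normal approximation to the t-values
p <- 2 * pnorm(abs(ctable[, "t value"]), lower.tail = FALSE)
cbind(ctable, "p value" = round(p, 4))
```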
| null |
CC BY-SA 3.0
| null |
2011-03-02T05:54:08.080
|
2015-03-04T12:06:56.747
|
2015-03-04T12:06:56.747
|
8413
|
1307
| null |
7780
|
1
|
7781
| null |
12
|
5295
|
I have a question about group sequential methods.
According to Wikipedia:
>
In a randomized trial with two treatment groups, classical group sequential testing is used in the following manner: If n subjects in each group are available, an interim analysis is conducted on the 2n subjects. The statistical analysis is performed to compare the two groups, and if the alternative hypothesis is accepted, the trial is terminated. Otherwise, the trial continues for another 2n subjects, with n subjects per group. The statistical analysis is performed again on the 4n subjects. If the alternative is accepted, then the trial is terminated. Otherwise, it continues with periodic evaluations until N sets of 2n subjects are available. At this point, the last statistical test is conducted, and the trial is discontinued
But by repeatedly testing accumulating data in this fashion, the type I error level is inflated...
If the samples were independent of one another, the overall type I error, $\alpha^{\star}$, would be
>
$\alpha^{\star} = 1 - (1 - \alpha)^k$
where $\alpha$ is the level of each test, and $k$ is the number of interim looks.
But the samples are not independent since they overlap. Assuming interim analyses are performed at equal information increments, [it can be found that (slide 6)](http://www.biostat.uzh.ch/teaching/master/methods2009/biostat6_group_sequential.pdf)

Can you explain me how this table is obtained?
|
Overall type I error when repeatedly testing accumulating data
|
CC BY-SA 2.5
| null |
2011-03-02T07:31:29.113
|
2013-02-16T10:44:22.070
| null | null |
3019
|
[
"multiple-comparisons",
"clinical-trials",
"type-i-and-ii-errors"
] |
7781
|
2
| null |
7780
|
12
| null |
The following slides, through 14, explain the idea. The point, as you note, is that the sequence of statistics is correlated.
The context is a z-test with known standard deviation. The first test statistic $z_1$, suitably standardized, has a Normal(0,1) distribution with cdf $\Phi$. So does the second statistic $z_2$, but--because the first uses a subset of the data used for the second--the two statistics are correlated with correlation coefficient $\sqrt{1/2}$. Therefore $(z_1, z_2)$ has a binormal distribution. The probability of a type I error (under the null hypothesis) equals the probability that either (a) a type I error occurs in the first test or (b) a type I error does not occur in the first test but does occur in the second test. Let $c = \Phi^{-1}(1 - 0.05/2)$ be the critical value (for a two-sided test with nominal size $\alpha$ = 0.05). Then the chance of a type I error after two analyses equals the chance that $|z_1| > c$ or $|z_1| \le c$ and $|z_2| > c$. Numeric integration gives the value 0.0831178 for this probability, agreeing with the table. Subsequent values in the table are obtained with similar reasoning (and more complicated integrations).
This graphic depicts the binormal pdf and the region of integration (solid surface).

| null |
CC BY-SA 2.5
| null |
2011-03-02T08:23:13.177
|
2011-03-02T08:36:21.063
|
2011-03-02T08:36:21.063
|
919
|
919
| null |
7782
|
1
| null | null |
8
|
1116
|
### Context
I have got some problems with my doctoral dissertation. My thesis is Investigating Secondary Primary School Teachers' Organizational citizenship behaviours through their perceptions about organizational culture and their organizational trust levels.
I have a sample of 871 teachers.
I have three instruments but they were developed by other researchers and they were used in some other studies.
I have been trying to analyse my data using structural equation modelling.
However while trying to do confirmatory factor analysis, only one instrument (organizational citizenship) was ok.
The other two instruments did not yield an acceptable first-order confirmatory factor analysis.
The RMSEA values were around 0.100.
Chi-square was too high, and chi-square divided by degrees of freedom was also too high.
### Question
- What should I do with my instruments and CFAs?
- Would item parcelling or removing items from these instruments be a good idea?
- Or should I skip structural equation modelling and go on with regression?
|
What to do following poor fit statistics for a confirmatory factor analysis?
|
CC BY-SA 2.5
| null |
2011-03-02T10:01:07.933
|
2011-03-02T14:30:59.527
|
2011-03-02T11:08:14.763
|
930
| null |
[
"factor-analysis",
"structural-equation-modeling"
] |
7783
|
2
| null |
7782
|
1
| null |
Instead of looking for statistical solutions that directly solve this problem, I would look for solutions that improve the diagnosis.
First, I'd compare the different samples used in the different studies.
Then, if you have the data, I'd look at the correlation patterns among the variables in the different samples. (You may be able to get these from other authors).
| null |
CC BY-SA 2.5
| null |
2011-03-02T10:59:44.943
|
2011-03-02T10:59:44.943
| null | null |
686
| null |
7784
|
1
| null | null |
9
|
2763
|
I am training an artificial neural network (backpropagation, feed-forward) with non-normally distributed data. Besides the root mean squared error, the literature often suggests the Pearson correlation coefficient for evaluating the quality of the trained net. But is the Pearson correlation coefficient reasonable if the training data is not normally distributed? Would it not be more reasonable to use a rank-based correlation measure, e.g. Spearman's rho?
|
Measuring correlation of trained neural networks
|
CC BY-SA 2.5
| null |
2011-03-02T11:04:04.407
|
2011-11-18T16:52:12.623
|
2011-11-18T16:52:12.623
|
919
|
3465
|
[
"correlation",
"neural-networks",
"spearman-rho"
] |
7785
|
1
|
7789
| null |
14
|
698
|
I am looking for good references on using directional data (measure of direction in degrees) as an independent variable in regression; ideally, it would also be useful for hierarchical nonlinear models (the data are nested). I am also interested in directional data more generally.
I have found a text by Mardia, which I am going to get, but wondered if there were good articles.
I am more interested in practical articles about how to deal with this type of data than in theorems and proofs, or formal statements of distributions and such. Thanks
UPDATE I have got the Mardia text, which is quite comprehensive. After some more reading, I may be back with more questions.
|
Logistic regression with directional data as IV
|
CC BY-SA 2.5
| null |
2011-03-02T11:06:55.250
|
2011-03-05T11:29:56.993
|
2011-03-05T11:29:56.993
|
686
|
686
|
[
"circular-statistics"
] |
7786
|
1
| null | null |
8
|
1033
|
I am looking for some suggestions about assessing the representativeness of a particular dataset I am analyzing.
In this dataset I am looking at the relationship between two variables (e.g., X and Y) in a population that is split into five distinct blocks. The main problem is that the data is based upon reports from the public, so some blocks have much more data than others.
The goal is to assess whether the relationship between X and Y differs between the blocks, but also to determine how reliable such estimates are given that we do not have a truly random sample of the overall population.
Any suggestions appreciated.
Thanks
|
Assessing the representativeness of population sampling
|
CC BY-SA 2.5
| null |
2011-03-02T11:19:11.983
|
2023-04-17T20:07:45.987
|
2011-03-02T11:41:04.033
|
930
|
3136
|
[
"sampling",
"survey",
"dataset",
"resampling"
] |
7787
|
2
| null |
7768
|
8
| null |
For a logistic regression fitted by maximum likelihood, as long as you have both (1) and (2) in the model, then no matter what "default" value that you give new runners for (2), the estimate for (1) will adjust accordingly.
For example, let $X_1$ be the indicator variable for "is a new runner", and $X_2$ be the variable "previous laptime in seconds". Then the linear predictor is:
$\eta = \alpha + \beta_1 X_1 + \beta_2 X_2 + \ldots$
If the default for $X_2$ is zero, then the linear predictor for a new runner is:
$\eta = \alpha + \beta_1 + \ldots$
whereas for an existing runner, it will be:
$\eta = \alpha + \beta_2 X_2 + \ldots$
Now suppose that you change the default for $X_2$ from 0 to -99. Then the linear predictor for a new runner is now:
$\eta = \alpha + \beta'_1 - 99 \beta_2 + \ldots$
but for an existing runner, it will remain the same. So all you've done is reparameterise the model, such that $\beta'_1 - 99 \beta_2 = \beta_1$, and since maximum likelihood is parameterisation invariant, the estimates will adjust accordingly.
Of course, if you're not using maximum likelihood (i.e. you're using some sort of penalisation or prior on the parameters), then you're going to get different values unless you adjust the penalisation/prior accordingly. And if the model is non-linear (e.g. SVM, NN & Decision trees), then this argument doesn't work at all.
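A small simulation (my own sketch, with hypothetical variable names) illustrating the invariance for a maximum-likelihood logistic regression fitted with `glm`:
```
set.seed(1)
n <- 500
new_runner <- rbinom(n, 1, 0.3)                               # X1
prev_time  <- ifelse(new_runner == 1, 0, rnorm(n, 300, 30))   # X2, default 0 for new runners
y <- rbinom(n, 1, plogis(-2 + new_runner + 0.005 * prev_time))

fit0  <- glm(y ~ new_runner + prev_time, family = binomial)
prev_time99 <- ifelse(new_runner == 1, -99, prev_time)        # change the default to -99
fit99 <- glm(y ~ new_runner + prev_time99, family = binomial)

max(abs(fitted(fit0) - fitted(fit99)))   # essentially zero: same fitted probabilities
coef(fit0); coef(fit99)                  # only the coefficient on X1 shifts
```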
| null |
CC BY-SA 2.5
| null |
2011-03-02T11:48:13.157
|
2011-03-02T11:48:13.157
| null | null |
495
| null |
7788
|
1
|
7793
| null |
4
|
180
|
Some edits made...
I have a dataset with which other researchers have used mixed-effects modelling to come up with a nice set of associations. I also have a much smaller dataset of the same variables but from a different country. The first dataset is plenty powerful enough (350 individuals from 30 locations) but the second dataset is not very powerful at all (70 individuals from 15 locations).
Is there some way to take what I know about the first data set and use it to boost the power of the second dataset? I've pooled all the data and analysed it and replicated the results, but I fear that is because the first dataset is just swamping the second one. So far I've come up with:
1) Analyse first dataset, then pool, analyse again, and compare AIC to see if it gets worse (seems a bit crude).
2) Analyse both datasets using a dummy variable for first/ second dataset (they're from different countries). This seems like a good idea but I'm not convinced it would eliminate the whole problem of the first swamping the second.
3) It sounds a bit like Bayesian statistics, but I don't know the first thing about Bayesian statistics, so if it is this then I'd better hit the library.
Any and all help as ever gratefully received!
Clarifications:
1) dependent variable same for both countries
2) oh dear, not sure I understand this question. All the variables are the same, so I have a list of their data, with a country code, and then the same variables again, with a different country code. All observations are individuals clustered in wards clustered in countries. Does this answer?
3) Really I'm just trying to replicate the significant linear associations which they found (or, I guess, highlight where they are different). So e.g. they found that size of ward improved outcomes, we would like to replicate that, along with some other associations.
|
How to replicate large well powered mixed effects model with a smaller sample?
|
CC BY-SA 2.5
| null |
2011-03-02T11:54:29.660
|
2011-03-03T09:46:22.250
|
2011-03-03T09:46:22.250
|
199
|
199
|
[
"bayesian",
"mixed-model",
"statistical-power"
] |
7789
|
2
| null |
7785
|
8
| null |
I would suggest applying a transform which deals with periodicity, i.e. $\lim_{x \to 360} f(x) = f(0)$. An easy option is to take the sine and cosine of the direction and put them both in as covariates in the model.
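A minimal sketch with simulated data (my own, with hypothetical variable names), entering direction in degrees through its sine and cosine in a logistic regression:
```
set.seed(1)
n <- 300
direction <- runif(n, 0, 360)           # direction in degrees
rad <- direction * pi / 180
y <- rbinom(n, 1, plogis(-0.5 + 1.2 * sin(rad) + 0.8 * cos(rad)))
fit <- glm(y ~ sin(rad) + cos(rad), family = binomial)
summary(fit)$coefficients
```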
| null |
CC BY-SA 2.5
| null |
2011-03-02T11:55:26.133
|
2011-03-02T11:55:26.133
| null | null |
495
| null |
7790
|
1
|
7807
| null |
3
|
2002
|
I have two exclusive groups of people and a counter of how many events happened for each group.
Lets say group 1 has 7000 people and group 2 has 3000 people.
Group 1 had 50 events and group 2 had 40 events.
I'm calculating the event percentage for each group: for example, for group 1 it's 50/7000, and for group 2 it's 40/3000.
I want to calculate how much statistical validity these results have (in other words, are the groups large enough or do I need to collect more data?), probably as a percentage (where >95% means the result is statistically valid).
Can someone point me to how to do it? I need to implement it in PHP code. I have little statistics knowledge. I think it involves the chi-square function but I'm not sure how to use the data with the PHP chi-square function [http://www.php.net/manual/en/function.stats-cdf-chisquare.php](http://www.php.net/manual/en/function.stats-cdf-chisquare.php)
Thanks!
ADDITIONAL INFO:
We're talking about visitors to a web store. I divide them randomly into 30% group B (test group) and 70% group A. I expose group A to a certain message.
I compare the conversion rates of the groups (% of visitors who buy something). And I want to know when the samples are large enough to be statistically significant.
|
How to determine statistical validity of results
|
CC BY-SA 2.5
| null |
2011-03-02T12:03:36.580
|
2011-03-02T17:14:47.650
|
2011-03-02T13:32:53.977
|
3506
|
3506
|
[
"statistical-significance",
"chi-squared-test"
] |
7791
|
1
|
7808
| null |
9
|
345
|
Some edits made...
This question is just for fun, so if it isn't fun then please feel free to ignore it. I already get a lot of help from this site so I don't want to bite the hand that feeds me. It's based on a real life example and it's just something I've wondered about a lot.
I visit my local dojo to train on an essentially random basis Monday-Friday. Let's assume I visit twice a week. This means I visit exactly twice, every week, with only the two days varying. There is one individual who is nearly always there whenever I am there. If he visits on the same day as me then I will see him. Let's assume he's there 90% of the time when I'm there. I want to know two things:
1) how often he trains
2) whether he comes on a random basis or on set days of the week.
I'm guessing perhaps we have to assume one to guess the other? I've really got nowhere with this at all. I just think about it in the warm-up every week and am baffled anew. Even if somebody gave me a way in to think about the problem I would be most grateful.
Cheers!
|
Can I estimate the frequency of an event based on random samplings of its occurrence?
|
CC BY-SA 2.5
|
0
|
2011-03-02T12:04:56.220
|
2011-03-02T17:09:01.147
|
2011-03-02T17:09:01.147
|
919
|
199
|
[
"probability",
"estimation",
"sampling"
] |
7792
|
2
| null |
168
|
3
| null |
For univariate kernel density estimation, the bandwidth can be estimated by Normal reference rule or Cross Validation method or plug-in approach.
For multivariate kernel density estimation, a Bayesian bandwidth selection method may be utilized, see [Zhang, X., M.L. King and R.J. Hyndman (2006), A Bayesian approach to bandwidth selection for multivariate kernel density estimation, Computational Statistics and Data Analysis, 50, 3009-3031](http://www.sciencedirect.com/science/article/pii/S0167947305001362)
| null |
CC BY-SA 3.0
| null |
2011-03-02T12:11:19.977
|
2016-04-12T20:24:11.920
|
2016-04-12T20:24:11.920
|
10416
| null | null |
7793
|
2
| null |
7788
|
2
| null |
Sounds like what you want is a hierarchical model: you would have a country-level effect, and nested within that would be a location-level effect. I've only ever skimmed through it, but the Gelman and Hill book seems to be quite highly regarded in this area:
[http://www.stat.columbia.edu/~gelman/arm/](http://www.stat.columbia.edu/~gelman/arm/)
As for whether this sounds Bayesian: this is one area where the distinction between Bayesian and non-Bayesian techniques becomes very blurred. Personally, I find it easier to think of these models in Bayesian terms, because then you don't need to get bogged down in making the distinction between fixed and random effects.
| null |
CC BY-SA 2.5
| null |
2011-03-02T12:20:21.893
|
2011-03-02T12:20:21.893
| null | null |
495
| null |
7795
|
1
| null | null |
13
|
12603
|
I would like to find out the values `(x, y)` used in plotting `plot(b, seWithMean=TRUE)` in the mgcv package. Does anyone know how I can extract or compute these values?
Here is an example:
```
library(mgcv)
set.seed(0)
dat <- gamSim(1, n=400, dist="normal", scale=2)
b <- gam(y~s(x0), data=dat)
plot(b, seWithMean=TRUE)
```
|
How to obtain the values used in plot.gam in mgcv?
|
CC BY-SA 3.0
| null |
2011-03-02T13:16:19.200
|
2015-04-27T09:54:09.650
|
2014-04-29T14:26:27.517
|
7290
| null |
[
"r",
"time-series",
"smoothing",
"mgcv"
] |