Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
7457
|
1
|
7458
| null |
13
|
250810
|
I'm trying to normalize a set of columns of data in an Excel spreadsheet.
I need to rescale the values so that the highest value in a column equals 1 and the lowest equals 0, so I've come up with the formula:
`=(A1-MIN(A1:A30))/(MAX(A1:A30)-MIN(A1:A30))`
This seems to work fine, but when I drag down the formula to populate the cells below it, not only does `A1` increase, but `A1:A30` does too.
Is there a way to lock the range while updating just the number I'm interested in?
I've tried putting the MAX and MIN in a different cell and referencing that, but the reference just shifts to the cell below the one holding the MAX and MIN, and I get divide-by-zero errors because there is nothing there.
|
How to stop excel from changing a range when you drag a formula down?
|
CC BY-SA 2.5
| null |
2011-02-21T16:54:06.813
|
2013-01-30T20:54:57.593
|
2011-02-21T19:04:33.937
|
919
|
3348
|
[
"excel"
] |
7458
|
2
| null |
7457
|
44
| null |
A '$' locks the reference down, making it absolute rather than relative. You can lock down the column, the row, or both: $A1 locks only the column, A$1 only the row, and $A$1 both. Here is a fully locked absolute reference for your example.
```
=(A1-MIN($A$1:$A$30))/(MAX($A$1:$A$30)-MIN($A$1:$A$30))
```
| null |
CC BY-SA 2.5
| null |
2011-02-21T17:02:21.187
|
2011-02-21T17:02:21.187
| null | null |
2040
| null |
7459
|
2
| null |
7450
|
15
| null |
Update: 7 Apr 2011
This answer is getting quite long and covers multiple aspects of the problem at hand. However, I've resisted, so far, breaking it into separate answers.
I've added at the very bottom a discussion of the performance of Pearson's $\chi^2$ for this example.
---
Bruce M. Hill authored, perhaps, the "seminal" paper on estimation in a Zipf-like context. He wrote several papers in the mid-1970's on the topic. However, the "Hill estimator" (as it's now called) essentially relies on the maximal order statistics of the sample and so, depending on the type of truncation present, that could get you in some trouble.
The main paper is:
B. M. Hill, [A simple general approach to inference about the tail of a distribution](http://projecteuclid.org/euclid.aos/1176343247), Ann. Stat., 1975.
If your data truly are initially Zipf and are then truncated, then a nice correspondence between the degree distribution and the Zipf plot can be harnessed to your advantage.
Specifically, the degree distribution is simply the empirical distribution of the number of times that each integer response is seen,
$$
d_i = \frac{\#\{j: X_j = i\}}{n} .
$$
If we plot this against $i$ on a log-log plot, we'll get a linear trend with a slope corresponding to the scaling coefficient.
On the other hand, if we plot the Zipf plot, where we sort the sample from largest to smallest and then plot the values against their ranks, we get a different linear trend with a different slope. However the slopes are related.
If $\alpha$ is the scaling-law coefficient for the Zipf distribution, then the slope in the first plot is $-\alpha$ and the slope in the second plot is $-1/(\alpha-1)$. Below is an example plot for $\alpha = 2$ and $n = 10^6$. The left-hand pane is the degree distribution and the slope of the red line is $-2$. The right-hand side is the Zipf plot, with the superimposed red line having a slope of $-1/(2-1) = -1$.
[Figure: log-log plots of the degree distribution (left, red line of slope -2) and the Zipf plot (right, red line of slope -1) for $\alpha = 2$ and $n = 10^6$.]
So, if your data have been truncated so that you see no values larger than some threshold $\tau$, but the data are otherwise Zipf-distributed and $\tau$ is reasonably large, then you can estimate $\alpha$ from the degree distribution. A very simple approach is to fit a line to the log-log plot and use the corresponding coefficient.
If your data are truncated so that you don't see small values (e.g., the way much filtering is done for large web data sets), then you can use the Zipf plot to estimate the slope on a log-log scale and then "back out" the scaling exponent. Say your estimate of the slope from the Zipf plot is $\hat{\beta}$. Then, one simple estimate of the scaling-law coefficient is
$$
\hat{\alpha} = 1 - \frac{1}{\hat{\beta}} .
$$
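For instance, here is a minimal R sketch of that simple line-fitting approach on simulated data (a quick least-squares fit on the log-log degree distribution, not the Hill estimator):
```
# Simulate a truncated Zipf sample and estimate alpha from the slope of the
# log-log degree distribution (rough illustration only).
alpha <- 2
xmax  <- 500
p <- (1:xmax)^(-alpha); p <- p / sum(p)
y <- sample(xmax, size = 1e5, replace = TRUE, prob = p)
d <- tabulate(y, nbins = xmax) / length(y)   # empirical degree distribution d_i
i <- which(d > 0)                            # drop unobserved values before taking logs
fit <- lm(log(d[i]) ~ log(i))
-coef(fit)[2]                                # slope is approximately -alpha
```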
@csgillespie gave one recent paper co-authored by Mark Newman at Michigan regarding this topic. He seems to publish a lot of similar articles on this. Below is another along with a couple other references that might be of interest. Newman sometimes doesn't do the most sensible thing statistically, so be cautious.
MEJ Newman, [Power laws, Pareto distributions and Zipf's law](http://arxiv.org/abs/cond-mat/0412004), Contemporary Physics 46, 2005, pp. 323-351.
M. Mitzenmacher, [A Brief History of Generative Models for Power Law and Lognormal Distributions](http://projecteuclid.org/euclid.im/1089229510), Internet Math., vol. 1, no. 2, 2003, pp. 226-251.
K. Knight, [A simple modification of the Hill estimator with applications to robustness and bias reduction](http://www.utstat.utoronto.ca/keith/papers/robusthill.pdf), 2010.
---
Addendum:
Here is a simple simulation in $R$ to demonstrate what you might expect if you took a sample of size $10^5$ from your distribution (as described in your comment below your original question).
```
> x <- (1:500)^(-0.9)
> p <- x / sum(x)
> y <- sample(length(p), size=100000, repl=TRUE, prob=p)
> tab <- table(y)
> plot( 1:500, tab/sum(tab), log="xy", pch=20,
main="'Truncated' Zipf simulation (truncated at i=500)",
xlab="Response", ylab="Probability" )
> lines(p, col="red", lwd=2)
```
The resulting plot is
[Figure: simulated degree distribution for the truncated Zipf, with the true probabilities shown as a red line.]
From the plot, we can see that the relative error of the degree distribution for $i \leq 30$ (or so) is very good. You could do a formal chi-square test, but this does not strictly tell you that the data follow the prespecified distribution. It only tells you that you have no evidence to conclude that they don't.
Still, from a practical standpoint, such a plot should be relatively compelling.
---
Addendum 2: Let's consider the example that Maurizio uses in his comments below. We'll assume that $\alpha = 2$ and $n = 300\,000$, with a truncated Zipf distribution having maximum value $x_{\mathrm{max}} = 500$.
We'll calculate Pearson's $\chi^2$ statistic in two ways. The standard way is via the statistic
$$
X^2 = \sum_{i=1}^{500} \frac{(O_i - E_i)^2}{E_i}
$$
where $O_i$ is the observed counts of the value $i$ in the sample and $E_i = n p_i = n i^{-\alpha} / \sum_{j=1}^{500} j^{-\alpha}$.
We'll also calculate a second statistic formed by first binning the counts in bins of size 40, as shown in Maurizio's spreadsheet (the last bin contains only the sum of the final twenty outcome values).
Let's draw 5000 separate samples of size $n$ from this distribution and calculate the $p$-values using these two different statistics.
The histograms of the $p$-values are below and are seen to be quite uniform. The empirical Type I error rates are 0.0716 (standard, unbinned method) and 0.0502 (binned method), respectively, and neither is statistically significantly different from the target 0.05 value for the 5000 replications we've chosen.
[Figure: histograms of the unbinned and binned chi-square $p$-values.]
Here is the $R$ code.
```
# Chi-square testing of the truncated Zipf.
a <- 2
n <- 300000
xmax <- 500
nreps <- 5000

zipf.chisq.test <- function(n, a=0.9, xmax=500, bin.size=40)
{
    # Make the probability vector
    x <- (1:xmax)^(-a)
    p <- x / sum(x)

    # Do the sampling
    y <- sample(length(p), size=n, repl=TRUE, prob=p)

    # Use tabulate, NOT table!
    tab <- tabulate(y, xmax)

    # Unbinned chi-square stat and p-value
    discrepancy <- (tab - n*p)^2/(n*p)
    chi.stat <- sum(discrepancy)
    p.val <- pchisq(chi.stat, df=xmax-1, lower.tail=FALSE)

    # Binned chi-square stat and p-value
    bins <- seq(bin.size, xmax, by=bin.size)
    if( bins[length(bins)] != xmax )
        bins <- c(bins, xmax)
    tab.bin <- cumsum(tab)[bins]
    tab.bin <- c(tab.bin[1], diff(tab.bin))
    prob.bin <- cumsum(p)[bins]
    prob.bin <- c(prob.bin[1], diff(prob.bin))
    disc.bin <- (tab.bin - n*prob.bin)^2/(n*prob.bin)
    chi.stat.bin <- sum(disc.bin)
    p.val.bin <- pchisq(chi.stat.bin, df=length(tab.bin)-1, lower.tail=FALSE)

    # Return the unbinned and binned p-values and statistics
    c(p.val, p.val.bin, chi.stat, chi.stat.bin)
}

set.seed( .Random.seed[2] )
all <- replicate(nreps, zipf.chisq.test(n, a, xmax))

par(mfrow=c(2,1))
hist( all[1,], breaks=20, col="darkgrey", border="white",
      main="Histogram of unbinned chi-square p-values", xlab="p-value")
hist( all[2,], breaks=20, col="darkgrey", border="white",
      main="Histogram of binned chi-square p-values", xlab="p-value" )

type.one.error <- rowMeans( all[1:2,] < 0.05 )
```
| null |
CC BY-SA 3.0
| null |
2011-02-21T17:31:02.337
|
2011-04-08T01:51:15.640
|
2011-04-08T01:51:15.640
|
2970
|
2970
| null |
7460
|
2
| null |
7224
|
5
| null |
If you want to do it yourself, I would recommend using Intel's free and open source OpenCV (CV for computer vision) project.
[http://opencv.willowgarage.com/](http://opencv.willowgarage.com/)
[http://oreilly.com/catalog/9780596516130](http://oreilly.com/catalog/9780596516130)
| null |
CC BY-SA 2.5
| null |
2011-02-21T18:23:19.980
|
2011-02-21T18:23:19.980
| null | null |
74
| null |
7461
|
2
| null |
6870
|
3
| null |
Thanks for the tips David. I posted this question on the JAGS support forum and got a useful answer. The key was to use a two dimensional array for the 'true' values.
```
for (j in 1:n){
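# T(lower, ) left-truncates the normal in JAGS, so x_obs[j] is constrained to be at least xy_true[j,1]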
x_obs[j] ~ dnorm(xy_true[j,1], prec_x)T(xy_true[j,1],)
y_obs[j] ~ dnorm(xy_true[j,2], prec_y)
xy_true[j, ] ~ dmnorm(mu[ z [j],1:2], tau[z[j],1:2,1:2])
z[j]~dcat(prob[ ])
}
#priors for measurement error
e_x~dunif(.1,.9)
prec_x<-1/pow(e_x,2)
e_y~dunif(2,4)
prec_y<-1/pow(e_y,2)
```
| null |
CC BY-SA 2.5
| null |
2011-02-21T18:53:41.467
|
2011-02-21T18:53:41.467
| null | null |
2310
| null |
7462
|
2
| null |
6653
|
1
| null |
Check out the stats nerds at [Football Outsiders](http://www.footballoutsiders.com/info/methods) as well as the book [Mathletics](http://rads.stackoverflow.com/amzn/click/069113913X) for some inspiration.
The Football Outsiders guys make game predictions based on every play in a football game.
Winston in Mathletics uses some techniques such as dynamic programming as well.
You can also consider other algorithms such as SVM.
| null |
CC BY-SA 2.5
| null |
2011-02-21T19:22:09.767
|
2011-02-21T19:22:09.767
| null | null |
74
| null |
7465
|
2
| null |
6538
|
5
| null |
I would go to the curriculum websites of the top stats schools, write down the books they use in their undergrad courses, see which ones are highly rated on Amazon, and order them at your public/university library.
Some schools to consider:
- MIT - technically, cross-taught with Harvard.
- Caltech
- Carnegie Mellon
- Stanford
Supplement the texts with the various lecture video sites such as MIT OCW and videolectures.net.
Caltech doesn't have an undergrad degree in statistics, but you won't go wrong by following the curriculum of their undergrad stats courses.
| null |
CC BY-SA 2.5
| null |
2011-02-21T19:51:03.823
|
2011-02-22T00:26:09.117
|
2011-02-22T00:26:09.117
|
74
|
74
| null |
7466
|
1
|
7468
| null |
6
|
2286
|
I want to cluster elements in array. The crucial difference from a normal clustering algorithm is that the order of elements is significant. For instance if we look at a simple sequence of numbers like this:
```
1.1, 1.2, 1.0, 3.3, 3.3, 2.9, 1.0, 1.1, 3.0, 2.8, 3.2
```
It is obvious that there are two clusters in there (1.1, 1.2, 1.0, 1.0, 1.1) and (3.3, 3.3, 2.9, 3.0, 2.8, 3.2). What I want is to find sequential groups of similar elements
```
(1.1, 1.2, 1.0), (3.3, 3.3, 2.9), (1.0, 1.1), (3.0, 2.8, 3.2)
```
4 in this case. Of course I can run some variant of a normal clustering algorithm and then split the clusters according to the elements' indices, but there's probably a simpler way to do this.
Is there any algorithm that I can use for this?
|
Sequential clustering algorithm
|
CC BY-SA 2.5
| null |
2011-02-21T20:09:05.493
|
2011-02-21T20:28:50.977
| null | null |
255
|
[
"clustering"
] |
7467
|
1
|
7472
| null |
18
|
13329
|
Background
I am overseeing the input of data from primary literature into a [database](http://ebi-forecast.igb.illinois.edu/). The data entry process is error prone, particularly because users must interpret experimental design, extract data from graphics and tables, and transform results to standardized units.
Data are input into a MySQL database through a web interface. Over 10k data points from > 20 variables, > 100 species, and > 500 citations have been included so far. I need to run checks of the quality of not only the variable data, but also the data contained in lookup tables, such as the species associated with each data point, the location of the study, etc.
Data entry is ongoing, so QA/QC will need to be run intermittently. The data have not yet been publicly released, but we are planning to release them in the next few months.
Currently, my QA/QC involves three steps:
- a second user checks each data point.
- visually inspect a histogram of each variable for outliers.
- users report questionable data after spurious results are obtained.
Questions
- Are there guidelines that I can use for developing a robust QA/QC procedure for this database?
- The first step is the most time consuming; is there anything that I can do to make this more efficient?
|
Quality assurance and quality control (QA/QC) guidelines for a database
|
CC BY-SA 3.0
| null |
2011-02-21T20:24:52.310
|
2016-08-18T18:36:49.497
|
2016-08-18T18:36:49.497
|
22468
|
1381
|
[
"dataset",
"meta-analysis",
"quality-control",
"database"
] |
7468
|
2
| null |
7466
|
0
| null |
Constrained clustering maintains data order. There is a package in R called 'rioja' that implements this in the function 'chclust'.
The procedure isn't too complex though:
- Calculate inter-point distance
- Find the smallest distance between adjacent points
- Average the value of the two points to generate a single value
- Write out the shortened list and repeat from step one until you are left with a single point.
You need to maintain some sort of tree structure, but with some elementary programming experience you should be able to do it.
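For reference, a minimal sketch of the rioja route in R (assuming the package is installed; k = 4 matches the grouping in the question):
```
library(rioja)                # chclust(): constrained hierarchical clustering
x  <- c(1.1, 1.2, 1.0, 3.3, 3.3, 2.9, 1.0, 1.1, 3.0, 2.8, 3.2)
cl <- chclust(dist(x))        # only adjacent elements may be merged
cutree(cl, k = 4)             # cut into 4 contiguous groups
```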
| null |
CC BY-SA 2.5
| null |
2011-02-21T20:28:50.977
|
2011-02-21T20:28:50.977
| null | null | null | null |
7469
|
2
| null |
4805
|
4
| null |
The classical transformations include the log, sqrt, and inverse (1/Y) transformations. More sophisticated transformations include the power transformation, from which the Box-Cox optimization chooses a particular transformation which optimized a log-likelihood. Which transformation to use is almost becoming a lost art form, but there is an excellent book by A. C. Atkinson (1985) called Plots, Transformations, and Regression that talks about how to analyze your data and decide how to transform it. For example, the book discusses special transformations for data that are proportions.
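As a small illustration of the Box-Cox idea, a hedged R sketch on simulated data using MASS::boxcox (the simulated response is chosen so that a log transformation, $\lambda \approx 0$, should be selected):
```
library(MASS)                               # boxcox()
set.seed(1)
x <- runif(100, 1, 10)
y <- exp(0.5 * x + rnorm(100, sd = 0.3))    # log(y) is linear in x
bc <- boxcox(lm(y ~ x), plotit = FALSE)     # profile log-likelihood over a grid of lambda
bc$x[which.max(bc$y)]                       # lambda maximizing the likelihood (near 0 here)
```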
| null |
CC BY-SA 2.5
| null |
2011-02-21T20:55:54.027
|
2011-02-21T20:55:54.027
| null | null |
2773
| null |
7470
|
2
| null |
7152
|
11
| null |
A very nice discussion of structural zeros in contingency tables is provided by
West, L. and Hankin, R. (2008), “Exact Tests for Two-Way Contingency Tables with Structural Zeros,” Journal of Statistical Software, 28(11), 1–19.
URL [http://www.jstatsoft.org/v28/i11](http://www.jstatsoft.org/v28/i11)
As the title implies, they implement Fisher’s exact test for two-way contingency tables
in the case where some of the table entries are constrained to be zero.
| null |
CC BY-SA 2.5
| null |
2011-02-21T21:01:10.717
|
2011-02-21T21:01:10.717
| null | null |
2773
| null |
7471
|
1
| null | null |
14
|
9958
|
Can the standard deviation be calculated for the harmonic mean? I understand that the standard deviation can be calculated for the arithmetic mean, but if you have a harmonic mean, how do you calculate the standard deviation or CV?
|
Can the standard deviation be calculated for harmonic mean?
|
CC BY-SA 2.5
| null |
2011-02-21T22:39:49.407
|
2021-09-26T06:03:51.507
|
2017-02-28T13:35:29.330
|
11887
| null |
[
"standard-deviation",
"harmonic-mean"
] |
7472
|
2
| null |
7467
|
25
| null |
This response focuses on the second question, but in the process a partial answer to the first question (guidelines for a QA/QC procedure) will emerge.
By far the best thing you can do is check data quality at the time entry is attempted. The user checks and reports are labor-intensive and so should be reserved for later in the process, as late as is practicable.
Here are some principles, guidelines, and suggestions, derived from extensive experience (with the design and creation of many databases comparable to and much larger than yours). They are not rules; you do not have to follow them to be successful and efficient; but they are all here for excellent reasons and you should think hard about deviating from them.
- Separate data entry from all intellectually demanding activities. Do not ask data entry operators simultaneously to check anything, count anything, etc. Restrict their work to creating a computer-readable facsimile of the data, nothing more. In particular, this principle implies the data-entry forms should reflect the format in which you originally obtain the data, not the format in which you plan to store the data. It is relatively easy to transform one format to another later, but it's an error-prone process to attempt the transformation on the fly while entering data.
- Create a data audit trail: whenever anything is done to the data, starting at the data entry stage, document this and record the procedure in a way that makes it easy to go back and check what went wrong (because things will go wrong). Consider filling out fields for time stamps, identifiers of data entry operators, identifiers of sources for the original data (such as reports and their page numbers), etc. Storage is cheap, but the time to track down an error is expensive.
- Automate everything. Assume any step will have to be redone (at the worst possible time, according to Murphy's Law), and plan accordingly. Don't try to save time now by doing a few "simple steps" by hand.
- In particular, create support for data entry: make a front end for each table (even a spreadsheet can do nicely) that provides a clear, simple, uniform way to get data in. At the same time the front end should enforce your "business rules:" that is, it should perform as many simple validity checks as it can. (E.g., pH must be between 0 and 14; counts must be positive.) Ideally, use a DBMS to enforce relational integrity checks (e.g., every species associated with a measurement really exists in the database).
- Constantly count things and check that counts exactly agree. E.g., if a study is supposed to measure attributes of 10 species, make sure (as soon as data entry is complete) that 10 species really are reported. Although checking counts is simple and uninformative, it's great at detecting duplicated and omitted data.
- If the data are valuable and important, consider independently double-entering the entire dataset. This means that each item will be entered at separate times by two different non-interacting people. This is a great way to catch typos, missing data, and so on. The cross-checking can be completely automated. This is faster, better at catching errors, and more efficient than 100% manual double checking. (The data entry "people" can include devices such as scanners with OCR.)
- Use a DBMS to store and manage the data. Spreadsheets are great for supporting data entry, but get your data out of the spreadsheets or text files and into a real database as soon as possible. This prevents all kinds of insidious errors while adding lots of support for automatic data integrity checks. If you must, use your statistical software for data storage and management, but seriously consider using a dedicated DBMS: it will do a better job.
- After all data are entered and automatically checked, draw pictures: make sorted tables, histograms, scatterplots, etc., and look at them all. These are easily automated with any full-fledged statistical package.
- Do not ask people to do repetitive tasks that the computer can do. The computer is much faster and more reliable at these. Get into the habit of writing (and documenting) little scripts and small programs to do any task that cannot be completed immediately. These will become part of your audit trail and they will enable work to be redone easily. Use whatever platform you're comfortable with and that is suitable to the task. (Over the years, depending on what was available, I have used a wide range of such platforms and all have been effective in their way, ranging from C and Fortran programs through AWK and SED scripts, VBA scripts for Excel and Word, and custom programs written for relational database systems, GIS, and statistical analysis platforms like R and Stata.)
If you follow most of these guidelines, approximately 50%-80% of the work in getting data into the database will be database design and writing the supporting scripts. It is not unusual to get 90% through such a project and be less than 50% complete, yet still finish on time: once everything is set up and has been tested, the data entry and checking can be amazingly efficient.
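As a small illustration of the business-rule and double-entry checks described above, here is a hedged R sketch on toy data (the record layout and column names are made up):
```
# Two independently entered copies of the same records, keyed by record_id.
entry1 <- data.frame(record_id = 1:4, species = c("A", "B", "B", "C"),
                     ph = c(6.5, 7.1, 8.0, 5.9))
entry2 <- data.frame(record_id = 1:4, species = c("A", "B", "D", "C"),
                     ph = c(6.5, 7.1, 8.0, 5.8))
stopifnot(entry1$ph >= 0, entry1$ph <= 14)           # business rule: pH must lie in [0, 14]
m <- merge(entry1, entry2, by = "record_id", all = TRUE,
           suffixes = c(".a", ".b"))                 # all = TRUE also exposes omitted records
m[m$species.a != m$species.b | m$ph.a != m$ph.b, ]   # rows the two operators entered differently
```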
| null |
CC BY-SA 2.5
| null |
2011-02-21T23:27:05.723
|
2011-02-21T23:27:05.723
| null | null |
919
| null |
7473
|
2
| null |
7455
|
15
| null |
In machine learning, a full probability model p(x,y) is called generative because it can be used to generate the data, whereas a conditional model p(y|x) is called discriminative because it does not specify a probability model for p(x) and can only generate y given x. Both can be estimated in a Bayesian fashion.
Bayesian estimation is inherently about specifying a full probability model and performing inference conditional on the model and data. That makes many Bayesian models have a generative feel. However, to a Bayesian the important distinction is not so much about how to generate the data, but more about what is needed to obtain the posterior distribution of the unknown parameters of interest.
The discriminative model p(y|x) is part of a bigger model where p(y, x) = p(y|x)p(x). In many instances, p(x) is irrelevant to the posterior distribution of the parameters in the model p(y|x). Specifically, if the parameters of p(x) are distinct from those of p(y|x) and the priors are independent, then the model p(x) contains no information about the unknown parameters of the conditional model p(y|x), so a Bayesian does not need to model it.
---
At a more intuitive level, there is a clear link between "generating data" and "computing the posterior distribution." Rubin (1984) gives the following excellent description of this link:
[Image of the quoted passage from Rubin (1984) not reproduced here.]
---
Bayesian statistics is useful given missing data primarily because it provides a unified way to eliminate nuisance parameters -- integration. Missing data can be thought of as (many) nuisance parameters. Alternative proposals such as plugging in the expected value typically will perform poorly because we can rarely estimate missing data cells with high levels of accuracy. Here, integration is better than maximization.
Discriminative models like p(y|x) also become problematic if x includes missing data, because we only have data to estimate p(y|x_obs), but most sensible models are written with respect to the complete data p(y|x). If you have a full probability model p(y,x) and are Bayesian, then you're fine, because you can just integrate over the missing data as you would any other unknown quantity.
| null |
CC BY-SA 2.5
| null |
2011-02-21T23:50:01.567
|
2011-02-22T00:06:57.613
|
2011-02-22T00:06:57.613
|
493
|
493
| null |
7474
|
2
| null |
7471
|
2
| null |
Here is an example for Exponential r.v's.
The harmonic mean for $n$ data points is defined as
$$S = \frac{1}{\frac{1}{n} \sum_{i=1}^n X_i}$$
Suppose you have $n$ iid samples of an Exponential random variable, $X_i \sim {\rm Exp}(\lambda)$. The sum of $n$ Exponential variables follows a Gamma distribution
$$\sum_{i=1}^n X_i \sim {\rm Gamma}(n, \theta)$$
where $\theta = \frac{1}{\lambda}$. We also know that
$$\frac{1}{n} {\rm Gamma}(n, \theta) \sim {\rm Gamma}(n, \frac{\theta}{n})$$
The distribution of $S$ is therefore
$$S \sim {\rm InvGamma}(n, \frac{n}{\theta})$$
The variance (and standard deviation) of this r.v. are well known, see, for example [here](http://en.wikipedia.org/wiki/Inverse-gamma_distribution).
| null |
CC BY-SA 3.0
| null |
2011-02-21T23:51:06.993
|
2017-02-28T16:55:14.717
|
2017-02-28T16:55:14.717
|
919
|
530
| null |
7475
|
1
|
7479
| null |
1
|
1044
|
I am trying to do a multiple logistic regression for 2 similar groups. I have a few questions:
- In doing a univariate analysis, do I enter each independent variable, one at a time, first into the binary regression, before going on to do the multivariate analysis? Or are the significance values from a chi-square or t-test enough to go on?
- I have a test group and a control group, and I want to determine the effect of independent variables (e.g., HIV status, maternal weight, etc.) on a particular dependent variable (low birth weight). Should I perform the regression on a dataset with both the test and the control cases, or should I split the file? In this case I want to see the effect of HIV on birth weight, and I am having a hard time knowing how to move on.
|
Entering variables in multivariate logistic regression and running regression across two groups
|
CC BY-SA 2.5
| null |
2011-02-22T01:09:31.170
|
2011-02-22T06:57:06.070
|
2011-02-22T06:57:06.070
|
2116
| null |
[
"logistic"
] |
7476
|
1
|
7493
| null |
11
|
1662
|
Before submission of my meta-analysis I want to make a funnel plot to test for heterogeneity and publication bias. I have the pooled effect size and the effect sizes from each study, that take values from -1 to +1. I have the sample sizes n1, n2 for patients and controls from each study. As I cannot calculate the standard error (SE), I cannot perform Egger's regression. I cannot use SE or precision=1/SE on the vertical axis.
### Questions
- Can I still make a funnel plot with effect size on the horizontal axis and total sample size n (n=n1+n2) on the vertical axis?
- How should such a funnel plot be interpreted?
Some published papers have presented such a funnel plot with total sample size on the vertical axis (PubMed PMIDs: 10990474, 10456970). The Wikipedia article on funnel plots also agrees with this. But, most importantly, Matthias Egger's paper in BMJ 1999 (PubMed PMID: 9451274) shows such a funnel plot, with no SE but only sample size on the vertical axis.
### More Questions
- Is such a plot acceptable when the standard error is not known?
- Is it the same as the classical funnel plot with SE or precision=1/SE on the vertical axis?
- Is its interpretation different?
- How should I set the lines to make the equilateral triangle?
|
Alternative funnel plot, without using standard error (SE)
|
CC BY-SA 2.5
| null |
2011-02-22T01:12:00.607
|
2011-02-22T12:08:20.327
|
2011-02-22T12:02:14.783
|
8
|
3333
|
[
"meta-analysis",
"sample-size",
"standard-error",
"funnel-plot",
"publication-bias"
] |
7477
|
2
| null |
6538
|
86
| null |
(Very) short story
Long story short, in some sense, statistics is like any other technical field: [There is no fast track](http://norvig.com/21-days.html).
Long story
Bachelor's degree programs in statistics are relatively rare in the U.S. One reason I believe this is true is that it is quite hard to pack all that is necessary to learn statistics well into an undergraduate curriculum. This holds particularly true at universities that have significant general-education requirements.
Developing the necessary skills (mathematical, computational, and intuitive) takes a lot of effort and time. Statistics can begin to be understood at a fairly decent "operational" level once the student has mastered calculus and a decent amount of linear and matrix algebra. However, any applied statistician knows that it is quite easy to find oneself in territory that doesn't conform to a cookie-cutter or recipe-based approach to statistics. To really understand what is going on beneath the surface requires as a prerequisite mathematical and, in today's world, computational maturity that are only really attainable in the later years of undergraduate training. This is one reason that true statistical training mostly starts at the M.S. level in the U.S. (India, with its dedicated ISI, is a little different story. A similar argument might be made for some Canadian-based education. I'm not familiar enough with European-based or Russian-based undergraduate statistics education to have an informed opinion.)
Nearly any (interesting) job would require an M.S. level education and the really interesting (in my opinion) jobs essentially require a doctorate-level education.
Seeing as you have a doctorate in mathematics, though we don't know in what area, here are my suggestions for something closer to an M.S.-level education. I include some parenthetical remarks to explain the choices.
- D. Huff, How to Lie with Statistics. (Very quick, easy read. Shows many of the conceptual ideas and pitfalls, in particular, in presenting statistics to the layman.)
- Mood, Graybill, and Boes, Introduction to the Theory of Statistics, 3rd ed., 1974. (M.S.-level intro to theoretical statistics. You'll learn about sampling distributions, point estimation and hypothesis testing in a classical, frequentist framework. My opinion is that this is generally better, and a bit more advanced, than modern counterparts such as Casella & Berger or Rice.)
- Seber & Lee, Linear Regression Analysis, 2nd ed. (Lays out the theory behind point estimation and hypothesis testing for linear models, which is probably the most important topic to understand in applied statistics. Since you probably have a good linear algebra background, you should immediately be able to understand what is going on geometrically, which provides a lot of intuition. Also has good information related to assessment issues in model selection, departures from assumptions, prediction, and robust versions of linear models.)
- Hastie, Tibshirani, and Friedman, Elements of Statistical Learning, 2nd ed., 2009. (This book has a much more applied feeling than the last and broadly covers lots of modern machine-learning topics. The major contribution here is in providing statistical interpretations of many machine-learning ideas, which pays off particularly in quantifying uncertainty in such models. This is something that tends to go un(der)addressed in typical machine-learning books. Legally available for free here.)
- A. Agresti, Categorical Data Analysis, 2nd ed. (Good presentation of how to deal with discrete data in a statistical framework. Good theory and good practical examples. Perhaps on the traditional side in some respects.)
- Boyd & Vandenberghe, Convex Optimization. (Many of the most popular modern statistical estimation and hypothesis-testing problems can be formulated as convex optimization problems. This also goes for numerous machine-learning techniques, e.g., SVMs. Having a broader understanding and the ability to recognize such problems as convex programs is quite valuable, I think. Legally available for free here.)
- Efron & Tibshirani, An Introduction to the Bootstrap. (You ought to at least be familiar with the bootstrap and related techniques. For a textbook, it's a quick and easy read.)
- J. Liu, Monte Carlo Strategies in Scientific Computing or P. Glasserman, Monte Carlo Methods in Financial Engineering. (The latter sounds very directed to a particular application area, but I think it'll give a good overview and practical examples of all the most important techniques. Financial engineering applications have driven a fair amount of Monte Carlo research over the last decade or so.)
- E. Tufte, The Visual Display of Quantitative Information. (Good visualization and presentation of data is [highly] underrated, even by statisticians.)
- J. Tukey, Exploratory Data Analysis. (Standard. Oldie, but goodie. Some might say outdated, but still worth having a look at.)
Complements
Here are some other books, mostly of a little more advanced, theoretical and/or auxiliary nature, that are helpful.
- F. A. Graybill, Theory and Application of the Linear Model. (Old fashioned, terrible typesetting, but covers all the same ground of Seber & Lee, and more. I say old-fashioned because more modern treatments would probably tend to use the SVD to unify and simplify a lot of the techniques and proofs.)
- F. A. Graybill, Matrices with Applications in Statistics. (Companion text to the above. A wealth of good matrix algebra results useful to statistics here. Great desk reference.)
- Devroye, Gyorfi, and Lugosi, A Probabilistic Theory of Pattern Recognition. (Rigorous and theoretical text on quantifying performance in classification problems.)
- Brockwell & Davis, Time Series: Theory and Methods. (Classical time-series analysis. Theoretical treatment. For more applied ones, Box, Jenkins & Reinsel or Ruey Tsay's texts are decent.)
- Motwani and Raghavan, Randomized Algorithms. (Probabilistic methods and analysis for computational algorithms.)
- D. Williams, Probability and Martingales and/or R. Durrett, Probability: Theory and Examples. (In case you've seen measure theory, say, at the level of D. L. Cohn, but maybe not probability theory. Both are good for getting quickly up to speed if you already know measure theory.)
- F. Harrell, Regression Modeling Strategies. (Not as good as Elements of Statistical Learning [ESL], but has a different, and interesting, take on things. Covers more "traditional" applied statistics topics than does ESL and so worth knowing about, for sure.)
More Advanced (Doctorate-Level) Texts
- Lehmann and Casella, Theory of Point Estimation. (PhD-level treatment of point estimation. Part of the challenge of this book is reading it and figuring out what is a typo and what is not. When you see yourself recognizing them quickly, you'll know you understand. There's plenty of practice of this type in there, especially if you dive into the problems.)
- Lehmann and Romano, Testing Statistical Hypotheses. (PhD-level treatment of hypothesis testing. Not as many typos as TPE above.)
- A. van der Vaart, Asymptotic Statistics. (A beautiful book on the asymptotic theory of statistics with good hints on application areas. Not an applied book though. My only quibble is that some rather bizarre notation is used and details are at times brushed under the rug.)
| null |
CC BY-SA 3.0
| null |
2011-02-22T02:08:30.283
|
2016-12-20T19:57:24.177
|
2016-12-20T19:57:24.177
|
22047
|
2970
| null |
7478
|
1
| null | null |
3
|
251
|
I've performed a study which yielded (?) the following results:
```
                      no bike box   bike box   % change
correct procedure             173         55       -27%
incorrect procedure            68         50        69%
```
Since a result could only be one of the two (correct or incorrect procedure), do I need both quantities in my data, or should I just interpret one of them? If so, which one should I use, or does it depend on what I'm trying to demonstrate?
Sorry for being so vague, but I hope this is enough to answer my question.
|
How to compare outcomes from single variable experiment?
|
CC BY-SA 2.5
| null |
2011-02-22T02:32:12.893
|
2015-12-20T00:19:15.963
|
2015-12-20T00:19:15.963
|
28666
|
3357
|
[
"contingency-tables",
"fishers-exact-test",
"relative-risk"
] |
7479
|
2
| null |
7475
|
4
| null |
I would start with estimating a (simple) bivariate correlation matrix which includes your outcome variable as well as all predictors. This will give you a first look at the dependency structure of all your variables. In particular, correlation coefficients of $|r| > 0.4$ between your predictor variables can indicate later multicollinearity problems.
Next, I would continue with a series of bivariate regressions. That is, for each predictor ('independent variable') run one logistic regression. One of these regressions will focus on your control-/treatment-group variable. This will inform you about the 'gross', that is, unadjusted, effects. Do not assume that statistically non-significant effects can simply be excluded from the later analysis.
Do you assume an interaction effect between HIV and your treatment variable (1=treatment, 0=control)? Then you could run two separate models with HIV as a predictor variable, i.e. one model for the control group and one for the treatment group. If you observe different coefficients for HIV, you will need to run another model which includes an interaction effect $HIV \times treatment$ (and the two main effects). This second model will allow you to test for statistically significant differences between the groups. You also might include your other predictor variables.
Please be aware that interaction effects in logistic regression models are more complicated than in the case of a simple linear regression model. Edward Norton has published [several papers](http://www.unc.edu/~enorton/) which discuss that topic.
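As an illustration, a minimal R sketch of such an interaction model on simulated data (the variable names are made up; your own data and software will differ):
```
set.seed(1)
n   <- 200
hiv <- rbinom(n, 1, 0.3)
trt <- rbinom(n, 1, 0.5)                    # 1 = treatment group, 0 = control group
wt  <- rnorm(n, 65, 10)                     # e.g. maternal weight
lp  <- -1 + 0.8 * hiv + 0.3 * trt + 0.6 * hiv * trt - 0.02 * wt
lbw <- rbinom(n, 1, plogis(lp))             # low-birth-weight indicator
fit <- glm(lbw ~ hiv * trt + wt, family = binomial)
summary(fit)                                # the hiv:trt row tests the interaction
```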
You also did not tell us what software you are using to estimate the models. However, most software packages are able to test for multicollinearity (VIF or tolerance).
| null |
CC BY-SA 2.5
| null |
2011-02-22T02:34:30.143
|
2011-02-22T02:34:30.143
| null | null |
307
| null |
7480
|
2
| null |
7478
|
2
| null |
You can report percent correct along with the sample size $n$; since percent incorrect is just its complement, reporting one of the two is sufficient in most cases, even if you focus more on the percent incorrect in your interpretation.
| null |
CC BY-SA 2.5
| null |
2011-02-22T03:03:53.333
|
2011-02-22T03:03:53.333
| null | null |
1381
| null |
7481
|
1
| null | null |
9
|
25781
|
### Context
I have a survey that asks 11 questions about self-efficacy.
Each question has 3 response options (disagree, agree, strongly agree).
Nine questions ask about self-esteem.
I have used a factor analysis of the 11 self-efficacy items and extracted two factors.
$x_1$ to $x_{11}$ denote the 11 self-efficacy questions in the survey, and $f_1$ ($x_1$ to $x_6$) , $f_2$ ($x_7$ to $x_{11}$) denote the two factors I got from the factor analysis.
$y$ is a Dependent variable.
Then I created two new variables:
```
f1=mean(x1 to x6);
f2=mean(x7-x11).
```
So the logistic regression would look like this:
```
y=a+bf1+cf2+....
```
### My question:
- Can I use these two factors as predictor variables in my multivariate logistic regression model?
- Should I calculate the mean of the items in each factor and use this mean as a continuous variable in my logistic regression model?
- Is this an appropriate use of factor analysis?
|
How to use variables derived from factor analysis as predictors in logistic regression?
|
CC BY-SA 2.5
| null |
2011-02-22T03:24:55.477
|
2014-11-07T07:55:53.000
|
2011-02-22T07:46:48.870
|
2116
| null |
[
"logistic",
"factor-analysis"
] |
7482
|
1
| null | null |
4
|
356
|
If kNN doesn't perform well for classification on a dataset, is there any hope that parametric methods will perform better, such as kernel-based methods, SVMs, random forests, or neural networks? Could any of these outperform the kNN method?
|
Accuracy of advanced parametric methods compared to kNN method
|
CC BY-SA 2.5
| null |
2011-02-22T06:56:36.897
|
2011-02-23T21:39:34.423
|
2011-02-23T21:39:34.423
| null | null |
[
"machine-learning",
"k-nearest-neighbour"
] |
7483
|
2
| null |
7482
|
4
| null |
Hastie et al. give a nice overview in [their book](http://www-stat.stanford.edu/~tibs/ElemStatLearn/); look into the 2nd chapter. The short answer is yes. Otherwise why do you think these methods were developed and are still widely used?
| null |
CC BY-SA 2.5
| null |
2011-02-22T07:02:04.770
|
2011-02-22T07:02:04.770
| null | null |
2116
| null |
7484
|
2
| null |
7471
|
15
| null |
The harmonic mean $H$ of random variables $X_1,...,X_n$ is defined as
$$H=\frac{1}{\frac{1}{n}\sum_{i=1}^n\frac{1}{X_i}}$$
Taking moments of fractions is a messy business, so instead I would prefer working with $1/H$. Now
$$\frac{1}{H}=\frac{1}{n}\sum_{i=1}^n\frac{1}{X_i}$$.
Using the central limit theorem we immediately get that
$$\sqrt{n}\left(H^{-1}-EX_1^{-1}\right)\to N(0,VarX_1^{-1})$$
provided of course that $VarX_1^{-1}<\infty$ and the $X_i$ are iid, since we simply work with the arithmetic mean of the variables $Y_i=X_i^{-1}$.
Now using delta method for function $g(x)=x^{-1}$ we get that
$$\sqrt{n}(H-(EX_1^{-1})^{-1})\to N\left(0, \frac{VarX_1^{-1}}{(EX_1^{-1})^4}\right)$$
This result is asymptotic, but for simple applications it might suffice.
Update: As @whuber rightfully points out, 'simple applications' is a misnomer. The central limit theorem holds only if $VarX_1^{-1}$ exists, which is quite a restrictive assumption.
Update 2: If you have a sample, then to calculate the standard deviation, simply plug the sample moments into the formula. So for a sample $X_1,...,X_n$, the estimate of the harmonic mean is
\begin{align}
\hat{H}=\frac{1}{\frac{1}{n}\sum_{i=1}^n\frac{1}{X_i}}
\end{align}
the sample estimates of the moments $EX_1^{-1}$ and $Var(X_1^{-1})$, respectively, are:
\begin{align}
\hat{\mu}_{R}&=\frac{1}{n}\sum_{i=1}^n\frac{1}{X_i}\\\\
\hat{\sigma}_{R}^2&=\frac{1}{n}\sum_{i=1}^n\left(\frac{1}{X_i}-\hat{\mu}_R\right)^2
\end{align}
here $R$ stands for reciprocal.
Finally the approximate formula for standard deviation of $\hat{H}$ is
\begin{align*}
sd(\hat{H})=\sqrt{\frac{\hat{\sigma}_R^2}{n\hat{\mu}_R^4}}
\end{align*}
I ran some Monte-Carlo simulations for random variables uniformly distributed in interval $[2,3]$. Here is the code:
```
hm <- function(x)1/mean(1/x)
sdhm <- function(x)sqrt((mean(1/x))^(-4)*var(1/x)/length(x))
n<-1000
nn <- c(10,30,50,100,500,1000,5000,10000)
N<-1000
library(foreach)   # needed for foreach() and %do% below
mc<-foreach(n=nn,.combine=rbind) %do% {
rr <- matrix(runif(n*N,min=2,max=3),nrow=N)
c(n,mean(apply(rr,1,sdhm)),sd(apply(rr,1,sdhm)),sd(apply(rr,1,hm)))
}
colnames(mc) <- c("n","DeltaSD","sdDeltaSD","trueSD")
> mc
n DeltaSD sdDeltaSD trueSD
result.1 10 0.089879211 1.528423e-02 0.091677622
result.2 30 0.052870477 4.629262e-03 0.051738941
result.3 50 0.040915607 2.705137e-03 0.040257673
result.4 100 0.029017031 1.407511e-03 0.028284458
result.5 500 0.012959582 2.750145e-04 0.013200580
result.6 1000 0.009139193 1.357630e-04 0.009115592
result.7 5000 0.004094048 2.685633e-05 0.004070593
result.8 10000 0.002894254 1.339128e-05 0.002964259
```
I simulated `N` samples of size `n`. For each sample I calculated the estimate of the standard deviation (function `sdhm`). Then I compared the mean and standard deviation of these estimates with the standard deviation of the harmonic mean calculated across the samples, which should be close to the true standard deviation of the harmonic mean.
As you can see, the results are quite good even for moderate sample sizes. Of course the uniform distribution is a very well behaved one, so it is not surprising that the results are good. I'll leave it to someone else to investigate the behaviour for other distributions; the code is very easy to adapt.
Note: In a previous version of this answer there was an error in the result of the delta method (an incorrect variance).
| null |
CC BY-SA 4.0
| null |
2011-02-22T07:43:52.837
|
2019-05-05T21:57:24.093
|
2019-05-05T21:57:24.093
|
22452
|
2116
| null |
7487
|
1
| null | null |
2
|
1550
|
How can I test the statistical significance of regression coefficients in multivariate multiple regression?
|
How to test for statistical significance of regression coefficients in multivariate multiple regression?
|
CC BY-SA 2.5
| null |
2011-02-22T09:12:10.840
|
2011-02-22T14:37:56.237
|
2011-02-22T09:23:48.610
|
2116
| null |
[
"regression"
] |
7488
|
2
| null |
7478
|
2
| null |
If I understand correctly, your IV is "bike box" vs. "no bike box", and your DV is "correct" vs. "incorrect". The resulting $2 \times 2$ classification table can be summarized with the Odds Ratio: "given the bike-box condition, what are the odds of getting a correct response?" compared to "given the no-bike-box condition, what are the odds of getting a correct response?" If the odds are identical, OR is 1. Yule's Q standardizes OR to the $[-1, 1]$ interval. In R:
```
> IV <- factor(rep(c("no bbox", "bbox"), c(241, 105)))
> DVnbb <- rep(c("correct", "incorrect"), c(173, 68))
> DVbb <- rep(c("correct", "incorrect"), c( 55, 50))
> DV <- factor(c(DVnbb, DVbb))
> cTab <- table(IV, DV)
> addmargins(cTab)
DV
IV correct incorrect Sum
bbox 55 50 105
no bbox 173 68 241
Sum 228 118 346
> library(vcd) # for oddsratio()
> (OR <- oddsratio(cTab, log=FALSE))
[1] 0.4323699
> (55/50) / (173/68) # check: ratio of odds
[1] 0.4323699
> (Q <- (OR-1) / (OR+1)) # Yule's Q
[1] -0.3962873
```
A corresponding test for equal distributions of your DV within IV groups is Fisher's test.
```
> fisher.test(cTab)
Fisher's Exact Test for Count Data
data: cTab
p-value = 0.0008111
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.2619504 0.7158848
sample estimates:
odds ratio
0.4334897
```
Note that `fisher.test()` does not report the empirical OR, but a maximum-likelihood estimation.
Edit: Reading your answer, another measure that might capture some relevant information is relative risk: its definition is very similar to OR but calculates the "risk" of getting a correct response given one of the two conditions (and not the odds), i.e., the conditional relative frequency of a correct response.
```
# risk of getting correct (1st column) response in the two conditions
# calculated as conditional frequency: (cell count) / (sum of row counts)
> (risk <- prop.table(cTab, margin=1))
DV
IV correct incorrect
bbox 0.5238095 0.4761905
no bbox 0.7178423 0.2821577
# compare risk in experimental condition to risk in control condition
> (relRisk <- risk[1, 1] / risk[2, 1])
0.7297
```
| null |
CC BY-SA 2.5
| null |
2011-02-22T10:35:29.137
|
2011-02-22T23:27:11.683
|
2011-02-22T23:27:11.683
|
1909
|
1909
| null |
7489
|
2
| null |
7487
|
6
| null |
For a start, have a look at this pdf: [multivariate multiple regression](http://www.psych.yorku.ca/lab/psy6140/lectures/MultivariateRegression2x2.pdf). An example in R:
```
N <- 50 # number of participants
X1 <- rnorm(N, 175, 7) # predictor 1
X2 <- rnorm(N, 30, 8) # predictor 2
X3 <- rnorm(N, 60, 30) # predictor 3
Y1 <- 0.2*X1 - 0.3*X2 - 0.2*X3 + rnorm(N, 0, 50) # predicted variable 1
Y2 <- -0.1*X1 + 0.2*X2 + rnorm(N, 50) # predicted variable 2
Y <- cbind(Y1, Y2) # predicted variables in multivariate form
# fit OLS regression, coefficients are identical to the two separate univariate fits
(lmFit <- lm(Y ~ X1 + X2 + X3))
# fit MANOVA and do several multivariate tests
manFit <- manova(lmFit)
summary(manFit, test="Hotelling-Lawley") # Hotelling-Lawley trace
summary(manFit, test="Pillai") # Pillai-Bartlett trace
summary(manFit, test="Roy") # Roy's largest root
summary(manFit, test="Wilks") # Wilks' lambda
# compare to separate univariate regression analyses: different p-values
summary(lm(Y1 ~ X1 + X2 + X3))
summary(lm(Y2 ~ X1 + X2 + X3))
```
| null |
CC BY-SA 2.5
| null |
2011-02-22T11:15:15.733
|
2011-02-22T11:15:15.733
| null | null |
1909
| null |
7490
|
2
| null |
7481
|
11
| null |
If I understand you correctly, you are using FA to extract two subscales from your 11-item questionnaire. They are supposed to reflect some specific dimensions of self-efficacy (for example, self-regulatory vs. self-assertive efficacy).
Then, you are free to use individual mean (or sum) scores computed on the two subscales as predictors in a regression model. In other words, instead of considering 11 item scores, you are now working with 2 subscores, computed as described above for each individual. The only assumption that is made is that those scores reflect one's location on a "hypothetical construct" or latent variable, defined as a continuous scale.
As @JMS said, there are other issues that you might further clarify, especially which kind of FA was done. A subtle issue is that measurement error will not be accounted for by a standard regression approach. An alternative is to use [Structural Equation Models](http://en.wikipedia.org/wiki/Structural_equation_modeling) or any latent variables model (e.g. those coming from the [IRT](http://en.wikipedia.org/wiki/Item_response_theory) literature), but here the regression approach should provide a good approximation. The analysis of ordinal variables (Likert-type item) has been discussed elsewhere on this site.
However, in current practice, your approach is what is commonly found when validating a questionnaire or constructing scoring rules: we use a weighted or unweighted combination of item scores (hence, they are treated as numeric variables) to report an individual's location on the latent trait(s) under consideration.
| null |
CC BY-SA 2.5
| null |
2011-02-22T11:23:45.177
|
2011-02-22T11:23:45.177
| null | null |
930
| null |
7491
|
2
| null |
7481
|
10
| null |
### Using factor scores as predictors
Yes, you can use variables derived from a factor analysis as predictors in subsequent analyses.
Other options include running some form of structural equation model where you posit a latent variable with the items or bundles of items as observed variables.
### Mean as scale score
Yes, in your case, the mean would be a typical option for computing a scale score.
If you have any reversed items, you have to deal with this.
You could also use factor saved scores instead of taking the mean. Although when all items load reasonably well on each factor and all items are on the same scale and all items are positively worded, there is rarely much difference between the mean and factor saved scores.
You could also look at methods that acknowledge the ordinal nature of the scale and therefore do not treat the scale options as equally distant.
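As a rough sketch of the two options on simulated data (the item structure here is invented purely for illustration):
```
set.seed(1)
n  <- 300
f1 <- rnorm(n); f2 <- rnorm(n)
X  <- cbind(sapply(1:6, function(i) f1 + rnorm(n, sd = 0.8)),
            sapply(1:5, function(i) f2 + rnorm(n, sd = 0.8)))
colnames(X) <- paste0("x", 1:11)
y  <- rbinom(n, 1, plogis(0.7 * f1 - 0.5 * f2))
m1 <- rowMeans(X[, 1:6]); m2 <- rowMeans(X[, 7:11])      # mean scale scores
fa <- factanal(X, factors = 2, scores = "regression")    # factor saved scores
summary(glm(y ~ m1 + m2, family = binomial))
summary(glm(y ~ fa$scores, family = binomial))
```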
| null |
CC BY-SA 2.5
| null |
2011-02-22T11:24:18.743
|
2011-02-22T11:24:18.743
| null | null |
183
| null |
7492
|
2
| null |
7481
|
1
| null |
Everything has been said by chl and Jeromy on the theoretical side. If you don't want to use the sum/mean of the variables you identified with the FA, you can use the factor scores instead.
Judging from the syntax you posted, you are probably using SAS. To make proper use of the factor analysis, you should use the factor scores of the observations rather than the means of the variables.
Below is the code to obtain scores for two factors from an FA. The scores you will use are named Factor1, Factor2, ... by SAS.
This is a two-step process: 1) run the FA, then 2) call PROC SCORE to compute the scores.
```
proc factor
data = Data
method = ml
rotate = promax
outstat = FAstats
n=3
heywood residuals msa score
;
var x:;
run;
proc score data=Data score=FAstats out=MyScores;
var x:;
run;
```
The variables to use are Factor1, Factor2, ... in the MyScores dataset.
| null |
CC BY-SA 2.5
| null |
2011-02-22T11:37:13.330
|
2011-02-22T12:55:24.417
|
2011-02-22T12:55:24.417
|
1154
|
1154
| null |
7493
|
2
| null |
7476
|
13
| null |
Q: Can I still make a funnel plot with effect size on the horizontal axis and total sample size n (n=n1+n2) on the vertical axis?
A: Yes
Q: How should such a funnel plot be interpreted?
A: It is still a funnel plot. However, funnel plots should be interpreted with caution. For example, if you have only 5-10 effect sizes, a funnel plot is useless. Furthermore, although funnel plots are a helpful visualization technique, their interpretation can be misleading. The presence of an asymmetry does not prove the existence of publication bias. Egger et al. (1997: 632f.) mention a number of reasons that can result in funnel plot asymmetries, e.g. true heterogeneity, data irregularities like methodologically poorly designed small studies, or fraud. So, funnel plots can be helpful in identifying possible publication bias; however, they should always be combined with a statistical test.
Q: Is such a plot acceptable when the standard error is not known?
A: Yes
Q: Is it the same as the classical funnel plot with SE or precision=1/SE on the vertical axis?
A: No, the shape of the 'funnel' can be different.
Q: Is its interpretation different?
A: Yes, see above
Q: How should I set the lines to make the equilateral triangle?
A: What do you mean by "lines to make the equilateral triangle"? Do you mean the 95%-CI lines? You will need the standard errors...
You also might be interested in:
[Peters, Jaime L., Alex J. Sutton, David R. Jones, Keith R. Abrams, and Lesly Rushton. 2006. Comparison of two methods to detect publication bias in meta-analysis. Journal of the American Medical Association 295, no. 6: 676--80](http://jama.ama-assn.org/content/295/6/676.abstract) (see "An Alternative to Egger’s Regression Test").
They propose a statistical test which focuses on sample size instead of standard errors.
By the way, do you know the book "[Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments](http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470870141.html)"? It will answer a lot of your questions.
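For the first question above, such a plot is easy to draw; a minimal R sketch with simulated studies (no confidence 'funnel' lines, since those would require the standard errors):
```
set.seed(1)
k  <- 40
n  <- round(runif(k, 20, 400))                 # total sample size n1 + n2 per study
es <- rnorm(k, mean = 0.3, sd = 1 / sqrt(n))   # effect sizes scatter more in small studies
plot(es, n, xlab = "Effect size", ylab = "Total sample size (n1 + n2)")
abline(v = 0.3, lty = 2)                       # vertical reference at the pooled effect
```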
| null |
CC BY-SA 2.5
| null |
2011-02-22T12:08:20.327
|
2011-02-22T12:08:20.327
| null | null |
307
| null |
7494
|
1
|
7507
| null |
4
|
1254
|
There is a catalog of noninformative priors over here:
[http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf](http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf)
On page 11, they give the noninformative Jeffreys prior for the Dirichlet distribution, along with the Fisher information matrix for the Dirichlet. Can someone tell me exactly what cell (i,j) of that matrix is?
Is it all 0s, except for the diagonals and the upper right element and the bottom left element?
Thanks.
|
Fisher information matrix for the Dirichlet distribution
|
CC BY-SA 2.5
| null |
2011-02-22T14:24:42.300
|
2011-02-22T16:26:30.153
|
2011-02-22T14:48:49.360
|
2116
|
3347
|
[
"distributions"
] |
7495
|
2
| null |
7482
|
7
| null |
The "no free lunch" theorems (Wolpert) suggest there are no a-priori distinctions between classifiers; essentially whether one classifier performs better than another depends on the nature of the dataset. Note also for kNN a lot depends on what distance metric you use and how you choose a good value for k. It is not unlikely that a well-tuned kNN classifier will out-perform a badly tuned SVM. At the end of the day, there is only one way to know for sure if an SVM will out-perform a kNN on a particular dataset, which is to try it.
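In that spirit, a hedged R sketch of the 'just try it' comparison with a simple train/test split (assumes the class and e1071 packages are installed; iris is only a stand-in dataset):
```
library(class)    # knn()
library(e1071)    # svm()
set.seed(1)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]; test <- iris[-idx, ]
knn.pred <- knn(train[, 1:4], test[, 1:4], cl = train$Species, k = 5)
svm.pred <- predict(svm(Species ~ ., data = train), newdata = test)
c(knn = mean(knn.pred == test$Species), svm = mean(svm.pred == test$Species))
```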
| null |
CC BY-SA 2.5
| null |
2011-02-22T14:35:49.593
|
2011-02-22T14:35:49.593
| null | null |
887
| null |
7496
|
2
| null |
7487
|
1
| null |
The standard approach is to use the partial F test, explained on fine websites all over town.
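A minimal sketch of a partial F test in R, comparing nested models with anova() on simulated data (for a truly multivariate response, the MANOVA-based tests in the other answer are the more complete route):
```
set.seed(1)
d <- data.frame(x1 = rnorm(50), x2 = rnorm(50), x3 = rnorm(50))
d$y <- 1 + 2 * d$x1 + rnorm(50)
full    <- lm(y ~ x1 + x2 + x3, data = d)
reduced <- lm(y ~ x1, data = d)
anova(reduced, full)   # F test of H0: the coefficients of x2 and x3 are both zero
```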
| null |
CC BY-SA 2.5
| null |
2011-02-22T14:37:56.237
|
2011-02-22T14:37:56.237
| null | null |
5792
| null |
7497
|
1
|
7506
| null |
29
|
11688
|
This is a question about terminology. Is a "vague prior" the same as a non-informative prior, or is there some difference between the two?
My impression is that they are the same (from looking up vague and non-informative together), but I can't be certain.
|
Is a vague prior the same as a non-informative prior?
|
CC BY-SA 2.5
| null |
2011-02-22T14:49:46.453
|
2013-11-01T11:21:02.647
|
2011-02-22T16:13:36.303
|
8
|
3347
|
[
"bayesian",
"prior",
"terminology"
] |
7498
|
2
| null |
7497
|
3
| null |
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be used to mean complete ignorance regarding the value of that parameter. It would perhaps be used to show that the analysis was not completely objective.
For example a very broad Gaussian might be a vague prior for a parameter where a non-informative prior would be uniform. The Gaussian would be very nearly flat on the scale of interest, but would nevertheless favour one particular value a bit more than any other (but it might make the problem more mathematically tractable).
| null |
CC BY-SA 2.5
| null |
2011-02-22T15:01:50.813
|
2011-02-22T15:01:50.813
| null | null |
887
| null |
7499
|
1
|
7550
| null |
4
|
3039
|
When presenting statistical information using bar charts, when will you need to use Error Bar charts?
|
Which type of statistical information will need Error Bar charts for presentation?
|
CC BY-SA 2.5
| null |
2011-02-22T15:14:04.833
|
2011-02-24T14:56:55.430
|
2011-02-24T11:54:32.427
| null |
546
|
[
"data-visualization",
"error"
] |
7500
|
2
| null |
7497
|
10
| null |
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very informative prior on some other transformation $f(\theta)$. This is at least part of the motivation for Jeffreys' prior, which was initially constructed to be as non-informative as possible.
Vague priors can also do some pretty miserable things to your model. The now-classic example is using $\mathrm{InverseGamma}(\epsilon, \epsilon)$ as $\epsilon\rightarrow 0$ priors on variance components in a hierarchical model.
The improper limiting prior gives an improper posterior in this case. A popular alternative was to take $\epsilon$ to be really small, which results in a prior that looks almost uniform on $\mathbb{R}^+$. But it also results in a posterior that is almost improper, and model fitting and inferences suffered. See Gelman's [Prior distributions for variance parameters in hierarchical models](http://www.stat.columbia.edu/~gelman/research/published/taumain.pdf) for a complete exposition.
Edit: @csgillespie (rightly!) points out that I haven't completely answered your question. To my mind a non-informative prior is one that is vague in the sense that it doesn't particularly favor one area of the parameter space over another, but in doing so it shouldn't induce informative priors on other parameters. So a non-informative prior is vague but a vague prior isn't necessarily noninformative. One example where this comes into play is Bayesian variable selection; a "vague" prior on variable inclusion probabilities can actually induce a pretty informative prior on the total number of variables included in the model!
It seems to me that the search for truly noninformative priors is quixotic (though many would disagree); better to use so-called "weakly" informative priors (which, I suppose, are generally vague in some sense). Really, how often do we know nothing about the parameter in question?
| null |
CC BY-SA 3.0
| null |
2011-02-22T15:21:27.733
|
2013-11-01T11:21:02.647
|
2013-11-01T11:21:02.647
|
17230
|
26
| null |
7501
|
2
| null |
7251
|
3
| null |
Short answer.
The problem you mention has been well studied by C. W. J. Granger and co-authors, and is known as the forecast combination (or pooling) problem. The general idea is to choose a loss function criterion and the (possibly time-dependent) parameters that minimize it. Below are some references that may be useful (only publicly available ones; look for the original works in the references they cite).
- K.F.Wallis Combining Density and Interval Forecasts: A Modest Proposal // Oxford bulletin of economics and statistics, 67, supplement (2005) 0305-9049 (provides a general idea of how to combine interval forecasts, though there is no details on how to choose the weights)
- Allan Timmermann Forecast combinations. (a survey on different aspects of the forecast combinations by one of the co-editors of Handbook of economic forecasting that I would like to study myself)
Hoping for the longer answer from the community.
| null |
CC BY-SA 2.5
| null |
2011-02-22T15:26:12.237
|
2011-02-22T15:26:12.237
| null | null |
2645
| null |
7503
|
2
| null |
7497
|
6
| null |
Lambert et al (2005) raise the question ["How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS](http://onlinelibrary.wiley.com/doi/10.1002/sim.2112/abstract)". They write: "We do not advocate the use of the term non-informative prior distribution as we consider all priors to contribute some information". I tend to agree but I am definitely no expert in Bayesian statistics.
| null |
CC BY-SA 2.5
| null |
2011-02-22T15:37:55.493
|
2011-02-22T15:37:55.493
| null | null |
307
| null |
7504
|
2
| null |
7494
|
2
| null |
I think it's meant to have constant off-diagonal entries, i.e. it could also be written
$$I(\alpha_1, \alpha_2, \ldots, \alpha_k) = \operatorname{diag}\left[ PG(1,\alpha_1) , PG(1, \alpha_2), \ldots, PG(1,\alpha_k) \right] - PG(1,\alpha_0) J_k $$
where $J_k$ is a $k \times k$ [matrix of ones](http://en.wikipedia.org/wiki/Matrix_of_ones).
| null |
CC BY-SA 2.5
| null |
2011-02-22T15:45:19.270
|
2011-02-22T15:45:19.270
| null | null |
449
| null |
7505
|
1
| null | null |
3
|
710
|
there is a short introduction to AB Tests in [this question](https://stats.stackexchange.com/questions/4884/aggregation-level-in-ab-tests) or [here at 20bits](http://20bits.com/articles/statistical-analysis-and-ab-testing/).
We are currently testing different versions of landing pages and are using the conversion rate (e.g. 4% vs. 5%) to track which version performs better. This is working fine so far.
What I would like to do is start calculating whether a version performs better using the sales volume. So I could say the control version sold $\$$10.000 with 100 visitors, while the new version sold $\$$12.000 with 110 visitors. Is the difference statistically significant?
I would appreciate any pointers in the right direction. Specifically, I am having problems understanding how I can calculate 95% and 99% percentile intervals for the data above.
I can share my Excel sheet for calculating "basic" A/B tests, if that helps.
Thank you in advance for any help!
|
Moving from conversion rates to sales volume in A/B tests
|
CC BY-SA 2.5
| null |
2011-02-22T15:52:50.807
|
2011-04-21T15:05:07.710
|
2017-04-13T12:44:46.680
|
-1
|
3367
|
[
"confidence-interval",
"hypothesis-testing",
"ab-test"
] |
7506
|
2
| null |
7497
|
18
| null |
Gelman et al. (2003) say:
>
there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'reference prior distributions' and the prior density is described as vague, flat, or noninformative.[emphasis from original text]
Based on my reading of the discussion of Jeffreys' prior in Gelman et al. (2003, p. 62ff), there is no consensus about the existence of a truly non-informative prior, and sufficiently vague/flat/diffuse priors are generally regarded as sufficient.
Some of the points that they make:
- Any prior includes information, including priors that state that no information is known.
For example, if we know that we know nothing about the parameter in question, then we know something about it.
- In most applied contexts, there is no clear advantage to a truly non-informative prior when sufficiently vague priors suffice, and in many cases there are advantages - like finding a proper prior - to using a vague parameterization of a conjugate prior.
- Jeffreys' principle can be useful to construct priors that minimize Fisher's information content in univariate models, but there is no analogue for the multivariate case
- When comparing models, the Jeffreys' prior will vary with the distribution of the likelihood, so priors would also have to change
- there has generally been a lot of debate about whether a non-informative prior even exists (because of 1, but also see discussion and references on p.66 in Gelman et al. for the history of this debate).
note this is community wiki - The underlying theory is at the limits of my understanding, and I would appreciate contributions to this answer.
[Gelman et al. 2003 Bayesian Data Analysis, Chapman and Hall/CRC](http://www.stat.columbia.edu/~gelman/book/)
| null |
CC BY-SA 3.0
| null |
2011-02-22T16:13:39.453
|
2013-06-08T19:37:30.417
|
2013-06-08T19:37:30.417
|
22047
|
1381
| null |
7507
|
2
| null |
7494
|
8
| null |
Let's work it out.
The logarithm of the Dirichlet density function is
$$\lambda(\mathbf{x}|\mathbf{\alpha}) = \log(\Gamma(\alpha_0)) - \sum_{i=1}^{k}{\log(\Gamma(\alpha_i))} + \sum_{i=1}^{k}{(\alpha_i - 1)\log(x_i)},$$
where $\alpha_0 = \alpha_1 + \alpha_2 + \cdots + \alpha_k$.
Taking second partial derivatives with respect to the parameters $\alpha_i$ is particularly simple; all we really need to know (in addition to the most basic properties of derivatives) is that $\partial \alpha_0 / \partial \alpha_i = 1$ and $\partial x_j / \partial \alpha_i = 0$. Thus
$$\frac{\partial \lambda}{\partial \alpha_i} = \psi(\alpha_0) - \psi(\alpha_i) + \log(x_i)$$
and
$$\frac{\partial^2 \lambda}{\partial \alpha_i \partial \alpha_j}
= \psi'(\alpha_0) - \psi'(\alpha_j)\delta_{i j},$$
where $\psi$ (the [digamma function](http://mathworld.wolfram.com/GammaFunction.html)) is the derivative of $\log(\Gamma)$ and $\delta_{i j} = 1$ if and only if $i = j$ and is $0$ otherwise: that is, it's the $k$ by $k$ identity matrix. The [Fisher Information Matrix](http://en.wikipedia.org/wiki/Fisher_information) is, by definition, the negative expectation of the matrix of second partial derivatives. Because its entries are constant with respect to the random variable $\mathbf{x}$, taking expectations is trivial. We obtain a matrix with the values $\psi'(\alpha_i)$ along the diagonal and $\psi'(\alpha_0)$ subtracted everywhere, showing that @onestop's interpretation is correct. ("$PG(1,\alpha)$" is merely an idiosyncratic notation for the polygamma function $\psi'(\alpha)$.)
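For a quick numerical check, here is a small R sketch (with an arbitrary $\alpha$) that builds this matrix using `trigamma`, R's name for $\psi'$:
```
# Fisher information of a Dirichlet(alpha):
# diagonal entries trigamma(alpha_i) - trigamma(alpha_0),
# off-diagonal entries -trigamma(alpha_0)
dirichlet_fisher <- function(alpha) {
  diag(trigamma(alpha)) - trigamma(sum(alpha))
}

alpha <- c(2, 3, 5)              # arbitrary example parameter vector
info  <- dirichlet_fisher(alpha)
info
sqrt(det(info))                  # proportional to the Jeffreys prior density at alpha
```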
| null |
CC BY-SA 2.5
| null |
2011-02-22T16:26:30.153
|
2011-02-22T16:26:30.153
| null | null |
919
| null |
7508
|
1
| null | null |
11
|
2513
|
If I hypothesize that a gene signature will identify subjects at a lower risk of recurrence, that is decrease by 0.5 (hazard ratio of 0.5) the event rate in 20% of the population and I intend to use samples from a retrospective cohort study does the sample size need to be adjusted for unequal numbers in the two hypothesised groups?
For example using Collett, D: Modelling Survival Data in Medical Research, Second Edition - 2nd Edition 2003. The required total number of events, d, can be found using,
\begin{equation}
d = \frac{(Z_{\alpha/2} + Z_{\beta})^2}{p_1 p_2 (\theta R)^2}
\end{equation}
where $Z_{\alpha/2}$ and $Z_{\beta}$ are the upper $\alpha/2$ and upper $\beta$ points, respectively, of the standard normal distribution.
For the particular values,
- $p_1 = 0.20$
- $p_2 = 1 - p_1$
- $\theta R = -0.693$
- $\alpha = 0.05$ and so $Z_{0.025}= 1.96$
- $\beta = 0.10$ and so $Z_{0.10} = 1.28$,
and taking $\theta R = \log \psi R = \log 0.50 = -0.693$, the number of events required (rounded up) to have a 90% chance of detecting a hazard ratio of 0.50 to be significant at the two-sided 5% level is then given by
\begin{equation}
d = \frac{(1.96 + 1.28)^2}{0.20 \times 0.80\times (\log 0.5)^2}= 137
\end{equation}
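For concreteness, a small R sketch that just reproduces this calculation:
```
# required number of events for a log hazard ratio theta_R,
# with proportion p1 in the lower-risk group (formula as used above)
events_required <- function(p1, hr, alpha = 0.05, power = 0.90) {
  z_a <- qnorm(1 - alpha / 2)   # 1.96 for alpha = 0.05
  z_b <- qnorm(power)           # 1.28 for 90% power
  ceiling((z_a + z_b)^2 / (p1 * (1 - p1) * log(hr)^2))
}

events_required(p1 = 0.20, hr = 0.50)   # 137
```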
|
Power analysis for survival analysis
|
CC BY-SA 2.5
| null |
2011-02-22T16:39:14.313
|
2011-11-13T20:52:42.547
|
2011-11-13T20:52:42.547
|
930
| null |
[
"survival",
"statistical-power",
"genetics"
] |
7509
|
2
| null |
2787
|
1
| null |
Jakob Nielsen [recommends testing with five users](http://www.useit.com/alertbox/20000319.html) for optimal results. This assertion has been challenged a few times, both empirically and theoretically, but generally seems to hold quite well.
| null |
CC BY-SA 2.5
| null |
2011-02-22T17:40:39.037
|
2011-02-22T17:40:39.037
| null | null |
3367
| null |
7510
|
2
| null |
7481
|
1
| null |
Continuous latent variables with discrete (polytomous in your case) manifest variables is part of item response analysis. Package 'ltm' in R covers a variety of such models. I refer you to [this](http://goo.gl/lJh5s) paper, which deals with exactly the same problem.
| null |
CC BY-SA 2.5
| null |
2011-02-22T18:06:08.293
|
2011-02-22T18:06:08.293
| null | null |
609
| null |
7511
|
1
| null | null |
5
|
137
|
We are dealing with a measurement apparatus that is cursed by noise and are trying to find out if a measurement was noise or an actual measurement.
Assume we have a beam of light incident on a square array of photo detectors. The "counts" measured in the individual tubes follow Poisson statistics. A typical beam distributes light across 4 photo tubes, most of it in the tube closest to the point of incidence. Noise, by contrast, would be distributed in an uncorrelated way.
We can measure the count distributions for a location where we know we had an actual beam hitting, and for a location where we know no beam was present. From these measurements we can construct empirical count PDFs for measurements in the individual tubes, $p_{i,\,\rm{true}}$ and $p_{i,\,\rm{noise}}$.
Since for a noise measurement the counts in the individual tubes are uncorrelated I would guess that
$$ P_\mathrm{noise} = \prod_i p_{i,\,\mathrm{noise}} $$
would be a good guess for the probability that this measurement was noise. Can I use a similar expression
$$ P_\mathrm{true} = \prod_i p_{i,\,\mathrm{true}} $$
even though counts would be correlated for the "true" case?
|
Construct probability that measurement belongs to one of two sets
|
CC BY-SA 2.5
| null |
2011-02-22T18:27:41.433
|
2011-02-23T16:02:13.747
|
2011-02-22T18:58:19.827
|
56
|
56
|
[
"hypothesis-testing",
"correlation"
] |
7512
|
1
| null | null |
3
|
3312
|
What is the best way to correlate zero-inflated count variables with a small sample size (n=~50 and N=99)?
|
Zero inflated correlation
|
CC BY-SA 2.5
| null |
2011-02-22T18:43:11.643
|
2012-02-13T16:43:21.527
| null | null | null |
[
"correlation"
] |
7513
|
1
|
7530
| null |
21
|
116981
|
Could anyone offer some pointers on how to use the `weights` argument in R's `lm` function? Say, for instance you were trying to fit a model on traffic data, and you had several hundred rows, each of which represented a city (with a different population). If you wanted the model to adjust the relative influence of each observation based on population size, could you simply specify `weights=[the column containing the city's population]`? Is that the sort of vector that can go into `weights`? Or would you need to use a different R function/package/approach entirely?
Curious to hear how people tackle this one - didn't see it covered in any of the linear modeling tutorials I saw out there. Thanks!
|
How to use weights in function lm in R?
|
CC BY-SA 2.5
| null |
2011-02-22T19:38:50.313
|
2016-11-03T15:17:41.363
|
2011-02-23T08:20:59.090
|
2116
|
3320
|
[
"r",
"regression"
] |
7514
|
2
| null |
7478
|
0
| null |
My brother posted the original data.
What is happening here is that I am observing two different conditions. The control is no bike box; this is the baseline, a normal intersection with bike lanes. The experimental condition is an intersection that has bike boxes added. I am attempting to interpret correct/incorrect use as a function of this change in the environment. Basically, I want to know how best to report the data in a simple and understandable way.
The problem I find is that things get messed up because boxes/no boxes is independent of the correct/incorrect behaviour. I think.
If you take this as percentages:
```
                  correct   incorrect
no box (control)    72%        28%
bbox (exp)          52%        48%
```
I basically want a descriptive value so I can explain how the bike boxes affect the outcome of correct or incorrect behaviour at a specific intersection.
The two intersections have different sample sizes, which complicates things beyond my basic psych stats text and knowledge. What I am also thinking is that I need to use the no-box data as my baseline against which to compare the experimental condition? That makes sense, no?
What I was thinking is that I could report a decrease of 20% in correct behaviour and an increase of 20% in incorrect responses. So does that mean there is a total difference of 40%? Or should I just focus on the 20% and report one variable, correct or incorrect, whichever I see as more important for the report? But to me this data seems incomplete; it doesn't really tell the whole story.
I also thought about doing a percent change, i.e. (new value/old value) - 1. Again, using the percentages of the total per outcome (correct, incorrect) as reported above, I got these values:
-27% change for correct behaviour due to the addition of bike boxes
+69% change for incorrect behaviour due to the addition of bike boxes
Does that make any sense for what I am trying to show?
I am basically confused as to what makes the best sense to report in this case, given the comparison I am making, and what most accurately describes these changes.
Thank you all.
| null |
CC BY-SA 2.5
| null |
2011-02-22T19:58:23.480
|
2011-02-22T19:58:23.480
| null | null | null | null |
7515
|
1
|
7516
| null |
22
|
7984
|
What are some techniques for sampling two correlated random variables:
- if their probability
distributions are parameterized
(e.g., log-normal)
- if they have non-parametric
distributions.
The data are two time series for which we can compute non-zero correlation coefficients. We wish to simulate these data in the future, assuming the historical correlation and time series CDF is constant.
For case (2), the 1-D analogue would be to construct the CDF and sample from it. So I guess, I could construct a 2-D CDF and do the same thing. However, I wonder if there is a way to come close by using the individual 1-D CDFs and somehow linking the picks.
Thanks!
|
What are some techniques for sampling two correlated random variables?
|
CC BY-SA 2.5
| null |
2011-02-22T20:43:25.633
|
2018-01-03T16:14:47.500
|
2011-02-22T21:35:51.057
|
919
|
2260
|
[
"correlation",
"sampling",
"monte-carlo",
"stochastic-processes",
"copula"
] |
7516
|
2
| null |
7515
|
26
| null |
I think what you're looking for is a copula. You've got two marginal distributions (specified by either parametric or empirical cdfs) and now you want to specify the dependence between the two. For the bivariate case there are all kinds of choices, but the basic recipe is the same. I'll use a Gaussian copula for ease of interpretation.
To draw from the Gaussian copula with correlation matrix $C$
- Draw $Z=(Z_1, Z_2)\sim N(0, C)$
- Set $U_i = \Phi(Z_i)$ for $i=1, 2$ (with $\Phi$ the standard normal cdf). Now $U_1, U_2\sim U[0,1]$, but they're dependent.
- Set $Y_i = F_i^{-1}(U_i)$ where $F_i^{-1}$ is the (pseudo) inverse of the marginal cdf for variable $i$. This implies that $Y_i$ follow the desired distribution (this step is just inverse transform sampling).
Voila! Try it for some simple cases, and look at marginal histograms and scatterplots; it's fun.
No guarantee that this is appropriate for your particular application though (in particular, you might need to replace the Gaussian copula with a t copula) but this should get you started. A good reference on copula modeling is Nelsen (1999), An Introduction to Copulas, but there are some pretty good introductions online too.
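If it helps, here is a minimal R sketch of that recipe, using (purely as an example) lognormal and exponential margins; substitute your own parametric or empirical quantile functions in the last step:
```
library(MASS)   # for mvrnorm

set.seed(1)
n   <- 1000
rho <- 0.7
C   <- matrix(c(1, rho, rho, 1), 2, 2)

z <- mvrnorm(n, mu = c(0, 0), Sigma = C)   # step 1: correlated standard normals
u <- pnorm(z)                              # step 2: dependent U(0,1) margins
y1 <- qlnorm(u[, 1], meanlog = 0, sdlog = 1)   # step 3: invert the marginal cdfs
y2 <- qexp(u[, 2], rate = 2)                   # (for an empirical cdf you could use
                                               #  quantile(x, u) as the pseudo-inverse)
plot(y1, y2)                                   # dependent draws with the chosen margins
cor(y1, y2, method = "spearman")
```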
| null |
CC BY-SA 3.0
| null |
2011-02-22T21:09:42.563
|
2018-01-03T16:14:47.500
|
2018-01-03T16:14:47.500
|
7290
|
26
| null |
7517
|
2
| null |
7511
|
3
| null |
According to the comments after the question, this is a hypothesis testing situation. You have stipulated that you can accurately assess the null distribution of the individual cell counts. We need a test statistic. The nature of the problem suggests running a small kernel over the array (essentially to deconvolve the signal). A simple choice would be a 2 x 2 mean. We would then look at the maximum of those values. The Neyman-Pearson lemma says that a good test is based on setting a critical value for this statistic. The test size depends on the null distribution of the test statistic while its power depends on the intensity of the signal. You need to choose a critical value to appropriately balance the expected false positive rate with the power for the kinds of signals you are looking for.
All this is routine; the only non-routine aspect is determining the null distribution of the statistic. It's the maximum of a set of fairly highly correlated linear combinations of Poisson variates. Specifically, if we index the rows and columns of the array and let $X_{i,j}$ be the Poisson variate for the cell in row $i$, $1 \le i \le m$, and column $j$, $1 \le j \le n$, then the 2 x 2 mean is the array of values $Y_{i,j} = (X_{i,j} + X_{i+1,j} + X_{i,j+1} + X_{i+1,j+1})/4$, $1 \le i \le m-1$, $1 \le j \le n-1$, and the test statistic is $t = \max\{Y_{i,j}\}$.
If someone could jump in and tell us how to compute the distribution of $t$ that would be nice, but I suspect it's not an easy calculation. Would you consider a small simulation? It's easy to set up in R or Mathematica, for instance.
Alternatively, use an approximation. The results of a few simulations with arrays from 9 to 1600 elements and photon intensities from 1 to 4096 per cell are consistent with two obvious approximations: one can treat all $(m-1)(n-1)$ of the $Y_{i,j}$ as independent or one can take every fourth one and treat them as independent, and then calculate the distribution of their maximum. For small intensities the upper tails of the simulated distributions of $t$ (10,000 iterations per simulation) appear to behave like the latter approximation: that is, $t$ behaves like the largest of $(m-1)(n-1)/4$ independent averages of four Poisson variates. For large intensities the tails are quite close to the former approximation: that is, $t$ behaves like the largest of $(m-1)(n-1)$ independent averages of Poisson variates.

This is a histogram of $t$ for a 30 by 40 grid with cell intensity 100 (10,000 simulations). Because its 99th quantile equals 122.25, you could create a test with at most 1% false positive rate by setting the critical value to 123. An average photon count of around 135 in a single block of four cells would be readily detectable with this method. That would represent a total flux of 4*(135 - 100) = 140 photons above background.
For the record, here is the Mathematica code used to generate this histogram.
```
(* simulate one m x n grid of Poisson(mu) counts and smooth it with the 2 x 2 averaging kernel *)
simulate[m_Integer, n_Integer, \[Mu]_] /;
  m >= 2 && n >= 2 && \[Mu] > 0 := With[
  {f = PoissonDistribution[\[Mu]]},
  y = ListConvolve[{{1, 1}, {1, 1}}/4, RandomInteger[f, {m, n}]]
  ];
(* null distribution of t = max of the smoothed values, over nTrials simulated grids *)
With[{m = 30, n = 40, \[Mu] = 100, nTrials = 10000},
  null = ParallelTable[Max[Flatten[simulate[m, n, \[Mu]]]], {i, 1, nTrials}];
  Histogram[null, {1/4}, AxesLabel -> {"t", "Count"}]
  ]
```
| null |
CC BY-SA 2.5
| null |
2011-02-22T21:18:31.240
|
2011-02-23T16:02:13.747
|
2011-02-23T16:02:13.747
|
919
|
919
| null |
7518
|
1
| null | null |
3
|
139
|
If I define a function in IML:
```
start func(a);
submit a / R;
print(&a);
endsubmit;
finish;
```
and run it:
```
run func("character string");
```
I get the error message: object 'character string' does not exist.
So R or IML is evaluating the character string into an object. I want R to output "character string." How do I do this?
|
How to pass character strings to R from IML Studio
|
CC BY-SA 2.5
| null |
2011-02-22T21:20:34.683
|
2011-02-23T00:49:33.997
|
2011-02-22T21:22:59.367
| null | null |
[
"r",
"sas"
] |
7519
|
1
|
7526
| null |
33
|
6151
|
Consider a Jeffreys prior where $p(\theta) \propto \sqrt{|i(\theta)|}$, where $i$ is the Fisher information.
I keep seeing this prior being mentioned as an uninformative prior, but I have never seen an argument for why it is uninformative. After all, it is not a constant prior, so there has to be some other argument.
I understand that it does not depend on reparametrization, which brings me to the next question. Is it that the determinant of the Fisher information does not depend on reparametrization? Because the Fisher information definitely depends on the parametrization of the problem.
Thanks.
|
Why are Jeffreys priors considered noninformative?
|
CC BY-SA 2.5
| null |
2011-02-22T23:01:36.607
|
2020-10-21T18:31:53.380
|
2011-02-22T23:49:45.727
| null |
3347
|
[
"bayesian",
"prior"
] |
7520
|
2
| null |
7513
|
3
| null |
What you suggest should work. See if this makes sense:
```
lm(c(8000, 50000, 116000) ~ c(6, 7, 8))
lm(c(8000, 50000, 116000) ~ c(6, 7, 8), weights = c(123, 123, 246))
lm(c(8000, 50000, 116000, 116000) ~ c(6, 7, 8, 8))
```
The second line produces the same intercept and slope as the third line (distinct from the first line's result), by giving one observation relatively twice the weight of each of the other two observations, similar to the impact of duplicating the third observation.
| null |
CC BY-SA 2.5
| null |
2011-02-22T23:05:22.627
|
2011-02-22T23:05:22.627
| null | null |
2958
| null |
7521
|
1
|
7566
| null |
3
|
353
|
I have quarterly sales data for a variety of stores and would like to estimate the effect of a regulation on sales. A panel type model would appear to be appropriate in this case, with the regulation as a dummy variable.
However, the quantity of sales varies by two orders of magnitude between stores. How should I account for this?
Apologies for any problems with the question, I am something of a beginner.
|
Subjects with different order of magnitude values in panel data
|
CC BY-SA 2.5
| null |
2011-02-23T00:30:42.133
|
2017-11-12T17:21:31.740
|
2017-11-12T17:21:31.740
|
11887
|
179
|
[
"time-series",
"panel-data"
] |
7522
|
2
| null |
7518
|
2
| null |
I haven't used the new R/IML interface functionality, but from reading the help it looks like `submit foo` does text substitution. That is, if the IML variable `foo` contains the value `x`, then any occurrence of `&foo` in the submit block is replaced with `x`. When R is involved, IML generates R code to treat x as a symbol rather than plain text. With that in mind, see if
```
start func(a);
submit a / R;
print("&a");
endsubmit;
finish;
```
does what you want. Alternatively, you could use the ExportMatrixToR module to send the data to R:
```
start func(a);
run ExportMatrixToR(a, "a");
submit / R;
print(a);
endsubmit;
finish;
```
Caveat: untested code.
| null |
CC BY-SA 2.5
| null |
2011-02-23T00:33:24.057
|
2011-02-23T00:49:33.997
|
2011-02-23T00:49:33.997
|
1569
|
1569
| null |
7523
|
1
| null | null |
3
|
199
|
I would appreciate some advice on how best to weight or give more importance to a percentage with a larger denominator.
Eg
- A. 1 out of 2 = 50%
- B. 5 out of 10 = 50%
- C. 500 out of 1000 = 50%
Some of the data is sparse and continually emerging so
- A) could just be a blip and could change,
- B) is emerging and may vary slightly
- C) is becoming fairly certain
Also, I'm trying to minimize the number of parameters so is there a way to weight the relative 50%'s for each scenario instead of creating a separate parameter?
|
How do I introduce features and their confidence values into classifiers
|
CC BY-SA 2.5
| null |
2011-02-23T01:01:33.197
|
2011-03-01T11:57:42.837
|
2011-03-01T11:57:42.837
| null | null |
[
"proportion",
"weighted-mean"
] |
7525
|
2
| null |
7523
|
4
| null |
It might be a good idea to rename the question to something like "how do I introduce features and their confidence values into classifiers". I can think of two ways to do it, but if you phrase the question differently, more people will look at it and will maybe have additional suggestions.
The first option is to add two features to the classifier, the first feature being the predictor itself (e.g. the success rate), and the second feature being the confidence of this predictor. As mentioned below, this confidence can be just the denominator or width of the confidence interval. This is the more principled way to do it. The problem is that this will not work with linear classifiers. Basically what you want the classifier to learn is that the predictor has high weight if the confidence is high, and low weight if the confidence is low. I.e. you want the weight for one feature to depend on another feature. Linear classifiers cannot do that. So linear NNs (perceptron) or linear SVM will not work. Decision trees are likely to work, and you might have a chance with non-linear NNs and kernel SVM. But these have their own issues.
The second option is to have some kind of "prior" on your estimate of the predictor itself. This can be a rigorous prior in the Bayesian sense, or can be just something simple and hacky (since you feed it to a classifier anyway). One simple way to do it is to add some number (e.g. 10) "fabricated" observations to each trainer's data. E.g. you believe that each trainer's prior probability of winning on a particular course is 50%. So you start with 5 wins and 5 loses for every trainer. To these fabricated observations you add the real statistics. For Catterick this will give you 11 wins out of 58 total, so 19%. The more real data you have, the less significant the effect of this prior will be. On the other hand, for the Exeter data this will give you 6 wins out of 11 total, so 55% (rather than your overly confident estimate of 100%).
So the second option is to compute the percentage this way, and then feed just this single percentage figure to the classifier.
Depending on what you expect the classifier to do, you may vary the prior probability as needed. E.g. if you expect the classifier to heed data close to 0% or to 100% and ignore the 50% region, then starting with the 50/50 prior may be OK. If you want to get more accurate estimates of these percentages, you can change the prior according to the trainer. E.g. you have only 1/1 real statistics for the trainer at Exeter, but you know this particular trainer wins 80% of races at other courses. You can account for this by introducing 8 fabricated wins and 2 losses (rather than 5 and 5).
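A small R sketch of that pseudo-count idea, using the figures above:
```
# smoothed win rate with k fabricated observations at prior rate p0
smoothed_rate <- function(wins, n, p0 = 0.5, k = 10) (wins + k * p0) / (n + k)

smoothed_rate(6, 48)   # Catterick-style figures: 11/58, about 19%
smoothed_rate(1, 1)    # Exeter: 6/11, about 55% instead of an overconfident 100%
```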
---
Weighing by the denominator itself may be reasonable, especially if it represents the number of samples. This way you can easily combine percentages. E.g. in one (small) experiment you get 3 out of 10 = 30%; in another (larger) experiment you get 50 out of 100 = 50%, and the average is (30%*10 + 50%*100) / (10 + 100) = 48%.
If you just want to know how good your estimate is, then try computing "confidence intervals" -- this gives you an idea of how certain your estimates are. E.g. for a small experiment you might get an interval of [25%, 75%], for a larger one you may get [49%. 51%] interval. How to compute these confidence intervals will depend on the details of your data and the model.
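And a small R sketch of both of these last ideas (combining estimates by their denominators, and getting a rough interval for each one):
```
wins   <- c(3, 50)
trials <- c(10, 100)

# pooled estimate, weighting each percentage by its denominator
weighted.mean(wins / trials, w = trials)   # (3 + 50) / 110, about 48%

# rough 95% intervals for each individual estimate
binom.test(3, 10)$conf.int     # wide: roughly 0.07 to 0.65
binom.test(50, 100)$conf.int   # much narrower: roughly 0.40 to 0.60
```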
| null |
CC BY-SA 2.5
| null |
2011-02-23T02:02:42.370
|
2011-02-24T20:36:45.130
|
2011-02-24T20:36:45.130
|
3369
|
3369
| null |
7526
|
2
| null |
7519
|
15
| null |
It's considered noninformative because of the parameterization invariance. You seem to have the impression that a uniform (constant) prior is noninformative. Sometimes it is, sometimes it isn't.
What happens with Jeffreys' prior under a transformation is that the Jacobian from the transformation gets sucked into the original Fisher information, which ends up giving you the Fisher information under the new parameterization. No magic (in the mechanics at least), just a little calculus and linear algebra.
| null |
CC BY-SA 3.0
| null |
2011-02-23T02:27:01.233
|
2013-10-23T15:43:12.733
|
2013-10-23T15:43:12.733
|
17230
|
26
| null |
7527
|
1
|
7529
| null |
17
|
3147
|
I'm trying to implement a "change point" analysis, or a multiphase regression using `nls()` in R.
[Here's some fake data I've made](https://i.stack.imgur.com/27f1S.png). The formula I want to use to fit the data is:
$y = \beta_0 + \beta_1x + \beta_2\max(0,x-\delta)$
What this is supposed to do is fit the data up to a certain point with a certain intercept and slope ($\beta_0$ and $\beta_1$), then, after a certain x value ($\delta$), augment the slope by $\beta_2$. That's what the whole max thing is about. Before the $\delta$ point, it'll equal 0, and $\beta_2$ will be zeroed out.
So, here's my function to do this:
```
changePoint <- function(x, b0, slope1, slope2, delta){
b0 + (x*slope1) + (max(0, x-delta) * slope2)
}
```
And I try to fit the model this way
```
nls(y ~ changePoint(x, b0, slope1, slope2, delta),
data = data,
start = c(b0 = 50, slope1 = 0, slope2 = 2, delta = 48))
```
I chose those starting parameters because I know they are the true parameter values, since I made the data up.
However, I get this error:
```
Error in nlsModel(formula, mf, start, wts) :
singular gradient matrix at initial parameter estimates
```
Have I just made unfortunate data? I tried fitting this on real data first, and was getting the same error, and I just figured that my initial starting parameters weren't good enough.
|
Change point analysis using R's nls()
|
CC BY-SA 3.0
| null |
2011-02-23T03:27:05.930
|
2020-01-10T11:13:01.667
|
2015-05-12T08:04:38.237
|
35989
|
287
|
[
"r",
"regression",
"change-point",
"nls"
] |
7528
|
1
|
7557
| null |
3
|
644
|
A simplified regression equation $ES = a + \frac{b}{n_1+n_2}$ has been suggested as an alternative to Egger's regression equation $\frac{ES}{SE} = a + \frac{b}{SE}$, where ES = effect size, $n_1$ = sample size of the patients, $n_2$ = sample size of the controls, and SE = standard error.
This alternative test, that was presented by Peters et al. in their [2006 paper in JAMA](http://www.ncbi.nlm.nih.gov/pubmed/16467236), is supposed to be better than Egger's test when the ES is the lnOR.
This alternative test could also be valuable in cases where the standard error (SE) cannot be calculated, as SE does not appear in the equation.
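For reference, here is how I understand the two regressions would be set up in R; this is only a sketch with made-up column names (`es`, `se`, `n1`, `n2`), and I have omitted the weighting that Peters et al. apply:
```
# dat: one row per study, with columns es, se, n1, n2 (hypothetical names)

# Egger's regression: ES/SE = a + b/SE; the asymmetry test is on the intercept a
egger <- lm(I(es / se) ~ I(1 / se), data = dat)
summary(egger)$coefficients

# Peters' alternative: ES = a + b/(n1 + n2); the coefficient on 1/(n1 + n2)
# is the one examined (the weights used by Peters et al. are omitted here)
peters <- lm(es ~ I(1 / (n1 + n2)), data = dat)
summary(peters)$coefficients
```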
Could this alternative Egger's test be used with the other types of Effect Size? When the ES is the SMD? When the ES is the RR? When the ES is the Pearson's correlation coefficient?
|
Alternative Egger's test, without using standard error
|
CC BY-SA 2.5
| null |
2011-02-23T03:42:14.227
|
2011-02-24T00:34:36.170
|
2011-02-23T22:19:05.607
| null |
3333
|
[
"meta-analysis",
"standard-error",
"effect-size",
"funnel-plot",
"publication-bias"
] |
7529
|
2
| null |
7527
|
13
| null |
At first I thought it could be a problem resulting from the fact that `max` is not vectorized, but that's not true. It does, however, make `changePoint` a pain to work with, hence the following modification:
```
changePoint <- function(x, b0, slope1, slope2, delta) {
b0 + (x*slope1) + (sapply(x-delta, function (t) max(0, t)) * slope2)
}
```
[This R-help mailing list post](http://tolstoy.newcastle.edu.au/R/help/06/06/28419.html) describes one way in which this error may result: the rhs of the formula is overparameterized, such that changing two parameters in tandem gives the same fit to the data. I can't see how that is true of your model, but maybe it is.
In any case, you can write your own objective function and minimize it. The following function gives the squared error for data points (x,y) and a certain value of the parameters (the weird argument structure of the function is to account for how `optim` works):
```
sqerror <- function (par, x, y) {
sum((y - changePoint(x, par[1], par[2], par[3], par[4]))^2)
}
```
Then we say:
```
optim(par = c(50, 0, 2, 48), fn = sqerror, x = x, y = data)
```
And see:
```
$par
[1] 54.53436800 -0.09283594 2.07356459 48.00000006
```
Note that for my fake data (`x <- 40:60; data <- changePoint(x, 50, 0, 2, 48) + rnorm(21, 0, 0.5)`) there are lots of local minima depending on the initial parameter values you give. I suppose if you wanted to take this seriously you'd call the optimizer many times with random initial parameters and examine the distribution of results.
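For example, a quick sketch of that multi-start idea; it assumes the `changePoint` and `sqerror` functions defined above, re-creates the fake data, and uses arbitrary jitter scales for the starting values:
```
set.seed(1)
x    <- 40:60
data <- changePoint(x, 50, 0, 2, 48) + rnorm(21, 0, 0.5)   # the fake data from above

# restart optim from many randomly jittered starting values
fits <- replicate(200, {
  start <- c(50, 0, 2, 48) + rnorm(4, 0, c(10, 1, 1, 5))
  optim(par = start, fn = sqerror, x = x, y = data)
}, simplify = FALSE)

# keep the best fit and look at the spread of the estimated change points
best <- fits[[which.min(sapply(fits, `[[`, "value"))]]
best$par
hist(sapply(fits, function(f) f$par[4]), main = "estimated delta across restarts")
```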
| null |
CC BY-SA 3.0
| null |
2011-02-23T06:27:52.350
|
2015-05-12T07:52:01.270
|
2015-05-12T07:52:01.270
|
35989
|
2975
| null |
7530
|
2
| null |
7513
|
17
| null |
I think the R help page of `lm` answers your question pretty well. The only requirement for the weights is that the vector supplied must be the same length as the data. You can even supply only the name of the variable in the data set; R will take care of the rest, NA management, etc. You can also use formulas in the `weights` argument. Here is an example:
```
x <-c(rnorm(10),NA)
df <- data.frame(y=1+2*x+rnorm(11)/2, x=x, wght1=1:11)
## Fancy weights as numeric vector
summary(lm(y~x,data=df,weights=(df$wght1)^(3/4)))
# Fancy weights as formula on column of the data set
summary(lm(y~x,data=df,weights=I(wght1^(3/4))))
# Mundane weights as the column of the data set
summary(lm(y~x,data=df,weights=wght1))
```
Note that weights must be positive, otherwise R will produce an error.
| null |
CC BY-SA 3.0
| null |
2011-02-23T08:15:22.403
|
2016-11-03T15:17:41.363
|
2016-11-03T15:17:41.363
|
25138
|
2116
| null |
7531
|
1
|
7545
| null |
4
|
936
|
I have data about process durations (in minutes) and the components (procedures) done during each process, like this (CSV):
---
```
id,time,p1,p2,p3,p4
1,30,1,0,0,0
2,32,1,0,0,0
3,56,1,1,0,0
4,78,1,1,0,1
5,78,1,1,0,1
6,100,1,1,1,1
7,98,0,1,1,1
```
I need to estimate the duration of each component (procedure).
I want to get something like this:
```
component,timeMax,timeMin,timeAverage,timeSD,samples
p1,...
p2,....
p3,...
p4,....
```
Note: I need the estimated time of the procedures, not the time of the processes in which a procedure was used.
I think the solution should initially group all combinations, and then
the time of simple procedures (1 process = 1 procedure) should be evaluated:
$$t_1 = 30$$ #id=1
$$t_1 = 32$$ #id=2
then more complex cases should be handled; for example, the time of a procedure (from the sample) could be calculated by subtraction:
$$t_1 = \sum{(t_1+t_2+t_3+t_4)} - \sum{(t_2+t_3+t_4)} = 100 - 98 = 2$$ # id 6 - 7
$$t_2 = \sum{(t_1+t_2)} - t_1 = 56 - 30\pm1 = 26\pm1$$ #id 3 - (1,2)
$$t_3 = \sum{(t_1+t_2+t_3+t_4)} - \sum{(t_1+t_2+t_4)} = 100 - 78 = 22$$ # id 6 - 5
$$t_4 = \sum{(t_1+t_2+t_4)} - \sum{(t_1+t_2)} = 78 - 56 = 22$$ #id 4 - 3
then average, SD, Min,Max for all $t_i$ is calculated.
If several procedures are always combined, the time for each is calculated by dividing the combination time by the combination size. I think this step should be performed only before the results are output.
Maybe some kind of correction is also needed for procedures that are performed during this sequence.
Maybe there should be an iteration limit, or a stopping condition when the last iteration brings no change (or less than 1%) compared with the previous one.
The second part is to compare procedure times when a procedure is done separately and in combination with others, and to estimate the most effective (reducing total time) and ineffective (increasing total time) procedure combinations.
The question is:
- How to achieve this?
- What methods should/could be used?
- What statistical software could be used for this task?
|
Multiple regression with binary predictors. Component value analysis
|
CC BY-SA 2.5
| null |
2011-02-23T08:23:08.650
|
2011-02-25T13:23:21.070
|
2011-02-24T10:48:25.453
|
3376
|
3376
|
[
"methodology",
"multiple-regression"
] |
7532
|
1
|
7540
| null |
9
|
24799
|
I would like to ask you: what is the correct number of lags in the ARCH LM test? I am referring to ArchTest in the FinTS package, but other ARCH tests (such as the one in EViews) provide the same results. In many time series, when I choose lags between 1 and 5 the p-value is usually higher than 0.05, but as the number of lags increases, the p-value becomes smaller. So how do I make the correct decision if, for lags=1, the time series looks homoscedastic (no ARCH effects), but for lags=5 and lags=12 the result is heteroscedastic (presence of ARCH), or the reverse?
Thank you
Sincerely Jan
```
#Example code in R
library(quantmod)
library(FinTS)
getSymbols("XPT/USD",src="oanda")
ret_xptusd<-as.numeric(diff(log(XPTUSD)))
ones<-rep(1,500)
ols<-lm(ts(ret_xptusd)~ones);ols
residuals<-ols$residuals
ArchTest(residuals,lags=1) # p-value = 0.008499
ArchTest(residuals,lags=5) # p-value = 0.08166
ArchTest(residuals,lags=12) #p-value = 0.2317
```
|
How to choose number of lags in ARCH models using ARCH LM test?
|
CC BY-SA 2.5
| null |
2011-02-23T09:50:38.280
|
2017-10-25T09:51:40.533
|
2011-02-23T11:39:21.630
|
2116
|
3378
|
[
"time-series",
"garch"
] |
7533
|
2
| null |
7531
|
1
| null |
Software:
I advise using R because it is free and designed for data analysis.
[http://cran.r-project.org/](http://cran.r-project.org/)
About your first question, the R code to answer it is the following (I assume that your data are in the file "data.csv"):
```
# load the file "data.csv"
d <- read.table("data.csv",header=T,sep=",")
# create a data frame for the results
res <- data.frame(component=paste("op",1:(ncol(d)-2),sep=""),timeMax=rep(0,ncol(d)-2),timeMin=rep(0,ncol(d)-2),timeAverage=rep(0,ncol(d)-2),timeSD=rep(0,ncol(d)-2),samples=rep(0,ncol(d)-2))
# loop over the operations
for (ind in 3:ncol(d))
{
res[ind-2,2:6]<-c(max(d$time*d[,ind]),min(d$time*d[,ind]),mean(d$time*d[,ind]),sd(d$time*d[,ind]),sum(d[,ind]))
}
```
For the question about the combinations, you could use the same trick by replacing
`d$time*d[,ind]`
by
`d$time*d[,ind1]*d[,ind2]`
if you want to obtain statistics about the combination of op1 and op2. But if you have many operations, this may not be suited to your case, as the number of combinations is equal to 2^N... Is N large? Do you want statistics about all the possible combinations or only about some of them?
| null |
CC BY-SA 2.5
| null |
2011-02-23T09:52:39.107
|
2011-02-23T13:46:38.320
|
2011-02-23T13:46:38.320
|
2116
|
3377
| null |
7534
|
1
|
7646
| null |
3
|
410
|
I have a contingency table which is not fixed; it varies depending on the data.
I want to know how significant a particular cell is compared with the other cells. Is there a method of doing this?
|
How do you determine if a particular cell of an i x j contingency table is statistically significant among all other cells?
|
CC BY-SA 3.0
| null |
2011-02-23T10:12:20.310
|
2012-09-02T01:46:33.087
|
2012-09-02T01:46:33.087
|
3826
|
3379
|
[
"contingency-tables"
] |
7535
|
1
|
7537
| null |
17
|
2076
|
What is the appropriate strategy for deciding which model to use with count data?
I have count data that I need to model as a multilevel model, and it was recommended to me (on this site) that the best way to do this is through BUGS or MCMCglmm. However, I am still trying to learn about Bayesian statistics, and I thought I should first try to fit my data as generalized linear models and ignore the nested structure of the data (just so I can get a vague idea of what to expect).
About 70% of the data are 0 and the ratio of variance to the mean is 33, so the data are quite over-dispersed.
After trying a number of different options (including Poisson, negative binomial, quasi- and zero-inflated models), I see very little consistency in the results (varying from everything is significant to nothing is significant).
How can I go about making an informed decision about which type of model to choose based on the 0 inflation and over-dispersion?
For instance, how can I infer that quasi-Poisson is more appropriate than negative binomial (or vice versa), and how can I know that using either has dealt adequately (or not) with the excess zeros?
Similarly, how do I evaluate whether there is any remaining over-dispersion if a zero-inflated model is used? And how should I decide between a zero-inflated Poisson and a zero-inflated negative binomial?
|
Strategy for deciding appropriate model for count data
|
CC BY-SA 4.0
| null |
2011-02-23T11:05:14.260
|
2020-01-05T13:43:33.923
|
2020-01-05T13:43:33.923
|
11887
|
1871
|
[
"generalized-linear-model",
"poisson-distribution",
"count-data",
"negative-binomial-distribution",
"overdispersion"
] |
7536
|
2
| null |
6330
|
5
| null |
By Their Fruits Ye Shall Know Them
The assumption (to be tested) is that the errors from the model have constant variance. Note this does not mean the errors from an assumed model. When you use a simple graphical analysis you are essentially assuming a linear model in time.
Thus if you have an inadequate model, such as might be suggested by a casual plot of the data against time, you may incorrectly conclude that a power transform is needed. Box and Jenkins did so with their Airline Data example. They did not account for 3 unusual values in the most recent data, and thus they incorrectly concluded that there was higher variation in the residuals at the highest level of the series.
For more on this subject please see [http://www.autobox.com/pdfs/vegas_ibf_09a.pdf](http://www.autobox.com/pdfs/vegas_ibf_09a.pdf)
| null |
CC BY-SA 4.0
| null |
2011-02-23T13:29:56.240
|
2018-06-19T11:57:38.263
|
2018-06-19T11:57:38.263
|
3382
|
3382
| null |
7537
|
2
| null |
7535
|
10
| null |
You can always compare count models by looking at their predictions (preferably on a hold-out set). J. Scott Long discusses this graphically (plotting the predicted values against actuals). His textbook [here](http://rads.stackoverflow.com/amzn/click/0803973748) describes this in detail, but you can also look at [section 6.4 of this document](http://www.stata-journal.com/sjpdf.html?articlenum=st0002). You can compare models using AIC or BIC, and there is also a test called the Vuong test, which I am not terribly familiar with, that can compare non-nested models such as a zero-inflated model versus its standard counterpart. Here is a SAS paper describing it briefly on page [10](http://www2.sas.com/proceedings/forum2008/371-2008.pdf) to get you started. It is also implemented in R: [posting](http://tolstoy.newcastle.edu.au/R/e2/help/06/09/1538.html)
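For instance, a rough R sketch of that comparison (the packages are real; the data frame `dat` and its columns are hypothetical):
```
library(MASS)   # glm.nb
library(pscl)   # zeroinfl, vuong

pois <- glm(y ~ x1 + x2, family = poisson, data = dat)
nb   <- glm.nb(y ~ x1 + x2, data = dat)
zip  <- zeroinfl(y ~ x1 + x2, dist = "poisson", data = dat)
zinb <- zeroinfl(y ~ x1 + x2, dist = "negbin",  data = dat)

AIC(pois, nb, zip, zinb)   # information criteria side by side
vuong(pois, zip)           # Vuong test: Poisson vs zero-inflated Poisson
```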
| null |
CC BY-SA 2.5
| null |
2011-02-23T14:24:38.070
|
2011-02-24T07:37:31.597
|
2011-02-24T07:37:31.597
|
2116
|
2040
| null |
7539
|
2
| null |
7535
|
5
| null |
A couple things to add to what B_Miner said:
1) You wrote that the models varied from "everything significant" to "nothing significant" but this is not a good way to compare models. Look, instead, at predicted values (as B_miner suggested) and effect sizes.
2) If 70% of the data are 0, I can't imagine that a model without 0 inflation is appropriate.
3) Even if you don't want to go Bayesian, you can use GLMMs in SAS (PROC GLIMMIX or NLMIXED) and in R (various packages). Ignoring the nested nature may mess everything up.
4) In general, deciding on which model is best is an art, not a science. There are statistics to use, but they are a guide to judgment. Just looking at what you wrote, I would say a ZINB model looks right
| null |
CC BY-SA 2.5
| null |
2011-02-23T15:20:31.053
|
2011-02-23T15:20:31.053
| null | null |
686
| null |
7540
|
2
| null |
7532
|
9
| null |
Arch LM tests whether coefficients in the regression:
$$a_t^2=\alpha_0+\alpha_1 a_{t-1}^2+...+\alpha_p a_{t-p}^2+e_t$$
are zero, where $a_t$ is the observed series (or the residuals from a fitted mean model) that we want to test for ARCH effects. So the null hypothesis is
$$\alpha_1=...=\alpha_p=0$$
If the hypothesis is not rejected, we can say that the series has no ARCH effects. If it is rejected, then one or more coefficients are non-zero and we say that there are ARCH effects.
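To make this concrete, the LM statistic can be computed by hand as $nR^2$ from that auxiliary regression; a small R sketch for a generic residual series `a` and lag order `p`:
```
# LM test for ARCH effects of order p, done "by hand":
# regress a_t^2 on its own p lags and use n * R^2 ~ chi-squared(p)
arch_lm <- function(a, p) {
  a2  <- a^2
  X   <- embed(a2, p + 1)          # column 1 = a_t^2, columns 2..p+1 = its lags
  n   <- nrow(X)
  fit <- lm(X[, 1] ~ X[, -1])
  stat <- n * summary(fit)$r.squared
  c(statistic = stat, p.value = pchisq(stat, df = p, lower.tail = FALSE))
}
# e.g. arch_lm(residuals, 1) vs arch_lm(residuals, 12) should roughly
# reproduce the ArchTest() results in the question
```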
Here we have the classical regression problem of joint hypotheses versus individual hypotheses. When more regressors are included, the regression may be jointly insignificant even though a few regressors seem significant on their own. Introductory regression books usually have a chapter dedicated to this. The key point is that joint hypotheses take all the interactions into account, while individual hypotheses do not. So in this case the statistic based on a few lags does not take into account the effects of further lags.
When statistical tests give conflicting results, for me it is an indication that data should be reexamined. Statistical tests usually have certain assumptions, which data may violate. In your case if we look at the graph of the series, we see a lot of zeroes.

So this is not an ordinary time series and I would hesitate to use plain ARCH model.
| null |
CC BY-SA 3.0
| null |
2011-02-23T15:21:49.200
|
2017-10-25T09:51:40.533
|
2017-10-25T09:51:40.533
|
2116
|
2116
| null |
7541
|
2
| null |
7534
|
0
| null |
I am not sure if I understand the first part of your question - everything varies depending on the data, does it not?
There are ways to partition chi-square; I am not near my references, but this article seems apropos [http://www.jstor.org/pss/2283933](http://www.jstor.org/pss/2283933)
If you know in advance which cell you will want to test, you can simply combine the other rows or columns.
| null |
CC BY-SA 2.5
| null |
2011-02-23T15:24:51.763
|
2011-02-23T15:24:51.763
| null | null |
686
| null |
7542
|
1
| null | null |
5
|
2482
|
I have to demonstrate that a generator of VoIP calls distributes calls uniformly between callers.
In particular, the target distribution is uniform(min, max): the call volume per caller should be uniformly distributed between a minimum and a maximum. By running a test with 10000 users, a minimum of 30 calls per week and a maximum of 90 calls per week, I find that not all users respect these limits.
The situation is depicted in the figure below.
The few users that generate <30 or >90 calls spoil the chi-square goodness-of-fit test, and I don't know how I can proceed with the test. Most of the values are within the interval (and from these we obtain a low chi-square value), but the few out-of-range values spoil the final chi-square calculation.
In your opinion, what is the best way to proceed? What should I do with the out-of-range values? Thank you.
PS: the chi-square goodness-of-fit test performed is reported in the following figure:

where still I don't know what to do with the out-of-range values.
UPDATE
After talking with people involved in the project, we concluded that the generator does not satisfy the uniform distribution. We have to do a theoretical analysis of what distribution we really expect at the end of the generation, based on the inputs.
This means that I have to do it!
More details:
The generator assigns a "probability" between 0 and 1 to the callers (with a particular method, that probably is the problem). Then it generates a random value from 0 to 1 and it finds the associated user and assigns the call to him.
The generator generates calls for a week with the constant rate equal to 1 call per second, this means that it generates ca. 604800 total calls.
My goal is to distribute the callers between the min and max number of calls in a week. For example if I have 10000 users and the min limit is equal to 30 calls per week and max = 90 calls per week I should obtain something about:
30 calls : 163 users.
31 calls : 163 users.
....
90 calls : 163 users.
So 163 users generate 30 calls in a week, etc., and finally 163 users generate 90 calls in a week. How should I assign the probabilities to the callers so that the generator distributes them uniformly over the range 30-90?
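To make the mechanism concrete, here is a small R sketch of how I think the weights could be assigned: give each caller a target rate spread evenly between 30 and 90, normalize these into the per-call "probabilities", and then draw the ~604800 calls:
```
set.seed(42)
n_users <- 10000
n_calls <- 7 * 24 * 3600             # one call per second for a week = 604800

rate <- seq(30, 90, length.out = n_users)   # target calls/week per user
prob <- rate / sum(rate)                    # per-call probability of each user

calls    <- sample.int(n_users, n_calls, replace = TRUE, prob = prob)
per_user <- tabulate(calls, nbins = n_users)

summary(per_user)            # counts scatter around the target rates (binomial noise),
hist(per_user, breaks = 40)  # which is why some users end up outside the 30-90 range
```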
|
How to perform goodness of fit test and how to assign probability with uniform distribution?
|
CC BY-SA 3.0
| null |
2011-02-23T16:03:54.663
|
2013-06-28T19:10:21.147
|
2012-06-02T22:17:39.543
| null |
3342
|
[
"distributions",
"chi-squared-test",
"goodness-of-fit",
"uniform-distribution"
] |
7543
|
2
| null |
7084
|
7
| null |
### The question:
- How can normality be validated without using visual cues such as QQ plots? (the validation will be a part of larger software)
- Can a "goodness of fit" score be calculated?
Although enumerated separately, these parts are (appropriately) one question: you compute an appropriate goodness of fit and use that as a test statistic in a hypothesis test.
### Some answers
There are plenty of such tests; the best among them are the Kolmogorov-Smirnov, Shapiro-Wilks, and Anderson-Darling tests. Their properties have been extensively studied. An excellent resource is the work of M. A. Stephens, especially the 1974 article, [EDF Statistics for Goodness of Fit and Some Comparisons](http://www.jstor.org/pss/2286009). Rather than supply a long list of references, I will leave it to you to Google this title: the trail quickly leads to useful information.
One thing I like about Stephens' work, in addition to the comparisons of the properties of various GoF tests, is that it provides clear descriptions of how to compute the statistics and how to compute, or at least approximate, their null distributions. This gives you the option to implement your favorite test yourself. The EDF statistics (empirical distribution function) are easy to compute: they tend to be linear combinations of the order statistics, so all you have to do is sort the data and go. The complications concern (a) computing the coefficients--this used to be a barrier in applying the S-W test, but good approximations now exist--and (b) computing the null distributions. Most of those can be computed or have been adequately tabulated.
What is characteristic about any GoF tests for distributions is that (a) they need a certain amount of data to become powerful (for detecting true deviations) and (b) very quickly thereafter, as you acquire more data, they become so powerful that deviations that are practically inconsequential become statistically significant. (This is very well known and is easily confirmed with simulation or mathematical analysis.) In this is the origin of the reluctance to answer the original question without obtaining substantial clarification. If you have a few hundred values or more, you will find that any of these tests demonstrate your data are not "normal." But does this matter for your intended analysis? We simply cannot say.
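For reference, a minimal R sketch of the three tests mentioned above (the Anderson-Darling test lives in the contributed `nortest` package; the data here are simulated purely for illustration):
```
set.seed(1)
x <- rexp(200)                            # deliberately non-normal example data

shapiro.test(x)                           # Shapiro-Wilk
ks.test(as.vector(scale(x)), "pnorm")     # Kolmogorov-Smirnov against a standard normal
                                          # (estimating mean/sd first distorts the null
                                          #  distribution; the Lilliefors variant fixes that)
library(nortest)
ad.test(x)                                # Anderson-Darling
```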
| null |
CC BY-SA 2.5
| null |
2011-02-23T16:21:47.927
|
2011-02-23T16:21:47.927
|
2020-06-11T14:32:37.003
|
-1
|
919
| null |
7544
|
2
| null |
7084
|
0
| null |
I would maybe (I don't know if it is feasible in your context) suggest another approach.
You could force your experimental data to follow a standard normal distribution by applying a normal quantile transformation to it. The principle is to
1) rank your values from high to low
2) assign to the value with rank r the (r-0.5)/n-th quantile of the standard normal distribution; this ensures that your data are N(0,1)
3) perform the analysis on the transformed data
I feel that if your original data is close to normal this would not change much your inference. Then of course you can simulate data from a standard normal distribution
and it will fit your experimental data.
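A minimal R sketch of this transformation (using ascending ranks):
```
# normal quantile (rank-based) transformation: the value with rank r
# is mapped to the (r - 0.5)/n quantile of N(0, 1)
x <- rexp(100)                                  # a skewed example sample
x_norm <- qnorm((rank(x) - 0.5) / length(x))
hist(x_norm)                                    # now looks like a standard normal sample
```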
| null |
CC BY-SA 2.5
| null |
2011-02-23T18:11:56.453
|
2011-02-23T18:11:56.453
| null | null | null | null |
7545
|
2
| null |
7531
|
5
| null |
I don't think the problem, as is, is well-defined. You mention the possibility that the duration of each operation may vary if done in combination with others. If that's the case, the "duration of operation" is not defined. E.g. in your example, op1 time is 30-32 min, and you say "therefore op2 time is 24-26 min", but how do you know that? Maybe op1, when done together with op2, takes only 10 min and op2 takes the remaining 46 min. So you'd need some more assumptions to figure out individual durations from such data.
If you assume the operations are independent, then it seems that an easy first step would be to build a set of linear equations. In your example, the first three equations would be:
$$t_1 = 30$$
$$t_1 = 32$$
$$t_1 + t_2 = 56$$
Then solve it (in the least squares sense). The way to solve it is probably using linear regression, with operation durations as independent variables and the process times as (known) dependent variables. Standard regression solvers will give you all kinds of estimates of variance.
Solution for R:
```
d <- read.table("data.csv",header=T,sep=",") # read data
r <- lm(time ~ 0+p1+p2+p3+p4, data=d) # multiple linear regression
summary(r) #result output
```
| null |
CC BY-SA 2.5
| null |
2011-02-23T18:21:58.147
|
2011-02-25T13:23:21.070
|
2011-02-25T13:23:21.070
|
3376
|
3369
| null |
7546
|
1
| null | null |
7
|
1324
|
For R, I understand that the package lme4 and the function glmer roughly corresponds to glimmix in SAS. What is the default covariance structure when fit and can it be changed? If so how?
|
What is the default covariance structure in glmer and can I change it?
|
CC BY-SA 2.5
| null |
2011-02-23T18:29:52.990
|
2011-02-24T07:51:30.473
| null | null |
1364
|
[
"r",
"mixed-model",
"covariance-matrix"
] |
7547
|
2
| null |
7542
|
-2
| null |
What you are describing resembles a "continuous uniform distribution",
[http://mathworld.wolfram.com/UniformDistribution.html](http://mathworld.wolfram.com/UniformDistribution.html)
-Ralph Winters
| null |
CC BY-SA 2.5
| null |
2011-02-23T19:43:14.900
|
2011-02-23T19:43:14.900
| null | null |
3489
| null |
7548
|
2
| null |
4200
|
0
| null |
You should consider the Cross Correlation Function, as that is meant to identify the lead/lag relationship. Dirk had mentioned the Autocorrelation Function, but that is meant for a single time series, not for the multivariate case. You should consider looking at Chapter 10 of the Box-Jenkins textbook, where they introduce the steps to do this.
You say your data is noisy, but if it has a pattern where the lead/lag response is strong then you will find significance.
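For example, in R the sample CCF is available directly; a small sketch with simulated series, where `y` is constructed to lag `x` by 3 steps:
```
set.seed(1)
x <- as.numeric(arima.sim(list(ar = 0.6), 200))
y <- c(rep(0, 3), head(x, -3)) + rnorm(200, sd = 0.5)   # y follows x with a 3-step delay

# ccf(x, y) estimates cor(x[t+k], y[t]); with this convention the dominant
# spike appears at a negative lag (around -3 here), showing that x leads y
ccf(x, y, lag.max = 10)
```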
| null |
CC BY-SA 2.5
| null |
2011-02-23T19:56:53.833
|
2011-02-23T20:19:48.650
|
2011-02-23T20:19:48.650
|
8
|
3382
| null |
7549
|
1
| null | null |
6
|
1171
|
As the title suggests, I'm pretty well befuddled about which approach makes the most sense for my data. Let me try to succinctly explain the problem.
I have binary choice data representing whether a specific person for a specific event took the train or bus. I have event level predictors (location of event, duration of event) as well as person-level predictors (income level, education level). There are multiple, but unbalanced, events per person.
Here's the slightly unusual part: I have a bunch of historical data with all predictor values as well as the observed choice. I want to build a regression model from it that I can then apply to new data (consisting of everything except education level) to infer, with as much confidence as possible, that person's education based on their observed choices.
My thoughts on how to do this:
- Build a mixed-effect, multilevel logistic regression model, with transportation choice as my dependent variable, and education_level as one of the predictors. Now solve for education_level using something like inverse logistic regression.
- Do a regression on counts. Now, education is the dependent variable, and we sum up counts of each subset of predictor variables we've seen (eg, there were 5 nearby events where rich males took the bus, 3 faraway events where...)
- Some kind of latent class model?
What are the tradeoffs between these alternatives? Also, are there still other approaches worth examining (eg, CFA)?
(And please let me know if I need to provide more detail on the problem.)
Thank you for your time,
Ian.
|
Inverse logistic regression vs. repeated-measures vs. latent class?
|
CC BY-SA 2.5
| null |
2011-02-23T20:03:53.400
|
2011-02-24T16:39:05.287
|
2011-02-23T21:24:22.357
| null |
3387
|
[
"regression",
"latent-class"
] |
7550
|
2
| null |
7499
|
6
| null |
Adding error bars to a bar graph is a choice you make as a presenter to communicate more information to your audience. They are useful because they communicate visually how certain you can be, based on your data, of the specific values you are presenting.
In some cases, there is no uncertainty. Imagine you are graphing the number of students in each grade in a school district. These numbers are known so presenting the exact values without error bars makes sense.
If, however, you are graphing the height of students by gender and you only had time to measure students in one class, you are making statistical inferences about the larger population. In this case, error bars help communicate the range of likely true values. If the 15 boys in the class you measured averaged 48 inches, you could include error bars to show that you are 95% sure the average for all boys in the district is between, say, 46 and 50 inches.
| null |
CC BY-SA 2.5
| null |
2011-02-23T20:06:57.793
|
2011-02-23T20:06:57.793
| null | null |
3388
| null |
7551
|
1
|
7553
| null |
9
|
10355
|
I am running an ordinal logistic regression in R and running into trouble when I include dummy variables. My model works great with my first set of predictors. Next I want to add dummy variables for each of the years represented in my dataset.
I created the dummy variables with `car::recode` in this manner (one statement like this for each of the 11 years):
```
fsd$admityear2000 <- recode(fsd$ApplicationYear, "2000=1;else=0")
```
The lrm model is specified as follows
```
library(Design)
ddist<- datadist(fsd)
options(datadist='ddist')
m4 <- lrm(Outcome ~ relGPA + mcAvgGPA + Interview_Z + WorkHistory_years + GMAT + UGI_Gourman + admityear1999 + admityear2000 + admityear2001 + admityear2002 + admityear2003 + admityear2004 + admityear2005 + admityear2006 + admityear2007 + admityear2008 + admityear2009, data=fsd)
```
(sorry for all of the other random variables, but I don't want to introduce confusion by changing my code)
I get the error
```
singular information matrix in lrm.fit (rank= 22 ). Offending variable(s):
admityear2009 admityear2000 admityear1999
Error in lrm(Outcome ~ relGPA + mcAvgGPA + Interview_Z + WorkHistory_years + :
Unable to fit model using “lrm.fit”
```
I understand that including dummies for all levels of a categorical variable over-determines the model, but I get the error whether I include all 11 years or just 10.
I found a suggestion [here](http://r.789695.n4.nabble.com/Singular-information-matrix-in-lrm-fit-td869221.html) to set the penalty parameter of `lrm` to a small positive value. Setting it to 1 or 5 changes the error such that it only names one of the variables as offending. The error doesn't go away even with `penalty=100`.
I'm pretty new to R, but loving the freedom so far. Thanks for any help!
Responses and Lessons
- Factors are awesome and I can't believe I didn't notice them earlier. Man that cleans up my code a lot. Thanks!
- My DV, 'Outcome' is indeed ordinal and after making it a factor(), I also made it ordered().
- The str() command is also awesome and this is what my data now looks like (with some of the non-relevant variables omitted)
output:
```
str(fsd)
Outcome : Ord.factor w/ 3 levels "0"<"1"<"2"
relGPA : num
mcAvgGPA : num
admitschool : Factor w/ 4 levels "1","2","3","4"
appyear : Factor w/ 11 levels "1999","2000",..
```
- both lrm() and polr() now run successfully, and they both deal with appyear by dropping some levels of the factor. lrm() drops 1999, 2000, and 2001, while polr() just drops 1999 and 2000. lrm() gives no warnings, while polr() says "design appears to be rank-deficient, so dropping some coefs." This is an improvement, but I still don't understand why more than one level needs to be dropped. xtabs shows that there isn't complete separation, right?
output:
```
xtabs(~fsd$appyear + fsd$Outcome)
fsd$Outcome
fsd$appyear 0 1 2
1999 1207 123 418
2000 1833 246 510
2001 1805 294 553
2002 1167 177 598
2003 4070 158 1076
2004 2803 106 1138
2005 3749 513 2141
2006 4429 519 2028
2007 6134 670 1947
2008 7446 662 1994
2009 4411 86 1118
```
|
Singular information matrix error in lrm.fit in R
|
CC BY-SA 2.5
| null |
2011-02-23T20:22:00.287
|
2011-02-24T16:55:33.830
|
2011-02-24T16:55:33.830
|
3388
|
3388
|
[
"r",
"logistic"
] |
7553
|
2
| null |
7551
|
7
| null |
Creating dummy variables should not be necessary. You should just use factors when modeling in R.
```
fsd$admityear <- factor(fsd$ApplicationYear)   # one factor handles all years at once
m4 <- lrm(Outcome ~ relGPA + mcAvgGPA + Interview_Z + WorkHistory_years +
            GMAT + UGI_Gourman + admityear, data = fsd)
```
If the singular condition still persists, then you have multicollinearity and need to try dropping other variables. (I would be suspicious of WorkHistory_years.) I also don't see anything ordinal about that model as specified; lrm() fits a proportional-odds model when the response has more than two levels, and polr() from the MASS package is the other common option. And it would be really helpful to see the results from str(fsd).
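As a rough first check for that collinearity, one sketch (reusing the variable names from the question, so treat them as placeholders) is:
```
# Look for numeric predictors that are nearly linear combinations of each other
num_vars <- c("relGPA", "mcAvgGPA", "Interview_Z", "WorkHistory_years", "GMAT", "UGI_Gourman")
round(cor(fsd[, num_vars], use = "pairwise.complete.obs"), 2)
table(fsd$appyear)   # sparse factor levels can also make the information matrix singular
```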
| null |
CC BY-SA 2.5
| null |
2011-02-23T22:45:27.753
|
2011-02-23T22:45:27.753
| null | null |
2129
| null |
7554
|
1
| null | null |
7
|
21032
|
I have data on how many unique users do a certain thing on each day of a month. I can average it, and I would like to display the variation in an intuitive format (such as a percentage of something).
Is there a standard way of doing this?
I've found the standard error, $\frac{\sigma}{\sqrt{n}}$, but it is not particularly intuitive for users of the data.
If anyone needs clarifications, please ask. I'm not too clear about this myself.
EDIT: in response to the answers, this is for building out an analytics dashboard for use by the entire company (so many people probably don't understand standard deviation). In particular, we are doing A/B testing for various metrics over a period of, say, one month. We basically average each metric per day to give a single number for that period, but there is day-to-day variation, and we would like a good way of expressing that.
|
How to express error as a percentage?
|
CC BY-SA 3.0
| null |
2011-02-24T00:05:24.610
|
2015-11-12T14:24:03.723
|
2012-07-19T11:20:52.490
|
12540
|
3392
|
[
"variance"
] |
7555
|
1
|
7573
| null |
6
|
9100
|
I am trying to manage a meta-regression in SPSS17 using the effect size as the dependent variable. I want to explore if my independent variables affects the effect size. Some small practical questions:
- What is the minimum number of studies necessary for a meta-regression?
Some people suggest at least 10 studies are required. Why not 20 or 5 studies?
Is the total sample size an important consideration?
Why would 10 studies with 200 patients be enough, but 5 studies with 400 patients not be enough?
- Can I enter all three regressors at once and report the global model, or do I have to enter one regressor at a time and report 3 models each one separately?
How does the correlation between the independent variables affect this choice?
How does the number of studies affect the number of independent variables that I should enter simultaneously?
- Does the independent variable have to be a scale variable?
The dependent variable (= effect size) is, of course, scale. Must the independent variables also be scale, or can they be ordinal or nominal?
- How can I weight my effect size for sample size?
- What is the preferable level of significance?
Is p<0.05 still acceptable for clinical research in such an analysis?
|
How to do meta-regression in SPSS?
|
CC BY-SA 2.5
| null |
2011-02-24T00:21:51.900
|
2016-06-17T13:09:29.513
|
2011-02-24T07:22:39.853
|
2116
|
3333
|
[
"spss",
"meta-analysis",
"sample-size",
"meta-regression"
] |
7556
|
2
| null |
7554
|
4
| null |
It may be meaningful to simply divide the standard deviation by the average (this ratio is known as the coefficient of variation). E.g. if the average is 1000 and the standard deviation is 200, then in a sense the actual number varies by about 20% around the baseline.
Also, it would help to know who the users of the data are and what they will do with it.
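For example, in R this ratio is just (with made-up daily counts):
```
daily_users <- c(980, 1020, 1150, 900, 1010, 995, 1070)   # made-up daily counts
cv <- sd(daily_users) / mean(daily_users)                  # coefficient of variation
sprintf("Day-to-day variation: about %.0f%% of the average", 100 * cv)
```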
| null |
CC BY-SA 2.5
| null |
2011-02-24T00:28:50.507
|
2011-02-24T00:28:50.507
| null | null |
3369
| null |
7557
|
2
| null |
7528
|
2
| null |
Yes, I think that this approach can be used with other types of effect sizes as long as they are (approximately) normally distributed (that's why you use $\log(OR)$; to be more precise, the errors of the linear regression model need to be $N(0,1)$).
Your regression equation is wrong. It is $\overline{ES} = a + b \cdot \frac{1}{N}$.
Furthermore, it is a weighted regression, so, unfortunately, you still need the standard errors. [Macaskill et al (2001: 644)](http://www.ncbi.nlm.nih.gov/pubmed/11223905) write: "The observations are weighted by the inverse variance of the estimate to allow for possible heteroscedasticity (FIV)". However, since I [know](https://stats.stackexchange.com/questions/7426/eggers-test-in-spss/7428#7428) that you can compute the standard errors, this shouldn't be a problem (trust the authors ;-).
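A minimal sketch of that weighted regression in R, assuming a data frame `meta` with one row per study and columns `ES` (effect size, e.g. log odds ratio), `N` (sample size), and `se` (standard error of the effect size) — these names are placeholders:
```
# Inverse-variance weighted regression of effect size on 1/N
fit <- lm(ES ~ I(1/N), data = meta, weights = 1 / se^2)
summary(fit)   # the slope on I(1/N) and its test are the quantities of interest
```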
| null |
CC BY-SA 2.5
| null |
2011-02-24T00:34:36.170
|
2011-02-24T00:34:36.170
|
2017-04-13T12:44:44.530
|
-1
|
307
| null |
7558
|
2
| null |
7554
|
2
| null |
The standard error of the mean tells you how precise your estimate of the mean is; that doesn't seem to capture what you're trying to do. I would use either a) a histogram, if you care mostly about showing variation, or b) a line chart or area chart, if you want to say something about variation while also showing progression over time.
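For instance, in R (with placeholder simulated counts):
```
set.seed(7)
daily <- rpois(30, lambda = 1000)   # placeholder: 30 days of unique-user counts
hist(daily, main = "How much the daily counts vary", xlab = "Unique users per day")
plot(daily, type = "l", xlab = "Day of month", ylab = "Unique users")
```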
| null |
CC BY-SA 2.5
| null |
2011-02-24T01:02:48.370
|
2011-02-24T01:02:48.370
| null | null |
2669
| null |
7559
|
1
|
7560
| null |
7
|
3397
|
The linear SVM in textbooks takes the form of maximizing
$L_D = \sum_i{a_i} - \frac{1}{2}\sum_{i,j}{a_ia_jy_iy_jx_i^Tx_j}$
over $a_i$ where $a_i \geq 0$ and $\sum_i{a_iy_i} = 0$
Since $w = \sum_i{a_iy_ix_i}$, the classifier takes the form $\operatorname{sgn}(w^T x - b)$.
Thus it seems that, to solve the linear SVM, I need to find the $a_i$ with some gradient-based method. However, I recently came across a paper which states that they minimize the following form:
$L_P = \frac{1}{2}||w||^2+C\sum_i{\text{max}(0, 1-y_if_w(x_i))}$
and they state that $C$ is a constant. This form seems quite different from the primal form $L_P$ of the linear SVM because the $a_i$ are missing. As far as I can tell from the paper, they optimize over $w$ directly. I am puzzled, as if I have missed something. Can you optimize $w$ directly in a linear SVM? Why is that possible?
|
The difference between linear SVM and other linear classifiers?
|
CC BY-SA 2.5
| null |
2011-02-24T03:01:15.167
|
2011-02-24T07:24:09.663
|
2011-02-24T07:24:09.663
|
2116
|
3395
|
[
"svm",
"linear-model"
] |
7560
|
2
| null |
7559
|
8
| null |
There are two things going on here.
- Difference between the primal and dual problems. The "original" objective of the SVM is to minimize $\frac{1}{2}\|w\|^2$ subject to $y_i f_w(x_i) \geq 1$; this is called the "primal form". It turns out that the objective function you wrote (the one involving $L_D$) is the dual form of this problem, so the two lead to equivalent solutions and can be used interchangeably.
- The second formulation you describe is called "soft margin SVM". It is obtained by taking the primal form of (1) above and replacing the constraint $y_i f_w(x_i) \geq 1$ by the penalty term $C \cdot \max(0, 1 - y_i f_w(x_i))$. The effect is that you allow violations of the constraint. This is useful e.g. if your data is not linearly separable. You can obtain a dual formulation of this (similar to your expression with $L_D$) as well.
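To see that the soft-margin primal really can be minimized over $w$ directly, here is a rough subgradient-descent sketch in R on synthetic data (an illustration of the idea with a fixed step size, not a tuned solver):
```
set.seed(42)
n <- 200; p <- 2
X <- matrix(rnorm(n * p), n, p)
y <- ifelse(X[, 1] + X[, 2] + rnorm(n, sd = 0.3) > 0, 1, -1)
Xb <- cbind(X, 1)                # constant column so the bias is the last entry of w
w  <- rep(0, p + 1)
C  <- 1
eta <- 0.01

for (it in 1:2000) {
  margins <- as.numeric(Xb %*% w) * y
  viol <- margins < 1                               # points violating the margin
  # Subgradient of 1/2 ||w||^2 + C * sum(max(0, 1 - y_i * w'x_i))
  grad <- w - C * colSums(Xb[viol, , drop = FALSE] * y[viol])
  grad[p + 1] <- grad[p + 1] - w[p + 1]             # the bias is conventionally not regularized
  w <- w - eta * grad
}

mean(sign(as.numeric(Xb %*% w)) == y)   # training accuracy of the learned linear classifier
```
In practice one would use a decreasing step size (this is essentially what the Pegasos algorithm does) or an off-the-shelf solver; the point here is simply that no dual variables $a_i$ appear anywhere.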
| null |
CC BY-SA 2.5
| null |
2011-02-24T03:11:36.903
|
2011-02-24T03:11:36.903
| null | null |
3369
| null |
7561
|
2
| null |
7546
|
1
| null |
I do not know about SAS, but the variance in `glmer` is controlled by the `family` argument. If you want to change the correlation structure, then I suspect you will have to use the nlme package (e.g. `lme()`, which accepts a `correlation` argument).
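For illustration, a sketch with `lme()` and an AR(1) residual correlation structure (all variable names are placeholders):
```
library(nlme)
# Placeholders: response y, covariate x, grouping factor subject, time index t
fit <- lme(y ~ x, random = ~ 1 | subject,
           correlation = corAR1(form = ~ t | subject),
           data = mydata)
summary(fit)
```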
| null |
CC BY-SA 2.5
| null |
2011-02-24T07:51:30.473
|
2011-02-24T07:51:30.473
| null | null |
2116
| null |
7562
|
1
|
8390
| null |
9
|
5114
|
I just got my hands on the [ANES (American National Election Studies)](http://www.electionstudies.org/) 2008 data set and would like to do some simple analysis in R. However, I've never worked with a data set this complex before, and I've run into an issue.
The survey uses oversampling and has a variable for post stratification weights. I had only the vaguest idea of what that meant, so I read the [wikipedia page](http://en.wikipedia.org/wiki/Sampling_%28statistics%29#Stratified_sampling) on it, which I understand conceptually. Unfortunately, I don't know how to manipulate R such that the post stratification weights are reflected when I do my analysis.
While the idea of oversampling didn't confuse me conceptually, the following documentation for the R "survey" package is completely unintelligible to me. I'll show what I've found so far, and I would really appreciate either an explanation of what's going on with these methods or, if anyone knows a simpler way to apply a post-stratification weight to a data frame of variables, I'd love to hear that too.
So, I found the "survey" package from CRAN, and I have the [manual](http://cran.r-project.org/web/packages/survey/survey.pdf), and, after looking through it, it seems that the most promising method is:
```
postStratify(design, strata, population, partial = FALSE, ...)
```
However, when I look at the documentation for what needs to be passed for each of these arguments, I'm completely lost. They are as follows:
```
design A survey design with replicate weights
strata A formula or data frame of post-stratifying variables
population A table, xtabs or data.frame with population frequencies
partial if TRUE, ignore population strata not present in the sample
```
None of these make a lot of sense to me, but I'm pretty sure that the design argument is supposed to be of a class also defined in this package:
```
svydesign(ids, probs=NULL, strata = NULL, variables = NULL, fpc=NULL,
data = NULL, nest = FALSE,
check.strata = !nest, weights=NULL,pps=FALSE,...)
```
If you notice, there are a ton of optional arguments here, which all seem to do similar types of things (at least to me, after reading the docs...).
I'm basically at a loss for why this is so complicated in R. Am I misunderstanding things? Is there a simpler way to do this? Any help would be appreciated.
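For what it's worth, my best guess at the simpler weights-only route is something like the sketch below (the names `weight_full`, `some_variable`, and `some_factor` are placeholders, not actual ANES column names) — is that all that's needed?
```
library(survey)
# Attach the supplied post-stratification weights; no clusters or strata declared.
des <- svydesign(ids = ~1, weights = ~weight_full, data = anes)
svymean(~some_variable, des, na.rm = TRUE)   # weighted mean
svytable(~some_factor, des)                  # weighted frequency table
```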
|
Simple post-stratification weights in R
|
CC BY-SA 2.5
| null |
2011-02-24T10:15:16.720
|
2017-09-15T03:49:24.013
|
2011-02-24T11:57:29.023
| null |
726
|
[
"r",
"survey",
"post-hoc",
"stratification"
] |
7563
|
1
|
7568
| null |
11
|
322
|
What techniques/approaches are useful in testing statistical software? I'm particularly interested in programs that do parametric estimation using maximum likelihood.
Comparing results to those from other programs or published sources is not always possible since most of the time when I write a program of my own it is because the computation I need is not already implemented in an existing system.
I am not insisting on methods which can guarantee correctness. I would be happy with techniques that can catch some fraction of errors.
|
Testing statistical software
|
CC BY-SA 2.5
| null |
2011-02-24T10:16:34.377
|
2012-11-21T23:16:47.400
| null | null |
1393
|
[
"software",
"computational-statistics"
] |
7564
|
1
| null | null |
8
|
389
|
I would like to know if there are SOM implementations (preferably R) available that accept fuzzy input. That is, I have data in which some nominal features are spread out between a number of categories. For example: feature 1 has 5 categories and an observation might have the values (which are actually probabilities) [0, 0.5, 0.25, 0.25, 0].
|
Self-organizing maps: fuzzy input?
|
CC BY-SA 3.0
| null |
2011-02-24T10:53:47.720
|
2018-08-12T15:46:33.880
|
2017-12-15T20:34:36.997
|
128677
|
3401
|
[
"machine-learning",
"neural-networks",
"self-organizing-maps"
] |