4,501 | What are the main theorems in Machine (Deep) Learning?
I wouldn't call it a main theorem, but I think the following (sometimes referred to as the Universal approximation theorem) is an interesting (and, at least for me, surprising) one, as it states the approximation power of feed-forward neural networks.
Theorem:
Let $\sigma$ be a nonconstant, monotonically increasing, continuous function. For any continuous function $f:[0,1]^m\to\mathbb{R}$ and any $\epsilon>0$, there exist an integer $N$ and a multilayer perceptron $F$ with one hidden layer having $N$ neurons and $\sigma$ as activation function such that
$$|F(x)-f(x)|\le\epsilon$$
for all $x\in[0,1]^m$.
Of course, as this is a statement on existence, its impact for practitioners is negligible.
A proof can be found in Hornik, "Approximation Capabilities of Multilayer Feedforward Networks", Neural Networks 4 (2), 1991.
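As a rough numerical illustration of the theorem (not part of Hornik's proof, and assuming the nnet package, whose single hidden layer uses a logistic -- hence nonconstant, monotonically increasing, continuous -- activation), one can watch the fit of a one-hidden-layer network improve as the number of hidden neurons $N$ grows:

library(nnet)

set.seed(1)
f <- function(x) sin(2 * pi * x)              # a continuous target on [0,1]
x <- seq(0, 1, length.out = 500)
dat <- data.frame(x = x, y = f(x))

for (N in c(1, 3, 10)) {
  fit <- nnet(y ~ x, data = dat, size = N, linout = TRUE, maxit = 2000, trace = FALSE)
  # sup-error measured on the training grid only; a crude stand-in for the true sup over [0,1]
  cat(sprintf("N = %2d   max|F - f| = %.3f\n", N, max(abs(predict(fit, dat) - dat$y))))
}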

4,502 | What are the main theorems in Machine (Deep) Learning?
A nice post focusing on this question (specifically deep learning rather than general machine learning theorems) is here:
https://medium.com/mlreview/modern-theory-of-deep-learning-why-does-it-works-so-well-9ee1f7fb2808
It gives an accessible summary of the main emerging theorems for the ability of deep neural networks to generalize so well.

4,503 | Do we have to tune the number of trees in a random forest?
It's common to find code snippets that treat $T$ as a hyper-parameter, and attempt to optimize over it in the same way as any other hyper-parameter. This is just wasting computational power: when all other hyper-parameters are fixed, the model’s loss stochastically decreases as the number of trees increases.
Intuitive explanation
Each tree in a random forest is identically distributed, because each tree is grown using a randomization strategy that is repeated for every tree: bootstrap the training data, and then grow the tree by picking the best split for a feature from among the $m$ features selected for that node. The random forest procedure stands in contrast to boosting because the trees are grown on their own bootstrap subsample without regard to any of the other trees. (It is in this sense that the random forest algorithm is "embarrassingly parallel": you can parallelize tree construction because each tree is fit independently.)
In the binary case, each random forest tree votes 1 for the positive class or 0 for the negative class for each sample. The average of all of these votes is taken as the classification score of the entire forest. (In the general $k$-nary case, we simply have a categorical distribution instead, but all of these arguments still apply.)
The Weak Law of Large Numbers is applicable in these circumstances because
the trees' decisions are identically-distributed r.v.s (in the sense that a random procedure determines whether the tree votes 1 or 0) and
the variable of interest only takes values $\{0,1\}$ for each tree, and therefore each experiment (tree decision) has finite variance (all moments of a bounded random variable are finite).
Applying WLLN in this case implies that, for each sample, the ensemble will tend toward a particular mean prediction value for that sample as the number of trees tends towards infinity. Additionally, for a given set of samples, a statistic of interest among those samples (such as the expected log-loss) will converge to a mean value as well, as the number of trees tends toward infinity.
Elements of Statistical Learning
Hastie et al. address this question very briefly in ESL (page 596).
Another claim is that random forests “cannot overfit” the data. It is certainly true that increasing $\mathcal{B}$ [the number of trees in the ensemble] does not cause the random forest sequence to overfit... However, this limit can overfit the data; the average of fully grown trees can result in too rich a model, and incur unnecessary variance. Segal (2004) demonstrates small gains in performance by controlling the depths of the individual trees grown in random forests. Our experience is that using full-grown trees seldom costs much, and results in one less tuning parameter.
Stated another way, for a fixed hyperparameter configuration, increasing the number of trees cannot overfit the data; however, the other hyperparameters might be a source of overfit.
Mathematical explanation
This section summarizes Philipp Probst & Anne-Laure Boulesteix "To tune or not to tune the number of trees in random forest?". The key results are
The expected error rate and area under the ROC curve can be non-monotonic functions of the number of trees.
a. The expected error rate (equivalently, $\text{error rate} = 1 - \text{accuracy}$) as a function of $T$, the number of trees, is given by
$$
E(e_i(T)) = P\left(\sum_{t=1}^T e_{it} > 0.5\cdot T\right)
$$
where $e_{it}$ is a Bernoulli r.v. with expectation $E(e_{it}) = \epsilon_i$, the decision of a particular tree indexed by $t$. This function is increasing in $T$ for $\epsilon_{i} > 0.5$ and decreasing in $T$ for $\epsilon_{i} < 0.5$. The authors observe
We see that the convergence rate of the error rate curve is only dependent on the distribution of the $\epsilon_i$ of the observations. Hence, the convergence rate of the error rate curve is not directly dependent on the number of observations n or the number of features, but these characteristics could influence the empirical distribution of the $\epsilon_i$’s and hence possibly the convergence rate as outlined in Section 4.3.1
b. The authors note that ROC AUC (aka the $c$-statistic) can be manipulated to have monotonic or non-monotonic curves as a function of $T$, depending on how the samples' expected scores align with their true classes.
Probability-based measures, such as cross-entropy and the Brier score, are monotonic as a function of the number of trees.
a. The Brier score has expectation
$$
E(b_i(T)) = E(e_{it})^2 + \frac{\text{Var}(e_{it})}{T}
$$
which is clearly a monotonically decreasing function of $T$ (a numerical check of this and the error-rate formula in 1a follows the list).
b. The log-loss (aka cross entropy loss) has expectation which can be approximated by a Taylor expansion
$$
E(l_i(T)) \approx -\log(1 - \epsilon_i + a) + \frac{\epsilon_i (1 - \epsilon_i) }{ 2 T (1 - \epsilon_i + a)^2}
$$
which is likewise a decreasing function of $T$. (The constant $a$ is a small positive number that keeps the values inside the logarithm and denominator away from zero.)
Experimental results considering 306 data sets support these findings.
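As a quick numerical check of the error-rate formula in 1a and the Brier-score formula in 2a (plain base R, not code from the paper), take one observation with per-tree error probability $\epsilon_i = 0.3$ and vary $T$:

eps <- 0.3                                    # per-tree error probability for one observation
n_trees <- c(1, 5, 25, 125, 625)

# 1a. expected error rate: P(more than half of the T trees are wrong), a binomial tail
err <- sapply(n_trees, function(n) pbinom(floor(n / 2), n, eps, lower.tail = FALSE))

# 2a. expected Brier score: eps^2 + eps * (1 - eps) / T
brier <- eps^2 + eps * (1 - eps) / n_trees

round(rbind(T = n_trees, error_rate = err, brier_score = brier), 4)

Both quantities decrease toward their limits ($0$ and $\epsilon_i^2$, respectively) as $T$ grows, since $\epsilon_i < 0.5$, exactly as the formulas predict.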
Experimental Demonstration
This is a practical demonstration using the diamonds data that ships with ggplot2. I turned it into a classification task by binarizing the price into "high" and "low" categories, with the dividing line determined by the median price.
From the perspective of cross-entropy, model improvements are very smooth. (However, the plot is not monotonic -- the divergence from the theoretical results presented above is because the theoretical results pertain to the expectation, rather than to the particular realizations of any one experiment.)
On the other hand, error rate is deceptive in the sense that it can swing up or down, and sometimes stay there for a number of additional trees, before reverting. This is because it does not measure the degree of incorrectness of the classification decision. This can cause the error rate to have "blips" of improved performance w.r.t. the number of trees, by which I mean that some sample which is on the decision boundary will bounce back and forth between predicted classes. A very large number of trees can be required for this behavior to be more-or-less suppressed.
Also, look at the behavior of error rate for a very small number of trees -- the results are wildly divergent! This implies that a method premised on choosing the number of trees this way is subject to a large amount of randomness. Moreover, repeating the same experiment with a different random seed could lead one to select a different number of trees purely on the basis of this randomness. In this sense, the behavior of the error rate for a small number of trees is entirely an artifact, both because we know that the LLN means that as the number of trees increases, this will tend towards its expectation, and because of the theoretical results in section 2. (Cross Validated hosts a number of questions comparing the merits of error rate/accuracy to other statistics.)
By contrast, the cross-entropy measurement is essentially stable after 200 trees, and virtually flat after 500.
Finally, I repeated the exact same experiment for error rate with a different random seed. The results are strikingly different for small $T$.
Code for this demonstration is available in this gist.
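The gist itself is not reproduced here; the following is a minimal sketch of a similar experiment (assuming the ggplot2 and randomForest packages), tracking only the out-of-bag error rate as trees are added rather than the exact test-set curves described above:

library(ggplot2)        # for the diamonds data
library(randomForest)

set.seed(1)
d <- as.data.frame(diamonds[sample(nrow(diamonds), 5000), ])   # subsample to keep it quick
d$high_price <- factor(ifelse(d$price > median(diamonds$price), "high", "low"))
d$price <- NULL

fit <- randomForest(high_price ~ ., data = d, ntree = 500)

# err.rate has one row per tree: the OOB error of the ensemble grown so far
plot(fit$err.rate[, "OOB"], type = "l", xlab = "number of trees", ylab = "OOB error rate")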
"So how should I choose $T$ if I'm not tuning it?"
Tuning the number of trees is unnecessary; instead, simply set the number of trees to a large, computationally feasible number, and let the asymptotic behavior of LLN do the rest.
In the case that you have some kind of constraint (a cap on the total number of terminal nodes, a cap on the model estimation time, a limit to the size of the model on disk), this amounts to choosing the largest $T$ that satisfies your constraint.
"Why do people tune over $T$ if it's wrong to do so?"
This is purely speculation, but I think the persistence of the belief that the number of trees in a random forest needs to be tuned is related to two facts:
Boosting algorithms like AdaBoost and XGBoost do require users to tune the number of trees in the ensemble and some software users are not sophisticated enough to distinguish between boosting and bagging. (For a discussion of the distinction between boosting and bagging, see Is random forest a boosting algorithm?)
Standard random forest implementations, like R's randomForest (which is, basically, the R interface to Breiman's FORTRAN code), only report error rate (or, equivalently, accuracy) as a function of trees. This is deceptive, because the accuracy is not a monotonic function of the number of trees, whereas continuous proper scoring rules such as Brier score and logloss are monotonic functions.
Citation
Philipp Probst & Anne-Laure Boulesteix, "To tune or not to tune the number of trees in random forest?"

4,504 | R libraries for deep learning
The open-source h2o.deeplearning() function from h2o.ai provides deep learning in R.
Here's a write-up: http://www.r-bloggers.com/things-to-try-after-user-part-1-deep-learning-with-h2o/
And code: https://gist.github.com/woobe/3e728e02f6cc03ab86d8#file-link_data-r
######## *Convert Breast Cancer data into H2O*
dat <- BreastCancer[, -1] # remove the ID column
dat_h2o <- as.h2o(localH2O, dat, key = 'dat')
######## *Import MNIST CSV as H2O*
dat_h2o <- h2o.importFile(localH2O, path = ".../mnist_train.csv")
######## *Using the DNN model for predictions*
h2o_yhat_test <- h2o.predict(model, test_h2o)
######## *Converting H2O format into data frame*
df_yhat_test <- as.data.frame(h2o_yhat_test)
######## Start a local cluster with 2GB RAM
library(h2o)
localH2O = h2o.init(ip = "localhost", port = 54321, startH2O = TRUE,
                    Xmx = '2g')
######## Execute deeplearning
model <- h2o.deeplearning(x = 2:785,            # column numbers for predictors
                          y = 1,                # column number for label
                          data = train_h2o,     # data in H2O format
                          activation = "TanhWithDropout",   # or 'Tanh'
                          input_dropout_ratio = 0.2,        # % of inputs dropout
                          hidden_dropout_ratios = c(0.5, 0.5, 0.5),  # % for nodes dropout
                          balance_classes = TRUE,
                          hidden = c(50, 50, 50),  # three layers of 50 nodes
                          epochs = 100)            # max. no. of epochs

4,505 | R libraries for deep learning
There is a package called "darch":
http://cran.um.ac.ir/web/packages/darch/index.html
Quote from CRAN:
darch: Package for deep architectures and Restricted Boltzmann Machines
The darch package is built on the basis of the code from G. E. Hinton and R. R. Salakhutdinov (available under Matlab Code for deep belief nets; last visit: 01.08.2013). This package is for generating neural networks with many layers (deep architectures) and training them with the method introduced by the publications "A fast learning algorithm for deep belief nets" (G. E. Hinton, S. Osindero, Y. W. Teh) and "Reducing the dimensionality of data with neural networks" (G. E. Hinton, R. R. Salakhutdinov). This method includes a pre-training with the contrastive divergence method published by G. E. Hinton (2002) and a fine-tuning with commonly known training algorithms like backpropagation or conjugate gradient.

4,506 | R libraries for deep learning
There's another new package for deep networks in R: deepnet.
I haven't tried to use it yet, but it's already been incorporated into the caret package.

4,507 | R libraries for deep learning
To answer my own question, I wrote a small package in R for RBMs: https://github.com/zachmayer/rbm
This package is still under heavy development, and I know very little about RBMs, so I'd welcome any feedback (and pull requests!) you have. You can install the package using devtools:
devtools::install_github('zachmayer/rbm')
library(rbm)
?rbm
?rbm_gpu
?stacked_rbm
The code is similar to Andrew Landgraf's implementation in R and Edwin Chen's implementation in python, but I wrote the function to be similar to the pca function in base R and include functionality for stacking. I think it's a little more user-friendly than the darch package, which I could never figure out how to use (even before it was removed from CRAN).
If you have the gputools package installed, you can use your GPU for matrix operations with the rbm_gpu function. This speeds things up a lot! Furthermore, most of the work in an RBM is done with matrix operations, so just installing a good BLAS, such as OpenBLAS, will also speed things up a lot.
Here's what happens when you run the code on Edwin's example dataset:
set.seed(10)
print('Data from: https://github.com/echen/restricted-boltzmann-machines')
Alice <- c('Harry_Potter' = 1, Avatar = 1, 'LOTR3' = 1, Gladiator = 0, Titanic = 0, Glitter = 0) #Big SF/fantasy fan.
Bob <- c('Harry_Potter' = 1, Avatar = 0, 'LOTR3' = 1, Gladiator = 0, Titanic = 0, Glitter = 0) #SF/fantasy fan, but doesn't like Avatar.
Carol <- c('Harry_Potter' = 1, Avatar = 1, 'LOTR3' = 1, Gladiator = 0, Titanic = 0, Glitter = 0) #Big SF/fantasy fan.
David <- c('Harry_Potter' = 0, Avatar = 0, 'LOTR3' = 1, Gladiator = 1, Titanic = 1, Glitter = 0) #Big Oscar winners fan.
Eric <- c('Harry_Potter' = 0, Avatar = 0, 'LOTR3' = 1, Gladiator = 1, Titanic = 0, Glitter = 0) #Oscar winners fan, except for Titanic.
Fred <- c('Harry_Potter' = 0, Avatar = 0, 'LOTR3' = 1, Gladiator = 1, Titanic = 1, Glitter = 0) #Big Oscar winners fan.
dat <- rbind(Alice, Bob, Carol, David, Eric, Fred)
#Fit a PCA model and an RBM model
PCA <- prcomp(dat, retx=TRUE)
RBM <- rbm_gpu(dat, retx=TRUE, num_hidden=2)
#Examine the 2 models
round(PCA$rotation, 2) #PCA weights
round(RBM$rotation, 2) #RBM weights

4,508 | R libraries for deep learning
You can try H2O's Deep Learning module; it is distributed and offers many advanced techniques such as dropout regularization and adaptive learning rates.
Slides: http://www.slideshare.net/0xdata/h2o-deeplearning-nextml
Video: https://www.youtube.com/watch?v=gAKbAQu900w&feature=youtu.be
Tutorials: http://learn.h2o.ai
Data and Scripts: http://data.h2o.ai
Documentation: http://docs.h2o.ai
GitBooks: http://gitbook.io/@h2o

4,509 | R libraries for deep learning
To add another answer:
mxnet is amazing, and I love it. It's a little difficult to install, but it supports GPUs and multiple CPUs. If you're going to do deep learning in R (particularly on images), I highly recommend you start with mxnet.

4,510 | R libraries for deep learning
While I haven't encountered a dedicated deep learning library for R, I have run into a similar discussion out on r-bloggers. The discussion centers on using RBMs (Restricted Boltzmann Machines). Take a look at the following link:
http://www.r-bloggers.com/restricted-boltzmann-machines-in-r/ (reposted from 'alandgraf.blogspot.com')
The author actually does a really good job of encapsulating a self-implemented algorithm in R. It must be said that I have not yet vetted the validity of the code but at least there is a glint of deep learning starting to show in R.
I hope this helps.

4,511 | R libraries for deep learning
You can now also use TensorFlow from R:
https://rstudio.github.io/tensorflow/

4,512 | Intuition behind why Stein's paradox only applies in dimensions $\ge 3$
The dichotomy between the cases $d < 3$ and $d \geq 3$ for the admissibility of the MLE of the mean of a $d$-dimensional multivariate normal random variable is certainly shocking.
There is another very famous example in probability and statistics in which there is a dichotomy between the $d < 3$ and $d \geq 3$ cases. This is the recurrence of a simple random walk on the lattice $\mathbb{Z}^d$. That is, the $d$-dimensional simple random walk is recurrent in 1 or 2 dimensions, but is transient in $d \geq 3$ dimensions. The continuous-time analogue (in the form of Brownian motion) also holds.
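As a rough illustration of that dichotomy (a finite simulation can only suggest recurrence or transience, not prove it), count how often a simple random walk on $\mathbb{Z}^d$ revisits the origin in a fixed number of steps:

set.seed(2)
returns_to_origin <- function(d, n_steps = 1e5) {
  pos <- integer(d)
  hits <- 0L
  for (i in seq_len(n_steps)) {
    j <- sample.int(d, 1L)                     # pick a coordinate uniformly at random
    pos[j] <- pos[j] + sample(c(-1L, 1L), 1L)  # step +1 or -1 along it
    if (all(pos == 0L)) hits <- hits + 1L
  }
  hits
}
sapply(1:3, returns_to_origin)                 # the counts drop off sharply once d = 3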
It turns out that the two are closely related.
Larry Brown proved that the two questions are essentially equivalent. That is, the best invariant estimator $\hat{\mu} \equiv \hat{\mu}(X) = X$ of a $d$-dimensional multivariate normal mean vector is admissible if and only if the $d$-dimensional Brownian motion is recurrent.
In fact, his results go much further. For any sensible (i.e., generalized Bayes) estimator $\tilde{\mu} \equiv \tilde{\mu}(X)$ with bounded (generalized) $L_2$ risk, there is an explicit(!) corresponding $d$-dimensional diffusion such that the estimator $\tilde{\mu}$ is admissible if and only if its corresponding diffusion is recurrent.
The local mean of this diffusion is essentially the discrepancy between the two estimators, i.e., $\tilde{\mu} - \hat{\mu}$ and the covariance of the diffusion is $2 I$. From this, it is easy to see that for the case of the MLE $\tilde{\mu} = \hat{\mu} = X$, we recover (rescaled) Brownian motion.
So, in some sense, we can view the question of admissibility through the lens of stochastic processes and use well-studied properties of diffusions to arrive at the desired conclusions.
References
L. Brown (1971). Admissible estimators, recurrent diffusions, and insoluble boundary value problems. Ann. Math. Stat., vol. 42, no. 3, pp. 855–903.
R. N. Bhattacharya (1978). Criteria for recurrence and existence of invariant measures for multidimensional diffusions. Ann. Prob., vol. 6, no. 4, 541–553.

4,513 | Intuition behind why Stein's paradox only applies in dimensions $\ge 3$
@cardinal gave a great answer (+1), but the whole issue remains mysterious unless one is familiar with the proofs (and I am not). So I think the question remains as to what is an intuitive reason that Stein's paradox does not appear in $\mathbb R$ and $\mathbb R^2$.
I find very helpful a regression perspective offered in Stephen Stigler, 1990, A Galtonian Perspective on Shrinkage Estimators. Consider independent measurements $X_i$, each measuring some underlying (unobserved) $\theta_i$ and sampled from $\mathcal N(\theta_i, 1)$. If we somehow knew the $\theta_i$, we could make a scatter plot of $(X_i, \theta_i)$ pairs:
The diagonal line $\theta = X$ corresponds to zero noise and perfect estimation; in reality the noise is non-zero and so the points are displaced from the diagonal line in the horizontal direction. Correspondingly, $\theta = X$ can be seen as a regression line of $X$ on $\theta$. We, however, know $X$ and want to estimate $\theta$, so we should rather consider a regression line of $\theta$ on $X$ -- which will have a different slope, biased horizontally, as shown on the figure (dashed line).
Quoting from the Stigler's paper:
This Galtonian perspective on the Stein paradox renders it nearly transparent. The "ordinary" estimators $\hat \theta_i^0 = X_i$ are derived from the theoretical regression line of $X$ on $\theta$. That line would be useful if our goal were to predict $X$ from $\theta$, but our problem is the reverse, namely to predict $\theta$ from $X$ using the sum of squared errors $\sum (\theta_i - \hat \theta_i)^2$ as a criterion. For that criterion, the optimum linear estimators are given by the least squares regression line of $\theta$ on $X$, and the James-Stein and Efron-Morris estimators are themselves estimators of that optimum linear estimator. The "ordinary" estimators are derived from the wrong regression line, the James-Stein and Efron-Morris estimators are derived from approximations to the right regression line.
And now comes the crucial bit (emphasis added):
We can even see why $k\ge 3$ is necessary: if $k=1$ or $2$, the least squares line of $\theta$ on $X$ must pass through the points $(X_i, \theta_i)$, and hence for $k=1$ or $2$, the two regression lines (of $X$ on $\theta$ and of $\theta$ on $X$) must agree at each $X_i$.
I think this makes it very clear what is special about $k=1$ and $k=2$.
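A quick Monte Carlo sketch of the paradox itself (plain base R; it illustrates neither the diffusion nor the regression argument, just the dichotomy): compare the risk of the MLE $X$ with that of the James-Stein estimator $\left(1 - \frac{k-2}{\|X\|^2}\right)X$ for $k = 1, 2, 5$.

set.seed(1)
js_vs_mle <- function(k, n_sim = 20000) {
  theta <- rnorm(k)                           # an arbitrary fixed mean vector
  sq_err_mle <- sq_err_js <- numeric(n_sim)
  for (s in seq_len(n_sim)) {
    X <- rnorm(k, mean = theta, sd = 1)
    shrink <- 1 - (k - 2) / sum(X^2)          # shrinkage factor; for k = 1 it expands rather than shrinks
    sq_err_mle[s] <- sum((X - theta)^2)
    sq_err_js[s]  <- sum((shrink * X - theta)^2)
  }
  c(k = k, risk_mle = mean(sq_err_mle), risk_js = mean(sq_err_js))
}
round(t(sapply(c(1, 2, 5), js_vs_mle)), 3)
# risk_mle is always about k; risk_js beats it only for k >= 3 (here k = 5)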

4,514 | How to simulate artificial data for logistic regression?
No. The response variable $y_i$ is a Bernoulli random variable taking value $1$ with probability $pr(i)$.
> set.seed(666)
> x1 = rnorm(1000) # some continuous variables
> x2 = rnorm(1000)
> z = 1 + 2*x1 + 3*x2 # linear combination with a bias
> pr = 1/(1+exp(-z)) # pass through an inv-logit function
> y = rbinom(1000,1,pr) # bernoulli response variable
>
> #now feed it to glm:
> df = data.frame(y=y,x1=x1,x2=x2)
> glm( y~x1+x2,data=df,family="binomial")
Call: glm(formula = y ~ x1 + x2, family = "binomial", data = df)
Coefficients:
(Intercept) x1 x2
0.9915 2.2731 3.1853
Degrees of Freedom: 999 Total (i.e. Null); 997 Residual
Null Deviance: 1355
Residual Deviance: 582.9 AIC: 588.9

4,515 | How to simulate artificial data for logistic regression?
Logistic regression is suitable for fitting when probabilities or proportions are provided as the targets, not only 0/1 outcomes.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def logistic(x, b, noise=None):
    L = x.T.dot(b)
    if noise is not None:
        L = L + noise
    return 1 / (1 + np.exp(-L))

x = np.arange(-10., 10, 0.05)
bias = np.ones(len(x))
X = np.vstack([x, bias])   # Add intercept
B = [1., 1.]               # Sigmoid params for X

# True mean
p = logistic(X, B)
# Noisy mean
pnoisy = logistic(X, B, noise=np.random.normal(loc=0., scale=1., size=len(x)))
# dichotomize pnoisy -- sample 0/1 with probability pnoisy
dichot = np.random.binomial(1, pnoisy)

pd.Series(p, index=x).plot(style='-')
pd.Series(pnoisy, index=x).plot(style='.')
pd.Series(dichot, index=x).plot(style='.')
plt.show()
Here we have three potential targets for logistic regression: p, which is the true/target proportion/probability; pnoisy, which is p with normal noise added on the log-odds scale; and dichot, which is pnoisy treated as the parameter of a Bernoulli distribution and sampled from that. You should test all 3 -- I found some open-source LR implementations can't fit p.
Depending on your application, you may prefer pnoisy.
In practice, you should also consider how the noise is likely to be shaped in your target application and try to emulate that.

4,516 | Why do we use ReLU in neural networks and how do we use it?
The ReLU function is $f(x)=\max(0, x).$ Usually this is applied element-wise to the output of some other function, such as a matrix-vector product. In MLP usages, rectifier units replace all other activation functions except perhaps the readout layer. But I suppose you could mix-and-match them if you'd like.
One way ReLUs improve neural networks is by speeding up training. The gradient computation is very simple (either 0 or 1 depending on the sign of $x$). Also, the computational step of a ReLU is easy: any negative elements are set to 0.0 -- no exponentials, no multiplication or division operations.
Gradients of logistic and hyperbolic tangent networks are smaller than the positive portion of the ReLU. This means that the positive portion is updated more rapidly as training progresses. However, this comes at a cost. The 0 gradient on the left-hand side has its own problem, called "dead neurons," in which a gradient update sets the incoming values to a ReLU such that the output is always zero; modified ReLU units such as ELU (or Leaky ReLU, or PReLU, etc.) can ameliorate this.
$\frac{d}{dx}\text{ReLU}(x)=1 \;\forall x > 0$. By contrast, the gradient of a sigmoid unit is at most $0.25$; on the other hand, $\tanh$ fares better for inputs in a region near 0, since $0.25 < \frac{d}{dx}\tanh(x) \le 1 \;\forall x \in [-1.31, 1.31]$ (approximately).
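A tiny base-R sketch of the quantities above -- the ReLU, its 0/1 (sub)gradient, and the logistic-sigmoid gradient, which never exceeds 0.25:

relu      <- function(x) pmax(0, x)
relu_grad <- function(x) as.numeric(x > 0)              # subgradient: 0 for x <= 0, 1 for x > 0
sigmoid   <- function(x) 1 / (1 + exp(-x))
sig_grad  <- function(x) sigmoid(x) * (1 - sigmoid(x))  # peaks at 0.25 at x = 0

x <- c(-2, -0.5, 0, 0.5, 2)
round(rbind(x, relu = relu(x), relu_grad = relu_grad(x), sigmoid_grad = sig_grad(x)), 3)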

4,517 | Why do we use ReLU in neural networks and how do we use it?
One important thing to point out is that ReLU is idempotent. Given that ReLU is $\rho(x) = \max(0, x)$, it's easy to see that $\rho \circ \rho \circ \rho \circ \dots \circ \rho = \rho$ is true for any finite composition. This property is very important for deep neural networks, because each layer in the network applies a nonlinearity. Now, let's apply two sigmoid-family functions to the same input repeatedly 1-3 times:
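(The original figure is not reproduced here; the following base-R sketch makes the same point numerically.)

sigmoid <- function(x) 1 / (1 + exp(-x))
relu    <- function(x) pmax(0, x)

x_sig <- x_tanh <- x_relu <- c(-2, -0.5, 0.5, 2)
for (n in 1:3) {
  x_sig  <- sigmoid(x_sig)   # n-fold composition of the logistic sigmoid
  x_tanh <- tanh(x_tanh)     # n-fold composition of tanh
  x_relu <- relu(x_relu)     # n-fold composition of ReLU: unchanged after n = 1
  cat("n =", n,
      "| sigmoid:", round(x_sig, 2),
      "| tanh:", round(x_tanh, 2),
      "| relu:", x_relu, "\n")
}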
You can immediately see that sigmoid functions "squash" their inputs resulting in the vanishing gradient problem: derivatives approach zero as $n$ (the number of repeated applications) approaches infinity.
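The figure that accompanied this answer is not reproduced here. As a rough stand-in (an added sketch, not the author's code), the snippet below applies sigmoid, tanh and ReLU to the same inputs 1-3 times and prints how the squashing functions shrink the output range while the ReLU output is unchanged after the first application (idempotence).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(x, 0.0)

x = np.linspace(-5.0, 5.0, 1001)

for name, f in [("sigmoid", sigmoid), ("tanh", np.tanh), ("relu", relu)]:
    y = x.copy()
    for n in range(1, 4):            # apply the function 1-3 times
        y = f(y)
        print(f"{name}, n={n}: output range [{y.min():.3f}, {y.max():.3f}]")
```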
4,518 | Why do we use ReLU in neural networks and how do we use it? | Why do we use ReLUs? We use ReLUs for the same reason we use any other non-linear activation function: To achieve a non-linear transformation of the data.
Why do we need non-linear transformations? We apply non-linear transformations in the hope that the transformed data will be (close to) linear (for regression) or (close to) linearly separable (for classification). Drawing a linear function through non-linearly transformed data is equivalent to drawing a non-linear function through original data.
Why are ReLUs better than other activation functions? They are simple, fast to compute, and don't suffer from vanishing gradients, like sigmoid functions (logistic, tanh, erf, and similar). The simplicity of implementation makes them suitable for use on GPUs, which are very common today due to being optimised for matrix operations (which are also needed for 3D graphics).
Why do we need matrix operations in neural networks?: It's a compact and computationally efficient way of propagating the signals between the layers (multiplying the output of the previous layer with the weight matrix).
Isn't softmax an activation function for neural networks? Softmax is not really an activation function of a single neuron, but a way of normalising outputs of multiple neurons. It is usually used in the output layer, to enforce the sum of outputs to be one, so that they can be interpreted as probabilities. You could also use it in hidden layers, to enforce the outputs to be in a limited range, but other approaches, like batch normalisation, are better suited for that purpose.
P.S. (1) ReLU stands for "rectified linear unit", so, strictly speaking, it is a neuron with a (half-wave) rectified-linear activation function. But people usually mean the activation function when they talk about ReLUs.
P.S. (2) Passing the output of softmax to a ReLU doesn't have any effect because softmax produces only non-negative values, in range $[0, 1]$, where ReLU acts as identity function, i.e. doesn't change them.
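A small added NumPy check (not part of the original answer) of P.S. (2): because softmax outputs lie in $[0, 1]$, passing them through a ReLU changes nothing.

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; outputs are in [0, 1] and sum to 1.
    e = np.exp(z - np.max(z))
    return e / e.sum()

def relu(z):
    return np.maximum(z, 0.0)

z = np.array([2.0, -1.0, 0.5, 3.0])   # arbitrary logits
p = softmax(z)

print(p, p.sum())                      # probabilities summing to one
print(np.allclose(relu(p), p))         # True: ReLU acts as the identity here
```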
4,519 | Why do we use ReLU in neural networks and how do we use it? | ReLU is a literal switch. With an electrical switch 1 volt in gives 1 volt out, n volts in gives n volts out when on. On/Off when you decide to switch at zero gives exactly the same graph as ReLU.
The weighted sum (dot product) of a number of weighted sums is still a linear system.
For a particular input the ReLU switches are individually on or off. That results in a particular linear projection from the input to the output, as various weighted sums of weighted sums of ... are connected together by the switches.
For a particular input and a particular output neuron there is a compound system of weighted sums that actually can be summarized to a single effective weighted sum.
Since ReLU switches state at zero there are no sudden discontinuities in the output for gradual changes in the input.
There are other numerically efficient weighted sum (dot product) algorithms around like the FFT and Walsh Hadamard transform. There is no reason you can't incorporate those into a ReLU-based neural network and benefit from the computational gains.
(eg. Fixed filter bank neural networks.)
4,520 | Why do we use ReLU in neural networks and how do we use it? | ReLU is the function max(x, 0) applied to an input x, e.g. a matrix from a convolved image. ReLU then sets all negative values in the matrix x to zero and all other values are kept constant.
ReLU is computed after the convolution and is a nonlinear activation function like tanh or sigmoid.
Softmax is a classifier at the end of the neural network. That is logistic regression used to normalize outputs to values between 0 and 1. (An alternative here is an SVM classifier.)
CNN Forward Pass e.g.: input->conv->ReLU->Pool->conv->ReLU->Pool->FC->softmax
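As an added illustration of this forward-pass order (a toy sketch with made-up shapes and random weights, not the original answer's code): convolution, then ReLU, then 2x2 max pooling, then a fully connected layer and softmax.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernel):
    # 'Valid' 2D cross-correlation with explicit loops (slow but clear).
    H, W = img.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kH, j:j + kW] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool2(x):
    # 2x2 max pooling (assumes even spatial dimensions).
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

img = rng.normal(size=(9, 9))          # toy single-channel "image"
kernel = rng.normal(size=(2, 2))       # toy convolution filter
fc_weights = rng.normal(size=(3, 16))  # fully connected layer: 16 = 4*4 pooled features

feature_map = relu(conv2d(img, kernel))   # conv -> ReLU: negatives are zeroed
pooled = maxpool2(feature_map)            # 8x8 -> 4x4
logits = fc_weights @ pooled.ravel()      # FC layer
print(softmax(logits))                    # class "probabilities"
```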
4,521 | Why do we use ReLU in neural networks and how do we use it? | ReLU is probably one of the simplest nonlinear functions possible. A step function is simpler. However, a step function has a first derivative (gradient) of zero everywhere except at one point, at which it has an infinite gradient. ReLU has a finite derivative (gradient) everywhere. It has an infinite second derivative at one point.
Feed-forward networks are trained by looking for a zero gradient. The important thing here is that there are a lot of first derivatives to calculate in a large net's backpropagation routine, and it helps that they are fast to compute, as ReLU's are. The second point is that, unlike the step function, ReLU's gradients are always finite and they're not trivial zeros almost everywhere. Finally, we need nonlinear activations for the deep learning net to work well, but that's a different subject.
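An added numerical illustration of this contrast (not the author's code): central finite-difference gradients of a step function are zero almost everywhere, while ReLU has a finite, useful gradient on its positive side.

```python
import numpy as np

def step(x):
    return (x > 0).astype(float)

def relu(x):
    return np.maximum(x, 0.0)

def numerical_grad(f, x, h=1e-4):
    return (f(x + h) - f(x - h)) / (2 * h)

x = np.array([-2.0, -0.5, 0.5, 2.0])
print(numerical_grad(step, x))   # ~0 everywhere away from the jump at 0
print(numerical_grad(relu, x))   # ~0 on the left, ~1 on the right
```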
4,522 | Regularization methods for logistic regression | Yes, Regularization can be used in all linear methods, including both regression and classification. I would like to show you that there is not too much difference between regression and classification: the only difference is the loss function.
Specifically, there are three major components of a linear method: the loss function, the regularization, and the algorithm. The loss function plus the regularization term is the objective function of the problem in optimization form, and the algorithm is the way to solve it (the objective function is convex; we will not discuss that in this post).
In the loss function setting, we can have different losses in both the regression and classification cases. For example, least squares and least absolute deviation loss can be used for regression. Their math representations are $L(\hat y,y)=(\hat y -y)^2$ and $L(\hat y,y)=|\hat y -y|$. (The function $L( \cdot )$ is defined on two scalars, $y$ is the ground truth value and $\hat y$ is the predicted value.)
On the other hand, logistic loss and hinge loss can be used for classification. Their math representations are $L(\hat y, y)=\log (1+ \exp(-\hat y y))$ and $L(\hat y, y)= (1- \hat y y)_+$. (Here, $y$ is the ground truth label in $\{-1,1\}$ and $\hat y$ is predicted "score". The definition of $\hat y$ is a little bit unusual, please see the comment section.)
In regularization setting, you mentioned about the L1 and L2 regularization, there are also other forms, which will not be discussed in this post.
Therefore, at a high level, a linear method is
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} L(w^{\top} x,y)+\lambda h(w)$$
If you replace the Loss function from regression setting to logistic loss, you get the logistic regression with regularization.
For example, in ridge regression, the optimization problem is
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} (w^{\top} x-y)^2+\lambda w^\top w$$
If you replace the loss function with logistic loss, the problem becomes
$$\underset{w}{\text{minimize}}~~~ \sum_{x,y} \log(1+\exp{(-w^{\top}x \cdot y)})+\lambda w^\top w$$
Here you have the logistic regression with L2 regularization.
This is how it looks in a toy synthesized binary data set. The left figure is the data with the linear model (decision boundary). The right figure is the objective function contour (the x and y axes represent the values of the 2 parameters). The data set was generated from two Gaussians, and we fit the logistic regression model without intercept, so there are only two parameters we can visualize in the right sub-figure.
The blue lines are the logistic regression without regularization and the black lines are logistic regression with L2 regularization. The blue and black points in right figure are optimal parameters for objective function.
In this experiment, we set a large $\lambda$, so you can see the two coefficients are close to $0$. In addition, from the contour, we can observe that the regularization term dominates and the whole function is like a quadratic bowl.
Here is another example with L1 regularization.
Note that the purpose of this experiment is to show how the regularization works in logistic regression, not to argue that the regularized model is better.
Here are some animations about L1 and L2 regularization and how it affects the logistic loss objective. In each frame, the title suggests the regularization type and $\lambda$, the plot is objective function (logistic loss + regularization) contour. We increase the regularization parameter $\lambda$ in each frame and the optimal solution will shrink to $0$ frame by frame.
Some notation comments. $w$ and $x$ are column vectors, and $y$ is a scalar. So the linear model is $\hat y = f(x)=w^\top x$. If we want to include the intercept term, we can append $1$ as a column to the data.
In regression setting, $y$ is a real number and in classification setting $y \in \{-1,1\}$.
Note it is a little bit strange for the definition of $\hat y=w^{\top} x$ in the classification setting, since most people use $\hat y$ to represent a predicted value of $y$. In our case, $\hat y = w^{\top} x$ is a real number, but not in $\{-1,1\}$. We use this definition of $\hat y$ because we can simplify the notation on logistic loss and hinge loss.
Also note that, in some other notation system, $y \in \{0,1\}$, the form of the logistic loss function would be different.
The code can be found in my other answer here.
Is there any intuitive explanation of why logistic regression will not work for perfect separation case? And why adding regularization will fix it?
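As an added sketch mirroring the objective above (the original code is linked from another answer; the synthetic data and $\lambda$ values here are made up), this fits L2-regularized logistic regression with labels in $\{-1,1\}$ by minimizing $\sum_{x,y}\log(1+\exp(-w^{\top}x \cdot y))+\lambda w^\top w$ and shows the coefficients shrinking as $\lambda$ grows.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic binary data from two Gaussians, labels in {-1, +1}, no intercept.
n = 100
X = np.vstack([rng.normal(loc=[1.5, 1.5], size=(n, 2)),
               rng.normal(loc=[-1.5, -1.5], size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def objective(w, lam):
    margins = y * (X @ w)
    # np.logaddexp(0, -m) = log(1 + exp(-m)), computed stably.
    logistic_loss = np.sum(np.logaddexp(0.0, -margins))
    return logistic_loss + lam * (w @ w)

for lam in [0.0, 1.0, 100.0]:
    w_hat = minimize(objective, x0=np.zeros(2), args=(lam,)).x
    print(f"lambda={lam:6.1f}  w={w_hat}")
# As lambda grows, the fitted coefficients shrink towards 0, as in the contour plots.
```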
4,523 | Regularization methods for logistic regression | A shrinkage/regularization method that was originally proposed for logistic regression based on considerations of higher-order asymptotics was Firth logistic regression... some while before all of these talks about lasso and whatnot started, although after ridge regression had risen and subsided in popularity through the 1970s. It amounted to adding a penalty term to the likelihood,
$$
l^*(\beta) = l(\beta) + \frac12 \ln |i(\beta)|
$$
where $i(\beta) = \frac1n \sum_i p_i (1-p_i) x_i x_i'$ is the information matrix normalized per observation. Firth demonstrated that this correction has a Bayesian interpretation in that it corresponds to a Jeffreys prior shrinking towards zero. The excitement it generated was due to it helping fix the problem of perfect separation: say a dataset $\{(y_i,x_i)\} = \{(1,1),(0,0)\}$ would nominally produce infinite ML estimates, and glm in R is still susceptible to the problem, I believe.
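A rough numerical sketch of the penalized likelihood above (added for illustration, not from the original answer; it uses the toy separated dataset with an intercept column): the ordinary maximum-likelihood estimates drift off towards infinity, while the Firth-penalized maximizer stays finite.

```python
import numpy as np
from scipy.optimize import minimize

# Toy perfectly separated data: (y, x) = (1, 1) and (0, 0), with an intercept column.
X = np.array([[1.0, 1.0],
              [1.0, 0.0]])
y = np.array([1.0, 0.0])
n = len(y)

def neg_penalized_loglik(beta, penalized=True):
    eta = X @ beta
    # Stable Bernoulli log-likelihood: sum(y*eta - log(1 + exp(eta))).
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
    if penalized:
        # i(beta) = (1/n) * sum_i p_i (1 - p_i) x_i x_i'
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1 - p)
        info = (X * W[:, None]).T @ X / n
        loglik += 0.5 * np.linalg.slogdet(info)[1]
    return -loglik

# The unpenalized fit will not really converge: estimates grow without bound
# (a convergence warning from the optimizer is expected here).
mle = minimize(neg_penalized_loglik, np.zeros(2), args=(False,), method="BFGS")
firth = minimize(neg_penalized_loglik, np.zeros(2), args=(True,), method="BFGS")
print("unpenalized 'MLE' (diverging):", mle.x)
print("Firth-penalized estimate:     ", firth.x)
```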
4,524 | Regularization methods for logistic regression | Yes, it is applicable to logistic regression. In R, using glmnet, you simply specify the appropriate family, which is "binomial" for logistic regression. There are a couple of others (poisson, multinomial, etc.) that you can specify depending on your data and the problem you are addressing.
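The answer is about R's glmnet; as a rough Python analogue (an added sketch with made-up data, not part of the original answer), scikit-learn's LogisticRegression exposes the same idea through its penalty argument and the inverse regularization strength C.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(size=200) > 0).astype(int)

# L2-penalized logistic regression; C is the inverse of the regularization strength.
ridge_logit = LogisticRegression(penalty="l2", C=0.5).fit(X, y)

# L1-penalized (lasso-like) logistic regression needs a solver that supports it.
lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)

print(ridge_logit.coef_)
print(lasso_logit.coef_)   # some coefficients may be exactly zero
```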
4,525 | What is the distribution of the sum of non i.i.d. gaussian variates? | See my comment on probabilityislogic's answer to this question. Here,
$$
\begin{align*}
X + Y &\sim N(\mu_X + \mu_Y,\; \sigma_X^2 + \sigma_Y^2 + 2\sigma_{X,Y})\\
aX + bY &\sim N(a\mu_X + b\mu_Y,\; a^2\sigma_X^2 + b^2\sigma_Y^2 + 2ab\sigma_{X,Y})
\end{align*}
$$
where $\sigma_{X,Y}$ is the covariance of $X$ and $Y$. Nobody writes the off-diagonal entries in the covariance matrix as $\sigma_{xy}^2$ as you have done. The off-diagonal entries are covariances, which can be negative.
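An added Monte Carlo check of the first formula (the means and covariance below are made up for illustration): the empirical mean and variance of $X+Y$ match $\mu_X+\mu_Y$ and $\sigma_X^2+\sigma_Y^2+2\sigma_{X,Y}$.

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, -0.8],       # var(X)=2, var(Y)=1, cov(X,Y)=-0.8
                  [-0.8, 1.0]])

samples = rng.multivariate_normal(mu, Sigma, size=1_000_000)
s = samples.sum(axis=1)              # X + Y

print(s.mean(), mu.sum())                                    # both ~ -1.0
print(s.var(), Sigma[0, 0] + Sigma[1, 1] + 2 * Sigma[0, 1])  # both ~ 1.4
```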
4,526 | What is the distribution of the sum of non i.i.d. gaussian variates? | @dilip's answer is sufficient, but I just thought I'd add some details on how you get to the result. We can use the method of characteristic functions. For any $d$-dimensional multivariate normal distribution $X\sim N_{d}(\mu,\Sigma)$ where $\mu=(\mu_1,\dots,\mu_d)^T$ and $\Sigma_{jk}=cov(X_j,X_k)\;\;j,k=1,\dots,d$, the characteristic function is given by:
$$\varphi_{X}({\bf{t}})=E\left[\exp(i{\bf{t}}^TX)\right]=\exp\left(i{\bf{t}}^T\mu-\frac{1}{2}{\bf{t}}^T\Sigma{\bf{t}}\right)$$
$$=\exp\left(i\sum_{j=1}^{d}t_j\mu_j-\frac{1}{2}\sum_{j=1}^{d}\sum_{k=1}^{d}t_jt_k\Sigma_{jk}\right)$$
For a one-dimensional normal variable $Y\sim N_1(\mu_Y,\sigma_Y^2)$ we get:
$$\varphi_Y(t)=\exp\left(it\mu_Y-\frac{1}{2}t^2\sigma_Y^2\right)$$
Now, suppose we define a new random variable $Z={\bf{a}}^TX=\sum_{j=1}^{d}a_jX_j$. For your case, we have $d=2$ and $a_1=a_2=1$. The characteristic function for $Z$ is basically the same as that for $X$.
$$\varphi_{Z}(t)=E\left[\exp(itZ)\right]=E\left[\exp(it{\bf{a}}^TX)\right]=\varphi_{X}(t{\bf{a}})$$
$$=\exp\left(it\sum_{j=1}^{d}a_j\mu_j-\frac{1}{2}t^2\sum_{j=1}^{d}\sum_{k=1}^{d}a_ja_k\Sigma_{jk}\right)$$
If we compare this characteristic function with the characteristic function $\varphi_Y(t)$ we see that they are the same, but with $\mu_Y$ being replaced by $\mu_Z=\sum_{j=1}^{d}a_j\mu_j$ and with $\sigma_Y^2$ being replaced by $\sigma^2_Z=\sum_{j=1}^{d}\sum_{k=1}^{d}a_ja_k\Sigma_{jk}$. Hence because the characteristic function of $Z$ is equivalent to the characteristic function of $Y$, the distributions must also be equal. Hence $Z$ is normally distributed. We can simplify the expression for the variance by noting that $\Sigma_{jk}=\Sigma_{kj}$ and we get:
$$\sigma^2_Z=\sum_{j=1}^{d}a_j^2\Sigma_{jj}+2\sum_{j=2}^{d}\sum_{k=1}^{j-1}a_ja_k\Sigma_{jk}$$
This is also the general formula for the variance of a linear combination of any set of random variables, independent or not, normal or not, where $\Sigma_{jj}=var(X_j)$ and $\Sigma_{jk}=cov(X_j,X_k)$. Now if we specialise to $d=2$ and $a_1=a_2=1$, the above formula becomes:
$$\sigma^2_Z=\sum_{j=1}^{2}(1)^2\Sigma_{jj}+2\sum_{j=2}^{2}\sum_{k=1}^{j-1}(1)(1)\Sigma_{jk}=\Sigma_{11}+\Sigma_{22}+2\Sigma_{21}$$
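An added numerical check (not part of the original answer) that the simplified double sum agrees with the compact quadratic form $\sigma^2_Z={\bf a}^\top\Sigma\,{\bf a}$; the covariance matrix below is randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
Sigma = A @ A.T                     # a random valid covariance matrix
a = rng.normal(size=d)

quad_form = a @ Sigma @ a           # a' Sigma a

# The double-sum form from the answer, written with 0-based indices.
double_sum = sum(a[j] ** 2 * Sigma[j, j] for j in range(d)) \
    + 2 * sum(a[j] * a[k] * Sigma[j, k] for j in range(1, d) for k in range(j))

print(np.isclose(quad_form, double_sum))   # True
```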
4,527 | How to calculate relative error when the true value is zero? | There are many alternatives, depending on the purpose.
A common one is the "Relative Percent Difference," or RPD, used in laboratory quality control procedures. Although you can find many seemingly different formulas, they all come down to comparing the difference of two values to their average magnitude:
$$d_1(x,y) = \frac{x - y}{(|x| + |y|)/2} = 2\frac{x - y}{|x| + |y|}.$$
This is a signed expression, positive when $x$ exceeds $y$ and negative when $y$ exceeds $x$. Its value always lies between $-2$ and $2$. By using absolute values in the denominator it handles negative numbers in a reasonable way. Most of the references I can find, such as the New Jersey DEP Site Remediation Program Data Quality Assessment and Data Usability Evaluation Technical Guidance, use the absolute value of $d_1$ because they are interested only in the magnitude of the relative error.
A Wikipedia article on Relative Change and Difference observes that
$$d_\infty(x,y) = \frac{|x - y|}{\max(|x|, |y|)}$$
is frequently used as a relative tolerance test in floating point numerical algorithms. The same article also points out that formulas like $d_1$ and $d_\infty$ may be generalized to
$$d_f(x,y) = \frac{x - y}{f(x,y)}$$
where the function $f$ depends directly on the magnitudes of $x$ and $y$ (usually assuming $x$ and $y$ are positive). As examples it offers their max, min, and arithmetic mean (with and without taking the absolute values of $x$ and $y$ themselves), but one could contemplate other sorts of averages such as the geometric mean $\sqrt{|x y|}$, the harmonic mean $2/(1/|x| + 1/|y|)$ and $L^p$ means $((|x|^p + |y|^p)/2)^{1/p}$. ($d_1$ corresponds to $p=1$ and $d_\infty$ corresponds to the limit as $p\to \infty$.) One might choose an $f$ based on the expected statistical behavior of $x$ and $y$. For instance, with approximately lognormal distributions the geometric mean would be an attractive choice for $f$ because it is a meaningful average in that circumstance.
Most of these formulas run into difficulties when the denominator equals zero. In many applications that either is not possible or it is harmless to set the difference to zero when $x=y=0$.
Note that all these definitions share a fundamental invariance property: whatever the relative difference function $d$ may be, it does not change when the arguments are uniformly rescaled by $\lambda \gt 0$:
$$d(x,y) = d(\lambda x, \lambda y).$$
It is this property that allows us to consider $d$ to be a relative difference. Thus, in particular, a non-invariant function like
$$d(x,y) =?\ \frac{|x-y|}{1 + |y|}$$
simply does not qualify. Whatever virtues it might have, it does not express a relative difference.
The story does not end here. We might even find it fruitful to push the implications of invariance a little further.
The set of all ordered pairs of real numbers $(x,y)\ne (0,0)$ where $(x,y)$ is considered to be the same as $(\lambda x, \lambda y)$ is the Real Projective Line $\mathbb{RP}^1$. In both a topological sense and an algebraic sense, $\mathbb{RP}^1$ is a circle. Any $(x,y)\ne (0,0)$ determines a unique line through the origin $(0,0)$. When $x\ne 0$ its slope is $y/x$; otherwise we may consider its slope to be "infinite" (and either negative or positive). A neighborhood of this vertical line consists of lines with extremely large positive or extremely large negative slopes. We may parameterize all such lines in terms of their angle $\theta = \arctan(y/x)$, with $-\pi/2 \lt \theta \le \pi/2$. Associated with every such $\theta$ is a point on the circle,
$$(\xi, \eta) = (\cos(2\theta), \sin(2\theta)) = \left(\frac{x^2-y^2}{x^2+y^2}, \frac{2xy}{x^2+y^2}\right).$$
Any distance defined on the circle can therefore be used to define a relative difference.
As an example of where this can lead, consider the usual (Euclidean) distance on the circle, whereby the distance between two points is the size of the angle between them. The relative difference is least when $x=y$, corresponding to $2\theta = \pi/2$ (or $2\theta = -3\pi/2$ when $x$ and $y$ have opposite signs). From this point of view a natural relative difference for positive numbers $x$ and $y$ would be the distance to this angle:
$$d_S(x,y) = \left|2\arctan\left(\frac{y}{x}\right) - \pi/2\right|.$$
To first order, this is the relative distance $|x-y|/|y|$--but it works even when $y=0$. Moreover, it doesn't blow up, but instead (as a signed distance) is limited between $-\pi/2$ and $\pi/2$, as this graph indicates:
This hints at how flexible the choices are when selecting a way to measure relative differences.
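For concreteness, here is an added Python sketch (not the answer's code) of three of the relative differences discussed above: the signed $d_1$ (RPD), the tolerance-style $d_\infty$, and the angle-based $d_S$. The zero-denominator convention and the use of arctan2 are implementation choices made here.

```python
import numpy as np

def d1(x, y):
    # Relative Percent Difference, signed, in [-2, 2]; set to 0 when x = y = 0.
    denom = (abs(x) + abs(y)) / 2.0
    return 0.0 if denom == 0 else (x - y) / denom

def d_inf(x, y):
    denom = max(abs(x), abs(y))
    return 0.0 if denom == 0 else abs(x - y) / denom

def d_S(x, y):
    # Angle-based relative difference |2*arctan(y/x) - pi/2|, bounded by pi/2;
    # arctan2 handles x = 0 gracefully (intended for non-negative x, y).
    return abs(2.0 * np.arctan2(y, x) - np.pi / 2.0)

for x, y in [(100.0, 90.0), (1.0, 0.0), (0.0, 1.0)]:
    print(x, y, d1(x, y), d_inf(x, y), d_S(x, y))
```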
4,528 | How to calculate relative error when the true value is zero? | First, note that you typically take the absolute value in computing the relative error.
A common solution to the problem is to compute
$$\text{relative error}=\frac{\left| x_{\text{true}}- x_{\text{test}} \right|}{1+\left|x_{\text{true}} \right|} .$$
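An added one-function sketch of this formula; note that, as discussed in another answer in this thread, the $1+\left|x_{\text{true}}\right|$ denominator is a pragmatic fix rather than a scale-invariant relative error.

```python
def relative_error(x_true, x_test):
    # |x_true - x_test| / (1 + |x_true|): finite even when x_true = 0.
    return abs(x_true - x_test) / (1.0 + abs(x_true))

print(relative_error(0.0, 0.1))   # 0.1
print(relative_error(10.0, 9.0))  # ~0.0909
```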
4,529 | How to calculate relative error when the true value is zero? | Finding MAPE,
It is a very debatable topic and many open-source contributors have discussed it. The most efficient approach so far is the one followed by the developers. Please refer to this PR to know more.
4,530 | How to calculate relative error when the true value is zero? | I was a bit confused on this for a while. In the end, it's because if you are trying to measure relative error with respect to zero then you are trying to force something that simply does not exist.
If you think about it, you're comparing apples to oranges when you compare relative error to the error measured from zero, because the error measured from zero is equivalent to the measured value (that's why you get 100% error when you divide by the test number).
For example, consider measuring error of gauge pressure (the relative pressure from atmospheric) vs absolute pressure. Say that you use an instrument to measure the gauge pressure at perfect atmospheric conditions, and your device measured atmospheric pressure spot on so that it should record 0% error. Using the equation you provided, and first assuming we used the measured gauge pressure, to calculate relative error:
$$ \text{relative error} = \frac{P_{gauge, true}-P_{gauge, test}}{P_{gauge, true}} $$
Then $P_{gauge, true}=0$ and $P_{gauge,test}=0$ and you do not get 0% error, instead it is undefined. That is because the actual percent error should be using the absolute pressure values like this:
$$ \text{relative error} = \frac{P_{absolute, true}-P_{absolute, test}}{P_{absolute, true}} $$
Now $P_{absolute, true}=1atm$ and $P_{absolute,test}=1atm$ and you get 0% error. This is the proper application of relative error. The original application that used gauge pressure was more like "relative error of the relative value" which is a different thing than "relative error". You need to convert the gauge pressure to absolute before measuring the relative error.
The solution to your question is to make sure you are dealing with absolute values when measuring relative error, so that zero is not a possibility. Then you are actually getting relative error, and can use that as an uncertainty or a metric of your real percent error. If you must stick with relative values, then you should be using absolute error, because the relative (percent) error will change depending on your reference point.
It's hard to put a concrete definition on 0...
"Zero is the integer denoted 0 that, when used as a counting number, means that no objects are present." - Wolfram MathWorld http://mathworld.wolfram.com/Zero.html
Feel free to nit pick, but zero essentially means nothing, it is not there. This is why it does not make sense to use gauge pressure when calculating relative error. Gauge pressure, though useful, assumes there is nothing at atmospheric pressure. We know this is not the case though, because it has an absolute pressure of 1 atm. Thus, the relative error with respect to nothing, just does not exist, it's undefined.
Feel free to argue against this, but simply put: any quick fixes, such as adding one to the bottom value, are faulty and not accurate. They can still be useful if you are simply trying to minimize error. If you are trying to make accurate measurements of uncertainty, though, not so much...
4,531 | What are the values p, d, q, in ARIMA? | What does ARIMA(1, 0, 12) mean?
Specifically for your model, ARIMA(1, 0, 12) means that you are describing some response variable (Y) by combining a 1st order Auto-Regressive model and a 12th order Moving Average model. A good way to think about it is (AR, I, MA). This makes your model look like the following, in simple terms:
Y = (Auto-Regressive Parameters) + (Moving Average Parameters)
The 0 between the 1 and the 12 represents the 'I' part of the model (the Integrated part) and it signifies a model where you're differencing the response variable data - this can be done with non-stationary data and it doesn't seem like you're dealing with that, so you can just ignore it.
The link that DanTheMan posted shows a nice mix of models that could help you understand yours by comparing it to those.
What values can be assigned to p, d, q?
Lots of different whole numbers. There are diagnostic tests you can do to try to find the best values of p,d,q (see part 3).
What is the process to find the values of p, d, q?
There are a number of ways, and I don't intend this to be exhaustive:
look at an autocorrelation graph of the data (will help if Moving Average (MA) model is appropriate)
look at a partial autocorrelation graph of the data (will help if AutoRegressive (AR) model is appropriate)
look at extended autocorrelation chart of the data (will help if a combination of AR and MA are needed)
try Akaike's Information Criterion (AIC) on a set of models and investigate the models with the lowest AIC values
try the Schwarz Bayesian Information Criterion (BIC) and investigate the models with the lowest BIC values
Without knowing how much more you need to know, I can't go too much farther, but if you have more questions, feel free to ask and maybe I, or someone else, can help.
* Edit: All of the ways to find p, d, q that I listed here can be found in the R package TSA if you are familiar with R.
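The edit above points to the R package TSA; as a rough Python analogue (an added sketch with simulated stand-in data, not the author's code), statsmodels can fit a small grid of ARIMA(p, d, q) models and compare their AIC/BIC values.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)

# Simulate a toy AR(1) series as stand-in data.
n, phi = 300, 0.6
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + rng.normal()

results = []
for p in range(3):
    for q in range(3):
        # Some orders may emit convergence warnings; that is expected here.
        fit = ARIMA(y, order=(p, 0, q)).fit()
        results.append(((p, 0, q), fit.aic, fit.bic))

for order, aic, bic in sorted(results, key=lambda r: r[1]):
    print(order, round(aic, 1), round(bic, 1))
# The lowest-AIC/BIC orders are candidate models to investigate further.
```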
4,532 | What are the values p, d, q, in ARIMA? | order(p,d,q) means that you have an ARIMA(p, d, q) model: $\phi(B)(1-B)^d X_t=\theta(B)Z_t$, where $B$ is the lag operator, $\phi(B)=1-\phi_1B-\dots-\phi_pB^p$ and $\theta(B)=1+\theta_1B+\dots+\theta_qB^q$.
The best way to find p, d, q values in R is to use the auto.arima function from library(forecast). For example, auto.arima(x, ic = "aic"). For more information look up ?auto.arima.
4,533 | What are the values p, d, q, in ARIMA? | Simply put the Autoregressive Integrated Moving Average (ARIMA) tries to model a time series where your time series in question, y, can be explained by its own lagged values (Autoregressive part) and error terms (Moving Average part). The "Integrated" part of the model (the "I" in "ARIMA") refers to how many times the series has been differenced to achieve stationarity.
Stationarity is a must before you can model your data: what stationarity refers to is constant mean and variance. Think of these two moments as not being time dependent. The reason for this is quite simple: it's difficult to model something which changes over time.
So your ARMA model of order (1,12) is an AR(1)+MA(12) model: it is modelled by 1 lagged value and 12 error terms. I can't speak about your data but I think it sounds like a lot of parameters (possibly overfitted).
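To make the differencing ('I') step and the parameter count concrete, here is a small hedged R sketch on simulated data (nothing below comes from the original poster's series):
set.seed(1)
rw <- cumsum(rnorm(300))              # a random walk: non-stationary, its mean wanders
dz <- diff(rw)                        # one difference (d = 1) is enough to make it stationary here
fit <- arima(dz, order = c(1, 0, 12)) # the AR(1) + MA(12) structure discussed above
length(coef(fit))                     # 14 estimated coefficients (1 AR + 12 MA + intercept) - a lot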
Hope this helps. | What are the values p, d, q, in ARIMA? | Simply put the Autoregressive Integrated Moving Average (ARIMA) tries to model a time series where your time series in question, y, can be explained by its own lagged values (Autoregressive part) and | What are the values p, d, q, in ARIMA?
Simply put the Autoregressive Integrated Moving Average (ARIMA) tries to model a time series where your time series in question, y, can be explained by its own lagged values (Autoregressive part) and error terms (Moving Average part). The "Integrated" part of the model (the "I" in "ARIMA") refers to how many times the series has been differenced to achieve stationarity.
Stationarity is a must before you can model your data: what stationarity refers to is constant mean and variance. Think of these two moments as not being time dependent. The reason for this is quite simple: it's difficult to model something which changes over time.
So your ARMA model of order (1,12) is an AR(1)+MA(12) model: it is modelled by 1 lagged value and 12 error terms. I can't speak about your data but I think it sounds like a lot of parameters (possibly overfitted).
Hope this helps. | What are the values p, d, q, in ARIMA?
Simply put the Autoregressive Integrated Moving Average (ARIMA) tries to model a time series where your time series in question, y, can be explained by its own lagged values (Autoregressive part) and |
4,534 | Standard deviation of standard deviation | Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread, the standard deviation of the sample standard deviation,
$$
s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$
is
$$ {\rm SD}(s) = \sqrt{ E \left( [E(s)- s]^2 \right) } = \sigma \sqrt{ 1 - \frac{2}{n-1} \cdot \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$
where $\Gamma(\cdot)$ is the gamma function, $n$ is the sample size and $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is the sample mean. Since $s$ is a consistent estimator of $\sigma$, this suggests replacing $\sigma$ with $s$ in the equation above to get a consistent estimator of ${\rm SD}(s)$.
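As a hedged numeric sanity check of this formula, you can compare it against a simple Monte Carlo estimate in R (the sample size, $\sigma$ and seed below are arbitrary choices of mine):
n <- 10; sigma <- 2
sd_s_exact <- sigma * sqrt(1 - (2 / (n - 1)) * (gamma(n / 2) / gamma((n - 1) / 2))^2)
set.seed(1)
sd_s_mc <- sd(replicate(1e5, sd(rnorm(n, 0, sigma))))  # spread of s over repeated samples
c(exact = sd_s_exact, monte_carlo = sd_s_mc)           # the two should agree closely
# for larger n, use exp(lgamma(n/2) - lgamma((n-1)/2)) for the gamma ratio to avoid overflow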
If it is an unbiased estimator you seek, we see in this thread that $ E(s)
= \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $, which, by linearity of expectation, suggests
$$ s \cdot \sqrt{ \frac{n-1}{2} } \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } $$
as an unbiased estimator of $\sigma$. All of this together with linearity of expectation gives an unbiased estimator of ${\rm SD}(s)$:
$$ s \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } \cdot \sqrt{\frac{n-1}{2} - \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$ | Standard deviation of standard deviation | Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread, the standard deviation of the sample standard deviation,
$$
s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$
is | Standard deviation of standard deviation
Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread, the standard deviation of the sample standard deviation,
$$
s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$
is
$$ {\rm SD}(s) = \sqrt{ E \left( [E(s)- s]^2 \right) } = \sigma \sqrt{ 1 - \frac{2}{n-1} \cdot \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$
where $\Gamma(\cdot)$ is the gamma function, $n$ is the sample size and $\overline{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$ is the sample mean. Since $s$ is a consistent estimator of $\sigma$, this suggests replacing $\sigma$ with $s$ in the equation above to get a consistent estimator of ${\rm SD}(s)$.
If it is an unbiased estimator you seek, we see in this thread that $ E(s)
= \sigma \cdot \sqrt{ \frac{2}{n-1} } \cdot \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } $, which, by linearity of expectation, suggests
$$ s \cdot \sqrt{ \frac{n-1}{2} } \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } $$
as an unbiased estimator of $\sigma$. All of this together with linearity of expectation gives an unbiased estimator of ${\rm SD}(s)$:
$$ s \cdot \frac{\Gamma( \frac{n-1}{2} )}{ \Gamma(n/2) } \cdot \sqrt{\frac{n-1}{2} - \left( \frac{ \Gamma(n/2) }{ \Gamma( \frac{n-1}{2} ) } \right)^2 } $$ | Standard deviation of standard deviation
Let $X_1, ..., X_n \sim N(\mu, \sigma^2)$. As shown in this thread, the standard deviation of the sample standard deviation,
$$
s = \sqrt{ \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2 }, $$
is |
4,535 | Standard deviation of standard deviation | Assume you observe $X_1,\dots,X_n$ iid from a normal with mean zero and variance $\sigma^2$. The (empirical) standard deviation is the square root of the estimator $\hat{\sigma}^2$ of $\sigma^2$ (unbiased or not that is not the question). As an estimator (obtained with $X_1,\dots,X_n$), $\hat{\sigma}$ has a variance that can be calculated theoretically. Maybe what you call the standard deviation of standard deviation is actually the square root of the variance of the standard deviation, i.e. $\sqrt{E[(\sigma-\hat{\sigma})^2]}$? It is not an estimator, it is a theoretical quantity (something like $\sigma/\sqrt{n}$ to be confirmed) that can be calculated explicitely ! | Standard deviation of standard deviation | Assume you observe $X_1,\dots,X_n$ iid from a normal with mean zero and variance $\sigma^2$. The (empirical) standard deviation is the square root of the estimator $\hat{\sigma}^2$ of $\sigma^2$ (unbi | Standard deviation of standard deviation
Assume you observe $X_1,\dots,X_n$ iid from a normal with mean zero and variance $\sigma^2$. The (empirical) standard deviation is the square root of the estimator $\hat{\sigma}^2$ of $\sigma^2$ (unbiased or not that is not the question). As an estimator (obtained with $X_1,\dots,X_n$), $\hat{\sigma}$ has a variance that can be calculated theoretically. Maybe what you call the standard deviation of standard deviation is actually the square root of the variance of the standard deviation, i.e. $\sqrt{E[(\sigma-\hat{\sigma})^2]}$? It is not an estimator, it is a theoretical quantity (something like $\sigma/\sqrt{n}$ to be confirmed) that can be calculated explicitely ! | Standard deviation of standard deviation
Assume you observe $X_1,\dots,X_n$ iid from a normal with mean zero and variance $\sigma^2$. The (empirical) standard deviation is the square root of the estimator $\hat{\sigma}^2$ of $\sigma^2$ (unbi |
4,536 | Standard deviation of standard deviation | @Macro provided a great mathematical explanation with equation to compute. Here is a more general explation for less mathematical people.
I think the terminology "SD of SD" is confusing to many. It is easier to think about the confidence interval of a SD. How precise is the standard deviation you compute from a sample? Just by chance you may have happened to obtain data that are closely bunched together, making the sample SD much lower than the population SD. Or you may have randomly obtained values that are far more scattered than the overall population, making the sample SD higher than the population SD.
Interpreting the CI of the SD is straightforward. Start with the customary assumption that your data were randomly and independently sampled from a Gaussian distribution. Now imagine repeating this sampling many times and computing a 95% CI from each sample. You expect 95% of those confidence intervals to include the true population SD.
How wide is the 95% confidence interval of a SD? It depends on sample size (n) of course.
n: 95% CI of SD
2: 0.45*SD to 31.9*SD
3: 0.52*SD to 6.29*SD
5: 0.60*SD to 2.87*SD
10: 0.69*SD to 1.83*SD
25: 0.78*SD to 1.39*SD
50: 0.84*SD to 1.25*SD
100: 0.88*SD to 1.16*SD
500: 0.94*SD to 1.07*SD
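If you want to reproduce multipliers like these yourself, here is a short R sketch; it assumes the usual normal-theory (chi-squared based) confidence interval for an SD, which I believe is what the table above reflects:
ci_factor <- function(n, level = 0.95) {
  a <- (1 - level) / 2
  sqrt((n - 1) / qchisq(c(1 - a, a), df = n - 1))   # (lower, upper) multipliers of the sample SD
}
ci_factor(2)    # roughly 0.45 and 31.9
ci_factor(100)  # roughly 0.88 and 1.16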
Free web calculator | Standard deviation of standard deviation | @Macro provided a great mathematical explanation with equation to compute. Here is a more general explation for less mathematical people.
I think the terminology "SD of SD" is confusing to many. It is | Standard deviation of standard deviation
@Macro provided a great mathematical explanation with equations to compute. Here is a more general explanation for less mathematical people.
I think the terminology "SD of SD" is confusing to many. It is easier to think about the confidence interval of a SD. How precise is the standard deviation you compute from a sample? Just by chance you may have happened to obtain data that are closely bunched together, making the sample SD much lower than the population SD. Or you may have randomly obtained values that are far more scattered than the overall population, making the sample SD higher than the population SD.
Interpreting the CI of the SD is straightforward. Start with the customary assumption that your data were randomly and independently sampled from a Gaussian distribution. Now repeat this sampling many times. You expect 95% of those confidence intervals to include the true population SD.
How wide is the 95% confidence interval of a SD? It depends on sample size (n) of course.
n: 95% CI of SD
2: 0.45*SD to 31.9*SD
3: 0.52*SD to 6.29*SD
5: 0.60*SD to 2.87*SD
10: 0.69*SD to 1.83*SD
25: 0.78*SD to 1.39*SD
50: 0.84*SD to 1.25*SD
100: 0.88*SD to 1.16*SD
500: 0.94*SD to 1.07*SD
Free web calculator | Standard deviation of standard deviation
@Macro provided a great mathematical explanation with equation to compute. Here is a more general explation for less mathematical people.
I think the terminology "SD of SD" is confusing to many. It is |
4,537 | Normalization vs. scaling | I am not aware of an "official" definition and even if there it is, you shouldn't trust it as you will see it being used inconsistently in practice.
This being said, scaling in statistics usually means a linear transformation of the form
$f(x) = ax+b$.
Normalizing can either mean applying a transformation so that your transformed data is roughly normally distributed, but it can also simply mean putting different variables on a common scale. Standardizing, which means subtracting the mean and dividing by the standard deviation, is an example of the latter usage. As you may see it's also an example of scaling. An example for the first would be taking the log for lognormally distributed data.
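A small R illustration of the two readings, on data I simulate here purely for the example:
set.seed(1)
x <- rlnorm(1000)             # made-up lognormal data
z <- (x - mean(x)) / sd(x)    # standardizing: a linear rescaling f(x) = ax + b
w <- log(x)                   # 'normalizing' in the distributional sense
c(mean_z = mean(z), sd_z = sd(z))    # roughly 0 and 1 by construction
hist(w)                              # roughly bell-shaped for lognormal input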
But what you should take away is that when you read it you should look for a more precise description of what the author did. Sometimes you can get it from the context. | Normalization vs. scaling | I am not aware of an "official" definition and even if there it is, you shouldn't trust it as you will see it being used inconsistently in practice.
This being said, scaling in statistics usually mean | Normalization vs. scaling
I am not aware of an "official" definition and even if there it is, you shouldn't trust it as you will see it being used inconsistently in practice.
This being said, scaling in statistics usually means a linear transformation of the form
$f(x) = ax+b$.
Normalizing can either mean applying a transformation so that your transformed data is roughly normally distributed, but it can also simply mean putting different variables on a common scale. Standardizing, which means subtracting the mean and dividing by the standard deviation, is an example of the latter usage. As you may see it's also an example of scaling. An example for the first would be taking the log for lognormally distributed data.
But what you should take away is that when you read it you should look for a more precise description of what the author did. Sometimes you can get it from the context. | Normalization vs. scaling
I am not aware of an "official" definition and even if there it is, you shouldn't trust it as you will see it being used inconsistently in practice.
This being said, scaling in statistics usually mean |
4,538 | Normalization vs. scaling | Scaling is a personal choice about making the numbers feel right, e.g. between zero and one, or one and a hundred. For example converting data given in millimeters to meters because it's more convenient, or imperial to metric.
While normalisation is about scaling to an external 'standard' - the local norm - such as removing the mean value and dividing by the sample standard deviation, e.g. so that your sorted data can be compared with a cumulative normal, or a cumulative Poisson, or whatever.
So if a lecturer or manager wants data 'normalised' it means "re-scale it my way" ;-) | Normalization vs. scaling | Scaling is a personal choice about making the numbers feel right, e.g. between zero and one, or one and a hundred. For example converting data given in millimeters to meters because it's more convenie | Normalization vs. scaling
Scaling is a personal choice about making the numbers feel right, e.g. between zero and one, or one and a hundred. For example converting data given in millimeters to meters because it's more convenient, or imperial to metric.
While normalisation is about scaling to an external 'standard' - the local norm - such as removing the mean value and dividing by the sample standard deviation, e.g. so that your sorted data can be compared with a cumulative normal, or a cumulative Poisson, or whatever.
So if a lecturer or manager wants data 'normalised' it means "re-scale it my way" ;-) | Normalization vs. scaling
Scaling is a personal choice about making the numbers feel right, e.g. between zero and one, or one and a hundred. For example converting data given in millimeters to meters because it's more convenie |
4,539 | Normalization vs. scaling | I don't know if you mean exactly this, but I see a lot of people referring to Normalization meaning data Standardization. Standardization is transforming your data so it has mean 0 and standard deviation 1:
x <- (x - mean(x)) / sd(x)
I also see people using the term Normalization for Data Scaling, as in transforming your data to a 0-1 range:
x <- (x - min(x)) / (max(x) - min(x))
It can be confusing!
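Side by side on a toy vector with one large value (my own made-up example, which also previews the outlier point made just below):
x <- c(1, 2, 3, 4, 100)
round((x - min(x)) / (max(x) - min(x)), 3)   # 0-1 scaling: the non-outliers get squashed near 0
round((x - mean(x)) / sd(x), 3)              # standardizing: negative and unbounded values appear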
Both techniques have their pros and cons. When scaling a dataset with too many outliers, your non-outlier data might end up in a very small interval. So if your dataset has too many outliers, you might want to consider Standardizing it. Nonetheless, when you do that you will end up with negative data (sometimes you don't want that) and unbounded data (you might not want that either). | Normalization vs. scaling | I don't know if you mean exactly this, but I see a lot of people referring to Normalization meaning data Standardization. Standardization is transforming your data so it has mean 0 and standard deviat | Normalization vs. scaling
I don't know if you mean exactly this, but I see a lot of people referring to Normalization meaning data Standardization. Standardization is transforming your data so it has mean 0 and standard deviation 1:
x <- (x - mean(x)) / sd(x)
I also see people using the term Normalization for Data Scaling, as in transforming your data to a 0-1 range:
x <- (x - min(x)) / (max(x) - min(x))
It can be confusing!
Both techniques have their pros and cons. When scaling a dataset with too many outliers, your non-outlier data might end up in a very small interval. So if your dataset has too many outliers, you might want to consider Standardizing it. Nonetheless, when you do that you will end up with negative data (sometimes you don't want that) and unbounded data (you might not want that either). | Normalization vs. scaling
I don't know if you mean exactly this, but I see a lot of people referring to Normalization meaning data Standardization. Standardization is transforming your data so it has mean 0 and standard deviat |
4,540 | Normalization vs. scaling | Centering means substacting the mean of the random variable from the variables. I.e x -xi
Scaling means dividing the variable by its standard deviation, i.e. $x_i / s$.
Combination of the two is called normalization or standization. I.e x-xi/s | Normalization vs. scaling | Centering means substacting the mean of the random variable from the variables. I.e x -xi
Scalelling means dividing variable by its standard deviation. I.e xi /s
Combination of the two is called norma | Normalization vs. scaling
Centering means subtracting the mean of the random variable from each observation, i.e. $x_i - \bar{x}$.
Scaling means dividing the variable by its standard deviation, i.e. $x_i / s$.
Combination of the two is called normalization or standization. I.e x-xi/s | Normalization vs. scaling
Centering means substacting the mean of the random variable from the variables. I.e x -xi
Scalelling means dividing variable by its standard deviation. I.e xi /s
Combination of the two is called norma |
4,541 | How does centering make a difference in PCA (for SVD and eigen decomposition)? | As you remarked yourself and as explained by @ttnphns in the comments, computing covariance matrix implicitly performs centering: variance, by definition, is the average squared deviation from the mean. Centered and non-centered data will have identical covariance matrices. So if by PCA we understand the following procedure: $$\mathrm{Data}\to\text{Covariance matrix}\to\text{Eigen-decomposition},$$ then centering does not make any difference.
[Wikipedia:] To find the axes of the ellipse, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data...
And so you are right to observe that this is not a very accurate formulation.
When people talk about "PCA on non-centered data", they mean that instead of covariance matrix the eigen-decomposition is performed on the $\mathbf X^\top \mathbf X/(n-1)$ matrix. If $\mathbf X$ is centered then this will
be exactly the covariance matrix. If not then not. So if by PCA we understand the following procedure:
$$\text{Data } \mathbf X\to\text{Matrix } \mathbf X^\top \mathbf X/(n-1)\to\text{Eigen-decomposition},$$
then centering matters a lot and has the effect described and illustrated by @ttnphns in How does centering the data get rid of the intercept in regression and PCA?
It might seem weird to even mention this "strange" procedure, however consider that PCA can be very conveniently performed via singular value decomposition (SVD) of the data matrix $\mathbf X$ itself. I describe this in detail here: Relationship between SVD and PCA. How to use SVD to perform PCA? In this case the procedure is as follows:
$$\text{Data } \mathbf X \to \text{Singular value decomposition}.$$
If $\mathbf X$ is centered then this is equivalent to standard PCA done via covariance matrix. But if not, then it's equivalent to the "non-centered" PCA as described above. Since SVD is a very common and very convenient way to perform PCA, in practice it can be quite important to remember to center the data before calling svd function. I certainly had my share of bugs because of forgetting to do it. | How does centering make a difference in PCA (for SVD and eigen decomposition)? | As you remarked yourself and as explained by @ttnphns in the comments, computing covariance matrix implicitly performs centering: variance, by definition, is the average squared deviation from the mea | How does centering make a difference in PCA (for SVD and eigen decomposition)?
As you remarked yourself and as explained by @ttnphns in the comments, computing covariance matrix implicitly performs centering: variance, by definition, is the average squared deviation from the mean. Centered and non-centered data will have identical covariance matrices. So if by PCA we understand the following procedure: $$\mathrm{Data}\to\text{Covariance matrix}\to\text{Eigen-decomposition},$$ then centering does not make any difference.
[Wikipedia:] To find the axes of the ellipse, we must first subtract the mean of each variable from the dataset to center the data around the origin. Then, we compute the covariance matrix of the data...
And so you are right to observe that this is not a very accurate formulation.
When people talk about "PCA on non-centered data", they mean that instead of covariance matrix the eigen-decomposition is performed on the $\mathbf X^\top \mathbf X/(n-1)$ matrix. If $\mathbf X$ is centered then this will
be exactly the covariance matrix. If not then not. So if by PCA we understand the following procedure:
$$\text{Data } \mathbf X\to\text{Matrix } \mathbf X^\top \mathbf X/(n-1)\to\text{Eigen-decomposition},$$
then centering matters a lot and has the effect described and illustrated by @ttnphns in How does centering the data get rid of the intercept in regression and PCA?
It might seem weird to even mention this "strange" procedure, however consider that PCA can be very conveniently performed via singular value decomposition (SVD) of the data matrix $\mathbf X$ itself. I describe this in detail here: Relationship between SVD and PCA. How to use SVD to perform PCA? In this case the procedure is as follows:
$$\text{Data } \mathbf X \to \text{Singular value decomposition}.$$
If $\mathbf X$ is centered then this is equivalent to standard PCA done via covariance matrix. But if not, then it's equivalent to the "non-centered" PCA as described above. Since SVD is a very common and very convenient way to perform PCA, in practice it can be quite important to remember to center the data before calling svd function. I certainly had my share of bugs because of forgetting to do it. | How does centering make a difference in PCA (for SVD and eigen decomposition)?
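A hedged R check of that last point, on toy data simulated here: the leading direction from prcomp and from svd of the centered matrix coincide (possibly up to a sign flip), while svd of the raw, uncentered matrix gives something else.
set.seed(1)
X <- matrix(rnorm(200), ncol = 2) %*% matrix(c(2, 1, 0, 1), 2) + 5   # uncentered toy data
pc <- prcomp(X, center = TRUE, scale. = FALSE)                        # standard (covariance) PCA
Xc <- scale(X, center = TRUE, scale = FALSE)
svd_centered <- svd(Xc)   # SVD after centering: matches prcomp up to sign
svd_raw      <- svd(X)    # SVD of the raw matrix: the 'non-centered' PCA described above
cbind(prcomp = pc$rotation[, 1], centered = svd_centered$v[, 1], raw = svd_raw$v[, 1])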
As you remarked yourself and as explained by @ttnphns in the comments, computing covariance matrix implicitly performs centering: variance, by definition, is the average squared deviation from the mea |
4,542 | How does centering make a difference in PCA (for SVD and eigen decomposition)? | I'll try to provide a mathematical justification.
By centering, I assume you mean applying eigendecomposition on $XX^T$ instead of $(X-\mu)(X-\mu)^T$
Here $\mu$ denotes the sample mean and not the true mean, thus $\mu = \sum_i x_i / n$
Let $B = \sum_i x_ix_i^T$
Let $A = \sum_i (x_i - \mu)(x_i - \mu)^T$
$A= \sum_i (x_ix_i^T -\mu x_i^T - x_i \mu^T + \mu\mu^T) = B + \sum_i (-\mu x_i^T - x_i \mu^T + \mu\mu^T)$
$A = B - \mu(\sum_i x_i)^T - (\sum_i x_i)\mu^T + \sum_i \mu\mu^T$
$A = B - \mu(n\mu)^T - (n\mu)\mu^T + n\mu\mu^T$
$A = B - n\mu\mu^T - n\mu\mu^T + n\mu\mu^T$
$A = B - n\mu\mu^T$
Usually, we take the eigendecomposition of $C = A/(n-1)$ (this is the definition of the sample covariance matrix)
Here, say if you take the eigendecomposition of $D = B/(n-1)$, thus,
$$\boxed{D = C + \frac{n\mu\mu^T}{n-1}}$$
Then it is clear that the eigenvalues and eigenvectors corresponding to $D$ won't be the same as those for $C$ (unless $\mu$ = 0)
Thus, you would obtain wrong principal components (The correct principal components correspond to the eigendecompostion of $C$). | How does centering make a difference in PCA (for SVD and eigen decomposition)? | I'll try to provide a mathematical justification.
By centering, I assume you mean applying eigendecomposition on $XX^T$ instead of $(X-\mu)(X-\mu)^T$
Here $\mu$ denotes the sample mean and not the tru | How does centering make a difference in PCA (for SVD and eigen decomposition)?
I'll try to provide a mathematical justification.
By centering, I assume you mean applying eigendecomposition on $XX^T$ instead of $(X-\mu)(X-\mu)^T$
Here $\mu$ denotes the sample mean and not the true mean, thus $\mu = \sum_i x_i / n$
Let $B = \sum_i x_ix_i^T$
Let $A = \sum_i (x_i - \mu)(x_i - \mu)^T$
$A= \sum_i (x_ix_i^T -\mu x_i^T - x_i \mu^T + \mu\mu^T) = B + \sum_i (-\mu x_i^T - x_i \mu^T + \mu\mu^T)$
$A = B - \mu(\sum_i x_i)^T - (\sum_i x_i)\mu^T + \sum_i \mu\mu^T$
$A = B - \mu(n\mu)^T - (n\mu)\mu^T + n\mu\mu^T$
$A = B - n\mu\mu^T - n\mu\mu^T + n\mu\mu^T$
$A = B - n\mu\mu^T$
Usually, we take the eigendecomposition of $C = A/(n-1)$ (This is definition of the sample covariance matrix)
Here, say if you take the eigendecomposition of $D = B/(n-1)$, thus,
$$\boxed{D = C + \frac{n\mu\mu^T}{n-1}}$$
Then it is clear that the eigenvalues and eigenvectors corresponding to $D$ won't be the same as those for $C$ (unless $\mu$ = 0)
Thus, you would obtain wrong principal components (The correct principal components correspond to the eigendecompostion of $C$). | How does centering make a difference in PCA (for SVD and eigen decomposition)?
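The boxed identity is easy to verify numerically in R (toy data of my own; the rows of X are the observations $x_i^T$):
set.seed(42)
X  <- matrix(rnorm(30), nrow = 10)                 # 10 observations, 3 variables
n  <- nrow(X); mu <- colMeans(X)
C  <- cov(X)                                       # centered: the sample covariance matrix
D  <- t(X) %*% X / (n - 1)                         # uncentered analogue
max(abs(D - (C + n * tcrossprod(mu) / (n - 1))))   # ~ 1e-16, i.e. D = C + n mu mu^T / (n-1)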
I'll try to provide a mathematical justification.
By centering, I assume you mean applying eigendecomposition on $XX^T$ instead of $(X-\mu)(X-\mu)^T$
Here $\mu$ denotes the sample mean and not the tru |
4,543 | How does centering make a difference in PCA (for SVD and eigen decomposition)? | Carefully read through this thread which helps me greatly on understanding the whole PCA process, here is my summary (not sure if I get the true idea or not):
PCA <-> Eigen-decomposition on cov matrix -> it will find the axes along whose directions the data has max spread. It doesn't matter whether we center the data beforehand; the cov matrix is the same, and thus we will always get the axes that maximize the data spread.
I saw some text book when introducing PCA, they assume that the data matrix is centered. I think this is the reason that triggers my confusion, it induced me that the data needs to be centered before doing PCA analysis. Now my feeling is that I need to look these in a revers way: centering the data helps to justify that eigen vectors of covariance matrix is the right axis that we are looking for. | How does centering make a difference in PCA (for SVD and eigen decomposition)? | Carefully read through this thread which helps me greatly on understanding the whole PCA process, here is my summary (not sure if I get the true idea or not):
PCA <-> Eigen-decomposition on cov matrix | How does centering make a difference in PCA (for SVD and eigen decomposition)?
Carefully read through this thread which helps me greatly on understanding the whole PCA process, here is my summary (not sure if I get the true idea or not):
PCA <-> Eigen-decomposition on cov matrix -> it will find the axes along whose directions the data has max spread. It doesn't matter whether we center the data beforehand; the cov matrix is the same, and thus we will always get the axes that maximize the data spread.
I saw some text book when introducing PCA, they assume that the data matrix is centered. I think this is the reason that triggers my confusion, it induced me that the data needs to be centered before doing PCA analysis. Now my feeling is that I need to look these in a revers way: centering the data helps to justify that eigen vectors of covariance matrix is the right axis that we are looking for. | How does centering make a difference in PCA (for SVD and eigen decomposition)?
Carefully read through this thread which helps me greatly on understanding the whole PCA process, here is my summary (not sure if I get the true idea or not):
PCA <-> Eigen-decomposition on cov matrix |
4,544 | Why are regression problems called "regression" problems? | The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of regression toward the mean. The term was then adopted by others to get more or less the meaning it has today as a general statistical method. | Why are regression problems called "regression" problems? | The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of regression toward the | Why are regression problems called "regression" problems?
The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of regression toward the mean. The term was then adopted by others to get more or less the meaning it has today as a general statistical method. | Why are regression problems called "regression" problems?
The term "regression" was used by Francis Galton in his 1886 paper "Regression towards mediocrity in hereditary stature". To my knowledge he only used the term in the context of regression toward the |
4,545 | Why are regression problems called "regression" problems? | @Mark White mentioned the link already but for those of you who do not have much time to check the link, here's the exact properly referenced answer:
Origin of 'regression'
The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean)(Galton, reprinted 1989). For Galton, regression had only this biological meaning (Galton, 1887), but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context (Pearson, 1903).
References
https://en.wikipedia.org/wiki/Regression_analysis#History
Galton, F. (1877). Typical laws of heredity. III. Nature, 15(389), 512-514.
Galton, F. (reprinted 1989). Kinship and Correlation. Statistical Science, 4(2), 80–86.
Pearson, K. (1903). The law of ancestral heredity. Biometrika, 2(2), 211-228. | Why are regression problems called "regression" problems? | @Mark White mentioned the link already but for those of you who do not have much time to check the link, here's the exact properly referenced answer:
Origin of 'regression'
The term "regression" was c | Why are regression problems called "regression" problems?
@Mark White mentioned the link already but for those of you who do not have much time to check the link, here's the exact properly referenced answer:
Origin of 'regression'
The term "regression" was coined by Francis Galton in the 19th century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean)(Galton, reprinted 1989). For Galton, regression had only this biological meaning (Galton, 1887), but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context (Pearson, 1903).
References
https://en.wikipedia.org/wiki/Regression_analysis#History
Galton, F. (1877). Typical laws of heredity. III. Nature, 15(389), 512-514.
Galton, F. (reprinted 1989). Kinship and Correlation. Statistical Science, 4(2), 80–86.
Pearson, K. (1903). The law of ancestral heredity. Biometrika, 2(2), 211-228. | Why are regression problems called "regression" problems?
@Mark White mentioned the link already but for those of you who do not have much time to check the link, here's the exact properly referenced answer:
Origin of 'regression'
The term "regression" was c |
4,546 | Why are regression problems called "regression" problems? | As opposed to progressing, we are falling back to the mean, i.e. regressing. Hence the term regression ! I think its something that got picked up and stuck. | Why are regression problems called "regression" problems? | As opposed to progressing, we are falling back to the mean, i.e. regressing. Hence the term regression ! I think its something that got picked up and stuck. | Why are regression problems called "regression" problems?
As opposed to progressing, we are falling back to the mean, i.e. regressing. Hence the term regression ! I think its something that got picked up and stuck. | Why are regression problems called "regression" problems?
As opposed to progressing, we are falling back to the mean, i.e. regressing. Hence the term regression ! I think its something that got picked up and stuck. |
4,547 | Why are regression problems called "regression" problems? | I arrived here via a search for how a regression got its name. Here are the interesting parts of what I found (mostly from wikipedia.)
The term "regression" was coined by Francis Galton in the nineteenth century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).7,8 For Galton, regression had only this biological meaning,9,10 but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.11,12 In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925.13,14,15 Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
Also very interesting:
In the 1950s and 1960s, economists used electromechanical desk "calculators" to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.16
Sources
Mogull, Robert G. (2004). Second-Semester Applied Statistics. Kendall/Hunt Publishing Company. p. 59. ISBN 978-0-7575-1181-3.
Galton, Francis (1989). "Kinship and Correlation (reprinted 1989)". Statistical Science. 4 (2): 80–86. doi:10.1214/ss/1177012581. JSTOR 2245330.
Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492–495, 512–514, 532–533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.)
Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
Yule, G. Udny (1897). "On the Theory of Correlation". Journal of the Royal Statistical Society. 60 (4): 812–54. doi:10.2307/2979746. JSTOR 2979746.
Pearson, Karl; Yule, G.U.; Blanchard, Norman; Lee,Alice (1903). "The Law of Ancestral Heredity". Biometrika. 2 (2): 211–236. doi:10.1093/biomet/2.2.211. JSTOR 2331683.
Fisher, R.A. (1922). "The goodness of fit of regression formulae, and the distribution of regression coefficients". Journal of the Royal Statistical Society. 85 (4): 597–612. doi:10.2307/2341124. JSTOR 2341124. PMC 1084801.
Ronald A. Fisher (1954). Statistical Methods for Research Workers (Twelfth ed.). Edinburgh: Oliver and Boyd. ISBN 978-0-05-002170-5.
Aldrich, John (2005). "Fisher and Regression". Statistical Science. 20 (4): 401–417. doi:10.1214/088342305000000331. JSTOR 20061201.
Rodney Ramcharan. Regressions: Why Are Economists Obessessed with Them? March 2006. Accessed 2011-12-03. | Why are regression problems called "regression" problems? | I arrived here via a search for how a regression got its name. Here are the interesting parts of what I found (mostly from wikipedia.)
The term "regression" was coined by Francis Galton in the ninete | Why are regression problems called "regression" problems?
I arrived here via a search for how a regression got its name. Here are the interesting parts of what I found (mostly from wikipedia.)
The term "regression" was coined by Francis Galton in the nineteenth century to describe a biological phenomenon. The phenomenon was that the heights of descendants of tall ancestors tend to regress down towards a normal average (a phenomenon also known as regression toward the mean).7,8 For Galton, regression had only this biological meaning,9,10 but his work was later extended by Udny Yule and Karl Pearson to a more general statistical context.11,12 In the work of Yule and Pearson, the joint distribution of the response and explanatory variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925.13,14,15 Fisher assumed that the conditional distribution of the response variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.
Also very interesting:
In the 1950s and 1960s, economists used electromechanical desk "calculators" to calculate regressions. Before 1970, it sometimes took up to 24 hours to receive the result from one regression.16
Sources
Mogull, Robert G. (2004). Second-Semester Applied Statistics. Kendall/Hunt Publishing Company. p. 59. ISBN 978-0-7575-1181-3.
Galton, Francis (1989). "Kinship and Correlation (reprinted 1989)". Statistical Science. 4 (2): 80–86. doi:10.1214/ss/1177012581. JSTOR 2245330.
Francis Galton. "Typical laws of heredity", Nature 15 (1877), 492–495, 512–514, 532–533. (Galton uses the term "reversion" in this paper, which discusses the size of peas.)
Francis Galton. Presidential address, Section H, Anthropology. (1885) (Galton uses the term "regression" in this paper, which discusses the height of humans.)
Yule, G. Udny (1897). "On the Theory of Correlation". Journal of the Royal Statistical Society. 60 (4): 812–54. doi:10.2307/2979746. JSTOR 2979746.
Pearson, Karl; Yule, G.U.; Blanchard, Norman; Lee,Alice (1903). "The Law of Ancestral Heredity". Biometrika. 2 (2): 211–236. doi:10.1093/biomet/2.2.211. JSTOR 2331683.
Fisher, R.A. (1922). "The goodness of fit of regression formulae, and the distribution of regression coefficients". Journal of the Royal Statistical Society. 85 (4): 597–612. doi:10.2307/2341124. JSTOR 2341124. PMC 1084801.
Ronald A. Fisher (1954). Statistical Methods for Research Workers (Twelfth ed.). Edinburgh: Oliver and Boyd. ISBN 978-0-05-002170-5.
Aldrich, John (2005). "Fisher and Regression". Statistical Science. 20 (4): 401–417. doi:10.1214/088342305000000331. JSTOR 20061201.
Rodney Ramcharan. Regressions: Why Are Economists Obessessed with Them? March 2006. Accessed 2011-12-03. | Why are regression problems called "regression" problems?
I arrived here via a search for how a regression got its name. Here are the interesting parts of what I found (mostly from wikipedia.)
The term "regression" was coined by Francis Galton in the ninete |
4,548 | Why are regression problems called "regression" problems? | "Regression" comes from "regress" which in turn comes from latin "regressus" - to go back (to something).
In that sense, regression is the technique that allows "to go back" from messy, hard to interpret data, to a clearer and more meaningful model. As a physicist, I like the idea, as physicists see natural phenomena as the multiple possible outcomes of a relatively simple natural law.
In other words, the word regression seems to suggest that data is just the visible, tangible effect of a "statistical model". In other words, the model comes first, and your desire is use the data "to go back" to what originated them. | Why are regression problems called "regression" problems? | "Regression" comes from "regress" which in turn comes from latin "regressus" - to go back (to something).
In that sense, regression is the technique that allows "to go back" from messy, hard to inter | Why are regression problems called "regression" problems?
"Regression" comes from "regress" which in turn comes from latin "regressus" - to go back (to something).
In that sense, regression is the technique that allows "to go back" from messy, hard to interpret data, to a clearer and more meaningful model. As a physicist, I like the idea, as physicists see natural phenomena as the multiple possible outcomes of a relatively simple natural law.
In other words, the word regression seems to suggest that data is just the visible, tangible effect of a "statistical model". In other words, the model comes first, and your desire is use the data "to go back" to what originated them. | Why are regression problems called "regression" problems?
"Regression" comes from "regress" which in turn comes from latin "regressus" - to go back (to something).
In that sense, regression is the technique that allows "to go back" from messy, hard to inter |
4,549 | How to visualize a fitted multiple regression model? | There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted regression plane, but most software don't make this easy to do. Another possibility is to use a coplot (see also: coplot in R or this pdf), which can represent three or even four variables, but many people don't know how to read them. Essentially however, if you don't have any interactions, then the predicted marginal relationship between $x_j$ and $y$ will be the same as predicted conditional relationship (plus or minus some vertical shift) at any specific level of your other $x$ variables. Thus, you can simply set all other $x$ variables at their means and find the predicted line $\hat y = \hat\beta_0 + \cdots + \hat\beta_j x_j + \cdots + \hat\beta_p \bar x_p$ and plot that line on a scatterplot of $(x_j, y)$ pairs. Moreover, you will end up with $p$ such plots, although you might not include some of them if you think they are not important. (For example, it is common to have a multiple regression model with a single variable of interest and some control variables, and only present the first such plot).
On the other hand, if you do have interactions, then you should figure out which of the interacting variables you are most interested in and plot the predicted relationship between that variable and the response variable, but with several lines on the same plot. The other interacting variable is set to different levels for each of those lines. Typical values would be the mean and $\pm$ 1 SD of the interacting variable. To make this clearer, imagine you have only two variables, $x_1$ and $x_2$, and you have an interaction between them, and that $x_1$ is the focus of your study, then you might make a single plot with these three lines:
\begin{align}
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 - s_{x_2}) + \hat\beta_3 x_1(\bar x_2 - s_{x_2}) \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 \bar x_2 \quad\quad\quad\ + \hat\beta_3 x_1\bar x_2 \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 + s_{x_2}) + \hat\beta_3 x_1(\bar x_2 + s_{x_2})
\end{align}
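A hedged base-R sketch of such a plot (everything below - data, coefficients, grid - is simulated and made up for illustration):
set.seed(1)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
d$y <- 1 + 2 * d$x1 - d$x2 + 1.5 * d$x1 * d$x2 + rnorm(200)
fit <- lm(y ~ x1 * x2, data = d)                       # model with the interaction
x1_grid <- seq(min(d$x1), max(d$x1), length.out = 50)
x2_vals <- mean(d$x2) + c(-1, 0, 1) * sd(d$x2)         # -1 SD, mean, +1 SD of the moderator
plot(d$x1, d$y, col = "grey", xlab = "x1", ylab = "y")
for (i in seq_along(x2_vals)) {
  nd <- data.frame(x1 = x1_grid, x2 = x2_vals[i])
  lines(x1_grid, predict(fit, newdata = nd), lty = i)  # one predicted line per x2 level
}
legend("topleft", legend = c("-1 SD", "mean", "+1 SD"), lty = 1:3, title = "x2")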
An example plot that's similar (albeit with a binary moderator) can be seen in my answer to Plot regression with interaction in R. | How to visualize a fitted multiple regression model? | There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted reg | How to visualize a fitted multiple regression model?
There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted regression plane, but most software don't make this easy to do. Another possibility is to use a coplot (see also: coplot in R or this pdf), which can represent three or even four variables, but many people don't know how to read them. Essentially however, if you don't have any interactions, then the predicted marginal relationship between $x_j$ and $y$ will be the same as predicted conditional relationship (plus or minus some vertical shift) at any specific level of your other $x$ variables. Thus, you can simply set all other $x$ variables at their means and find the predicted line $\hat y = \hat\beta_0 + \cdots + \hat\beta_j x_j + \cdots + \hat\beta_p \bar x_p$ and plot that line on a scatterplot of $(x_j, y)$ pairs. Moreover, you will end up with $p$ such plots, although you might not include some of them if you think they are not important. (For example, it is common to have a multiple regression model with a single variable of interest and some control variables, and only present the first such plot).
On the other hand, if you do have interactions, then you should figure out which of the interacting variables you are most interested in and plot the predicted relationship between that variable and the response variable, but with several lines on the same plot. The other interacting variable is set to different levels for each of those lines. Typical values would be the mean and $\pm$ 1 SD of the interacting variable. To make this clearer, imagine you have only two variables, $x_1$ and $x_2$, and you have an interaction between them, and that $x_1$ is the focus of your study, then you might make a single plot with these three lines:
\begin{align}
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 - s_{x_2}) + \hat\beta_3 x_1(\bar x_2 - s_{x_2}) \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 \bar x_2 \quad\quad\quad\ + \hat\beta_3 x_1\bar x_2 \\
\hat y &= \hat\beta_0 + \hat\beta_1 x_1 + \hat\beta_2 (\bar x_2 + s_{x_2}) + \hat\beta_3 x_1(\bar x_2 + s_{x_2})
\end{align}
An example plot that's similar (albeit with a binary moderator) can be seen in my answer to Plot regression with interaction in R. | How to visualize a fitted multiple regression model?
There is nothing wrong with your current strategy. If you have a multiple regression model with only two explanatory variables then you could try to make a 3D-ish plot that displays the predicted reg |
4,550 | How to visualize a fitted multiple regression model? | Here is a web-based, interactive tool for plotting regression results in three dimensions.
This 3-D plot works with one dependent variable and two explanatory variables. You can also set the intercept to zero (i.e., remove the intercept from the regression equation).
This page shows a 3D scatter plot without the fitted regression model. | How to visualize a fitted multiple regression model? | Here is a web-based, interactive tool for plotting regression results in three dimensions.
This 3-D plot works with one dependent variable and two explanatory variables. You can also set the intercept | How to visualize a fitted multiple regression model?
Here is a web-based, interactive tool for plotting regression results in three dimensions.
This 3-D plot works with one dependent variable and two explanatory variables. You can also set the intercept to zero (i.e., remove the intercept from the regression equation).
This page shows a 3D scatter plot without the fitted regression model. | How to visualize a fitted multiple regression model?
Here is a web-based, interactive tool for plotting regression results in three dimensions.
This 3-D plot works with one dependent variable and two explanatory variables. You can also set the intercept |
4,551 | How to visualize a fitted multiple regression model? | To visualize the model, rather than the data, JMP uses an interactive "profiler" plot. Here's a static view.
And here's a link to a dynamic view.
It's similar to your scatter plot idea and can be combined with it. The idea is that each frame shows a slice of the model for the corresponding X and Y variables with the other X variables held constant at their indicated values. In the interactive version, the X values can be changed by dragging the red vertical lines.
Disclosure: I'm a JMP developer, so don't take this as an unbiased endorsement. | How to visualize a fitted multiple regression model? | To visualize the model, rather than the data, JMP uses an interactive "profiler" plot. Here's a static view.
And here's a link to a dynamic view.
It's similar to your scatter plot idea and can be com | How to visualize a fitted multiple regression model?
To visualize the model, rather than the data, JMP uses an interactive "profiler" plot. Here's a static view.
And here's a link to a dynamic view.
It's similar to your scatter plot idea and can be combined with it. The idea is that each frame shows a slice of the model for the corresponding X and Y variables with the other X variables held constant at their indicated values. In the interactive version, the X values can be changed by dragging the red vertical lines.
Disclosure: I'm a JMP developer, so don't take this as an unbiased endorsement. | How to visualize a fitted multiple regression model?
To visualize the model, rather than the data, JMP uses an interactive "profiler" plot. Here's a static view.
And here's a link to a dynamic view.
It's similar to your scatter plot idea and can be com |
4,552 | How to visualize a fitted multiple regression model? | See the R rms package and the RMS course notes, in particular the nomogram and Predict functions to obtain nomograms and partial effect plots. The summary.rms function computes one-number effect summaries of each predictor (inter-quartile range effects). Nomograms provide the most complete single representation of regression models, if there are not too many interaction terms. | How to visualize a fitted multiple regression model? | See the R rms package and the RMS course notes, in particular the nomogram and Predict functions to obtain nomograms and partial effect plots. The summary.rms function computes one-number effect sum | How to visualize a fitted multiple regression model?
See the R rms package and the RMS course notes, in particular the nomogram and Predict functions to obtain nomograms and partial effect plots. The summary.rms function computes one-number effect summaries of each predictor (inter-quartile range effects). Nomograms provide the most complete single representation of regression models, if there are not too many interaction terms. | How to visualize a fitted multiple regression model?
See the R rms package and the RMS course notes, in particular the nomogram and Predict functions to obtain nomograms and partial effect plots. The summary.rms function computes one-number effect sum |
4,553 | Is it possible to do time-series clustering based on curve shape? | Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you to read them carefully, especially those on functional data analysis. Try googling for "Functional Clustering of Longitudinal Data", or the PACE Matlab toolbox which is specifically concerned with model-based clustering of irregularly sampled trajectories (Peng and Müller, Distance-based clustering of sparsely observed stochastic processes, with applications to online auctions, Annals of Applied Statistics 2008 2: 1056). I can imagine that there may be a good statistical framework for financial time series, but I don't know about that.
The kml package basically relies on k-means, working (by default) on euclidean distances between the $t$ measurements observed on $n$ individuals. What is called a trajectory is just the series of observed values for individual $i$, $y_i=(y_{i1},y_{i2},\dots,y_{it})$, and $d(y_i,y_j)=\sqrt{t^{-1}\sum_{k=1}^t(y_{ik}-y_{jk})^2}$. Missing data are handled through a slight modification of the preceding distance measure (Gower adjustment) associated to a nearest neighbor-like imputation scheme (for computing Calinski criterion). As I don't represent myself what you real data would look like, I cannot say if it will work. At least, it work with longitudinal growth curves, "polynomial" shape, but I doubt it will allow you to detect very specific patterns (like local minima/maxima at specific time-points with time-points differing between clusters, by a translation for example). If you are interested in clustering possibly misaligned curves, then you definitively have to look at other solutions; Functional clustering and alignment, from Sangalli et al., and references therein may provide a good starting point.
Below, I show you some code that may help to experiment with it (my seed is generally set at 101, if you want to reproduce the results). Basically, for using kml you just have to construct a clusterizLongData object (an id number for the first column, and the $t$ measurements in the next columns).
library(lattice)  # lattice provides xyplot() and parallel() used below to display the raw trajectories
xyplot(var0 ~ date, data=test.data, groups=store, type=c("l","g"))
tw <- reshape(test.data, timevar="date", idvar="store", direction="wide")
parallel(tw[,-1], horizontal.axis=F,
scales=list(x=list(rot=45,
at=seq(1,ncol(tw)-1,by=2),
labels=substr(names(tw[,-1])[seq(1,ncol(tw)-1,by=2)],6,100),
cex=.5)))
library(kml)  # k-means for longitudinal data (trajectories)
names(tw) <- c("id", paste("t", 1:(ncol(tw)-1)))
tw.cld <- as.cld(tw)                   # build the longitudinal-data object that kml expects
cld.res <- kml(tw.cld, nbRedrawing=5)  # re-run the partitioning several times to avoid local optima
plot(tw.cld)                           # display the trajectories / resulting clustering
The next two figures are the raw simulated data and the five-cluster solution (according to Calinski criterion, also used in the fpc package). I don't show the scaled version. | Is it possible to do time-series clustering based on curve shape? | Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you to read them carefully, especially those on functional data analysis. Try goog | Is it possible to do time-series clustering based on curve shape?
Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you to read them carefully, especially those on functional data analysis. Try googling for "Functional Clustering of Longitudinal Data", or the PACE Matlab toolbox which is specifically concerned with model-based clustering of irregularly sampled trajectories (Peng and Müller, Distance-based clustering of sparsely observed stochastic processes, with applications to online auctions, Annals of Applied Statistics 2008 2: 1056). I can imagine that there may be a good statistical framework for financial time series, but I don't know about that.
The kml package basically relies on k-means, working (by default) on euclidean distances between the $t$ measurements observed on $n$ individuals. What is called a trajectory is just the series of observed values for individual $i$, $y_i=(y_{i1},y_{i2},\dots,y_{it})$, and $d(y_i,y_j)=\sqrt{t^{-1}\sum_{k=1}^t(y_{ik}-y_{jk})^2}$. Missing data are handled through a slight modification of the preceding distance measure (Gower adjustment) associated to a nearest neighbor-like imputation scheme (for computing Calinski criterion). As I don't represent myself what you real data would look like, I cannot say if it will work. At least, it work with longitudinal growth curves, "polynomial" shape, but I doubt it will allow you to detect very specific patterns (like local minima/maxima at specific time-points with time-points differing between clusters, by a translation for example). If you are interested in clustering possibly misaligned curves, then you definitively have to look at other solutions; Functional clustering and alignment, from Sangalli et al., and references therein may provide a good starting point.
Below, I show you some code that may help to experiment with it (my seed is generally set at 101, if you want to reproduce the results). Basically, for using kml you just have to construct a clusterizLongData object (an id number for the first column, and the $t$ measurements in the next columns).
library(lattice)
xyplot(var0 ~ date, data=test.data, groups=store, type=c("l","g"))
tw <- reshape(test.data, timevar="date", idvar="store", direction="wide")
parallel(tw[,-1], horizontal.axis=F,
scales=list(x=list(rot=45,
at=seq(1,ncol(tw)-1,by=2),
labels=substr(names(tw[,-1])[seq(1,ncol(tw)-1,by=2)],6,100),
cex=.5)))
library(kml)
names(tw) <- c("id", paste("t", 1:(ncol(tw)-1)))
tw.cld <- as.cld(tw)
cld.res <- kml(tw.cld,nbRedrawing=5)
plot(tw.cld)
The next two figures are the raw simulated data and the five-cluster solution (according to Calinski criterion, also used in the fpc package). I don't show the scaled version. | Is it possible to do time-series clustering based on curve shape?
Several directions for analyzing longitudinal data were discussed in the link provided by @Jeromy, so I would suggest you to read them carefully, especially those on functional data analysis. Try goog |
4,554 | Is it possible to do time-series clustering based on curve shape? | An alternative approach was published by a stats.se regular in Wang, Xiaozhe, Kate Smith, and Rob Hyndman.
‘Characteristic-Based Clustering for Time Series Data’. Data Mining
and Knowledge Discovery 13, no. 3 (2006): 335–364.
They write:
This paper proposes a method for clustering of time series based on
their structural characteristics. Unlike other alternatives, this
method does not cluster point values using a distance metric, rather
it clusters based on global features extracted from the time series.
The feature measures are obtained from each individual series and can
be fed into arbitrary clustering algorithms, including an unsupervised
neural network algorithm, self-organizing map, or hierarchical
clustering algorithm. Global measures describing the time series are
obtained by applying statistical operations that best capture the
underlying characteristics: trend, seasonality, periodicity, serial
correlation, skewness, kurtosis, chaos, nonlinearity, and
self-similarity. Since the method clusters using extracted global
measures, it reduces the dimensionality of the time series and is much
less sensitive to missing or noisy data. We further provide a search
mechanism to find the best selection from the feature set that should
be used as the clustering inputs.
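In the same spirit, here is a rough toy sketch (my own code, not the authors' implementation; the chosen features and the three-cluster cut are arbitrary):
# Summarise each series by a few global measures, then cluster the feature matrix.
global.features <- function(y) {
  t <- seq_along(y)
  c(trend = summary(lm(y ~ t))$r.squared,      # strength of a linear trend
    acf1  = acf(y, plot = FALSE)$acf[2],       # lag-1 serial correlation
    skew  = mean((y - mean(y))^3) / sd(y)^3,   # skewness
    kurt  = mean((y - mean(y))^4) / sd(y)^4)   # kurtosis
}
set.seed(1)
series <- replicate(20, cumsum(rnorm(50)), simplify = FALSE)   # 20 toy series
feats  <- t(sapply(series, global.features))
hc     <- hclust(dist(scale(feats)))           # hierarchical clustering on the features
cutree(hc, k = 3)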
R code is available on Rob's blog. | Is it possible to do time-series clustering based on curve shape? | An alternative approach was published by a stats.se regular in Wang, Xiaozhe, Kate Smith, and Rob Hyndman.
‘Characteristic-Based Clustering for Time Series Data’. Data Mining
and Knowledge Discovery | Is it possible to do time-series clustering based on curve shape?
An alternative approach was published by a stats.se regular in Wang, Xiaozhe, Kate Smith, and Rob Hyndman.
‘Characteristic-Based Clustering for Time Series Data’. Data Mining
and Knowledge Discovery 13, no. 3 (2006): 335–364.
They write:
This paper proposes a method for clustering of time series based on
their structural characteristics. Unlike other alternatives, this
method does not cluster point values using a distance metric, rather
it clusters based on global features extracted from the time series.
The feature measures are obtained from each individual series and can
be fed into arbitrary clustering algorithms, including an unsupervised
neural network algorithm, self-organizing map, or hierarchical
clustering algorithm. Global measures describing the time series are
obtained by applying statistical operations that best capture the
underlying characteristics: trend, seasonality, periodicity, serial
correlation, skewness, kurtosis, chaos, nonlinearity, and
self-similarity. Since the method clusters using extracted global
measures, it reduces the dimensionality of the time series and is much
less sensitive to missing or noisy data. We further provide a search
mechanism to find the best selection from the feature set that should
be used as the clustering inputs.
R code is available on Rob's blog. | Is it possible to do time-series clustering based on curve shape?
An alternative approach was published by a stats.se regular in Wang, Xiaozhe, Kate Smith, and Rob Hyndman.
‘Characteristic-Based Clustering for Time Series Data’. Data Mining
and Knowledge Discovery |
4,555 | Is it possible to do time-series clustering based on curve shape? | You could look at the work of Eamonn Keogh (UC Riverside) on time series clustering. His website has a lot of resources. I think he provides Matlab code samples, so you'd have to translate this to R. | Is it possible to do time-series clustering based on curve shape? | You could look at the work of Eamonn Keogh (UC Riverside) on time series clustering. His website has a lot of resources. I think he provides Matlab code samples, so you'd have to translate this to R. | Is it possible to do time-series clustering based on curve shape?
You could look at the work of Eamonn Keogh (UC Riverside) on time series clustering. His website has a lot of resources. I think he provides Matlab code samples, so you'd have to translate this to R. | Is it possible to do time-series clustering based on curve shape?
You could look at the work of Eamonn Keogh (UC Riverside) on time series clustering. His website has a lot of resources. I think he provides Matlab code samples, so you'd have to translate this to R. |
4,556 | Modern successor to Exploratory Data Analysis by Tukey? | The closest thing is Cleveland's Visualizing Data. It's about Exploratory Data Analysis, it's about computer-generated visualizations, it's profound, it's a classic. | Modern successor to Exploratory Data Analysis by Tukey? | The closest thing is Cleveland's Visualizing Data. It's about Exploratory Data Analysis, it's about computer-generated visualizations, it's profound, it's a classic. | Modern successor to Exploratory Data Analysis by Tukey?
The closest thing is Cleveland's Visualizing Data. It's about Exploratory Data Analysis, it's about computer-generated visualizations, it's profound, it's a classic. | Modern successor to Exploratory Data Analysis by Tukey?
The closest thing is Cleveland's Visualizing Data. It's about Exploratory Data Analysis, it's about computer-generated visualizations, it's profound, it's a classic. |
4,557 | Modern successor to Exploratory Data Analysis by Tukey? | Well, it's not an exact replica, but I found tons of useful plotting advice (and R code) in Gelman and Hill's Data Analysis using Regression and Multilevel/Hierarchical Models
In addition, his blog is often full of useful graphics advice. | Modern successor to Exploratory Data Analysis by Tukey? | Well, it's not an exact replica, but I found tons of useful plotting advice (and R code) in Gelman and Hill's Data Analysis using Regression and Multilevel/Hierarchical Models
In addition, his blog is | Modern successor to Exploratory Data Analysis by Tukey?
Well, it's not an exact replica, but I found tons of useful plotting advice (and R code) in Gelman and Hill's Data Analysis using Regression and Multilevel/Hierarchical Models
In addition, his blog is often full of useful graphics advice. | Modern successor to Exploratory Data Analysis by Tukey?
Well, it's not an exact replica, but I found tons of useful plotting advice (and R code) in Gelman and Hill's Data Analysis using Regression and Multilevel/Hierarchical Models
In addition, his blog is |
4,558 | Modern successor to Exploratory Data Analysis by Tukey? | Interactive Graphics for Data Analysis: Principles and Examples is one I like; the book description says it "discusses exploratory data analysis (EDA) and how interactive graphical methods can help gain insights as well as generate new questions and hypotheses from datasets." | Modern successor to Exploratory Data Analysis by Tukey? | Interactive Graphics for Data Analysis: Principles and Examples is one I like; the book description says it "discusses exploratory data analysis (EDA) and how interactive graphical methods can help ga | Modern successor to Exploratory Data Analysis by Tukey?
Interactive Graphics for Data Analysis: Principles and Examples is one I like; the book description says it "discusses exploratory data analysis (EDA) and how interactive graphical methods can help gain insights as well as generate new questions and hypotheses from datasets." | Modern successor to Exploratory Data Analysis by Tukey?
Interactive Graphics for Data Analysis: Principles and Examples is one I like; the book description says it "discusses exploratory data analysis (EDA) and how interactive graphical methods can help ga |
4,559 | Modern successor to Exploratory Data Analysis by Tukey? | Hadley Wickham's ggplot2 book is interesting because it teaches both the Grammar of Graphics and how to use the ggplot2 software. | Modern successor to Exploratory Data Analysis by Tukey? | Hadley Wickham's ggplot2 book is interesting because it teaches both the Grammar of Graphics and how to use the ggplot2 software. | Modern successor to Exploratory Data Analysis by Tukey?
Hadley Wickham's ggplot2 book is interesting because it teaches both the Grammar of Graphics and how to use the ggplot2 software. | Modern successor to Exploratory Data Analysis by Tukey?
Hadley Wickham's ggplot2 book is interesting because it teaches both the Grammar of Graphics and how to use the ggplot2 software. |
4,560 | Modern successor to Exploratory Data Analysis by Tukey? | Ronald Pearson's Exploring Data in Engineering, the Sciences, and Medicine is worth mentioning here. Its main target readership seems to be scientists not afraid of a little mathematics who wish they knew more statistics. That is quite a large group, and one well represented here. It's a little quirky and offbeat, but it covers a lot of ground and it includes much sensible advice. It's not Tukey revisited in the sense that it offers many new ideas, but it can be rewarding to study, even when you think it is a little wrong-headed.
This book seems to have attracted very little notice, quite possibly because it is very expensive, not obviously suitable as a course text, and as yet only available in hardback. But it is intelligent and readable and free of the garbage of modern introductory textbooks (pages and pages of elementary exercises, silly icons, gratuitous photos of happy young people, fussy layout with boxes, whatever, etc.). | Modern successor to Exploratory Data Analysis by Tukey? | Ronald Pearson's Exploring Data in Engineering, the Sciences, and Medicine is worth mentioning here. Its main target readership seems to be scientists not afraid of a little mathematics who wish they | Modern successor to Exploratory Data Analysis by Tukey?
Ronald Pearson's Exploring Data in Engineering, the Sciences, and Medicine is worth mentioning here. Its main target readership seems to be scientists not afraid of a little mathematics who wish they knew more statistics. That is quite a large group, and one well represented here. It's a little quirky and offbeat, but it covers a lot of ground and it includes much sensible advice. It's not Tukey revisited in the sense that it offers many new ideas, but it can be rewarding to study, even when you think it is a little wrong-headed.
This book seems to have attracted very little notice, quite possibly because it is very expensive, not obviously suitable as a course text, and as yet only available in hardback. But it is intelligent and readable and free of the garbage of modern introductory textbooks (pages and pages of elementary exercises, silly icons, gratuitous photos of happy young people, fussy layout with boxes, whatever, etc.). | Modern successor to Exploratory Data Analysis by Tukey?
Ronald Pearson's Exploring Data in Engineering, the Sciences, and Medicine is worth mentioning here. Its main target readership seems to be scientists not afraid of a little mathematics who wish they |
4,561 | Modern successor to Exploratory Data Analysis by Tukey? | Also Interactive and Dynamic Graphics for Data Analysis: With Examples Using R and GGobi, Cook and Swayne
This has two chapters publicly available on the web that describe the process of data analysis, and handling missing values. There's a new book coming out by Antony Unwin soon. | Modern successor to Exploratory Data Analysis by Tukey? | Also Interactive and Dynamic Graphics for Data Analysis: With Examples Using R and GGobi, Cook and Swayne
This has two chapters publicly available on the web that describe the process of data analysis | Modern successor to Exploratory Data Analysis by Tukey?
Also Interactive and Dynamic Graphics for Data Analysis: With Examples Using R and GGobi, Cook and Swayne
This has two chapters publicly available on the web that describe the process of data analysis, and handling missing values. There's a new book coming out by Antony Unwin soon. | Modern successor to Exploratory Data Analysis by Tukey?
Also Interactive and Dynamic Graphics for Data Analysis: With Examples Using R and GGobi, Cook and Swayne
This has two chapters publicly available on the web that describe the process of data analysis |
4,562 | Modern successor to Exploratory Data Analysis by Tukey? | Claus Wilke's 2019 book "Fundamentals of Data Visualization" is another possible "modern successor." The book's preprint is still freely available online.
Like Tukey's EDA, Wilke's book is focused on exploring your data using graphs while keeping in mind the things that matter to statisticians: thinking in terms of distributions, thinking about precision & uncertainty in our estimates, thinking about bias-variance tradeoffs when smoothing a trend or choosing a histogram bin size, and so on.
Wilke assumes you'll be making your graphs on the computer and provides the code for all his graphs (mostly in R's ggplot2) on GitHub. But the book itself is written in a software-agnostic way: the text is about best practices, not about how to implement them in a specific software tool. There's a brief chapter on choosing the right viz software tool for your needs.
He also concisely introduces concepts like Wilkinson's Grammar of Graphics; recommends best practices in line with folks like Cleveland and Tufte; and discusses how to make effective graphics for communication, not just exploration. Wilke's book does not break new ground on these fronts (unlike the Tukey or Cleveland books mentioned in other answers), but rather does a great job of distilling it and putting it all in one place, illustrated with good/bad/ugly examples using real datasets. It's become my go-to book for introducing data visualization to statisticians. | Modern successor to Exploratory Data Analysis by Tukey? | Claus Wilke's 2019 book "Fundamentals of Data Visualization" is another possible "modern successor." The book's preprint is still freely available online.
Like Tukey's EDA, Wilke's book is focused on | Modern successor to Exploratory Data Analysis by Tukey?
Claus Wilke's 2019 book "Fundamentals of Data Visualization" is another possible "modern successor." The book's preprint is still freely available online.
Like Tukey's EDA, Wilke's book is focused on exploring your data using graphs while keeping in mind the things that matter to statisticians: thinking in terms of distributions, thinking about precision & uncertainty in our estimates, thinking about bias-variance tradeoffs when smoothing a trend or choosing a histogram bin size, and so on.
Wilke assumes you'll be making your graphs on the computer and provides the code for all his graphs (mostly in R's ggplot2) on GitHub. But the book itself is written in a software-agnostic way: the text is about best practices, not about how to implement them in a specific software tool. There's a brief chapter on choosing the right viz software tool for your needs.
He also concisely introduces concepts like Wilkinson's Grammar of Graphics; recommends best practices in line with folks like Cleveland and Tufte; and discusses how to make effective graphics for communication, not just exploration. Wilke's book does not break new ground on these fronts (unlike the Tukey or Cleveland books mentioned in other answers), but rather does a great job of distilling it and putting it all in one place, illustrated with good/bad/ugly examples using real datasets. It's become my go-to book for introducing data visualization to statisticians. | Modern successor to Exploratory Data Analysis by Tukey?
Claus Wilke's 2019 book "Fundamentals of Data Visualization" is another possible "modern successor." The book's preprint is still freely available online.
Like Tukey's EDA, Wilke's book is focused on |
4,563 | Modern successor to Exploratory Data Analysis by Tukey? | Another couple of good books to read are Beautiful Visualization and Beautiful Data. These are edited books, there are amazingly good examples of exploring data with plots, and some absolutely appalling chapters.
Another book that has some good examples of using ggplot2 is a new one by Winston Chang | Modern successor to Exploratory Data Analysis by Tukey? | Another couple of good books to read are Beautiful Visualization and Beautiful Data. These are edited books, there are amazingly good examples of exploring data with plots, and some absolutely appalli | Modern successor to Exploratory Data Analysis by Tukey?
Another couple of good books to read are Beautiful Visualization and Beautiful Data. These are edited books, there are amazingly good examples of exploring data with plots, and some absolutely appalling chapters.
Another book that has some good examples of using ggplot2 is a new one by Winston Chang | Modern successor to Exploratory Data Analysis by Tukey?
Another couple of good books to read are Beautiful Visualization and Beautiful Data. These are edited books, there are amazingly good examples of exploring data with plots, and some absolutely appalli |
4,564 | Modern successor to Exploratory Data Analysis by Tukey? | I think of Understanding robust and exploratory analysis by Hoaglin, Mosteller and Tukey and the companion volume on Exploring data tables and shapes as the technical follow-up to EDA.
I also see data analysis and regression, a second course in statistics by Mosteller and Tukey as follow-up to EDA. The various Cleveland books mentioned above are treasures. | Modern successor to Exploratory Data Analysis by Tukey? | I think of Understanding robust and exploratory analysis by Hoaglin, Mosteller and Tukey and the companion volume on Exploring data tables and shapes as the technical follow-up to EDA.
I also see data | Modern successor to Exploratory Data Analysis by Tukey?
I think of Understanding robust and exploratory analysis by Hoaglin, Mosteller and Tukey and the companion volume on Exploring data tables and shapes as the technical follow-up to EDA.
I also see data analysis and regression, a second course in statistics by Mosteller and Tukey as follow-up to EDA. The various Cleveland books mentioned above are treasures. | Modern successor to Exploratory Data Analysis by Tukey?
I think of Understanding robust and exploratory analysis by Hoaglin, Mosteller and Tukey and the companion volume on Exploring data tables and shapes as the technical follow-up to EDA.
I also see data |
4,565 | How do I find peaks in a dataset? | A general approach is to smooth the data and then find peaks by comparing a local maximum filter to the smooth. In R:
argmax <- function(x, y, w=1, ...) {
require(zoo)
n <- length(y)
y.smooth <- loess(y ~ x, ...)$fitted
y.max <- rollapply(zoo(y.smooth), 2*w+1, max,
align="center")
delta <- y.max - y.smooth[-c(1:w, n+1-1:w)]
i.max <- which(delta <= 0) + w
list(x=x[i.max], i=i.max, y.hat=y.smooth)
}
Its return value includes the arguments of the local maxima (x)--which answers the question--and the indexes into the x- and y-arrays where those local maxima occur (i).
There are two parameters to be tuned to the circumstances: w is the half-width of the window used to compute the local maximum. (Its value should be substantially less than half the length of the array of data.) Small values will pick up tiny local bumps whereas larger values will pass right over those. Another--not explicit in this code--is the span argument of the loess smoother. (It is typically between zero and one; it reflects a window width as a proportion of the range of x values.) Larger values will smooth the data more aggressively, making local bumps disappear altogether.
To see this tuning in effect, let's create a little test function to plot the results:
test <- function(w, span) {
peaks <- argmax(x, y, w=w, span=span)
plot(x, y, cex=0.75, col="Gray", main=paste("w = ", w, ",
span = ", span, sep=""))
lines(x, peaks$y.hat, lwd=2) #$
y.min <- min(y)
sapply(peaks$i, function(i) lines(c(x[i],x[i]), c(y.min,
peaks$y.hat[i]),
col="Red", lty=2))
points(x[peaks$i], peaks$y.hat[peaks$i], col="Red", pch=19,
cex=1.25)
}
Here are a few experiments applied to some synthetic, slightly noisy data.
x <- 1:1000 / 100 - 5
y <- exp(abs(x)/20) * sin(2 * x + (x/5)^2) + cos(10*x) / 5 +
rnorm(length(x), sd=0.05)
par(mfrow=c(3,1))
test(2, 0.05)
test(30, 0.05)
test(2, 0.2)
Either a wide window (middle plot) or more aggressive smooth (bottom plot) eliminate the local maxima detected in the top plot. The best combination here is likely a wide window and only gentle smoothing, because aggressive smoothing appears to shift these peaks (see the middle and right points in the bottom plot and compare their locations to the apparent peaks of the raw data). In this example, w=50 and span=0.05 does a great job (not shown).
Notice the local maxima at the endpoints are not detected. These can be inspected separately. (To support this, argmax returns the smoothed y-values.)
This approach has several advantages over more formal modeling for general purpose work:
It does not adopt any preconceived model of the data.
It can be adapted to the data characteristics.
It can be adapted to detect the kinds of peaks one is interested in. | How do I find peaks in a dataset? | A general approach is to smooth the data and then find peaks by comparing a local maximum filter to the smooth. In R:
argmax <- function(x, y, w=1, ...) {
require(zoo)
n <- length(y)
| How do I find peaks in a dataset?
A general approach is to smooth the data and then find peaks by comparing a local maximum filter to the smooth. In R:
argmax <- function(x, y, w=1, ...) {
require(zoo)
n <- length(y)
y.smooth <- loess(y ~ x, ...)$fitted
y.max <- rollapply(zoo(y.smooth), 2*w+1, max,
align="center")
delta <- y.max - y.smooth[-c(1:w, n+1-1:w)]
i.max <- which(delta <= 0) + w
list(x=x[i.max], i=i.max, y.hat=y.smooth)
}
Its return value includes the arguments of the local maxima (x)--which answers the question--and the indexes into the x- and y-arrays where those local maxima occur (i).
There are two parameters to be tuned to the circumstances: w is the half-width of the window used to compute the local maximum. (Its value should be substantially less than half the length of the array of data.) Small values will pick up tiny local bumps whereas larger values will pass right over those. Another--not explicit in this code--is the span argument of the loess smoother. (It is typically between zero and one; it reflects a window width as a proportion of the range of x values.) Larger values will smooth the data more aggressively, making local bumps disappear altogether.
To see this tuning in effect, let's create a little test function to plot the results:
test <- function(w, span) {
peaks <- argmax(x, y, w=w, span=span)
plot(x, y, cex=0.75, col="Gray", main=paste("w = ", w, ",
span = ", span, sep=""))
lines(x, peaks$y.hat, lwd=2) #$
y.min <- min(y)
sapply(peaks$i, function(i) lines(c(x[i],x[i]), c(y.min,
peaks$y.hat[i]),
col="Red", lty=2))
points(x[peaks$i], peaks$y.hat[peaks$i], col="Red", pch=19,
cex=1.25)
}
Here are a few experiments applied to some synthetic, slightly noisy data.
x <- 1:1000 / 100 - 5
y <- exp(abs(x)/20) * sin(2 * x + (x/5)^2) + cos(10*x) / 5 +
rnorm(length(x), sd=0.05)
par(mfrow=c(3,1))
test(2, 0.05)
test(30, 0.05)
test(2, 0.2)
Either a wide window (middle plot) or more aggressive smooth (bottom plot) eliminate the local maxima detected in the top plot. The best combination here is likely a wide window and only gentle smoothing, because aggressive smoothing appears to shift these peaks (see the middle and right points in the bottom plot and compare their locations to the apparent peaks of the raw data). In this example, w=50 and span=0.05 does a great job (not shown).
Notice the local maxima at the endpoints are not detected. These can be inspected separately. (To support this, argmax returns the smoothed y-values.)
This approach has several advantages over more formal modeling for general purpose work:
It does not adopt any preconceived model of the data.
It can be adapted to the data characteristics.
It can be adapted to detect the kinds of peaks one is interested in. | How do I find peaks in a dataset?
A general approach is to smooth the data and then find peaks by comparing a local maximum filter to the smooth. In R:
argmax <- function(x, y, w=1, ...) {
require(zoo)
n <- length(y)
|
4,566 | How do I find peaks in a dataset? | A classic peak detection approach in signal processing is as follows:
Filter the signal to some reasonable range, depending on
sampling rate and signal properties, e.g. for ECG, an IIR bandpass
filter @0.5-20Hz; a zero-phase filter will ensure that no phase
shift (and associated time lag) is introduced
A Hilbert transform or a wavelet approach can then be used to emphasize the peaks
A static or dynamic threshold can then be applied, where all samples above
the threshold are deemed peaks. In the case of a dynamic threshold,
it is usually defined as a threshold N standard deviations above or below a moving average estimate of the mean.
Another approach that works is to compare a sharply highpass filtered signal against a heavily smoothed (low-pass or median filtered) and apply step 3.
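For step 3, a rough base-R sketch of a dynamic threshold might look like this (the signal, the window width w and the multiplier N are invented purely for illustration):
# Flag samples exceeding a moving-average-plus-N-standard-deviations threshold.
set.seed(7)
x  <- sin(seq(0, 6 * pi, length.out = 600)) + rnorm(600, sd = 0.3)
w  <- 50
mu <- stats::filter(x, rep(1 / w, w), sides = 2)                 # moving average
s  <- sqrt(stats::filter((x - mu)^2, rep(1 / w, w), sides = 2))  # moving standard deviation
N  <- 2
peaks <- which(x > mu + N * s)   # samples above the dynamic threshold
head(peaks)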
Hope this helps. | How do I find peaks in a dataset? | A classic peak detection approach in signal processing is as follows:
Filter the signal to some reasonable range, depending on
sampling rate and signal properties, e.g. for ECG, an IIR ban | How do I find peaks in a dataset?
A classic peak detection approach in signal processing is as follows:
Filter the signal to some reasonable range, depending on
sampling rate and signal properties, e.g. for ECG, an IIR bandpass
filter @0.5-20Hz; a zero-phase filter will ensure that no phase
shift (and associated time lag) is introduced
A Hilbert transform or a wavelet approach can then be used to emphasize the peaks
A static or dynamic threshold can then be applied, where all samples above
the threshold are deemed peaks. In the case of a dynamic threshold,
it is usually defined as a threshold N standard deviations above or below a moving average estimate of the mean.
Another approach that works is to compare a sharply highpass filtered signal against a heavily smoothed (low-pass or median filtered) and apply step 3.
Hope this helps. | How do I find peaks in a dataset?
A classic peak detection approach in signal processing is as follows:
Filter the signal to some reasonable range, depending on
sampling rate and signal properties, e.g. for ECG, an IIR ban |
4,567 | How do I find peaks in a dataset? | As I mentioned in a comment, if the time series appears to be periodic, fitting a harmonic regression model provides a way to smooth the function and identify the peak by applying the first and second derivative tests. Huber has pointed out a nonparametric test that has advantages when there are multiple peaks and the function is not necessarily periodic. But there is no free lunch. While there are the advantages to his method that he mentions, there can be disadvantages if a parametric model is appropriate. That is always the flip side to using nonparametric techniques. Although it avoids parametric assumptions, the parametric approach is better when the parametric assumptions are appropriate. His procedure also does not take full advantage of the time series structure in the data. My approach does, but relies on a specific form for the model that assumes periodicity.
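To make the harmonic-regression route concrete, here is a minimal sketch of my own (the series is made up and the period P is assumed known, which is the key parametric assumption):
# Fit sine/cosine terms at an assumed period and read the peak off the fitted curve.
set.seed(3)
t <- 1:120
P <- 12
y <- 5 + 3 * sin(2 * pi * t / P) + rnorm(120, sd = 0.5)
fit    <- lm(y ~ sin(2 * pi * t / P) + cos(2 * pi * t / P))
smooth <- fitted(fit)
t[which.max(smooth)]   # time point where the fitted curve peaks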
I think that while it is appropriate to point out advantages of a suggested procedure it is also important to point out the potential disadvantages. Both my approach and Huber's find the peaks in an efficient manner. However I think his procedure takes a little more work when a local maximum is lower than the previously determined highest peak. | How do I find peaks in a dataset? | As I mentioned in comment if the time series appears to be periodic fitting a harmonic regression model provides a way to smooth the function and identify the peak by applying the first and second der | How do I find peaks in a dataset?
As I mentioned in a comment, if the time series appears to be periodic, fitting a harmonic regression model provides a way to smooth the function and identify the peak by applying the first and second derivative tests. Huber has pointed out a nonparametric test that has advantages when there are multiple peaks and the function is not necessarily periodic. But there is no free lunch. While there are the advantages to his method that he mentions, there can be disadvantages if a parametric model is appropriate. That is always the flip side to using nonparametric techniques. Although it avoids parametric assumptions, the parametric approach is better when the parametric assumptions are appropriate. His procedure also does not take full advantage of the time series structure in the data. My approach does, but relies on a specific form for the model that assumes periodicity.
I think that while it is appropriate to point out advantages of a suggested procedure it is also important to point out the potential disadvantages. Both my approach and Huber's find the peaks in an efficient manner. However I think his procedure takes a little more work when a local maximum is lower than the previously determined highest peak. | How do I find peaks in a dataset?
As I mentioned in comment if the time series appears to be periodic fitting a harmonic regression model provides a way to smooth the function and identify the peak by applying the first and second der |
4,568 | Pandas / Statsmodel / Scikit-learn | Scikit-learn (sklearn) is the best choice for machine learning, out of the three listed. While Pandas and Statsmodels do contain some predictive learning algorithms, they are hidden/not production-ready yet. Often, as authors will work on different projects, the libraries are complementary. For example, recently Pandas' Dataframes were integrated into Statsmodels. A relationship between sklearn and Pandas is not present (yet).
Define functionality. They all run. If you mean what is the most useful, then it depends on your application. I would definitely give Pandas a +1 here, as it has added a great new data structure to Python (dataframes). Pandas also probably has the best API.
They are all actively supported, though I would say Pandas has the best code base. Sklearn and Pandas are more active than Statsmodels.
The clear choice is Sklearn. It is easy and clear how to perform it.
from sklearn.linear_model import LogisticRegression as LR
logr = LR()
logr.fit( X, Y )
results = logr.predict( test_data) | Pandas / Statsmodel / Scikit-learn | Scikit-learn (sklearn) is the best choice for machine learning, out of the three listed. While Pandas and Statsmodels do contain some predictive learning algorithms, they are hidden/not production-rea | Pandas / Statsmodel / Scikit-learn
Scikit-learn (sklearn) is the best choice for machine learning, out of the three listed. While Pandas and Statsmodels do contain some predictive learning algorithms, they are hidden/not production-ready yet. Often, as authors will work on different projects, the libraries are complementary. For example, recently Pandas' Dataframes were integrated into Statsmodels. A relationship between sklearn and Pandas is not present (yet).
Define functionality. They all run. If you mean what is the most useful, then it depends on your application. I would definitely give Pandas a +1 here, as it has added a great new data structure to Python (dataframes). Pandas also probably has the best API.
They are all actively supported, though I would say Pandas has the best code base. Sklearn and Pandas are more active than Statsmodels.
The clear choice is Sklearn. It is easy and clear how to perform it.
from sklearn.linear_model import LogisticRegression as LR
logr = LR()
logr.fit( X, Y )
results = logr.predict( test_data) | Pandas / Statsmodel / Scikit-learn
Scikit-learn (sklearn) is the best choice for machine learning, out of the three listed. While Pandas and Statsmodels do contain some predictive learning algorithms, they are hidden/not production-rea |
4,569 | Pandas / Statsmodel / Scikit-learn | I would like to qualify and clarify a bit the accepted answer.
The three packages are complementary to each other since they cover different areas, have different main objectives, or emphasize different areas in machine learning/statistics.
pandas is mainly a package to handle and operate directly on data.
scikit-learn is doing machine learning with emphasis on predictive modeling with often large and sparse data
statsmodels is doing "traditional" statistics and econometrics, with much stronger emphasis on parameter estimation and (statistical) testing.
statsmodels has pandas as a dependency, pandas optionally uses statsmodels for some statistics. statsmodels is using patsy to provide a similar formula interface to the models as R.
There is some overlap in models between scikit-learn and statsmodels, but with different objectives.
see for example The Two Cultures: statistics vs. machine learning?
some more about statsmodels
statsmodels has the lowest development activity and longest release cycle of the three. statsmodels has many contributors but unfortunately still only two "maintainers" (I'm one of them.)
The core of statsmodels is "production ready": linear models, robust linear models, generalised linear models and discrete models have been around for several years and are verified against Stata and R. statsmodels also has a time series analysis part covering AR, ARMA and VAR (vector autoregressive) regression, which are not available in any other python package.
Some examples to show some specific differences between the machine learning approach in scikit-learn and the statistics and econometrics approach in statsmodels:
Simple linear regression, OLS, has a large number of post-estimation analyses
http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLSResults.html including tests on parameters, outlier measures and specification tests http://statsmodels.sourceforge.net/devel/stats.html#residual-diagnostics-and-specification-tests
Logistic Regression can be done in statsmodels either as Logit model in discrete or as a family in generalized linear model (GLM). http://statsmodels.sourceforge.net/devel/glm.html#module-reference
GLM includes the usual families; discrete models contain, besides Logit, also Probit, multinomial and count regression.
Logit
Using Logit is as simple as this
http://statsmodels.sourceforge.net/devel/examples/generated/example_discrete.html
>>> import statsmodels.api as sm
>>> x = sm.add_constant(data.exog, prepend=False)
>>> y = data.endog
>>> res1 = sm.Logit(y, x).fit()
Optimization terminated successfully.
Current function value: 0.402801
Iterations 7
>>> print res1.summary()
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 32
Model: Logit Df Residuals: 28
Method: MLE Df Model: 3
Date: Sat, 26 Jan 2013 Pseudo R-squ.: 0.3740
Time: 07:34:59 Log-Likelihood: -12.890
converged: True LL-Null: -20.592
LLR p-value: 0.001502
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1 2.8261 1.263 2.238 0.025 0.351 5.301
x2 0.0952 0.142 0.672 0.501 -0.182 0.373
x3 2.3787 1.065 2.234 0.025 0.292 4.465
const -13.0213 4.931 -2.641 0.008 -22.687 -3.356
==============================================================================
>>> dir(res1)
...
>>> res1.predict(x.mean(0))
0.25282026208742708 | Pandas / Statsmodel / Scikit-learn | I would like to qualify and clarify a bit the accepted answer.
The three packages are complementary to each other since they cover different areas, have different main objectives, or emphasize differe | Pandas / Statsmodel / Scikit-learn
I would like to qualify and clarify a bit the accepted answer.
The three packages are complementary to each other since they cover different areas, have different main objectives, or emphasize different areas in machine learning/statistics.
pandas is mainly a package to handle and operate directly on data.
scikit-learn is doing machine learning with emphasis on predictive modeling with often large and sparse data
statsmodels is doing "traditional" statistics and econometrics, with much stronger emphasis on parameter estimation and (statistical) testing.
statsmodels has pandas as a dependency, pandas optionally uses statsmodels for some statistics. statsmodels is using patsy to provide a similar formula interface to the models as R.
There is some overlap in models between scikit-learn and statsmodels, but with different objectives.
see for example The Two Cultures: statistics vs. machine learning?
some more about statsmodels
statsmodels has the lowest development activity and longest release cycle of the three. statsmodels has many contributors but unfortunately still only two "maintainers" (I'm one of them.)
The core of statsmodels is "production ready": linear models, robust linear models, generalised linear models and discrete models have been around for several years and are verified against Stata and R. statsmodels also has a time series analysis part covering AR, ARMA and VAR (vector autoregressive) regression, which are not available in any other python package.
Some examples to show some specific differences between the machine learning approach in scikit-learn and the statistics and econometrics approach in statsmodels:
Simple linear regression, OLS, has a large number of post-estimation analyses
http://statsmodels.sourceforge.net/devel/generated/statsmodels.regression.linear_model.OLSResults.html including tests on parameters, outlier measures and specification tests http://statsmodels.sourceforge.net/devel/stats.html#residual-diagnostics-and-specification-tests
Logistic Regression can be done in statsmodels either as Logit model in discrete or as a family in generalized linear model (GLM). http://statsmodels.sourceforge.net/devel/glm.html#module-reference
GLM includes the usual families; discrete models contain, besides Logit, also Probit, multinomial and count regression.
Logit
Using Logit is as simple as this
http://statsmodels.sourceforge.net/devel/examples/generated/example_discrete.html
>>> import statsmodels.api as sm
>>> x = sm.add_constant(data.exog, prepend=False)
>>> y = data.endog
>>> res1 = sm.Logit(y, x).fit()
Optimization terminated successfully.
Current function value: 0.402801
Iterations 7
>>> print res1.summary()
Logit Regression Results
==============================================================================
Dep. Variable: y No. Observations: 32
Model: Logit Df Residuals: 28
Method: MLE Df Model: 3
Date: Sat, 26 Jan 2013 Pseudo R-squ.: 0.3740
Time: 07:34:59 Log-Likelihood: -12.890
converged: True LL-Null: -20.592
LLR p-value: 0.001502
==============================================================================
coef std err z P>|z| [95.0% Conf. Int.]
------------------------------------------------------------------------------
x1 2.8261 1.263 2.238 0.025 0.351 5.301
x2 0.0952 0.142 0.672 0.501 -0.182 0.373
x3 2.3787 1.065 2.234 0.025 0.292 4.465
const -13.0213 4.931 -2.641 0.008 -22.687 -3.356
==============================================================================
>>> dir(res1)
...
>>> res1.predict(x.mean(0))
0.25282026208742708 | Pandas / Statsmodel / Scikit-learn
I would like to qualify and clarify a bit the accepted answer.
The three packages are complementary to each other since they cover different areas, have different main objectives, or emphasize differe |
4,570 | Book for reading before Elements of Statistical Learning? | I bought, but have not yet read,
S. Marsland, Machine Learning: An Algorithmic Perspective, Chapman & Hall, 2009.
However, the reviews are favorable and state that it is more suitable for beginners than other ML books that have more depth. Flipping through the pages, it looks like a good fit for me because I have little math background. | Book for reading before Elements of Statistical Learning? | I bought, but have not yet read,
S. Marsland, Machine Learning: An Algorithmic Perspective, Chapman & Hall, 2009.
However, the reviews are favorable and state that it is more suitable for beginner | Book for reading before Elements of Statistical Learning?
I bought, but have not yet read,
S. Marsland, Machine Learning: An Algorithmic Perspective, Chapman & Hall, 2009.
However, the reviews are favorable and state that it is more suitable for beginners than other ML books that have more depth. Flipping through the pages, it looks like a good fit for me because I have little math background. | Book for reading before Elements of Statistical Learning?
I bought, but have not yet read,
S. Marsland, Machine Learning: An Algorithmic Perspective, Chapman & Hall, 2009.
However, the reviews are favorable and state that it is more suitable for beginner |
4,571 | Book for reading before Elements of Statistical Learning? | The authors of Elements of Statistical Learning have come out with a new book (Aug 2013) aimed at users without heavy math backgrounds. An Introduction to Statistical Learning: with Applications in R
The free PDF version of this book can currently be found here. | Book for reading before Elements of Statistical Learning? | The authors of Elements of Statistical Learning have come out with a new book (Aug 2013) aimed at users without heavy math backgrounds. An Introduction to Statistical Learning: with Applications in R
| Book for reading before Elements of Statistical Learning?
The authors of Elements of Statistical Learning have come out with a new book (Aug 2013) aimed at users without heavy math backgrounds. An Introduction to Statistical Learning: with Applications in R
The free PDF version of this book can currently be found here. | Book for reading before Elements of Statistical Learning?
The authors of Elements of Statistical Learning have come out with a new book (Aug 2013) aimed at users without heavy math backgrounds. An Introduction to Statistical Learning: with Applications in R
|
4,572 | Book for reading before Elements of Statistical Learning? | I found Programming Collective Intelligence the easiest book for beginners, since the author Toby Segaran is focused on allowing the median software developer to get his/her hands dirty with data hacking as fast as possible.
Typical chapter: The data problem is clearly described, followed by a rough explanation of how the algorithm works, and finally it shows how to create some insights with just a few lines of code.
The usage of Python allows one to understand everything rather fast (you do not need to know Python; seriously, I did not know it before either). Don't think that this book is only focused on creating recommender systems. It also deals with text mining / spam filtering / optimization / clustering / validation etc., and hence gives you a neat overview of the basic tools of every data miner.
Chapter 10 even deals with stock market data, but the focus is not on time series data mining. Maybe the only drawback (for you) of this excellent book. | Book for reading before Elements of Statistical Learning? | I found Programming Collective Intelligence the easiest book for beginners, since the author Toby Segaran is is focused on allowing the median software developer to get his/her hands dirty with data h | Book for reading before Elements of Statistical Learning?
I found Programming Collective Intelligence the easiest book for beginners, since the author Toby Segaran is focused on allowing the median software developer to get his/her hands dirty with data hacking as fast as possible.
Typical chapter: The data problem is clearly described, followed by a rough explanation of how the algorithm works, and finally it shows how to create some insights with just a few lines of code.
The usage of Python allows one to understand everything rather fast (you do not need to know Python; seriously, I did not know it before either). Don't think that this book is only focused on creating recommender systems. It also deals with text mining / spam filtering / optimization / clustering / validation etc., and hence gives you a neat overview of the basic tools of every data miner.
Chapter 10 even deals with stock market data, but the focus is not on time series data mining. Maybe the only drawback (for you) of this excellent book. | Book for reading before Elements of Statistical Learning?
I found Programming Collective Intelligence the easiest book for beginners, since the author Toby Segaran is is focused on allowing the median software developer to get his/her hands dirty with data h |
4,573 | Book for reading before Elements of Statistical Learning? | Introduction to Machine Learning, by E. Alpaydin (MIT Press, 2010, 2nd ed.), covers a lot of topics with nice illustrations (much like Bishop's Pattern Recognition and Machine Learning).
In addition, Andrew W. Moore has some nice tutorials on Statistical Data Mining. | Book for reading before Elements of Statistical Learning? | Introduction to Machine Learning, by E. Alpaydin (MIT Press, 2010, 2nd ed.), covers a lot of topics with nice illustrations (much like Bishop's Pattern Recognition and Machine Learning).
In addition, | Book for reading before Elements of Statistical Learning?
Introduction to Machine Learning, by E. Alpaydin (MIT Press, 2010, 2nd ed.), covers a lot of topics with nice illustrations (much like Bishop's Pattern Recognition and Machine Learning).
In addition, Andrew W. Moore has some nice tutorials on Statistical Data Mining. | Book for reading before Elements of Statistical Learning?
Introduction to Machine Learning, by E. Alpaydin (MIT Press, 2010, 2nd ed.), covers a lot of topics with nice illustrations (much like Bishop's Pattern Recognition and Machine Learning).
In addition, |
4,574 | Book for reading before Elements of Statistical Learning? | Mayhaps Wasserman's All of Statistics would be of interest. You can sample the book from the link given - and just the first few paragraphs of the preface make a hard sale to your market - and you can likely download the book free through Springer if you are associated with a university.
EDIT: Oops, didn't notice how ancient this thread was. | Book for reading before Elements of Statistical Learning? | Mayhaps Wasserman's All of Statistics would be of interest. You can sample the book from the link given - and just the first few paragraphs of the preface make a hard sale to your market - and you can | Book for reading before Elements of Statistical Learning?
Mayhaps Wasserman's All of Statistics would be of interest. You can sample the book from the link given - and just the first few paragraphs of the preface make a hard sale to your market - and you can likely download the book free through Springer if you are associated with a university.
EDIT: Oops, didn't notice how ancient this thread was. | Book for reading before Elements of Statistical Learning?
Mayhaps Wasserman's All of Statistics would be of interest. You can sample the book from the link given - and just the first few paragraphs of the preface make a hard sale to your market - and you can |
4,575 | Book for reading before Elements of Statistical Learning? | The Elements Of Statistical Learning might be a tough read, especially for a self-learner. While searching for some explanations on the second chapter I have stumbled on the following resource: https://waxworksmath.com/Authors/G_M/Hastie/WriteUp/Weatherwax_Epstein_Hastie_Solution_Manual.pdf. It contains 100+ pages of annotations and explanations that clarify some complicated moments of the book. A great resource for everyone reading this book. This complementary text includes solutions for exercises. | Book for reading before Elements of Statistical Learning? | The Elements Of Statistical Learning might be a tough read, especially for a self-learner. While searching for some explanations on the second chapter I have stumbled on the following resource: https: | Book for reading before Elements of Statistical Learning?
The Elements Of Statistical Learning might be a tough read, especially for a self-learner. While searching for some explanations on the second chapter I have stumbled on the following resource: https://waxworksmath.com/Authors/G_M/Hastie/WriteUp/Weatherwax_Epstein_Hastie_Solution_Manual.pdf. It contains 100+ pages of annotations and explanations that clarify some complicated moments of the book. A great resource for everyone reading this book. This complementary text includes solutions for exercises. | Book for reading before Elements of Statistical Learning?
The Elements Of Statistical Learning might be a tough read, especially for a self-learner. While searching for some explanations on the second chapter I have stumbled on the following resource: https: |
4,576 | Book for reading before Elements of Statistical Learning? | I'd strongly recommend A First Course in Machine Learning by Rogers and Girolami. It covers the key ideas in a very logical order, with good examples and with the minimum level of maths to have a proper grounding in the fundamentals. It doesn't have the breadth of coverage of some books, but that is exactly why it is so good as an introductory text. | Book for reading before Elements of Statistical Learning? | I'd strongly recommend A First Course in Machine Learning by Rogers and Girolami. It covers the key ideas in a very logical order, with good examples and with the minimum level of maths to have a pro | Book for reading before Elements of Statistical Learning?
I'd strongly recommend A First Course in Machine Learning by Rogers and Girolami. It covers the key ideas in a very logical order, with good examples and with the minimum level of maths to have a proper grounding in the fundamentals. It doesn't have the breadth of coverage of some books, but that is exactly why it is so good as an introductory text. | Book for reading before Elements of Statistical Learning?
I'd strongly recommend A First Course in Machine Learning by Rogers and Girolami. It covers the key ideas in a very logical order, with good examples and with the minimum level of maths to have a pro |
4,577 | Book for reading before Elements of Statistical Learning? | Another book that is very interesting is Bayesian Reasoning and Machine Learning by David Barber. The book is available as a free download from the author's website:
http://www.cs.ucl.ac.uk/staff/d.barber/brml/ | Book for reading before Elements of Statistical Learning? | Another book that is very interesting is Bayesian Reasoning and Machine Learning by David Barber. The book is available as a free download from the author's website:
http://www.cs.ucl.ac.uk/staff/d.b | Book for reading before Elements of Statistical Learning?
Another book that is very interesting is Bayesian Reasoning and Machine Learning by David Barber. The book is available as a free download from the author's website:
http://www.cs.ucl.ac.uk/staff/d.barber/brml/ | Book for reading before Elements of Statistical Learning?
Another book that is very interesting is Bayesian Reasoning and Machine Learning by David Barber. The book is available as a free download from the author's website:
http://www.cs.ucl.ac.uk/staff/d.b |
4,578 | Replicating Stata's "robust" option in R | Charles is nearly there in his answer, but robust option of the regress command (and other regression estimation commands) in Stata makes it possible to use multiple types of heteroskedasticity and autocorrelation robust variance-covariance matrix estimators, as does the coeftest function in the lmtest package, which in turn depends on the respective variance-covariance matrices produced by the vcovHC function in the sandwich package.
However, the default variance-covariance matrices used by the two are different:
1. The default variance-covariance matrix returned by vcovHC is the so-called HC3 for reasons described in the man page for vcovHC.
2. The sandwich option used by Charles makes coeftest use the HC0 robust variance-covariance matrix.
3. To reproduce the Stata default behavior of using the robust option in a call to regress you need to request vcovHC to use the HC1 robust variance-covariance matrix.
Read more about it here.
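As a quick numerical check of how these estimators relate (my own snippet, using a built-in dataset rather than the elemapi2 data): HC1 is just HC0 rescaled by the degrees-of-freedom correction n/(n-k), which is what Stata's robust option applies.
library(sandwich)
m <- lm(mpg ~ wt + hp, data = mtcars)
n <- nobs(m); k <- length(coef(m))
all.equal(vcovHC(m, "HC1"), vcovHC(m, "HC0") * n / (n - k))   # TRUE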
The following example, which demonstrates all the points made above, is based on the example here.
library(foreign)
library(sandwich)
library(lmtest)
dfAPI = read.dta("http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2.dta")
lmAPI = lm(api00 ~ acs_k3 + acs_46 + full + enroll, data= dfAPI)
summary(lmAPI) # non-robust
# check that "sandwich" returns HC0
coeftest(lmAPI, vcov = sandwich) # robust; sandwich
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC0")) # robust; HC0
# check that the default robust var-cov matrix is HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI)) # robust; HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC3")) # robust; HC3 (default)
# reproduce the Stata default
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC1")) # robust; HC1 (Stata default)
The last line of code above reproduces results from Stata:
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
regress api00 acs_k3 acs_46 full enroll, robust | Replicating Stata's "robust" option in R | Charles is nearly there in his answer, but robust option of the regress command (and other regression estimation commands) in Stata makes it possible to use multiple types of heteroskedasticity and au | Replicating Stata's "robust" option in R
Charles is nearly there in his answer, but robust option of the regress command (and other regression estimation commands) in Stata makes it possible to use multiple types of heteroskedasticity and autocorrelation robust variance-covariance matrix estimators, as does the coeftest function in the lmtest package, which in turn depends on the respective variance-covariance matrices produced by the vcovHC function in the sandwich package.
However, the default variance-covariance matrices used by the two are different:
1. The default variance-covariance matrix returned by vcovHC is the so-called HC3 for reasons described in the man page for vcovHC.
2. The sandwich option used by Charles makes coeftest use the HC0 robust variance-covariance matrix.
3. To reproduce the Stata default behavior of using the robust option in a call to regress you need to request vcovHC to use the HC1 robust variance-covariance matrix.
Read more about it here.
The following example, which demonstrates all the points made above, is based on the example here.
library(foreign)
library(sandwich)
library(lmtest)
dfAPI = read.dta("http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2.dta")
lmAPI = lm(api00 ~ acs_k3 + acs_46 + full + enroll, data= dfAPI)
summary(lmAPI) # non-robust
# check that "sandwich" returns HC0
coeftest(lmAPI, vcov = sandwich) # robust; sandwich
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC0")) # robust; HC0
# check that the default robust var-cov matrix is HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI)) # robust; HC3
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC3")) # robust; HC3 (default)
# reproduce the Stata default
coeftest(lmAPI, vcov = vcovHC(lmAPI, "HC1")) # robust; HC1 (Stata default)
The last line of code above reproduces results from Stata:
use http://www.ats.ucla.edu/stat/stata/webbooks/reg/elemapi2
regress api00 acs_k3 acs_46 full enroll, robust | Replicating Stata's "robust" option in R
Charles is nearly there in his answer, but robust option of the regress command (and other regression estimation commands) in Stata makes it possible to use multiple types of heteroskedasticity and au |
4,579 | Replicating Stata's "robust" option in R | I found a description on the following website that replicates Stata's ''robust'' option in R.
https://economictheoryblog.com/2016/08/08/robust-standard-errors-in-r
Following the instructions, all you need to do is load a function into your R session and then set the parameter ''robust'' in your summary function to TRUE.
summary(lm.object, robust=TRUE) | Replicating Stata's "robust" option in R | I found a description on the following website that replicates Stata's ''robust'' option in R.
https://economictheoryblog.com/2016/08/08/robust-standard-errors-in-r
Following the instructions, all yo | Replicating Stata's "robust" option in R
I found a description on the following website that replicates Stata's ''robust'' option in R.
https://economictheoryblog.com/2016/08/08/robust-standard-errors-in-r
Following the instructions, all you need to do is load a function into your R session and then set the parameter ''robust'' in your summary function to TRUE.
summary(lm.object, robust=TRUE) | Replicating Stata's "robust" option in R
I found a description on the following website that replicates Stata's ''robust'' option in R.
https://economictheoryblog.com/2016/08/08/robust-standard-errors-in-r
Following the instructions, all yo |
4,580 | Replicating Stata's "robust" option in R | As of April 2018 I believe you want the estimatr package, which provides a near drop in replacement for lm. Several examples pulled nearly from the documentation:
library(estimatr)
library(car)
# HC1 robust standard errors
model <- lm_robust(GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
se_type = "stata")
summary(model)
#>
#> Call:
#> lm_robust(formula = GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
#> se_type = "stata")
#>
#> Standard error type: HC1
#>
#> Coefficients:
#> Estimate Std. Error Pr(>|t|) CI Lower CI Upper DF
#> (Intercept) -3.60625 1.60084 0.0258665 -6.77180 -0.4407 137
#> gpa0 0.06814 0.02024 0.0009868 0.02812 0.1082 137
#> ssp 0.31917 0.18202 0.0817589 -0.04077 0.6791 137
#>
#> Multiple R-squared: 0.09262 , Adjusted R-squared: 0.07937
#> F-statistic: 6.992 on 2 and 137 DF, p-value: 0.001284
# HC1 cluster robust standard errors
model2 <- lm_robust(GPA_year2 ~ gpa0 + ssp, cluster = ssp,
data = alo_star_men, se_type = "stata")
summary(model2)
#>
#> Call:
#> lm_robust(formula = GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
#> clusters = ssp, se_type = "stata")
#>
#> Standard error type: stata
#>
#> Coefficients:
#> Estimate Std. Error Pr(>|t|) CI Lower CI Upper DF
#> (Intercept) -3.60625 1.433195 0.240821 -21.8167 14.6042 1
#> gpa0 0.06814 0.018122 0.165482 -0.1621 0.2984 1
#> ssp 0.31917 0.004768 0.009509 0.2586 0.3798 1
#>
#> Multiple R-squared: 0.09262 , Adjusted R-squared: 0.07937
#> F-statistic: 6.992 on 2 and 137 DF, p-value: 0.001284
The car package then makes it easy to perform omnibus hypothesis tests for these models:
linearHypothesis(model, c("gpa0 = ssp"))
#> Linear hypothesis test
#>
#> Hypothesis:
#> gpa0 - ssp = 0
#>
#> Model 1: restricted model
#> Model 2: GPA_year2 ~ gpa0 + ssp
#>
#> Res.Df Df Chisq Pr(>Chisq)
#> 1 138
#> 2 137 1 1.8859 0.1697 | Replicating Stata's "robust" option in R | As of April 2018 I believe you want the estimatr package, which provides a near drop in replacement for lm. Several examples pulled nearly from the documentation:
library(estimatr)
library(car)
# HC1 | Replicating Stata's "robust" option in R
As of April 2018 I believe you want the estimatr package, which provides a near drop in replacement for lm. Several examples pulled nearly from the documentation:
library(estimatr)
library(car)
# HC1 robust standard errors
model <- lm_robust(GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
se_type = "stata")
summary(model)
#>
#> Call:
#> lm_robust(formula = GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
#> se_type = "stata")
#>
#> Standard error type: HC1
#>
#> Coefficients:
#> Estimate Std. Error Pr(>|t|) CI Lower CI Upper DF
#> (Intercept) -3.60625 1.60084 0.0258665 -6.77180 -0.4407 137
#> gpa0 0.06814 0.02024 0.0009868 0.02812 0.1082 137
#> ssp 0.31917 0.18202 0.0817589 -0.04077 0.6791 137
#>
#> Multiple R-squared: 0.09262 , Adjusted R-squared: 0.07937
#> F-statistic: 6.992 on 2 and 137 DF, p-value: 0.001284
# HC1 cluster robust standard errors
model2 <- lm_robust(GPA_year2 ~ gpa0 + ssp, cluster = ssp,
data = alo_star_men, se_type = "stata")
summary(model2)
#>
#> Call:
#> lm_robust(formula = GPA_year2 ~ gpa0 + ssp, data = alo_star_men,
#> clusters = ssp, se_type = "stata")
#>
#> Standard error type: stata
#>
#> Coefficients:
#> Estimate Std. Error Pr(>|t|) CI Lower CI Upper DF
#> (Intercept) -3.60625 1.433195 0.240821 -21.8167 14.6042 1
#> gpa0 0.06814 0.018122 0.165482 -0.1621 0.2984 1
#> ssp 0.31917 0.004768 0.009509 0.2586 0.3798 1
#>
#> Multiple R-squared: 0.09262 , Adjusted R-squared: 0.07937
#> F-statistic: 6.992 on 2 and 137 DF, p-value: 0.001284
The car package then makes it easy to perform omnibus hypothesis tests for these models:
linearHypothesis(model, c("gpa0 = ssp"))
#> Linear hypothesis test
#>
#> Hypothesis:
#> gpa0 - ssp = 0
#>
#> Model 1: restricted model
#> Model 2: GPA_year2 ~ gpa0 + ssp
#>
#> Res.Df Df Chisq Pr(>Chisq)
#> 1 138
#> 2 137 1 1.8859 0.1697 | Replicating Stata's "robust" option in R
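A quick cross-check (my addition, assuming the sandwich package is also installed): the se_type = "stata" standard errors above are HC1, so they should match vcovHC applied to a plain lm fit of the same model.
fit <- lm(GPA_year2 ~ gpa0 + ssp, data = alo_star_men)
sqrt(diag(sandwich::vcovHC(fit, type = "HC1")))   # compare with the Std. Error column above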
As of April 2018 I believe you want the estimatr package, which provides a near drop in replacement for lm. Several examples pulled nearly from the documentation:
library(estimatr)
library(car)
# HC1 |
4,581 | Replicating Stata's "robust" option in R | I'd edit the question. You're confusing robust regression with Stata's robust command. There seems to be no benefit to introducing this confusion.
I think there are a few approaches. I haven't looked at them all and not sure which is the best:
The sandwich package:
library(sandwich)
coeftest(model, vcov=sandwich)
But this doesn't give me the same answers I get from Stata for some reason. I've never tried to work out why - but above in comments there is a suggested answer - I just don't use this package.
The rms package:
I find this a bit of a pain to work with but usually get good answers with some effort. And it is the most useful for me.
model = ols(a~b, x=TRUE)
robcov(model)
You can code it from scratch
See this blog post (http://thetarzan.wordpress.com/2011/05/28/heteroskedasticity-robust-and-clustered-standard-errors-in-r/). It looks like the most painful option, but remarkably easy and this option often works the best. | Replicating Stata's "robust" option in R | I'd edit the question. You're confusing robust regression with Stata's robust command. There seems to be no benefit to introducing this confusion.
I think there are a few approaches. I haven't looked | Replicating Stata's "robust" option in R
I'd edit the question. You're confusing robust regression with Stata's robust command. There seems to be no benefit to introducing this confusion.
I think there are a few approaches. I haven't looked at them all and not sure which is the best:
The sandwich package:
library(sandwich)
coeftest(model, vcov=sandwich)
But this doesn't give me the same answers I get from Stata for some reason. I've never tried to work out why - but above in comments there is a suggested answer - I just don't use this package.
The rms package:
I find this a bit of a pain to work with but usually get good answers with some effort. And it is the most useful for me.
model = ols(a~b, x=TRUE)
robcov(model)
You can code it from scratch
See this blog post (http://thetarzan.wordpress.com/2011/05/28/heteroskedasticity-robust-and-clustered-standard-errors-in-r/). It looks like the most painful option, but remarkably easy and this option often works the best. | Replicating Stata's "robust" option in R
I'd edit the question. You're confusing robust regression with Stata's robust command. There seems to be no benefit to introducing this confusion.
I think there are a few approaches. I haven't looked |
4,582 | Generic sum of Gamma random variables | First, combine any sums having the same scale factor: a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate.
Next, observe that the characteristic function (cf) of $\Gamma(n, \beta)$ is $(1-i \beta t)^{-n}$, whence the cf of a sum of these distributions is the product
$$\prod_{j} \frac{1}{(1-i \beta_j t)^{n_j}}.$$
When the $n_j$ are all integral, this product expands as a partial fraction into a linear combination of $(1-i \beta_j t)^{-\nu}$ where the $\nu$ are integers between $1$ and $n_j$. In the example with $\beta_1 = 1, n_1=8$ (from the sum of $\Gamma(3,1)$ and $\Gamma(5,1)$) and $\beta_2 = 2, n_2=4$ we find
$$\begin{aligned}&\frac{1}{(1-i t)^{8}}\frac{1}{(1- 2i t)^{4}} = \\
&\frac{1}{(t+i)^8}-\frac{8 i}{(t+i)^7}-\frac{40}{(t+i)^6}+\frac{160 i}{(t+i)^5}+\frac{560}{(t+i)^4}-\frac{1792 i}{(t+i)^3}\\
&-\frac{5376}{(t+i)^2}+\frac{15360 i}{t+i}+\frac{256}{(2t+i)^4}+\frac{2048 i}{(2 t+i)^3}-\frac{9216}{(2t+i)^2}-\frac{30720 i}{2t+i}.
\end{aligned}$$
The inverse of taking the cf is the inverse Fourier Transform, which is linear: that means we may apply it term by term. Each term is recognizable as a multiple of the cf of a Gamma distribution and so is readily inverted to yield the PDF. In the example we obtain
$$\begin{aligned}
&\frac{e^{-t} t^7}{5040}+\frac{1}{90} e^{-t} t^6+\frac{1}{3} e^{-t} t^5+\frac{20}{3} e^{-t} t^4+\frac{8}{3} e^{-\frac{t}{2}} t^3+\frac{280}{3} e^{-t} t^3\\
&-128 e^{-\frac{t}{2}} t^2+896 e^{-t} t^2+2304 e^{-\frac{t}{2}} t+5376 e^{-t} t-15360 e^{-\frac{t}{2}}+15360 e^{-t}
\end{aligned}$$
for the PDF of the sum.
This is a finite mixture of Gamma distributions having scale factors equal to those within the sum and shape factors less than or equal to those within the sum. Except in special cases (where some cancellation might occur), the number of terms is given by the total shape parameter $n_1 + n_2 + \cdots$ (assuming all the $n_j$ are different).
As a test, here is a histogram of $10^4$ results obtained by adding independent draws from the $\Gamma(8,1)$ and $\Gamma(4,2)$ distributions. On it is superimposed the graph of $10^4$ times the preceding function. The fit is very good.
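For readers who want to reproduce that check, here is a minimal R sketch (my addition, not in the original answer) that codes the mixture density just derived and compares it with simulated draws:
f <- function(t)
  exp(-t)*t^7/5040 + exp(-t)*t^6/90 + exp(-t)*t^5/3 + 20*exp(-t)*t^4/3 +
  8*exp(-t/2)*t^3/3 + 280*exp(-t)*t^3/3 - 128*exp(-t/2)*t^2 + 896*exp(-t)*t^2 +
  2304*exp(-t/2)*t + 5376*exp(-t)*t - 15360*exp(-t/2) + 15360*exp(-t)
integrate(f, 0, Inf)                      # equals 1, confirming a proper density
set.seed(17)
x <- rgamma(1e4, shape = 8, scale = 1) + rgamma(1e4, shape = 4, scale = 2)
hist(x, breaks = 50, freq = FALSE)
curve(f, add = TRUE, col = "red")         # the derived density tracks the histogram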
Moschopoulos carries this idea one step further by expanding the cf of the sum into an infinite series of Gamma characteristic functions whenever one or more of the $n_i$ is non-integral, and then terminates the infinite series at a point where it is reasonably well approximated. | Generic sum of Gamma random variables | First, combine any sums having the same scale factor: a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate.
Next, observe that the characteristic function (cf) of $ | Generic sum of Gamma random variables
First, combine any sums having the same scale factor: a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate.
Next, observe that the characteristic function (cf) of $\Gamma(n, \beta)$ is $(1-i \beta t)^{-n}$, whence the cf of a sum of these distributions is the product
$$\prod_{j} \frac{1}{(1-i \beta_j t)^{n_j}}.$$
When the $n_j$ are all integral, this product expands as a partial fraction into a linear combination of $(1-i \beta_j t)^{-\nu}$ where the $\nu$ are integers between $1$ and $n_j$. In the example with $\beta_1 = 1, n_1=8$ (from the sum of $\Gamma(3,1)$ and $\Gamma(5,1)$) and $\beta_2 = 2, n_2=4$ we find
$$\begin{aligned}&\frac{1}{(1-i t)^{8}}\frac{1}{(1- 2i t)^{4}} = \\
&\frac{1}{(t+i)^8}-\frac{8 i}{(t+i)^7}-\frac{40}{(t+i)^6}+\frac{160 i}{(t+i)^5}+\frac{560}{(t+i)^4}-\frac{1792 i}{(t+i)^3}\\
&-\frac{5376}{(t+i)^2}+\frac{15360 i}{t+i}+\frac{256}{(2t+i)^4}+\frac{2048 i}{(2 t+i)^3}-\frac{9216}{(2t+i)^2}-\frac{30720 i}{2t+i}.
\end{aligned}$$
The inverse of taking the cf is the inverse Fourier Transform, which is linear: that means we may apply it term by term. Each term is recognizable as a multiple of the cf of a Gamma distribution and so is readily inverted to yield the PDF. In the example we obtain
$$\begin{aligned}
&\frac{e^{-t} t^7}{5040}+\frac{1}{90} e^{-t} t^6+\frac{1}{3} e^{-t} t^5+\frac{20}{3} e^{-t} t^4+\frac{8}{3} e^{-\frac{t}{2}} t^3+\frac{280}{3} e^{-t} t^3\\
&-128 e^{-\frac{t}{2}} t^2+896 e^{-t} t^2+2304 e^{-\frac{t}{2}} t+5376 e^{-t} t-15360 e^{-\frac{t}{2}}+15360 e^{-t}
\end{aligned}$$
for the PDF of the sum.
This is a finite mixture of Gamma distributions having scale factors equal to those within the sum and shape factors less than or equal to those within the sum. Except in special cases (where some cancellation might occur), the number of terms is given by the total shape parameter $n_1 + n_2 + \cdots$ (assuming all the $n_j$ are different).
As a test, here is a histogram of $10^4$ results obtained by adding independent draws from the $\Gamma(8,1)$ and $\Gamma(4,2)$ distributions. On it is superimposed the graph of $10^4$ times the preceding function. The fit is very good.
Moschopoulos carries this idea one step further by expanding the cf of the sum into an infinite series of Gamma characteristic functions whenever one or more of the $n_i$ is non-integral, and then terminates the infinite series at a point where it is reasonably well approximated. | Generic sum of Gamma random variables
First, combine any sums having the same scale factor: a $\Gamma(n, \beta)$ plus a $\Gamma(m,\beta)$ variate form a $\Gamma(n+m,\beta)$ variate.
Next, observe that the characteristic function (cf) of $ |
4,583 | Generic sum of Gamma random variables | The Welch–Satterthwaite equation could be used to give an approximate answer in the form of a gamma distribution. This has the nice property of letting us treat gamma distributions as being (approximately) closed under addition. This is the approximation in the commonly used Welch's t-test.
(The gamma distribution can be viewed as a scaled chi-square distribution that allows a non-integer shape parameter.)
I've adapted the approximation to the $k, \theta$ parametrization of the gamma distribution:
$$
k_{sum} = { (\sum_i \theta_i k_i)^2 \over \sum_i \theta_i^2 k_i }
$$
$$
\theta_{sum} = { { \sum \theta_i k_i } \over k_{sum} }
$$
Let $k=(3,4,5)$, $\theta=(1,2,1)$
So we get approximately Gamma(10.666... ,1.5)
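A minimal sketch (my addition) verifying this by moment matching and simulation:
k     <- c(3, 4, 5)                                 # shapes
theta <- c(1, 2, 1)                                 # scales
k_sum     <- sum(theta * k)^2 / sum(theta^2 * k)    # 10.666...
theta_sum <- sum(theta * k) / k_sum                 # 1.5
set.seed(1)
sims <- rgamma(1e5, k[1], scale = theta[1]) + rgamma(1e5, k[2], scale = theta[2]) +
        rgamma(1e5, k[3], scale = theta[3])
c(exact = sum(k * theta),   approx = k_sum * theta_sum,   simulated = mean(sims))
c(exact = sum(k * theta^2), approx = k_sum * theta_sum^2, simulated = var(sims))
The approximating gamma matches the mean and variance of the true sum exactly; only the higher moments differ.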
We see the shape parameter $k$ has been more or less totalled, but slightly less because the input scale parameters $\theta_i$ differ. $\theta$ is such that the sum has the correct mean value. | Generic sum of Gamma random variables | The Welch–Satterthwaite equation could be used to give an approximate answer in the form of a gamma distribution. This has the nice property of letting us treat gamma distributions as being (approxima | Generic sum of Gamma random variables
The Welch–Satterthwaite equation could be used to give an approximate answer in the form of a gamma distribution. This has the nice property of letting us treat gamma distributions as being (approximately) closed under addition. This is the approximation in the commonly used Welch's t-test.
(The gamma distribution can be viewed as a scaled chi-square distribution that allows a non-integer shape parameter.)
I've adapted the approximation to the $k, \theta$ parametrization of the gamma distribution:
$$
k_{sum} = { (\sum_i \theta_i k_i)^2 \over \sum_i \theta_i^2 k_i }
$$
$$
\theta_{sum} = { { \sum \theta_i k_i } \over k_{sum} }
$$
Let $k=(3,4,5)$, $\theta=(1,2,1)$
So we get approximately Gamma(10.666... ,1.5)
We see the shape parameter $k$ has been more or less totalled, but slightly less because the input scale parameters $\theta_i$ differ. $\theta$ is such that the sum has the correct mean value. | Generic sum of Gamma random variables
The Welch–Satterthwaite equation could be used to give an approximate answer in the form of a gamma distribution. This has the nice property of letting us treat gamma distributions as being (approxima |
4,584 | Generic sum of Gamma random variables | I will show another possible solution, one that is quite widely applicable and, with today's R software, quite easy to implement: the saddlepoint density approximation, which ought to be more widely known!
For terminology about the gamma distribution, I will follow https://en.wikipedia.org/wiki/Gamma_distribution with the shape/scale parametrization, $k$ is shape parameter and $\theta$ is scale. For the saddlepoint approximation I will follow Ronald W Butler: "Saddlepoint approximations with applications" (Cambridge UP). The saddlepoint approximation is explained here: How does saddlepoint approximation work?
here I will show how it is used in this application.
Let $X$ be a random variable with an existing moment generating function
$$
M(s) = E e^{sX}
$$ which must exist for $s$ in some open interval that contains zero. Then define the cumulant generating function by
$$
K(s) = \log M(s)
$$
It is known that $E X = K'(0), \text{Var} (X) = K''(0)$. The saddlepoint equation is $$ K'(\hat{s}) = x$$ which implicitly defines $\hat{s}$ as a function of $x$ (which must be in the range of $X$). We write this implicitly defined function as
$\hat{s}(x)$. Note that the saddlepoint equation always has exactly one solution, because the cumulant function is convex.
Then the saddlepoint approximation to the density $f$ of $X$ is given by
$$
\hat{f}(x) = \frac1{\sqrt{2\pi K''(\hat{s})}} \exp(K(\hat{s}) - \hat{s} x)
$$
This approximate density function is not guaranteed to integrate to 1, so it is called the unnormalized saddlepoint approximation. We could integrate it numerically and then renormalize to get a better approximation. But this approximation is guaranteed to be non-negative.
Now let $X_1, X_2, \dots, X_n$ be independent gamma random variables, where $X_i$ has the distribution with parameters $(k_i, \theta_i)$. Then the cumulant generating function is
$$
K(s) = -\sum_{i=1}^n k_i \ln(1-\theta_i s)
$$ defined for $s<1/\max(\theta_1, \theta_2, \dots, \theta_n)$.
The first derivative is
$$
K'(s) = \sum_{i=1}^n \frac{k_i \theta_i}{1-\theta_i s}
$$
and the second derivative is
$$
K''(s) = \sum_{i=1}^n \frac{k_i \theta_i^2}{(1-\theta_i s)^2}.
$$
In the following I will give some R code calculating this, and will use the parameter values $n=3$, $k=(1,2,3)$, $\theta=(1,2,3)$. Note that the following R code uses a new argument in the uniroot function introduced in R 3.1, so will not run in older R's.
shape <- 1:3 #ki
scale <- 1:3 # thetai
# For this case, we get expectation=14, variance=36
make_cumgenfun <- function(shape, scale) {
# we return list(shape, scale, K, K', K'')
n <- length(shape)
m <- length(scale)
stopifnot( n == m, shape > 0, scale > 0 )
return( list( shape=shape, scale=scale,
Vectorize(function(s) {-sum(shape * log(1-scale * s) ) }),
Vectorize(function(s) {sum((shape*scale)/(1-s*scale))}) ,
                  Vectorize(function(s) { sum(shape*scale*scale/(1-s*scale)^2)
})) )
}
solve_speq <- function(x, cumgenfun) {
# Returns saddle point!
shape <- cumgenfun[[1]]
scale <- cumgenfun[[2]]
Kd <- cumgenfun[[4]]
uniroot(function(s) Kd(s)-x,lower=-100,
upper = 0.3333,
extendInt = "upX")$root
}
make_fhat <- function(shape, scale) {
cgf1 <- make_cumgenfun(shape, scale)
K <- cgf1[[3]]
Kd <- cgf1[[4]]
Kdd <- cgf1[[5]]
# Function finding fhat for one specific x:
fhat0 <- function(x) {
# Solve saddlepoint equation:
s <- solve_speq(x, cgf1)
# Calculating saddlepoint density value:
(1/sqrt(2*pi*Kdd(s)))*exp(K(s)-s*x)
}
# Returning a vectorized version:
return(Vectorize(fhat0))
} #end make_fhat
fhat <- make_fhat(shape, scale)
plot(fhat, from=0.01, to=40, col="red",
main="unnormalized saddlepoint approximation\n
to sum of three gamma variables")
resulting in the following plot:
I will leave the normalized saddlepoint approximation as an exercise. | Generic sum of Gamma random variables | I will show another possible solution, that is quite widely applicable, and with todays R software, quite easy to implement. That is the saddlepoint density approximation, which ought to be wider kno | Generic sum of Gamma random variables
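One possible sketch of that exercise (my addition, reusing the fhat defined above): normalize by the numerical integral and check the first two moments against the exact values 14 and 36.
const <- integrate(fhat, 0.1, 150)$value            # total mass of the unnormalized density
fhat_norm <- function(x) fhat(x) / const
integrate(function(x) x * fhat_norm(x), 0.1, 150)$value            # should be close to 14
integrate(function(x) (x - 14)^2 * fhat_norm(x), 0.1, 150)$value   # should be close to 36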
I will show another possible solution, one that is quite widely applicable and, with today's R software, quite easy to implement: the saddlepoint density approximation, which ought to be more widely known!
For terminology about the gamma distribution, I will follow https://en.wikipedia.org/wiki/Gamma_distribution with the shape/scale parametrization, $k$ is shape parameter and $\theta$ is scale. For the saddlepoint approximation I will follow Ronald W Butler: "Saddlepoint approximations with applications" (Cambridge UP). The saddlepoint approximation is explained here: How does saddlepoint approximation work?
here I will show how it is used in this application.
Let $X$ be a random variable with an existing moment generating function
$$
M(s) = E e^{sX}
$$ which must exist for $s$ in some open interval that contains zero. Then define the cumulant generating function by
$$
K(s) = \log M(s)
$$
It is known that $E X = K'(0), \text{Var} (X) = K''(0)$. The saddlepoint equation is $$ K'(\hat{s}) = x$$ which implicitly defines $\hat{s}$ as a function of $x$ (which must be in the range of $X$). We write this implicitly defined function as
$\hat{s}(x)$. Note that the saddlepoint equation always has exactly one solution, because the cumulant function is convex.
Then the saddlepoint approximation to the density $f$ of $X$ is given by
$$
\hat{f}(x) = \frac1{\sqrt{2\pi K''(\hat{s})}} \exp(K(\hat{s}) - \hat{s} x)
$$
This approximate density function is not guaranteed to integrate to 1, so it is called the unnormalized saddlepoint approximation. We could integrate it numerically and then renormalize to get a better approximation. But this approximation is guaranteed to be non-negative.
Now let $X_1, X_2, \dots, X_n$ be independent gamma random variables, where $X_i$ has the distribution with parameters $(k_i, \theta_i)$. Then the cumulant generating function is
$$
K(s) = -\sum_{i=1}^n k_i \ln(1-\theta_i s)
$$ defined for $s<1/\max(\theta_1, \theta_2, \dots, \theta_n)$.
The first derivative is
$$
K'(s) = \sum_{i=1}^n \frac{k_i \theta_i}{1-\theta_i s}
$$
and the second derivative is
$$
K''(s) = \sum_{i=1}^n \frac{k_i \theta_i^2}{(1-\theta_i s)^2}.
$$
In the following I will give some R code calculating this, and will use the parameter values $n=3$, $k=(1,2,3)$, $\theta=(1,2,3)$. Note that the following R code uses a new argument in the uniroot function introduced in R 3.1, so will not run in older R's.
shape <- 1:3 #ki
scale <- 1:3 # thetai
# For this case, we get expectation=14, variance=36
make_cumgenfun <- function(shape, scale) {
# we return list(shape, scale, K, K', K'')
n <- length(shape)
m <- length(scale)
stopifnot( n == m, shape > 0, scale > 0 )
return( list( shape=shape, scale=scale,
Vectorize(function(s) {-sum(shape * log(1-scale * s) ) }),
Vectorize(function(s) {sum((shape*scale)/(1-s*scale))}) ,
                  Vectorize(function(s) { sum(shape*scale*scale/(1-s*scale)^2)
})) )
}
solve_speq <- function(x, cumgenfun) {
# Returns saddle point!
shape <- cumgenfun[[1]]
scale <- cumgenfun[[2]]
Kd <- cumgenfun[[4]]
uniroot(function(s) Kd(s)-x,lower=-100,
upper = 0.3333,
extendInt = "upX")$root
}
make_fhat <- function(shape, scale) {
cgf1 <- make_cumgenfun(shape, scale)
K <- cgf1[[3]]
Kd <- cgf1[[4]]
Kdd <- cgf1[[5]]
# Function finding fhat for one specific x:
fhat0 <- function(x) {
# Solve saddlepoint equation:
s <- solve_speq(x, cgf1)
# Calculating saddlepoint density value:
(1/sqrt(2*pi*Kdd(s)))*exp(K(s)-s*x)
}
# Returning a vectorized version:
return(Vectorize(fhat0))
} #end make_fhat
fhat <- make_fhat(shape, scale)
plot(fhat, from=0.01, to=40, col="red",
main="unnormalized saddlepoint approximation\n
to sum of three gamma variables")
resulting in the following plot:
I will leave the normalized saddlepoint approximation as an exercise. | Generic sum of Gamma random variables
I will show another possible solution, that is quite widely applicable, and with todays R software, quite easy to implement. That is the saddlepoint density approximation, which ought to be wider kno |
4,585 | Generic sum of Gamma random variables | An exact solution to the convolution (i.e., sum) of $n$ gamma distributions is given as Eq. (1) in the linked pdf by DiSalvo. As this is a bit long, it will take some time to copy it over here. For only two gamma distributions, their exact sum in closed form is specified by Eq. (2) of DiSalvo and without weights by Eq. (5) of Wesolowski et al., which also appears on the CV site as an answer to that question. That is, $$\mathrm{G}\mathrm{D}\mathrm{C}\left(\mathrm{a}\kern0.1em ,\mathrm{b}\kern0.1em ,\alpha, \beta; \tau \right)=\left\{\begin{array}{cc}\hfill \frac{{\mathrm{b}}^{\mathrm{a}}{\beta}^{\alpha }}{\Gamma \left(\mathrm{a}+\alpha \right)}{e}^{-\mathrm{b}\tau }{\tau^{\mathrm{a}+\alpha-1}}{}_1F_1\left[\alpha, \mathrm{a}+\alpha, \left(\mathrm{b}-\beta \right)\tau \right],\hfill & \hfill \tau >0\hfill \\ {}\hfill \kern2em 0\kern6.6em ,\hfill \kern5.4em \tau \kern0.30em \le \kern0.30em 0\hfill \end{array}\right.,$$
where the notation in the questions above; $Gamma(a,b) \rightarrow \Gamma(a,1/b)$, here. That is, $b$ and $\beta$ are rate constants here and not time scalars. | Generic sum of Gamma random variables | An exact solution to the convolution (i.e., sum) of $n$ gamma distributions is given as Eq. (1) in the linked pdf by DiSalvo. As this is a bit long, it will take some time to copy it over here. For on | Generic sum of Gamma random variables
An exact solution to the convolution (i.e., sum) of $n$ gamma distributions is given as Eq. (1) in the linked pdf by DiSalvo. As this is a bit long, it will take some time to copy it over here. For only two gamma distributions, their exact sum in closed form is specified by Eq. (2) of DiSalvo and without weights by Eq. (5) of Wesolowski et al., which also appears on the CV site as an answer to that question. That is, $$\mathrm{G}\mathrm{D}\mathrm{C}\left(\mathrm{a}\kern0.1em ,\mathrm{b}\kern0.1em ,\alpha, \beta; \tau \right)=\left\{\begin{array}{cc}\hfill \frac{{\mathrm{b}}^{\mathrm{a}}{\beta}^{\alpha }}{\Gamma \left(\mathrm{a}+\alpha \right)}{e}^{-\mathrm{b}\tau }{\tau^{\mathrm{a}+\alpha-1}}{}_1F_1\left[\alpha, \mathrm{a}+\alpha, \left(\mathrm{b}-\beta \right)\tau \right],\hfill & \hfill \tau >0\hfill \\ {}\hfill \kern2em 0\kern6.6em ,\hfill \kern5.4em \tau \kern0.30em \le \kern0.30em 0\hfill \end{array}\right.,$$
where the notation in the questions above; $Gamma(a,b) \rightarrow \Gamma(a,1/b)$, here. That is, $b$ and $\beta$ are rate constants here and not time scalars. | Generic sum of Gamma random variables
An exact solution to the convolution (i.e., sum) of $n$ gamma distributions is given as Eq. (1) in the linked pdf by DiSalvo. As this is a bit long, it will take some time to copy it over here. For on |
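As an illustrative numerical check of that two-gamma closed form (my addition; arbitrary parameter values in the rate parametrization, with a small series implementation of Kummer's 1F1):
f11 <- function(a, b, z, terms = 100) {             # 1F1(a; b; z) by its series, for z >= 0
  n <- 0:terms
  sum(exp(lgamma(a + n) - lgamma(a) + lgamma(b) - lgamma(b + n) - lfactorial(n)) * z^n)
}
gdc <- function(tau, a, b, alpha, beta)
  b^a * beta^alpha / gamma(a + alpha) * exp(-b * tau) * tau^(a + alpha - 1) *
    f11(alpha, a + alpha, (b - beta) * tau)
a <- 2; b <- 1; alpha <- 3; beta <- 0.5; tau <- 4
gdc(tau, a, b, alpha, beta)                          # closed-form density at tau
integrate(function(s) dgamma(s, a, rate = b) * dgamma(tau - s, alpha, rate = beta),
          0, tau)$value                              # direct convolution; should agree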
4,586 | Generic sum of Gamma random variables | According to Ansari et al. 2012, the PDF and CDF of the sum of independent gamma random variables with different distributions can be expressed in terms of Fox's Ĥ function (H-bar function). The paper referenced below also contains an implementation of this function in the Wolfram language.
REFERENCES:
Ansari, I.S., Yilmaz, F., et al., "New Results on the Sum of Gamma Random Variates With Application to the Performance of Wireless Communication Systems over Nakagami-m Fading Channels", arXiv:1202.2576 [cs.IT], 2012. | Generic sum of Gamma random variables | According to Ansari et al. 2012, the PDF and CDF of independent gamma random variables with different distribution can be expressed in terms of Fox's Ĥ function (H-bar function). The paper referenced | Generic sum of Gamma random variables
According to Ansari et al. 2012, the PDF and CDF of the sum of independent gamma random variables with different distributions can be expressed in terms of Fox's Ĥ function (H-bar function). The paper referenced below also contains an implementation of this function in the Wolfram language.
REFERENCES:
Ansari, I.S., Yilmaz, F., et al., "New Results on the Sum of Gamma Random Variates With Application to the Performance of Wireless Communication Systems over Nakagami-m Fading Channels", arXiv:1202.2576 [cs.IT], 2012. | Generic sum of Gamma random variables
According to Ansari et al. 2012, the PDF and CDF of independent gamma random variables with different distribution can be expressed in terms of Fox's Ĥ function (H-bar function). The paper referenced |
4,587 | Is sampling relevant in the time of 'big data'? | In a word, yes. I believe there are still clear situations where sampling is appropriate, within and without the "big data" world, but the nature of big data will certainly change our approach to sampling, and we will use more datasets that are nearly complete representations of the underlying population.
On sampling: Depending on the circumstances it will almost always be clear if sampling is an appropriate thing to do. Sampling is not an inherently beneficial activity; it is just what we do because we need to make tradeoffs on the cost of implementing data collection. We are trying to characterize populations and need to select the appropriate method for gathering and analyzing data about the population. Sampling makes sense when the marginal cost of a method of data collection or data processing is high. Trying to reach 100% of the population is not a good use of resources in that case, because you are often better off addressing things like non-response bias than making tiny improvements in the random sampling error.
How is big data different? "Big data" addresses many of the same questions we've had for ages, but what's "new" is that the data collection happens off an existing, computer-mediated process, so the marginal cost of collecting data is essentially zero. This dramatically reduces our need for sampling.
When will we still use sampling? If your "big data" population is the right population for the problem, then you will only employ sampling in a few cases: the need to run separate experimental groups, or if the sheer volume of data is too large to capture and process (many of us can handle millions of rows of data with ease nowadays, so the boundary here is getting further and further out). If it seems like I'm dismissing your question, it's probably because I've rarely encountered situations where the volume of the data was a concern in either the collection or processing stages, although I know many have
The situation that seems hard to me is when your "big data" population doesn't perfectly represent your target population, so the tradeoffs are more apples to oranges. Say you are a regional transportation planner, and Google has offered to give you access to its Android GPS navigation logs to help you. While the dataset would no doubt be interesting to use, the population would probably be systematically biased against the low-income, the public-transportation users, and the elderly. In such a situation, traditional travel diaries sent to a random household sample, although costlier and smaller in number, could still be the superior method of data collection. But, this is not simply a question of "sampling vs. big data", it's a question of which population combined with the relevant data collection and analysis methods you can apply to that population will best meet your needs. | Is sampling relevant in the time of 'big data'? | In a word, yes. I believe there are still clear situations where sampling is appropriate, within and without the "big data" world, but the nature of big data will certainly change our approach to samp | Is sampling relevant in the time of 'big data'?
In a word, yes. I believe there are still clear situations where sampling is appropriate, within and without the "big data" world, but the nature of big data will certainly change our approach to sampling, and we will use more datasets that are nearly complete representations of the underlying population.
On sampling: Depending on the circumstances it will almost always be clear if sampling is an appropriate thing to do. Sampling is not an inherently beneficial activity; it is just what we do because we need to make tradeoffs on the cost of implementing data collection. We are trying to characterize populations and need to select the appropriate method for gathering and analyzing data about the population. Sampling makes sense when the marginal cost of a method of data collection or data processing is high. Trying to reach 100% of the population is not a good use of resources in that case, because you are often better off addressing things like non-response bias than making tiny improvements in the random sampling error.
How is big data different? "Big data" addresses many of the same questions we've had for ages, but what's "new" is that the data collection happens off an existing, computer-mediated process, so the marginal cost of collecting data is essentially zero. This dramatically reduces our need for sampling.
When will we still use sampling? If your "big data" population is the right population for the problem, then you will only employ sampling in a few cases: the need to run separate experimental groups, or if the sheer volume of data is too large to capture and process (many of us can handle millions of rows of data with ease nowadays, so the boundary here is getting further and further out). If it seems like I'm dismissing your question, it's probably because I've rarely encountered situations where the volume of the data was a concern in either the collection or processing stages, although I know many have
The situation that seems hard to me is when your "big data" population doesn't perfectly represent your target population, so the tradeoffs are more apples to oranges. Say you are a regional transportation planner, and Google has offered to give you access to its Android GPS navigation logs to help you. While the dataset would no doubt be interesting to use, the population would probably be systematically biased against the low-income, the public-transportation users, and the elderly. In such a situation, traditional travel diaries sent to a random household sample, although costlier and smaller in number, could still be the superior method of data collection. But, this is not simply a question of "sampling vs. big data", it's a question of which population combined with the relevant data collection and analysis methods you can apply to that population will best meet your needs. | Is sampling relevant in the time of 'big data'?
In a word, yes. I believe there are still clear situations where sampling is appropriate, within and without the "big data" world, but the nature of big data will certainly change our approach to samp |
4,588 | Is sampling relevant in the time of 'big data'? | While there may be a hell of a lot of Big Data being produced by the mobile devices and such, there is little usable data in it. If you want to predict the urban travel patterns using foursquare, you may be off by an order of magnitude in estimated flows. Worse, you won't know if you are overestimating or underestimating these flows. You can get an insanely accurate picture of the urban travel patterns of maniacal foursquare users, but unless everyone is required (1) to keep a working smartphone, (2) to run the foursquare app all the time, and (3) to register at any place they stay at for longer than 10 minutes (i.e., get an electronic Census; let libertarians complain about Google and Facebook knowing everything about you), your data will contain unknown biases, and your electronic Deweys will continue to defeat the real-world Trumans (clickable):
(source: whatisasurvey.info)
If anything, I would expect that this piece of history will be repeating itself, and some big "beer+diapers" forecasts produced from Big Data would be overturned by researchers using more rigorous sampling approaches. It is surprising that probability-based surveys remain accurate even despite falling response rates. | Is sampling relevant in the time of 'big data'? | While there may be hell of a lot of Big Data being produced by the mobile devices and such, there is little usable data in it. If you want to predict the urban travel patterns using foursquare, you ma | Is sampling relevant in the time of 'big data'?
While there may be a hell of a lot of Big Data being produced by the mobile devices and such, there is little usable data in it. If you want to predict the urban travel patterns using foursquare, you may be off by an order of magnitude in estimated flows. Worse, you won't know if you are overestimating or underestimating these flows. You can get an insanely accurate picture of the urban travel patterns of maniacal foursquare users, but unless everyone is required (1) to keep a working smartphone, (2) to run the foursquare app all the time, and (3) to register at any place they stay at for longer than 10 minutes (i.e., get an electronic Census; let libertarians complain about Google and Facebook knowing everything about you), your data will contain unknown biases, and your electronic Deweys will continue to defeat the real-world Trumans (clickable):
(source: whatisasurvey.info)
If anything, I would expect that this piece of history will be repeating itself, and some big "beer+diapers" forecasts produced from Big Data would be overturned by researchers using more rigorous sampling approaches. It is surprising that probability-based surveys remain accurate even despite falling response rates. | Is sampling relevant in the time of 'big data'?
While there may be hell of a lot of Big Data being produced by the mobile devices and such, there is little usable data in it. If you want to predict the urban travel patterns using foursquare, you ma |
4,589 | Is sampling relevant in the time of 'big data'? | Whenever one applies techniques of statistical inference, it is important to be clear as to the population about which one aims to draw conclusions. Even if the data that has been collected is very big, it may still relate only to a small part of the population, and may not be very representative of the whole.
Suppose for example that a company operating in a certain industry has collected 'big data' on its customers in a certain country. If it wants to use that data to draw conclusions about its existing customers in that country, then sampling might not be very relevant. If however it wants to draw conclusions about a larger population - potential as well as existing customers, or customers in another country - then it becomes essential to consider to what extent the customers about whom data has been collected are representative - perhaps in income, age, gender, education, etc - of the larger population.
The time dimension also needs to be considered. If the aim is to use statistical inference to support predictions, then the population must be understood to extend into the future. If so, then again it becomes essential to consider whether the data set, however large, was obtained in circumstances representative of those that may obtain in the future. | Is sampling relevant in the time of 'big data'? | Whenever one applies techniques of statistical inference, it is important to be clear as to the population about which one aims to draw conclusions. Even if the data that has been collected is very b | Is sampling relevant in the time of 'big data'?
Whenever one applies techniques of statistical inference, it is important to be clear as to the population about which one aims to draw conclusions. Even if the data that has been collected is very big, it may still relate only to a small part of the population, and may not be very representative of the whole.
Suppose for example that a company operating in a certain industry has collected 'big data' on its customers in a certain country. If it wants to use that data to draw conclusions about its existing customers in that country, then sampling might not be very relevant. If however it wants to draw conclusions about a larger population - potential as well as existing customers, or customers in another country - then it becomes essential to consider to what extent the customers about whom data has been collected are representative - perhaps in income, age, gender, education, etc - of the larger population.
The time dimension also needs to be considered. If the aim is to use statistical inference to support predictions, then the population must be understood to extend into the future. If so, then again it becomes essential to consider whether the data set, however large, was obtained in circumstances representative of those that may obtain in the future. | Is sampling relevant in the time of 'big data'?
Whenever one applies techniques of statistical inference, it is important to be clear as to the population about which one aims to draw conclusions. Even if the data that has been collected is very b |
4,590 | Is sampling relevant in the time of 'big data'? | From what I've seen of the big data/ML craze, thinking about sampling and the population from which your sample is drawn is just as important as ever--but thought about even less.
I'm "auditing" the Stanford ML class, and thus far we've covered regression and neural networks with nary a mention of population inference. Since this class has been taken by 6 figures-worth of people, there are now an awful lot of people out there who know how to fit data very will without any notion of the idea of a sample. | Is sampling relevant in the time of 'big data'? | From what I've seen of the big data/ML craze, thinking about sampling and the population from which your sample is drawn is just as important as ever--but thought about even less.
I'm "auditing" the S | Is sampling relevant in the time of 'big data'?
From what I've seen of the big data/ML craze, thinking about sampling and the population from which your sample is drawn is just as important as ever--but thought about even less.
I'm "auditing" the Stanford ML class, and thus far we've covered regression and neural networks with nary a mention of population inference. Since this class has been taken by 6 figures-worth of people, there are now an awful lot of people out there who know how to fit data very will without any notion of the idea of a sample. | Is sampling relevant in the time of 'big data'?
From what I've seen of the big data/ML craze, thinking about sampling and the population from which your sample is drawn is just as important as ever--but thought about even less.
I'm "auditing" the S |
4,591 | Is sampling relevant in the time of 'big data'? | Yes, sampling is relevant and will remain relevant. The bottom line is that the accuracy of a statistical estimate is generally a function of the sample size, not the population to which we want to generalize. So a mean or an average proportion computed from a sample of 1,000 respondents will yield an estimate of a certain accuracy (with respect to the entire population from which we sampled), regardless of the size of the population (or “how big” the “big data” are).
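A tiny simulation (my own illustration, with made-up numbers) of that point: the spread of the sample mean is governed by the sample size n, not by the population size N.
set.seed(1)
for (N in c(1e5, 1e6)) {
  pop   <- rnorm(N, mean = 100, sd = 15)             # two very different population sizes
  means <- replicate(2000, mean(sample(pop, 1000)))  # same sample size n = 1000
  cat("N =", N, " SD of sample means:", round(sd(means), 3), "\n")
}
# Both values come out close to 15 / sqrt(1000), about 0.47, regardless of N.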
Having said that: There are specific issues and challenges that are relevant and should be mentioned:
Taking a good probability sample is not always easy. Theoretically, every individual in the population to which we want to generalized (about which we want to make inferences) must have a known probability of being selected; ideally that probability should be the same (equal probability sample or EPSEM – Equal Probability of Selection). That is an important consideration and one should have a clear understanding of how the sampling process will assign selection probabilities to the members of the population to which one wants to generalize. For example, can one derive from Twitter feeds accurate estimates of overall sentiments in the population at large, including those individuals without twitter accounts?
Big data may contain very complex details and information; put another way, the issue is not sampling, but (micro-) segmentation, pulling out the right details for a small subset of observations that are relevant. Here the challenge is not sampling, but to identify the specific stratification and segmentation of the big data that yields the most accurate actionable information that can be turned into valuable insights.
Another general rule of opinion measurement is that non-sampling errors and biases are usually much bigger than the sampling error and biases. Just because you process 1 hundred gazillion records of respondents expressing an opinion doesn’t make the results more useful if you only have data of a 1000 person subsample, in particular if the questions for the respective survey were not written well and induced bias.
Sometimes sampling is required: For example, if one were to build a predictive model from all data, how would one validate it? How would one compare the accuracy of different models? When there are “big data” (very large data repositories) then one can build multiple models and modeling scenarios for different samples, and validate them (try them out) in other independent samples. If one were to build one model for all data – how would one validate it?
You can check out our 'Big Data Revolution' here. | Is sampling relevant in the time of 'big data'? | Yes, sampling is relevant and will remain relevant. Bottom line is that the accuracy of a statistical estimate is generally a function of the sample size, not the population to which we want to genera | Is sampling relevant in the time of 'big data'?
Yes, sampling is relevant and will remain relevant. The bottom line is that the accuracy of a statistical estimate is generally a function of the sample size, not the population to which we want to generalize. So a mean or an average proportion computed from a sample of 1,000 respondents will yield an estimate of a certain accuracy (with respect to the entire population from which we sampled), regardless of the size of the population (or “how big” the “big data” are).
Having said that: There are specific issues and challenges that are relevant and should be mentioned:
Taking a good probability sample is not always easy. Theoretically, every individual in the population to which we want to generalized (about which we want to make inferences) must have a known probability of being selected; ideally that probability should be the same (equal probability sample or EPSEM – Equal Probability of Selection). That is an important consideration and one should have a clear understanding of how the sampling process will assign selection probabilities to the members of the population to which one wants to generalize. For example, can one derive from Twitter feeds accurate estimates of overall sentiments in the population at large, including those individuals without twitter accounts?
Big data may contain very complex details and information; put another way, the issue is not sampling, but (micro-) segmentation, pulling out the right details for a small subset of observations that are relevant. Here the challenge is not sampling, but to identify the specific stratification and segmentation of the big data that yields the most accurate actionable information that can be turned into valuable insights.
Another general rule of opinion measurement is that non-sampling errors and biases are usually much bigger than the sampling error and biases. Just because you process 1 hundred gazillion records of respondents expressing an opinion doesn’t make the results more useful if you only have data of a 1000 person subsample, in particular if the questions for the respective survey were not written well and induced bias.
Sometimes sampling is required: For example, if one were to build a predictive model from all data, how would one validate it? How would one compare the accuracy of different models? When there are “big data” (very large data repositories) then one can build multiple models and modeling scenarios for different samples, and validate them (try them out) in other independent samples. If one were to build one model for all data – how would one validate it?
You can check out our 'Big Data Revolution' here. | Is sampling relevant in the time of 'big data'?
Yes, sampling is relevant and will remain relevant. Bottom line is that the accuracy of a statistical estimate is generally a function of the sample size, not the population to which we want to genera |
4,592 | Is sampling relevant in the time of 'big data'? | Many big data methods are actually designed around sampling.
The question should be more on the line of:
Shouldn't we use systematic sampling with big data, too?
A lot of the "big data" stuff is still pretty fresh, and sometimes naive. K-means for example can be trivially parallelized, and thus works for "big data" (I'm not going to talk about the results, they are not very meaningful; and probably not very different to those obtained on a sample!). As far as I know this is what the k-means implementation in Mahout does.
However, research is going beyond the naive parallelization (that may still require a large amount of iterations) and tries to do K-means in a fixed number of iterations. Example for this:
Fast clustering using MapReduce
Ene, A. and Im, S. and Moseley, B.
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011
And guess what, their approach is heavily based on sampling.
Next example: Decision forests. That is essentially: for several samples from the data set, build a decision tree on each. This can again be trivially parallelized: put each sample on a separate machine. And again, it is a sampling-based approach.
So sampling is one of the key ingredients to big data approaches!
And there is nothing wrong with this. | Is sampling relevant in the time of 'big data'? | Many big data methods are actually designed around sampling.
The question should be more on the line of:
Shouldn't we use systematic sampling with big data, too?
A lot of the "big data" stuff is sti | Is sampling relevant in the time of 'big data'?
Many big data methods are actually designed around sampling.
The question should be more on the line of:
Shouldn't we use systematic sampling with big data, too?
A lot of the "big data" stuff is still pretty fresh, and sometimes naive. K-means for example can be trivially parallelized, and thus works for "big data" (I'm not going to talk about the results, they are not very meaningful; and probably not very different to those obtained on a sample!). As far as I know this is what the k-means implementation in Mahout does.
However, research is going beyond the naive parallelization (that may still require a large amount of iterations) and tries to do K-means in a fixed number of iterations. Example for this:
Fast clustering using MapReduce
Ene, A. and Im, S. and Moseley, B.
Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011
And guess what, their approach is heavily based on sampling.
Next example: Decision forests. That is essentially: for several samples from the data set, build a decision tree on each. This can again be trivially parallelized: put each sample on a separate machine. And again, it is a sampling-based approach.
So sampling is one of the key ingredients to big data approaches!
And there is nothing wrong with this. | Is sampling relevant in the time of 'big data'?
Many big data methods are actually designed around sampling.
The question should be more on the line of:
Shouldn't we use systematic sampling with big data, too?
A lot of the "big data" stuff is sti |
4,593 | Is sampling relevant in the time of 'big data'? | Cross validation is a specific example of sub-sampling which is quite important in ML/big data. More generally, big data is still usually a sample of a population, as other people here have mentioned.
But, I think OP might be specifically referring to sampling as it applies to a controlled experiments, versus observational data. Usually big data is thought of as the latter, but to me at least there are exceptions. I would think of randomized trials, A/B testing, and multiarmed bandits in e-commerce and social network settings as examples of "sampling in big data settings." | Is sampling relevant in the time of 'big data'? | Cross validation is an specific example of sub-sampling which is quite important in ML/big data. More generally, big data is still usually a sample of a population, as other people here have mentioned | Is sampling relevant in the time of 'big data'?
Cross validation is a specific example of sub-sampling which is quite important in ML/big data. More generally, big data is still usually a sample of a population, as other people here have mentioned.
But, I think OP might be specifically referring to sampling as it applies to controlled experiments, versus observational data. Usually big data is thought of as the latter, but to me at least there are exceptions. I would think of randomized trials, A/B testing, and multiarmed bandits in e-commerce and social network settings as examples of "sampling in big data settings." | Is sampling relevant in the time of 'big data'?
Cross validation is an specific example of sub-sampling which is quite important in ML/big data. More generally, big data is still usually a sample of a population, as other people here have mentioned |
4,594 | Is sampling relevant in the time of 'big data'? | In the areas where Big Data is gaining popularity: Search, Advertising, Recommender Systems like Amazon, Netflix , there is a very Big incentive to explore the entire data set.
The objective of these systems is to tailor recommendations / suggestions to every single member of the population. Also, the number of attributes being studied is enormous. The average web analytics system may measure click-through rate, "thermal tracking" of the "hot areas" in a page, social interactions, etc and weigh these against a large set of predetermined objectives.
More importantly, most of the places where Big Data is now ubiquitous are "online" data streams i.e data is constantly being added / updated. Devising a sampling scheme which covers all these attributes without an inherent bias and still deliver promising results (read better margins) is a challenge.
Sampling still remains highly relevant for surveys, medical trials, A/B testing, quality assurance.
In a nutshell, sampling is very useful when the population to be studied is very large and you are interested in the macroscopic properties of the population. 100% checking (Big Data) is necessary for exploiting the microscopic properties of the system
Hope this helps :) | Is sampling relevant in the time of 'big data'? | In the areas where Big Data is gaining popularity: Search, Advertising, Recommender Systems like Amazon, Netflix , there is a very Big incentive to explore the entire data set.
The objective of these | Is sampling relevant in the time of 'big data'?
In the areas where Big Data is gaining popularity: Search, Advertising, Recommender Systems like Amazon, Netflix , there is a very Big incentive to explore the entire data set.
The objective of these systems is to tailor recommendations / suggestions to every single member of the population. Also, the number of attributes being studied is enormous. The average web analytics system may measure click-through rate, "thermal tracking" of the "hot areas" in a page, social interactions, etc and weigh these against a large set of predetermined objectives.
More importantly, most of the places where Big Data is now ubiquitous are "online" data streams i.e data is constantly being added / updated. Devising a sampling scheme which covers all these attributes without an inherent bias and still deliver promising results (read better margins) is a challenge.
Sampling still remains highly relevant for surveys, medical trials, A/B testing, quality assurance.
In a nutshell, sampling is very useful when the population to be studied is very large and you are interested in the macroscopic properties of the population. 100% checking (Big Data) is necessary for exploiting the microscopic properties of the system
Hope this helps :) | Is sampling relevant in the time of 'big data'?
In the areas where Big Data is gaining popularity: Search, Advertising, Recommender Systems like Amazon, Netflix , there is a very Big incentive to explore the entire data set.
The objective of these |
4,595 | Choosing the right linkage method for hierarchical clustering | Methods overview
Short reference about some linkage methods of hierarchical agglomerative cluster analysis (HAC).
The basic version of the HAC algorithm is generic: at each step it updates, by the formula known as the Lance-Williams formula, the proximities between the newly emergent (merged-of-two) cluster and all the other clusters (including singleton objects) existing so far. There exist implementations that do not use the Lance-Williams formula, but using it is convenient: it lets one code the various linkage methods from the same template.
The recurrence formula includes several parameters (alpha, beta, gamma). Depending on the linkage method, the parameters are set differently, and so the unwrapped formula takes a method-specific form. Many texts on HAC show the formula and its method-specific forms and explain the methods. I would recommend the articles by Janos Podani as very thorough.
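For reference (my addition, not part of the original text), the Lance-Williams update has the general form
$$d_{(ij)k} = \alpha_i d_{ik} + \alpha_j d_{jk} + \beta d_{ij} + \gamma |d_{ik} - d_{jk}|,$$
where $(ij)$ is the newly merged cluster and $k$ is any other cluster; each linkage method corresponds to a particular choice of $\alpha_i, \alpha_j, \beta, \gamma$. For example, single linkage uses $\alpha_i=\alpha_j=1/2$, $\beta=0$, $\gamma=-1/2$, which reduces the update to $\min(d_{ik}, d_{jk})$.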
The room and need for the different methods arise from the fact that a proximity (distance or similarity) between two clusters, or between a cluster and a singleton object, could be formulated in many different ways. HAC merges the two closest clusters or points at each step, but how to compute that proximity, given that the input proximity matrix was defined between singleton objects only, is the problem each method has to answer.
So, the methods differ with respect to how they define the proximity between any two clusters at every step. The "colligation coefficient" (output in the agglomeration schedule/history and forming the "Y" axis on a dendrogram) is just the proximity between the two clusters merged at a given step.
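As a concrete illustration (my addition, with arbitrary simulated data), R's hclust runs the same agglomeration on one distance matrix under several of the linkage rules described below; only the between-cluster proximity definition, and hence the merge heights, differ:
set.seed(42)
X <- matrix(rnorm(60), ncol = 2)       # 30 arbitrary points
d <- dist(X)                           # proximities between singleton objects
methods <- c("single", "complete", "average", "mcquitty", "ward.D2")  # average = UPGMA, mcquitty = WPGMA
trees <- lapply(methods, function(m) hclust(d, method = m))
names(trees) <- methods
sapply(trees, function(h) round(tail(h$height, 3), 2))   # last colligation coefficients per method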
Method of single linkage or nearest neighbour. Proximity
between two clusters is the proximity between their two closest
objects. This value is one of the values of the input matrix. The conceptual metaphor of this build of cluster, its archetype, is a spectrum or chain. Chains could be straight or curvilinear, or could have a "snowflake" or "amoeba" look. The two most dissimilar cluster members can happen to be very dissimilar in comparison with the two most similar ones. The single linkage method controls only nearest-neighbour similarity.
Method of complete linkage or farthest neighbour. Proximity
between two clusters is the proximity between their two most distant
objects. This value is one of the values of the input matrix. The metaphor of this build of cluster is a circle (in the sense of a social circle, by hobby or plot) where the two members most distant from each other cannot be much more dissimilar than other quite dissimilar pairs (as in a circle). Such clusters are "compact" contours by their borders, but they are not necessarily compact inside.
Method of between-group average linkage (UPGMA). Proximity
between two clusters is the arithmetic mean of all the proximities
between the objects of one, on one side, and the objects of the
other, on the other side. The metaphor of this build of cluster is quite generic, just a united class or close-knit collective; and the method is frequently set as the default one in hierarchical clustering packages. Clusters of miscellaneous shapes and outlines can be produced.
Simple average, or method of equilibrious between-group average linkage (WPGMA) is the modified previous. Proximity between two clusters is the
arithmetic mean of all the proximities between the objects of one, on
one side, and the objects of the other, on the other side; while the
subclusters of which each of these two clusters were merged recently
have equalized influence on that proximity – even if the subclusters
differed in the number of objects.
Method of within-group average linkage (MNDIS). Proximity between
two clusters is the arithmetic mean of all the proximities in their
joint cluster. This method is an alternative to UPGMA. It usually will lose to it in terms of cluster density, but sometimes will uncover cluster shapes which UPGMA will not.
Centroid method (UPGMC). Proximity between two clusters is the proximity between their geometric centroids: [squared] euclidean
distance between those. The metaphor of this build of cluster is proximity of platforms (politics). Like in political parties, such clusters can have fractions or "factions", but unless their central figures are apart from each other the union is consistent. Clusters can be various by outline.
Median, or equilibrious centroid method (WPGMC) is the modified previous. Proximity between two clusters is the proximity between their geometric
centroids ([squared] euclidean distance between those); while the
centroids are defined so that the subclusters of which each of these
two clusters were merged recently have equalized influence on its
centroid – even if the subclusters differed in the number of objects. Name "median" is partly misleading because the method doesn't use medians of data distributions, it is still based on centroids (the means).
Ward’s method, or minimal increase of sum-of-squares (MISSQ), sometimes incorrectly called "minimum variance" method. Proximity
between two clusters is the magnitude by which the summed square in
their joint cluster will be greater than the combined summed square
in these two clusters: $SS_{12}-(SS_1+SS_2)$. (Between two singleton objects
this quantity = squared euclidean distance / $2$.) The metaphor of this build of cluster is type. Intuitively, a type is a cloud more dense and more concentric towards its middle, whereas marginal points are few and could be scattered relatively freely.
Some among less well-known methods (see Podani J. New combinatorial clustering methods // Vegetatio, 1989, 81: 61-77.) [also implemented by me as an SPSS macro found on my web-page]:
Method of minimal sum-of-squares (MNSSQ). Proximity between two
clusters is the summed square in their joint cluster: $SS_{12}$. (Between
two singleton objects this quantity = squared euclidean distance /
$2$.)
Method of minimal increase of variance (MIVAR). Proximity between
two clusters is the magnitude by which the mean square in their joint
cluster will be greater than the weightedly (by the number of
objects) averaged mean square in these two clusters:
$MS_{12}-(n_1MS_1+n_2MS_2)/(n_1+n_2) = [SS_{12}-(SS_1+SS_2)]/(n_1+n_2)$. (Between two
singleton objects this quantity = squared euclidean distance / $4$.)
Method of minimal variance (MNVAR). Proximity between two
clusters is the mean square in their joint cluster: $MS_{12} =
SS_{12}/(n_1+n_2)$. (Between two singleton objects this quantity = squared
euclidean distance / $4$.)
Still other methods represent some specialized set distances. The HAC algorithm can be based on them, only not on the generic Lance-Williams formula; such distances include, among others: Hausdorff distance and Point-centroid cross-distance (I've implemented a HAC program for SPSS based on those).
The first 5 methods described permit any proximity measures (any similarities or distances) and results will, naturally, depend on the measure chosen.
The next 6 methods described require distances; and, to be fully correct, only squared euclidean distances should be used with them, because these methods compute centroids in euclidean space. Therefore distances should be euclidean for the sake of geometric correctness (these 6 methods are called together geometric linkage methods). At worst, you might input other metric distances while admitting a more heuristic, less rigorous analysis. Now about that "squared". Computation of centroids and of deviations from them is most convenient, mathematically/programmatically, to perform on squared distances, which is why HAC packages usually require squared distances as input and are tuned to process those. However, there exist implementations - fully equivalent yet a bit slower - based on nonsquared distances input and requiring those; see for example the "Ward-2" implementation of Ward's method. You should consult the documentation of your clustering program to know which - squared or not - distances it expects at input to a "geometric method" in order to do it right.
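As a concrete illustration of the squared/nonsquared point, here is a minimal R sketch, assuming base R's stats::hclust, whose "ward.D" and "ward.D2" options follow exactly this distinction:
d  <- dist(USArrests)                    # plain (nonsquared) Euclidean distances
h1 <- hclust(d^2, method = "ward.D")     # "Ward-1" style: you supply squared distances yourself
h2 <- hclust(d,   method = "ward.D2")    # "Ward-2" style: squares the dissimilarities internally
# The merge order of h1 and h2 should agree; only the dendrogram heights are on different scales.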
Methods MNDIS, MNSSQ, and MNVAR require, at each step, storing a within-cluster statistic (which depends on the method) in addition to just updating the Lance-Williams formula.
The methods most frequently used in studies where clusters are expected to be solid, more or less round clouds are average linkage, complete linkage, and Ward's method.
Ward's method is the closest, by its properties and efficiency, to K-means clustering; they share the same objective function - minimization of the pooled within-cluster SS "in the end". Of course, K-means (being iterative and if provided with decent initial centroids) is usually a better minimizer of it than Ward. However, Ward seems to me a bit more accurate than K-means in uncovering clusters of uneven physical sizes (variances) or clusters thrown about space very irregularly. The MIVAR method is weird to me; I can't imagine when it could be recommended, as it doesn't produce dense enough clusters.
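A quick way to see the Ward/K-means kinship for yourself is to cross-tabulate the two partitions on the same data; a small base-R sketch (the choice of USArrests and k = 4 is arbitrary, purely for illustration):
X  <- scale(USArrests)                                     # standardize so variables are comparable
hc <- cutree(hclust(dist(X)^2, method = "ward.D"), k = 4)  # Ward partition (squared distances supplied)
km <- kmeans(X, centers = 4, nstart = 25)$cluster          # K-means partition
table(ward = hc, kmeans = km)                              # the two partitions typically agree closely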
Methods centroid, median, and minimal increase of variance may sometimes give the so-called reversals: a phenomenon when the two clusters being merged at some step appear closer to each other than pairs of clusters merged earlier. That is because the distances these methods produce are not ultrametric. This situation is inconvenient but is theoretically OK.
Methods of single linkage and centroid belong to the so-called space contracting, or "chaining", methods. That means - roughly speaking - that they tend to attach objects one by one to clusters, and so they demonstrate relatively smooth growth of the curve "% of clustered objects". On the contrary, methods of complete linkage, Ward's, sum-of-squares, increase of variance, and variance commonly get a considerable share of objects clustered even in early steps, and then proceed by merging those clusters - therefore their curve "% of clustered objects" is steep from the first steps. These methods are called space dilating. Other methods fall in between.
Flexible versions. By adding an additional parameter into the Lance-Williams formula it is possible to make a method become specifically self-tuning on its steps. The parameter introduces a correction to the between-cluster proximity being computed, which depends on the size (amount of de-compactness) of the clusters. The meaning of the parameter is that it makes the method of agglomeration more space dilating or space contracting than the standard method is doomed to be. The best-known implementation of this flexibility so far is for the average linkage methods UPGMA and WPGMA (Belbin, L. et al. A Comparison of Two Approaches to Beta-Flexible Clustering // Multivariate Behavioral Research, 1992, 27, 417–433.).
Dendrogram. On a dendrogram "Y" axis, what is typically displayed is the proximity between the merging clusters - as defined by the methods above. Therefore, for example, in the centroid method the squared distance is typically gauged (ultimately, it depends on the package and its options) - some researchers are not aware of that. Also, by tradition, with methods based on increment of nondensity, such as Ward's, what is usually shown on the dendrogram is the cumulative value - more for convenience reasons than theoretical ones. Thus, (in many packages) the plotted coefficient in Ward's method represents the overall, across all clusters, within-cluster sum-of-squares observed at the moment of a given step. Don't forget to read the documentation of your package to find out in which form the particular program displays the colligation coefficient (cluster distance) on its dendrogram.
One should refrain from judging which linkage method is "better" for one's data by comparing the looks of the dendrograms: not only because the looks change when you change which modification of the coefficient you plot there - as just described - but because the look will differ even on data with no clusters.
To choose the "right" method
There is no single criterion. Some guidelines on how to go about selecting a method of cluster analysis (including a linkage method in HAC as a particular case) are outlined in this answer and the whole thread therein.
4,596 | Choosing the right linkage method for hierarchical clustering | The correlation between the distance matrix and the cophenetic distance is one metric to help assess which clustering linkage to select. From ?cophenetic:
It can be argued that a dendrogram is an appropriate summary of some
data if the correlation between the original distances and the
cophenetic distances is high.
This use of cor(dist,cophenetic(hclust(dist))) as a linkage selection metric is referenced on p. 38 of this vegan vignette.
See example code below:
# Data
d0=dist(USArrests)
# Hierarchical Agglomerative Clustering
h1=hclust(d0,method='average')
h2=hclust(d0,method='complete')
h3=hclust(d0,method='ward.D')
h4=hclust(d0,method='single')
# Cophenetic Distances, for each linkage
c1=cophenetic(h1)
c2=cophenetic(h2)
c3=cophenetic(h3)
c4=cophenetic(h4)
# Correlations
cor(d0,c1) # 0.7658983
cor(d0,c2) # 0.7636926
cor(d0,c3) # 0.7553367
cor(d0,c4) # 0.5702505
# Dendrograms
par(mfrow=c(2,2))
plot(h1,main='Average Linkage')
plot(h2,main='Complete Linkage')
plot(h3,main='Ward Linkage')
plot(h4,main='Single Linkage')
par(mfrow=c(1,1))
We see that the correlations for average and complete are extremely similar, and their dendrograms appear very similar. The correlation for ward is similar to average and complete but the dendrogram looks fairly different. Single linkage is doing its own thing. Best professional judgement from a subject matter expert, or precedence toward a certain linkage in the field of interest, should probably override numeric output from cor().
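A complementary (and equally heuristic) numeric check is the agglomerative coefficient reported by agnes() in the cluster package (assuming that package is installed); values closer to 1 indicate a stronger clustering structure for that linkage:
library(cluster)   # provides agnes() and its agglomerative coefficient
sapply(c("average", "single", "complete", "ward"),
       function(m) agnes(USArrests, method = m)$ac)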
4,597 | CNN architectures for regression? | First of all a general suggestion: do a literature search before you start making experiments on a topic you're not familiar with. You'll save yourself a lot of time.
In this case, looking at existing papers you may have noticed that
CNNs have been used multiple times for regression: this is a classic but it's old (yes, 3 years is old in DL). A more modern paper wouldn't have used AlexNet for this task. This is more recent, but it's for a vastly more complicated problem (3D rotation), and anyway I'm not familiar with it.
Regression with CNNs is not a trivial problem. Looking again at the first paper, you'll see that they have a problem where they can basically generate infinite data. Their objective is to predict the rotation angle needed to rectify 2D pictures. This means that I can basically take my training set and augment it by rotating each image by arbitrary angles, and I'll obtain a valid, bigger training set. Thus the problem seems relatively simple, as far as Deep Learning problems go. By the way, note the other data augmentation tricks they use:
We use translations (up to 5% of the image width), brightness
adjustment in the range [−0.2, 0.2], gamma adjustment with γ ∈ [−0.5,
0.1] and Gaussian pixel noise with a standard deviation in the range [0,
0.02].
I don't know your problem well enough to say if it makes sense to consider
variations in position, brightness and gamma noise for your pictures, carefully
shot in a lab. But you can always try, and remove it if it doesn't improve your test set loss. Actually, you should really use a validation set or $k-$fold cross-validation for these kinds of experiments, and don't look at the test set until you have defined your setup, if you want the test set loss to be representative of the generalization error.
Anyway, even in their ideal conditions, the naive approach didn't work that well (section 4.2). They stripped out the output layer (the softmax layer) and substituted it with a layer with two units which would predict the sine $y$ and cosine $x$ of the rotation angle. The actual angle would then be computed as $\alpha=\text{atan2}(y,x)$. The neural network was also pretrained on ImageNet (this is called transfer learning). Of course the training on ImageNet had been for a different task (classification), but still training the neural network from scratch must have given such horrible results that they decided not to publish them. So you had all ingredients to make a good omelette: potentially infinite training data, a pretrained network and an apparently simple regression problem (predict two numbers between -1 and 1). Yet, the best they could get with this approach was a 21° error. It's not clear if this is an RMSE error, a MAD error or what, but still it's not great: since the maximum error you can make is 180°, the average error is $>11\%$ of the maximum possible error. They did slightly better by using two networks in series: the first one would perform classification (predict whether the angle would be in the $[-180°,-90°],[-90°,0°],[0°,90°]$ or $[90°,180°]$ class), then the image, rotated by the amount predicted by the first network, would be fed to another neural network (for regression, this time), which would predict the final additional rotation in the $[-45°,45°]$ range.
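The sine/cosine target encoding mentioned above is easy to sketch outside of any deep-learning framework; a minimal base-R illustration (the angle 137° is just an arbitrary example):
theta  <- 137                                               # true rotation angle in degrees
target <- c(sin(theta * pi / 180), cos(theta * pi / 180))   # the two regression targets (y, x)
atan2(target[1], target[2]) * 180 / pi                      # recovers 137 from the predicted (sin, cos) pair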
On a much simpler (rotated MNIST) problem, you can get something better, but still you don't go below an RMSE error which is $2.6\%$ of the maximum possible error.
So, what can we learn from this? First of all, that 5000 images is a small data set for your task. The first paper used a network which was pretrained on images similar to those for which they wanted to learn the regression task: not only do you need to learn a different task from that for which the architecture was designed (classification), but your training set doesn't look anything at all like the training sets on which these networks are usually trained (CIFAR-10/100 or ImageNet). So you probably won't get any benefits from transfer learning. The MATLAB example had 5000 images, but they were black and white and semantically all very similar (well, this could be your case too).
Then, how realistic is doing better than 0.3? We must first of all understand what you mean by 0.3 average loss. Do you mean that the RMSE is 0.3,
$$\sqrt{\frac{1}{N}\sum_{i=1}^N (h(\mathbf{x}_i)-y_i)^2}$$
where $N$ is the size of your training set (thus, $N< 5000$), $h(\mathbf{x}_i)$ is the output of your CNN for image $\mathbf{x}_i$ and $y_i$ is the corresponding concentration of the chemical? Since $y_i\in[80,350]$, then assuming that you clip the predictions of your CNN between 80 and 350 (or you just use a logit to make them fit in that interval), you're getting less than $0.12\%$ error. Seriously, what do you expect? It doesn't seem to me a big error at all.
Also, just try to compute the number of parameters in your network: I'm in a hurry and I may be making silly mistakes, so by all means double check my computations with some summary function from whatever framework you may be using. However, roughly I would say you have
$$9\times(3\times 32 + 2\times 32\times 32 + 32\times64+2\times64\times64+ 64\times128+2\times128\times128) +128\times128+128\times32+32 \times32\times32=533344$$
(note I skipped the parameters of the batch norm layers, but they're just 4 parameters per layer so they don't make a difference). You have half a million parameters and 5000 examples...what would you expect? Sure, the number of parameters is not a good indicator for the capacity of a neural network (it's a non-identifiable model), but still...I don't think you can do much better than this, but you can try a few things:
normalize all inputs (for example, rescale the RGB intensities of each pixel between -1 and 1, or use standardization) and all outputs. This will especially help if you have convergence issues (a small base-R sketch of this rescaling, and of simple flip augmentation, follows this list).
go to grayscale: this would reduce your input channels from 3 to 1. All your images seem (to my highly untrained eye) to be of relatively similar colors. Are you sure it's the color that it's needed to predict $y$, and not the existence of darker or brighter areas? Maybe you're sure (I'm not an expert): in this case skip this suggestion.
data augmentation: since you said that flipping, rotating by an arbitrary angle or mirroring your images should result in the same output, you can increase the size of your data set a lot. Note that with a bigger dataset the error on the training set will go up: what we're looking for here is a smaller gap between training set loss and test set loss. Also, if the training set loss increases a lot, this could be good news: it may mean that you can train a deeper network on this bigger training set without the risk of overfitting. Try adding more layers and see if now you get a smaller training set and test set loss. Finally, you could try also the other data augmentation tricks I quoted above, if they make sense in the context of your application.
use the classification-then-regression trick: a first network only determines if $y$ should be in one of, say, 10 bins, such as $[80,97],[97,124]$,etc. A second network then computes a $[0,27]$ correction: centering and normalizing may help here too. Can't say without trying.
try using a modern architecture (Inception or ResNet) instead of a vintage one. ResNet actually has fewer parameters than VGG-net. Of course, you want to use the small ResNets here - I don't think ResNet-101 could help on a 5000-image data set. You can augment the data set a lot, though....
Since your output is invariant to rotation, another great idea would be to use either group equivariant CNNs, whose output (when used as classifiers) is invariant to discrete rotations, or steerable CNNs whose output is invariant to continuous rotations. The invariance property would allow you to get good results with much less data augmentation, or ideally none at all (as far as rotations are concerned: of course you still need the other types of d. a.). Group equivariant CNNs are more mature than steerable CNNs from an implementation point of view, so I'd try group CNNs first. You can try the classification-then-regression approach, using the G-CNN for the classification part, or you may experiment with the pure regression approach. Remember to change the top layer accordingly.
experiment with the batch size (yeah, yeah, I know hyperparameter-hacking is not cool, but this is the best I could come up with in a limited time frame & for free :-)
finally, there are architectures which have been especially developed to make accurate predictions with small data sets. Most of them used dilated convolutions: one famous example is the mixed-scale dense convolutional neural network. The implementation is not trivial, though.
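To make the normalization and flip-augmentation suggestions above concrete, here is a minimal base-R sketch on a stand-in image array (arbitrary-angle rotations would need an image-processing library, so only mirror flips are shown):
img      <- array(runif(32 * 32 * 3), dim = c(32, 32, 3))   # fake 32x32 RGB image with values in [0, 1]
img_norm <- img * 2 - 1                                     # rescale intensities to [-1, 1]
flip_h   <- img_norm[, dim(img_norm)[2]:1, ]                # horizontal mirror
flip_v   <- img_norm[dim(img_norm)[1]:1, , ]                # vertical mirror
rot180   <- flip_v[, dim(flip_v)[2]:1, ]                    # 180-degree rotation = both flips combined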
4,598 | Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples? | Answer to question 1: This occurs because the $p$-value becomes arbitrarily small as the sample size increases in frequentist tests for difference (i.e. tests with a null hypothesis of no difference/some form of equality) when a true difference exactly equal to zero, as opposed to arbitrarily close to zero, is not realistic (see Nick Stauner's comment to the OP). The $p$-value becomes arbitrarily small because the error of frequentist test statistics generally decreases with sample size, with the upshot that all differences are significant to an arbitrary level with a large enough sample size. Cosma Shalizi has written eruditely about this.
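A quick simulation sketch of this point in R (the tiny true mean difference of 0.01 is an arbitrary choice): with any nonzero true difference, the $p$-value of a difference test heads toward zero as $n$ grows.
set.seed(1)
ns <- c(1e2, 1e3, 1e4, 1e5, 1e6)
sapply(ns, function(n) t.test(rnorm(n, mean = 0.01), rnorm(n, mean = 0))$p.value)
# p-values tend toward 0 as n grows, even though the true difference is practically negligible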
Answer to question 2: Within a frequentist hypothesis testing framework, one can guard against this by not making inference solely about detecting difference. For example, one can combine inferences about difference and equivalence so that one is not favoring (or conflating!) the burden of proof on evidence of effect versus evidence of absence of effect. Evidence of absence of an effect comes from, for example:
two one-sided tests for equivalence (TOST),
uniformly most powerful tests for equivalence, and
the confidence interval approach to equivalence (i.e. if the $1-2\alpha$%CI of the test statistic is within the a priori-defined range of equivalence/relevance, then one concludes equivalence at the $\alpha$ level of significance).
What these approaches all share is an a priori decision about what effect size constitutes a relevant difference and a null hypothesis framed in terms of a difference at least as large as what is considered relevant.
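For instance, TOST from the list above can be sketched with two one-sided t-tests in base R; the equivalence bound delta below is a hypothetical relevance threshold that would have to be chosen a priori:
set.seed(1)
x <- rnorm(5000, mean = 0.05); y <- rnorm(5000, mean = 0)
delta   <- 0.5                                                        # assumed smallest relevant difference
p_lower <- t.test(x, y, mu = -delta, alternative = "greater")$p.value
p_upper <- t.test(x, y, mu =  delta, alternative = "less")$p.value
max(p_lower, p_upper)   # TOST p-value: if small, conclude equivalence within +/- delta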
Combined inference from tests for difference and tests for equivalence thus protects against the bias you describe when sample sizes are large in this way (two-by-two table showing the four possibilities resulting from combined tests for difference—positivist null hypothesis, $\text{H}_{0}^{+}$—and equivalence—negativist null hypothesis, $\text{H}_{0}^{-}$):
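(The two-by-two table appears as a figure in the original answer and is not reproduced here; roughly, the four combined outcomes are: reject both nulls = trivial difference; reject $\text{H}_{0}^{+}$ only = relevant difference; reject $\text{H}_{0}^{-}$ only = equivalence; reject neither = indeterminate/underpowered.)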
Notice the upper left quadrant: an overpowered test is one where yes you reject the null hypothesis of no difference, but you also reject the null hypothesis of relevant difference, so yes there's a difference, but you have a priori decided you do not care about it because it is too small.
Answer to question 3: See answer to 2.
4,599 | Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples? | Frequentist tests with large samples DO NOT exhibit bias towards rejecting the null hypothesis if the null hypothesis is true. If the assumptions of the test are valid and the null hypothesis is true then there is no more risk of a large sample leading to rejection of the null hypothesis than a small sample. If the null is not true then we surely would be pleased to reject it, so the fact that a large sample will more frequently reject a false null than a small sample is not 'bias' but appropriate behaviour.
The fear of 'overpowered experiments' is based on assuming that it is not a good thing to reject the null hypothesis when it is nearly true. But if it is only nearly true then it is actually false! Reject away, but do not fail to notice (and clearly report) the effect size observed. It may be trivially small and therefore not worthy of serious consideration, but a decision on that issue has to be made after consideration of information from outside the hypothesis test.
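A small R sketch of the "report the effect size, not just the p-value" advice (simulated data, so the numbers are only illustrative):
set.seed(1)
x <- rnorm(1e5, mean = 0.02); y <- rnorm(1e5, mean = 0)
tt <- t.test(x, y)
tt$p.value                               # typically 'significant' at this huge sample size...
unname(tt$estimate[1] - tt$estimate[2])  # ...but the estimated effect is tiny
tt$conf.int                              # and the confidence interval makes that plain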
4,600 | Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples? | Nobody seems to have noted that with good experimental design, sample sizes should be chosen to reflect a meaningful difference under H1 (i.e. as large as needed but no larger). The problem of rejecting H0 because of a huge sample and a trivial difference is thus avoided.
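Base R's power.t.test() does exactly this kind of planning; for example, assuming the smallest meaningful difference is 0.5 SD and the usual 5% level / 80% power targets:
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.80)
# returns the per-group n needed to detect a 0.5-SD difference (roughly 64 per group), and no more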