Dataset columns: idx (int64, values 1 to 56k); question (string, 15 to 155 chars); answer (string, 2 to 29.2k chars); question_cut (string, 15 to 100 chars); answer_cut (string, 2 to 200 chars); conversation (string, 47 to 29.3k chars); conversation_cut (string, 47 to 301 chars).
1,101
Why does a time series have to be stationary?
Time series analysis is about analysing the way values of a series depend on previous values. As SRKX suggested, one can difference, de-trend, or de-mean a non-stationary series (but not unnecessarily!) to create a stationary series. ARMA analysis requires stationarity. $X$ is strictly stationary if the distribution of $(X_{t+1},\ldots,X_{t+k})$ is identical to that of $(X_1,\ldots,X_k)$ for each $t$ and $k$. From Wiki: a stationary process (or strict(ly) stationary process or strong(ly) stationary process) is a stochastic process whose joint probability distribution does not change when shifted in time or space. Consequently, parameters such as the mean and variance, if they exist, also do not change over time or position. In addition, as Cardinal has correctly pointed out below, the autocorrelation function must be invariant over time (which means that the covariance function is constant over time); this translates into the parameters of the ARMA model being invariant/constant for all time intervals. The idea of stationarity of the ARMA model is closely tied to the idea of invertibility. Consider a model of the form $y(t)=1.1 \,y(t-1)$. This model is explosive, as the polynomial $(1-1.1 B)$ has a root inside the unit circle and thus violates the requirement. A model with roots inside the unit circle means that "older data" is more important than "newer data", which of course doesn't make sense.
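To make the explosive example concrete, here is a minimal simulation sketch (Python/NumPy; my own illustration, not part of the original answer) contrasting a stationary AR(1) with the $y(t)=1.1\,y(t-1)$ case discussed above:

```python
# Minimal sketch (not from the original answer): simulate a stationary AR(1)
# with phi = 0.5 and an explosive one with phi = 1.1.
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(phi, n=200, sigma=1.0):
    """Simulate y(t) = phi * y(t-1) + e(t) with Gaussian noise."""
    y = np.zeros(n)
    e = rng.normal(0.0, sigma, size=n)
    for t in range(1, n):
        y[t] = phi * y[t - 1] + e[t]
    return y

stationary = simulate_ar1(0.5)  # root of (1 - 0.5B) lies outside the unit circle
explosive = simulate_ar1(1.1)   # root of (1 - 1.1B) lies inside: the series blows up

print(stationary[-5:])  # fluctuates around zero
print(explosive[-5:])   # grows without bound
```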
1,102
Why does a time series have to be stationary?
In my view, a stationary stochastic process is one governed by three statistical properties that must be time-invariant: the mean, the variance, and the autocorrelation function. The first two say nothing about how the process evolves in time, so the third property, the autocorrelation function, should also be considered; it tells one how the dependence decays as the lag increases.
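As a small illustration of these three quantities, here is a hedged Python sketch (NumPy and statsmodels assumed; my own example, not from the original answer) estimating the mean, variance and autocorrelation function of a simulated stationary series:

```python
# Estimate mean, variance, and the autocorrelation function of an AR(1) series.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.7 * y[t - 1] + rng.normal()

print("mean:", y.mean(), "variance:", y.var())
print("ACF up to lag 10:", acf(y, nlags=10))  # dependence decays as the lag grows
```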
1,103
Why does a time series have to be stationary?
To solve anything we need to model it mathematically with statistical equations. Solving such equations requires the series to be independent and stationary (its properties not moving over time). Only with stationary data can we extract insights and apply mathematical operations (mean, variance, etc.) for multiple purposes; with non-stationary data this is hard. During the conversion to a stationary series we also obtain the trend and seasonality components.
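If one reads the "conversion process" as a classical decomposition, a minimal Python sketch (statsmodels assumed; my own illustration rather than the answer's procedure) might look like this:

```python
# Extract trend and seasonality from a synthetic monthly series.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(0)
t = np.arange(120)
series = pd.Series(0.5 * t + 10 * np.sin(2 * np.pi * t / 12) + rng.normal(size=120),
                   index=pd.date_range("2000-01", periods=120, freq="MS"))

result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())  # estimated trend component
print(result.seasonal.head(12))      # estimated seasonal component
# result.resid is the de-trended, de-seasonalized remainder
```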
1,104
Clustering on the output of t-SNE
The problem with t-SNE is that it does not preserve distances or density. It only to some extent preserves nearest neighbors. The difference is subtle, but it affects any density- or distance-based algorithm. While clustering after t-SNE will sometimes (often?) work, you will never know whether the "clusters" you find are real, or just artifacts of t-SNE. You will not be able to explain the clusters. You may just be seeing 'shapes in clouds'.

To see this effect, simply generate a multivariate Gaussian distribution. If you visualize this, you will have a ball that is dense and gets much less dense outwards, with some outliers that can be really far away. Now run t-SNE on this data. You will usually get a circle of rather uniform density. If you use a low perplexity, it may even have some odd patterns in there. But you cannot really tell outliers apart anymore.

Now let's make things more complicated. Let's use 250 points in a normal distribution at (-2,0), and 750 points in a normal distribution at (+2,0). This is supposed to be an easy data set, for example with EM: If we run t-SNE with the default perplexity of 40, we get an oddly shaped pattern: Not bad, but also not that easy to cluster, is it? You will have a hard time finding a clustering algorithm that works here exactly as desired. And even if you asked humans to cluster this data, most likely they would find much more than 2 clusters here.

If we run t-SNE with a too small perplexity such as 20, we get more of these patterns that do not exist: This will cluster, e.g. with DBSCAN, but it will yield four clusters. So beware: t-SNE can produce "fake" patterns! The optimum perplexity appears to be somewhere around 80 for this data set; but I don't think this parameter should work for every other data set. Now this is visually pleasing, but not better for analysis. A human annotator could likely select a cut and get a decent result; k-means, however, will fail even in this very, very easy scenario! You can already see that density information is lost; all data seem to live in an area of almost the same density. If we were instead to increase the perplexity further, the uniformity would increase, and the separation would reduce again.

In conclusion, use t-SNE for visualization (and try different parameters to get something visually pleasing!), but rather do not run clustering afterwards; in particular, do not use distance- or density-based algorithms, as this information was intentionally (!) lost. Neighborhood-graph based approaches may be fine, but then you don't need to run t-SNE first, just use the neighbors immediately (because t-SNE tries to keep this nn-graph largely intact).

More examples. These examples were prepared for the presentation of the paper (but cannot be found in the paper yet, as I did this experiment later): Erich Schubert and Michael Gertz. Intrinsic t-Stochastic Neighbor Embedding for Visualization and Outlier Detection – A Remedy Against the Curse of Dimensionality? In: Proceedings of the 10th International Conference on Similarity Search and Applications (SISAP), Munich, Germany. 2017.

First, we have this input data: As you may guess, this is derived from a "color me" image for kids. If we run this through SNE (NOT t-SNE, but the predecessor): Wow, our fish has become quite a sea monster! Because the kernel size is chosen locally, we lose much of the density information.

But you will be really surprised by the output of t-SNE: I have actually tried two implementations (the ELKI and the sklearn implementations), and both produced such a result. Some disconnected fragments, but each of them looks somewhat consistent with the original data. Two important points explain this:

SGD relies on an iterative refinement procedure, and may get stuck in local optima. In particular, this makes it hard for the algorithm to "flip" a part of the data that it has mirrored, as this would require moving points through others that are supposed to be separate. So if some parts of the fish are mirrored, and other parts are not mirrored, it may be unable to fix this.

t-SNE uses the t-distribution in the projected space. In contrast to the Gaussian distribution used by regular SNE, this means most points will repel each other, because they have 0 affinity in the input domain (the Gaussian goes to zero quickly), but >0 affinity in the output domain. Sometimes (as in MNIST) this makes for a nicer visualization. In particular, it can help "split" a data set a bit more than in the input domain. This additional repulsion also often causes points to use the area more evenly, which can also be desirable. But here, in this example, the repelling effects actually cause fragments of the fish to separate.

We can mitigate the first issue (on this toy data set) by using the original coordinates as initial placement, rather than random coordinates (as usually used with t-SNE). This time, the image is from sklearn instead of ELKI, because the sklearn version already had a parameter to pass initial coordinates: As you can see, even with "perfect" initial placement, t-SNE will "break" the fish in a number of places that were originally connected, because the Student-t repulsion in the output domain is stronger than the Gaussian affinity in the input space.

As you can see, t-SNE (and SNE, too!) are interesting visualization techniques, but they need to be handled carefully. I would rather not apply k-means on the result, because the result will be heavily distorted, and neither distances nor density are preserved well. Instead, rather use it for visualization.
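For readers who want to reproduce the flavour of the two-Gaussian experiment above, here is a rough sklearn-based sketch (my own approximation; the answer's figures were produced with ELKI/sklearn, and the exact outcome depends on seeds, library versions, and the DBSCAN parameters, which are placeholders here):

```python
# Two Gaussian blobs (250 points at (-2,0), 750 at (+2,0)), embedded with t-SNE
# at several perplexities, then clustered with DBSCAN on the embedding.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-2, 0], 1.0, size=(250, 2)),
               rng.normal([+2, 0], 1.0, size=(750, 2))])

for perplexity in (20, 40, 80):
    Z = TSNE(n_components=2, perplexity=perplexity, random_state=0).fit_transform(X)
    labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(Z)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print(f"perplexity={perplexity}: DBSCAN finds {n_clusters} clusters")
# With too small a perplexity the embedding tends to fragment into spurious
# clusters; the "right" eps/min_samples also depend on the embedding itself.
```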
1,105
Clustering on the output of t-SNE
I would like to provide a somewhat dissenting opinion to the well argued (+1) and highly upvoted answer by @ErichSchubert. Erich does not recommend clustering on the t-SNE output, and shows some toy examples where it can be misleading. His suggestion is to apply clustering to the original data instead: "use t-SNE for visualization (and try different parameters to get something visually pleasing!), but rather do not run clustering afterwards, in particular do not use distance- or density based algorithms, as this information was intentionally (!) lost." I am well aware of the ways in which t-SNE output may be misleading (see https://distill.pub/2016/misread-tsne/) and I agree that it can produce weird results in some situations. But let us consider some real high-dimensional data.

Take MNIST data: 70000 single-digit images. We know that there are 10 classes in the data. These classes appear well-separated to a human observer. However, clustering MNIST data into 10 clusters is a very difficult problem. I am not aware of any clustering algorithm that would correctly cluster the data into 10 clusters; more importantly, I am not aware of any clustering heuristic that would indicate that there are 10 (not more and not fewer) clusters in the data. I am certain that most common approaches would not be able to indicate that.

But let's do t-SNE instead. (One can find many figures of t-SNE applied to MNIST online, but they are often suboptimal. In my experience, it's necessary to run early exaggeration for quite some time to get good results. Below I am using perplexity=50, max_iter=2000, early_exag_coeff=12, stop_lying_iter=1000.) Here is what I get, on the left unlabeled, and on the right colored according to the ground truth: I would argue that the unlabeled t-SNE representation does suggest 10 clusters. Applying a good density-based clustering algorithm such as HDBSCAN with carefully selected parameters will allow one to cluster these 2D data into 10 clusters.

In case somebody doubts that the left plot above indeed suggests 10 clusters, here is what I get with the "late exaggeration" trick, where I additionally run max_iter=200 iterations with exaggeration=4 (this trick is suggested in this great paper: https://arxiv.org/abs/1712.09005): Now it should be very obvious that there are 10 clusters. I encourage everybody who thinks clustering after t-SNE is a bad idea to show a clustering algorithm that would achieve a comparably good result.

And now even more real data. In the MNIST case we know the ground truth. Consider now some data with unknown ground truth. Clustering and t-SNE are routinely used to describe cell variability in single-cell RNA-seq data. E.g. Shekhar et al. 2016 tried to identify clusters among 27000 retinal cells (there are around 20k genes in the mouse genome, so the dimensionality of the data is in principle about 20k; however, one usually starts by reducing dimensionality with PCA down to 50 or so). They do t-SNE and they separately do clustering (a complicated clustering pipeline followed by some cluster merges etc.). The final result looks pleasing: The reason it looks so pleasing is that t-SNE produces clearly distinct clusters and the clustering algorithm yields exactly the same clusters. Nice.

However, if you look in the supplementaries you will see that the authors tried many different clustering approaches. Many of them look awful on the t-SNE plot because e.g. the big central cluster gets split into many sub-clusters: So what do you believe: the output of your favourite clustering algorithm together with your favourite heuristic for identifying the number of clusters, or what you see on the t-SNE plot? To be honest, despite all the shortcomings of t-SNE, I tend to believe t-SNE more. Or in any case, I don't see why I should believe it less.
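A rough approximation of this pipeline with standard tooling might look as follows. This is my own sketch, not the author's code: the figures above were made with an implementation exposing early_exag_coeff/stop_lying_iter, whereas sklearn's TSNE only exposes early_exaggeration, and HDBSCAN here is taken from scikit-learn >= 1.3 (the standalone hdbscan package works similarly); results will differ.

```python
# MNIST -> PCA(50) -> t-SNE(2) -> HDBSCAN on the embedding.
from sklearn.datasets import fetch_openml
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.cluster import HDBSCAN

X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
X50 = PCA(n_components=50, random_state=0).fit_transform(X / 255.0)

# Slow on all 70,000 points; consider subsampling for a quick test.
Z = TSNE(n_components=2, perplexity=50, early_exaggeration=12,
         random_state=0).fit_transform(X50)

labels = HDBSCAN(min_cluster_size=500).fit_predict(Z)
print("clusters found:", len(set(labels)) - (1 if -1 in set(labels) else 0))
```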
1,106
Clustering on the output of t-SNE
I think with large perplexity t-SNE can reconstruct the global topology, as indicated in https://distill.pub/2016/misread-tsne/. From the fish image, I sampled 4000 points for t-SNE. With a large perplexity (2000), the fish image was virtually reconstructed. Here is the original image. Here is the image reconstructed by t-SNE with perplexity = 2000.
1,107
Clustering on the output of t-SNE
Based on the mathematical evidence we have, this method could technically preserve distances! Why do you all ignore this feature? t-SNE converts the high-dimensional Euclidean distances between samples into conditional probabilities which represent similarities. I have tried t-SNE with more than 11,000 samples (in a genomics context) in parallel with different consensus clustering algorithms, including spectral clustering, affinity propagation and, importantly, GMM clustering (which is a density-based clustering algorithm!). As a result, I found a very good concordance between the two approaches (t-SNE vs. consensus clustering algorithms). I believe integrating t-SNE with consensus clustering algorithms could provide the best evidence of the existing local and global structure of the data.
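The kind of concordance check described here could be sketched as follows (my own illustration; synthetic blobs stand in for the genomics samples, and GMM is applied both to the original data and to the t-SNE embedding):

```python
# Compare cluster labels from the original space and from the t-SNE embedding.
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=2000, n_features=50, centers=5, random_state=0)

labels_orig = GaussianMixture(n_components=5, random_state=0).fit_predict(X)
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
labels_tsne = GaussianMixture(n_components=5, random_state=0).fit_predict(Z)

print("adjusted Rand index:", adjusted_rand_score(labels_orig, labels_tsne))
# A high ARI is the kind of concordance the answer reports; by itself it does
# not prove that distances were preserved.
```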
1,108
Clustering on the output of t-SNE
You could try the DBSCAN clustering algorithm. Also, the perplexity of t-SNE should be about the same size as the smallest expected cluster.
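A minimal sketch of this suggestion (sklearn assumed; the blob data, eps and min_samples values are placeholders, with the perplexity set to the smallest expected cluster size as the answer advises):

```python
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

X, _ = make_blobs(n_samples=1000, n_features=10, centers=4, random_state=0)
smallest_expected_cluster = 50

Z = TSNE(n_components=2, perplexity=smallest_expected_cluster,
         random_state=0).fit_transform(X)
labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(Z)  # parameters need tuning
print("clusters:", len(set(labels)) - (1 if -1 in set(labels) else 0))
```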
1,109
Clustering on the output of t-SNE
Personally, I have experienced this once, but not with t-SNE or PCA. My original data are in a 15-dimensional space. Using UMAP to reduce them to 2D and 3D embeddings, I got 2 perfectly and visually separable clusters on both the 2D and 3D plots. Too good to be true. But when I "looked" at the original data via the persistence diagram, I realized that there are many more "significant" clusters, not just 2. Clustering on the output of a dimension reduction technique must be done with a lot of caution; otherwise any interpretation can be very misleading or wrong, because reducing dimension will surely result in feature loss (maybe noisy or true features, but a priori we don't know which). In my opinion, you can trust/interpret the clusters if: the clusters in the projected data correspond to/confirm some classification defined a priori (think of the MNIST dataset, where the clusters of the projected data match very nicely with the classification of digits), and/or you can confirm the presence of these clusters in the original data using other methods, like persistence diagrams. Counting only the number of connected components can be done in a quite reasonable amount of time.
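As a hedged illustration of cross-checking structure in the embedding against the original space (umap-learn assumed; synthetic data stands in for the 15-dimensional data, and a silhouette comparison is used here only as a stand-in, since the persistence-diagram check the answer describes would use a TDA library instead):

```python
# Compare how many clusters look "good" in the UMAP embedding vs. the original space.
import umap
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=1500, n_features=15, centers=6, random_state=0)
Z = umap.UMAP(n_components=2, random_state=0).fit_transform(X)

for k in range(2, 9):
    s_embed = silhouette_score(Z, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(Z))
    s_orig = silhouette_score(X, KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
    print(f"k={k}: silhouette embedding={s_embed:.2f}, original={s_orig:.2f}")
# A striking k in the embedding that is not supported in the original space is
# exactly the kind of artifact the answer warns about.
```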
1,110
Clustering on the output of t-SNE
For anyone who is looking into similar questions: I have performed DBSCAN (with cosine similarity as the metric) on word embeddings of 50 dimensions as well as on the 2-D t-SNE embedding. For my corpus containing 1600 lines, I got exactly the same clustering groups (same number of clusters, same items in the groups, same number of noise points). Sometimes the problem becomes too complex theoretically and you just have to take the engineering approach.
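A sketch of this comparison (my own illustration; synthetic blobs stand in for the 50-dimensional word embeddings, and the eps values would need tuning on real data):

```python
# DBSCAN with cosine metric on the high-dimensional vectors vs. DBSCAN on the
# 2-D t-SNE embedding, compared via the adjusted Rand index.
from sklearn.datasets import make_blobs
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN
from sklearn.metrics import adjusted_rand_score

emb, _ = make_blobs(n_samples=1600, n_features=50, centers=8, random_state=0)

labels_cos = DBSCAN(eps=0.3, min_samples=5, metric="cosine").fit_predict(emb)
Z = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(emb)
labels_2d = DBSCAN(eps=2.0, min_samples=5).fit_predict(Z)

print("agreement (ARI):", adjusted_rand_score(labels_cos, labels_2d))
```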
1,111
Should one remove highly correlated variables before doing PCA?
This expounds upon the insightful hint provided in a comment by @ttnphns. Adjoining nearly correlated variables increases the contribution of their common underlying factor to the PCA. We can see this geometrically. Consider these data in the XY plane, shown as a point cloud: There is little correlation, approximately equal covariance, and the data are centered: PCA (no matter how conducted) would report two approximately equal components. Let us now throw in a third variable $Z$ equal to $Y$ plus a tiny amount of random error. The correlation matrix of $(X,Y,Z)$ shows this with the small off-diagonal coefficients except between the second and third rows and columns ($Y$ and $Z$): $$\left( \begin{array}{ccc} 1. & -0.0344018 & -0.046076 \\ -0.0344018 & 1. & 0.941829 \\ -0.046076 & 0.941829 & 1. \end{array} \right)$$ Geometrically, we have displaced all the original points nearly vertically, lifting the previous picture right out of the plane of the page. This pseudo 3D point cloud attempts to illustrate the lifting with a side perspective view (based on a different dataset, albeit generated in the same way as before): The points originally lie in the blue plane and are lifted to the red dots. The original $Y$ axis points to the right. The resulting tilting also stretches the points out along the YZ directions, thereby doubling their contribution to the variance. Consequently, a PCA of these new data would still identify two major principal components, but now one of them will have twice the variance of the other. This geometric expectation is borne out with some simulations in R. For this I repeated the "lifting" procedure by creating near-collinear copies of the second variable a second, third, fourth, and fifth time, naming them $X_2$ through $X_5$. Here is a scatterplot matrix showing how those last four variables are well correlated: The PCA is done using correlations (although it doesn't really matter for these data), using the first two variables, then three, ..., and finally five. I show the results using plots of the contributions of the principal components to the total variance. Initially, with two almost uncorrelated variables, the contributions are almost equal (upper left corner). After adding one variable correlated with the second--exactly as in the geometric illustration--there are still just two major components, one now twice the size of the other. (A third component reflects the lack of perfect correlation; it measures the "thickness" of the pancake-like cloud in the 3D scatterplot.) After adding another correlated variable ($X_4$), the first component is now about three-fourths of the total; after a fifth is added, the first component is nearly four-fifths of the total. In all four cases components after the second would likely be considered inconsequential by most PCA diagnostic procedures; in the last case it's possible some procedures would conclude there is only one principal component worth considering. We can see now that there may be merit in discarding variables thought to be measuring the same underlying (but "latent") aspect of a collection of variables, because including the nearly-redundant variables can cause the PCA to overemphasize their contribution. There is nothing mathematically right (or wrong) about such a procedure; it's a judgment call based on the analytical objectives and knowledge of the data. 
But it should be abundantly clear that setting aside variables known to be strongly correlated with others can have a substantial effect on the PCA results. Here is the R code.

```r
n.cases <- 240               # Number of points.
n.vars <- 4                  # Number of mutually correlated variables.
set.seed(26)                 # Make these results reproducible.
eps <- rnorm(n.vars, 0, 1/4) # Make "1/4" smaller to *increase* the correlations.
x <- matrix(rnorm(n.cases * (n.vars+2)), nrow=n.cases)
beta <- rbind(c(1,rep(0, n.vars)), c(0,rep(1, n.vars)), cbind(rep(0,n.vars), diag(eps)))
y <- x%*%beta                # The variables.
cor(y)                       # Verify their correlations are as intended.
plot(data.frame(y))          # Show the scatterplot matrix.

# Perform PCA on the first 2, 3, 4, ..., n.vars+1 variables.
p <- lapply(2:dim(beta)[2], function(k) prcomp(y[, 1:k], scale=TRUE))

# Print summaries and display plots.
tmp <- lapply(p, summary)
par(mfrow=c(2,2))
tmp <- lapply(p, plot)
```
1,112
Should one remove highly correlated variables before doing PCA?
I will further illustrate the same process and idea as @whuber did, but with the loading plots, because loadings are the essence of PCA results. Here are three analyses. In the first, we have two variables, $X_1$ and $X_2$ (in this example, they do not correlate). In the second, we added $X_3$, which is almost a copy of $X_2$ and therefore correlates with it strongly. In the third, we similarly added 2 more "copies" of it: $X_4$ and $X_5$.

The plots of the loadings of the first 2 principal components then follow. Red spikes on the plots indicate the correlations between the variables, so a bunch of several spikes is where a cluster of tightly correlated variables is found. The components are the grey lines; the relative "strength" of a component (its relative eigenvalue magnitude) is given by the weight of the line. Two effects of adding the "copies" can be observed: Component 1 becomes stronger and stronger, and Component 2 weaker and weaker. The orientation of the components changes: at first, Component 1 went in the middle between $X_1$ and $X_2$; as we added $X_3$ to $X_2$, Component 1 immediately re-oriented itself to follow the emergent bunch of variables; and you may be sure that after we further added two more variables to the bunch, the attachment of Component 1 to that bunch of closely correlated variables became even more indisputable. I will not restate the moral, because @whuber already did it.

Addition. Below are some pictures in response to @whuber's comments. They are about the distinction between "variable space" and "subject space" and how the components orient themselves in each. Three bivariate PCAs are presented: the first row analyzes $r=0$, the second row analyzes $r=0.62$, and the third row $r=0.77$. The left column shows scatterplots (of standardized data) and the right column shows loading plots.

On a scatterplot, the correlation between $X_1$ and $X_2$ is rendered as the oblongness of the cloud. The angle (its cosine) between a component line and a variable line is the corresponding eigenvector element. The eigenvectors are identical in all three analyses (so the angles on all 3 graphs are the same). [But it is true that with $r=0$ exactly, the eigenvectors (and hence the angles) are theoretically arbitrary; because the cloud is perfectly "round", any pair of orthogonal lines through the origin could serve as the two components; even the $X_1$ and $X_2$ lines themselves could be chosen as the components.] The coordinates of the data points (200 subjects) on a component are the component scores, and their sum of squares divided by 200-1 is the component's eigenvalue.

On a loading plot, the points (vectors) are variables; they span a space which is 2-dimensional (because we have 2 points + the origin) but which is actually a reduced 200-dimensional (number of subjects) "subject space". Here the angle (cosine) between the red vectors is $r$. The vectors are of equal, unit length, because the data had been standardized. The first component is the dimension axis in this space which rushes towards the overall accumulation of the points; in the case of just 2 variables it is always the bisector between $X_1$ and $X_2$ (but adding a 3rd variable can deflect it anyhow). The angle (cosine) between a variable vector and a component line is the correlation between them, and because the vectors are unit length and the components are orthogonal, this is nothing else than the coordinates, the loading. The sum of squared loadings onto the component is its eigenvalue (the component just orients itself in this subject space so as to maximize it).

Addition 2. In the Addition above I was speaking about "variable space" and "subject space" as if they were incompatible, like water and oil. I had to reconsider and may say that, at least when we speak about PCA, both spaces are isomorphic in the end, and by that virtue we can correctly display all the PCA details (data points, variable axes, component axes, variables as points) on a single undistorted biplot. Below are the scatterplot (variable space) and the loading plot (component space, which is subject space by its genetic origin). Everything that could be shown on the one could also be shown on the other. The pictures are identical, only rotated by 45 degrees (and reflected, in this particular case) relative to each other. That was a PCA of variables v1 and v2 (standardized, thus it was $r$ that was analyzed). Black lines on the pictures are the variables as axes; green/yellow lines are the components as axes; blue points are the data cloud (subjects); red points are the variables displayed as points (vectors).
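A small numerical companion to these plots (my own sketch, not ttnphns's code): the eigenvalues of the correlation matrix as near-copies of $X_2$ are added, showing Component 1 strengthening while Component 2 weakens.

```python
# Eigenvalues (and hence relative component strengths) as correlated "copies"
# of X2 are added, mirroring the loading-plot story above.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
copies = [x2 + rng.normal(0, 0.1, size=n) for _ in range(3)]  # X3, X4, X5

for k in range(0, 4):  # add 0, 1, 2, 3 near-copies
    X = np.column_stack([x1, x2] + copies[:k])
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(X, rowvar=False))
    order = np.argsort(eigvals)[::-1]
    print(f"{2 + k} variables, eigenvalues: {np.round(eigvals[order], 2)}")
    # Loadings of PC1 would be eigvecs[:, order[0]] * sqrt(eigvals[order[0]]).
```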
1,113
Should one remove highly correlated variables before doing PCA?
Without details from your paper, I would conjecture that this discarding of highly correlated variables was done merely to save on computational power or workload. I cannot see a reason why PCA would 'break' for highly correlated variables. Projecting data back onto the bases found by PCA has the effect of whitening the data (or de-correlating them). That is the whole point behind PCA.
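A minimal sketch of the whitening/de-correlation claim (sklearn assumed; my own illustration):

```python
# PCA scores of correlated data are uncorrelated; with whiten=True they also
# have unit variance, so their covariance is approximately the identity.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))
X = np.hstack([x, x + 0.1 * rng.normal(size=(500, 1)), rng.normal(size=(500, 1))])

scores = PCA(whiten=True).fit_transform(X)
print(np.round(np.cov(scores, rowvar=False), 3))  # approximately the identity matrix
```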
1,114
Should one remove highly correlated variables before doing PCA?
From my understanding, correlated variables are OK, because PCA outputs vectors that are orthogonal.
1,115
Should one remove highly correlated variables before doing PCA?
Well, it depends on your algorithm. Highly correlated variables may mean an ill-conditioned matrix. If you use an algorithm that's sensitive to that, it might make sense. But I dare say that most of the modern algorithms used for cranking out eigenvalues and eigenvectors are robust to this. Try removing the highly correlated variables. Do the eigenvalues and eigenvectors change by much? If they do, then ill-conditioning might be the answer. Because highly correlated variables don't add information, the PCA decomposition shouldn't change.
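The suggested check could be sketched like this (my own illustration with a nearly duplicated column):

```python
# Compare the eigenvalues of the correlation matrix with and without a
# near-duplicate variable.
import numpy as np

rng = np.random.default_rng(0)
n = 300
a = rng.normal(size=n)
b = rng.normal(size=n)
c = a + 1e-6 * rng.normal(size=n)  # near-duplicate of `a` -> ill-conditioned matrix

X_with = np.column_stack([a, b, c])
X_without = np.column_stack([a, b])

print("with duplicate:   ", np.round(np.linalg.eigvalsh(np.corrcoef(X_with, rowvar=False))[::-1], 4))
print("without duplicate:", np.round(np.linalg.eigvalsh(np.corrcoef(X_without, rowvar=False))[::-1], 4))
# Note: as the other answers show, the eigenvalues *do* change (the shared
# factor is counted twice), even though modern eigensolvers handle the
# conditioning itself without trouble.
```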
1,116
Should one remove highly correlated variables before doing PCA?
Depends on what principal component selection method you use, doesn't it? I tend to keep any principal component with an eigenvalue > 1, so it wouldn't affect me. And from the examples above, even the scree plot method would usually pick the right ones, provided you keep everything before the elbow. However, if you simply picked the principal component with the 'dominant' eigenvalue you would be led astray. But that is not the right way to use a scree plot!
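For what it's worth, the eigenvalue > 1 rule (the Kaiser criterion) is easy to apply directly to the eigenvalues of the correlation matrix. This is just a sketch on made-up data, to make the selection rule concrete:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 6))
    X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=300)   # inject a strong correlation

    eigvals = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    kept = int(np.sum(eigvals > 1.0))
    print(eigvals)
    print("components retained by the eigenvalue > 1 rule:", kept)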
1,117
Where should I place dropout layers in a neural network?
In the original paper that proposed dropout layers, by Hinton (2012), dropout (with p=0.5) was used on each of the fully connected (dense) layers before the output; it was not used on the convolutional layers. This became the most commonly used configuration. More recent research has shown some value in applying dropout also to convolutional layers, although at much lower levels: p=0.1 or 0.2. Dropout was used after the activation function of each convolutional layer: CONV->RELU->DROP.
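To make that placement concrete, here is a small PyTorch sketch (my own illustration, not code from the paper): low-rate dropout after each convolutional activation, and the classic p=0.5 dropout on the fully connected layers. Layer sizes are arbitrary and assume 32x32 RGB inputs:

    import torch.nn as nn

    model = nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Dropout2d(p=0.2),        # CONV -> RELU -> DROP at a low rate
        nn.Conv2d(32, 64, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Dropout2d(p=0.2),
        nn.Flatten(),
        nn.Linear(64 * 32 * 32, 256),
        nn.ReLU(),
        nn.Dropout(p=0.5),          # classic p=0.5 on the dense layer
        nn.Linear(256, 10),
    )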
1,118
Where should I place dropout layers in a neural network?
In front of every linear projection. Refer to Srivastava et al. (2014).
1,119
Where should I place dropout layers in a neural network?
The original paper proposed dropout layers that were used on each of the fully connected (dense) layers before the output; it was not used on the convolutional layers. We should not use a dropout layer right after a convolutional layer: as we slide the filter over the width and height of the input image, we produce a 2-dimensional activation map that gives the responses of that filter at every spatial position. Since a dropout layer neutralizes (zeroes out) random neurons, there is a chance of losing a very important feature of the image during training.
1,120
Where should I place dropout layers in a neural network?
Some people interpret a dropout-enabled neural network as an approximation of a Bayesian neural network, so we can look at this problem from the Bayesian perspective, or treat such networks as stochastic artificial neural networks.

Artificial neural network. An artificial neural network maps some inputs/features to the output/predictions, which can be simplified as the following process: $l_0 = x,$ $l_i = nl_i(W_il_{i-1}+b_i)\hspace{1cm} \forall i \in [1, n],$ $y=l_n,$ where $nl_i$ represents the non-linear activation function in the $i$th layer.

Stochastic artificial neural networks. There are two methods to convert a traditional neural network into a stochastic artificial neural network, simulating multiple possible models $\theta$ with their corresponding probability distribution $p(\theta)$: 1) give the network stochastic activations (depicted below on the left), or 2) stochastic weights/coefficients (on the right).

The dropout model. In this awesome article, What My Deep Model Doesn't Know..., Yarin Gal views it as a stochastic network. Notice that the dropout mechanism applied on $W_1$ works on the $X$ layer and the dropout mechanism applied on $W_2$ works on the $\sigma$ layer. The process (with $n$ layers) can be formulated as: $l_0 = x,$ $z_{i,j} \sim \text{Bernoulli}(p_i)\hspace{1cm} \forall i \in [1, n],$ $l_i = nl_i((l_{i-1} \cdot \text{diag}(z_i))W_i +b_i)\hspace{1cm} \forall i \in [1, n],$ $y=l_n,$ where $l_{i-1} \cdot \text{diag}(z_i)$ means that we randomly zero out some elements of the input (preceding layer) with probability $1-p_i$.

TL;DR: we normally apply dropout before the activation, to drop out the input elements coming from the preceding layer. Here is an illustration of the dropout mechanism.

References: Hands-on Bayesian Neural Networks - a Tutorial for Deep Learning Users; What My Deep Model Doesn't Know...
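A minimal sketch of that stochastic forward pass (my own illustration, NumPy only, ReLU used at every layer just to keep it short): the Bernoulli mask is applied to the preceding layer's output before the linear map and the non-linearity, and repeating the pass gives different outputs, which is the "stochastic network" view:

    import numpy as np

    rng = np.random.default_rng(0)

    def dropout_forward(x, weights, biases, p_keep=0.9):
        # l_i = nl((l_{i-1} * z_i) W_i + b_i) with z_i ~ Bernoulli(p_keep)
        h = x
        for W, b in zip(weights, biases):
            z = rng.binomial(1, p_keep, size=h.shape)
            h = np.maximum(0.0, (h * z) @ W + b)
        return h

    # Tiny 4 -> 8 -> 3 network with random weights.
    Ws = [rng.normal(size=(4, 8)), rng.normal(size=(8, 3))]
    bs = [np.zeros(8), np.zeros(3)]
    x = rng.normal(size=(1, 4))

    samples = np.vstack([dropout_forward(x, Ws, bs) for _ in range(100)])
    print(samples.mean(axis=0))   # Monte Carlo mean prediction
    print(samples.std(axis=0))    # spread across stochastic passes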
1,121
Where should I place dropout layers in a neural network?
You apply dropout after the non-linear activation function. Sources for this: https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf - Formula on page 1933 and diagram on the next page. https://sebastianraschka.com/faq/docs/dropout-activation.html https://pgaleone.eu/deep-learning/regularization/2017/01/10/anaysis-of-dropout/
1,122
Where should I place dropout layers in a neural network?
For transformers I think you should do it like this. According to the original paper (https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) they say: "Residual Dropout: We apply dropout [27] to the output of each sub-layer, before it is added to the sub-layer input and normalized. In addition, we apply dropout to the sums of the embeddings and the positional encodings in both the encoder and decoder stacks. For the base model, we use a rate of P_drop = 0.1." Which makes me think they do the following, where sub_layer is the feed-forward or multi-headed self-attention block (not the output of Add+LN):

    residual = x
    x = sub_layer(x)                       # multi-head attention or feed-forward
    x = torch.nn.functional.dropout(x, p=0.1)
    x = layer_norm(residual + x)           # add & norm, as described in the quote

So dropout goes right after the multi-headed attention or the fully connected block, before the Add+LN, inside each transformer block. For the input, it goes just before the actual encoder stack, i.e. together with the table look-up embeddings:

    batch_token_seqs: list[list[int]] = tokenize(batch_of_tokens)  # usually also returns masks: one for the right shift (no cheating), another for padding
    batch_embeddings: torch.Tensor = table_look_up(batch_token_seqs) * D**0.5
    # D**0.5 is there for completeness; the paper mentions it but doesn't justify it, and it's not the same as the scaling inside multi-head attention.
    batch_embeddings = batch_embeddings + pos_embedding
    batch_embeddings = self.drop_out(batch_embeddings)
1,123
What is the .632+ rule in bootstrapping?
I will get to the 0.632 estimator, but it'll be a somewhat long development: Suppose we want to predict $Y$ with $X$ using the function $f$, where $f$ may depend on some parameters that are estimated using the data $(\mathbf{Y}, \mathbf{X})$, e.g. $f(\mathbf{X}) = \mathbf{X}\mathbf{\beta}$ A naïve estimate of prediction error is $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ where $L$ is some loss function, e.g. squared error loss. This is often called training error. Efron et al. calls it apparent error rate or resubstitution rate. It's not very good since we use our data $(x_i,y_i)$ to fit $f$. This results in $\overline{err}$ being downward biased. You want to know how well your model $f$ does in predicting new values. Often we use cross-validation as a simple way to estimate the expected extra-sample prediction error (how well does our model do on data not in our training set?). $$Err = \text{E}\left[ L(Y, f(X))\right]$$ A popular way to do this is to do $K$-fold cross-validation. Split your data into $K$ groups (e.g. 10). For each group $k$, fit your model on the remaining $K-1$ groups and test it on the $k$th group. Our cross-validated extra-sample prediction error is just the average $$Err_{CV} = \dfrac{1}{N}\sum_{i=1}^N L(y_i, f_{-\kappa(i)}(x_i))$$ where $\kappa$ is some index function that indicates the partition to which observation $i$ is allocated and $f_{-\kappa(i)}(x_i)$ is the predicted value of $x_i$ using data not in the $\kappa(i)$th set. This estimator is approximately unbiased for the true prediction error when $K=N$ and has larger variance and is more computationally expensive for larger $K$. So once again we see the bias–variance trade-off at play. Instead of cross-validation we could use the bootstrap to estimate the extra-sample prediction error. Bootstrap resampling can be used to estimate the sampling distribution of any statistic. If our training data is $\mathbf{X} = (x_1,\ldots,x_N)$, then we can think of taking $B$ bootstrap samples (with replacement) from this set $\mathbf{Z}_1,\ldots,\mathbf{Z}_B$ where each $\mathbf{Z}_i$ is a set of $N$ samples. Now we can use our bootstrap samples to estimate extra-sample prediction error: $$Err_{boot} = \dfrac{1}{B}\sum_{b=1}^B\dfrac{1}{N}\sum_{i=1}^N L(y_i, f_b(x_i))$$ where $f_b(x_i)$ is the predicted value at $x_i$ from the model fit to the $b$th bootstrap dataset. Unfortunately, this is not a particularly good estimator because bootstrap samples used to produce $f_b(x_i)$ may have contained $x_i$. The leave-one-out bootstrap estimator offers an improvement by mimicking cross-validation and is defined as: $$Err_{boot(1)} = \dfrac{1}{N}\sum_{i=1}^N\dfrac{1}{|C^{-i}|}\sum_{b\in C^{-i}}L(y_i,f_b(x_i))$$ where $C^{-i}$ is the set of indices for the bootstrap samples that do not contain observation $i$, and $|C^{-i}|$ is the number of such samples. $Err_{boot(1)}$ solves the overfitting problem, but is still biased (this one is upward biased). The bias is due to non-distinct observations in the bootstrap samples that result from sampling with replacement. The average number of distinct observations in each sample is about $0.632N$ (see this answer for an explanation of why Why on average does each bootstrap sample contain roughly two thirds of observations?). 
To solve the bias problem, Efron and Tibshirani proposed the 0.632 estimator: $$ Err_{.632} = 0.368\overline{err} + 0.632Err_{boot(1)}$$ where $$\overline{err} = \dfrac{1}{N}\sum_{i=1}^N L(y_i,f(x_i))$$ is the naïve estimate of prediction error often called training error. The idea is to average a downward biased estimate and an upward biased estimate. However, if we have a highly overfit prediction function (i.e. $\overline{err}=0$) then even the .632 estimator will be downward biased. The .632+ estimator is designed to be a less-biased compromise between $\overline{err}$ and $Err_{boot(1)}$. $$ Err_{.632+} = (1 - w) \overline{err} + w Err_{boot(1)} $$ with $$w = \dfrac{0.632}{1 - 0.368R} \quad\text{and}\quad R = \dfrac{Err_{boot(1)} - \overline{err}}{\gamma - \overline{err}} $$ where $\gamma$ is the no-information error rate, estimated by evaluating the prediction model on all possible combinations of targets $y_i$ and predictors $x_i$. $$\gamma = \dfrac{1}{N^2}\sum_{i=1}^N\sum_{j=1}^N L(y_i, f(x_j))$$. Here $R$ measures the relative overfitting rate. If there is no overfitting (R=0, when the $Err_{boot(1)} = \overline{err}$) this is equal to the .632 estimator.
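To make the recipe concrete, here is a minimal NumPy sketch of the .632+ computation with 0-1 loss and a toy nearest-centroid classifier (the data, the classifier and the variable names are my own; the relative overfitting rate R is clipped to [0, 1]; this is an outline of the procedure, not a reference implementation):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy two-class data.
    n = 200
    X = np.vstack([rng.normal(0, 1, (n // 2, 2)), rng.normal(1.5, 1, (n // 2, 2))])
    y = np.repeat([0, 1], n // 2)

    def fit_predict(X_train, y_train, X_test):
        # Nearest-centroid classifier, used here only as a stand-in model.
        centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
        d = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    # Apparent (training) error.
    pred_all = fit_predict(X, y, X)
    err_bar = np.mean(pred_all != y)

    # Leave-one-out bootstrap error.
    B = 200
    loss = [[] for _ in range(n)]
    for _ in range(B):
        idx = rng.integers(0, n, n)
        out = np.setdiff1d(np.arange(n), idx)
        if out.size == 0:
            continue
        pred = fit_predict(X[idx], y[idx], X[out])
        for i, l in zip(out, pred != y[out]):
            loss[i].append(l)
    err_boot1 = np.mean([np.mean(l) for l in loss if l])

    # No-information error rate gamma and the .632+ combination.
    gamma = np.mean(pred_all[None, :] != y[:, None])   # all (y_i, x_j) pairs
    R = (err_boot1 - err_bar) / (gamma - err_bar) if gamma != err_bar else 0.0
    R = float(np.clip(R, 0.0, 1.0))                    # relative overfitting rate
    w = 0.632 / (1 - 0.368 * R)
    err_632plus = (1 - w) * err_bar + w * err_boot1
    print(err_bar, err_boot1, err_632plus)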
1,124
What is the .632+ rule in bootstrapping?
You will find more information in section 3 of this paper [1]. But to summarize, if you call $S$ a sample of $n$ numbers from $\{1:n\}$ drawn randomly and with replacement, $S$ contains on average approximately $(1-e^{-1})\,n \approx 0.63212056\, n$ unique elements. The reasoning is as follows. We populate $S=\{s_1,\ldots,s_n\}$ by sampling $i=1,\ldots,n$ times (randomly and with replacement) from $\{1:n\}$. Consider a particular index $m\in\{1:n\}$. Then: $$P(s_i=m)=1/n \quad\text{and}\quad P(s_i\neq m)=1-1/n,$$ and this is true $\forall\, 1\leq i \leq n$ (intuitively, since we sample with replacement, the probabilities do not depend on $i$). Thus $$P(m\in S)=1-P(m\notin S)=1-P\left(\bigcap_{i=1}^n \{s_i\neq m\}\right)=1-\prod_{i=1}^n P(s_i\neq m)=1-(1-1/n)^n\approx 1-e^{-1}.$$ You can also run this little simulation to check empirically the quality of the approximation (which depends on $n$):

    n <- 100
    fx01 <- function(ll, n) {
      a1 <- sample(1:n, n, replace = TRUE)
      length(unique(a1)) / n
    }
    b1 <- c(lapply(1:1000, fx01, n = n), recursive = TRUE)
    mean(b1)

[1] Bradley Efron and Robert Tibshirani (1997). Improvements on Cross-Validation: The .632+ Bootstrap Method. Journal of the American Statistical Association, Vol. 92, No. 438, pp. 548-560.
1,125
What is the .632+ rule in bootstrapping?
In my experience, primarily based on simulations, the 0.632 and 0.632+ bootstrap variants were needed only because of severe problems caused by the use of an improper accuracy scoring rule, namely the proportion "classified" correctly. When you use proper (e.g., deviance-based or Brier score) or semi-proper (e.g., $c$-index = AUROC) scoring rules, the standard Efron-Gong optimism bootstrap works just fine.
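To illustrate what that standard optimism bootstrap looks like with a proper score, here is a small sketch using the Brier score and a logistic model (made-up data; the procedure is outlined from memory and is only a sketch, not a reference implementation):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Made-up data: two predictors, binary outcome.
    n = 300
    X = rng.normal(size=(n, 2))
    p_true = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 0.5 * X[:, 1])))
    y = rng.binomial(1, p_true)

    def brier(model, X, y):
        p = model.predict_proba(X)[:, 1]
        return np.mean((p - y) ** 2)

    # Apparent performance: fit and evaluate on the same data.
    full_model = LogisticRegression().fit(X, y)
    apparent = brier(full_model, X, y)

    # Optimism bootstrap: for each resample, fit on the resample and measure how
    # much better the model looks on its own resample than on the original data.
    B = 200
    optimism = []
    for _ in range(B):
        idx = rng.integers(0, n, n)
        m = LogisticRegression().fit(X[idx], y[idx])
        optimism.append(brier(m, X[idx], y[idx]) - brier(m, X, y))

    # Optimism-corrected estimate of the Brier score.
    corrected = apparent - np.mean(optimism)
    print(apparent, corrected)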
1,126
What is the .632+ rule in bootstrapping?
Those answers are very useful. I couldn't find a way to demonstrate it with maths, so I wrote some Python code which works quite well:

    from numpy import mean
    from numpy.random import choice

    N = 3000
    variables = range(N)
    num_loop = 1000

    # Proportion of remaining (distinct) variables in each bootstrap sample
    p_var = []
    for i in range(num_loop):
        set_var = set(choice(variables, N))
        p = len(set_var) / float(N)
        if i % 50 == 0:
            print("value for", i, "iteration, p =", p)
        p_var.append(p)

    print("Estimator of the proportion of remaining variables:", mean(p_var))
1,127
What is the .632+ rule in bootstrapping?
I was struggling with this concept of .632+. The answers given here clear up some things for me, but I find it all rather technical. For those of you at my level, I'll try to explain it: the .632+ bootstrap trains and tests models on bootstrap resamples of your dataset and then calculates scores, for example accuracy, for those models. To check how much your model's predictions differ from what should be expected, some maths involving e and a weighting term is done to adjust your results towards what might be expected in the real population. If you want to implement it in Python without knowing exactly what's going on, this is a great resource with an explanation and a great library: bootstrap_point632_score - mlxtend (rasbt.github.io). That method generates a list with scores for each trained model, which you can use to evaluate your model.
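For completeness, a minimal usage sketch of that library is given below; the exact parameter names (n_splits, method, random_seed) are quoted from memory of the mlxtend documentation and may differ between versions, so check the linked page:

    import numpy as np
    from mlxtend.evaluate import bootstrap_point632_score
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    clf = DecisionTreeClassifier(random_state=0)

    # One accuracy score per bootstrap round; method='.632+' asks for the
    # .632+ correction instead of the plain .632 estimator.
    scores = bootstrap_point632_score(clf, X, y, n_splits=200,
                                      method='.632+', random_seed=0)
    print(np.mean(scores), np.percentile(scores, [2.5, 97.5]))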
1,128
KL divergence between two univariate Gaussians
OK, my bad. The error is in the last equation: \begin{align} KL(p, q) &= - \int p(x) \log q(x) dx + \int p(x) \log p(x) dx\\\\ &=\frac{1}{2} \log (2 \pi \sigma_2^2) + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} (1 + \log 2 \pi \sigma_1^2)\\\\ &= \log \frac{\sigma_2}{\sigma_1} + \frac{\sigma_1^2 + (\mu_1 - \mu_2)^2}{2 \sigma_2^2} - \frac{1}{2} \end{align} Note the missing $-\frac{1}{2}$. The last line becomes zero when $\mu_1=\mu_2$ and $\sigma_1=\sigma_2$.
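If you want to double-check the closed form numerically, a quick sketch (my own, using SciPy) integrates $p(x)\,[\log p(x) - \log q(x)]$ directly and compares it with the formula above:

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import norm

    mu1, s1 = 0.5, 1.2    # parameters of p
    mu2, s2 = -0.3, 2.0   # parameters of q

    def integrand(x):
        lp = norm.logpdf(x, mu1, s1)
        lq = norm.logpdf(x, mu2, s2)
        return np.exp(lp) * (lp - lq)

    kl_numeric, _ = quad(integrand, -np.inf, np.inf)
    kl_closed = np.log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5
    print(kl_numeric, kl_closed)   # the two values agree closely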
1,129
KL divergence between two univariate Gaussians
I did not have a look at your calculation but here is mine with a lot of details. Suppose $p$ is the density of a normal random variable with mean $\mu_1$ and variance $\sigma^2_1$, and that $q$ is the density of a normal random variable with mean $\mu_2$ and variance $\sigma^2_2$. The Kullback-Leibler distance from $q$ to $p$ is: $$\int \left[\log( p(x)) - \log( q(x)) \right] p(x) dx$$ \begin{align}&=\int \left[ -\frac{1}{2} \log(2\pi) - \log(\sigma_1) - \frac{1}{2} \left(\frac{x-\mu_1}{\sigma_1}\right)^2 + \frac{1}{2}\log(2\pi) + \log(\sigma_2) + \frac{1}{2} \left(\frac{x-\mu_2}{\sigma_2}\right)^2 \right]\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\&=\int \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right] \right\}\times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx\\& =E_{1} \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right]\right\}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2\sigma_1^2} E_1 \left\{(X-\mu_1)^2\right\}\\ &=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2};\end{align} (Now note that $(X - \mu_2)^2 = (X-\mu_1+\mu_1-\mu_2)^2 = (X-\mu_1)^2 + 2(X-\mu_1)(\mu_1-\mu_2) + (\mu_1-\mu_2)^2$) \begin{align}&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} \left[E_1\left\{(X-\mu_1)^2\right\} + 2(\mu_1-\mu_2)E_1\left\{X-\mu_1\right\} + (\mu_1-\mu_2)^2\right] - \frac{1}{2}\\&=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}.\end{align}
1,130
Is Facebook coming to an end?
The answers so far have focused on the data itself, which makes sense with the site this is on, and the flaws about it. But I'm a computational/mathematical epidemiologist by inclination, so I'm also going to talk about the model itself for a little bit, because it's also relevant to the discussion. In my mind, the biggest problem with the paper is not the Google data. Mathematical models in epidemiology handle messy data all the time, and to my mind the problems with it could be addressed with a fairly straightforward sensitivity analysis. The biggest problem, to me, is that the researchers have "doomed themselves to success" — something that should always be avoided in research. They do this in the model they decided to fit to the data: a standard SIR model. Briefly, a SIR model (which stands for susceptible (S) infectious (I) recovered (R)) is a series of differential equations that track the health states of a population as it experiences an infectious disease. Infected individuals interact with susceptible individuals and infect them, and then in time move on to the recovered category. This produces a curve that looks like this: Beautiful, is it not? And yes, this one is for a zombie epidemic. Long story. In this case, the red line is what's being modeled as "Facebook users". The problem is this: In the basic SIR model, the I class will eventually, and inevitably, asymptotically approach zero. It must happen. It doesn't matter if you're modeling zombies, measles, Facebook, or Stack Exchange, etc. If you model it with a SIR model, the inevitable conclusion is that the population in the infectious (I) class drops to approximately zero. There are extremely straightforward extensions to the SIR model that make this not true — either you can have people in the recovered (R) class come back to susceptible (S) (essentially, this would be people who left Facebook changing from "I'm never going back" to "I might go back someday"), or you can have new people come into the population (this would be little Timmy and Claire getting their first computers). Unfortunately, the authors didn't fit those models. This is, incidentally, a widespread problem in mathematical modeling. A statistical model is an attempt to describe the patterns of variables and their interactions within the data. A mathematical model is an assertion about reality. You can get a SIR model to fit lots of things, but your choice of a SIR model is also an assertion about the system. Namely, that once it peaks, it's heading to zero. Incidentally, Internet companies do use user-retention models that look a heck of a lot like epidemic models, but they're also considerably more complex than the one presented in the paper.
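For readers who have not seen one, here is a minimal SIR integration (SciPy, arbitrary parameters of my own choosing) that reproduces the qualitative behaviour described above: the infectious class rises, peaks, and then decays towards zero.

    import numpy as np
    from scipy.integrate import solve_ivp

    beta, gamma = 0.4, 0.1   # arbitrary transmission and recovery rates

    def sir(t, state):
        S, I, R = state
        dS = -beta * S * I
        dI = beta * S * I - gamma * I
        dR = gamma * I
        return [dS, dI, dR]

    # Start with 1% "infected" (users) and 99% susceptible.
    sol = solve_ivp(sir, t_span=(0, 200), y0=[0.99, 0.01, 0.0],
                    t_eval=np.linspace(0, 200, 201))

    S, I, R = sol.y
    print(f"peak I = {I.max():.3f} at t = {sol.t[I.argmax()]:.0f}")
    print(f"I at t = 200: {I[-1]:.4f}")   # inevitably heading towards zero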
1,131
Is Facebook coming to an end?
My primary concern with this paper is that it focuses primarily on Google search results. It is a well-established fact that smartphone use is on the rise (Pew Internet, Brandwatch), and traditional computer sales are declining (possibly just due to old computers still functioning) (Slate, ExtremeTech), as more people use smartphones to access the internet. Considering there is a native Facebook app for (at least) iOS, Android, Blackberry, and Windows Phone, it's no surprise that the number of Google queries for "facebook" has fallen significantly. If users no longer need to open a browser and mistype "facebook.com" in the URL bar, then that would definitely negatively impact the number of searches. In fact, the number of FB users who use the app has gone up significantly (TechCrunch, Forbes). I think this study is just some "huh, interesting correlation" that got taken too far by alarmist media outlets; "Did you know the world is changing? How unexpected!"
1,132
Is Facebook coming to an end?
Well, this paper establishes that the number of Google searches for Facebook fits a certain curve nicely. So at best it can predict that searches for Facebook will decline by 80%, which might well happen, because Facebook might become so ubiquitous that nobody would need to search for it. The problem with this type of model is that it assumes no other factors can influence the dynamics of the observed variable. That assumption is hard to justify when dealing with data about people. For example, the model assumes that Facebook cannot do anything to counter the loss of its users, which is a very questionable assumption to make.
1,133
Is Facebook coming to an end?
Google Trends, in my opinion, can't produce a good data set for this kind of study. Google Trends shows how often a term is searched for with Google, so there are at least two reasons for raising some doubts about the prediction: (1) We don't know whether the user searches Google for Facebook in order to log in or to find information about Facebook. Facebook is not only a site, it is a phenomenon, with many articles, books and a film about it, and Facebook Inc. began selling stock to the public and trading on NASDAQ on May 18, 2012. Google Trends shows you both: the searches for the site and the searches for the "phenomenon". New things always have a great impact on the masses; TV had a great impact on the masses, and now no one writes articles about it, yet it is still one of the most used appliances. (2) Most users don't search for "facebook" on Google to log in. With mobile applications and bookmarks, a user with a decent knowledge of the internet searches for "facebook" on Google only the first time; then he usually saves the page as a bookmark or downloads the application. The graph below is the Google trend for Wikipedia; it suggests that we will not use Wikipedia in the future. Obviously this is not true: we simply don't access Wikipedia by typing "wikipedia", we search and then use the Wikipedia page, or we use a bookmark to access it.
1,134
Is Facebook coming to an end?
A few basic issues stand out with this paper: It assumes that search-engine queries about a rising social network correlate with increases in its membership. The two may have correlated in the past, but may not in the future. There are very few new large social networks. You can almost count them on one hand: Friendster, Myspace, Facebook, Google+. Also, Stack Exchange, Tumblr, and Twitter function similarly to social networks. Is anyone predicting Twitter is over? Quite to the contrary, it seems to have major momentum. There is not much mention or study of other ones to see if they fit. In a way we are asking: does a trend exist among 5-7 data points (the number of social networks)? It's just too little data to make any conclusion about the future. Facebook displaced Myspace. That was the chief dynamic. The paper doesn't consider the idea that one infection is displacing another; it tends to consider them separately. What is displacing Facebook? Google+? Twitter? The interaction and "defection" of customers from one "brand" or "product" to the other is the critical phenomenon in this area. Social networks coexist. One can be a member of multiple sites. It is true that members may tend to prefer one over the other. It would seem a much better model is that there is a consolidation going on, as in economics, such as with automobiles, radio makers, web sites, etc. As in any new disruptive technology, there are many competitors in the beginning, and then, later, the field narrows, they tend to consolidate, there are buyouts and mergers, and some die out in the competition. We already see examples of this, e.g. Yahoo buying out Tumblr recently. A similar concept might be television networks consolidating and being owned by large conglomerates, e.g. major media companies owning many media assets. Indeed, Myspace was bought out by News Corporation. The way to go is to look for more analogies between economics and infections (biology). Companies acquiring customers from competitors and the uptake of products do indeed have many epidemiological parallels. There are strong parallels to evolutionary "red queen" races [see the book The Red Queen by Ridley]. There might be connections to a field called bionomics. Another basic model is products that compete with each other and have various "barriers to entry" for customers who switch from one brand to another. It is true the cost of switching is very low in cyberspace. It's similar to brands of beer competing for customers, etc. In an asymptotic model, it is much more likely that a network increases its members toward some asymptotic maximum and then tends to plateau. Early in the plateau, it will not be apparent that it is a plateau. That all said, I think the paper has some very valid and engaging ideas and is likely to spur much further research. It's groundbreaking, pioneering, and it just needs to be adjusted a bit in its claims. I am delighted in this use of Stack Exchange and collaborative wisdom/collective intelligence for analyzing this paper. (Now if only reporters researching the subject would read this whole page carefully before preparing their simplistic sound bites.)
1,135
Is Facebook coming to an end?
The question isn't "if" but "when". That it will end is already guaranteed. http://www.ted.com/talks/geoffrey_west_the_surprising_math_of_cities_and_corporations.html I take umbrage with the use of the SIR model. It comes with assumptions, one of which is that eventually everyone is "recovered". Infections are not perpetual, while technology adoption can be (consider the automobile, for example). If the business is doomed to eventually die, then while it is going through its death throes the relationships between susceptible, infected, and recovered might be adequately modeled by a particular SIR model. This does not mean the model is descriptive of any of the seasons before end-of-life. It does not take into account other forces - the context. Facebook was part of the context of the end of Myspace, so while an SIR model was appropriate for Myspace-only use, it was not for social-network use as a whole, because many users had accounts on both and simply switched to Facebook-dominant usage. I dug through the zombie model, and even through some non-zombie SIR fits, and a time- and population-punctuated, windowed SIR is more appropriate there. The SIR is not a universal model; it has strengths and weaknesses. That means it is imperfect even for the systems it was engineered to model. Such fundamental imperfection for its target suggests that, without careful use, application outside the target area can be, ceteris paribus, more problematic than other models.
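To make the "everyone eventually recovers" assumption concrete, here is a minimal base-R sketch of the standard SIR equations (dS/dt = -beta*S*I, dI/dt = beta*S*I - gamma*I, dR/dt = gamma*I) integrated with a crude Euler step. The parameter values are arbitrary and are not taken from the paper; the point is only that the model structure forces I to die out and R to absorb most of the population.

# Minimal SIR sketch with arbitrary, made-up parameters
beta  <- 0.4   # transmission rate
gamma <- 0.1   # recovery (abandonment) rate
dt    <- 0.1
steps <- 2000

S <- 0.99; I <- 0.01; R <- 0          # proportions of the population
for (k in 1:steps) {
  dS <- -beta * S * I
  dI <-  beta * S * I - gamma * I
  dR <-  gamma * I
  S  <- S + dt * dS; I <- I + dt * dI; R <- R + dt * dR
}
round(c(S = S, I = I, R = R), 3)
# I ends near 0 and R holds most of the population: eventual abandonment is
# built into the model, which is exactly the assumption questioned above.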
1,136
Is Facebook coming to an end?
To answer your question: This model and logic may have worked for MySpace, but is it valid for any social network? Probably not. Historical data can only predict future events if the 'environment' is similar. The paper assumes that the total number of Google users and queries is constant, which of course it is not, so the article may say more about Google than about Facebook. However, based on the rapid rise and fall of many other social networks such as MySpace, I think one can safely assume there is a good chance Facebook will no longer be the dominant social network in 5 years.
1,137
Is Facebook coming to an end?
If we take a look at the map of social networks, there are some cases where the epidemic model applies: http://vincos.it/world-map-of-social-networks/ The article could have used other examples (Friendster and Orkut are good examples of a massive decline in users) and could also have taken into account the fact that people normally migrate to another social network that offers better or new services. Facebook innovated the way people communicate: with Orkut, a user needed to visit another person's profile to see their updates, whereas on Facebook the updates appear in the user's own feed. That's a major change. This model and logic may have worked for MySpace, but is it valid for any social network? IMHO, people don't leave social networks; they migrate, based on a better service, functionality or experience. The question is: will there be a better social network? Maybe Google+.
1,138
Is Facebook coming to an end?
The answers here are excellent in picking apart the paper's weaknesses; I especially enjoyed @Fomite's critique of their use of SIR models. But it's now been 8 years since this question was asked, and 2017 has come and gone. So I thought it would be fun to revisit this and ask: what do the data show? Well, Facebook user activity data show conclusively that the prediction failed. First, the number of active monthly users (i.e. users who have logged into their accounts within the last 30 days) has increased fairly steadily. It has levelled off a bit since 2021, but not decreased. The same picture holds for the number of active daily users, except that there is less sign of levelling off [note: the time ranges are slightly different in the two plots]. It's interesting to see a small but notable jump at the beginning of the pandemic that seems to have subsided earlier this year. [Both images are from Statista, and the links above take you to the page with the latest data.] Bottom line: we already had good explanations for why the prediction was likely to be poor, and we now know it was wrong.
1,139
Why are neural networks becoming deeper, but not wider?
As a disclaimer, I work on neural nets in my research, but I generally use relatively small, shallow neural nets rather than the really deep networks at the cutting edge of research you cite in your question. I am not an expert on the quirks and peculiarities of very deep networks and I will defer to someone who is. First, in principle, there is no reason you need deep neural nets at all. A sufficiently wide neural network with just a single hidden layer can approximate any (reasonable) function given enough training data. There are, however, a few difficulties with using an extremely wide, shallow network. The main issue is that these very wide, shallow networks are very good at memorization, but not so good at generalization. So, if you train the network with every possible input value, a super wide network could eventually memorize the corresponding output value that you want. But that's not useful because for any practical application you won't have every possible input value to train with. The advantage of multiple layers is that they can learn features at various levels of abstraction. For example, if you train a deep convolutional neural network to classify images, you will find that the first layer will train itself to recognize very basic things like edges, the next layer will train itself to recognize collections of edges such as shapes, the next layer will train itself to recognize collections of shapes like eyes or noses, and the next layer will learn even higher-order features like faces. Multiple layers are much better at generalizing because they learn all the intermediate features between the raw data and the high-level classification. So that explains why you might use a deep network rather than a very wide but shallow network. But why not a very deep, very wide network? I think the answer there is that you want your network to be as small as possible to produce good results. As you increase the size of the network, you're really just introducing more parameters that your network needs to learn, and hence increasing the chances of overfitting. If you build a very wide, very deep network, you run the chance of each layer just memorizing what you want the output to be, and you end up with a neural network that fails to generalize to new data. Aside from the specter of overfitting, the wider your network, the longer it will take to train. Deep networks already can be very computationally expensive to train, so there's a strong incentive to make them wide enough that they work well, but no wider.
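As a hedged illustration of the "one sufficiently wide hidden layer can fit a reasonable function" point, here is a short R sketch using the nnet package (a single-hidden-layer network). The target function, the width of 30 units, and the training settings are arbitrary choices made for this example, not anything taken from the answer above.

# Sketch: a single, reasonably wide hidden layer fitting a smooth 1-D function.
# Assumes the 'nnet' package (shipped with R as a recommended package) is available.
library(nnet)

set.seed(1)
x <- seq(-3, 3, length.out = 400)
y <- sin(2 * x) + 0.1 * rnorm(length(x))      # noisy target function
d <- data.frame(x = x, y = y)

wide_shallow <- nnet(y ~ x, data = d, size = 30,   # 30 hidden units, one layer
                     linout = TRUE, decay = 1e-4,
                     maxit = 2000, trace = FALSE)

mean((predict(wide_shallow, d) - d$y)^2)      # small in-sample error
# How well this extrapolates outside [-3, 3] is another matter, which is where
# the memorization-versus-generalization argument above comes in.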
1,140
Why are neural networks becoming deeper, but not wider?
I don't think there is a definite answer to your questions, but I think the conventional wisdom goes as follows: Basically, as the hypothesis space of a learning algorithm grows, the algorithm can learn richer and richer structures. But at the same time, the algorithm becomes more prone to overfitting and its generalization error is likely to increase. So ultimately, for any given dataset, it's advisable to work with the minimal model that has enough capacity to learn the real structure of the data. But this is very hand-wavy advice, since usually the "real structure of the data" is unknown, and often even the capacities of the candidate models are only vaguely understood. When it comes to neural networks, the size of the hypothesis space is controlled by the number of parameters. And it seems that for a fixed number of parameters (or a fixed order of magnitude), going deeper allows the models to capture richer structures (e.g. this paper). This may partially explain the success of deeper models with fewer parameters: VGGNet (from 2014) has 16 layers and roughly 140M parameters, while ResNet (from 2015) beat it with 152 layers and only roughly 60M parameters. (As an aside, smaller models may be computationally easier to train, but I don't think that is a major factor by itself, since depth actually complicates the training.) Note that this trend (more depth, fewer parameters) is mostly present in vision-related tasks and convolutional networks, and it calls for a domain-specific explanation. So here's another perspective: Each "neuron" in a convolutional layer has a "receptive field", which is the size and shape of the region of the input that affects its output. Intuitively, each kernel captures some kind of relation between nearby inputs, and small kernels (which are common and preferable) have a small receptive field, so they can provide information only about local relations. But as you go deeper, the receptive field of each neuron with respect to some earlier layer becomes larger. So deep layers can provide features with global semantic meaning and abstract details (relations of relations ... of relations of objects), while using only small kernels, which regularize the relations the network learns and help it converge and generalize. So the usefulness of deep convolutional networks in computer vision may be partially explained by the spatial structure of images and videos. It's possible that time will tell that for different types of problems, or for non-convolutional architectures, depth actually doesn't work well.
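As a small numeric aside, here is a base-R sketch of how the receptive field of stacked convolutions grows with depth, using the standard recurrence rf_l = rf_(l-1) + (k_l - 1) * (product of the strides of the earlier layers). The kernel sizes and strides below are arbitrary choices for illustration.

# Receptive field of a stack of conv layers with kernel sizes k and strides s
receptive_field <- function(kernel_sizes, strides = rep(1, length(kernel_sizes))) {
  rf   <- 1
  jump <- 1                      # product of strides of the layers seen so far
  for (l in seq_along(kernel_sizes)) {
    rf   <- rf + (kernel_sizes[l] - 1) * jump
    jump <- jump * strides[l]
  }
  rf
}

receptive_field(rep(3, 10))                                  # 21: ten 3x3, stride-1 layers
receptive_field(rep(3, 10), strides = c(2, 2, rep(1, 8)))    # 71: two stride-2 layers widen it fast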
1,141
Why are neural networks becoming deeper, but not wider?
Adding more features helps, but the benefit quickly becomes marginal after many features have been added. That's one reason why tools like PCA work: a few components capture most of the variance in the features. Hence, adding more features beyond some point is almost useless. On the other hand, finding the right functional form for the features is always a good idea. However, if you don't have a good theory, it's hard to come up with the correct functional form, of course. So adding layers is helpful as a form of brute-force approach. Consider a simple case: the air drag on a car. Say we didn't know the equation: $$f\sim C\rho A v^2/2$$ where $A$ is the cross-sectional area of the car, $\rho$ the air density, and $v$ the velocity of the car. We could figure that the car's measurements are important and add them as features, and the velocity of the car will go in too. So we keep adding features, and maybe add air pressure, temperature, length, width of the car, number of seats, etc. We'll end up with a model like $$f\sim \sum_i\beta_i x_i$$ You can see how these features are not going to assemble themselves into the "true" equation unless we add all the interactions and polynomials. However, if the true equation weren't conveniently polynomial (say it had exponentials or other weird transcendental functions), then we'd have no chance of emulating it by expanding the feature set or widening the network. However, making the network deeper would easily get you to the equation above with just two layers. More complicated functions would need more layers; that's why increasing the number of layers could be the way to go in many problems.
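A hedged base-R sketch of that point: data simulated from the drag law (all values below are made up) cannot be captured exactly by a linear model in the raw features, while a model that is allowed the right functional form (here, a log transform standing in for what extra layers could learn) recovers the multiplicative structure and the exponents.

# Simulate f = 0.5 * Cd * rho * A * v^2 with made-up ranges and a fixed Cd
set.seed(42)
n   <- 500
A   <- runif(n, 1.5, 3.0)    # frontal area, m^2
rho <- runif(n, 1.0, 1.3)    # air density, kg/m^3
v   <- runif(n, 5, 40)       # speed, m/s
Cd  <- 0.3
f   <- 0.5 * Cd * rho * A * v^2 * exp(rnorm(n, sd = 0.02))   # small multiplicative noise

# Linear in the raw features: fits only approximately, cannot express the product/power law
summary(lm(f ~ A + rho + v))$r.squared

# Linear after a log transform: recovers exponents close to 1, 1 and 2
coef(lm(log(f) ~ log(A) + log(rho) + log(v)))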
1,142
Why are neural networks becoming deeper, but not wider?
For a densely connected neural net of depth $d$ and width $w$, the number of parameters (and hence the RAM required to run or train the network) is $O(dw^2)$. Thus, if you only have a limited number of parameters, it often makes sense to prefer a large increase in depth over a small increase in width. Why might you be trying to limit the number of parameters? A number of reasons: You are trying to avoid overfitting. (Although limiting the number of parameters is a very blunt instrument for achieving this.) Your research is more impressive if you can outperform someone else's model using the same number of parameters. Training your model is much easier if the model (plus moment parameters if you're using Adam) can fit inside the memory of a single GPU. In real-life applications, RAM is often expensive when serving models. This is especially true for running models on e.g. a cell phone, but can sometimes apply even to serving models from the cloud. Where does the $O(dw^2)$ come from? For two neighboring layers of widths $w_1, w_2$, the connections between them are described by a $w_1 \times w_2$ matrix. So if you have $(d-2)$ layers of width $w$ (plus an input and an output layer), the number of parameters is $$(d-2) w^2 + w \cdot (\text{input layer width}) + w \cdot (\text{output layer width}) = O(dw^2)\text{.}$$ Instead of restricting the width, an alternate strategy sometimes used is to use sparse connections. For instance, when initializing the network topology, you can admit each connection with probability $1/\sqrt{w}$, so that the expected total number of parameters drops to $O(dw^{3/2})$ rather than $O(dw^2)$. But if you do this, it's not clear that increasing the width will necessarily increase the model's capacity to learn.
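A minimal base-R sketch of the parameter-count arithmetic (biases are ignored, since they only add a lower-order O(dw) term; the widths used below are arbitrary):

# Count the weights of a fully connected net with the given layer widths
n_params <- function(widths) {
  sum(head(widths, -1) * tail(widths, -1))   # sum of w_l * w_{l+1} over adjacent layers
}

input <- 100; output <- 10

# A deep, narrow net and a shallow, wide net of the same order of magnitude
n_params(c(input, rep(64, 20), output))    # 20 hidden layers of width 64
n_params(c(input, rep(512, 1), output))    # 1 hidden layer of width 512

# Doubling the width multiplies the count by roughly 4, doubling the depth by roughly 2
n_params(c(input, rep(128, 10), output)) / n_params(c(input, rep(64, 10), output))
n_params(c(input, rep(64, 20), output))  / n_params(c(input, rep(64, 10), output))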
1,143
Why are neural networks becoming deeper, but not wider?
I think you can find a detailed answer to this question in the paper named 'Impact of fully connected layers on performance of convolutional neural networks for image classification', link - https://www.sciencedirect.com/science/article/pii/S0925231219313803. It comes to the following conclusions: In order to obtain better performance, shallow CNNs require more nodes in the FC layers. On the other hand, deeper CNNs need fewer neurons in the FC layers irrespective of the type of dataset. Shallow CNNs require a larger number of neurons in the FC layers, as well as more FC layers, for wider datasets compared to deeper datasets, and vice versa. Deeper CNNs perform better than shallow models on deeper datasets; in contrast, shallow architectures perform better than deeper architectures on wider datasets. These observations can help the deep learning community when making a decision about the choice of deep or shallow CNN architectures.
1,144
Why are neural networks becoming deeper, but not wider?
Currently, on GPUs we typically use 32-bit floats, and when combining on the order of 512 features the accumulated rounding error already becomes noticeable; going much wider is therefore limited by the numerics and precision of 32-bit floating point. Another thing is that we should probably actually go wider if we care about accuracy: https://www.sciencedirect.com/science/article/pii/S2666827021000633
1,145
Removal of statistically significant intercept term increases $R^2$ in linear model
First of all, we should understand what the R software is doing when no intercept is included in the model. Recall that the usual computation of $R^2$ when an intercept is present is $$ R^2 = \frac{\sum_i (\hat y_i - \bar y)^2}{\sum_i (y_i - \bar y)^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i (y_i - \bar y)^2} \>. $$ The first equality only occurs because of the inclusion of the intercept in the model even though this is probably the more popular of the two ways of writing it. The second equality actually provides the more general interpretation! This point is also addressed in this related question. But, what happens if there is no intercept in the model? Well, in that case, R (silently!) uses the modified form $$ R_0^2 = \frac{\sum_i \hat y_i^2}{\sum_i y_i^2} = 1 - \frac{\sum_i (y_i - \hat y_i)^2}{\sum_i y_i^2} \>. $$ It helps to recall what $R^2$ is trying to measure. In the former case, it is comparing your current model to the reference model that only includes an intercept (i.e., constant term). In the second case, there is no intercept, so it makes little sense to compare it to such a model. So, instead, $R_0^2$ is computed, which implicitly uses a reference model corresponding to noise only. In what follows below, I focus on the second expression for both $R^2$ and $R_0^2$ since that expression generalizes to other contexts and it's generally more natural to think about things in terms of residuals. But, how are they different, and when? Let's take a brief digression into some linear algebra and see if we can figure out what is going on. First of all, let's call the fitted values from the model with intercept $\newcommand{\yhat}{\hat {\mathbf y}}\newcommand{\ytilde}{\tilde {\mathbf y}}\yhat$ and the fitted values from the model without intercept $\ytilde$. We can rewrite the expressions for $R^2$ and $R_0^2$ as $$\newcommand{\y}{\mathbf y}\newcommand{\one}{\mathbf 1} R^2 = 1 - \frac{\|\y - \yhat\|_2^2}{\|\y - \bar y \one\|_2^2} \>, $$ and $$ R_0^2 = 1 - \frac{\|\y - \ytilde\|_2^2}{\|\y\|_2^2} \>, $$ respectively. Now, since $\|\y\|_2^2 = \|\y - \bar y \one\|_2^2 + n \bar y^2$, then $R_0^2 > R^2$ if and only if $$ \frac{\|\y - \ytilde\|_2^2}{\|\y - \yhat\|_2^2} < 1 + \frac{\bar y^2}{\frac{1}{n}\|\y - \bar y \one\|_2^2} \> . $$ The left-hand side is greater than one since the model corresponding to $\ytilde$ is nested within that of $\yhat$. The second term on the right-hand side is the squared-mean of the responses divided by the mean square error of an intercept-only model. So, the larger the mean of the response relative to the other variation, the more "slack" we have and a greater chance of $R_0^2$ dominating $R^2$. Notice that all the model-dependent stuff is on the left side and non-model dependent stuff is on the right. Ok, so how do we make the ratio on the left-hand side small? Recall that $\newcommand{\P}{\mathbf P}\ytilde = \P_0 \y$ and $\yhat = \P_1 \y$ where $\P_0$ and $\P_1$ are projection matrices corresponding to subspaces $S_0$ and $S_1$ such that $S_0 \subset S_1$. So, in order for the ratio to be close to one, we need the subspaces $S_0$ and $S_1$ to be very similar. Now $S_0$ and $S_1$ differ only by whether $\one$ is a basis vector or not, so that means that $S_0$ had better be a subspace that already lies very close to $\one$. In essence, that means our predictor had better have a strong mean offset itself and that this mean offset should dominate the variation of the predictor.

An example: Here we try to generate an example with an intercept explicitly in the model and which behaves close to the case in the question. Below is some simple R code to demonstrate.

set.seed(.Random.seed[1])
n <- 220
a <- 0.5
b <- 0.5
se <- 0.25

# Make sure x has a strong mean offset
x <- rnorm(n)/3 + a
y <- a + b*x + se*rnorm(x)

int.lm   <- lm(y~x)
noint.lm <- lm(y~x+0)   # Intercept be gone!

# For comparison to summary(.) output
rsq.int   <- cor(y,x)^2
rsq.noint <- 1-mean((y-noint.lm$fit)^2) / mean(y^2)

This gives the following output. We begin with the model with intercept.

# Include an intercept!
> summary(int.lm)

Call:
lm(formula = y ~ x)

Residuals:
      Min        1Q    Median        3Q       Max
-0.656010 -0.161556 -0.005112  0.178008  0.621790

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.48521    0.02990   16.23   <2e-16 ***
x            0.54239    0.04929   11.00   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.2467 on 218 degrees of freedom
Multiple R-squared: 0.3571, Adjusted R-squared: 0.3541
F-statistic: 121.1 on 1 and 218 DF, p-value: < 2.2e-16

Then, see what happens when we exclude the intercept.

# No intercept!
> summary(noint.lm)

Call:
lm(formula = y ~ x + 0)

Residuals:
     Min       1Q   Median       3Q      Max
-0.62108 -0.08006  0.16295  0.38258  1.02485

Coefficients:
  Estimate Std. Error t value Pr(>|t|)
x  1.20712    0.04066   29.69   <2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.3658 on 219 degrees of freedom
Multiple R-squared: 0.801, Adjusted R-squared: 0.8001
F-statistic: 881.5 on 1 and 219 DF, p-value: < 2.2e-16

Below is a plot of the data with the model-with-intercept in red and the model-without-intercept in blue.
1,146
Removal of statistically significant intercept term increases $R^2$ in linear model
I would base my decision on an information criterion such as the Akaike or Bayes-Schwarz criterion rather than R^2; even then I would not view these as absolute. If you have a process where the slope is near zero and all of the data is far from the origin, your correct R^2 should be low, as most of the variation in the data will be due to noise. If you try to fit such data to a model without an intercept you will generate a large and wrong slope term, and likely a better-looking R^2 if the intercept-free version is used. The following graph shows what happens in this extreme case. Here the generating process is that x = 100.1, 100.2, ..., 110 and y is just 100 + random noise with mean 0 and standard deviation 1. The points are black circles, the fit without the intercept is the blue line and the fit with the intercept (zeroing out the slope) is the red line: [Sorry it won't let me post the graph; run the R code below to generate it. It shows the origin in the lower left corner, the cluster of points in the upper right corner. The bad no-intercept fit goes from the lower left to the upper right and the correct fit is a line parallel to the x-axis] The correct model for this should have an R^2 of zero: it is a constant plus random noise. R will give you an R^2 of .99 for the fit with no intercept. This won't matter much if you only use the model for prediction with x-values within the range of the training data, but will fail miserably if x goes outside of the narrow range of the training set or you are trying to gain true insights beyond just prediction. The AIC correctly shows that the model with the intercept is preferred. The R code for this is:

Nsamp=100
x=seq(1,100,1)*.1+100        # x = 100.1, 100.2, ..., 110
y=rnorm(n=length(x))+100     # random noise + 100 (best model is constant)
model_withint=lm(y~x)
print(summary(model_withint))
flush.console()
model_noint=lm(y~x+0)
print(summary(model_noint))
print (AIC(model_withint))
print(sprintf ('without intercept AIC=%f',AIC(model_noint)))
print(sprintf ('with intercept AIC=%f',AIC(model_withint)))
print(sprintf ('constant model AIC=%f',AIC(lm(y~1))))
plot(x,y,ylim=c(0,105),xlim=c(0,105))
lines( c(0,105),c(0,105)*model_noint$coefficients['x'],col=c('blue'))
lines( c(0,105),c(1,1)*(lm(y~1)$coefficients['(Intercept)']),col=c('red'))

The AIC output is

"without intercept AIC=513.549626"
"with intercept AIC=288.112573"
"constant model AIC=289.411682"

Note that the AIC still gets the wrong model in this case, as the true model is the constant model; but other random numbers will yield data for which the AIC is lowest for the constant model. Note that if you discard the slope, you should refit the model without it, not try to use the intercept from the model and ignore the slope.
1,147
Removal of statistically significant intercept term increases $R^2$ in linear model
The way in which the R software computes the R squared for the case of no intercept (see the answer by cardinal) produces inconsistent results. Suppose that you have a single categorical explanatory variable that has only two categories (cat and dog). Then you can write two completely equivalent regression models: a regression on a constant and a dummy variable that encodes the cat category (1 if cat, 0 otherwise); a regression without a constant and two dummy variables, one for cat and one for dog. Since these models are completely equivalent, they should have the same R squared, but the R software will tell you that they have two different R squareds. Bottom line: in general, the R squared calculated by R for the case of no intercept is hardly useful to compare two models. A more useful (and common) definition of R squared (which always makes sense, and coincides with that computed by R for regressions including a constant) is $$R^2 = 1 - \frac{MSE_{model}}{MSE_{mean}}$$ where $MSE_{model}$ is the mean squared prediction error of your model, and $MSE_{mean}$ is the mean squared prediction error obtained by predicting the dependent variable with its sample mean.
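A minimal base-R sketch of the inconsistency described above (the category labels, effect sizes and seed are made up): the two parameterizations produce identical fitted values, yet R reports very different R-squared values because the no-intercept one is computed about zero rather than about the mean.

set.seed(123)
animal <- factor(rep(c("cat", "dog"), each = 25))
y <- ifelse(animal == "cat", 2, 5) + rnorm(50)

m_int   <- lm(y ~ animal)       # intercept plus one dummy (R's default coding)
m_noint <- lm(y ~ animal + 0)   # no intercept, one dummy per category

all.equal(fitted(m_int), fitted(m_noint))   # TRUE: the two models are equivalent
summary(m_int)$r.squared                    # ordinary R squared
summary(m_noint)$r.squared                  # much larger, despite identical fits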
1,148
Difference between neural net weight decay and learning rate
The learning rate is a parameter that determines how much an updating step influences the current value of the weights, while weight decay is an additional term in the weight update rule that causes the weights to exponentially decay to zero if no other update is scheduled. So let's say that we have a cost or error function $E(\mathbf{w})$ that we want to minimize. Gradient descent tells us to modify the weights $\mathbf{w}$ in the direction of steepest descent in $E$: \begin{equation} w_i \leftarrow w_i-\eta\frac{\partial E}{\partial w_i}, \end{equation} where $\eta$ is the learning rate; if it's large you will have a correspondingly large modification of the weights $w_i$ (in general it shouldn't be too large, otherwise you'll overshoot the local minimum in your cost function). In order to effectively limit the number of free parameters in your model and so avoid over-fitting, it is possible to regularize the cost function. An easy way to do that is by introducing a zero-mean Gaussian prior over the weights, which is equivalent to changing the cost function to $\widetilde{E}(\mathbf{w})=E(\mathbf{w})+\frac{\lambda}{2}\mathbf{w}^2$. In practice this penalizes large weights and effectively limits the freedom in your model. The regularization parameter $\lambda$ determines how you trade off the original cost $E$ against the large-weights penalty. Applying gradient descent to this new cost function we obtain: \begin{equation} w_i \leftarrow w_i-\eta\frac{\partial E}{\partial w_i}-\eta\lambda w_i. \end{equation} The new term $-\eta\lambda w_i$ coming from the regularization causes the weight to decay in proportion to its size.
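To make the two update rules concrete, here is a minimal base-R sketch of gradient descent on a least-squares cost with and without the decay term. The learning rate, the decay strength and the toy data are all arbitrary choices for illustration, not recommended values.

# Toy least-squares cost E(w) = 0.5 * ||y - X w||^2, with gradient -t(X) %*% (y - X %*% w)
set.seed(7)
X <- cbind(1, rnorm(100))
y <- X %*% c(0.5, 2) + rnorm(100, sd = 0.3)

eta <- 0.005                    # learning rate (arbitrary)

gd <- function(lambda, iters = 2000) {
  w <- c(0, 0)
  for (i in 1:iters) {
    grad <- -t(X) %*% (y - X %*% w)          # dE/dw
    w <- w - eta * grad - eta * lambda * w   # the decay term shrinks each weight toward 0
  }
  drop(w)
}

gd(lambda = 0)     # close to the ordinary least-squares solution coef(lm(y ~ X[, 2]))
gd(lambda = 0.1)   # slightly shrunk toward zero by the weight decay term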
1,149
Difference between neural net weight decay and learning rate
In addition to @mrig's answer (+1), for many practical applications of neural networks it is better to use a more advanced optimisation algorithm, such as Levenberg-Marquardt (small-medium sized networks) or scaled conjugate gradient descent (medium-large networks), as these will be much faster, and there is no need to set the learning rate (both algorithms essentially adapt the learning rate using curvature as well as gradient information). Any decent neural network package or library will have an implementation of one of these methods; any package that doesn't is probably obsolete. I use the NETLAB library for MATLAB, which is a great piece of kit.
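The answer refers to MATLAB/NETLAB, but the same point can be sketched in R (my own illustration, not the Levenberg-Marquardt or scaled conjugate gradient routines mentioned above): nnet() trains via BFGS, a quasi-Newton method that uses curvature information, so no learning rate needs to be hand-tuned.

# Illustration only: a curvature-based optimiser (BFGS inside nnet) means there is
# no learning rate to choose; the size and decay values here are arbitrary.
library(nnet)
data(iris)
set.seed(1)
fit <- nnet(Species ~ ., data = iris, size = 3, decay = 1e-3, maxit = 200, trace = FALSE)
mean(predict(fit, iris, type = "class") == iris$Species)  # training accuracy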
1,150
Difference between neural net weight decay and learning rate
So the answer given by @mrig is intuitively alright, but strictly speaking what he has explained is L2 regularization. This was known as weight decay back in the day, but the more recent literature distinguishes the two. The two concepts differ subtly, and learning this difference gives a better understanding of the weight decay parameter. It's easier once you can identify which is which. Here I'll discuss the two regularization techniques known as L2 regularization and decoupled weight decay. In L2 regularization you directly change the cost function. This can be written as follows, using the same terminology as in @mrig's answer: \begin{equation} \widetilde{E}(\mathbf{w})=E(\mathbf{w})+\frac{\lambda}{2}\|\mathbf{w}\|^2 \end{equation} Once you take the gradient (as in the SGD optimizer), this simplifies to the following update: \begin{equation} w_i \leftarrow w_i-\eta\frac{\partial E}{\partial w_i}-\eta\lambda w_i \end{equation} \begin{equation} w_i \leftarrow (1-\eta\lambda) w_i-\eta\frac{\partial E}{\partial w_i} \end{equation} In decoupled weight decay, however, you do not adjust the cost function at all. For the same SGD optimizer the weight decay update can be written as: \begin{equation} w_i \leftarrow (1-\lambda^\prime) w_i-\eta\frac{\partial E}{\partial w_i} \end{equation} So there you have it. The difference between the two techniques under SGD is subtle: when $\lambda = \frac{\lambda^\prime}{\eta}$ the two updates become identical. By contrast, it makes a big difference in adaptive optimizers such as Adam; this is explained extensively in the decoupled weight decay (AdamW) literature. About the learning rate, I think the other answers have given a nice explanation, and further explanation is unnecessary at this point.
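Below is a quick numerical sketch (mine, with arbitrary toy numbers) of the equivalence claimed above: under plain SGD the two update rules coincide exactly when $\lambda = \lambda^\prime/\eta$.

# Numerical check: L2 regularization with lambda and decoupled weight decay with
# lambda' = eta * lambda give the same SGD trajectory on a toy cost E(w) = (w - 3)^2.
eta <- 0.1
lambda <- 0.05                 # L2 regularization strength
lambda_prime <- eta * lambda   # equivalent decoupled decay factor
grad_E <- function(w) 2 * (w - 3)   # gradient of the toy cost
w_l2 <- 10; w_dec <- 10
for (step in 1:50) {
  w_l2  <- w_l2  - eta * grad_E(w_l2)  - eta * lambda * w_l2     # L2 regularization
  w_dec <- (1 - lambda_prime) * w_dec - eta * grad_E(w_dec)      # decoupled weight decay
}
all.equal(w_l2, w_dec)   # TRUE under SGD; Adam-style per-weight scaling breaks this equivalence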
1,151
Difference between neural net weight decay and learning rate
In simple terms: learning_rate: controls how quickly or slowly a neural network model learns a problem. See: https://machinelearningmastery.com/learning-rate-for-deep-learning-neural-networks/ weight_decay: a regularisation technique used to avoid over-fitting. See: https://metacademy.org/graphs/concepts/weight_decay_neural_networks
1,152
Obtaining knowledge from a random forest
Random Forests are hardly a black box. They are based on decision trees, which are very easy to interpret: #Setup a binary classification problem require(randomForest) data(iris) set.seed(1) dat <- iris dat$Species <- factor(ifelse(dat$Species=='virginica','virginica','other')) trainrows <- runif(nrow(dat)) > 0.3 train <- dat[trainrows,] test <- dat[!trainrows,] #Build a decision tree require(rpart) model.rpart <- rpart(Species~., train) This results in a simple decision tree: > model.rpart n= 111 node), split, n, loss, yval, (yprob) * denotes terminal node 1) root 111 35 other (0.68468468 0.31531532) 2) Petal.Length< 4.95 77 3 other (0.96103896 0.03896104) * 3) Petal.Length>=4.95 34 2 virginica (0.05882353 0.94117647) * If Petal.Length < 4.95, this tree classifies the observation as "other." If it's 4.95 or greater, it classifies the observation as "virginica." A random forest is simply a collection of many such trees, where each one is trained on a random subset of the data. Each tree then "votes" on the final classification of each observation. model.rf <- randomForest(Species~., train, ntree=25, proximity=TRUE, importance=TRUE, nodesize=5) > getTree(model.rf, k=1, labelVar=TRUE) left daughter right daughter split var split point status prediction 1 2 3 Petal.Width 1.70 1 <NA> 2 4 5 Petal.Length 4.95 1 <NA> 3 6 7 Petal.Length 4.95 1 <NA> 4 0 0 <NA> 0.00 -1 other 5 0 0 <NA> 0.00 -1 virginica 6 0 0 <NA> 0.00 -1 other 7 0 0 <NA> 0.00 -1 virginica You can even pull out individual trees from the rf, and look at their structure. The format is slightly different than for rpart models, but you could inspect each tree if you wanted and see how it's modeling the data. Furthermore, no model is truly a black box, because you can examine predicted responses vs actual responses for each variable in the dataset. This is a good idea regardless of what sort of model you are building: library(ggplot2) pSpecies <- predict(model.rf,test,'vote')[,2] plotData <- lapply(names(test[,1:4]), function(x){ out <- data.frame( var = x, type = c(rep('Actual',nrow(test)),rep('Predicted',nrow(test))), value = c(test[,x],test[,x]), species = c(as.numeric(test$Species)-1,pSpecies) ) out$value <- out$value-min(out$value) #Normalize to [0,1] out$value <- out$value/max(out$value) out }) plotData <- do.call(rbind,plotData) qplot(value, species, data=plotData, facets = type ~ var, geom='smooth', span = 0.5) I've normalized the variables (sepal and petal length and width) to a 0-1 range. The response is also 0-1, where 0 is other and 1 is virginica. As you can see, the random forest is a good model, even on the test set. Additionally, a random forest will compute various measures of variable importance, which can be very informative: > importance(model.rf, type=1) MeanDecreaseAccuracy Sepal.Length 0.28567162 Sepal.Width -0.08584199 Petal.Length 0.64705819 Petal.Width 0.58176828 This table represents how much randomly permuting each variable reduces the accuracy of the model. Finally, there are many other plots you can make from a random forest model, to view what's going on in the black box: plot(model.rf) plot(margin(model.rf)) MDSplot(model.rf, iris$Species, k=5) plot(outlier(model.rf), type="h", col=c("red", "green", "blue")[as.numeric(dat$Species)]) You can view the help files for each of these functions to get a better idea of what they display.
1,153
Obtaining knowledge from a random forest
Some time ago I had to justify a RF model-fit to some chemists in my company. I spent quite some time trying different visualization techniques. During the process, I accidentally also came up with some new techniques, which I put into an R package (forestFloor) specifically for random forest visualizations. The classical approach is partial dependence plots, supported by: Rminer (data-based sensitivity analysis is reinvented partial dependence), or partialPlot in the randomForest package. I find the partial dependence package iceBOX an elegant way to discover interactions. I have not used the edarf package, but it seems to have some fine visualizations dedicated to RF. The ggRandomForest package also contains a large set of useful visualizations. Currently forestFloor supports randomForest objects (support for other RF implementations is on its way). Also, feature contributions can be computed for gradient boosted trees, as these trees after training are not much different from random forest trees. So forestFloor could support XGBoost in the future. Partial dependence plots are completely model invariant. All of these packages have in common that they visualize the geometrical mapping structure of a model from feature space to target space. A sine curve y = sin(x) would be a mapping from x to y and can be plotted in 2D. To plot a RF mapping directly would often require too many dimensions. Instead the overall mapping structure can be projected, sliced or decomposed, such that the entire mapping structure is boiled down into a sequence of 2D marginal plots. If your RF model has only captured main effects and no interactions between variables, classic visualization methods will do just fine. Then you can simplify your model structure like this: $y = F(X) \approx f_1(x_1) + f_2(x_2) + ... + f_d(x_d)$. Then each partial function for each variable can be visualized just like the sine curve. If your RF model has captured sizable interactions, then it is more problematic. 3D slices of the structure can visualize interactions between two features and the output. The problem is to know which combination of features to visualize (iceBOX does address this issue). Also, it is not easy to tell whether other latent interactions remain unaccounted for. In this paper, I used a very early version of forestFloor to explain what actual biochemical relationship a very small RF model had captured. And in this paper we thoroughly describe visualizations of feature contributions, Forest Floor Visualizations of Random Forests. I have pasted the simulated example from the forestFloor package, where I show how to uncover a simulated hidden function $y = x_1^2 + \sin(x_2\pi) + 2 x_3 x_4 + \text{noise}$ #1 - Regression example: set.seed(1234) library(forestFloor) library(randomForest) #simulate data y = x1^2+sin(x2*pi)+x3*x4 + noise obs = 5000 #how many observations/samples vars = 6 #how many variables/features #create 6 normal distr. uncorr. variables X = data.frame(replicate(vars,rnorm(obs))) #create target by hidden function Y = with(X, X1^2 + sin(X2*pi) + 2 * X3 * X4 + 0.5 * rnorm(obs)) #grow a forest rfo = randomForest( X, #features, data.frame or matrix. Recommended to name columns. 
Y, #targets, vector of integers or floats keep.inbag = TRUE, # mandatory, importance = TRUE, # recommended, else ordering by giniImpurity (unstable) sampsize = 1500 , # optional, reduce tree sizes to compute faster ntree = if(interactive()) 500 else 50 #speedup CRAN testing ) #compute forestFloor object, often only 5-10% time of growing forest ff = forestFloor( rf.fit = rfo, # mandatory X = X, # mandatory calc_np = FALSE, # TRUE or FALSE both works, makes no difference binary_reg = FALSE # takes no effect here when rfo$type="regression" ) #plot partial functions of most important variables first plot(ff, # forestFloor object plot_seq = 1:6, # optional sequence of features to plot orderByImportance=TRUE # if TRUE index sequence by importance, else by X column ) #Non-interacting features are well displayed, whereas X3 and X4 are not #by applying a color gradient, interactions reveal themselves #also a k-nearest neighbor fit is applied to evaluate goodness-of-fit Col=fcol(ff,3,orderByImportance=FALSE) #create color gradient see help(fcol) plot(ff,col=Col,plot_GOF=TRUE) #feature contributions of X3 and X4 are well explained in the context of X3 and X4 # as GOF R^2>.8 show3d(ff,3:4,col=Col,plot_GOF=TRUE,orderByImportance=FALSE) Lastly, the code for partial dependence plots, coded by A. Liaw and described by J. Friedman, which does fine for main effects: par(mfrow=c(2,3)) for(i in 1:6) partialPlot(rfo,X,x.var=names(X)[i])
1,154
Obtaining knowledge from a random forest
To supplement these fine responses, I would mention the use of gradient boosted trees (e.g. the gbm package in R). In R, I prefer this to random forests because missing values are allowed, as compared to randomForest where imputation is required. Variable importance and partial plots are available (as in randomForest) to aid in feature selection and nonlinear transformation exploration in your logit model. Further, variable interaction is addressed with Friedman’s H-statistic (interact.gbm), with the reference given as J.H. Friedman and B.E. Popescu (2005), “Predictive Learning via Rule Ensembles”, Section 8.1. A commercial version called TreeNet is available from Salford Systems, and their video presentation speaks to their take on variable interaction estimation.
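As a hedged sketch (not from the answer) of the gbm workflow it describes: relative influence, a partial dependence plot, and Friedman's H-statistic via interact.gbm(). The data and tuning values are arbitrary; check ?gbm and ?interact.gbm for details.

# Toy binary problem in the spirit of the other answers; tuning values are made up.
library(gbm)
set.seed(1)
dat <- iris
dat$virginica <- as.numeric(dat$Species == "virginica")
dat$Species <- NULL
fit <- gbm(virginica ~ ., data = dat, distribution = "bernoulli",
           n.trees = 500, interaction.depth = 3, shrinkage = 0.05)
summary(fit, n.trees = 500, plotit = FALSE)      # relative influence of each variable
plot(fit, i.var = "Petal.Length", n.trees = 500) # partial dependence on one predictor
interact.gbm(fit, data = dat, i.var = c("Petal.Length", "Petal.Width"), n.trees = 500)  # H-statistic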
1,155
Obtaining knowledge from a random forest
Late answer, but I came across a recent R package forestFloor (2015) that helps you do this "unblackboxing" task in an automated fashion. It looks very promising! library(forestFloor) library(randomForest) #simulate data obs=1000 vars = 18 X = data.frame(replicate(vars,rnorm(obs))) Y = with(X, X1^2 + sin(X2*pi) + 2 * X3 * X4 + 1 * rnorm(obs)) #grow a forest, remember to include inbag rfo=randomForest(X,Y,keep.inbag = TRUE,sampsize=250,ntree=50) #compute topology ff = forestFloor(rfo,X) #ggPlotForestFloor(ff,1:9) plot(ff,1:9,col=fcol(ff)) This produces a grid of feature-contribution plots for the first nine variables. It also provides three-dimensional visualization if you are looking for interactions.
1,156
Obtaining knowledge from a random forest
As mentioned by Zach, one way of understanding a model is to plot the response as the predictors vary. You can do this easily for "any" model with the plotmo R package. For example library(randomForest) data <- iris data$Species <- factor(ifelse(data$Species=='virginica','virginica','other')) mod <- randomForest(Species~Sepal.Length+Sepal.Width, data=data) library(plotmo) plotmo(mod, type="prob") which gives a grid of plots of the predicted probability against each predictor (and against pairs of predictors). Each plot changes one variable while holding the others at their median values. For interaction plots, it changes two variables. (Note added Nov 2016: plotmo now also supports partial dependence plots.) The example above uses only two variables; more complicated models can be visualized in a piecemeal fashion by looking at one or two variables at a time. Since the "other" variables are held at their median values, this shows only a slice of the data, but can still be useful. Some examples are in the vignette for the plotmo package. Other examples are in Chapter 10 of Plotting rpart trees with the rpart.plot package.
1,157
Obtaining knowledge from a random forest
Late in the game, but there are some new developments on this front, for example LIME and SHAP. Also a package worth checking is DALEX (in particular if using R, but in any case it contains nice cheatsheets etc.), though it doesn't seem to cover interactions at the moment. These are all model-agnostic, so they will work for random forests, GBMs, neural networks, etc.
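A rough sketch (mine, not from the answer) of the DALEX workflow mentioned above; the function names follow the DALEX documentation as I recall it and may differ between package versions, so treat this as an assumption to be checked against ?explain, ?model_parts and ?model_profile.

# Wrap a fitted model in an explainer, then ask for model-agnostic summaries.
library(DALEX)
library(randomForest)
set.seed(1)
dat <- iris
dat$virginica <- as.numeric(dat$Species == "virginica")
dat$Species <- NULL
rf <- randomForest(factor(virginica) ~ ., data = dat)
expl <- explain(rf, data = dat[, 1:4], y = dat$virginica, label = "random forest")
plot(model_parts(expl))     # permutation-based variable importance
plot(model_profile(expl))   # partial-dependence-style profiles for each feature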
1,158
Obtaining knowledge from a random forest
I'm very interested in these types of questions myself. I do think there is a lot of information we can get out of a random forest. About interactions, it seems like Breiman and Cutler have already tried to look at it, especially for classification RFs. To my knowledge, this has not been implemented in the randomForest R package, maybe because it might not be so simple and because the meaning of "variable interactions" is very dependent on your problem. About the nonlinearity, I'm not sure what you are looking for; regression forests are used for nonlinear multiple regression problems without any priors on what type of nonlinear function to use.
1,159
Obtaining knowledge from a random forest
A slight modification of random forests that provides more information about the data is the recently developed causal forest methodology. See the GRF R-package and the motivating paper here. The idea is to use the random forest baseline methods to find heterogeneity in causal effects. An earlier paper (here) gives a detailed approach to a simple causal forest, and page 9 of that paper gives a step-by-step procedure for growing a causal tree, which can then be expanded to a forest in the usual ways (see Equations 4 and 5 of that paper for the splitting criterion it builds on).
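A minimal, hedged sketch (not from the answer) of fitting a causal forest with the grf package; the simulated treatment effect and sample size below are made up for illustration.

# Simulate a randomized experiment with a heterogeneous treatment effect, then fit grf.
library(grf)
set.seed(1)
n <- 2000; p <- 5
X <- matrix(rnorm(n * p), n, p)
W <- rbinom(n, 1, 0.5)                 # randomized binary treatment
tau <- 1 + 2 * (X[, 1] > 0)            # treatment effect depends on X1
Y <- X[, 2] + tau * W + rnorm(n)
cf <- causal_forest(X, Y, W)           # grf's causal forest
head(predict(cf)$predictions)          # estimated individual treatment effects
average_treatment_effect(cf)           # doubly robust estimate of the average effect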
1,160
Obtaining knowledge from a random forest
Late answer related to my question here (Can we make Random Forest 100% interpretable by fixing the seed?): Let $z_1$ be the seed used to create the bootstrapped training sets, and $z_2$ the seed used to select the feature subsets (for simplicity, I only list 2 kinds of seeds here). From $z_1$, $m$ bootstrapped training sets are created: $D_1(z_1)$, $D_2(z_1)$, $D_3(z_1)$, ..., $D_m(z_1)$. From those training sets, $m$ corresponding decision trees are created and tuned via cross-validation: $T_1(z_1,z_2)$, $T_2(z_1,z_2)$, $T_3(z_1,z_2)$, ..., $T_m(z_1,z_2)$. Let's denote the prediction of the $j^\text{th}$ $(j=1,2,...,m)$ tree for an individual $x_i$ (from the training or testing set, whichever) as $\hat{f}^j(x_i)_{(i \le n, j \le m)}$. Hence the final prediction of the ensemble of trees is: $$\hat{F}(x_i) = \frac{1}{m}\sum\limits_{j=1}^m \hat{f}^j(x_i)$$ Once the model is validated and is stable (meaning $\hat{F}(x_i)$ doesn't depend strongly on the pair $(z_1,z_2)$), I start to create every possible combination of my features, which gives me a very big set ($x'_i$). Applying my forest to each $x'_i$ gives me the corresponding predictions: $$x'_1 \rightarrow \hat{F}(x'_1) \text{ - which is fixed thanks to } (z_1, z_2)$$ $$x'_2 \rightarrow \hat{F}(x'_2) \text{ - which is fixed thanks to } (z_1, z_2)$$ $$x'_3 \rightarrow \hat{F}(x'_3) \text{ - which is fixed thanks to } (z_1, z_2)$$ $$x'_4 \rightarrow \hat{F}(x'_4) \text{ - which is fixed thanks to } (z_1, z_2)$$ $$....$$ The latter can easily be represented in the form of a single (huge) tree. For example: $x'_1$: (Age = 18, sex = M, ...), $x'_2$ = (Age = 18, sex = F, ...), ... could be regrouped to create a leaf. This also works for every ensemble method based on aggregation of trees.
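A small sketch (mine, with made-up features) of the procedure described above: fix the seed, fit the forest, enumerate a grid of feature combinations with expand.grid(), and store the forest's now-fixed predictions as a lookup table.

# Fixing the seed plays the role of (z1, z2): the forest, and hence the mapping
# x' -> F_hat(x'), becomes a fixed, enumerable object.
library(randomForest)
set.seed(42)
train <- data.frame(age = sample(18:80, 500, replace = TRUE),
                    sex = factor(sample(c("M", "F"), 500, replace = TRUE)))
train$y <- 0.05 * train$age + (train$sex == "M") + rnorm(500, sd = 0.3)
rf <- randomForest(y ~ age + sex, data = train)
grid <- expand.grid(age = 18:80, sex = factor(c("M", "F")))   # every feature combination
grid$prediction <- predict(rf, grid)                          # fixed lookup table
head(grid)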
1,161
What is the difference between linear regression and logistic regression?
Linear regression uses the general linear equation $Y=b_0+∑(b_i X_i)+\epsilon$ where $Y$ is a continuous dependent variable and independent variables $X_i$ are usually continuous (but can also be binary, e.g. when the linear model is used in a t-test) or other discrete domains. $\epsilon$ is a term for the variance that is not explained by the model and is usually just called "error". Individual dependent values denoted by $Y_j$ can be solved by modifying the equation a little: $Y_j=b_0 + \sum{(b_i X_{ij})+\epsilon_j}$ Logistic regression is another generalized linear model (GLM) procedure using the same basic formula, but instead of the continuous $Y$, it is regressing for the probability of a categorical outcome. In simplest form, this means that we're considering just one outcome variable and two states of that variable- either 0 or 1. The equation for the probability of $Y=1$ looks like this: $$ P(Y=1) = {1 \over 1+e^{-(b_0+\sum{(b_iX_i)})}} $$ Your independent variables $X_i$ can be continuous or binary. The regression coefficients $b_i$ can be exponentiated to give you the change in odds of $Y$ per change in $X_i$, i.e., $Odds={P(Y=1) \over P(Y=0)}={P(Y=1) \over 1-P(Y=1)}$ and ${\Delta Odds}= e^{b_i}$. $\Delta Odds$ is called the odds ratio, $Odds(X_i+1)\over Odds(X_i)$. In English, you can say that the odds of $Y=1$ increase by a factor of $e^{b_i}$ per unit change in $X_i$. Example: If you wanted to see how body mass index predicts blood cholesterol (a continuous measure), you'd use linear regression as described at the top of my answer. If you wanted to see how BMI predicts the odds of being a diabetic (a binary diagnosis), you'd use logistic regression.
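A brief R sketch (not part of the answer) of the two models in the spirit of the BMI example above, using made-up data: a continuous outcome for lm() and a binary one for glm(), with the logistic coefficient exponentiated to give an odds ratio.

# Simulated BMI example; all numbers are invented for illustration.
set.seed(1)
bmi <- rnorm(200, mean = 27, sd = 4)
cholesterol <- 150 + 2 * bmi + rnorm(200, sd = 15)        # continuous outcome
diabetic <- rbinom(200, 1, plogis(-10 + 0.35 * bmi))      # binary outcome
lin_fit <- lm(cholesterol ~ bmi)                   # linear regression: E[Y] = b0 + b1*BMI
logit_fit <- glm(diabetic ~ bmi, family = binomial)  # logistic regression on P(Y = 1)
coef(lin_fit)["bmi"]          # change in cholesterol per unit BMI
exp(coef(logit_fit)["bmi"])   # odds ratio: multiplicative change in odds per unit BMI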
1,162
What is the difference between linear regression and logistic regression?
Linear Regression is used to establish a relationship between dependent and independent variables, which is useful in estimating the dependent variable when the independent variables change. For example: using a linear regression, the relationship between rain (R, in mm) and umbrella sales (U) is found to be U = 2R + 5000. This equation says that there is a baseline demand for 5000 umbrellas and that every additional 1mm of rain adds demand for 2 more (so at 1mm of rain the estimated demand is 5002 umbrellas). So, using simple regression, you can estimate the value of your variable. Logistic Regression, on the other hand, is used to estimate the probability of an event, and this event is captured in binary format, i.e. 0 or 1. Example: I want to ascertain whether a customer will buy my product or not. For this, I would run a logistic regression on the (relevant) data and my dependent variable would be a binary variable (1=Yes; 0=No). In terms of graphical representation, linear regression gives a straight line as an output once the values are plotted on the graph, whereas logistic regression gives an S-shaped curve. Reference from Mohit Khurana.
1,163
What is the difference between linear regression and logistic regression?
The differences have been covered by DocBuckets and Pardis, but I want to add one way to compare their behaviour that hasn't been mentioned. Linear regression is usually fit by minimizing the least squares error of the model on the data, so large errors are penalized quadratically. Logistic regression behaves quite differently: with the logistic loss, a prediction that is far on the correct side is penalized by an amount that approaches a constant (essentially zero) rather than growing quadratically. Consider running linear regression on a categorical {0,1} outcome to see why this is a problem. If your model predicts the outcome is 38 when the truth is 1, you've lost nothing for classification purposes. Yet linear regression would try hard to pull that 38 back down, while logistic regression wouldn't (as much).
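A quick numerical illustration (mine) of the point above: squared loss versus log loss when the truth is 1 and the model outputs 38 on the linear-predictor scale.

# Squared error punishes a confidently correct prediction; log loss barely does.
truth <- 1
pred  <- 38
(truth - pred)^2      # squared error: 1369, a huge penalty for a "correct" prediction
-log(plogis(pred))    # log loss of the implied probability plogis(38): about 3e-17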
1,164
What if residuals are normally distributed, but y is not?
It is reasonable for the residuals in a regression problem to be normally distributed, even though the response variable is not. Consider a univariate regression problem where $y \sim \mathcal{N}(\beta x, \sigma^2)$, so that the regression model is appropriate, and further assume that the true value of $\beta$ is $1$. In this case, while the residuals of the true regression model are normal, the distribution of $y$ depends on the distribution of $x$, as the conditional mean of $y$ is a function of $x$. If the dataset has a lot of values of $x$ that are close to zero and progressively fewer the higher the value of $x$, then the distribution of $y$ will be skewed to the right. If values of $x$ are distributed symmetrically, then $y$ will be distributed symmetrically, and so forth. For a regression problem, we only assume that the response is normal conditioned on the value of $x$.
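A tiny R sketch (mine, with arbitrary numbers) of the point above: a skewed $x$ makes the marginal distribution of $y$ skewed, even though the residuals are perfectly normal.

# Many x values near zero, fewer large ones, so y inherits the skew; residuals stay normal.
set.seed(1)
x <- rexp(5000)                  # right-skewed predictor
y <- x + rnorm(5000, sd = 0.5)   # true model: beta = 1, normal errors
res <- resid(lm(y ~ x))
par(mfrow = c(1, 2))
hist(y, breaks = 40, main = "y: right-skewed")   # marginal distribution of y
qqnorm(res); qqline(res)                         # residuals: close to normal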
1,165
What if residuals are normally distributed, but y is not?
@DikranMarsupial is exactly right, of course, but it occurred to me that it might be nice to illustrate his point, especially since this concern seems to come up frequently. Specifically, the residuals of a regression model should be normally distributed for the p-values to be correct. However, even if the residuals are normally distributed, that doesn't guarantee that $Y$ will be (not that it matters... ); it depends on the distribution of $X$. Let's take a simple example (which I am making up). Let's say we're testing a drug for isolated systolic hypertension (i.e., the top blood pressure number is too high). Let's further stipulate that systolic bp is normally distributed within our patient population, with a mean of 160 & SD of 3, and that for each mg of the drug that patients take each day, systolic bp goes down by 1mmHg. In other words, the true value of $\beta_0$ is 160, and $\beta_1$ is -1, and the true data generating function is: $$ BP_{sys}=160-1\times\text{daily drug dosage}+\varepsilon \\ \text{where }\varepsilon\sim\mathcal N(0, 9) $$ In our fictitious study, 300 patients are randomly assigned to take 0mg (a placebo), 20mg, or 40mg of this new medicine per day. (Notice that $X$ is not normally distributed.) Then, after an adequate period of time for the drug to take effect, our data might look like this: (I jittered the dosages so that the points wouldn't overlap so much that they were hard to distinguish.) Now, let's check out the distributions of $Y$ (i.e., its marginal / original distribution), and the residuals: The qq-plots show us that $Y$ is not remotely normal, but that the residuals are reasonably normal. The kernel density plots give us a more intuitively accessible picture of the distributions. It is clear that $Y$ is tri-modal, whereas the residuals look much like a normal distribution is supposed to look. But what about the fitted regression model, what is the effect of the non-normal $Y$ & $X$ (but normal residuals)? To answer this question, we need to specify what we might be worried about regarding the typical performance of a regression model in situations like this. The first issue is, are the betas, on average, right? (Of course, they'll bounce around some, but in the long run, are the sampling distributions of the betas centered on the true values?) This is the question of bias. Another issue is, can we trust the p-values we get? That is, when the null hypothesis is true, is $p<.05$ only 5% of the time? To determine these things, we can simulate data from the above data generating process and a parallel case where the drug has no effect, a large number of times. 
Then we can plot the sampling distributions of $\beta_1$ and check to see if they're centered on the true value, and also check how often the relationship was 'significant' in the null case: set.seed(123456789) # this makes the simulation repeatable b0 = 160; b1 = -1; b1_null = 0 # these are the true beta values x = rep(c(0, 20, 40), each=100) # the (non-normal) drug dosages patients get estimated.b1s = vector(length=10000) # these will store the simulation's results estimated.b1ns = vector(length=10000) null.p.values = vector(length=10000) for(i in 1:10000){ residuals = rnorm(300, mean=0, sd=3) y.works = b0 + b1*x + residuals y.null = b0 + b1_null*x + residuals # everything is identical except b1 model.works = lm(y.works~x) model.null = lm(y.null~x) estimated.b1s[i] = coef(model.works)[2] estimated.b1ns[i] = coef(model.null)[2] null.p.values[i] = summary(model.null)$coefficients[2,4] } mean(estimated.b1s) # the sampling distributions are centered on the true values [1] -1.000084 mean(estimated.b1ns) [1] -8.43504e-05 mean(null.p.values<.05) # when the null is true, p<.05 5% of the time [1] 0.0532 These results show that everything works out fine. I won't go through the motions, but if $X$ had been normally distributed, with otherwise the same setup, the original / marginal distribution of $Y$ would have been normally distributed just like the residuals (albeit with a larger SD). I also didn't illustrate the effects of a skewed distribution of $X$ (which was the impetus behind this question), but @DikranMarsupial's point is just as valid in that case, and it could be illustrated similarly.
1,166
What if residuals are normally distributed, but y is not?
When fitting a regression model, we should check the normality of the response at each level of $X$, not of the response pooled across all levels, since the pooled distribution is meaningless for this purpose. If you really need to check the normality of $Y$, then check it separately at each level of $X$.
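For instance, a minimal R sketch of this idea (assuming the three-dosage-group design from the earlier answer on this page; the numbers are purely illustrative): the pooled test rejects normality while the per-level checks do not.

set.seed(1)
x <- rep(c(0, 20, 40), each = 100)        # three assumed levels of X
y <- 160 - 1 * x + rnorm(300, sd = 3)     # errors are normal within each level

shapiro.test(y)$p.value                   # pooled Y: normality is rejected
tapply(y, x, function(g) shapiro.test(g)$p.value)  # per-level checks look fine

# visual check: one QQ-plot per level of X
par(mfrow = c(1, 3))
for (g in unique(x)) { qqnorm(y[x == g], main = paste("x =", g)); qqline(y[x == g]) }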
1,167
Why does the Cauchy distribution have no mean?
You can mechanically check that the expected value does not exist, but this should be physically intuitive, at least if you accept Huygens' principle and the Law of Large Numbers. The conclusion of the Law of Large Numbers fails for a Cauchy distribution, so it can't have a mean. If you average $n$ independent Cauchy random variables, the result does not converge to $0$ as $n\to \infty$ with probability $1$. It stays a Cauchy distribution of the same size. This is important in optics. The Cauchy distribution is the normalized intensity of light on a line from a point source. Huygens' principle says that you can determine the intensity by assuming that the light is re-emitted from any line between the source and the target. So, the intensity of light on a line $2$ meters away can be determined by assuming that the light first hits a line $1$ meter away, and is re-emitted at any forward angle. The intensity of light on a line $n$ meters away can be expressed as the $n$-fold convolution of the distribution of light on a line $1$ meter away. That is, the sum of $n$ independent Cauchy distributions is a Cauchy distribution scaled by a factor of $n$. If the Cauchy distribution had a mean, then the $25$th percentile of the $n$-fold convolution divided by $n$ would have to converge to $0$ by the Law of Large Numbers. Instead it stays constant. If you mark the $25$th percentile on a (transparent) line $1$ meter away, $2$ meters away, etc. then these points form a straight line, at $45$ degrees. They don't bend toward $0$. This tells you about the Cauchy distribution in particular, but you should know the integral test because there are other distributions with no mean which don't have a clear physical interpretation.
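A small simulation sketch of the key claim, that the average of $n$ independent standard Cauchy variables is again standard Cauchy, so averaging does not shrink the spread (the sample sizes and seed are arbitrary):

set.seed(42)
n <- 100                                    # number of variables averaged
reps <- 10000                               # number of simulated averages
avgs <- replicate(reps, mean(rcauchy(n)))   # sample means of n Cauchy draws

# compare quantiles of the averages with quantiles of a single standard Cauchy
probs <- c(0.1, 0.25, 0.5, 0.75, 0.9)
round(rbind(mean_of_100 = quantile(avgs, probs),
            one_cauchy  = qcauchy(probs)), 2)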
1,168
Why does the Cauchy distribution have no mean?
Answer added in response to @whuber's comment on Michael Chernick's answer (and re-written completely to remove the error pointed out by whuber.) The value of the integral for the expected value of a Cauchy random variable is said to be undefined because the value can be "made" to be anything one likes.

The integral $$\int_{-\infty}^{\infty} \frac{x}{\pi(1+x^2)}\,\mathrm dx$$ (interpreted in the sense of a Riemann integral) is what is commonly called an improper integral and its value must be computed as a limiting value: $$\int_{-\infty}^{\infty} \frac{x}{\pi(1+x^2)}\,\mathrm dx = \lim_{T_1\to-\infty}\lim_{T_2\to+\infty} \int_{T_1}^{T_2} \frac{x}{\pi(1+x^2)}\,\mathrm dx$$ or $$\int_{-\infty}^{\infty} \frac{x}{\pi(1+x^2)}\,\mathrm dx = \lim_{T_2\to+\infty}\lim_{T_1\to-\infty} \int_{T_1}^{T_2} \frac{x}{\pi(1+x^2)}\,\mathrm dx$$ and of course, both evaluations should give the same finite value. If not, the integral is said to be undefined. This immediately shows why the mean of the Cauchy random variable is said to be undefined: the limiting value in the inner limit diverges.

The Cauchy principal value is obtained as a single limit: $$\lim_{T\to\infty} \int_{-T}^{T} \frac{x}{\pi(1+x^2)}\,\mathrm dx$$ instead of the double limit above. The principal value of the expectation integral is easily seen to be $0$ since the limitand has value $0$ for all $T$. But this cannot be used to say that the mean of a Cauchy random variable is $0$. That is, the mean is defined as the value of the integral in the usual sense and not in the principal value sense.

For $\alpha > 0$, consider instead the integral $$\begin{align} \int_{-T}^{\alpha T} \frac{x}{\pi(1+x^2)}\,\mathrm dx &= \int_{-T}^{T} \frac{x}{\pi(1+x^2)}\,\mathrm dx + \int_{T}^{\alpha T} \frac{x}{\pi(1+x^2)}\,\mathrm dx\\ &= 0 + \left.\frac{\ln(1+x^2)}{2\pi}\right|_T^{\alpha T}\\ &= \frac{1}{2\pi}\ln\left(\frac{1+\alpha^2T^2}{1+T^2}\right)\\ &= \frac{1}{2\pi}\ln\left(\frac{\alpha^2+T^{-2}}{1+T^{-2}}\right) \end{align}$$ which approaches a limiting value of $\displaystyle \frac{\ln(\alpha)}{\pi}$ as $T\to\infty$. When $\alpha = 1$, we get the principal value $0$ discussed above.

Thus, we cannot assign an unambiguous meaning to the expression $$\int_{-\infty}^{\infty} \frac{x}{\pi(1+x^2)}\,\mathrm dx$$ without specifying how the two infinities were approached, and to ignore this point leads to all sorts of complications and incorrect results because things are not always what they seem when the milk of principal value masquerades as the cream of value. This is why the mean of the Cauchy random variable is said to be undefined rather than having the value $0$, the principal value of the integral.

If one is using the measure-theoretic approach to probability and the expected value integral is defined in the sense of a Lebesgue integral, then the issue is simpler. $\int g$ exists only when $\int |g|$ is finite, and so $E[X]$ is undefined for a Cauchy random variable $X$ since $E[|X|]$ is not finite.
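A brief numerical sketch of this dependence on how the two infinities are approached (the function names here are only for illustration; the antiderivative is the one used in the calculation above):

integrand <- function(x) x / (pi * (1 + x^2))
antider   <- function(x) log(1 + x^2) / (2 * pi)   # exact antiderivative of the integrand

alpha <- 2
integrate(integrand, -50, alpha * 50)$value        # numerical value for modest T
antider(alpha * 1e6) - antider(-1e6)               # exact value for a large T
log(alpha) / pi                                    # limiting value, roughly 0.22

Changing alpha changes the limit, which is exactly why no single value can be assigned to the improper integral.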
1,169
Why does the Cauchy distribution have no mean?
While the above answers are valid explanations of why the Cauchy distribution has no expectation, I find the fact that the ratio $X_1/X_2$ of two independent normal $\mathcal{N}(0,1)$ variates is Cauchy just as illuminating: indeed, we have $$ \mathbb{E}\left[ \frac{|X_1|}{|X_2|} \right] = \mathbb{E}\left[ |X_1| \right] \times \mathbb{E}\left[ \frac{1}{|X_2|} \right] $$ and the second expectation is $+\infty$.
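A quick simulation sketch of both facts (sample sizes and seed are arbitrary): the ratio of two independent standard normals matches Cauchy quantiles, and the sample mean of $1/|X_2|$ tends to drift upward, slowly and erratically, rather than settling.

set.seed(7)
x1 <- rnorm(1e5); x2 <- rnorm(1e5)
ratio <- x1 / x2

probs <- c(0.05, 0.25, 0.5, 0.75, 0.95)
round(rbind(ratio_of_normals = quantile(ratio, probs),
            std_cauchy       = qcauchy(probs)), 2)

# sample means of 1/|X2| do not stabilise as n grows
for (n in 10^(2:6)) cat("n =", n, " mean of 1/|X2| =", round(mean(1 / abs(rnorm(n))), 1), "\n")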
1,170
Why does the Cauchy distribution have no mean?
The Cauchy has no mean because the point you select (0) is not a mean. It is a median and a mode. The mean for an absolutely continuous distribution is defined as $\int x f(x) dx$ where $f$ is the density function and the integral is taken over the domain of $f$ (which is $-\infty$ to $\infty$ in the case of the Cauchy). For the Cauchy density, this integral is simply not finite (the half from $-\infty$ to $0$ is $-\infty$ and the half from $0$ to $\infty$ is $\infty$).
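To make the last statement explicit, the positive half can be evaluated directly: $$\int_0^{T} \frac{x}{\pi(1+x^2)}\,dx = \frac{\ln(1+T^2)}{2\pi} \longrightarrow \infty \quad\text{as } T\to\infty,$$ and by symmetry the integral over $(-T, 0)$ tends to $-\infty$, so the defining integral has the indeterminate form $\infty-\infty$.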
1,171
Why does the Cauchy distribution have no mean?
The Cauchy distribution is best thought of as the uniform distribution on a unit circle, so it would be surprising if averaging made sense. Suppose $f$ were some kind of "averaging function". That is, suppose that, for each finite subset $X$ of the unit circle, $f(X)$ was a point of the unit circle. Clearly, $f$ has to be "unnatural". More precisely, $f$ cannot be equivariant with respect to rotations. To obtain the Cauchy distribution in its more usual, but less revealing, form, project the unit circle onto the x-axis from (0,1), and use this projection to transfer the uniform distribution on the circle to the x-axis. To understand why the mean doesn't exist, think of $x$ as a function on the unit circle. It's quite easy to find an infinite number of disjoint arcs on the unit circle such that, if one of the arcs has length $d$, then $x > 1/(4d)$ on that arc. So each of these disjoint arcs contributes more than 1/4 to the mean, and the total contribution from these arcs is infinite. We can do the same thing again, but with $x < -1/(4d)$, with a total contribution of minus infinity. These intervals could be displayed with a diagram, but can one make diagrams for Cross Validated?
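A simulation sketch of this construction, using the equivalent "uniform angle" form in which the projected point is $\tan\theta$ with $\theta$ uniform on $(-\pi/2, \pi/2)$ (this reformulation is not stated in the answer, but it is a standard way to express the same projection):

set.seed(3)
theta <- runif(1e5, -pi/2, pi/2)   # uniform angle
x <- tan(theta)                    # projected point on the x-axis

ks.test(x, "pcauchy")              # consistent with a standard Cauchy
c(median(x), IQR(x))               # should be near 0 and 2, the Cauchy interquartile range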
1,172
Why does the Cauchy distribution have no mean?
The mean or expected value of a random variable $X$ is a Lebesgue integral defined with respect to a probability measure $P$: $$EX=\int X\,dP$$ The nonexistence of the mean of a Cauchy random variable just means that this integral does not exist for it. This is because the tails of the Cauchy distribution are heavy (compared with the tails of the normal distribution). However, the nonexistence of the expected value does not forbid the existence of expectations of other functions of a Cauchy random variable.
1,173
Why does the Cauchy distribution have no mean?
Here is more of a visual explanation (for those of us who are math-challenged). Take a Cauchy-distributed random number generator and try averaging the resulting values. Here is a good page on a function for this: https://math.stackexchange.com/questions/484395/how-to-generate-a-cauchy-random-variable You will find that the "spikiness" of the random values keeps knocking the running average around as you add more values, instead of letting it settle down. Hence it has no mean.
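For instance, a minimal R sketch of the suggested experiment (not taken from the linked page; the sample size and seed are arbitrary):

set.seed(11)
n <- 1e5
run_mean <- function(z) cumsum(z) / seq_along(z)   # running mean after each new draw

plot(run_mean(rcauchy(n)), type = "l", xlab = "number of draws", ylab = "running mean")
lines(run_mean(rnorm(n)), col = "red")             # normal case settles near 0 for comparison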
1,174
Why does the Cauchy distribution have no mean?
Just to add to the excellent answers, I will make some comments about why the nonconvergence of the integral is relevant for statistical practice. As others have mentioned, if we allowed the principal value to be a "mean", then the SLLN (strong law of large numbers) would no longer be valid! Apart from this, think about the implications of the fact that, in practice, all models are approximations. Specifically, the Cauchy distribution is a model for an unbounded random variable. In practice, random variables are bounded, but the bounds are often vague and uncertain. Using unbounded models is a way to alleviate that; it makes unnecessary the introduction of unsure (and often unnatural) bounds into the models. But for this to make sense, important aspects of the problem should not be affected. That means that, if we were to introduce bounds, they should not alter the model in important ways. But when the integral is nonconvergent, that does not happen! The model is unstable, in the sense that the expectation of the RV would depend on the largely arbitrary bounds. (In applications, there is not necessarily any reason to make the bounds symmetric!) For this reason, it is better to say the integral is divergent than to say it is "infinite", the latter being close to implying some definite value when none exists! A more thorough discussion is here.
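A small sketch of the instability being described, using a Cauchy truncated to arbitrary bounds $[-L, U]$ (the function name is illustrative only; it uses the closed-form antiderivative of $x\,f(x)$):

trunc_mean <- function(L, U) {
  # integral of x*dcauchy(x) from -L to U, divided by the truncated probability mass
  num <- (log(1 + U^2) - log(1 + L^2)) / (2 * pi)
  den <- pcauchy(U) - pcauchy(-L)
  num / den
}
trunc_mean(1e3, 1e3)   # symmetric bounds: mean 0
trunc_mean(1e3, 1e6)   # widen only the upper bound: mean jumps to roughly 2.2

The "mean" of the truncated model is driven entirely by where the bounds happen to be placed, which is exactly the instability the answer points to.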
1,175
Why does the Cauchy distribution have no mean?
I wanted to be a bit picky for a second. The graphic at the top is wrong. The x-axis is in standard deviations, something that does not exist for the Cauchy distribution. I am being picky because I use the Cauchy distribution every single day of my life in my work. There is a practical case where the confusion could cause an empirical error. Student's t-distribution with 1 degree of freedom is the standard Cauchy. It will usually list various sigmas required for significance. These sigmas are NOT standard deviations, they are probable errors and mu is the mode. If you wanted to do the above graphic correctly, either the x-axis is raw data, or if you wanted them to have equivalent sized errors, then you would give them equal probable errors. One probable error is .67 standard deviations in size on the normal distribution. In both cases it is the semi-interquartile range. Now as to an answer to your question, everything that everyone wrote above is correct and it is the mathematical reason for this. However, I suspect you are a student and new to the topic and so the counter-intuitive mathematical solutions to the visually obvious may not ring true. I have two nearly identical real world samples, drawn from a Cauchy distribution, both have the same mode and the same probable error. One has a mean of 1.27 and one has a mean of 1.33. The one with a mean of 1.27 has a standard deviation of 400, the one with the mean of 1.33 has a standard deviation of 5.15. The probable error for both is .32 and the mode is 1. This means that for symmetric data, the mean is not in the central 50%. It only takes ONE additional observation to push the mean and/or the variance outside significance for any test. The reason is that the mean and the variance are not parameters and the sample mean and the sample variance are themselves random numbers. The simplest answer is that the parameters of the Cauchy distribution do not include a mean and therefore no variance about a mean. It is likely that in your past pedagogy the importance of the mean was in that it is usually a sufficient statistic. In long run frequency based statistics the Cauchy distribution has no sufficient statistic. It is true that the sample median, for a Cauchy distribution with support over the entire reals, is a sufficient statistic, but that is because it inherits it from being an order statistic. It is sort of coincidentally sufficient, lacking an easy way to think about it. Now in Bayesian statistics there is a sufficient statistic for the parameters of the Cauchy distribution and if you use a uniform prior then it is also unbiased. I bring this up because if you have to use them on a daily basis, you have learned about every way there is to perform estimations on them. This is due to the fact that inference between long run frequency based statistics and Bayesian statistics runs in opposite directions. There are no valid order statistics that can be used as estimators for truncated Cauchy distributions, which are what you are likely to run into in the real world, and so there is no sufficient statistic in frequency based methods for most but not all real world applications. What I suggest is to step away from the mean, mentally, as being something real. It is a tool, like a hammer, that is broadly useful and can usually be used. Sometimes that tool won't work. A mathematical note on the normal and the Cauchy distributions. 
When the data is received as a time series, then the normal distribution only happens when errors converge to zero as t goes to infinity. When data is received as a time series, then the Cauchy distribution happens when the errors diverge to infinity. One is due to a convergent series, the other due to a divergent series. Cauchy distributions never arrive at a specific point at the limit, they swing back and forth across a fixed point so that fifty percent of the time they are on one side and fifty percent of the time on the other. There is no median reversion.
1,176
Why does the Cauchy distribution have no mean?
To put it simply, the density itself encloses a total area of 1, but the area under the curve of $|x|\,f(x)$, which must be finite for the mean to exist, grows without bound as you widen the region you integrate over ("zoom out"). If you restrict yourself to a finite region, you can compute a mean for that truncated piece. However, there is no finite limiting value as the region expands to infinity.
1,177
Books for self-studying time series analysis?
I would recommend the following books: Time Series Analysis and Its Applications: With R Examples, Third Edition, by Robert H. Shumway and David S. Stoffer, Springer Verlag. Time Series Analysis and Forecasting by Example, 1st Edition, by Søren Bisgaard and Murat Kulahci, John Wiley & Sons. I hope it helps you. Best of luck!
1,178
Books for self-studying time series analysis?
Forecasting: Principles and Practice by Rob J Hyndman and George Athanasopoulos is available free online. It's a good book in its own right; Hyndman's previous forecasting book with Makridakis and Wheelwright is highly regarded, but this one has the added advantage that you can see what you're getting for the price.
1,179
Books for self-studying time series analysis?
There are three books that I keep referring to always from an R programming and time series analysis perspective:

Time Series Analysis and Its Applications: With R Examples by Shumway and Stoffer
Time Series Analysis: With Applications in R by Cryer and Chan
Introductory Time Series with R by Cowpertwait and Metcalfe

The first book by Shumway and Stoffer has an open source (abridged) version available online called the EZgreen version. If you are specifically looking into time series forecasting, I would recommend the following books:

1. Forecasting Methods and Applications by Makridakis, Wheelwright and Hyndman. I keep referring to this book repeatedly; it is a classic, and the writing style is absolutely phenomenal.
2. An online successor to the above book, with nice R examples, is Forecasting: Principles and Practice by Hyndman and Athanasopoulos.
3. If you are looking at the classic Box-Jenkins modeling approach, I would recommend Time Series Analysis: Forecasting and Control by Box, Jenkins and Reinsel.
4. An exceptional treatment of transfer function modeling and forecasting is in Forecasting with Dynamic Regression Models by Pankratz. Again, the writing style is absolutely great.
5. Another book that is extremely useful if you are into applying forecasting to solve real world problems is Principles of Forecasting by Armstrong.

In my opinion, books 1, 4 and 5 are some of the best of the best. Many like Forecasting: Principles and Practice by Hyndman and Athanasopoulos because it's open source and has R code, but it comes nowhere near the breadth and depth of coverage of forecasting methods, or the writing style, of its predecessor by Makridakis et al. Below are some contrasting features and why I prefer Makridakis et al.:

List of references: for instance, in the Box-Jenkins chapter, Makridakis et al. has ~31 references, while Hyndman et al. has very few or no references in many chapters.
Breadth and depth of coverage: Hyndman et al. mainly focus on univariate methods, especially those developed by the first author, while Makridakis et al. focus not just on their own research but on a wide variety of methods and applications, with the emphasis on real world application and learning as opposed to being more academically focused.
Writing style: I really can't complain, as both books are exceptionally well written. However, I personally lean towards Makridakis because it boils down complex concepts into reader-friendly sections. There is a section on dynamic regression, or transfer functions, and I have nowhere else encountered such a clear explanation of this "complex method". It takes extraordinary writing talent to help the reader understand what dynamic regression is in 15 pages, and they succeed at it.
Makridakis et al. is software/method agnostic; they list some useful software packages and compare and contrast them, which (although this is almost 20 years old) is still very valuable for a practitioner.
There are three dedicated chapters on how to apply forecasting in the real world in Makridakis et al., which is a big plus to have for a practitioner.

Forecasting is not simply running univariate methods like ARIMA and exponential smoothing and producing outputs. It is much more than that, especially strategic forecasting when you are looking at a longer horizon. Principles of Forecasting by Armstrong goes beyond the univariate extrapolation methods and is highly recommended for anyone who does real world forecasting, especially strategic forecasting.
1,180
Books for self-studying time series analysis?
Part Four of Damodar Gujarati and Dawn Porter's Basic Econometrics (5th ed) contains five chapters on time-series econometrics - a very popular book! It contains lots of exercises, regression outputs, interpretations, and best of all, you can download the data from the book's website and replicate the results for yourself. Another good book is Stock and Watson's Introduction to Econometrics. Starting with Hamilton was admirable, but I'd say read through both of the time-series sections in the two books that I just mentioned and then move on to something like Walter Enders' Applied Econometric Time Series or Terence C. Mills's The Modelling of Financial Time Series. After this (and probably after some review of mathematical economics) you should be able to sit down and read Hamilton comfortably. Note: Box & Jenkins' 1970 classic Time Series Analysis: Forecasting and Control is obviously more concentrated (i.e. narrower in content) than the "modern textbooks" that I mentioned, but I'd say that anyone who wants to get a really good understanding of time series shouldn't leave it off their reading list.
1,181
Books for self-studying time series analysis?
It depends on how much math you want. For a less mathematically-intense treatment, Applied Econometric Time Series by Enders is well-regarded.
1,182
Books for self-studying time series analysis?
Last year I started teaching an introductory and semi-advanced time series course, so I embarked on a journey of reading the (text-)books in the field to find suitable materials for students. Given that I did not find any post on CV, Quora or ResearchGate that would fully satisfy me, I decided to share my conclusions here. The text below lists several time series textbooks and provides their evaluation. The focus is on the suitability of each textbook as an introductory textbook, or on its added value in case it is not suitable as an introductory textbook.

Hamilton – Time Series Analysis
Probably the most famous time series textbook. And also probably the least suitable as an introductory textbook of them all, despite often being recommended to students (including me): you must be either a genius or insane (not mutually exclusive, obviously) to recommend this textbook to starting students. The textbook is very exhaustive and very rigorous, but this also makes it hard to read for those who are new to the topic. That said, this is the textbook everybody should know about – once you become serious about doing time series analysis (rather than just modelling), you will want to consult this book.

Enders – Applied Econometric Time Series
The best introductory textbook in this list. The book is especially strong in topics other than univariate models, such as transfer function models, VARs, cointegration and non-linear models. Nevertheless, its coverage of univariate models is still better than most. The book’s value comes from its focus on intuition rather than technical exposition, and from its extensive use of simple illustrative examples as well as more complicated real-world examples; all of this leaves you understanding when and why given models are used, and how they work. Yet, despite not being technical, it still provides the right amount of technical material for the reader to see time series models as the mathematical constructs they are.

Diebold – Elements of Forecasting
While being an introductory textbook for forecasting rather than time series, this book still manages to be the best intuitive introduction to time series modelling (as opposed to analysis – do not search for it there). Diebold has the unique ability to understand what people who don’t understand are likely not to understand. While it likely cannot serve as the sole textbook for a time series course, it should be suggested as introductory reading to students – a book they want to read before they get serious about studying time series. The major drawback is the limited scope of the book, which covers only univariate models.

Box, Jenkins – Time Series Analysis: Forecasting and Control
Probably the most famous book dedicated to time series, from two pioneers of time series modelling. It should be stressed that their work and book is not solely focused on economics, which is a serious limitation for using this book as an introductory textbook. Still, the book has its undisputable value in providing a very detailed, and mostly digestible, exposition of ARMA models. It should be consulted by those who have basic knowledge of time series but want to get a deeper understanding of (mostly) univariate time series models.

Pankratz – Forecasting with Dynamic Regression Models
If you want to learn about multivariate single equation models, this is the book. The exposition is very digestible but at the same time provides sufficient technical detail. Moreover, it includes a large number of very detailed examples that help the reader understand the material.
Brooks – Introductory Econometrics for Finance
This is a great introductory textbook with a focus on finance applications. The textbook is on the low end of the technical apparatus and as such it reads well. Moreover, it provides ample illustration of the theory, so that the basic concepts sink in well. Overall, it is recommended for courses that avoid the technicalities to focus on the intuition, but as such it cannot be the last textbook one reads before going out into the real world.

Tsay – Analysis of Financial Time Series
This book sometimes feels like it is in between. In most cases it is too technical for most starting students, but at moments it is able to suitably simplify difficult material – for example, it contains the most digestible introduction to Kalman filter mechanics. It should be recommended as a textbook for students who have some basic knowledge of time series models and want to get deeper into the topic with a focus on financial time series.

Harvey – Time Series Models
This textbook provides a very digestible mix of intuition and theory when presenting standard time series models and methods. From the perspective of a modern reader, the list of models and the sequencing of their exposition is somewhat outdated, but for each type of model (ARMA, unobserved components, …) it provides an exposition that is illuminating to beginners and advanced readers alike. Still, I would recommend this textbook as something you read after a more introductory textbook.

Harvey – Elements of Analysis of Time Series
This textbook is best thought of as complementary to ‘Time Series Models’ by the same author. It goes into the details of the estimation techniques of different econometric models, including the workings of the algorithms and the underlying statistical theory. That means that for the question of “what & why happens after I click estimate” it is an unparalleled resource. In addition, the chapters on multivariate single equation time series models provide a very useful exposition of these models.

Harvey – Forecasting, Structural Time Series Models and the Kalman Filter
This is an in-depth textbook on structural models and the Kalman filter. As such, it goes further than probably most readers will want to go. However, the introductory chapters are written with the usual great mix of intuitive and technical approach typical of the author. More than recommended for getting started with the Kalman filter.

Maddala and Kim – Unit Roots, Cointegration, and Structural Change
This is probably the book on unit roots and cointegration, but one should be aware of how to use it. The best way to think about this book is as a textbook for advanced readers on the relevant topics; it will not serve beginners well. Assuming one is knowledgeable enough, reading this book will be extremely beneficial. Especially good features of the book are (1) the inclusion of a historical narrative which allows the reader to orient himself in the literature, (2) an encyclopedic approach to existing statistical tests combined with the audacity to evaluate alternative tests, and (3) an intuitive introduction to the Wiener process theory (much more digestible than Hamilton) underlying much of the econometrics of integrated processes.

Banerjee et al – Co-Integration, Error Correction, and the Econometric Analysis of Non-Stationary Data
This is not a textbook, but it is a useful source for some specific topics. It can serve as a very good advanced introduction to the econometrics of integrated processes, including unit roots. It has a great introduction to error-correction models in their multiple representations, which is useful to anybody interacting with multivariate single equation models. And finally, it provides a reconstruction of the academic research on co-integration as it was in 1991, eliminating the need to go into the actual papers.
1,183
Books for self-studying time series analysis?
In addition to the other texts, there are two introductory books in Springer's Use R! series that cover time series: Introductory Time Series with R and Applied Econometrics in R. There is also an advanced econometrics text in the series, Analysis of Integrated and Co-integrated Time Series with R. I have not used these but have found several others in the series to be excellent.
1,184
Books for self-studying time series analysis?
There are some good, free, online resources: The Little Book of R for Time Series, by Avril Coghlan (also available in print, reasonably cheap) - I haven't read through all of this, but it looks like it's well written, has some good examples, and starts basically from scratch (i.e. easy to get into). Chapter 15, Statistics with R, by Vincent Zoonekynd - Decent intro, but probably slightly more advanced. I find that there's too much (poorly commented) code, and not enough explanation thereof.
1,185
Books for self-studying time series analysis?
If you find Hamilton too difficult then there is Econometric Modeling: A Likelihood Approach (Princeton Uni Press) by Bent Nielsen and David Hendry. It focuses more on intuition and practical how-tos than deeper theory. So if you're on a time constraint then that would be a good approach. I would still recommend persevering with Time Series Analysis by Hamilton. It is very deep mathematically and the first four chapters will keep you going for a long time and serve as a very strong introduction to the topic. It also covers Granger non-causality and cointegration, and if you decide to pursue this topic more deeply then it is an invaluable resource. For a more intuitive treatment of cointegration, I would also recommend Cointegration, Causality, and Forecasting by Engle and White. Finally, for very advanced treatments, there is Soren Johansen's book "Likelihood-Based Inference in Cointegrated VARs" and of course David Hendry's "Dynamic Econometrics". Of those two, I would say Hendry's is more big-picture oriented and Johansen is pretty hard-going on the math.
1,186
Books for self-studying time series analysis?
In my opinion, you really can't beat Forecasting: principles and practice. It's written by CV's own Rob Hyndman and George Athanasopoulos, it's available for free online, and it's got tons of example code in R, making use of the excellent forecast package.
1,187
Books for self-studying time series analysis?
There's the NBER Summer Institute "What's New in Time Series Econometrics" (not sure whether this material is gated or not). There are videos with accompanying slides. The lectures are given by a pair of professors (Stock and Watson) who are known for their popular undergraduate econometrics textbook.
1,188
Books for self-studying time series analysis?
Time Series Analysis: Univariate and Multivariate Methods by William Wei and David P. Reilly is a very good book on time series and quite inexpensive. There is an updated version but at a much higher price. It does not include R examples. It explicitly includes a great discussion/presentation of intervention detection procedures, which are ignored in simplified solutions/introductory textbooks.
1,189
Books for self-studying time series analysis?
If you use Stata, Introduction to Time Series Using Stata by Sean Becketti is a solid gentle introduction, with many examples and an emphasis on intuition over theory. I think this book would complement Enders rather well. The book opens with an intro to the Stata language, followed by a quick review of regression and hypothesis testing. The time series part starts with moving-average and Holt–Winters techniques to smooth and forecast the data. The next section focuses on using these techniques for forecasting. These methods are often neglected, but they work rather well for automated forecasting and are easy to explain. Becketti explains when they will work and when they won't. The next chapters cover single-equation time-series models like autocorrelated disturbances, ARIMA, and ARCH/GARCH modeling. In the end, Becketti discusses multiple-equation models, particularly VARs and VECs, and non-stationary time series.
1,190
Books for self-studying time series analysis?
There are a few books that might be useful. If you are mathematically challenged you might want to start with two SAGE books by McDowall, McCleary, Meidinger and Hay called "Interrupted Time Series Analysis" (1980) or "Applied Time Series Analysis" by Richard McCleary. As you learn more about time series and decide that you want more than prose and that you are willing to suffer through some math, the Wei text published by Addison-Wesley entitled "Time Series Analysis" would be an excellent choice. In terms of web-based educational material, I have written a lot of useful material which can be viewed at http://www.autobox.com/cms/index.php/afs-university/intro-to-forecasting entitled "Introduction to Forecasting".
1,191
Books for self-studying time series analysis?
HILL GRIFFITHS LIM 2011 "Principles of Econometrics" 4E Wiley Advantages: (1) Very easy to follow. Topics are well presented. Even though I never took an econometrics course in my life, I easily grasped introductory econometrics with the book. (2) There are supplementary books to accompany Hill's book: a. Using EViews for Principles of Econometrics b. Using Excel for Principles of Econometrics c. Using Gretl for Principles of Econometrics d. Using Stata for Principles of Econometrics Disadvantages: (1) There is no "Using R for Principles of Econometrics"! R is the industry standard. R is better than Python. The maths in one's mind can best be reflected in code via R (I am saying this as a person who has written VBA modules in Excel, Gretl code, and EViews code). I self-started econometrics with "GREENE 2011 Econometric Analysis - W.H. GREENE 7E Pearson Prentice Hall". This is also nice, but more theoretical; it may be difficult for starters. In summary, I strongly recommend grasping econometrics with Hill's book, and applying that understanding via another econometrics book that is based on R.
1,192
Books for self-studying time series analysis?
I haven't seen anybody mention the book by Gloria Gonzalez-Rivera, "Forecasting for Economics and Business". I have found it to be the best-kept secret in the time series space. It is a terrific book. It will give you more intuition than Diebold, more context than Enders, and will actually be readable, unlike Hamilton. With much of the outstanding literature on time series, one may wonder if top time series experts are sworn to some sort of secrecy not to explain time series forecasting to others in an understandable way lest others join their little circle of trust. Gloria Gonzalez-Rivera's book lets you into this exclusive time series circle; it was a precious find for me.
1,193
Books for self-studying time series analysis?
Lütkepohl "New Introduction to Multiple Time Series Analysis" (2005) is quite up to date and offers a clear exposition.
1,194
Books for self-studying time series analysis?
I think the word 'introductory' should be banned in statistics. Not many without a strong background in statistics will find topics such as vector autoregressive models or ARDL to be introductory, nor the Hamilton work and many others mentioned. There is a huge gap between academic and practitioner audiences in this topic, I feel. Having looked hard as a practitioner for time series books over the last 7 years, I have found few that are introductory and even fewer aimed at practitioners as compared to academics. The Chadwick book already mentioned was useful (practical). I found Anders Milhoj to be useful for exponential smoothing, but he uses SAS. Many issues that practitioners worry about, such as cleaning and finding data, are simply not addressed in many works on time series, nor is the use of expert judgement to correct mistakes. Concepts such as using multiple models to triangulate results (which I have found corrects error) never show up in the academic time series books I have encountered. I have found online links better than books for this, although I plan to read many of the works suggested.
1,195
Books for self-studying time series analysis?
I will recommend a textbook related to time series analysis. I read this book and got the idea; it is very easy to understand. The link for the book: https://a-little-book-of-r-for-time-series.readthedocs.io/en/latest/src/timeseries.html This book is very good because it shows everything from scratch: how to read time series data, plotting time series, decomposing time series (both non-seasonal and seasonal data), seasonal adjustment, forecasting using exponential smoothing, and many more topics, all of which is very helpful and very clear. If you read this book you will get a good understanding of time series analysis.
1,196
What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?
An autoencoder is a simple 3-layer neural network where output units are directly connected back to input units. E.g. in a network like this: output[i] has an edge back to input[i] for every i. Typically, the number of hidden units is much smaller than the number of visible (input/output) ones. As a result, when you pass data through such a network, it first compresses (encodes) the input vector to "fit" into a smaller representation, and then tries to reconstruct (decode) it back. The task of training is to minimize the reconstruction error, i.e. to find the most efficient compact representation (encoding) of the input data. An RBM shares a similar idea, but uses a stochastic approach. Instead of deterministic units (e.g. logistic or ReLU) it uses stochastic units with a particular (usually binary or Gaussian) distribution. The learning procedure consists of several steps of Gibbs sampling (propagate: sample hiddens given visibles; reconstruct: sample visibles given hiddens; repeat) and adjusting the weights to minimize the reconstruction error. The intuition behind RBMs is that there are some visible random variables (e.g. film reviews from different users) and some hidden variables (like film genres or other internal features), and the task of training is to find out how these two sets of variables are actually connected to each other (more on this example may be found here). Convolutional neural networks are somewhat similar to these two, but instead of learning a single global weight matrix between two layers, they aim to find a set of locally connected neurons. CNNs are mostly used in image recognition. Their name comes from the "convolution" operator, or simply "filter". In short, filters are an easy way to perform a complex operation by means of a simple change of the convolution kernel. Apply a Gaussian blur kernel and you'll get the image smoothed. Apply a Canny kernel and you'll see all the edges. Apply a Gabor kernel to get gradient features. (image from here) The goal of convolutional neural networks is not to use one of the predefined kernels, but instead to learn data-specific kernels. The idea is the same as with autoencoders or RBMs - translate many low-level features (e.g. user reviews or image pixels) into a compressed high-level representation (e.g. film genres or edges) - but now the weights are learned only from neurons that are spatially close to each other. All three models have their use cases, pros and cons, but probably the most important properties are: Autoencoders are the simplest ones. They are intuitively understandable, easy to implement and to reason about (e.g. it's much easier to find good meta-parameters for them than for RBMs). RBMs are generative. That is, unlike autoencoders, which only discriminate some data vectors in favour of others, RBMs can also generate new data from the learned joint distribution. They are also considered more feature-rich and flexible. CNNs are a very specific model that is mostly used for a very specific task (though a pretty popular one). Most of the top-level algorithms in image recognition are somehow based on CNNs today, but outside that niche they are hardly applicable (e.g. what's the reason to use convolution for film review analysis?). UPD. Dimensionality reduction When we represent some object as a vector of $n$ elements, we say that this is a vector in $n$-dimensional space. Thus, dimensionality reduction refers to a process of refining the data in such a way that each data vector $x$ is translated into another vector $x'$ in an $m$-dimensional space (a vector with $m$ elements), where $m < n$.
Probably the most common way of doing this is PCA. Roughly speaking, PCA finds the "internal axes" of a dataset (called "components") and sorts them by their importance. The first $m$ most important components are then used as the new basis. Each of these components may be thought of as a high-level feature, describing the data vectors better than the original axes. Both autoencoders and RBMs do the same thing. Taking a vector in $n$-dimensional space, they translate it into an $m$-dimensional one, trying to keep as much important information as possible and, at the same time, remove noise. If the training of the autoencoder/RBM was successful, each element of the resulting vector (i.e. each hidden unit) represents something important about the object - the shape of an eyebrow in an image, the genre of a film, the field of study of a scientific article, etc. You take lots of noisy data as input and produce much less data in a much more efficient representation. Deep architectures So, if we already had PCA, why the hell did we come up with autoencoders and RBMs? It turns out that PCA allows only linear transformations of the data vectors. That is, having $m$ principal components $c_1..c_m$, you can represent only vectors $x=\sum_{i=1}^{m}w_ic_i$. This is pretty good already, but not always enough. No matter how many times you apply PCA to the data, the relationship will always stay linear. Autoencoders and RBMs, on the other hand, are non-linear by nature, and thus they can learn more complicated relations between visible and hidden units. Moreover, they can be stacked, which makes them even more powerful. E.g. you train an RBM with $n$ visible and $m$ hidden units, then you put another RBM with $m$ visible and $k$ hidden units on top of the first one and train it too, etc. And it works exactly the same way with autoencoders. But you don't just add new layers. On each layer you try to learn the best possible representation of the data from the previous one: On the image above there's an example of such a deep network. We start with ordinary pixels, proceed with simple filters, then with face elements and finally end up with entire faces! This is the essence of deep learning. Now note that in this example we worked with image data and sequentially took larger and larger areas of spatially close pixels. Doesn't it sound similar? Yes, because it's an example of a deep convolutional network. Be it based on autoencoders or RBMs, it uses convolution to stress the importance of locality. That's why CNNs are somewhat distinct from autoencoders and RBMs. Classification None of the models mentioned here work as classification algorithms per se. Instead, they are used for pretraining - learning transformations from a low-level, hard-to-consume representation (like pixels) to a high-level one. Once a deep (or maybe not that deep) network is pretrained, input vectors are transformed into a better representation and the resulting vectors are finally passed to a real classifier (such as an SVM or logistic regression). In the image above it means that at the very bottom there's one more component that actually does the classification.
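As a minimal, hedged illustration of the dimensionality-reduction comparison above (not code from the answer): PCA gives the best linear $m$-dimensional reconstruction, while even a one-hidden-layer autoencoder - sketched here with scikit-learn's MLPRegressor trained to reproduce its own input - learns a non-linear encoding. The data set, code size m and network settings below are arbitrary choices made for the sketch.

```python
# Sketch: linear (PCA) vs non-linear (autoencoder) dimensionality reduction.
# Assumes scikit-learn is installed; digits data and m=16 are illustrative choices.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

X = MinMaxScaler().fit_transform(load_digits().data)   # n = 64 pixel features
m = 16                                                  # size of the compressed code

# PCA: best *linear* m-dimensional reconstruction.
pca = PCA(n_components=m).fit(X)
X_pca = pca.inverse_transform(pca.transform(X))

# "Autoencoder": a 64 -> 16 -> 64 MLP trained to reproduce its input.
# The single hidden layer plays the role of the code; tanh makes it non-linear.
ae = MLPRegressor(hidden_layer_sizes=(m,), activation="tanh",
                  max_iter=2000, random_state=0)
ae.fit(X, X)
X_ae = ae.predict(X)

print("PCA reconstruction MSE:        ", np.mean((X - X_pca) ** 2))
print("Autoencoder reconstruction MSE:", np.mean((X - X_ae) ** 2))
```

The printed reconstruction errors let you compare the linear and non-linear codes for the same $m$; nothing here is tuned, so treat the numbers as illustrative only.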
1,197
What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?
All of these architectures can be interpreted as neural networks. The main difference between an autoencoder and a convolutional network is the level of network hardwiring. Convolutional nets are pretty much hardwired. The convolution operation is local in the image domain, meaning much more sparsity in the number of connections in the neural network view. The pooling (subsampling) operation in the image domain is also a hardwired set of neural connections in the neural domain. These are topological constraints on the network structure. Given such constraints, training of a CNN learns the best weights for this convolution operation (in practice there are multiple filters). CNNs are usually used for image and speech tasks where convolutional constraints are a good assumption. In contrast, autoencoders specify almost nothing about the topology of the network. They are much more general. The idea is to find a good neural transformation to reconstruct the input. They are composed of an encoder (which projects the input to the hidden layer) and a decoder (which reprojects the hidden layer to the output). The hidden layer learns a set of latent features or latent factors. Linear autoencoders span the same subspace as PCA. Given a dataset, they learn a number of basis vectors to explain the underlying pattern of the data. RBMs are also neural networks. But the interpretation of the network is totally different. RBMs interpret the network not as a feedforward network, but as a bipartite graph where the idea is to learn the joint probability distribution of hidden and input variables. They are viewed as a graphical model. Remember that both autoencoders and CNNs learn a deterministic function. RBMs, on the other hand, are generative models: they can generate samples from the learned hidden representations. There are different algorithms to train RBMs. However, at the end of the day, after learning an RBM, you can use its network weights to interpret it as a feedforward network.
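A back-of-the-envelope sketch of the "hardwiring"/sparsity point made above (the image size, kernel size and filter count are my own illustrative assumptions, not from the answer): weight sharing and local connectivity shrink the number of free parameters by orders of magnitude compared with a fully connected layer of the same input/output size.

```python
# Sketch: a convolution is a heavily constrained ("hardwired") linear layer.
# Illustrative numbers: 28x28 input, same-size output, 3x3 kernels, 8 filters.
H = W = 28
k = 3            # kernel side length
n_filters = 8

dense_weights = (H * W) * (H * W)     # fully connected: every pixel -> every output
conv_weights = n_filters * k * k      # small shared kernels, reused at every position

print("fully connected layer:", dense_weights, "weights")   # 614656
print("convolutional layer:  ", conv_weights, "weights")    # 72 (plus biases)
```

The topological constraint is exactly this: instead of learning a full 784x784 matrix, the network is forced to learn a few small kernels that are slid over the image.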
1,198
What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?
RBMs can be seen as some kind of probabilistic autoencoder. Actually, it has been shown that under certain conditions they become equivalent. Nevertheless, it is much harder to show this equivalence than to just believe they are different beasts. Indeed, I find it hard to find many similarities among the three as soon as I start to look closely. E.g. if you write down the functions implemented by an autoencoder, an RBM and a CNN, you get three completely different mathematical expressions.
1,199
What is the difference between convolutional neural networks, restricted Boltzmann machines, and auto-encoders?
I can't tell you much about RBMs, but autoencoders and CNNs are two different kinds of things. An autoencoder is a neural network that is trained in an unsupervised fashion. The goal of an autoencoder is to find a more compact representation of the data by learning an encoder, which transforms the data to its corresponding compact representation, and a decoder, which reconstructs the original data. The encoder part of autoencoders (and originally RBMs) has been used to learn good initial weights for a deeper architecture, but there are other applications. Essentially, an autoencoder learns a clustering of the data. In contrast, the term CNN refers to a type of neural network which uses the convolution operator (often the 2D convolution when it is used for image processing tasks) to extract features from the data. In image processing, the filters that are convolved with the images are learned automatically to solve the task at hand, e.g. a classification task. Whether the training criterion is regression/classification (supervised) or reconstruction (unsupervised) is unrelated to the idea of convolutions as an alternative to affine transformations. You can also have a CNN-autoencoder, as the sketch below illustrates.
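To make that last point concrete, here is a minimal sketch of a CNN-autoencoder, assuming PyTorch is available; the 28x28 input, layer sizes and loss are my own illustrative choices, not a specific architecture from the answer.

```python
# Sketch of a CNN-autoencoder: convolutional encoder, transposed-convolution decoder,
# trained with an unsupervised reconstruction criterion. Sizes are illustrative.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                   # 1x28x28 -> 8x7x7 code
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 8, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(                   # 8x7x7 -> 1x28x28
            nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.rand(32, 1, 28, 28)                           # fake batch of "images"
loss = nn.MSELoss()(model(x), x)                        # reconstruction criterion
loss.backward()
print(loss.item())
```

The encoder is the convolutional feature extractor, the decoder mirrors it with transposed convolutions, and the reconstruction loss is what makes it an autoencoder despite the convolutional topology.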
1,200
Nested cross validation for model selection
How do I choose a model from this [outer cross validation] output? Short answer: You don't. Treat the inner cross validation as part of the model fitting procedure. That means that the fitting, including the fitting of the hyper-parameters (this is where the inner cross validation hides), is just like any other model estimation routine. The outer cross validation estimates the performance of this model fitting approach. For that you use the usual assumptions: (1) the $k$ outer surrogate models are equivalent to the "real" model built by model.fitting.procedure with all the data; or, (2) in case 1. breaks down (pessimistic bias of resampling validation), at least the $k$ outer surrogate models are equivalent to each other. This allows you to pool (average) the test results. It also means that you do not need to choose among them, as you assume that they are basically the same. The breaking down of this second, weaker assumption is model instability. Do not pick the seemingly best of the $k$ surrogate models - that would usually be just "harvesting" testing uncertainty and leads to an optimistic bias. So how can I use nested CV for model selection? The inner CV does the selection. It looks to me that selecting the best model out of those K winning models would not be a fair comparison since each model was trained and tested on different parts of the dataset. You are right in that it is not a good idea to pick one of the $k$ surrogate models. But you are wrong about the reason. Real reason: see above. The fact that they are not trained and tested on the same data does not "hurt" here. Not having the same testing data: as you want to claim afterwards that the test results generalize to never-seen data, this cannot make a difference. Not having the same training data: if the models are stable, this doesn't make a difference. Stable here means that the model does not change (much) if the training data is "perturbed" by replacing a few cases by other cases. If the models are not stable, three considerations are important: You can actually measure whether and to what extent this is the case, by using iterated/repeated $k$-fold cross validation. That allows you to compare cross validation results for the same case that were predicted by different models built on slightly differing training data. If the models are not stable, the variance observed over the test results of the $k$-fold cross validation increases: you not only have the variance due to the fact that only a finite number of cases is tested in total, but also additional variance due to the instability of the models (variance in the predictive abilities). If instability is a real problem, you cannot extrapolate well to the performance of the "real" model. Which brings me to your last question: What types of analysis/checks can I do with the scores that I get from the outer K folds? Check for the stability of the predictions (use iterated/repeated cross-validation). Check for the stability/variation of the optimized hyper-parameters. For one thing, wildly scattering hyper-parameters may indicate that the inner optimization didn't work. For another thing, this may allow you to decide on the hyper-parameters without the costly optimization step in similar situations in the future. By costly I do not refer to computational resources but to the fact that this "costs" information that may better be used for estimating the "normal" model parameters. Check for the difference between the inner and outer estimate of the chosen model.
Update for @user99889's question: What to do if the outer CV finds instability?

First of all, detecting in the outer CV loop that the models do not yield stable predictions does not, in that respect, really differ from detecting that the prediction error is too high for the application. It is one of the possible outcomes of model validation (or verification), implying that the model we have is not fit for its purpose.

In the comment answering @davips, I was thinking of tackling the instability in the inner CV - i.e. as part of the model optimization process. But you are certainly right: if we change our model based on the findings of the outer CV, yet another round of independent testing of the changed model is necessary.

However, instability in the outer CV would also be a sign that the optimization wasn't set up well - so finding instability in the outer CV implies that the inner CV did not penalize instability in the necessary fashion - this would be my main point of critique in such a situation. In other words, why did the optimization allow/lead to heavily overfit models?

However, there is one peculiarity here that IMHO may excuse the further change of the "final" model after careful consideration of the exact circumstances: as we did detect overfitting, any proposed change to the model (fewer degrees of freedom / more restrictive, or aggregation) would be in the direction of less overfitting (or at least towards hyper-parameters that are less prone to overfitting). The point of independent testing is to detect overfitting - underfitting can be detected with data that was already used in the training process.

So if we are talking, say, about further reducing the number of latent variables in a PLS model, that would be comparably benign (if the proposed change were a totally different type of model, say PLS instead of SVM, all bets would be off), and I'd be even more relaxed about it if I knew that we are anyway at an intermediate stage of modeling - after all, if the optimized models are still unstable, there is no question that more cases are needed. Also, in many situations, you'll eventually need to perform studies that are designed to properly test various aspects of performance (e.g. generalization to data acquired in the future). Still, I'd insist that the full modeling process be reported, and that the implications of these late changes be carefully discussed.

Also, aggregation, including an out-of-bag-analogue CV estimate of performance, would be possible from the already available results - this is the other type of "post-processing" of the model that I'd be willing to consider benign here. Yet again, it would then have been better if the study had been designed from the beginning to check that aggregation provides no advantage over individual predictions (which is another way of saying that the individual models are stable). A sketch of such a stability check and of the aggregated estimate is given below.

Update (2019): the more I think about these situations, the more I come to favor the "nested cross validation apparently without nesting" approach.
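As an illustration of the stability check via iterated/repeated cross validation and of the aggregated, out-of-bag-analogue estimate, here is a minimal sketch. The data and the fixed-hyper-parameter SVC are placeholder assumptions kept short for brevity; in a real nested setup each surrogate model would be fitted with the full tuned procedure from the sketch further above.

```python
# Minimal sketch: repeated CV to check per-case prediction stability, plus an
# aggregated (out-of-bag-analogue) estimate. Data and estimator are assumed
# placeholders; hyper-parameters are fixed here only to keep the sketch short.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
n_splits, n_repeats = 5, 20
rkf = RepeatedStratifiedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=2)

# predictions[i, r] = prediction for case i by the surrogate model of repetition r
predictions = np.full((len(y), n_repeats), np.nan)

for split_no, (train_idx, test_idx) in enumerate(rkf.split(X, y)):
    rep = split_no // n_splits  # repetition this fold belongs to
    model = SVC(C=1, gamma=0.1).fit(X[train_idx], y[train_idx])
    predictions[test_idx, rep] = model.predict(X[test_idx])

# Stability check: does the prediction for the same case change between
# surrogate models trained on slightly differing data?
stable = np.all(predictions == predictions[:, [0]], axis=1)
print("fraction of cases with identical predictions in all repetitions:", stable.mean())

# Aggregated prediction (majority vote over repetitions); each case was always
# predicted by models that did not see it during training, i.e. an
# out-of-bag-analogue estimate of the aggregated model's performance.
aggregated = (predictions.mean(axis=1) >= 0.5).astype(int)
print("accuracy of the aggregated predictions:", np.mean(aggregated == y))
```

If the aggregated estimate turns out clearly better than the typical individual surrogate model, that in itself is a symptom that the individual models are not stable.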