Dataset columns: idx (int64, values 1 to 56k); question (string, 15 to 155 characters); answer (string, 2 to 29.2k characters); question_cut (string, 15 to 100 characters); answer_cut (string, 2 to 200 characters); conversation (string, 47 to 29.3k characters); conversation_cut (string, 47 to 301 characters).
1,601
Validation Error less than training error?
I don't have enough points to comment on @D-K's answer, but this is now answered as a FAQ on Keras' documentation: Why is my training loss much higher than my testing loss? A Keras model has two modes: training and testing. Regularization mechanisms, such as Dropout and L1/L2 weight regularization, are turned off at testing time. They are reflected in the training time loss but not in the test time loss. Besides, the training loss that Keras displays is the average of the losses for each batch of training data, over the current epoch. Because your model is changing over time, the loss over the first batches of an epoch is generally higher than over the last batches. This can bring the epoch-wise average down. On the other hand, the testing loss for an epoch is computed using the model as it is at the end of the epoch, resulting in a lower loss.
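A minimal sketch of how to check this in practice (the toy data and model below are illustrative assumptions, not from the original answer): after training, re-evaluate the training set with model.evaluate, which runs in inference mode, and compare that number to the validation loss instead of the epoch-averaged training loss.

import numpy as np
from tensorflow import keras

# Toy data and model, purely for illustration.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(1000, 20)), rng.integers(0, 2, 1000)
X_val, y_val = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dropout(0.5),   # active during fit(), turned off during evaluate()
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=3, batch_size=32, verbose=0)

# Both numbers are now computed with dropout off and with the final weights,
# so they are directly comparable.
print(model.evaluate(X_train, y_train, verbose=0),
      model.evaluate(X_val, y_val, verbose=0))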
1,602
Validation Error less than training error?
My 2 cents: I also had the same problem, even without dropout layers. In my case, batch-norm layers were the culprits: when I deleted them, the training loss became similar to the validation loss. That probably happened because, during training, batch-norm uses the mean and variance of the given input batch, which might differ from batch to batch. During evaluation, batch-norm uses the running mean and variance, both of which reflect the properties of the whole training set much better than the statistics of a single batch. At least, that is how batch-norm is implemented in PyTorch.
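A small PyTorch sketch of the train/eval difference described above (the layer and batch are made up for illustration): in training mode BatchNorm normalizes with the current batch statistics, while in eval mode it uses the running estimates.

import torch
import torch.nn as nn

bn = nn.BatchNorm1d(num_features=4)
x = torch.randn(8, 4) * 3 + 5          # a batch whose statistics are far from N(0, 1)

bn.train()                             # training mode: this batch's mean/variance are used
y_train_mode = bn(x)

bn.eval()                              # eval mode: the running mean/variance are used
y_eval_mode = bn(x)

print(y_train_mode.mean().item(), y_eval_mode.mean().item())  # generally different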
1,603
Validation Error less than training error?
Another possibility, which combines the answers of @cdeterman and @D-K in some way, is if you're using a data augmentation mechanism. In fact, data augmentation is usually done only on the training set and not on the validation set (as with dropout regularization), and this may lead to a validation set containing "easier" cases to predict than those in the training set.
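A minimal sketch of this setup, assuming Keras' ImageDataGenerator and hypothetical data/train and data/val directories: augmentation is configured only for the training generator, so the validation images are left unmodified.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation only on the training side; validation only rescales.
train_gen = ImageDataGenerator(rescale=1.0 / 255,
                               rotation_range=20,
                               width_shift_range=0.1,
                               horizontal_flip=True)
val_gen = ImageDataGenerator(rescale=1.0 / 255)

# Hypothetical directories:
# train_data = train_gen.flow_from_directory("data/train", target_size=(128, 128))
# val_data = val_gen.flow_from_directory("data/val", target_size=(128, 128))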
1,604
Validation Error less than training error?
At this time, stochastic gradient based methods are almost always the algorithm of choice for deep learning. This means that data come in as batches, gradients are computed and parameters are updated. It also means you can compute the loss over the data as each batch is selected. Under this framework, there are two ways in which the loss can be computed that I can think of which can lead to this phenomenon of the training error being greater than the validation error. Below, I show that Keras does, in fact, appear to compute the in-sample errors in these ways.

1.) Training error is averaged over the whole epoch, rather than all at once at the end of the epoch, but the validation error is computed only at the end of the epoch. As we sample our training data to compute gradients, we might as well compute the loss over them as well. But since the validation data are not used during the computation of gradients, we may decide to only compute the loss at the end of the epoch. Under this framework, the validation error has the benefit of being fully updated, while the training error includes error calculations with fewer updates. Of course, asymptotically this effect should generally disappear, since the effect of one epoch on the validation error typically flattens out.

2.) Training error is computed before the batch update is done. In a stochastic gradient based method, there's some noise in the gradient. While one is climbing a hill, there's a high probability that one is decreasing the global loss computed over all training samples. However, when one gets very close to the mode, the update direction will be negative with respect to the samples in your batch. But since we are bouncing around a mode, this means that on average we must be choosing a direction that is positive with respect to the samples out of the batch. Now, if we are about to update with respect to the samples in a given batch, that means they have been pushed around by potentially many batch updates that they were not included in; by computing their loss before the update, this is when the stochastic methods have pushed the parameters the most in favor of the other samples in your dataset, thus giving us a small upward bias in the expected loss.

Note that while asymptotically the effect of (1) goes away, (2) does not! Below I show that Keras appears to do both (1) and (2).

(1) Showing that metrics are averaged over each batch in the epoch, rather than all at once at the end. Notice the HUGE difference in in-sample accuracy vs val_accuracy favoring val_accuracy at the very first epoch. This is because some of the in-sample error is computed with very few batch updates.

>>> model.fit(Xtrn, Xtrn, epochs = 3, batch_size = 100,
...           validation_data = (Xtst, Xtst))
Train on 46580 samples, validate on 1000 samples
Epoch 1/3
46580/46580 [==============================] - 8s 176us/sample - loss: 0.2320 - accuracy: 0.9216 - val_loss: 0.1581 - val_accuracy: 0.9636
Epoch 2/3
46580/46580 [==============================] - 8s 165us/sample - loss: 0.1487 - accuracy: 0.9662 - val_loss: 0.1545 - val_accuracy: 0.9677
Epoch 3/3
46580/46580 [==============================] - 8s 165us/sample - loss: 0.1471 - accuracy: 0.9687 - val_loss: 0.1424 - val_accuracy: 0.9699
<tensorflow.python.keras.callbacks.History object at 0x17070d080>

(2) Showing that the error is computed before the update for each batch. Note that for epoch 1, when we use batch_size = nRows (i.e., all data in one batch), the in-sample accuracy is about 0.5 (random guessing) for epoch 1, yet the validation accuracy is 0.82. Therefore, the in-sample error was computed before the batch update, while the validation error was computed after the batch update.

>>> model.fit(Xtrn, Xtrn, epochs = 3, batch_size = nRows,
...           validation_data = (Xtst, Xtst))
Train on 46580 samples, validate on 1000 samples
Epoch 1/3
46580/46580 [==============================] - 9s 201us/sample - loss: 0.7126 - accuracy: 0.5088 - val_loss: 0.5779 - val_accuracy: 0.8191
Epoch 2/3
46580/46580 [==============================] - 6s 136us/sample - loss: 0.5770 - accuracy: 0.8211 - val_loss: 0.4940 - val_accuracy: 0.8249
Epoch 3/3
46580/46580 [==============================] - 6s 120us/sample - loss: 0.4921 - accuracy: 0.8268 - val_loss: 0.4502 - val_accuracy: 0.8249

A small note about the code above: an auto-encoder was built, hence the input (Xtrn) is the same as the output (Xtrn).
1,605
Validation Error less than training error?
I got similar results (the test loss was significantly lower than the training loss). Once I removed the dropout regularization, both losses became almost equal.
1,606
Validation Error less than training error?
@cdeterman and @D-K have good explanations. I would like to add one more reason: data leakage. Some of your training data are "closely related" to the test data. A potential example: imagine you have 1000 dogs and 1000 cats with 500 similar pictures per pet (some owners love to take pictures of their pets in very similar positions, say against the same background). So if you do a random 70/30 split, you'll get leakage of training data into the test data.
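A sketch of one way to avoid this kind of leakage, assuming scikit-learn and a hypothetical owner_ids array identifying which pet each photo belongs to: split by group, so all near-duplicate photos of one pet end up on the same side of the split.

import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical features, labels and per-pet group ids.
X = np.random.rand(2000, 16)
y = np.random.randint(0, 2, size=2000)
owner_ids = np.random.randint(0, 200, size=2000)

# A 70/30 split that keeps each pet's photos together, preventing
# near-duplicates from leaking into the test set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=owner_ids))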
1,607
Validation Error less than training error?
A validation error lower than the training error can be caused by fluctuations associated with dropout or similar mechanisms, but if it persists in the long run this may indicate that the training and validation datasets were not actually drawn from the same statistical ensemble. This could happen if your examples come from a series and you did not properly randomize the training and validation datasets.
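A small sketch of the randomization point, assuming scikit-learn and made-up time-ordered data: with shuffle=True the training and validation sets are drawn from the same distribution, whereas splitting the series in order would not guarantee that.

import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical series where later samples differ systematically from earlier ones.
X = np.arange(1000, dtype=float).reshape(-1, 1)
y = np.sin(X[:, 0] / 50.0)

# shuffle=True mixes the series before splitting; shuffle=False would put
# only the tail of the series into the validation set.
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2,
                                            shuffle=True, random_state=0)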
1,608
Validation Error less than training error?
Simply put, if the training loss and validation loss are computed correctly, it is impossible for the training loss to be higher than the validation loss. This is because back-propagation DIRECTLY reduces the error computed on the training set and only INDIRECTLY (and without any guarantee!) reduces the error computed on the validation set. There must be some additional factors that differ between training and validation. Dropout is a good one, but there can be others. Make sure to check the documentation of whatever library you are using. Models and layers can usually have default settings that we don't commonly pay attention to.
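A tiny TensorFlow/Keras sketch of the kind of train/validation asymmetry worth checking for: the same Dropout layer behaves differently depending on the training flag, which is exactly the sort of default that is easy to overlook.

import tensorflow as tf

x = tf.ones((1, 10))
dropout = tf.keras.layers.Dropout(0.5)

print(dropout(x, training=True).numpy())   # roughly half the units zeroed, rest scaled up
print(dropout(x, training=False).numpy())  # identity: all ones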
1,609
"Best" series of colors to use for differentiating series in publication-quality plots
A common reference for choosing a color palette is the work of Cynthia Brewer on ColorBrewer. The colors were chosen based on perceptual patterns in choropleth maps, but most of the same advice applies to using color in any type of plot to distinguish data patterns. If color is solely to distinguish between the different lines, then a qualitative palette is in order. Often color is not needed in line plots with only a few lines, and different point symbols and/or dash patterns are effective enough. A more common problem with line plots is that if the lines frequently overlap it will be difficult to distinguish different patterns no matter what symbols or colors you use. Stephen Kosslyn recommends a general rule of thumb of having only 4 lines in a plot; if you have more, consider splitting the lines into a series of small-multiple plots. Here is an example showing the recommendation (the figure is not reproduced here): no color is needed, and the labels are more than sufficient.
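A minimal matplotlib sketch of the "no colour, direct labels" recommendation (the data are invented for illustration): a few black lines distinguished by dash pattern and labelled at their right-hand ends.

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 10, 100)
styles = ["-", "--", ":", "-."]

fig, ax = plt.subplots()
for i, ls in enumerate(styles):
    y = np.sin(x + i) + 2 * i
    ax.plot(x, y, color="black", linestyle=ls)
    ax.text(x[-1] + 0.1, y[-1], f"series {i + 1}", va="center")  # direct label, no legend
ax.set_xlim(0, 11.5)
plt.show()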
"Best" series of colors to use for differentiating series in publication-quality plots
A common reference for choosing a color palette is the work of Cynthia Brewer on ColorBrewer. The colors were chosen based on perceptual patterns in choropleth maps, but most of the same advice applie
"Best" series of colors to use for differentiating series in publication-quality plots A common reference for choosing a color palette is the work of Cynthia Brewer on ColorBrewer. The colors were chosen based on perceptual patterns in choropleth maps, but most of the same advice applies to using color in any type of plot to distinguish data patterns. If color is solely to distinguish between the different lines, then a qualitative palette is in order. Often color is not needed in line plots with only a few lines, and different point symbols and/or dash patterns are effective enough. A more common problem with line plots is that if the lines frequently overlap it will be difficult to distinguish different patterns no matter what symbols or color you use. Stephen Kosslyn recommends a general rule of thumb for only having 4 lines in a plot. If you have more consider splitting the lines into a series of small multiple plots. Here is an example showing the recommendation No color needed and the labels are more than sufficient.
"Best" series of colors to use for differentiating series in publication-quality plots A common reference for choosing a color palette is the work of Cynthia Brewer on ColorBrewer. The colors were chosen based on perceptual patterns in choropleth maps, but most of the same advice applie
1,610
"Best" series of colors to use for differentiating series in publication-quality plots
Much outstandingly good advice in other answers, but here are some extra points from my own low-level advice to students. This is all just advice, naturally, to be thought about given the key questions: What is my graph intended to do? What makes sense with these data? Who are the readership? What am I expecting colour(s) to do within the graph? Does the graph work well, regardless of someone else's dogmas?

Furthermore, the importance of colour varies enormously from one graph to another. For a choropleth or patch map, in which the idea is indeed that different areas are coloured or at least shaded differently, the success of a graph is bound up with the success of its colour scheme. For other kinds of graphs, colours may be dispensable or even a nuisance.

Are your colours all needed? For example, if different variables or groups are clearly distinguished by text labels in different regions of a graph, then separate colours too would often be overkill. Beware fruit salad or technicolor dreamcoat effects. For a pie chart with text labelling on or by the slices, colour conveys no extra information, for example. (If your pie chart depends on a key or legend, you are likely to be trying the wrong kind of graph.)

Never rely on a contrast between red and green, as so many people struggle to distinguish these colours.

Rainbow sequences (ROYGBIV or red-orange-yellow-green-blue-indigo-violet) may appeal on physical grounds, but they don't work well in practice. For example, yellow is usually a weak colour while orange and green are usually stronger, so the impression is not even of a monotonic sequence. Avoid any colour scheme which results in large patches of strong colour.

A sequence from dark red to dark blue works well when an ordered sequence is needed. If white is (as usual) the background colour anywhere, don't use it, but skip from pale red to pale blue. [added 1 March 2018] Perhaps too obvious to underline: red has connotations of negative and/or danger for many, which can be helpful, and blue can then mean positive. Too obvious to underline, but I do it anyway: red and blue do have political connotations in many countries. [added 7 February 2023] White can make sense for bar colours if there is a boundary (say in a light gray) to the patch showing where the bar ends! (Ditto for area patches on maps.)

Blue and orange go well together (a grateful nod to Hastie, Tibshirani and Friedman here). [added 1 March 2018] Many introductory books on visualization now recommend orange, blue and grey as a basic palette: orange and/or blue for what you care about and grey for backdrop.

Grayscale from pale gray to dark gray can work well and is a good idea when colour reproduction is out of the question. (It is a lousy printer that can't make a fair bash at grayscale.) (Grey if you like; preferences change across oceans, it seems, just as with colour and color.)

[added 5 Aug 2016] A fairly general principle is that often two colours work much better than many. If two groups are both of interest, then choose equally strong colours (e.g. red or orange and blue). If one group is of particular interest among several, make it blue or orange, and let the others be grey. Using seven colours for seven groups in principle carries the information, but it's hard even to focus on one colour at a time when there is competition from several others. Small multiples can be better for several groups than a multicolour plot.
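A small matplotlib sketch of the "one strong colour for the series of interest, grey for the backdrop" idea (data invented for illustration):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
x = np.arange(50)

fig, ax = plt.subplots()
for _ in range(6):                          # backdrop series in light grey
    ax.plot(x, rng.normal(size=50).cumsum(), color="0.8", linewidth=1)
ax.plot(x, rng.normal(size=50).cumsum(),    # the one series you care about
        color="tab:orange", linewidth=2, label="series of interest")
ax.legend(frameon=False)
plt.show()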
"Best" series of colors to use for differentiating series in publication-quality plots
Much outstandingly good advice in other answers, but here are some extra points from my own low-level advice to students. This is all just advice, naturally, to be thought about given the key question
"Best" series of colors to use for differentiating series in publication-quality plots Much outstandingly good advice in other answers, but here are some extra points from my own low-level advice to students. This is all just advice, naturally, to be thought about given the key questions: What is my graph intended to do? What makes sense with these data? Who are the readership? What I am expecting colour(s) to do within the graph? Does the graph work well, regardless of someone else's dogmas? Furthermore, the importance of colour varies enormously from one graph to another. For a choropleth or patch map, in which the idea is indeed that different areas are coloured or at least shaded differently, the success of a graph is bound up with the success of its colour scheme. For other kinds of graphs, colours may be dispensable or even a nuisance. Are your colours all needed? For example, if different variables or groups are clearly distinguished by text labels in different regions of a graph, then separate colours too would often be overkill. Beware fruit salad or technicolor dreamcoat effects. For a pie chart with text labelling on or by the slices, colour conveys no extra information, for example. (If your pie chart depends on a key or legend, you are likely to be trying the wrong kind of graph.) Never rely on a contrast between red and green, as so many people struggle to distinguish these colours. Rainbow sequences (ROYGBIV or red-orange-yellow-green-blue-indigo-violet) may appeal on physical grounds, but they don't work well in practice. For example, yellow is usually a weak colour while orange and green are usually stronger, so the impression is not even of a monotonic sequence. Avoid any colour scheme which has the consequence of large patches of strong colour. A sequence from dark red to dark blue works well when an ordered sequence is needed. If white is (as usual) the background colour anywhere, don't use it, but skip from pale red to pale blue. [added 1 March 2018] Perhaps too obvious to underline: red has connotations of negative and/or danger for many, which can be helpful, and blue can then mean positive. Too obvious to underline, but I do it any way: Red and blue do have political connotations in many countries. [added 7 February 2023] White can make sense for bar colours if there is a boundary (say in a light gray) to the patch showing where the bar ends! (ditto, area patches on maps) Blue and orange go well together (a grateful nod to Hastie, Tibshirani and Friedman here [added 1 March 2018]. Many introductory books on visualization now recommend orange, blue and grey as a basic palette: orange and/or blue for what you care about and grey for backdrop. Grayscale from pale gray to dark gray can work well and is a good idea when colour reproduction is out of the question. (It is a lousy printer that can't make a fair bash at grayscale.) (Grey if you like; preferences change across oceans, it seems; just as with colour and color.) [added 5 Aug 2016] A fairly general principle is that often two colours work much better than many. If two groups are both of interest, then choose equally strong colours (e.g. red or orange and blue). If one group is of particular interest among several, make it blue or orange, and let the others be grey. Using seven colours for seven groups in principle carries the information, but it's hard even to focus on one colour at a time when there is competition from several others. Small multiples can be better for several groups than a multicolour plot.
"Best" series of colors to use for differentiating series in publication-quality plots Much outstandingly good advice in other answers, but here are some extra points from my own low-level advice to students. This is all just advice, naturally, to be thought about given the key question
1,611
"Best" series of colors to use for differentiating series in publication-quality plots
Paul Tol provides a colour scheme optimised for colour differences (i.e., categorical or qualitative data) and colour-blind vision on his website, and in detail in a "technote" (PDF file) linked to there. He states:

To make graphics with your scientific results as clear as possible, it is handy to have a palette of colours that are: distinct for all people, including colour-blind readers; distinct from black and white; distinct on screen and paper; and still match well together.

I took the colour scheme from his "Palette 1" of the 9 most distinct colours, and placed it in my matplotlibrc file under axes.color_cycle:

axes.color_cycle : 332288, 88CCEE, 44AA99, 117733, 999933, DDCC77, CC6677, 882255, AA4499

Then, borrowing from Joe Kington's answer, the default lines can be plotted as follows (the resulting figure is omitted here):

import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np

x = np.linspace(0, 20, 100)

fig, axes = plt.subplots(nrows=2)
for i in range(1, 10):
    axes[0].plot(x, i * (x - 10)**2)
for i in range(1, 10):
    axes[1].plot(x, i * np.cos(x))
plt.show()

For diverging colour maps (e.g., to represent scalar values), the best reference I have seen is the paper by Kenneth Moreland available here, "Diverging Color Maps for Scientific Visualization". He developed the cool-warm scheme to replace the rainbow scheme, and "presents an algorithm that allows users to easily generate their own customized color maps".

Another useful source of information on the use of colour in scientific visualisations comes from Robert Simmon, the man who created the "Blue Marble" image for NASA. See his series of posts at the Earth Observatory web site.
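Note that recent matplotlib versions use axes.prop_cycle instead of axes.color_cycle; the equivalent matplotlibrc line would be roughly:

axes.prop_cycle : cycler('color', ['332288', '88CCEE', '44AA99', '117733', '999933', 'DDCC77', 'CC6677', '882255', 'AA4499'])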
"Best" series of colors to use for differentiating series in publication-quality plots
Paul Tol provides a colour scheme optimised for colour differences (i.e., categorical or qualitative data) and colour-blind vision on his website, and in detail in a "technote" (PDF file) linked to th
"Best" series of colors to use for differentiating series in publication-quality plots Paul Tol provides a colour scheme optimised for colour differences (i.e., categorical or qualitative data) and colour-blind vision on his website, and in detail in a "technote" (PDF file) linked to there. He states: To make graphics with your scientific results as clear as possible, it is handy to have a palette of colours that are: distinct for all people, including colour-blind readers; distinct from black and white; distinct on screen and paper; and still match well together. I took the colour scheme from his "Palette 1" of the 9 most distinct colours, and placed it in my matplotlibrc file under axes.color_cycle: axes.color_cycle : 332288, 88CCEE, 44AA99, 117733, 999933, DDCC77, CC6677, 882255, AA4499 Then, borrowing from Joe Kington's answer the default lines as plotted by: import matplotlib.pyplot as plt import matplotlib as mpl import numpy as np x = np.linspace(0, 20, 100) fig, axes = plt.subplots(nrows=2) for i in range(1,10): axes[0].plot(x, i * (x - 10)**2) for i in range(1,10): axes[1].plot(x, i * np.cos(x)) plt.show() results in: For diverging colour maps (e.g., to represent scalar values), the best reference I have seen is the paper by Kenneth Moreland available here "Diverging Color Maps for Scientific Visualization". He developed the cool-warm scheme to replace the rainbow scheme, and "presents an algorithm that allows users to easily generate their own customized color maps". Another useful source for information on the use of colour in scientific visualisations comes from Robert Simmon, the man who created the "Blue Marble" image for NASA. See his series of posts at the Earth Observatory web site.
"Best" series of colors to use for differentiating series in publication-quality plots Paul Tol provides a colour scheme optimised for colour differences (i.e., categorical or qualitative data) and colour-blind vision on his website, and in detail in a "technote" (PDF file) linked to th
1,612
"Best" series of colors to use for differentiating series in publication-quality plots
There's actually been a good deal of research on this in recent years. A big point is "semantic resonance." This basically means "colors that correspond to what they represent," e.g. a time series for money should be colored green, at least for an audience in the USA. This apparently improves comprehension. One very interesting paper on the subject is by Lin, et al (2013): http://vis.stanford.edu/papers/semantically-resonant-colors There's also the very nice iWantHue color generator, at http://tools.medialab.sciences-po.fr/iwanthue/, with lots of info in the other tabs. References Lin, Sharon, Julie Fortuna, Chinmay Kulkarni, Maureen Stone, and Jeffrey Heer. (2013). Selecting Semantically-Resonant Colors for Data Visualization. Computer Graphics Forum (Proc. EuroVis), 2013
"Best" series of colors to use for differentiating series in publication-quality plots
There's actually been a good deal of research on this in recent years. A big point is "semantic resonance." This basically means "colors that correspond to what they represent," e.g. a time series for
"Best" series of colors to use for differentiating series in publication-quality plots There's actually been a good deal of research on this in recent years. A big point is "semantic resonance." This basically means "colors that correspond to what they represent," e.g. a time series for money should be colored green, at least for an audience in the USA. This apparently improves comprehension. One very interesting paper on the subject is by Lin, et al (2013): http://vis.stanford.edu/papers/semantically-resonant-colors There's also the very nice iWantHue color generator, at http://tools.medialab.sciences-po.fr/iwanthue/, with lots of info in the other tabs. References Lin, Sharon, Julie Fortuna, Chinmay Kulkarni, Maureen Stone, and Jeffrey Heer. (2013). Selecting Semantically-Resonant Colors for Data Visualization. Computer Graphics Forum (Proc. EuroVis), 2013
"Best" series of colors to use for differentiating series in publication-quality plots There's actually been a good deal of research on this in recent years. A big point is "semantic resonance." This basically means "colors that correspond to what they represent," e.g. a time series for
1,613
"Best" series of colors to use for differentiating series in publication-quality plots
On colorbrewer2.org you can find qualitative, sequential and diverging colour schemes. Qualitative maximizes the difference between successive colours, and that's what I am using in gnuplot. The beauty of the site is that you can easily copy the hexadecimal codes of the colours so they are a breeze to import. As an example, I'm using the following 8-colour set: #e41a1c #377eb8 #4daf4a #984ea3 #ff7f00 #ffff33 #a65628 #f781bf It is rather pleasant and produces clear results. As a side note, sequential is used when you need a smooth gradient and diverging when you need to highlight differences from a central value (e.g. mountain elevation and sea depth). You can read more about these color schemes here.
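The answer uses gnuplot; as a sketch, the same 8-colour set could be reused in Python/matplotlib by making it the default colour cycle:

import numpy as np
import matplotlib.pyplot as plt
from cycler import cycler

set1_colors = ["#e41a1c", "#377eb8", "#4daf4a", "#984ea3",
               "#ff7f00", "#ffff33", "#a65628", "#f781bf"]
plt.rc("axes", prop_cycle=cycler(color=set1_colors))  # default cycle for all axes

x = np.linspace(0, 2 * np.pi, 200)
for i in range(len(set1_colors)):
    plt.plot(x, np.sin(x + i / 2.0))
plt.show()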
"Best" series of colors to use for differentiating series in publication-quality plots
On colorbrewer2.org you can find qualitative, sequential and diverging colour schemes. Qualitative maximizes the difference between successive colours, and that's what I am using in gnuplot. The beaut
"Best" series of colors to use for differentiating series in publication-quality plots On colorbrewer2.org you can find qualitative, sequential and diverging colour schemes. Qualitative maximizes the difference between successive colours, and that's what I am using in gnuplot. The beauty of the site is that you can easily copy the hexadecimal codes of the colours so they are a breeze to import. As an example, I'm using the following 8-colour set: #e41a1c #377eb8 #4daf4a #984ea3 #ff7f00 #ffff33 #a65628 #f781bf It is rather pleasant and produces clear results. As a side note, sequential is used when you need a smooth gradient and diverging when you need to highlight differences from a central value (e.g. mountain elevation and sea depth). You can read more about these color schemes here.
"Best" series of colors to use for differentiating series in publication-quality plots On colorbrewer2.org you can find qualitative, sequential and diverging colour schemes. Qualitative maximizes the difference between successive colours, and that's what I am using in gnuplot. The beaut
1,614
"Best" series of colors to use for differentiating series in publication-quality plots
There are plenty of websites dedicated to choosing color palettes. I don't know that there is a particular set of colors that is objectively the best; you will have to choose based on your audience and the tone of your work. Check out http://www.colourlovers.com/palettes or http://design-seeds.com/index.php/search to get started. Some of them have colors that are too close to show different groups, but others will give you complementary colors across a wider range. You can also check out the non-default predefined colorsets in Matplotlib.
"Best" series of colors to use for differentiating series in publication-quality plots
There are plenty of websites dedicated to choosing color palettes. I don't know that there is a particular set of colors that is objectively the best, you will have to choose based on your audience an
"Best" series of colors to use for differentiating series in publication-quality plots There are plenty of websites dedicated to choosing color palettes. I don't know that there is a particular set of colors that is objectively the best, you will have to choose based on your audience and the tone of your work. Check out http://www.colourlovers.com/palettes or http://design-seeds.com/index.php/search to get started. Some of them have colors that are two close to show different groups, but others will give you complementary colors across a wider range. You can also check out the non-default predefined colorsets in Matplotlib.
"Best" series of colors to use for differentiating series in publication-quality plots There are plenty of websites dedicated to choosing color palettes. I don't know that there is a particular set of colors that is objectively the best, you will have to choose based on your audience an
1,615
"Best" series of colors to use for differentiating series in publication-quality plots
For colorblind viewers, CARTOColors has a qualitative colorblind-friendly scheme called Safe that is based on Paul Tol's popular colour schemes. This palette consists of 12 easily distinguishable colours. Another great qualitative colorblind-friendly palette is the Okabe and Ito scheme proposed in their article "Color Universal Design (CUD): How to make figures and presentations that are friendly to colorblind people."

Example for R users:

if (!require("pacman")) install.packages("pacman")
pacman::p_load(ggplot2, rcartocolor, patchwork)

theme_set(theme_classic(base_size = 14) +
            theme(panel.background = element_rect(fill = "#ecf0f1")))

set.seed(123)
df <- data.frame(x = rep(1:5, 8),
                 value = sample(1:100, 40),
                 variable = rep(paste0("category", 1:8), each = 5))

safe_pal <- carto_pal(12, "Safe")
palette_OkabeIto_black <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442",
                            "#0072B2", "#D55E00", "#CC79A7", "#000000")

# plot
p1 <- ggplot(data = df, aes(x = x, y = value)) +
  geom_line(aes(colour = variable), size = 1) +
  scale_color_manual(values = palette_OkabeIto_black)

p2 <- ggplot(data = df, aes(x = x, y = value)) +
  geom_col(aes(fill = variable)) +
  scale_fill_manual(values = safe_pal)

p1 / p2
"Best" series of colors to use for differentiating series in publication-quality plots
For colorblind viewers, CARTOColors has a qualitative colorblind-friendly scheme called Safe that is based on Paul Tol's popular colour schemes. This palette consists of 12 easily distinguishable colo
"Best" series of colors to use for differentiating series in publication-quality plots For colorblind viewers, CARTOColors has a qualitative colorblind-friendly scheme called Safe that is based on Paul Tol's popular colour schemes. This palette consists of 12 easily distinguishable colours. Another great qualitative colorblind friendly palette is the Okabe and Ito scheme proposed in their article “Color Universal Design (CUD): How to make figures and presentations that are friendly to colorblind people.” ### Example for R users if (!require("pacman")) install.packages("pacman") pacman::p_load(ggplot2, rcartocolor, patchwork) theme_set(theme_classic(base_size = 14) + theme(panel.background = element_rect(fill = "#ecf0f1"))) set.seed(123) df <- data.frame(x = rep(1:5, 8), value = sample(1:100, 40), variable = rep(paste0("category", 1:8), each = 5)) safe_pal <- carto_pal(12, "Safe") palette_OkabeIto_black <- c("#E69F00", "#56B4E9", "#009E73", "#F0E442", "#0072B2", "#D55E00", "#CC79A7", "#000000") # plot p1 <- ggplot(data = df, aes(x = x, y = value)) + geom_line(aes(colour = variable), size = 1) + scale_color_manual(values = palette_OkabeIto_black) p2 <- ggplot(data = df, aes(x = x, y = value)) + geom_col(aes(fill = variable)) + scale_fill_manual(values = safe_pal) p1 / p2
"Best" series of colors to use for differentiating series in publication-quality plots For colorblind viewers, CARTOColors has a qualitative colorblind-friendly scheme called Safe that is based on Paul Tol's popular colour schemes. This palette consists of 12 easily distinguishable colo
1,616
"Best" series of colors to use for differentiating series in publication-quality plots
I like the Dark2 palette from colorbrewer for scatter plots. We used this in the ggobi book, www.ggobi.org/book. But otherwise the color palettes are meant for geographic areas rather than data plots. Good color choice is still an issue for point-based plots. The R packages colorspace and dichromat are useful. colorspace allows selection of colors around the wheel: you can spend hours/days fine tuning. dichromat helps check for colorblindness. ggplot2 generally has good defaults, although not necessarily color-blind proof. The diverging red to blue scheme looks good on your computer but does not project well.
"Best" series of colors to use for differentiating series in publication-quality plots
I like the Dark2 palette from colorbrewer for scatter plots. We used this in the ggobi book, www.ggobi.org/book. But otherwise the color palettes are meant for geographic areas rather than data plots.
"Best" series of colors to use for differentiating series in publication-quality plots I like the Dark2 palette from colorbrewer for scatter plots. We used this in the ggobi book, www.ggobi.org/book. But otherwise the color palettes are meant for geographic areas rather than data plots. Good color choice is still an issue for point-based plots. The R packages colorspace and dichromat are useful. colorspace allows selection of colors around the wheel: you can spend hours/days fine tuning. dichromat helps check for colorblindness. ggplot2 generally has good defaults, although not necessarily color-blind proof. The diverging red to blue scheme looks good on your computer but does not project well.
"Best" series of colors to use for differentiating series in publication-quality plots I like the Dark2 palette from colorbrewer for scatter plots. We used this in the ggobi book, www.ggobi.org/book. But otherwise the color palettes are meant for geographic areas rather than data plots.
1,617
"Best" series of colors to use for differentiating series in publication-quality plots
This is my favourite scheme. It has 20 (!!!!) distinct colours, all of which are easily distinguishable. It probably fails for colour blind people, though. #e6194b #3cb44b #ffe119 #0082c8 #f58231 #911eb4 #46f0f0 #f032e6 #d2f53c #fabebe #008080 #e6beff #aa6e28 #fffac8 #800000 #aaffc3 #808000 #ffd8b1 #000080 #808080 #ffffff #000000 I don't know what the methodology is or anything. If you want to find out more, just go to the link I posted.
"Best" series of colors to use for differentiating series in publication-quality plots
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
"Best" series of colors to use for differentiating series in publication-quality plots Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. This is my favourite scheme. It has 20 (!!!!) distinct colours, all of which are easily distinguishable. It probably fails for colour blind people, though. #e6194b #3cb44b #ffe119 #0082c8 #f58231 #911eb4 #46f0f0 #f032e6 #d2f53c #fabebe #008080 #e6beff #aa6e28 #fffac8 #800000 #aaffc3 #808000 #ffd8b1 #000080 #808080 #ffffff #000000 I don't know what the methodology is or anything. If you want to find out more, just go to the link I posted.
"Best" series of colors to use for differentiating series in publication-quality plots Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
1,618
"Best" series of colors to use for differentiating series in publication-quality plots
Another possibility would be to find a set of colors that (a) are equidistant in LAB, (b) take color blindness into consideration, and (c) fit into the gamut of the sRGB colorspace as well as the gamuts of the most common CMYK spaces. I think the last requirement is a necessity for any method of picking colors: it doesn't do any good if the colors look good on the screen but are muddled when printed in a CMYK process. And since the OP specified "publication quality", I'm assuming that the graphs will indeed be printed in CMYK.
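A rough sketch of point (a), assuming scikit-image is available (the gamut handling here is only a crude clip to sRGB, and CMYK is not addressed): sample colours equally spaced on a circle in the (a*, b*) plane at fixed lightness, i.e. roughly equidistant in LAB, and convert them to sRGB.

import numpy as np
from skimage import color

n = 6
L, radius = 65.0, 45.0
angles = np.linspace(0, 2 * np.pi, n, endpoint=False)

# Points equally spaced in the (a*, b*) plane at constant L*.
lab = np.stack([np.full(n, L),
                radius * np.cos(angles),
                radius * np.sin(angles)], axis=1)

# Convert to sRGB and clip to the displayable gamut.
rgb = np.clip(color.lab2rgb(lab[np.newaxis, :, :])[0], 0, 1)
print((rgb * 255).round().astype(int))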
"Best" series of colors to use for differentiating series in publication-quality plots
Another possibility would be to find a set of colors that are a) equidistant in LAB, b) take color blindness into consideration, and c) can fit into the gamut of the sRGB colorspace as well as the gam
"Best" series of colors to use for differentiating series in publication-quality plots Another possibility would be to find a set of colors that are a) equidistant in LAB, b) take color blindness into consideration, and c) can fit into the gamut of the sRGB colorspace as well as the gamuts of the most common CMYK spaces. I think the last requirement is a necessity for any method of picking colors- it doesn't do any good if the colors look good on the screen but are muddled when printed in a CMYK process. And since the OP specified "publication quality", I'm assuming that the graphs will indeed be printed in CMYK.
"Best" series of colors to use for differentiating series in publication-quality plots Another possibility would be to find a set of colors that are a) equidistant in LAB, b) take color blindness into consideration, and c) can fit into the gamut of the sRGB colorspace as well as the gam
1,619
"Best" series of colors to use for differentiating series in publication-quality plots
When plotting lines, you should watch out for green and yellow, which don't display well on projectors. Since I eventually re-use most of my plots in presentations, I avoid these colours even if the original intention is for screen or paper publication. In the interests of maintaining high contrast, that leaves me with black, red, blue, magenta and cyan, and if I really need it I use grey. Indeed, most of these are bright primary or secondary colours. I know it might not be optimal from an aesthetic point of view, but I'm more interested in the clarity of what I'm presenting. On the other hand, consistently reusing the same colours from a limited palette can be a good thing aesthetically. If you're using more than 6 lines, you're filling up more space and moving towards plotting blocks of colour. For those kinds of plot I think each case needs to be considered separately. Do you want the extremes to stand out, or the zero-crossings? Is your data cyclical (e.g. 0 and 2π should use the same colour)? Is there an analogy to standards such as blue/red for temperature? Does white represent NaN, no data, or will it be used as a highlight? And so on.
"Best" series of colors to use for differentiating series in publication-quality plots
When plotting lines, you should watch out for green and yellow, which don't display well on projectors. Since I eventually re-use most of my plots in presentations, I avoid these colours even if the o
"Best" series of colors to use for differentiating series in publication-quality plots When plotting lines, you should watch out for green and yellow, which don't display well on projectors. Since I eventually re-use most of my plots in presentations, I avoid these colours even if the original intention is for screen or paper publication. In the interests of maintaining high contrast, that leaves me with black, red, blue, magenta, cyan and if I really need it I use grey. Indeed, most of these are bright, primary or secondary colours. I know it might not be optimal from an aesthetic point of view, but I'm more interested in the clarity of what I'm presenting. On the other hand, consistently reusing the same colours from a limited palette can be a good thing aesthetically. If you're using more than 6 lines, you're filling up more space and moving towards plotting blocks of colour. For these kinds of plot I think each case needs to be considered separately. Do you want the extremes to stand out, or the zero-crossings? Is your data cyclical (e.g. 0 and 2π should use the same colour)? Is there an analogy to standards such as blue/red for temperature? Does white represent NaN, no data, or will it be used as a highlight? etcetc.
"Best" series of colors to use for differentiating series in publication-quality plots When plotting lines, you should watch out for green and yellow, which don't display well on projectors. Since I eventually re-use most of my plots in presentations, I avoid these colours even if the o
1,620
What is the relation between k-means clustering and PCA?
It is true that K-means clustering and PCA appear to have very different goals and at first sight do not seem to be related. However, as explained in the Ding & He 2004 paper K-means Clustering via Principal Component Analysis, there is a deep connection between them.

The intuition is that PCA seeks to represent all $n$ data vectors as linear combinations of a small number of eigenvectors, and does it to minimize the mean-squared reconstruction error. In contrast, K-means seeks to represent all $n$ data vectors via a small number of cluster centroids, i.e. to represent them as linear combinations of a small number of cluster centroid vectors where the linear combination weights must be all zero except for a single $1$. This is also done to minimize the mean-squared reconstruction error. So K-means can be seen as a super-sparse PCA.

The Ding & He paper makes this connection more precise. Unfortunately, the paper contains some sloppy formulations (at best) and can easily be misunderstood. E.g. it might seem that Ding & He claim to have proved that the cluster centroids of the K-means clustering solution lie in the $(K-1)$-dimensional PCA subspace:

Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...].

For $K=2$ this would imply that projections on the PC1 axis will necessarily be negative for one cluster and positive for the other cluster, i.e. the PC2 axis will separate the clusters perfectly. This is either a mistake or some sloppy writing; in any case, taken literally, this particular claim is false.

Let's start by looking at some toy examples in 2D for $K=2$. I generated some samples from two normal distributions with the same covariance matrix but varying means. I then ran both K-means and PCA. The following figure shows the scatter plot of the data above, and the same data colored according to the K-means solution below. I also show the first principal direction as a black line and the class centroids found by K-means with black crosses. The PC2 axis is shown with the dashed black line. K-means was repeated $100$ times with random seeds to ensure convergence to the global optimum.

One can clearly see that even though the class centroids tend to be pretty close to the first PC direction, they do not fall on it exactly. Moreover, even though the PC2 axis separates the clusters perfectly in subplots 1 and 4, there are a couple of points on the wrong side of it in subplots 2 and 3. So the agreement between K-means and PCA is quite good, but it is not exact.

So what did Ding & He prove? For simplicity, I will consider only the $K=2$ case. Let the number of points assigned to each cluster be $n_1$ and $n_2$ and the total number of points $n=n_1+n_2$. Following Ding & He, let's define the cluster indicator vector $\mathbf q\in\mathbb R^n$ as follows: $q_i = \sqrt{n_2/nn_1}$ if the $i$-th point belongs to cluster 1 and $q_i = -\sqrt{n_1/nn_2}$ if it belongs to cluster 2. The cluster indicator vector has unit length $\|\mathbf q\| = 1$ and is "centered", i.e. its elements sum to zero, $\sum q_i = 0$.

Ding & He show that the K-means loss function $\sum_k \sum_i (\mathbf x_i^{(k)} - \boldsymbol \mu_k)^2$ (that the K-means algorithm minimizes), where $\mathbf x_i^{(k)}$ is the $i$-th element in cluster $k$, can be equivalently rewritten as $-\mathbf q^\top \mathbf G \mathbf q$, where $\mathbf G$ is the $n\times n$ Gram matrix of scalar products between all points: $\mathbf G = \mathbf X_c \mathbf X_c^\top$, where $\mathbf X$ is the $n\times 2$ data matrix and $\mathbf X_c$ is the centered data matrix. (Note: I am using notation and terminology that slightly differ from their paper but that I find clearer.)

So the K-means solution $\mathbf q$ is a centered unit vector maximizing $\mathbf q^\top \mathbf G \mathbf q$. It is easy to show that the first principal component (when normalized to have unit sum of squares) is the leading eigenvector of the Gram matrix, i.e. it is also a centered unit vector $\mathbf p$ maximizing $\mathbf p^\top \mathbf G \mathbf p$. The only difference is that $\mathbf q$ is additionally constrained to have only two different values whereas $\mathbf p$ does not have this constraint.

In other words, K-means and PCA maximize the same objective function, with the only difference being that K-means has an additional "categorical" constraint. It stands to reason that most of the time the K-means (constrained) and PCA (unconstrained) solutions will be pretty close to each other, as we saw above in the simulation, but one should not expect them to be identical. Taking $\mathbf p$ and setting all its negative elements to be equal to $-\sqrt{n_1/nn_2}$ and all its positive elements to $\sqrt{n_2/nn_1}$ will generally not give exactly $\mathbf q$.

Ding & He seem to understand this well because they formulate their theorem as follows:

Theorem 2.2. For K-means clustering where $K= 2$, the continuous solution of the cluster indicator vector is the [first] principal component

Note the words "continuous solution". After proving this theorem they additionally comment that PCA can be used to initialize K-means iterations, which makes total sense given that we expect $\mathbf q$ to be close to $\mathbf p$. But one still needs to perform the iterations, because they are not identical.

However, Ding & He then go on to develop a more general treatment for $K>2$ and end up formulating Theorem 3.3 as

Theorem 3.3. Cluster centroid subspace is spanned by the first $K-1$ principal directions [...].

I did not go through the math of Section 3, but I believe that this theorem in fact also refers to the "continuous solution" of K-means, i.e. its statement should read "cluster centroid space of the continuous solution of K-means is spanned [...]". Ding & He, however, do not make this important qualification, and moreover write in their abstract that

Here we prove that principal components are the continuous solutions to the discrete cluster membership indicators for K-means clustering. Equivalently, we show that the subspace spanned by the cluster centroids are given by spectral expansion of the data covariance matrix truncated at $K-1$ terms.

The first sentence is absolutely correct, but the second one is not. It is not clear to me if this is (very) sloppy writing or a genuine mistake. I have very politely emailed both authors asking for clarification. (Update two months later: I have never heard back from them.)

Matlab simulation code

figure('Position', [100 100 1200 600])

n = 50;
Sigma = [2 1.8; 1.8 2];

for i=1:4
    means = [0 0; i*2 0];

    rng(42)
    X = [bsxfun(@plus, means(1,:), randn(n,2) * chol(Sigma)); ...
         bsxfun(@plus, means(2,:), randn(n,2) * chol(Sigma))];
    X = bsxfun(@minus, X, mean(X));
    [U,S,V] = svd(X,0);
    [ind, centroids] = kmeans(X,2, 'Replicates', 100);

    subplot(2,4,i)
    scatter(X(:,1), X(:,2), [], [0 0 0])

    subplot(2,4,i+4)
    hold on
    scatter(X(ind==1,1), X(ind==1,2), [], [1 0 0])
    scatter(X(ind==2,1), X(ind==2,2), [], [0 0 1])
    plot([-1 1]*10*V(1,1), [-1 1]*10*V(2,1), 'k', 'LineWidth', 2)
    plot(centroids(1,1), centroids(1,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4)
    plot(centroids(1,1), centroids(1,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2)
    plot(centroids(2,1), centroids(2,2), 'w+', 'MarkerSize', 15, 'LineWidth', 4)
    plot(centroids(2,1), centroids(2,2), 'k+', 'MarkerSize', 10, 'LineWidth', 2)
    plot([-1 1]*5*V(1,2), [-1 1]*5*V(2,2), 'k--')
end

for i=1:8
    subplot(2,4,i)
    axis([-8 8 -8 8])
    axis square
    set(gca,'xtick',[],'ytick',[])
end
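A quick way to check the "close but not identical" relationship numerically is to compare each point's K-means label with the sign of its PC1 score. Below is a minimal Python sketch of that check (using NumPy and scikit-learn, which I assume are available); the data-generation parameters are illustrative and not taken from the Ding & He paper.

# Compare K-means labels with the sign of the PC1 scores on two Gaussian clouds
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
Sigma = np.array([[2.0, 1.8], [1.8, 2.0]])
L = np.linalg.cholesky(Sigma)
# two clouds with the same covariance, shifted along the x-axis
X = np.vstack([rng.standard_normal((50, 2)) @ L.T,
               rng.standard_normal((50, 2)) @ L.T + [4.0, 0.0]])
X = X - X.mean(axis=0)

labels = KMeans(n_clusters=2, n_init=100, random_state=0).fit_predict(X)
pc1 = PCA(n_components=1).fit_transform(X).ravel()
sign_labels = (pc1 > 0).astype(int)

# agreement up to an arbitrary swap of the two labels; typically high but not always 100%
agreement = max(np.mean(sign_labels == labels), np.mean(sign_labels != labels))
print(f"fraction of points where sign(PC1) matches K-means: {agreement:.2f}")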
1,621
What is the relation between k-means clustering and PCA?
PCA and K-means do different things.

PCA is used for dimensionality reduction / feature selection / representation learning, e.g. when the feature space contains too many irrelevant or redundant features. The aim is to find the intrinsic dimensionality of the data.

Here's a two-dimensional example that can be generalized to higher-dimensional spaces. The dataset has two features, $x$ and $y$, and every circle is a data point. In the image $v1$ has a larger magnitude than $v2$; these are the eigenvectors. The dimension of the data is reduced from two dimensions to one (not much choice in this case), and this is done by keeping only the projection onto $v1$ and discarding the component along $v2$ (after a rotation where the eigenvectors become parallel to the coordinate axes). This works because $v2$ is orthogonal to the direction of largest variance. One way to think of it is minimal loss of information. (There is still a loss, since one coordinate axis is lost.)

K-means is a clustering algorithm that returns the natural grouping of data points, based on their similarity. It can be viewed as a special case of Gaussian Mixture Models.

In the image below the dataset has three dimensions. It can be seen from the 3D plot on the left that the $X$ dimension can be 'dropped' without losing much information. PCA is used to project the data onto two dimensions. In the figure to the left, the projection plane is also shown. Then, K-means can be used on the projected data to label the different groups; in the figure on the right, they are coded with different colors.

PCA and other dimensionality reduction techniques are used before both unsupervised and supervised methods in machine learning. In addition to the reasons outlined by you and the ones I mentioned above, they are also used for visualization purposes (projection to 2D or 3D from higher dimensions).

As to the article, I don't believe there is any connection: PCA has no information regarding the natural grouping of data and operates on the entire data, not subsets (groups). If some groups can be explained by one eigenvector (just because that particular cluster is spread along that direction), it is just a coincidence and shouldn't be taken as a general rule.

"PCA aims at compressing the T features whereas clustering aims at compressing the N data-points."

Indeed, compression is an intuitive way to think about PCA. However, in K-means, to describe each point relative to its cluster you still need at least the same amount of information (e.g. dimensions): $x_i = d( \mu_i, \delta_i) $, where $d$ is the distance and $\delta_i$ is stored instead of $x_i$. And you also need to store the $\mu_i$ to know what the delta is relative to. You can of course store only $d$ and $i$, but then you will be unable to retrieve the actual information in the data.

Clustering really adds information. I think of it as splitting the data into natural groups (that don't necessarily have to be disjoint) without knowing what the label for each group means (well, until you look at the data within the groups).
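To make the "PCA first, then K-means on the projected data" workflow from the 3D example concrete, here is a minimal sketch with scikit-learn; the synthetic blobs and the choice of two retained components are illustrative assumptions, not the data from the figures.

# Reduce 3D data to 2 principal components, then cluster the projected points
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# three Gaussian blobs in 3D; one dimension carries little cluster information
X, _ = make_blobs(n_samples=300, n_features=3, centers=3, random_state=0)

X2 = PCA(n_components=2).fit_transform(X)                       # project onto the first two PCs
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

print(X2.shape, np.bincount(labels))                            # (300, 2) and the cluster sizes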
1,622
What is the relation between k-means clustering and PCA?
It is common to whiten data before using k-means. The reason is that k-means is extremely sensitive to scale, and when you have mixed attributes there is no "true" scale anymore. Then you have to normalize, standardize, or whiten your data. None of these is perfect, but whitening will remove global correlation, which can sometimes give better results. PCA/whitening is $O(n\cdot d^2 + d^3)$ since you operate on the covariance matrix.

To my understanding, the relationship of k-means to PCA is not on the original data. It is about using PCA on the distance matrix (which has $n^2$ entries, so doing full PCA is $O(n^2\cdot d+n^3)$, i.e. prohibitively expensive, in particular compared to k-means, which is $O(k\cdot n \cdot i\cdot d)$ where $n$ is the only large term), and maybe only for $k=2$. K-means is a least-squares optimization problem, and so is PCA. k-means tries to find the least-squares partition of the data. PCA finds the least-squares cluster membership vector.

The first eigenvector has the largest variance, therefore splitting on this vector (which resembles cluster membership, not input data coordinates!) means maximizing between-cluster variance. By maximizing between-cluster variance, you minimize within-cluster variance, too. But for real problems, this is useless. It is only of theoretical interest.
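As a concrete illustration of the whitening step, here is a minimal sketch that PCA-whitens the data before running k-means; the random two-feature data and the cluster count are placeholder assumptions.

# PCA-whiten mixed-scale features, then run k-means on the whitened data
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# mixed-scale features: the second column is on a much larger scale than the first
X = np.column_stack([rng.normal(0, 1, 500), rng.normal(0, 100, 500)])

Xw = PCA(whiten=True).fit_transform(X)    # decorrelate and rescale to unit variance
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(Xw)

print(Xw.std(axis=0))                     # roughly [1, 1] after whitening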
1,623
What is the relation between k-means clustering and PCA?
Solving k-means on its $O(k/\epsilon)$ low-rank approximation (i.e., projecting onto the span of the first largest singular vectors, as in PCA) would yield a $(1+\epsilon)$ approximation in terms of multiplicative error. In particular, projecting onto the $k$ largest singular vectors would yield a 2-approximation. In fact, the sum of squared distances for ANY set of $k$ centers can be approximated by this projection. Then we can compute a coreset on the reduced data to reduce the input to $\mathrm{poly}(k/\epsilon)$ points that approximates this sum. See: Dan Feldman, Melanie Schmidt, Christian Sohler: Turning big data into tiny data: Constant-size coresets for k-means, PCA and projective clustering. SODA 2013: 1434-1453
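As a rough sketch of the projection step described here (without the coreset construction), one can project the centered data onto its top singular vectors and then cluster the reduced representation; the rank and the random data below are illustrative assumptions only.

# Cluster on a rank-k projection of the data (top singular vectors, as in PCA)
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 50))          # placeholder data matrix
k = 5

Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
X_proj = Xc @ Vt[:k].T                       # n x k reduced representation

labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_proj)
print(X_proj.shape, np.bincount(labels))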
1,624
What is the relation between k-means clustering and PCA?
Intuitive relationship of PCA and K-means

Theoretically, a PCA dimensional analysis (keeping, say, the first K dimensions that retain 90% of the variance) does not need to have a direct relationship with the K-means clusters. However, the value of using PCA comes from:

a) a practical consideration, given that the objects we analyse tend to naturally cluster around/evolve from (a certain segment of) their principal components (age, gender, ...);

b) PCA eliminating the low-variance dimensions (noise), so it adds value on its own (and in a sense similar to clustering) by focusing on those key dimensions. In simple terms, it is just like the X-Y axes that help us master abstract mathematical concepts, only in a more advanced manner.

K-means tries to minimize the overall within-cluster distance for a given K.

For a set of objects with N-dimensional parameters, similar objects will by default have MOST parameters "similar" except for a few key differences (e.g. a group of young IT students, young dancers, humans, ... will have some highly similar features (low variance) but a few key features that are still quite diverse), and capturing those key principal components essentially captures the majority of the variance, e.g. color, area of residence, ... Hence there is low distortion if we neglect the features with minor differences, i.e. the conversion to the lower PCs will not lose much information.

It is thus "very likely" and "very natural" that grouping the objects together to look at the differences (variations) makes sense for data evaluation (e.g. if you run 1,000 surveys in a week on the main street, clustering them by ethnicity, age, or educational background as PCs makes sense).

Under K-means' mission, we try to establish a fair number K so that the group elements (in a cluster) have the overall smallest (minimized) distance from their centroid, while the cost to establish and run the K clusters stays optimal (treating each member as its own cluster does not make sense, as that would be too costly to maintain and of no value).

A K-means grouping can easily be "visually inspected" to be optimal if such K is along the principal components (e.g. if people in different age, ethnic or religious clusters tend to express similar opinions, then clustering those surveys based on those PCs achieves the minimization goal) (ref. 1). Also, those PCs (ethnicity, age, religion, ...) are quite often orthogonal, hence visually distinct when viewing the PCA.

However, this intuitive deduction leads to a sufficient but not a necessary condition. (Ref. 2: "However, that PCA is a useful relaxation of k-means clustering was not a new result (see, for example, [35]), and it is straightforward to uncover counterexamples to the statement that the cluster centroid subspace is spanned by the principal directions. [36]")

Choosing clusters based on / along the PCs may comfortably lead to a comfortable allocation mechanism. This could be an example, if x is the first PC along the X axis:

(...........CC1...............CC2............CC3 X axis)

where the X axis, say, captures over 9X% of the variance and, say, is the only PC.

Finally, PCA is also used to visualize the result after K-means is done (ref. 4). If the PCA display shows our K clusters to be orthogonal or close to it, then it is a sign that our clustering is sound, with each cluster exhibiting unique characteristics (since by definition PCA finds/displays the major dimensions (1D to 3D) that capture the vast majority of the variance).

So PCA is useful both in visualizing and confirming a good clustering, as well as being an intrinsically useful element in determining K-means clustering, to be used prior to or after the K-means.

References:

1. https://msdn.microsoft.com/en-us/library/azure/dn905944.aspx
2. https://en.wikipedia.org/wiki/Principal_component_analysis
3. Combes & Azema, Clustering using principal component analysis: application of elderly people autonomy-disability
4. Andrew Ng, http://cs229.stanford.edu/notes/cs229-notes10.pdf
1,625
What is the relation between k-means clustering and PCA?
In a recent paper, we found that PCA is able to compress the Euclidean distance of intra-cluster pairs while preserving the Euclidean distance of inter-cluster pairs. Notice that K-means aims to minimize the Euclidean distance to the centers. Hence the compressibility of PCA helps a lot. This phenomenon can also be proved theoretically for random matrix models. Please see our paper: "Compressibility: Power of PCA in Clustering Problems Beyond Dimensionality Reduction", Chandra Sekhar Mukherjee and Jiapeng Zhang, https://arxiv.org/abs/2204.10888
1,626
Mean absolute error OR root mean squared error?
This depends on your loss function. In many circumstances it makes sense to give more weight to points further away from the mean--that is, being off by 10 is more than twice as bad as being off by 5. In such cases RMSE is a more appropriate measure of error. If being off by ten is just twice as bad as being off by 5, then MAE is more appropriate. In any case, it doesn't make sense to compare RMSE and MAE to each other as you do in your second-to-last sentence ("MAE gives a lower error than RMSE"). MAE will never be higher than RMSE because of the way they are calculated. They only make sense in comparison to the same measure of error: you can compare RMSE for Method 1 to RMSE for Method 2, or MAE for Method 1 to MAE for Method 2, but you can't say MAE is better than RMSE for Method 1 because it's smaller.
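A tiny numerical check of both points made above (RMSE weights large errors more heavily, and MAE never exceeds RMSE); the error vectors are made up purely for illustration.

# MAE vs RMSE on two made-up error vectors with the same total absolute error
import numpy as np

def mae(e):
    return np.mean(np.abs(e))

def rmse(e):
    return np.sqrt(np.mean(np.square(e)))

errors_a = np.array([5.0, 5.0, 5.0, 5.0])     # evenly spread errors
errors_b = np.array([0.0, 0.0, 10.0, 10.0])   # same MAE, but concentrated in large misses

for e in (errors_a, errors_b):
    print(f"MAE = {mae(e):.2f}, RMSE = {rmse(e):.2f}")
# MAE is 5 in both cases; RMSE is 5 for the first but about 7.07 for the second,
# and MAE <= RMSE in both.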
1,627
Mean absolute error OR root mean squared error?
Here is another situation when you want to use (R)MSE instead of MAE: when your observations' conditional distribution is asymmetric and you want an unbiased fit. The (R)MSE is minimized by the conditional mean, the MAE by the conditional median. So if you minimize the MAE, the fit will be closer to the median and biased. Of course, all this really depends on your loss function. The same problem occurs if you are using the MAE or (R)MSE to evaluate predictions or forecasts. For instance, low volume sales data typically have an asymmetric distribution. If you optimize the MAE, you may be surprised to find that the MAE-optimal forecast is a flat zero forecast. Here is a little presentation covering this, and here is a recent invited commentary on the M4 forecasting competition where I explained this effect.
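A small simulation of this effect: for a skewed distribution the MSE-optimal constant forecast is near the mean while the MAE-optimal one is the median, which for low-volume counts can indeed be zero. The Poisson setup below is my own illustrative assumption, not data from the linked presentation or commentary.

# MSE-optimal vs MAE-optimal constant forecasts for skewed count data
import numpy as np

rng = np.random.default_rng(0)
sales = rng.poisson(lam=0.4, size=10_000)     # low-volume, highly asymmetric demand

candidates = np.linspace(0, 3, 301)
mse = [(np.mean((sales - c) ** 2), c) for c in candidates]
mae = [(np.mean(np.abs(sales - c)), c) for c in candidates]

print("MSE-optimal forecast:", min(mse)[1])   # close to the mean (about 0.4)
print("MAE-optimal forecast:", min(mae)[1])   # the median, which is 0 here
print("mean:", sales.mean(), "median:", np.median(sales))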
1,628
Mean absolute error OR root mean squared error?
RMSE is a more natural way of describing loss in terms of Euclidean distance. Therefore, if you graph it out in 3D, the loss surface is cone-shaped, as you can see above in green. This also applies to higher dimensions, although it's harder to visualize. MAE can be thought of as city-block (Manhattan) distance. It isn't really as natural a way to measure loss, as you can see in the graph in blue.
1,629
Mean absolute error OR root mean squared error?
In short, if there are many outliers then you may consider using the Mean Absolute Error (also called the Average Absolute Deviation). RMSE is more sensitive to outliers than the MAE. But when outliers are exponentially rare (like in a bell-shaped curve), the RMSE performs very well and is generally preferred. Both the RMSE and the MAE are ways to measure the distance between two vectors: the vector of predictions and the vector of target values. MAE corresponds to the $\ell_1$ norm or Manhattan norm, while RMSE corresponds to the $\ell_2$ norm or Euclidean norm. The higher the norm index, the more it focuses on large values and neglects small ones.
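A tiny numerical illustration of this outlier sensitivity; the residual values are made up for the example.

# How one outlier moves RMSE much more than MAE
import numpy as np

clean = np.array([1.0, -2.0, 1.5, -0.5, 2.0])
with_outlier = np.append(clean, 30.0)         # a single large residual

for name, e in [("clean", clean), ("with outlier", with_outlier)]:
    mae = np.mean(np.abs(e))
    rmse = np.sqrt(np.mean(e ** 2))
    print(f"{name:12s}  MAE = {mae:5.2f}   RMSE = {rmse:5.2f}")
# The outlier increases MAE by roughly a factor of 4 here, but inflates RMSE far more.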
1,630
Mean absolute error OR root mean squared error?
When prediction is less focal than parameter estimation, the Gauss-Markov theorem might be relevant: In a linear model with spherical errors, OLS - the solution to the MSE minimization problem - is efficient in a class of linear unbiased estimators - there are (restrictive, to be sure) conditions under which "you can't do better than OLS". I am not arguing this should justify using OLS almost all of the time, but it sure contributes to why (especially since it is a good excuse to focus so much on OLS in teaching).
1,631
What is an embedding layer in a neural network?
Relation to Word2Vec
==========================================

Word2Vec in a simple picture:

More in-depth explanation:

I believe it's related to the recent Word2Vec innovation in natural language processing. Roughly, Word2Vec means our vocabulary is discrete and we will learn a map which will embed each word into a continuous vector space. Using this vector space representation will allow us to have a continuous, distributed representation of our vocabulary words. If for example our dataset consists of n-grams, we may now use our continuous word features to create a distributed representation of our n-grams. In the process of training a language model we will learn this word embedding map. The hope is that by using a continuous representation, our embedding will map similar words to similar regions. For example, in the landmark paper Distributed Representations of Words and Phrases and their Compositionality, observe in Tables 6 and 7 that certain phrases have very good nearest-neighbour phrases from a semantic point of view. Transforming into this continuous space allows us to use continuous metric notions of similarity to evaluate the semantic quality of our embedding.

Explanation using Lasagne code

Let's break down the Lasagne code snippet:

x = T.imatrix()

x is a matrix of integers. Okay, no problem. Each word in the vocabulary can be represented as an integer, or as a 1-hot sparse encoding. So if x is 2x2, we have two datapoints, each being a 2-gram.

l_in = InputLayer((3, ))

The input layer. The 3 represents the size of our vocabulary. So we have words $w_0, w_1, w_2$ for example.

W = np.arange(3*5).reshape((3, 5)).astype('float32')

This is our word embedding matrix. It is a 3-row by 5-column matrix with entries 0 to 14.

Up until now we have the following interpretation. Our vocabulary has 3 words and we will embed our words into a 5-dimensional vector space. For example, we may represent one word $w_0 = (1,0,0)$, another word $w_1 = (0, 1, 0)$ and the other word $w_2 = (0, 0, 1)$, e.g. as one-hot sparse encodings. We can view the $W$ matrix as embedding these words via matrix multiplication. Therefore the first word $w_0 \rightarrow w_0W = [0, 1, 2, 3, 4].$ Similarly $w_1 \rightarrow w_1W = [5, 6, 7, 8, 9]$. It should be noted that, due to the one-hot sparse encoding we are using, you also see this referred to as a table lookup.

l1 = EmbeddingLayer(l_in, input_size=3, output_size=5, W=W)

The embedding layer.

output = get_output(l1, x)

Symbolic Theano expression for the embedding.

f = theano.function([x], output)

Theano function which computes the embedding.

x_test = np.array([[0, 2], [1, 2]]).astype('int32')

It's worth pausing here to discuss what exactly x_test means. First notice that all of the x_test entries are in {0, 1, 2}, i.e. range(3). x_test has 2 datapoints. The first datapoint [0, 2] represents the 2-gram $(w_0, w_2)$ and the second datapoint represents the 2-gram $(w_1, w_2)$.

We wish to embed our 2-grams using our word embedding layer now. Before we do that, let's make sure we're clear about what should be returned by our embedding function f. The 2-gram $(w_0, w_2)$ is equivalent to a [[1, 0, 0], [0, 0, 1]] matrix. Applying our embedding matrix W to this sparse matrix should yield: [[0, 1, 2, 3, 4], [10, 11, 12, 13, 14]]. Note that in order for the matrix multiplication to work out, we have to apply the word embedding matrix $W$ via right multiplication to the sparse matrix representation of our 2-gram.

f(x_test)

returns:

array([[[  0.,   1.,   2.,   3.,   4.],
        [ 10.,  11.,  12.,  13.,  14.]],

       [[  5.,   6.,   7.,   8.,   9.],
        [ 10.,  11.,  12.,  13.,  14.]]], dtype=float32)

To convince you that the 3 does indeed represent the vocabulary size, try inputting a matrix x_test = [[5, 0], [1, 2]]. You will see that it raises a matrix mismatch error.
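The "table lookup equals one-hot times $W$" point can be verified directly in NumPy, independently of Lasagne/Theano; this small sketch just reuses the same toy matrix W.

# An embedding lookup is a one-hot encoding multiplied by W
import numpy as np

W = np.arange(3 * 5).reshape(3, 5).astype("float32")    # vocabulary of 3, embedding dim 5
x_test = np.array([[0, 2], [1, 2]])                     # two 2-grams of word indices

# direct table lookup (what an embedding layer does)
lookup = W[x_test]

# the same thing via one-hot vectors and matrix multiplication
one_hot = np.eye(3, dtype="float32")[x_test]            # shape (2, 2, 3)
matmul = one_hot @ W                                    # shape (2, 2, 5)

print(np.allclose(lookup, matmul))                      # True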
1,632
What is an embedding layer in a neural network?
In https://stackoverflow.com/questions/45649520/explain-with-example-how-embedding-layers-in-keras-works/ I tried to prepare an example using 2 sentences, Keras's texts_to_sequences

'This is a text' --> [0 0 1 2 3 4]

and an embedding layer. Based on How does Keras 'Embedding' layer work?, the embedding layer first initializes the embedding vectors at random and then uses the network optimizer to update them, just as it would for any other layer in Keras.

[0 0 1 2 3 4] -->

[-0.01494285, -0.007915  ,  0.01764857],
[-0.01494285, -0.007915  ,  0.01764857],
[-0.03019481, -0.02910612,  0.03518577],
[-0.0046863 ,  0.04763055, -0.02629668],
[ 0.02297204,  0.02146662,  0.03114786],
[ 0.01634104,  0.02296363, -0.02348827]

The above would be some initial embedding vectors for a sentence of (maximum) 6 words and an output_dim of 3.
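For completeness, here is a minimal sketch of such an embedding layer in Keras; the vocabulary size of 10 is an assumption, and the output vectors are random until the layer is trained as part of a model.

# An untrained Keras Embedding layer mapping 6 word indices to 3-dimensional vectors
import numpy as np
import tensorflow as tf

vocab_size = 10       # assumed vocabulary size
embedding_dim = 3

layer = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=embedding_dim)
sentence = np.array([[0, 0, 1, 2, 3, 4]])     # one sentence of 6 word indices

vectors = layer(sentence)                     # shape (1, 6, 3), random until trained
print(vectors.shape)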
1,633
Difference between standard error and standard deviation
To complete the answer to the question, Ocram nicely addressed standard error but did not contrast it to standard deviation and did not mention the dependence on sample size. As a special case for the estimator, consider the sample mean. The standard error for the mean is $\sigma \, / \, \sqrt{n}$ where $\sigma$ is the population standard deviation. So in this example we see explicitly how the standard error decreases with increasing sample size. The standard deviation is most often used to refer to the individual observations. So the standard deviation describes the variability of the individual observations, while the standard error shows the variability of the estimator. Good estimators are consistent, which means that they converge to the true parameter value. In most cases this happens because the standard error goes to 0 as the sample size increases, as we see explicitly with the standard error of the sample mean.
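A quick simulation check of the $\sigma / \sqrt{n}$ formula; the population parameters and sample size are arbitrary choices for illustration.

# The standard deviation of sample means matches sigma / sqrt(n)
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 10.0, 25

# draw many samples of size n and record each sample mean
sample_means = rng.normal(loc=50.0, scale=sigma, size=(100_000, n)).mean(axis=1)

print("empirical SD of sample means:", sample_means.std())    # about 2.0
print("theoretical sigma / sqrt(n): ", sigma / np.sqrt(n))    # exactly 2.0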
1,634
Difference between standard error and standard deviation
Here is a more practical (and not mathematical) answer: The SD (standard deviation) quantifies scatter — how much the values vary from one another. The SEM (standard error of the mean) quantifies how precisely you know the true mean of the population. It takes into account both the value of the SD and the sample size. Both SD and SEM are in the same units -- the units of the data. The SEM, by definition, is always smaller than the SD. The SEM gets smaller as your samples get larger. This makes sense, because the mean of a large sample is likely to be closer to the true population mean than is the mean of a small sample. With a huge sample, you'll know the value of the mean with a lot of precision even if the data are very scattered. The SD does not change predictably as you acquire more data. The SD you compute from a sample is the best possible estimate of the SD of the overall population. As you collect more data, you'll assess the SD of the population with more precision. But you can't predict whether the SD from a larger sample will be bigger or smaller than the SD from a small sample. (This is a simplification, not quite true. See comments below.) Note that standard errors can be computed for almost any parameter you compute from data, not just the mean. The phrase "the standard error" is a bit ambiguous. The points above refer only to the standard error of the mean. (From the GraphPad Statistics Guide that I wrote.)
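The contrast described above (the SEM shrinks as the sample grows, while the SD does not change predictably) can be seen in a small simulation; the normal population used here is an arbitrary illustration.

# SD stays roughly constant as n grows, while SEM shrinks
import numpy as np

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    x = rng.normal(loc=0.0, scale=5.0, size=n)
    sd = x.std(ddof=1)
    sem = sd / np.sqrt(n)
    print(f"n = {n:6d}   SD = {sd:5.2f}   SEM = {sem:6.3f}")
# SD hovers around 5 at every n; SEM keeps decreasing as n increases.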
1,635
Difference between standard error and standard deviation
Let $\theta$ be your parameter of interest for which you want to make inference. To do this, you have available to you a sample of observations $\mathbf{x} = \{x_1, \ldots, x_n \}$ along with some technique to obtain an estimate of $\theta$, $\hat{\theta}(\mathbf{x})$. In this notation, I have made explicit that $\hat{\theta}(\mathbf{x})$ depends on $\mathbf{x}$. Indeed, if you had had another sample, $\tilde{\mathbf{x}}$, you would have ended up with another estimate, $\hat{\theta}(\tilde{\mathbf{x}})$. This makes $\hat{\theta}(\mathbf{x})$ a realisation of a random variable which I denote $\hat{\theta}$. This random variable is called an estimator. The standard error of $\hat{\theta}(\mathbf{x})$ (=estimate) is the standard deviation of $\hat{\theta}$ (=random variable). It contains the information on how confident you are about your estimate. If it is large, it means that you could have obtained a totally different estimate if you had drawn another sample. The standard error is used to construct confidence intervals.
Difference between standard error and standard deviation
Let $\theta$ be your parameter of interest for which you want to make inference. To do this, you have available to you a sample of observations $\mathbf{x} = \{x_1, \ldots, x_n \}$ along with some tec
Difference between standard error and standard deviation Let $\theta$ be your parameter of interest for which you want to make inference. To do this, you have available to you a sample of observations $\mathbf{x} = \{x_1, \ldots, x_n \}$ along with some technique to obtain an estimate of $\theta$, $\hat{\theta}(\mathbf{x})$. In this notation, I have made explicit that $\hat{\theta}(\mathbf{x})$ depends on $\mathbf{x}$. Indeed, if you had had another sample, $\tilde{\mathbf{x}}$, you would have ended up with another estimate, $\hat{\theta}(\tilde{\mathbf{x}})$. This makes $\hat{\theta}(\mathbf{x})$ a realisation of a random variable which I denote $\hat{\theta}$. This random variable is called an estimator. The standard error of $\hat{\theta}(\mathbf{x})$ (=estimate) is the standard deviation of $\hat{\theta}$ (=random variable). It contains the information on how confident you are about your estimate. If it is large, it means that you could have obtained a totally different estimate if you had drawn another sample. The standard error is used to construct confidence intervals.
Difference between standard error and standard deviation Let $\theta$ be your parameter of interest for which you want to make inference. To do this, you have available to you a sample of observations $\mathbf{x} = \{x_1, \ldots, x_n \}$ along with some tec
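To make the last sentence of the answer above concrete, here is a minimal R sketch (my own addition; it assumes the estimator is the sample mean and uses the usual normal approximation, so the 1.96 multiplier is an assumption rather than part of the original answer):

set.seed(2)
x <- rnorm(25, mean = 10, sd = 3)        # hypothetical sample
theta_hat <- mean(x)                     # the estimate
se <- sd(x) / sqrt(length(x))            # its standard error
c(lower = theta_hat - 1.96 * se,
  estimate = theta_hat,
  upper = theta_hat + 1.96 * se)         # approximate 95% confidence interval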
1,636
Difference between standard error and standard deviation
(note that I'm focusing on standard error of the mean, which I believe the questioner was as well, but you can generate a standard error for any sample statistic) The standard error is related to the standard deviation but they are not the same thing and increasing sample size does not make them closer together. Rather, it makes them farther apart. The standard deviation of the sample becomes closer to the population standard deviation as sample size increases but not the standard error. Sometimes the terminology around this is a bit thick to get through. When you gather a sample and calculate the standard deviation of that sample, as the sample grows in size the estimate of the standard deviation gets more and more accurate. It seems from your question that was what you were thinking about. But also consider that the mean of the sample tends to be closer to the population mean on average. That's critical for understanding the standard error. The standard error is about what would happen if you got multiple samples of a given size. If you take a sample of 10 you can get some estimate of the mean. Then you take another sample of 10 and get a new mean estimate, and so on. The standard deviation of the means of those samples is the standard error. Given that you posed your question you can probably see now that if the N is high then the standard error is smaller, because the means of samples will be less likely to deviate much from the true value. To some that sounds kind of miraculous given that you've calculated this from one sample. So, what you could do is bootstrap a standard error through simulation to demonstrate the relationship. In R that would look like:

# the size of a sample
n <- 10
# set true mean and standard deviation values
m <- 50
s <- 100
# now generate lots and lots of samples with mean m and standard deviation s
# and get the means of those samples. Save them in y.
y <- replicate( 10000, mean( rnorm(n, m, s) ) )
# standard deviation of those means
sd(y)
# calculation of theoretical standard error
s / sqrt(n)

You'll find that those last two commands generate the same number (approximately). You can vary the n, m, and s values and they'll always come out pretty close to each other.
Difference between standard error and standard deviation
(note that I'm focusing on standard error of the mean, which I believe the questioner was as well, but you can generate a standard error for any sample statistic) The standard error is related to the
Difference between standard error and standard deviation (note that I'm focusing on standard error of the mean, which I believe the questioner was as well, but you can generate a standard error for any sample statistic) The standard error is related to the standard deviation but they are not the same thing and increasing sample size does not make them closer together. Rather, it makes them farther apart. The standard deviation of the sample becomes closer to the population standard deviation as sample size increases but not the standard error. Sometimes the terminology around this is a bit thick to get through. When you gather a sample and calculate the standard deviation of that sample, as the sample grows in size the estimate of the standard deviation gets more and more accurate. It seems from your question that was what you were thinking about. But also consider that the mean of the sample tends to be closer to the population mean on average. That's critical for understanding the standard error. The standard error is about what would happen if you got multiple samples of a given size. If you take a sample of 10 you can get some estimate of the mean. Then you take another sample of 10 and new mean estimate, and so on. The standard deviation of the means of those samples is the standard error. Given that you posed your question you can probably see now that if the N is high then the standard error is smaller because the means of samples will be less likely to deviate much from the true value. To some that sounds kind of miraculous given that you've calculated this from one sample. So, what you could do is bootstrap a standard error through simulation to demonstrate the relationship. In R that would look like: # the size of a sample n <- 10 # set true mean and standard deviation values m <- 50 s <- 100 # now generate lots and lots of samples with mean m and standard deviation s # and get the means of those samples. Save them in y. y <- replicate( 10000, mean( rnorm(n, m, s) ) ) # standard deviation of those means sd(y) # calcuation of theoretical standard error s / sqrt(n) You'll find that those last two commands generate the same number (approximately). You can vary the n, m, and s values and they'll always come out pretty close to each other.
Difference between standard error and standard deviation (note that I'm focusing on standard error of the mean, which I believe the questioner was as well, but you can generate a standard error for any sample statistic) The standard error is related to the
1,637
Why do we need to normalize data before principal component analysis (PCA)? [duplicate]
Normalization is important in PCA since it is a variance maximizing exercise. It projects your original data onto directions which maximize the variance. The first plot below shows the amount of total variance explained in the different principal components where we have not normalized the data. As you can see, it seems like component one explains most of the variance in the data. If you look at the second picture, we have normalized the data first. Here it is clear that the other components contribute as well. The reason for this is that PCA seeks to maximize the variance of each component. And since the covariance matrix of this particular dataset is:

             Murder    Assault   UrbanPop      Rape
Murder    18.970465   291.0624   4.386204  22.99141
Assault  291.062367  6945.1657 312.275102 519.26906
UrbanPop   4.386204   312.2751 209.518776  55.76808
Rape      22.991412   519.2691  55.768082  87.72916

From this structure, PCA will choose to project as much as possible in the direction of Assault, since that variance is much greater. So for finding features usable for any kind of model, a PCA without normalization would perform worse than one with normalization.
Why do we need to normalize data before principal component analysis (PCA)? [duplicate]
Normalization is important in PCA since it is a variance maximizing exercise. It projects your original data onto directions which maximize the variance. The first plot below shows the amount of total
Why do we need to normalize data before principal component analysis (PCA)? [duplicate] Normalization is important in PCA since it is a variance maximizing exercise. It projects your original data onto directions which maximize the variance. The first plot below shows the amount of total variance explained in the different principal components wher we have not normalized the data. As you can see, it seems like component one explains most of the variance in the data. If you look at the second picture, we have normalized the data first. Here it is clear that the other components contribute as well. The reason for this is because PCA seeks to maximize the variance of each component. And since the covariance matrix of this particular dataset is: Murder Assault UrbanPop Rape Murder 18.970465 291.0624 4.386204 22.99141 Assault 291.062367 6945.1657 312.275102 519.26906 UrbanPop 4.386204 312.2751 209.518776 55.76808 Rape 22.991412 519.2691 55.768082 87.72916 From this structure, the PCA will select to project as much as possible in the direction of Assault since that variance is much greater. So for finding features usable for any kind of model, a PCA without normalization would perform worse than one with normalization.
Why do we need to normalize data before principal component analysis (PCA)? [duplicate] Normalization is important in PCA since it is a variance maximizing exercise. It projects your original data onto directions which maximize the variance. The first plot below shows the amount of total
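The covariance matrix printed in the answer above matches R's built-in USArrests data, so - assuming that is indeed the dataset used - the effect of normalization can be reproduced with prcomp (a sketch, not part of the original answer):

diag(var(USArrests))                        # raw variances: Assault dominates
summary(prcomp(USArrests))                  # unscaled PCA: PC1 explains almost all the variance
summary(prcomp(USArrests, scale. = TRUE))   # scaled to unit variance: variance spreads over the PCs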
1,638
Why do we need to normalize data before principal component analysis (PCA)? [duplicate]
The term normalization is used in many contexts, with distinct, but related, meanings. Basically, normalizing means transforming so as to render normal. When data are seen as vectors, normalizing means transforming the vector so that it has unit norm. When data are thought of as random variables, normalizing means transforming to a normal distribution. When the data are hypothesized to be normal, normalizing means transforming to unit variance.
Why do we need to normalize data before principal component analysis (PCA)? [duplicate]
The term normalization is used in many contexts, with distinct, but related, meanings. Basically, normalizing means transforming so as to render normal. When data are seen as vectors, normalizing mean
Why do we need to normalize data before principal component analysis (PCA)? [duplicate] The term normalization is used in many contexts, with distinct, but related, meanings. Basically, normalizing means transforming so as to render normal. When data are seen as vectors, normalizing means transforming the vector so that it has unit norm. When data are though of as random variables, normalizing means transforming to normal distribution. When the data are hypothesized to be normal, normalizing means transforming to unit variance.
Why do we need to normalize data before principal component analysis (PCA)? [duplicate] The term normalization is used in many contexts, with distinct, but related, meanings. Basically, normalizing means transforming so as to render normal. When data are seen as vectors, normalizing mean
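A small R sketch of the three senses listed in the answer above (my own illustration; the rank-based normal-scores transform in the last line is just one common way to realise the "transform to normal" sense):

x <- c(2, -1, 3, 0.5)                 # an arbitrary data vector
x / sqrt(sum(x^2))                    # vector sense: rescale to unit (Euclidean) norm
(x - mean(x)) / sd(x)                 # unit-variance sense: z-scores (mean 0, sd 1)
qnorm(rank(x) / (length(x) + 1))      # random-variable sense: map ranks to normal quantiles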
1,639
Detecting a given face in a database of facial images
A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
Detecting a given face in a database of facial images
A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
Detecting a given face in a database of facial images A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
Detecting a given face in a database of facial images A better idea might be to trash all images that appear in the feed of more than one user - no recognition needed.
1,640
Detecting a given face in a database of facial images
I have a feeling that http://www.tineye.com/commercial_api may be the solution here. Simply throw the Twitter profile image to Tineye, see if it returns images (and associated URLs) that can clearly be identified (or automatically scored using simple word-count logic) as being related to (or of) that little sack of **. Simples!
Detecting a given face in a database of facial images
I have a feeling that http://www.tineye.com/commercial_api may be the solution here. Simply throw the Twitter profile image to Tineye, see if it returns images (and associated URLs) that can clearly b
Detecting a given face in a database of facial images I have a feeling that http://www.tineye.com/commercial_api may be the solution here. Simply throw the Twitter profile image to Tineye, see if it returns images (and associated URLs) that can clearly be identified (or automatically scored using simple word-count logic) as being related to (or of) that little sack of **. Simples!
Detecting a given face in a database of facial images I have a feeling that http://www.tineye.com/commercial_api may be the solution here. Simply throw the Twitter profile image to Tineye, see if it returns images (and associated URLs) that can clearly b
1,641
Detecting a given face in a database of facial images
Since you are able to filter to only those that are clear portrait photos, I'm assuming you have some method of feature generation to transform the raw images into features that are useful for machine learning purposes. If that's true, you could try to train a classification algorithm (there are lots of them: neural networks, etc.) by feeding the algorithm a bunch of known Bieber photos as well as a bunch of known non-Biebers. Once you have trained the model, it could be used to predict whether a new image is Bieber or not. This sort of supervised learning technique does require you to have data where you know the correct answer (Bieber or not), but those could probably be found from a Google image search. It also requires that you have the right sorts of features, and I don't know enough about image processing or your algorithm to know if that is a major drawback.
Detecting a given face in a database of facial images
Since you are able to filter to only those that are clear portrait photos, I'm assuming you have some method of feature generation to transform the raw images into features that are useful for machine
Detecting a given face in a database of facial images Since you are able to filter to only those that are clear portrait photos, I'm assuming you have some method of feature generation to transform the raw images into features that are useful for machine learning purposes. If that's true, you could try to train a classification algorithm (there are lots of them: neural networks, etc.) by feeding the algorithm a bunch of known Bieber photos as well as a bunch of known non-Biebers. Once you have trained the model, it could be used to predict whether a new image is Bieber or not. This sort of supervised learning technique does require you to have data where you know the correct answer (Bieber or not), but those could probably be found from a Google image search. It also requires that you have the right sorts of features, and I don't know enough about image processing or your algorithm to know if that is a major drawback.
Detecting a given face in a database of facial images Since you are able to filter to only those that are clear portrait photos, I'm assuming you have some method of feature generation to transform the raw images into features that are useful for machine
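A hedged R sketch of the supervised approach described above: it assumes you already have a numeric feature vector per portrait and a set of labelled examples (the feature extraction itself, which the answer flags as the hard part, is not shown; all data below are simulated placeholders):

set.seed(3)
d <- data.frame(matrix(rnorm(200 * 5), 200, 5))           # placeholder image features X1..X5
d$bieber <- rbinom(200, 1, plogis(d$X1))                  # placeholder labels for known examples
train <- d[1:150, ]; test <- d[151:200, ]
fit <- glm(bieber ~ ., data = train, family = binomial)   # any classifier would do here
p <- predict(fit, newdata = test, type = "response")      # probability the new image is Bieber
head(p > 0.5)                                             # flag likely matches for removal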
1,642
Detecting a given face in a database of facial images
You could use a method like eigenfaces, http://en.wikipedia.org/wiki/Eigenface. The following has a good walk-through of the procedure as well as links to different implementations. http://www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm From here it is common to use this in a classification approach: train a model and then predict cases. You could do this by training on a bunch of known celebrities, and if you predict a face from Twitter as one in your trained model of celebrities, remove it. Similar to this http://blog.cordiner.net/2010/12/02/eigenfaces-face-recognition-matlab/ This suffers from the need for constant amendments. Soon there will be a new Justin Bieber who won't be in your trained model, so you can't predict him. There is also a case like Whitney Houston: you may have never thought to add her before, but she may be a common image out of respect and admiration for a few weeks. You will not have the downside of baby pictures as mentioned above though. To overcome these problems you could use more of a hierarchical clustering approach: remove the first few clusters whose members are very close together if they reach a certain level of support, e.g. your first cluster gathers 15 items before a second one is even constructed. Now you don't have to worry about who is in your training model, but you will fall prey to the baby pictures issue.
Detecting a given face in a database of facial images
You could use a method like eigenfaces, http://en.wikipedia.org/wiki/Eigenface. The following has a good walk through of the procedure as well as links to different implementations. http://www.pages.
Detecting a given face in a database of facial images You could use a method like eigenfaces, http://en.wikipedia.org/wiki/Eigenface. The following has a good walk through of the procedure as well as links to different implementations. http://www.pages.drexel.edu/~sis26/Eigenface%20Tutorial.htm From here it is common to use this in a classification approach, train a model and then predict cases. You could do this by training on a bunch of known celebrities and if you predict a face from twitter as one in your trained model of celebrities, remove it. Similar to this http://blog.cordiner.net/2010/12/02/eigenfaces-face-recognition-matlab/ This suffers from constant amendments. Soon there will be a new Justin Bieber that wont be in your trained model, so you cant predict it. There is also a case like Whitney Houston, you may have never thought to add her before but she may be a common image out of respect and admiration for a few weeks. You will not have the downside of baby pictures as mentioned above though. To over come these problems you could use more of a hierarchical clustering approach. Removing the first few sets of clusters that are very close if they reach a certain level of support, your first cluster has 15 items before a second is constructed. Now you don't have to worry about whose in your training model but you will fall to the baby pictures issue.
Detecting a given face in a database of facial images You could use a method like eigenfaces, http://en.wikipedia.org/wiki/Eigenface. The following has a good walk through of the procedure as well as links to different implementations. http://www.pages.
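A rough R sketch of the eigenface idea described above (my own illustration; random numbers stand in for real pixel data, whereas in practice each row would be a vectorised, aligned face photo):

set.seed(4)
faces <- matrix(rnorm(100 * 64), nrow = 100)      # 100 training "images", 64 "pixels" each
pca <- prcomp(faces, center = TRUE)               # columns of pca$rotation are the eigenfaces
scores <- pca$x[, 1:10]                           # each image summarised by 10 coefficients
new_img <- rnorm(64)                              # an incoming profile picture, vectorised
new_score <- as.numeric((new_img - pca$center) %*% pca$rotation[, 1:10])
d <- sqrt(colSums((t(scores) - new_score)^2))     # distance to each training face
which.min(d)                                      # nearest known face; a small distance suggests a match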
1,643
Detecting a given face in a database of facial images
If you want to do it yourself, I would recommend using Intel's free and open source OpenCV (CV for computer vision) project. http://opencv.willowgarage.com/ http://oreilly.com/catalog/9780596516130
Detecting a given face in a database of facial images
If you want to do it yourself, I would recommend using Intel's free and open source OpenCV (CV for computer vision) project. http://opencv.willowgarage.com/ http://oreilly.com/catalog/9780596516130
Detecting a given face in a database of facial images If you want to do it yourself, I would recommend using Intel's free and open source OpenCV (CV for computer vision) project. http://opencv.willowgarage.com/ http://oreilly.com/catalog/9780596516130
Detecting a given face in a database of facial images If you want to do it yourself, I would recommend using Intel's free and open source OpenCV (CV for computer vision) project. http://opencv.willowgarage.com/ http://oreilly.com/catalog/9780596516130
1,644
Detecting a given face in a database of facial images
You need to apply an algorithm that detects which person the picture shows. You can build a model based on different portrait pictures of famous personalities and use classifiers to check whether a given picture matches one of your database pictures. You need to use a classifier based on different parameters linked to the face, like the distance between the eyes, or other parameters to raise the accuracy of your model. There is also skin analysis. The most important thing is to build a good classifier. This method can be vulnerable. But there is also a very good project working on face recognition http://opencv-code.com/Opencv_Face_Detection
Detecting a given face in a database of facial images
You need to put on an algorithm detecting which person that picture is referring to. You can build a model based on different portrait pictures of famous personality and use classifiers to ensure that
Detecting a given face in a database of facial images You need to put on an algorithm detecting which person that picture is referring to. You can build a model based on different portrait pictures of famous personality and use classifiers to ensure that this picture is referring to one of your database picture. You need to use a certain classifier based on different parameters liked to the face, like distance between eyes or other parameters to rise up the accuracy of your model. There is also skin analysis. The most important is to build a good classifier. This method can be vulnerable. But there is also a very good project working on face recognition http://opencv-code.com/Opencv_Face_Detection
Detecting a given face in a database of facial images You need to put on an algorithm detecting which person that picture is referring to. You can build a model based on different portrait pictures of famous personality and use classifiers to ensure that
1,645
Detecting a given face in a database of facial images
You could try locality sensitive hashing.
Detecting a given face in a database of facial images
You could try locality sensitive hashing.
Detecting a given face in a database of facial images You could try locality sensitive hashing.
Detecting a given face in a database of facial images You could try locality sensitive hashing.
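For completeness, a minimal sketch of one flavour of locality-sensitive hashing (random-hyperplane hashing for cosine similarity) in R; I am assuming this is the kind intended, and the 128-dimensional descriptors below are placeholders for whatever image features you extract. Vectors that land in the same bucket become candidate near-duplicates, so only those need a detailed comparison:

set.seed(5)
p <- 128                                    # length of each image descriptor (placeholder)
X <- matrix(rnorm(1000 * p), 1000, p)       # 1000 descriptors
H <- matrix(rnorm(p * 16), p, 16)           # 16 random hyperplanes
bits <- (X %*% H) > 0                       # sign of each projection gives one hash bit
codes <- apply(bits, 1, function(b) paste(as.integer(b), collapse = ""))
buckets <- table(codes)
head(buckets[buckets > 1])                  # buckets holding more than one candidate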
1,646
What is covariance in plain language?
Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. However, it is also often used informally as a general measure of how monotonically related two variables are. There are many useful intuitive explanations of covariance here. Regarding how covariance is related to each of the terms you mentioned: (1) Correlation is a scaled version of covariance that takes on values in $[-1,1]$ with a correlation of $\pm 1$ indicating perfect linear association and $0$ indicating no linear relationship. This scaling makes correlation invariant to changes in scale of the original variables (which Akavall points out and gives an example of, +1). The scaling constant is the product of the standard deviations of the two variables. (2) If two variables are independent, their covariance is $0$. But, having a covariance of $0$ does not imply the variables are independent. A well-known figure from Wikipedia (not reproduced here) shows several example plots of data that are not independent, but their covariances are $0$. One important special case is that if two variables are jointly normally distributed, then they are independent if and only if they are uncorrelated. Another special case is that pairs of Bernoulli variables are uncorrelated if and only if they are independent (thanks @cardinal). (3) The variance/covariance structure (often called simply the covariance structure) in repeated measures designs refers to the structure used to model the fact that repeated measurements on individuals are potentially correlated (and therefore are dependent) - this is done by modeling the entries in the covariance matrix of the repeated measurements. One example is the exchangeable correlation structure with constant variance, which specifies that each repeated measurement has the same variance, and all pairs of measurements are equally correlated. A better choice may be to specify a covariance structure that requires two measurements taken farther apart in time to be less correlated (e.g. an autoregressive model). Note that the term covariance structure arises more generally in many kinds of multivariate analyses where observations are allowed to be correlated.
What is covariance in plain language?
Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. Ho
What is covariance in plain language? Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. However, it is also often used informally as a general measure of how monotonically related two variables are. There are many useful intuitive explanations of covariance here. Regarding how covariance is related to each of the terms you mentioned: (1) Correlation is a scaled version of covariance that takes on values in $[-1,1]$ with a correlation of $\pm 1$ indicating perfect linear association and $0$ indicating no linear relationship. This scaling makes correlation invariant to changes in scale of the original variables, (which Akavall points out and gives an example of, +1). The scaling constant is the product of the standard deviations of the two variables. (2) If two variables are independent, their covariance is $0$. But, having a covariance of $0$ does not imply the variables are independent. This figure (from Wikipedia) $ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $ shows several example plots of data that are not independent, but their covariances are $0$. One important special case is that if two variables are jointly normally distributed, then they are independent if and only if they are uncorrelated. Another special case is that pairs of bernoulli variables are uncorrelated if and only if they are independent (thanks @cardinal). (3) The variance/covariance structure (often called simply the covariance structure) in repeated measures designs refers to the structure used to model the fact that repeated measurements on individuals are potentially correlated (and therefore are dependent) - this is done by modeling the entries in the covariance matrix of the repeated measurements. One example is the exchangeable correlation structure with constant variance which specifies that each repeated measurement has the same variance, and all pairs of measurements are equally correlated. A better choice may be to specify a covariance structure that requires two measurements taken farther apart in time to be less correlated (e.g. an autoregressive model). Note that the term covariance structure arises more generally in many kinds of multivariate analyses where observations are allowed to be correlated.
What is covariance in plain language? Covariance is a measure of how changes in one variable are associated with changes in a second variable. Specifically, covariance measures the degree to which two variables are linearly associated. Ho
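A small R illustration of point (2) above - zero covariance without independence - using a constructed example in the spirit of the Wikipedia figure (my own addition, not part of the original answer):

x <- seq(-3, 3, length.out = 1000)
y <- x^2            # y is a deterministic function of x, so the two are certainly not independent
cov(x, y)           # essentially zero (up to floating-point error): the relationship is not linear
cor(x, y)           # likewise essentially zero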
1,647
What is covariance in plain language?
Macro's answer is excellent, but I want to add more to a point of how covariance is related to correlation. Covariance doesn't really tell you about the strength of the relationship between the two variables, while correlation does. For example:

x = [1, 2, 3]
y = [4, 6, 10]
cov(x,y) = 2   #I am using population covariance here

Now let's change the scale, and multiply both x and y by 10

x = [10, 20, 30]
y = [40, 60, 100]
cov(x, y) = 200

Changing the scale should not increase the strength of the relationship, so we can adjust by dividing the covariances by standard deviations of x and y, which is exactly the definition of correlation coefficient. In both above cases correlation coefficient between x and y is 0.98198.
What is covariance in plain language?
Macro's answer is excellent, but I want to add more to a point of how covariance is related to correlation. Covariance doesn't really tell you about the strength of the relationship between the two va
What is covariance in plain language? Macro's answer is excellent, but I want to add more to a point of how covariance is related to correlation. Covariance doesn't really tell you about the strength of the relationship between the two variables, while correlation does. For example: x = [1, 2, 3] y = [4, 6, 10] cov(x,y) = 2 #I am using population covariance here Now let's change the scale, and multiply both x and y by 10 x = [10, 20, 30] y = [40, 60, 100] cov(x, y) = 200 Changing the scale should not increase the strength of the relationship, so we can adjust by dividing the covariances by standard deviations of x and y, which is exactly the definition of correlation coefficient. In both above cases correlation coefficient between x and y is 0.98198.
What is covariance in plain language? Macro's answer is excellent, but I want to add more to a point of how covariance is related to correlation. Covariance doesn't really tell you about the strength of the relationship between the two va
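The numbers in the answer above can be checked in R; note that R's cov() and cor() use the sample (n - 1) divisor, so the population covariance quoted there is recovered by multiplying by (n - 1)/n:

x <- c(1, 2, 3);  y <- c(4, 6, 10)
n <- length(x)
cov(x, y) * (n - 1) / n             # population covariance: 2
cov(10 * x, 10 * y) * (n - 1) / n   # after rescaling by 10: 200
cor(x, y)                           # 0.98198...
cor(10 * x, 10 * y)                 # unchanged by the rescaling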
1,648
What is the difference between Multiclass and Multilabel Problem
I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are somehow related (so there is a benefit in tackling them together rather than separately). For example, in the famous Leptograpsus crabs dataset there are examples of males and females of two colour forms of crab. You could approach this as a multi-class problem with four classes (male-blue, female-blue, male-orange, female-orange) or as a multi-label problem, where one label would be male/female and the other blue/orange. Essentially in multi-label problems a pattern can belong to more than one class.
What is the difference between Multiclass and Multilabel Problem
I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are s
What is the difference between Multiclass and Multilabel Problem I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are somehow related (so there is a benefit in tackling them together rather than separately). For example, in the famous leptograspus crabs dataset there are examples of males and females of two colour forms of crab. You could approach this as a multi-class problem with four classes (male-blue, female-blue, male-orange, female-orange) or as a multi-label problem, where one label would be male/female and the other blue/orange. Essentially in multi-label problems a pattern can belong to more than one class.
What is the difference between Multiclass and Multilabel Problem I suspect the difference is that in multi-class problems the classes are mutually exclusive, whereas for multi-label problems each label represents a different classification task, but the tasks are s
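The two framings can be written down directly with the crabs data from the MASS package (a sketch of the targets only; the modelling itself is left out):

library(MASS)
data(crabs)
# multi-class framing: one target with four mutually exclusive classes
y_multiclass <- interaction(crabs$sex, crabs$sp)      # the four sex-by-colour combinations
table(y_multiclass)
# multi-label framing: two related binary labels per crab
y_multilabel <- data.frame(is_male   = crabs$sex == "M",
                           is_orange = crabs$sp  == "O")
head(y_multilabel)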
1,649
What is the difference between Multiclass and Multilabel Problem
Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the assumption that each sample is assigned to one and only one label: a fruit can be either an apple or a pear but not both at the same time. Multilabel classification assigns to each sample a set of target labels. This can be thought of as predicting properties of a data-point that are not mutually exclusive, such as topics that are relevant for a document. A text might be about any of religion, politics, finance or education at the same time or none of these. Taken from http://scikit-learn.org/stable/modules/multiclass.html Edit1 (Sept 2020): For those who prefer contrasts of terms for a better understanding, look at these contrasts: Multi-class vs Binary-class is the question of the number of classes your classifier is modeling. In theory, a binary classifier is much simpler than a multi-class problem, so it's useful to make this distinction. For example, Support Vector Machines (SVMs) can trivially learn a hyperplane to separate two classes, but 3 or more classes make the classification problem much more complicated. In neural networks, we commonly use Sigmoid for binary, but Softmax for multi-class as the last layer of the model. Multi-label vs Single-label is the question of how many classes any object or example can belong to. In neural networks, if we need a single label, we use a single Softmax layer as the last layer, thus learning a single probability distribution that spans across all classes. If we need multi-label classification, we use multiple Sigmoids on the last layer, thus learning a separate distribution for each class. Remarks: multilabel is often combined with multiclass; in fact, it is safe to assume that all multi-label classifiers are multi-class classifiers. When we have a binary classifier (say positive vs. negative classes), we wouldn't usually assign both labels or no label at the same time! We usually convert such scenarios to a multi-class classifier where the classes are one of {positive, negative, both, none}. Hence a multi-label AND binary classifier is not practical, and it is safe to assume all multilabel classifiers are multiclass. On the other hand, not all multi-class classifiers are multi-label classifiers, and we shouldn't assume it unless explicitly stated. EDIT 2: Venn diagram for my remarks
What is the difference between Multiclass and Multilabel Problem
Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the a
What is the difference between Multiclass and Multilabel Problem Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the assumption that each sample is assigned to one and only one label: a fruit can be either an apple or a pear but not both at the same time. Multilabel classification assigns to each sample a set of target labels. This can be thought of as predicting properties of a data-point that are not mutually exclusive, such as topics that are relevant for a document. A text might be about any of religion, politics, finance or education at the same time or none of these. Taken from http://scikit-learn.org/stable/modules/multiclass.html Edit1 (Sept 2020): For those who prefer contrasts of terms for a better understanding, look at these contrasts: Multi-class vs Binary-class is the question of the number of classes your classifier is modeling. In theory, a binary classifier is much simpler than multi-class problem, so it's useful to make this distinction. For example, Support Vector Machines (SVMs) can trivially learn a hyperplane to separate two classes, but 3 or more classes make the classification problem much more complicated. In the neural networks, we commonly use Sigmoid for binary, but Softmax for multi-class as the last layer of the model. Multi-label vs Single-Label is the question of how many classes any object or example can belong to. In the neural networks, if we need single label, we use a single Softmax layer as the last layer, thus learning a single probability distribution that spans across all classes. If we need multi-label classification, we use multiple Sigmoids on the last layer, thus learning separate distribution for each class. Remarks: we combine multilabel with multiclass, in fact, it is safe to assume that all multi-label are multi-class classifiers. When we have a binary classifier (say positive v/s negative classes), we wouldn't usually assign both labels or no-label at the same time! We usually convert such scenarios to a multi-class classifier where classes are one of {positive, negative, both, none}. Hence multi-label AND binary classifier is not practical, and it is safe to assume all multilabel are multiclass. On the other side, not all Multi-class classifiers are multi-label classifiers and we shouldn't assume it unless explicitly stated. EDIT 2: Venn diagram for my remarks
What is the difference between Multiclass and Multilabel Problem Multiclass classification means a classification task with more than two classes; e.g., classify a set of images of fruits which may be oranges, apples, or pears. Multiclass classification makes the a
1,650
What is the difference between Multiclass and Multilabel Problem
To complement the other answers, here are some figures. One row = the expected output for one sample. Multiclass: one column = one class (one-hot encoding). Multilabel: one column = one class. You see that: in the multilabel case, one sample might be assigned more than one class; in the multiclass case, there are more than 2 classes in total. As a side note, nothing prevents you from having a multioutput-multiclass classification problem, e.g. one where each sample has several outputs and each output can take more than two values.
What is the difference between Multiclass and Multilabel Problem
To complement the other answers, here are some figures. One row = the expected output for one sample. Multiclass One column = one class (one-hot encoding) Multilabel One column = one class You see
What is the difference between Multiclass and Multilabel Problem To complement the other answers, here are some figures. One row = the expected output for one sample. Multiclass One column = one class (one-hot encoding) Multilabel One column = one class You see that: in the multilabel case, one sample might be assigned more than one class. in the multiclass case, there are more than 2 classes in total. As a side note, nothing prevents you from having a multioutput-multiclass classification problem, e.g.:
What is the difference between Multiclass and Multilabel Problem To complement the other answers, here are some figures. One row = the expected output for one sample. Multiclass One column = one class (one-hot encoding) Multilabel One column = one class You see
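Since the figures themselves are not reproduced here, a tiny R sketch of the target matrices they depict (my own reconstruction of the idea: one row per sample, one column per class, re-using the topic labels from the earlier scikit-learn answer):

classes <- c("religion", "politics", "finance", "education")
multiclass <- rbind(c(1, 0, 0, 0),     # exactly one 1 per row (one-hot encoding)
                    c(0, 0, 1, 0),
                    c(0, 1, 0, 0))
multilabel <- rbind(c(1, 1, 0, 0),     # any number of 1s per row, including none
                    c(0, 0, 0, 0),
                    c(0, 1, 1, 1))
colnames(multiclass) <- colnames(multilabel) <- classes
multiclass
multilabel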
1,651
What is the difference between Multiclass and Multilabel Problem
A multi-class problem has the assignment of instances to one of a finite, mutually-exclusive collection of classes. As in the example already given of crabs (from @Dikran): male-blue, female-blue, male-orange, female-orange. Each of these is exclusive of the others and taken together they are comprehensive. One form of a multi-label problem is to divide these into two labels, sex and color; where sex can be male or female, and color can be blue or orange. But note that this is a special case of the multi-label problem as every instance will get every label (that is every crab has both a sex and a color). Multi-label problems also include other cases that allow for a variable number of labels to be assigned to each instance. For instance, an article in a newspaper or wire service may be assigned to the categories NEWS, POLITICS, SPORTS, MEDICINE, etc. One story about an important sporting event would get an assignment of the label SPORTS; while another, involving political tensions that are revealed by a particular sporting event, might get both the labels SPORTS and POLITICS. Where I am, in the US, the results of the Superbowl are labeled both SPORTS and NEWS given the societal impact of the event. Note that this form of labeling, with variable numbers of labels, can be recast into a form similar to the example with the crabs; except that every label is treated as LABEL-X or not-LABEL-X. But not all methods require this recasting.
What is the difference between Multiclass and Multilabel Problem
A multi-class problem has the assignment of instances to one of a finite, mutually-exclusive collection of classes. As in the example already given of crabs (from @Dikran): male-blue, female-blue, ma
What is the difference between Multiclass and Multilabel Problem A multi-class problem has the assignment of instances to one of a finite, mutually-exclusive collection of classes. As in the example already given of crabs (from @Dikran): male-blue, female-blue, male-orange, female-orange. Each of these is exclusive of the others and taken together they are comprehensive. One form of a multi-label problem is to divide these into two labels, sex and color; where sex can be male or female, and color can be blue or orange. But note that this is a special case of the multi-label problem as every instance will get every label (that is every crab has both a sex and a color). Multi-label problems also include other cases that allow for a variable number of labels to be assigned to each instance. For instance, an article in a newspaper or wire service may be assigned to the categories NEWS, POLITICS, SPORTS, MEDICINE, etc. One story about an important sporting event would get an assignment of the label SPORTS; while another, involving political tensions that are revealed by a particular sporting event, might get both the labels SPORTS and POLITICS. Where I am, in the US, the results of the Superbowl are labeled both SPORTS and NEWS given the societal impact of the event. Note that this form of labeling, with variable numbers of labels, can be recast into a form similar to the example with the crabs; except that every label is treated as LABEL-X or not-LABEL-X. But not all methods require this recasting.
What is the difference between Multiclass and Multilabel Problem A multi-class problem has the assignment of instances to one of a finite, mutually-exclusive collection of classes. As in the example already given of crabs (from @Dikran): male-blue, female-blue, ma
1,652
What is the difference between Multiclass and Multilabel Problem
And one more difference is that a multi-label problem requires the model to learn the correlations between the different labels, whereas in multiclass problems the classes are mutually exclusive, so there are no such correlations to learn.
What is the difference between Multiclass and Multilabel Problem
And one more difference lies in that the multi-label problem requires the model to learn the correlation between the different classes, but in multiclass problems different classes are independent of
What is the difference between Multiclass and Multilabel Problem And one more difference lies in that the multi-label problem requires the model to learn the correlation between the different classes, but in multiclass problems different classes are independent of each other.
What is the difference between Multiclass and Multilabel Problem And one more difference lies in that the multi-label problem requires the model to learn the correlation between the different classes, but in multiclass problems different classes are independent of
1,653
What is the difference between Multiclass and Multilabel Problem
Multi-class classification problem: one right answer, i.e. mutually exclusive outputs (e.g. iris species, digits). Multi-label classification: more than one right answer may apply, i.e. non-exclusive outputs (e.g. a sugar test and an eye test). In multi-class we use softmax; in multi-label we use sigmoid.
What is the difference between Multiclass and Multilabel Problem
Multi Class classification Problem One right answer and Mutually exclusive outputs(eg iris, numbers) Multi Label Classification more than one right answer and appropriate output or Non exclusive eg(s
What is the difference between Multiclass and Multilabel Problem Multi Class classification Problem One right answer and Mutually exclusive outputs(eg iris, numbers) Multi Label Classification more than one right answer and appropriate output or Non exclusive eg(sugar test, eye test) In multi class we user softmax In multi label we use sigmoid
What is the difference between Multiclass and Multilabel Problem Multi Class classification Problem One right answer and Mutually exclusive outputs(eg iris, numbers) Multi Label Classification more than one right answer and appropriate output or Non exclusive eg(s
1,654
Diagnostic plots for count regression
Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupils' days absent from school from MASS):

1. Test and graph the original count data by plotting observed frequencies and fitted frequencies (see chapter 2 in Friendly), which is supported in large part by the vcd package in R. For example, with goodfit and a rootogram:

library(MASS)
library(vcd)
data(quine)
fit <- goodfit(quine$Days)
summary(fit)
rootogram(fit)

or with Ord plots which help in identifying which count data model is underlying (e.g., here the slope is positive and the intercept is positive, which speaks for a negative binomial distribution):

Ord_plot(quine$Days)

or with the "XXXXXXness" plots where XXXXX is the distribution of choice, say a Poissonness plot (which speaks against Poisson; try also type="nbinom"):

distplot(quine$Days, type="poisson")

2. Inspect the usual goodness-of-fit measures (such as likelihood ratio statistics vs. a null model or similar):

mod1 <- glm(Days ~ Age + Sex, data = quine, family = "poisson")
summary(mod1)
anova(mod1, test = "Chisq")

3. Check for over/underdispersion by looking at residual deviance/df or at a formal test statistic (e.g., see this answer). Here we clearly have overdispersion:

library(AER)
deviance(mod1)/mod1$df.residual
dispersiontest(mod1)

4. Check for influential and leverage points, e.g., with influencePlot in the car package. Of course here many points are highly influential because Poisson is a bad model:

library(car)
influencePlot(mod1)

5. Check for zero inflation by fitting a count data model and its zero-inflated / hurdle counterpart and comparing them (usually with AIC). Here a zero-inflated model would fit better than the simple Poisson (again probably due to overdispersion):

library(pscl)
mod2 <- zeroinfl(Days ~ Age + Sex, data = quine, dist = "poisson")
AIC(mod1, mod2)

6. Plot the residuals (raw, deviance or scaled) on the y-axis vs. the (log) predicted values (or the linear predictor) on the x-axis. Here we see some very large residuals and a substantial deviation of the deviance residuals from normality (speaking against the Poisson; Edit: @FlorianHartig's answer suggests that normality of these residuals is not to be expected, so this is not a conclusive clue):

res <- residuals(mod1, type = "deviance")
plot(log(predict(mod1)), res)
abline(h = 0, lty = 2)
qqnorm(res)
qqline(res)

7. If interested, plot a half-normal probability plot of residuals by plotting ordered absolute residuals vs. expected normal values (Atkinson, 1981). A special feature would be to simulate a reference 'line' and envelope with simulated / bootstrapped confidence intervals (not shown though):

library(faraway)
halfnorm(residuals(mod1))

8. Diagnostic plots for log-linear models for count data (see chapters 7.2 and 7.7 in Friendly's book). Plot predicted vs. observed values, perhaps with some interval estimate (I did it just for the age groups -- here we see again that we are pretty far off with our estimates due to the overdispersion, apart, perhaps, from group F3. The pink points are the point prediction $\pm$ one standard error):

plot(Days ~ Age, data = quine)
prs <- predict(mod1, type = "response", se.fit = TRUE)
pris <- data.frame("pest" = prs[[1]], "lwr" = prs[[1]] - prs[[2]], "upr" = prs[[1]] + prs[[2]])
points(pris$pest ~ quine$Age, col = "red")
points(pris$lwr ~ quine$Age, col = "pink", pch = 19)
points(pris$upr ~ quine$Age, col = "pink", pch = 19)

This should give you much of the useful information about your analysis, and most steps work for all standard count data distributions (e.g., Poisson, Negative Binomial, COM-Poisson, power laws).
Diagnostic plots for count regression
Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupil's days absent from school from MASS): Test and graph the original coun
Diagnostic plots for count regression Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupil's days absent from school from MASS): Test and graph the original count data by plotting observed frequencies and fitted frequencies (see chapter 2 in Friendly) which is supported by the vcd package in R in large parts. For example, with goodfit and a rootogram: library(MASS) library(vcd) data(quine) fit <- goodfit(quine$Days) summary(fit) rootogram(fit) or with Ord plots which help in identifying which count data model is underlying (e.g., here the slope is positive and the intercept is positive which speaks for a negative binomial distribution): Ord_plot(quine$Days) or with the "XXXXXXness" plots where XXXXX is the distribution of choice, say Poissoness plot (which speaks against Poisson, try also type="nbinom"): distplot(quine$Days, type="poisson") Inspect usual goodness-of-fit measures (such as likelihood ratio statistics vs. a null model or similar): mod1 <- glm(Days~Age+Sex, data=quine, family="poisson") summary(mod1) anova(mod1, test="Chisq") Check for over / underdispersion by looking at residual deviance/df or at a formal test statistic (e.g., see this answer). Here we have clearly overdispersion: library(AER) deviance(mod1)/mod1$df.residual dispersiontest(mod1) Check for influential and leverage points, e.g., with the influencePlot in the car package. Of course here many points are highly influential because Poisson is a bad model: library(car) influencePlot(mod1) Check for zero inflation by fitting a count data model and its zeroinflated / hurdle counterpart and compare them (usually with AIC). Here a zero inflated model would fit better than the simple Poisson (again probably due to overdispersion): library(pscl) mod2 <- zeroinfl(Days~Age+Sex, data=quine, dist="poisson") AIC(mod1, mod2) Plot the residuals (raw, deviance or scaled) on the y-axis vs. the (log) predicted values (or the linear predictor) on the x-axis. Here we see some very large residuals and a substantial deviance of the deviance residuals from the normal (speaking against the Poisson; Edit: @FlorianHartig's answer suggests that normality of these residuals is not to be expected so this is not a conclusive clue): res <- residuals(mod1, type="deviance") plot(log(predict(mod1)), res) abline(h=0, lty=2) qqnorm(res) qqline(res) If interested, plot a half normal probability plot of residuals by plotting ordered absolute residuals vs. expected normal values Atkinson (1981). A special feature would be to simulate a reference ‘line’ and envelope with simulated / bootstrapped confidence intervals (not shown though): library(faraway) halfnorm(residuals(mod1)) Diagnostic plots for log linear models for count data (see chapters 7.2 and 7.7 in Friendly's book). Plot predicted vs. observed values perhaps with some interval estimate (I did just for the age groups--here we see again that we are pretty far off with our estimates due to the overdispersion apart, perhaps, in group F3. 
The pink points are the point prediction $\pm$ one standard error): plot(Days~Age, data=quine) prs <- predict(mod1, type="response", se.fit=TRUE) pris <- data.frame("pest"=prs[[1]], "lwr"=prs[[1]]-prs[[2]], "upr"=prs[[1]]+prs[[2]]) points(pris$pest ~ quine$Age, col="red") points(pris$lwr ~ quine$Age, col="pink", pch=19) points(pris$upr ~ quine$Age, col="pink", pch=19) This should give you much of the useful information about your analysis and most steps work for all standard count data distributions (e.g., Poisson, Negative Binomial, COM Poisson, Power Laws).
Diagnostic plots for count regression Here is what I usually like doing (for illustration I use the overdispersed and not very easily modelled quine data of pupil's days absent from school from MASS): Test and graph the original coun
1,655
Diagnostic plots for count regression
For the approach of using standard diagnostic plots but wanting to know what they should look like, I like the paper: Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D. F. and Wickham, H. (2009) Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A, 367, 4361-4383. doi: 10.1098/rsta.2009.0120 One of the approaches mentioned there is to create several simulated datasets where the assumptions of interest are true, create the diagnostic plots for these simulated datasets, and also create the diagnostic plot for the real data. Put all these plots on the screen at the same time (randomly placing the one based on the real data). Now you have a visual reference of what the plots should look like, and if the assumptions hold for the real data then that plot should look just like the others (if you cannot tell which is the real data, then the assumptions being tested are likely close enough to true), but if the real data plot looks clearly different from the others, then that means that at least one of the assumptions doesn't hold. The vis.test function in the TeachingDemos package for R helps implement this as a test.
Diagnostic plots for count regression
For the approach of using standard diagnostic plots but wanting to know what they should look like, I like the paper: Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne, D.F and Wickha
Diagnostic plots for count regression For the approach of using standard diagnostic plots but wanting to know what they should look like, I like the paper: Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne, D.F and Wickham, H. (2009) Statistical Inference for exploratory data analysis and model diagnostics Phil. Trans. R. Soc. A 2009 367, 4361-4383 doi: 10.1098/rsta.2009.0120 One of the approaches mentioned there is to create several simulated datasets where the assumptions of interest are true and create the diagnostic plots for these simulated datasets and also create the diagnostic plot for the real data. put all these plots on the screen at the same time (randomly placing the one based on real data). Now you have a visual reference of what the plots should look like and if the assumptions hold for the real data then that plot should look just like the others (if you cannot tell which is the real data, then the assumptions being tested are likely close enough to true), but if the real data plot looks clearly different from the other, then that means that at least one of the assumptions don't hold. The vis.test function in the TeachingDemos package for R helps implement this as a test.
Diagnostic plots for count regression For the approach of using standard diagnostic plots but wanting to know what they should look like, I like the paper: Buja, A., Cook, D. Hofmann, H., Lawrence, M. Lee, E.-K., Swayne, D.F and Wickha
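A hand-rolled R sketch of the "lineup" idea described above, applied to the quine Poisson model from the first answer (the vis.test function mentioned there automates this properly; the code below only illustrates the principle and is my own simplification):

library(MASS)
data(quine)
mod <- glm(Days ~ Age + Sex, data = quine, family = "poisson")
op <- par(mfrow = c(3, 3))
real <- sample(9, 1)                       # panel that will hold the real data
for (i in 1:9) {
  y <- if (i == real) quine$Days else rpois(nrow(quine), fitted(mod))  # simulate under the model
  m <- glm(y ~ Age + Sex, data = quine, family = "poisson")
  plot(log(fitted(m)), residuals(m, type = "deviance"), main = i)
}
par(op)
real   # if you could pick this panel out before looking, the model is suspect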
1,656
Diagnostic plots for count regression
This is an old question, but I thought it would be useful to add that my DHARMa R package (available from CRAN, see here) now provides standardized residuals for GLMs and GLMMs, based on a simulation approach similar to what is suggested by @GregSnow. From the package description: The DHARMa package uses a simulation-based approach to create readily interpretable scaled residuals from fitted generalized linear mixed models. Currently supported are all 'merMod' classes from 'lme4' ('lmerMod', 'glmerMod'), 'glm' (including 'negbin' from 'MASS', but excluding quasi-distributions) and 'lm' model classes. Alternatively, externally created simulations, e.g. posterior predictive simulations from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be processed as well. The resulting residuals are standardized to values between 0 and 1 and can be interpreted as intuitively as residuals from a linear regression. The package also provides a number of plot and test functions for typical model misspecification problems, such as over/underdispersion, zero-inflation, and spatial / temporal autocorrelation. @Momo - you may want to update your recommendation 6, it is misleading. Normality of deviance residuals is in general not expected under a Poisson, as explained in the DHARMa vignette or here; and seeing deviance residuals (or any other standard residuals) that differ from a straight line in a qqnorm plot is therefore in general no concern at all. The DHARMa package provides a qq plot that is reliable for diagnosing deviations from Poisson or other GLM families. I have created an example that demonstrates the problem with the deviance residuals here.
Diagnostic plots for count regression
This is an old question, but I thought it would be useful to add that my DHARMa R package (available from CRAN, see here) now provides standardized residuals for GLMs and GLMMs, based on a simulation
Diagnostic plots for count regression This is an old question, but I thought it would be useful to add that my DHARMa R package (available from CRAN, see here) now provides standardized residuals for GLMs and GLMMs, based on a simulation approach similar to what is suggested by @GregSnow. From the package description: The DHARMa package uses a simulation-based approach to create readily interpretable scaled residuals from fitted generalized linear mixed models. Currently supported are all 'merMod' classes from 'lme4' ('lmerMod', 'glmerMod'), 'glm' (including 'negbin' from 'MASS', but excluding quasi-distributions) and 'lm' model classes. Alternatively, externally created simulations, e.g. posterior predictive simulations from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be processed as well. The resulting residuals are standardized to values between 0 and 1 and can be interpreted as intuitively as residuals from a linear regression. The package also provides a number of plot and test functions for typical model mispecification problem, such as over/underdispersion, zero-inflation, and spatial / temporal autocorrelation. @Momo - you may want to update your recommendation 6, it is misleading. Normality of deviance residuals is in general not expected under a Poisson, as explained in the DHARMa vignette or here; and seing deviance residuals (or any other standard residuals) that differ from a straight line in a qqnorm plot is therefore in general no concern at all. The DHARMa package provides a qq plot that is reliable for diagnosing deviations from Poisson or other GLM families. I have created an example that demonstrates the problem with the deviance residuals here.
Diagnostic plots for count regression This is an old question, but I thought it would be useful to add that my DHARMa R package (available from CRAN, see here) now provides standardized residuals for GLMs and GLMMs, based on a simulation
1,657
Diagnostic plots for count regression
There is a function called glm.diag.plots in package boot, to generate diagnostic plots for GLMs. What it does: Makes plot of jackknife deviance residuals against linear predictor, normal scores plots of standardized deviance residuals, plot of approximate Cook statistics against leverage/(1-leverage), and case plot of Cook statistic.
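Usage is essentially a one-liner once you have a fitted GLM; as a hypothetical illustration with the quine Poisson model used elsewhere in this thread:

library(boot)
library(MASS)
data(quine)
fit <- glm(Days ~ Age + Sex, family = poisson, data = quine)
glm.diag.plots(fit)   # draws the four diagnostic plots described above on one device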
Diagnostic plots for count regression
There is a function called glm.diag.plots in package boot, to generate diagnostic plots for GLMs. What it does: Makes plot of jackknife deviance residuals against linear predictor, normal scores pl
Diagnostic plots for count regression There is a function called glm.diag.plots in package boot, to generate diagnostic plots for GLMs. What it does: Makes plot of jackknife deviance residuals against linear predictor, normal scores plots of standardized deviance residuals, plot of approximate Cook statistics against leverage/(1-leverage), and case plot of Cook statistic.
Diagnostic plots for count regression There is a function called glm.diag.plots in package boot, to generate diagnostic plots for GLMs. What it does: Makes plot of jackknife deviance residuals against linear predictor, normal scores pl
1,658
Diagnostic plots for count regression
I would definitely recommend the {performance} package. It has a check_model(mod1) function that shows the relevant diagnostic plots. library(MASS) data(quine) mod1 <- glm(Days~Age+Sex, data=quine, family="poisson") summary(mod1) #> #> Call: #> glm(formula = Days ~ Age + Sex, family = "poisson", data = quine) #> #> Deviance Residuals: #> Min 1Q Median 3Q Max #> -6.647 -2.964 -1.299 1.465 10.258 #> #> Coefficients: #> Estimate Std. Error z value Pr(>|z|) #> (Intercept) 2.63090 0.05693 46.215 < 2e-16 *** #> AgeF1 -0.25232 0.06804 -3.708 0.000209 *** #> AgeF2 0.35964 0.06083 5.913 3.37e-09 *** #> AgeF3 0.29915 0.06412 4.665 3.08e-06 *** #> SexM 0.10476 0.04181 2.506 0.012221 * #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> (Dispersion parameter for poisson family taken to be 1) #> #> Null deviance: 2073.5 on 145 degrees of freedom #> Residual deviance: 1908.3 on 141 degrees of freedom #> AIC: 2506.8 #> #> Number of Fisher Scoring iterations: 5 performance::check_model(mod1) #> Loading required namespace: qqplotr #> `geom_smooth()` using formula 'y ~ x' #> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. #> Warning: ggrepel: 138 unlabeled data points (too many overlaps). Consider #> increasing max.overlaps Created on 2021-02-05 by the reprex package (v1.0.0)
Diagnostic plots for count regression
I would definitely recommend the {performance} package. It has a check_model(mod1) function that shows the relevant diagnostic plots. library(MASS) data(quine) mod1 <- glm(Days~Age+Sex, data=quine
Diagnostic plots for count regression I would definitely recommend the {performance} package. It has a check_model(mod1) function that shows the relevant diagnostic plots. library(MASS) data(quine) mod1 <- glm(Days~Age+Sex, data=quine, family="poisson") summary(mod1) #> #> Call: #> glm(formula = Days ~ Age + Sex, family = "poisson", data = quine) #> #> Deviance Residuals: #> Min 1Q Median 3Q Max #> -6.647 -2.964 -1.299 1.465 10.258 #> #> Coefficients: #> Estimate Std. Error z value Pr(>|z|) #> (Intercept) 2.63090 0.05693 46.215 < 2e-16 *** #> AgeF1 -0.25232 0.06804 -3.708 0.000209 *** #> AgeF2 0.35964 0.06083 5.913 3.37e-09 *** #> AgeF3 0.29915 0.06412 4.665 3.08e-06 *** #> SexM 0.10476 0.04181 2.506 0.012221 * #> --- #> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 #> #> (Dispersion parameter for poisson family taken to be 1) #> #> Null deviance: 2073.5 on 145 degrees of freedom #> Residual deviance: 1908.3 on 141 degrees of freedom #> AIC: 2506.8 #> #> Number of Fisher Scoring iterations: 5 performance::check_model(mod1) #> Loading required namespace: qqplotr #> `geom_smooth()` using formula 'y ~ x' #> `stat_bin()` using `bins = 30`. Pick better value with `binwidth`. #> Warning: ggrepel: 138 unlabeled data points (too many overlaps). Consider #> increasing max.overlaps Created on 2021-02-05 by the reprex package (v1.0.0)
Diagnostic plots for count regression I would definitely recommend the {performance} package. It has a check_model(mod1) function that shows the relevant diagnostic plots. library(MASS) data(quine) mod1 <- glm(Days~Age+Sex, data=quine
1,659
How to select kernel for SVM?
The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radial Basis Function kernel makes a good default kernel (once you have established it is a problem requiring a non-linear model). The choice of the kernel and kernel/regularisation parameters can be automated by optimising a cross-validation based model selection criterion (or use the radius-margin or span bounds). The simplest thing to do is to minimise a continuous model selection criterion using the Nelder-Mead simplex method, which doesn't require gradient calculation and works well for sensible numbers of hyper-parameters. If you have more than a few hyper-parameters to tune, automated model selection is likely to result in severe over-fitting, due to the variance of the model selection criterion. It is possible to use gradient-based optimization, but the performance gain is not usually worth the effort of coding it up. Automated choice of kernels and kernel/regularization parameters is a tricky issue, as it is very easy to overfit the model selection criterion (typically cross-validation based), and you can end up with a worse model than you started with. Automated model selection can also bias performance evaluation, so make sure your performance evaluation evaluates the whole process of fitting the model (training and model selection); for details, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007 (pdf) and G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010 (pdf).
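As a concrete (if simple-minded) illustration of cross-validation based tuning of the RBF kernel and regularisation parameters, here is a sketch using the e1071 package in R; it uses a plain grid search rather than the Nelder-Mead approach mentioned above, and the data set and grid are arbitrary choices:

library(e1071)
data(iris)
set.seed(1)
tuned <- tune.svm(Species ~ ., data = iris,
                  gamma = 2^(-8:0),   # RBF kernel width
                  cost  = 2^(-2:6))   # regularisation parameter C
tuned$best.parameters                 # gamma/cost pair with the lowest CV error
tuned$best.performance                # the corresponding cross-validated error

Note that this cross-validated error has itself been optimised over, so (as the references above stress) final performance should be assessed on an outer, untouched test set or by nested cross-validation.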
How to select kernel for SVM?
The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radi
How to select kernel for SVM? The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radial Basis Function kernel makes a good default kernel (once you have established it is a problem requiring a non-linear model). The choice of the kernel and kernel/regularisation parameters can be automated by optimising a cross-valdiation based model selection (or use the radius-margin or span bounds). The simplest thing to do is to minimise a continuous model selection criterion using the Nelder-Mead simplex method, which doesn't require gradient calculation and works well for sensible numbers of hyper-parameters. If you have more than a few hyper-parameters to tune, automated model selection is likely to result in severe over-fitting, due to the variance of the model selection criterion. It is possible to use gradient based optimization, but the performance gain is not usually worth the effort of coding it up). Automated choice of kernels and kernel/regularization parameters is a tricky issue, as it is very easy to overfit the model selection criterion (typically cross-validation based), and you can end up with a worse model than you started with. Automated model selection also can bias performance evaluation, so make sure your performance evaluation evaluates the whole process of fitting the model (training and model selection), for details, see G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. (pdf) and G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010.(pdf)
How to select kernel for SVM? The kernel is effectively a similarity measure, so choosing a kernel according to prior knowledge of invariances as suggested by Robin (+1) is a good idea. In the absence of expert knowledge, the Radi
1,660
How to select kernel for SVM?
If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ...). In this case you can even use a combination of classifiers (if your problem is classification) obtained with different kernels. However, the "advantage" of working with a kernel is that you change the usual "Euclidean" geometry so that it fits your own problem. Also, you should really try to understand what the interest of a kernel is for your problem, what is particular to the geometry of your problem. This can include: Invariance: if there is a family of transformations that do not change your problem fundamentally, the kernel should reflect that. Invariance by rotation is contained in the Gaussian kernel, but you can think of a lot of other things: translation, homothety (rescaling), any group representation, .... What is a good separator? If you have an idea of what a good separator is (i.e. a good classification rule) in your classification problem, this should be included in the choice of kernel. Remember that SVM will give you classifiers of the form $$ \hat{f}(x)=\sum_{i=1}^n \lambda_i K(x,x_i)$$ If you know that a linear separator would be a good one, then you can use a kernel that gives affine functions (i.e. $K(x,x_i)=\langle x,A x_i\rangle+c$). If you think smooth boundaries, much in the spirit of a smoothed KNN, would be better, then you can take a Gaussian kernel...
How to select kernel for SVM?
If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ... ). In this case you can even use a combination of classifiers (if your problem is class
How to select kernel for SVM? If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ... ). In this case you can even use a combination of classifiers (if your problem is classification) obtained with different kernel. However, the "advantage" of working with a kernel is that you change the usual "Euclidean" geometry so that it fits your own problem. Also, you should really try to understand what is the interest of a kernel for your problem, what is particular to the geometry of your problem. This can include: Invariance: if there is a familly of transformations that do not change your problem fundamentally, the kernel should reflect that. Invariance by rotation is contained in the gaussian kernel, but you can think of a lot of other things: translation, homothetie, any group representation, .... What is a good separator ? if you have an idea of what a good separator is (i.e. a good classification rule) in your classification problem, this should be included in the choice of kernel. Remmeber that SVM will give you classifiers of the form $$ \hat{f}(x)=\sum_{i=1}^n \lambda_i K(x,x_i)$$ If you know that a linear separator would be a good one, then you can use Kernel that gives affine functions (i.e. $K(x,x_i)=\langle x,A x_i\rangle+c$). If you think smooth boundaries much in the spirit of smooth KNN would be better, then you can take a gaussian kernel...
How to select kernel for SVM? If you are not sure what would be best you can use automatic techniques of selection (e.g. cross validation, ... ). In this case you can even use a combination of classifiers (if your problem is class
1,661
How to select kernel for SVM?
I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search.
How to select kernel for SVM?
I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search.
How to select kernel for SVM? I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search.
How to select kernel for SVM? I always have the feeling that any hyper parameter selection for SVMs is done via cross validation in combination with grid search.
1,662
How to select kernel for SVM?
In general, the RBF kernel is a reasonable first choice. Furthermore, the linear kernel is a special case of the RBF kernel; in particular, when the number of features is very large, one may just use the linear kernel.
How to select kernel for SVM?
In general, the RBF kernel is a reasonable rst choice.Furthermore,the linear kernel is a special case of RBF,In particular,when the number of features is very large, one may just use the linear kernel
How to select kernel for SVM? In general, the RBF kernel is a reasonable rst choice.Furthermore,the linear kernel is a special case of RBF,In particular,when the number of features is very large, one may just use the linear kernel.
How to select kernel for SVM? In general, the RBF kernel is a reasonable rst choice.Furthermore,the linear kernel is a special case of RBF,In particular,when the number of features is very large, one may just use the linear kernel
1,663
Loadings vs eigenvectors in PCA: when to use one or another?
In PCA, you split the covariance (or correlation) matrix into a scale part (eigenvalues) and a direction part (eigenvectors). You may then endow the eigenvectors with the scale: loadings. Loadings thus become comparable in magnitude with the covariances/correlations observed between the variables, because what had been drawn out of the variables' covariation now returns - in the form of the covariation between the variables and the principal components. Actually, loadings are the covariances/correlations between the original variables and the unit-scaled components. This answer shows geometrically what loadings are and what the coefficients associating components with variables are in PCA or factor analysis. Loadings: Help you interpret principal components or factors; because they are the linear combination weights (coefficients) whereby unit-scaled components or factors define or "load" a variable. (An eigenvector is just a coefficient of an orthogonal transformation or projection; it is devoid of "load" within its value. "Load" is (information about the amount of) variance, magnitude. PCs are extracted to explain variance of the variables. Eigenvalues are the variances of (= explained by) the PCs. When we multiply an eigenvector by the square root of the eigenvalue we "load" the bare coefficient with the amount of variance. By that virtue we make the coefficient a measure of association, of co-variability.) Loadings are sometimes "rotated" (e.g. varimax) afterwards to facilitate interpretability (see also); It is loadings which "restore" the original covariance/correlation matrix (see also this thread discussing nuances of PCA and FA in that respect); While in PCA you can compute the values of components both from eigenvectors and from loadings, in factor analysis you compute factor scores out of loadings. And, above all, the loading matrix is informative: its vertical sums of squares are the eigenvalues, the components' variances, and its horizontal sums of squares are the portions of the variables' variances "explained" by the components. A rescaled or standardized loading is the loading divided by the variable's standard deviation; it is the correlation. (If your PCA is correlation-based PCA, the loading is equal to the rescaled one, because correlation-based PCA is the PCA on standardized variables.) A rescaled loading squared has the meaning of the contribution of a principal component to a variable; if it is high (close to 1) the variable is well defined by that component alone. An example of computations done in PCA and FA for you to see. Eigenvectors are unit-scaled loadings; they are the coefficients (the cosines) of the orthogonal transformation (rotation) of variables into principal components and back. Therefore it is easy to compute the components' values (not standardized) with them. Beyond that, their usage is limited. An eigenvector value squared has the meaning of the contribution of a variable to a principal component; if it is high (close to 1) the component is well defined by that variable alone. Although eigenvectors and loadings are simply two different ways to normalize the coordinates of the same points representing the columns (variables) of the data on a biplot, it is not a good idea to mix the two terms. This answer explains why. See also.
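A small base-R check of the relationships listed above, on made-up standardized data (so this is correlation-based PCA; the data are purely illustrative):

set.seed(1)
X <- scale(matrix(rnorm(300), ncol = 3) %*% matrix(runif(9), 3, 3))
e <- eigen(cor(X))
loadings <- e$vectors %*% diag(sqrt(e$values))  # eigenvectors "loaded" with sqrt(eigenvalue)

scores      <- X %*% e$vectors                  # raw component scores
scores_unit <- scale(scores)                    # unit-scaled components
round(cor(X, scores_unit) - loadings, 10)       # ~0: loadings = variable-component correlations

round(colSums(loadings^2) - e$values, 10)       # ~0: column sums of squares = eigenvalues
rowSums(loadings^2)                             # row sums of squares = explained part of each
                                                # variable's variance (all 1 here: all PCs kept)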
Loadings vs eigenvectors in PCA: when to use one or another?
In PCA, you split covariance (or correlation) matrix into scale part (eigenvalues) and direction part (eigenvectors). You may then endow eigenvectors with the scale: loadings. So, loadings are thus be
Loadings vs eigenvectors in PCA: when to use one or another? In PCA, you split covariance (or correlation) matrix into scale part (eigenvalues) and direction part (eigenvectors). You may then endow eigenvectors with the scale: loadings. So, loadings are thus become comparable by magnitude with the covariances/correlations observed between the variables, - because what had been drawn out from the variables' covariation now returns back - in the form of the covariation between the variables and the principal components. Actually, loadings are the covariances/correlations between the original variables and the unit-scaled components. This answer shows geometrically what loadings are and what are coefficients associating components with variables in PCA or factor analysis. Loadings: Help you interpret principal components or factors; Because they are the linear combination weights (coefficients) whereby unit-scaled components or factors define or "load" a variable. (Eigenvector is just a coefficient of orthogonal transformation or projection, it is devoid of "load" within its value. "Load" is (information of the amount of) variance, magnitude. PCs are extracted to explain variance of the variables. Eigenvalues are the variances of (= explained by) PCs. When we multiply eigenvector by sq.root of the eivenvalue we "load" the bare coefficient by the amount of variance. By that virtue we make the coefficient to be the measure of association, co-variability.) Loadings sometimes are "rotated" (e.g. varimax) afterwards to facilitate interpretability (see also); It is loadings which "restore" the original covariance/correlation matrix (see also this thread discussing nuances of PCA and FA in that respect); While in PCA you can compute values of components both from eigenvectors and loadings, in factor analysis you compute factor scores out of loadings. And, above all, loading matrix is informative: its vertical sums of squares are the eigenvalues, components' variances, and its horizontal sums of squares are portions of the variables' variances being "explained" by the components. Rescaled or standardized loading is the loading divided by the variable's st. deviation; it is the correlation. (If your PCA is correlation-based PCA, loading is equal to the rescaled one, because correlation-based PCA is the PCA on standardized variables.) Rescaled loading squared has the meaning of the contribution of a pr. component into a variable; if it is high (close to 1) the variable is well defined by that component alone. An example of computations done in PCA and FA for you to see. Eigenvectors are unit-scaled loadings; and they are the coefficients (the cosines) of orthogonal transformation (rotation) of variables into principal components or back. Therefore it is easy to compute the components' values (not standardized) with them. Besides that their usage is limited. Eigenvector value squared has the meaning of the contribution of a variable into a pr. component; if it is high (close to 1) the component is well defined by that variable alone. Although eigenvectors and loadings are simply two different ways to normalize coordinates of the same points representing columns (variables) of the data on a biplot, it is not a good idea to mix the two terms. This answer explained why. See also.
Loadings vs eigenvectors in PCA: when to use one or another? In PCA, you split covariance (or correlation) matrix into scale part (eigenvalues) and direction part (eigenvectors). You may then endow eigenvectors with the scale: loadings. So, loadings are thus be
1,664
Loadings vs eigenvectors in PCA: when to use one or another?
There seems to be a great deal of confusion about loadings, coefficients and eigenvectors. The word loadings comes from Factor Analysis, and it refers to the coefficients of the regression of the data matrix onto the factors. They are not the coefficients defining the factors. See for example Mardia, Kent and Bibby or other multivariate statistics textbooks. In recent years the word loadings has also been used to indicate the PCs' coefficients. Here it seems to be used to indicate the coefficients multiplied by the square root of the eigenvalues of the matrix. These are not quantities commonly used in PCA. The principal components are defined as the sum of the variables weighted with unit-norm coefficients. In this way the PCs have variance equal to the corresponding eigenvalue, which in turn is equal to the variance explained by the component. It is in Factor Analysis that the factors are required to have unit norm. But FA and PCA are completely different. Rotating the PCs' coefficients is very rarely done because it destroys the optimality of the components. In FA the factors are not uniquely defined and can be estimated in different ways. The important quantities are the loadings (the true ones) and the communalities, which are used to study the structure of the covariance matrix. PCA or PLS should be used to estimate components.
Loadings vs eigenvectors in PCA: when to use one or another?
There seems to be a great deal of confusion about loadings, coefficients and eigenvectors. The word loadings comes from Factor Analysis and it refers to coefficients of the regression of the data matr
Loadings vs eigenvectors in PCA: when to use one or another? There seems to be a great deal of confusion about loadings, coefficients and eigenvectors. The word loadings comes from Factor Analysis and it refers to coefficients of the regression of the data matrix onto the factors. They are not the coefficients defining the factors. See for example Mardia, Bibby and Kent or other multivariate statistics textbooks. In recent years the word loadings has been used to indicate the PCs coefficients. Here it seems that it used to indicate the coefficients multiplied by the sqrt of the eigenvalues of the matrix. These are not quantities commonly used in PCA. The principal components are defined as the sum of the variables weighted with unit norm coefficients. In this way the PCs have norm equal to the corresponding eigenvalue, which in turn is equal to the variance explained by the component. It is in Factor Analysis that the factors are required to have unit norm. But FA and PCA are completely different. Rotating the PCs' coefficient is very rarely done because it destroys the optimality of the components. In FA the factors are not uniquely defined and can be estimated in different ways. The important quantities are the loadings (the true ones) and the communalities which are used to study the structure of the covariance matrix. PCA or PLS should be used to estimate components.
Loadings vs eigenvectors in PCA: when to use one or another? There seems to be a great deal of confusion about loadings, coefficients and eigenvectors. The word loadings comes from Factor Analysis and it refers to coefficients of the regression of the data matr
1,665
Loadings vs eigenvectors in PCA: when to use one or another?
I am a bit confused by those names, so I searched in the book "Statistical Methods in the Atmospheric Sciences", and it gives a summary of the varied terminology used for PCA; here are the screenshots from the book, I hope they help.
Loadings vs eigenvectors in PCA: when to use one or another?
I am a bit confused by those names, and I searched in the book named "Statistical Methods in the Atmospherical Science", and it gave me a summary of varied Terminology of PCA, here are the screenshots
Loadings vs eigenvectors in PCA: when to use one or another? I am a bit confused by those names, and I searched in the book named "Statistical Methods in the Atmospherical Science", and it gave me a summary of varied Terminology of PCA, here are the screenshots in the book, hope it will help.
Loadings vs eigenvectors in PCA: when to use one or another? I am a bit confused by those names, and I searched in the book named "Statistical Methods in the Atmospherical Science", and it gave me a summary of varied Terminology of PCA, here are the screenshots
1,666
Loadings vs eigenvectors in PCA: when to use one or another?
There appears to be some confusion over this matter, so I will provide some observations and a pointer to where an excellent answer can be found in the literature. Firstly, PCA and Factor Analysis (FA) are related. In general, principal components are orthogonal by definition whereas factors - the analogous entity in FA - are not. Simply put, principal components span the factor space in an arbitrary but not necessarily useful way due to their being derived from pure eigenanalysis of the data. Factors on the other hand represent real-world entities which are only orthogonal (i.e. uncorrelated or independent) by coincidence. Say we take $s$ observations from each of $l$ subjects. These can be arranged into a data matrix $D$ having $s$ rows and $l$ columns. $D$ can be decomposed into a score matrix $S$ and a loading matrix $L$ such that $D = SL$. $S$ will have $s$ rows, and $L$ will have $l$ columns, the second dimension of each being the number of factors $n$. The purpose of factor analysis is to decompose $D$ in such a way as to reveal the underlying scores and factors. The loadings in $L$ tell us the proportion of each score which make up the observations in $D$. In PCA, $L$ has the eigenvectors of the correlation or covariance matrix of $D$ as its columns. These are conventionally arranged in descending order of the corresponding eigenvalues. The value of $n$ - i.e. the number of significant principal components to retain in the analysis, and hence the number of rows of $L$ - is typically determined through the use of a scree plot of the eigenvalues or one of numerous other methods to be found in the literature. The columns of $S$ in PCA form the $n$ abstract principal components themselves. The value of $n$ is the underlying dimensionality of the data set. The object of factor analysis is to transform the abstract components into meaningful factors through the use of a transformation matrix $T$ such that $D = (ST)(T^{-1}L)$, where $(ST)$ is the transformed score matrix and $(T^{-1}L)$ is the transformed loading matrix. The above explanation roughly follows the notation of Edmund R. Malinowski from his excellent Factor Analysis in Chemistry. I highly recommend the opening chapters as an introduction to the subject.
Loadings vs eigenvectors in PCA: when to use one or another?
There appears to be some confusion over this matter, so I will provide some observations and a pointer to where an excellent answer can be found in the literature. Firstly, PCA and Factor Analysis (FA
Loadings vs eigenvectors in PCA: when to use one or another? There appears to be some confusion over this matter, so I will provide some observations and a pointer to where an excellent answer can be found in the literature. Firstly, PCA and Factor Analysis (FA) are related. In general, principal components are orthogonal by definition whereas factors - the analogous entity in FA - are not. Simply put, principal components span the factor space in an arbitrary but not necessarily useful way due to their being derived from pure eigenanalysis of the data. Factors on the other hand represent real-world entities which are only orthogonal (i.e. uncorrelated or independent) by coincidence. Say we take s observations from each of l subjects. These can be arranged into a data matrix D having s rows and l columns. D can be decomposed into a score matrix S and a loading matrix L such that D = SL. S will have s rows, and L will have l columns, the second dimension of each being the number of factors n. The purpose of factor analysis is to decompose D in such a way as to reveal the underlying scores and factors. The loadings in L tell us the proportion of each score which make up the observations in D. In PCA, L has the eigenvectors of the correlation or covariance matrix of D as its columns. These are conventionally arranged in descending order of the corresponding eigenvalues. The value of n - i.e. the number of significant principal components to retain in the analysis, and hence the number of rows of L - is typically determined through the use of a scree plot of the eigenvalues or one of numerous other methods to be found in the literature. The columns of S in PCA form the n abstract principal components themselves. The value of n is the underlying dimensionality of the data set. The object of factor analysis is to transform the abstract components into meaningful factors through the use of a transformation matrix T such that D = STT-1L. (ST) is the transformed score matrix, and (T-1L) is the transformed loading matrix. The above explanation roughly follows the notation of Edmund R. Malinowski from his excellent Factor Analysis in Chemistry. I highly recommend the opening chapters as an introduction to the subject.
Loadings vs eigenvectors in PCA: when to use one or another? There appears to be some confusion over this matter, so I will provide some observations and a pointer to where an excellent answer can be found in the literature. Firstly, PCA and Factor Analysis (FA
1,667
Loadings vs eigenvectors in PCA: when to use one or another?
The loadings show the relationship between the raw data and the rotated data. $$ {\rm Loadings} = {\rm Eigenvectors}\cdot\sqrt{\rm Eigenvalues} $$ For example, in R, raw data %*% loadings = rotated data (this holds for centred data; prcomp centres by default, and the toy data below already have zero column means). Try it: d = matrix(c(-1,-1,0,2,0,-2,0,-0,1,1), ncol = 2); loadings = prcomp(d)$rotation[,'PC1']; PC1_scores_by_hand = d %*% loadings; PC1_scores_from_prcomp = prcomp(d)$x[,'PC1']; all.equal(as.vector(PC1_scores_by_hand), as.vector(PC1_scores_from_prcomp)) # should be TRUE. prcomp returns rotation, which is the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
Loadings vs eigenvectors in PCA: when to use one or another?
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
Loadings vs eigenvectors in PCA: when to use one or another? Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted. The loadings show the relationship between the raw data and the rotated data. $$ {\rm Loadings} = {\rm Eigenvectors}\cdot\sqrt{\rm Eigenvalues} $$ For example, in R, raw data %*% loadings = rotated data. Try it: d = matrix(c(-1,-1,0,2,0,-2,0,-0,1,1),ncol = 2) loadings = prcomp(d)$rotation[,'PC1'] PC1_scores = d %*% loadings PC1_scores = prcomp(d)$x[,'PC1'] prcomp returns rotation which is the matrix of variable loadings (i.e., a matrix whose columns contain the eigenvectors). The function princomp returns this in the element loadings.
Loadings vs eigenvectors in PCA: when to use one or another? Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
1,668
Does the variance of a sum equal the sum of the variances?
The answer to your question is "Sometimes, but not in general". To see this let $X_1, ..., X_n$ be random variables (with finite variances). Then, $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2$$ Now note that $(\sum_{i=1}^{n} a_i)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j $, which is clear if you think about what you're doing when you calculate $(a_1+...+a_n) \cdot (a_1+...+a_n)$ by hand. Therefore, $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = E \left( \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j) $$ similarly, $$ \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = \left[ \sum_{i=1}^{n} E(X_i) \right]^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i) E(X_j)$$ so $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \big( E(X_i X_j)-E(X_i) E(X_j) \big) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j)$$ by the definition of covariance. Now regarding Does the variance of a sum equal the sum of the variances?: If the variables are uncorrelated, yes: that is, ${\rm cov}(X_i,X_j)=0$ for $i\neq j$, then $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j) = \sum_{i=1}^{n} {\rm cov}(X_i, X_i) = \sum_{i=1}^{n} {\rm var}(X_i) $$ If the variables are correlated, no, not in general: For example, suppose $X_1, X_2$ are two random variables each with variance $\sigma^2$ and ${\rm cov}(X_1,X_2)=\rho$ where $0 < \rho <\sigma^2$. Then ${\rm var}(X_1 + X_2) = 2(\sigma^2 + \rho) \neq 2\sigma^2$, so the identity fails. but it is possible for certain examples: Suppose $X_1, X_2, X_3$ have covariance matrix $$ \left( \begin{array}{ccc} 1 & 0.4 &-0.6 \\ 0.4 & 1 & 0.2 \\ -0.6 & 0.2 & 1 \\ \end{array} \right) $$ then ${\rm var}(X_1+X_2+X_3) = 3 = {\rm var}(X_1) + {\rm var}(X_2) + {\rm var}(X_3)$ Therefore if the variables are uncorrelated then the variance of the sum is the sum of the variances, but converse is not true in general.
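A quick numerical illustration of the two cases (the sample size and the dependence structure are made up for this sketch):

set.seed(1)
x1 <- rnorm(1e5)
x2 <- 0.5 * x1 + rnorm(1e5)            # correlated with x1 by construction
var(x1 + x2)                            # equals the next line (the identity is exact in-sample)
var(x1) + var(x2) + 2 * cov(x1, x2)
var(x1) + var(x2)                       # too small: the covariance term is missing

x3 <- rnorm(1e5)                        # generated independently of x1
var(x1 + x3)                            # close to var(x1) + var(x3), since cov(x1, x3) is near 0
var(x1) + var(x3)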
Does the variance of a sum equal the sum of the variances?
The answer to your question is "Sometimes, but not in general". To see this let $X_1, ..., X_n$ be random variables (with finite variances). Then, $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E
Does the variance of a sum equal the sum of the variances? The answer to your question is "Sometimes, but not in general". To see this let $X_1, ..., X_n$ be random variables (with finite variances). Then, $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2$$ Now note that $(\sum_{i=1}^{n} a_i)^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} a_i a_j $, which is clear if you think about what you're doing when you calculate $(a_1+...+a_n) \cdot (a_1+...+a_n)$ by hand. Therefore, $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) = E \left( \sum_{i=1}^{n} \sum_{j=1}^{n} X_i X_j \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i X_j) $$ similarly, $$ \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = \left[ \sum_{i=1}^{n} E(X_i) \right]^2 = \sum_{i=1}^{n} \sum_{j=1}^{n} E(X_i) E(X_j)$$ so $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} \big( E(X_i X_j)-E(X_i) E(X_j) \big) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j)$$ by the definition of covariance. Now regarding Does the variance of a sum equal the sum of the variances?: If the variables are uncorrelated, yes: that is, ${\rm cov}(X_i,X_j)=0$ for $i\neq j$, then $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = \sum_{i=1}^{n} \sum_{j=1}^{n} {\rm cov}(X_i, X_j) = \sum_{i=1}^{n} {\rm cov}(X_i, X_i) = \sum_{i=1}^{n} {\rm var}(X_i) $$ If the variables are correlated, no, not in general: For example, suppose $X_1, X_2$ are two random variables each with variance $\sigma^2$ and ${\rm cov}(X_1,X_2)=\rho$ where $0 < \rho <\sigma^2$. Then ${\rm var}(X_1 + X_2) = 2(\sigma^2 + \rho) \neq 2\sigma^2$, so the identity fails. but it is possible for certain examples: Suppose $X_1, X_2, X_3$ have covariance matrix $$ \left( \begin{array}{ccc} 1 & 0.4 &-0.6 \\ 0.4 & 1 & 0.2 \\ -0.6 & 0.2 & 1 \\ \end{array} \right) $$ then ${\rm var}(X_1+X_2+X_3) = 3 = {\rm var}(X_1) + {\rm var}(X_2) + {\rm var}(X_3)$ Therefore if the variables are uncorrelated then the variance of the sum is the sum of the variances, but converse is not true in general.
Does the variance of a sum equal the sum of the variances? The answer to your question is "Sometimes, but not in general". To see this let $X_1, ..., X_n$ be random variables (with finite variances). Then, $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E
1,669
Does the variance of a sum equal the sum of the variances?
$$\text{Var}\bigg(\sum_{i=1}^m X_i\bigg) = \sum_{i=1}^m \text{Var}(X_i) + 2\sum_{i\lt j} \text{Cov}(X_i,X_j).$$ So, if the covariances average to $0$, which would be a consequence if the variables are pairwise uncorrelated or if they are independent, then the variance of the sum is the sum of the variances. An example where this is not true: Let $\text{Var}(X_1)=1$. Let $X_2 = X_1$. Then $\text{Var}(X_1 + X_2) = \text{Var}(2X_1)=4$.
Does the variance of a sum equal the sum of the variances?
$$\text{Var}\bigg(\sum_{i=1}^m X_i\bigg) = \sum_{i=1}^m \text{Var}(X_i) + 2\sum_{i\lt j} \text{Cov}(X_i,X_j).$$ So, if the covariances average to $0$, which would be a consequence if the variables are
Does the variance of a sum equal the sum of the variances? $$\text{Var}\bigg(\sum_{i=1}^m X_i\bigg) = \sum_{i=1}^m \text{Var}(X_i) + 2\sum_{i\lt j} \text{Cov}(X_i,X_j).$$ So, if the covariances average to $0$, which would be a consequence if the variables are pairwise uncorrelated or if they are independent, then the variance of the sum is the sum of the variances. An example where this is not true: Let $\text{Var}(X_1)=1$. Let $X_2 = X_1$. Then $\text{Var}(X_1 + X_2) = \text{Var}(2X_1)=4$.
Does the variance of a sum equal the sum of the variances? $$\text{Var}\bigg(\sum_{i=1}^m X_i\bigg) = \sum_{i=1}^m \text{Var}(X_i) + 2\sum_{i\lt j} \text{Cov}(X_i,X_j).$$ So, if the covariances average to $0$, which would be a consequence if the variables are
1,670
Does the variance of a sum equal the sum of the variances?
I just wanted to add a more succinct version of the proof given by Macro, so it's easier to see what's going on. $\newcommand{\Cov}{\text{Cov}}\newcommand{\Var}{\text{Var}}$ Notice that, since $\Var(X) = \Cov(X,X)$, for any two random variables $X,Y$ we have: \begin{align} \Var(X+Y) &= \Cov(X+Y,X+Y) \\ &= E((X+Y)^2)-E(X+Y)E(X+Y) \\ &= E(X^2) - (E(X))^2 + E(Y^2) - (E(Y))^2 + 2(E(XY) - E(X)E(Y)) && \text{(by expanding)} \\ &= \Var(X) + \Var(Y) + 2(E(XY) - E(X)E(Y)) \end{align} Therefore, in general, the variance of the sum of two random variables is not the sum of the variances. However, if $X,Y$ are independent, then $E(XY) = E(X)E(Y)$, and we have $\Var(X+Y) = \Var(X) + \Var(Y)$. Notice that we can produce the result for the sum of $n$ random variables by a simple induction.
Does the variance of a sum equal the sum of the variances?
I just wanted to add a more succinct version of the proof given by Macro, so it's easier to see what's going on. $\newcommand{\Cov}{\text{Cov}}\newcommand{\Var}{\text{Var}}$ Notice that since $\Var(X)
Does the variance of a sum equal the sum of the variances? I just wanted to add a more succinct version of the proof given by Macro, so it's easier to see what's going on. $\newcommand{\Cov}{\text{Cov}}\newcommand{\Var}{\text{Var}}$ Notice that since $\Var(X) = \Cov(X,X)$ For any two random variables $X,Y$ we have: \begin{align} \Var(X+Y) &= \Cov(X+Y,X+Y) \\ &= E((X+Y)^2)-E(X+Y)E(X+Y) \\ &\text{by expanding,} \\ &= E(X^2) - (E(X))^2 + E(Y^2) - (E(Y))^2 + 2(E(XY) - E(X)E(Y)) \\ &= \Var(X) + \Var(Y) + 2(E(XY)) - E(X)E(Y)) \\ \end{align} Therefore in general, the variance of the sum of two random variables is not the sum of the variances. However, if $X,Y$ are independent, then $E(XY) = E(X)E(Y)$, and we have $\Var(X+Y) = \Var(X) + \Var(Y)$. Notice that we can produce the result for the sum of $n$ random variables by a simple induction.
Does the variance of a sum equal the sum of the variances? I just wanted to add a more succinct version of the proof given by Macro, so it's easier to see what's going on. $\newcommand{\Cov}{\text{Cov}}\newcommand{\Var}{\text{Var}}$ Notice that since $\Var(X)
1,671
Does the variance of a sum equal the sum of the variances?
Yes, if each pair of the $X_i$'s is uncorrelated, this is true. See the explanation on Wikipedia
Does the variance of a sum equal the sum of the variances?
Yes, if each pair of the $X_i$'s are uncorrelated, this is true. See the explanation on Wikipedia
Does the variance of a sum equal the sum of the variances? Yes, if each pair of the $X_i$'s are uncorrelated, this is true. See the explanation on Wikipedia
Does the variance of a sum equal the sum of the variances? Yes, if each pair of the $X_i$'s are uncorrelated, this is true. See the explanation on Wikipedia
1,672
Does the variance of a sum equal the sum of the variances?
I just want to add some steps in the very first equivalence of Macro's answer. Indeed I think it can be helpful to retrieve it directly from the usual definition of variance ${\rm var}(X)=E([X-E(X)]^2)$ from which in this particular case we should have: $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i - E\left( \sum_{i=1}^{n} X_i \right)\right]^2 \right) $$ Indeed expanding the argument of the expected value: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 -2 \left( \sum_{i=1}^{n} X_i \right) E\left( \sum_{i=1}^{n} X_i \right) + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 \right) $$ Exploiting the linearity of the expectations, the expected value of a linear combination of random variables is the linear combination of the expected values of the corresponding random variables, such that: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 E \left( \left( \sum_{i=1}^{n} X_i \right) E \left( \sum_{i=1}^{n} X_i \right) \right) + E \left( \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 \right) $$ The last term is the expectation of a constant which is equal to the constant itself (by means of the LOTUS theorem). Exploiting again linearity of the expectations in the second term: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 E\left( \sum_{i=1}^{n} X_i \right) E \left( \sum_{i=1}^{n} X_i \right) + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = $$ $$ = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2 + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = $$ $$ = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2 $$ which is the right hand side of the very first equivalence of the comment of Macro. In the end it's like adding a passage in it: $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i - E\left( \sum_{i=1}^{n} X_i \right)\right]^2 \right) = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2$$
Does the variance of a sum equal the sum of the variances?
I just want to add some steps in the very first equivalence of Macro's answer. Indeed I think it can be helpful to retrieve it directly from the usual definition of variance ${\rm var}(X)=E([X-E(X)]^2
Does the variance of a sum equal the sum of the variances? I just want to add some steps in the very first equivalence of Macro's answer. Indeed I think it can be helpful to retrieve it directly from the usual definition of variance ${\rm var}(X)=E([X-E(X)]^2)$ from which in this particular case we should have: $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i - E\left( \sum_{i=1}^{n} X_i \right)\right]^2 \right) $$ Indeed expanding the argument of the expected value: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 -2 \left( \sum_{i=1}^{n} X_i \right) E\left( \sum_{i=1}^{n} X_i \right) + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 \right) $$ Exploiting the linearity of the expectations, the expected value of a linear combination of random variables is the linear combination of the expected values of the corresponding random variables, such that: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 E \left( \left( \sum_{i=1}^{n} X_i \right) E \left( \sum_{i=1}^{n} X_i \right) \right) + E \left( \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 \right) $$ The last term is the expectation of a constant which is equal to the constant itself (by means of the LOTUS theorem). Exploiting again linearity of the expectations in the second term: $$ E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 E\left( \sum_{i=1}^{n} X_i \right) E \left( \sum_{i=1}^{n} X_i \right) + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = $$ $$ = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) -2 \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2 + \left[ E\left( \sum_{i=1}^{n} X_i \right) \right]^2 = $$ $$ = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2 $$ which is the right hand side of the very first equivalence of the comment of Macro. In the end it's like adding a passage in it: $$ {\rm var} \left( \sum_{i=1}^{n} X_i \right) = E \left( \left[ \sum_{i=1}^{n} X_i - E\left( \sum_{i=1}^{n} X_i \right)\right]^2 \right) = E \left( \left[ \sum_{i=1}^{n} X_i \right]^2 \right) - \left[ E \left( \sum_{i=1}^{n} X_i \right) \right] ^2$$
Does the variance of a sum equal the sum of the variances? I just want to add some steps in the very first equivalence of Macro's answer. Indeed I think it can be helpful to retrieve it directly from the usual definition of variance ${\rm var}(X)=E([X-E(X)]^2
1,673
Explain "Curse of dimensionality" to a child
Probably the kid will like to eat cookies, so let us assume that you have a whole truck of cookies, each with a different colour, a different shape, a different taste, a different price ... If the kid has to choose but takes into account only one characteristic, e.g. the taste, then it has four possibilities: sweet, salty, sour, bitter, so the kid only has to try four cookies to find what (s)he likes most. If the kid likes combinations of taste and colour, and there are 4 (I am rather optimistic here :-) ) different colours, then he already has to choose among 4x4=16 different types; if he wants, in addition, to take into account the shape of the cookies and there are 5 different shapes, then he will have to try 4x4x5=80 cookies. We could go on, but after eating all these cookies he might already have a belly-ache ... before he can make his best choice :-) Apart from the belly-ache, it can get really difficult to remember the differences in the taste of each cookie. As you can see (@Almo), most (all?) things become more complicated as the number of dimensions increases; this holds for adults, for computers and also for kids.
Explain "Curse of dimensionality" to a child
Probably the kid will like to eat cookies, so let us assume that you have a whole truck with cookies having a different colour, a different shape, a different taste, a different price ... If the kid h
Explain "Curse of dimensionality" to a child Probably the kid will like to eat cookies, so let us assume that you have a whole truck with cookies having a different colour, a different shape, a different taste, a different price ... If the kid has to choose but only take into account one characteristic e.g. the taste, then it has four possibilities: sweet, salt, sour, bitter, so the kid only has to try four cookies to find what (s)he likes most. If the kid likes combinations of taste and colour, and there are 4 (I am rather optimistic here :-) ) different colours, then he already has to choose among 4x4 different types; If he wants, in addition, to take into account the shape of the cookies and there are 5 different shapes then he will have to try 4x4x5=80 cookies We could go on, but after eating all these cookies he might already have belly-ache ... before he can make his best choice :-) Apart from the belly-ache, it can get really difficult to remember the differences in the taste of each cookie. As you can see (@Almo) most (all?) things become more complicated as the number of dimensions increases, this holds for adults, for computers and also for kids.
Explain "Curse of dimensionality" to a child Probably the kid will like to eat cookies, so let us assume that you have a whole truck with cookies having a different colour, a different shape, a different taste, a different price ... If the kid h
1,674
Explain "Curse of dimensionality" to a child
The analogy I like to use for the curse of dimensionality is a bit more on the geometric side, but I hope it's still sufficiently useful for your kid. It's easy to hunt a dog and maybe catch it if it is running around on the plain (two dimensions). It's much harder to hunt birds, which now have an extra dimension they can move in. If we pretend that ghosts are higher-dimensional beings (akin to the Sphere interacting with A. Square in Flatland), those are even more difficult to catch. :)
Explain "Curse of dimensionality" to a child
The analogy I like to use for the curse of dimensionality is a bit more on the geometric side, but I hope it's still sufficiently useful for your kid. It's easy to hunt a dog and maybe catch it if it
Explain "Curse of dimensionality" to a child The analogy I like to use for the curse of dimensionality is a bit more on the geometric side, but I hope it's still sufficiently useful for your kid. It's easy to hunt a dog and maybe catch it if it were running around on the plain (two dimensions). It's much harder to hunt birds, which now have an extra dimension they can move in. If we pretend that ghosts are higher-dimensional beings (akin to the Sphere interacting with A. Square in Flatland), those are even more difficult to catch. :)
Explain "Curse of dimensionality" to a child The analogy I like to use for the curse of dimensionality is a bit more on the geometric side, but I hope it's still sufficiently useful for your kid. It's easy to hunt a dog and maybe catch it if it
1,675
Explain "Curse of dimensionality" to a child
Ok, so let's analyze the example of the child clustering its toys. Imagine the child has only 3 toys: a blue soccer ball, a blue frisbee, a green cube (ok, maybe it's not the most fun toy you can imagine). Let's make the following initial hypothesis regarding how a toy can be made: Possible colors are: red, green, blue. Possible shapes are: circle, square, triangle. Now we can have (num_colors * num_shapes) = 3 * 3 = 9 possible clusters. The boy would cluster the toys as follows: CLUSTER A) contains the blue ball and the blue frisbee, because they have the same color and shape; CLUSTER B) contains the super-funny green cube. Using only these 2 dimensions (color, shape) we have 2 non-empty clusters: so in this first case 7/9 ~ 78% of our space is empty. Now let's increase the number of dimensions the child has to consider. We also make the following hypotheses regarding how a toy can be made: The size of the toy can vary from a few centimeters to 1 meter, in steps of ten centimeters: 0-10cm, 11-20cm, ..., 91cm-1m. The weight of the toy can vary in a similar manner up to 1 kilogram, with steps of 100 grams: 0-100g, 101-200g, ..., 901g-1kg. If we want to cluster our toys NOW, we have (num_colors * num_shapes * num_sizes * num_weights) = 3 * 3 * 10 * 10 = 900 possible clusters. The boy would cluster the toys as follows: CLUSTER A) contains the blue soccer ball because it is blue and heavy; CLUSTER B) contains the blue frisbee because it is blue and light; CLUSTER C) contains the super-funny green cube. Using the current 4 dimensions (shape, color, size, weight) only 3 clusters are non-empty: so in this case 897/900 ~ 99.7% of the space is empty. This is an example of what you find on Wikipedia (https://en.wikipedia.org/wiki/Curse_of_dimensionality): ...when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. Edit: I'm not sure I could really explain to a child why distance sometimes goes wrong in high-dimensional spaces, but let's try to proceed with our example of the child and his toys. Considering only the first 2 features {color, shape}, everyone agrees that the blue ball is more similar to the blue frisbee than to the green cube. Now let's add another 98 features {say: size, weight, day_of_production_of_the_toy, material, softness, day_in_which_the_toy_was_bought_by_daddy, price etc}: well, to me it would become increasingly difficult to judge which toy is similar to which. So: A large number of features can be irrelevant in a certain comparison of similarity, leading to a corruption of the signal-to-noise ratio. In high dimensions, all examples "look alike". If you ask me, a good read is "A Few Useful Things to Know about Machine Learning" (http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf); paragraph 6 in particular presents this kind of reasoning. Hope this helps!
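For readers who want to play with the sparsity argument numerically, here is a tiny R sketch; the number of toys and the feature levels are the made-up ones from the story, and toys are drawn uniformly at random:

set.seed(1)
n_toys <- 30
n_levels <- c(color = 3, shape = 3, size = 10, weight = 10)
for (k in 1:4) {
  cells <- prod(n_levels[1:k])               # possible "clusters" using the first k features
  toys  <- replicate(n_toys,
             paste(sapply(n_levels[1:k], sample, size = 1), collapse = "-"))
  occupied <- length(unique(toys))           # clusters that actually contain at least one toy
  cat(k, "features:", occupied, "of", cells, "cells occupied ->",
      round(100 * (1 - occupied / cells), 1), "% empty\n")
}

The fraction of empty cells climbs quickly as features are added, which is exactly the sparsity the Wikipedia quote describes.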
Explain "Curse of dimensionality" to a child
Ok, so let's analyze the example of the child clustering its toys. Imagine the child has only 3 toys: a blue soccer ball a blue freesbe a green cube (ok maybe it's not the most fun toy you can imagin
Explain "Curse of dimensionality" to a child Ok, so let's analyze the example of the child clustering its toys. Imagine the child has only 3 toys: a blue soccer ball a blue freesbe a green cube (ok maybe it's not the most fun toy you can imagine) Let's do the following initial hypothesis regarding how a toy can be made: Possible colors are: red, green, blue Possible shapes are: circle, square, triangle Now we can have have (num_colors * num_shapes) = 3 * 3 = 9 possible clusters. The boy would cluster the toys as follows: CLUSTER A) contains the blue ball and the blue freesbe, because thay have the same color and shape CLUSTER B) contains the super-funny green cube Using only these 2 dimensions (color, shape) we have 2 non-empty clusters: so in this first case 7/9 ~ 77% of our space is empty. Now let's increase the number of dimensions the child has to consider. We do also the following hypothesis regarding how a toy can be made: Size of the toy can vary between few centimeters to 1 meter, in step of ten centimeters: 0-10cm, 11-20cm, ..., 91cm-1m Weight of the toy can vary in a similar manner up to 1 kilogram, with steps of 100grams: 0-100g, 101-200g, ..., 901g-1kg. If we want to cluster our toys NOW, we have (num_colors * num_shapes * num_sizes * num_weights) = 3 * 3 * 10 * 10= 900 possible clusters. The boy would cluster the toys as follows: CLUSTER A) contains the blue soccer ball because is blue and heavy CLUSTER B) contains the blue freesbe because is blue and light CLUSTER C) contains the super-funny green cube Using the current 4 dimensions (shape, color, size, weigth) only 3 clusters are non empty: so in this case 897/900 ~ 99.7% of the space is empty. This is an example of what you find on Wikipedia (https://en.wikipedia.org/wiki/Curse_of_dimensionality): ...when the dimensionality increases, the volume of the space increases so fast that the available data become sparse. Edit: I'm not sure i could really explain to a child why distance sometimes goes wrong in high-dimensional spaces, but let's try to proceed with our example of the child and his toys. Consider only the 2 first features {color, shape} everyone agrees that the blue ball is more similar to the blue freesbe than to the green cube. Now let's add other 98 features {say: size, weight, day_of_production_of_the_toy, material, softness, day_in_which_the_toy_was_bought_by_daddy, price etc}: well, to me would be increasingly more difficult to judge which toy is similar to which. So: A large number of features can be irrelevant in a certain comparison of similarity, leading to a corruption of the signal-to-noise ratio. In high dimensions, all examples "look-alike". If you listen to me, a good lecture is "A Few Useful Things to Know about Machine Learning" (http://homes.cs.washington.edu/~pedrod/papers/cacm12.pdf), paragraph 6 in particular presents this kind of reasoning. Hope this helps!
Explain "Curse of dimensionality" to a child Ok, so let's analyze the example of the child clustering its toys. Imagine the child has only 3 toys: a blue soccer ball a blue freesbe a green cube (ok maybe it's not the most fun toy you can imagin
1,676
Explain "Curse of dimensionality" to a child
I have come across the following link that provides a very intuitive (and detailed) explanation of the curse of dimensionality: http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/ Quoting the article: "In this article, we will discuss the so called 'Curse of Dimensionality', and explain why it is important when designing a classifier. In the following sections I will provide an intuitive explanation of this concept, illustrated by a clear example of overfitting due to the curse of dimensionality." In a few words, the article derives (intuitively) that adding more features (i.e. increasing the dimensionality of our feature space) requires collecting more data. In fact, the amount of data we need to collect (to avoid overfitting) grows exponentially as we add more dimensions. It also has some nice illustrations.
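As a rough illustration of that exponential growth (the figure of 10 samples per dimension below is just an assumption for the sketch): if you want a comparable coverage of the feature space in every dimension, the total sample size scales like 10^d:

k <- 10                                  # desired samples per dimension (assumption)
d <- 1:6                                 # number of features
data.frame(dimensions = d, samples_needed = k^d)
# 10, 100, 1000, ..., 1e6 samples just to keep the same coverage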
Explain "Curse of dimensionality" to a child
I have come across the following link that provides a very intuitive (and detailed) explanation of curse of dimensionality: http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classificatio
Explain "Curse of dimensionality" to a child I have come across the following link that provides a very intuitive (and detailed) explanation of curse of dimensionality: http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classification/ In this article, we will discuss the so called ‘Curse of Dimensionality’, and explain why it is important when designing a classifier. In the following sections I will provide an intuitive explanation of this concept, illustrated by a clear example of overfitting due to the curse of dimensionality. In a few words this article derives (intuitively) that adding more features (i.e. increasing the dimensionality of our feature space) requires to collect more data. In fact the amount of data we need to collect (to avoid overfitting) grows exponentially as we add more dimensions. It also has nice illustrations like the following one:
Explain "Curse of dimensionality" to a child I have come across the following link that provides a very intuitive (and detailed) explanation of curse of dimensionality: http://www.visiondummy.com/2014/04/curse-dimensionality-affect-classificatio
1,677
Explain "Curse of dimensionality" to a child
The curse of dimensionality is somewhat fuzzy in definition as it describes different but related things in different disciplines. The following illustrates machine learning's curse of dimensionality:

Suppose a girl has ten toys, of which she likes only a few:

- a brown teddy bear
- a blue car
- a red train
- a yellow excavator
- a green book
- a grey plush walrus
- a black wagon
- a pink ball
- a white book
- an orange doll

Now, her father wants to give her a new toy as a present for her birthday and wants to ensure that she likes it. He thinks very hard about what the toys she likes have in common and finally arrives at a solution. He gives his daughter an all-coloured jigsaw puzzle. When she does not like it, he responds: "Why don't you like it? It does contain the letter w."

The father has fallen victim to the curse of dimensionality (and in-sample optimisation). By considering letters, he was moving in a 26-dimensional space and thus it was very likely that he would find some criterion separating the toys liked by the daughter. This did not need to be a single-letter criterion as in the example, but could also have been something like "contains at least one of a, n and p but none of u, f and s". To adequately tell whether letters are a good criterion for determining which toys his daughter likes, the father would have to know his daughter's preferences on a gargantuan number of toys¹ – or just use his brain and only consider parameters that are actually conceivable to affect the daughter's opinion.

¹ order of magnitude: $2^{26}$, if all letters were equally likely and he did not take into account multiple occurrences of letters.
Explain "Curse of dimensionality" to a child
The curse of dimensionality is somewhat fuzzy in definition as it describes different but related things in different disciplines. The following illustrates machine learning’s curse of dimensionality:
Explain "Curse of dimensionality" to a child The curse of dimensionality is somewhat fuzzy in definition as it describes different but related things in different disciplines. The following illustrates machine learning’s curse of dimensionality: Suppose a girl has ten toys, of which she likes only those in italics: a brown teddy bear a blue car a red train a yellow excavator a green book a grey plush walrus a black wagon a pink ball a white book an orange doll Now, her father wants to give her a new toy as a present for her birthday and wants to ensure that she likes it. He thinks very hard about what the toys she likes have in common and finally arrives at a solution. He gives his daughter an all-coloured jigsaw puzzle. When she does not like, he responds: “Why don’t you like it? It does contain the letter w.” The father has fallen victim to the curse of dimensionality (and in-sample optimisation). By considering letters, he was moving in a 26-dimensional space and thus it was very likely that he would find some criterion separating the toys liked by the daughter. This did not need to be a single-letter criterion as in the example, but could have also been something like contains at least one of a, n and p but none of u, f and s. To adequately tell whether letters are a good criterion for determining which toys his daughter likes, the father would have to know his daughter’s preferences on a gargantuan amount of toys¹ – or just use his brain and only consider parameters that are actually conceivable to affect the daughter’s opinion. ¹ order of magnitude: $2^{26}$, if all letters were equally likely and he would not take into account multiple occurrences of letters.
Explain "Curse of dimensionality" to a child The curse of dimensionality is somewhat fuzzy in definition as it describes different but related things in different disciplines. The following illustrates machine learning’s curse of dimensionality:
1,678
Explain "Curse of dimensionality" to a child
Think of a circle enclosed in a unit square. Think of a sphere enclosed in the unit cube. Think of an n-dimensional hypersphere enclosed in the n-dimensional unit hypercube. The volume of the hypercube is 1, of course, when measured in $1^n$ units. However, the volume of the hypersphere shrinks as n grows. If there was something interesting inside the hypersphere, it gets harder and harder to see in higher dimensions. In the $\infty$-dimensional case the hypersphere disappears! That's the curse.

UPDATE: It seems that some folks didn't get the connection to statistics. You can see the relationship if you imagine picking a random point inside a hypercube. In the two-dimensional case the probability that this point is inside the circle (hypersphere) is $\pi/4$, in the three-dimensional case it's $\pi/6$, etc. In the $\infty$-dimensional case the probability is zero.
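A quick check of how fast that inscribed-ball fraction collapses, using the standard formula for the volume of an n-ball of radius 1/2 inside the unit cube (nothing here is specific to this answer, just textbook geometry in a few lines of R):

ball_fraction <- function(n) pi^(n / 2) / (gamma(n / 2 + 1) * 2^n)
signif(sapply(c(2, 3, 5, 10, 20), ball_fraction), 3)
# 2D: 0.785 (= pi/4), 3D: 0.524 (= pi/6), 5D: 0.164, 10D: 0.00249, 20D: ~2.5e-8

So a uniformly random point in the unit hypercube almost never falls inside the inscribed hypersphere once the dimension is in the tens.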
Explain "Curse of dimensionality" to a child
Think of a circle enclosed in unit square. Think of a sphere enclosed in the unit cube. Think of an n-dimensional hyper sphere enclosed in the n-dimensional unit hyper cube. The volume of the hyper c
Explain "Curse of dimensionality" to a child Think of a circle enclosed in unit square. Think of a sphere enclosed in the unit cube. Think of an n-dimensional hyper sphere enclosed in the n-dimensional unit hyper cube. The volume of the hyper cube is 1, of course, when measured in $1^n$ units. However, the volume of a hyper sphere shrinks with n growing. If there was something interesting inside the hyper sphere it's harder and harder to see it in higher dimensions. In $\infty$-dimensional case the hyper sphere disappears! That's the curse. UPDATE: It seems that some folks didn't get the connection to statistics. You can see the relationship if you imagine picking a random point inside a hyper cube. In two dimensional case the probability that this point is inside the circle (hyper sphere) is $\pi/4$, in three dimensional case it's $\pi/6$ etc. In the $\infty$-dimensional case the probability is zero.
Explain "Curse of dimensionality" to a child Think of a circle enclosed in unit square. Think of a sphere enclosed in the unit cube. Think of an n-dimensional hyper sphere enclosed in the n-dimensional unit hyper cube. The volume of the hyper c
1,679
Explain "Curse of dimensionality" to a child
Me: "I am thinking of a small brown animal beginning with 'S'. What is it?" Her: "Squirrel!" Me: "OK, a harder one. I am thinking of a small brown animal. What is it?" Her: "Still a squirrel?" Me: "No" Her: "Rat, mouse, vole? Me: "Nope" Her: "Umm... give me a clue" Me: "Nope, but I'll do some thing better: I'll let you answer to a CrossValidated question" Her: [groans] Me: "The question is: What is the curse of dimensionality? And you already know the answer" Her: "I do?" Me: "You do. Why was it harder to guess the first animal than the second?" Her: "Because there are more small brown animals than small brown animals beginning with 'S'?" Me: "Right. And that's the curse of dimensionality. Let's play again." Her: "OK" Me: "I'm thinking of something. What is it?" Her: "No fair. This game is way to hard." Me: "True. That's why they call it a curse. You just can't do well without knowing the things I tend to think about."
Explain "Curse of dimensionality" to a child
Me: "I am thinking of a small brown animal beginning with 'S'. What is it?" Her: "Squirrel!" Me: "OK, a harder one. I am thinking of a small brown animal. What is it?" Her: "Still a squirrel?" Me: "
Explain "Curse of dimensionality" to a child Me: "I am thinking of a small brown animal beginning with 'S'. What is it?" Her: "Squirrel!" Me: "OK, a harder one. I am thinking of a small brown animal. What is it?" Her: "Still a squirrel?" Me: "No" Her: "Rat, mouse, vole? Me: "Nope" Her: "Umm... give me a clue" Me: "Nope, but I'll do some thing better: I'll let you answer to a CrossValidated question" Her: [groans] Me: "The question is: What is the curse of dimensionality? And you already know the answer" Her: "I do?" Me: "You do. Why was it harder to guess the first animal than the second?" Her: "Because there are more small brown animals than small brown animals beginning with 'S'?" Me: "Right. And that's the curse of dimensionality. Let's play again." Her: "OK" Me: "I'm thinking of something. What is it?" Her: "No fair. This game is way to hard." Me: "True. That's why they call it a curse. You just can't do well without knowing the things I tend to think about."
Explain "Curse of dimensionality" to a child Me: "I am thinking of a small brown animal beginning with 'S'. What is it?" Her: "Squirrel!" Me: "OK, a harder one. I am thinking of a small brown animal. What is it?" Her: "Still a squirrel?" Me: "
1,680
Explain "Curse of dimensionality" to a child
Suppose you want to ship some goods. You want to waste as little space as possible when packaging the goods (i.e., leave as little empty space as possible), because shipping costs are related to the volume of the envelope/box. The containers at your disposal (envelopes, boxes) have right angles, so no sacks etc.

First problem: ship a pen (a "line") - you can build a box around it with no space lost.

Second problem: ship a CD (a "circle"). You need to put it into a square envelope. Depending on how old the child is, she may be able to calculate how much of the envelope will remain empty (and still know that there are CDs and not just downloads ;-)).

Third problem: ship a football (soccer, and it has to be inflated!). You will need to put it into a box, and some space will remain empty. That empty space will be a higher fraction of the total volume than in the CD example. At that point my intuition using this analogy stops, because I cannot imagine a 4th dimension.

EDIT: The analogy is most useful (if at all) for nonparametric estimation, which uses observations "local" to the point of interest to estimate, say, a density or a regression function at that point. The curse of dimensionality is that in higher dimensions, one either needs a much larger neighborhood for a given number of observations (which makes the notion of locality questionable) or a very large amount of data.
Explain "Curse of dimensionality" to a child
Suppose you want to ship some goods. You want to waste as little space as possible when packaging the goods (i.e., leave as little empty space as possible), because shipping costs are related to volum
Explain "Curse of dimensionality" to a child Suppose you want to ship some goods. You want to waste as little space as possible when packaging the goods (i.e., leave as little empty space as possible), because shipping costs are related to volume of the envelope/box. The containers at your disposal (envelopes, boxes) have right angles, so no sacks etc. First problem: ship a pen (a "line") - you can build a box around it with no space lost. Second problem: ship a CD (a "sphere"). You need to put it into a square envelope. Depending how old the child is, she may be able to calculate how much of the envelope will remain empty (and still know that there are CDs and not just downloads ;-)). Third problem: ship a football (soccer, and it has to be inflated!). You will need to put it into a box, and some space will remain empty. That empty space will be a higher fraction of the total volume than in the CD example. At that point my intuition using this analogy stops, because I cannot imagine a 4th dimension. EDIT: The analogy is most useful (if at all) for nonparametric estimation, which uses observations "local" to the point of interest to estimate, say, a density or a regression function at that point. The curse of dimensionality is that in higher dimensions, one either needs a much larger neighborhood for a given number of observations (which makes the notion of locality questionable) or a large amount of data.
Explain "Curse of dimensionality" to a child Suppose you want to ship some goods. You want to waste as little space as possible when packaging the goods (i.e., leave as little empty space as possible), because shipping costs are related to volum
1,681
Explain "Curse of dimensionality" to a child
My 6 yo is more into primary-cause research, like "but where did all this gas in the universe come from?"... well, I'll imagine your child understands "higher dimensions", which seems very unlikely to me.

Let's ask the following question: pick random points (uniformly) in the $n$-cube $[0,1]^n$, one by one. How long does it take to get a point in the lower corner $\left[0, {1\over2}\right]^n$? The answer, young lad, is that the probability for a random point to be in this lower corner is $\left({1\over 2}\right)^n$, which means that the expected number of points to draw before hitting this corner is $2^n$ (by the properties of the geometric distribution). And as you know from the wheat and chessboard problem, this quickly becomes awfully huge. Now go pick up your room, daddy's got to work.

PS about clustering... think about your points scattered in this high-dimensional box. It's so big that there are $2^n$ sub-boxes with edges of length ${1\over 2}$. It will take some time before you pick two points in the same sub-box. Well, that can be a problem even when the points are not drawn uniformly at random, but from some clusters. If the clusters are not chosen arbitrarily small, it can take a very long time before you pick two points in the same sub-box. You understand that this hinders clustering...
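A tiny sanity check of that $2^n$ expectation in R (purely illustrative; rgeom counts failures before the first success, so we add 1 to get the number of draws):

n <- 10
p <- 0.5^n                      # chance a uniform point lands in [0, 1/2]^n
1 / p                           # expected number of draws = 2^n = 1024
set.seed(1)
mean(rgeom(1e5, p) + 1)         # simulated average number of draws, close to 1024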
Explain "Curse of dimensionality" to a child
My 6 yo is more on the verse of the primary cause research, like in "but where did all this gas in the universe come from?"... well, I’ll imagine your child understand "higher dimensions", which seems
Explain "Curse of dimensionality" to a child My 6 yo is more on the verse of the primary cause research, like in "but where did all this gas in the universe come from?"... well, I’ll imagine your child understand "higher dimensions", which seems very unlikely to me. Let’s ask the following question: pick random points (uniformly) in a $n$-cube $[0,1]^n$, one by one. How long does it take to get a point in the lower corner $\left[ {1\over2}, {1\over2}\right]^n$? The answer, young lad, is that the probability for a random point to be in this lower corner is $\left({1\over 2}\right)^n$, which means that the expected number of points to draw before hitting the left corner is $2^n$ (by the properties of the geometric distribution). And as you know it from the wheat and chessboard problem, this quickly becomes awfully huge. Now go pick up your room, daddy’s got to work. PS about clustering... think about your points scattered in this high dimension box. It’s so big that there are $2^n$ sub-boxes with edges of length ${1\over 2}$. It will take some time before picking two points in the same sub-box. Well that can a problem even when the point are not drawn uniformly at random, but in some clusters. If the clusters are not chosen arbitrarily small, it can take very long before picking two points in the same sub-box. You understand that this hinders clustering...
Explain "Curse of dimensionality" to a child My 6 yo is more on the verse of the primary cause research, like in "but where did all this gas in the universe come from?"... well, I’ll imagine your child understand "higher dimensions", which seems
1,682
Explain "Curse of dimensionality" to a child
Fcop offered a great analogy with cookies but covered only the sampling-density aspect of the curse of dimensionality. We can extend this analogy to the sampling volume, or the distance, by distributing the same number of Fcop's cookies in, say, ten boxes in one line, 10x10 boxes flat on the table, and 10x10x10 boxes in a stack. Then you can show that to eat the same share of cookies the child will have to open ever more boxes. It is really about expectations, but let's take a "worst case scenario" approach to illustrate. If there are 8 cookies and we want to eat half, i.e. 4: from 10 boxes, in the worst case we only need to open 6 boxes. That's 60% - just about a half too. From 10x10 (again in the worst case) - 96 (96%). And from 10x10x10 - 996 (99.6%). That's almost all of them! Maybe a storage-room analogy and the distance walked between rooms would work better than boxes here.
Explain "Curse of dimensionality" to a child
Fcop offered a great analogy with cookies but have covered only the sampling density aspect of the curse of dimensionality. We can extend this analogy to the sampling volume or the distance by distrib
Explain "Curse of dimensionality" to a child Fcop offered a great analogy with cookies but have covered only the sampling density aspect of the curse of dimensionality. We can extend this analogy to the sampling volume or the distance by distributing same number of Fcop's cookies in, say, ten boxes in one line, 10x10 boxes flat on the table and 10x10x10 in a stack. Then you can show that to eat the same share of cookies the child will have to open ever more boxes. It is really about the expectations but let's take a "worst case scenario" approach to illustrate. If there are 8 cookies and we want to eat a half i.e. 4, from 10 boxes in a worst case we only need to open 6 boxes. That's 60% - just about a half too. From 10x10 (again in a worst case) - 96(%). And from 10x10x10 - 996(99,6%). That's almost all of them! May be the storage room analogy and distance walked between rooms would do better than boxes here.
Explain "Curse of dimensionality" to a child Fcop offered a great analogy with cookies but have covered only the sampling density aspect of the curse of dimensionality. We can extend this analogy to the sampling volume or the distance by distrib
1,683
Explain "Curse of dimensionality" to a child
There is a classic, textbook math problem that shows this. Would you rather earn (option 1) 100 pennies a day, every day for a month, or (option 2) a penny doubled every day for a month? You can ask your child this question.

If you choose option 1:
on day 1 you get 100 pennies
on day 2 you get 100 pennies
on day 3 you get 100 pennies
...
on day 30 you get 100 pennies
on the $n^{th}$ day you get 100 pennies.

The total number of pennies is found by multiplying the number of days by the number of pennies per day:
$$ \sum_{i=1}^{30}100 = 30 \cdot 100 = 3000 $$

If you choose option 2:
on day 1 you get 1 penny
on day 2 you get 2 pennies
on day 3 you get 4 pennies
on day 4 you get 8 pennies
on day 5 you get 16 pennies
...
on day 30 you get 536,870,912 pennies
on the $n^{th}$ day you get $2^{n-1}$ pennies.

The total number of pennies follows from observing that the sum over all prior days is one less than the number of pennies received on the current day:
$$ \sum_{i=1}^{30}2^{i-1} = 2^{30}-1 = 1073741824 - 1 = 1073741823 $$

Anyone with greed will choose the bigger number. Simple greed is easy to find and requires little thought. Unspeaking animals are easily capable of greed - insects are notoriously good at it. Humans are capable of much more. If you start out with one penny instead of a hundred the greed is easier, but if you change the power for a polynomial it is more complex. Complex can also mean much more valuable.

About "the curse"

The "most important" physics-related mathematical operation is matrix inversion. It drives solutions of systems of partial differential equations, the most common of which are Maxwell's equations (electromagnetics), the Navier-Stokes equations (fluids), Poisson's equation (diffusive transfer), and variations on Hooke's law (deformable solids). Each of these equations has college courses built around it.

Raw matrix inversion as taught in linear algebra, aka the Gauss-Jordan method, requires on the order of $n^3$ operations to complete. Here "n" is not the number of dimensions, but the number of discretized chunks. It abstracts to the number of dimensions easily: if it takes roughly 10 chunks per dimension to adequately represent the geometry, a 2d object needs about 10^2 chunks, a 3d analog about 10^3, and a 4d analog about 10^4. If you are thinking in terms of geometry you might say "there aren't 4 dimensions", but in terms of physical quantities like temperature, concentration, or velocity in a particular direction, each requires its own "column" and counts as a dimension. Taking these equations from 2d to 3d can increase the "n" by several powers.

The curse exists because if it is overcome there is a pot of gold at the end of the rainbow. It isn't easy - great minds have engaged the problem vigorously.

link: https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
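A quick verification of both totals, in base R (nothing assumed beyond the 30-day month used above):

sum(rep(100, 30))        # option 1: 3,000 pennies
sum(2^(0:29))            # option 2: 1,073,741,823 pennies
2^30 - 1                 # the same total via the closed form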
Explain "Curse of dimensionality" to a child
There is a classic, textbook, math problem that shows this. Would you rather earn (option 1) 100 pennies a day, every day for a month, or (option 2) a penny doubled every day for a month? You can ask
Explain "Curse of dimensionality" to a child There is a classic, textbook, math problem that shows this. Would you rather earn (option 1) 100 pennies a day, every day for a month, or (option 2) a penny doubled every day for a month? You can ask your child this question. If you choose option 1, on day 1 you get 100 pennies on day 2 you get 100 pennies on day 3 you get 100 pennies ... on day 30 you get 100 pennies on the $n^{th}$ day you get 100 pennies. the total number of pennies is found by multiplying the number of days by the number of pennies per day: $$ \sum_{i=1}^{30}100 = 30 \cdot 100 = 3000 $$ If you choose option 2: on day 1 you get 1 penny on day 2 you get 2 pennies on day 3 you get 4 pennies on day 4 you get 8 pennies on day 5 you get 16 pennies ... on day 30 you get 1,073,741,824 pennies on the $n^{th}$ day you get $2^n$ pennies. the total number of pennies is observing that the sum of all prior days is one less than the number of pennies received on the current day: $$ \sum_{i=1}^{30}2^n= \left(2^{31} \right)-1 = 2147483648 - 1 = 2147483647 $$ Anyone with greed will choose the bigger number. Simple greed is easy to find, and requires little thought. Unspeaking animals are easily capable of greed - insects are notoriously good at it. Humans are capable of much more. If you start out with one penny instead of a hundred the greed is easier, but if you change the power for a polynomial it is more complex. Complex can also mean much more valuable. About "the curse" The "most important" physics-related mathematical operation is matrix inversion. It drives solutions of systems of partial differential equations, the most common of which are Maxwell's equations (electromagnetics), Navier Stokes equations(fluids), Poisson's equation (diffusive transfer), and variations on Hookes Law (deformable solids). Each of these equations has college courses built around them. Raw matrix inversion as taught in Linear Algebra, aka Gauss-Jordan method, requires order of $n^3$ operations to complete. Here "n" is not the number of dimensions, but the number of discretized chunks. It abstracts to number of dimensions easily. If it takes 10 chunks to adequately represent the geometry of a 2d object, it takes at least 10^2 to adequately represent a 3d analog, and 10^2^2 to represent a 4d analog. If you are thinking in terms of geometry you might say "there aren't 4 dimensions" but in terms of physical quantities like temperature, concentration, or velocity in a particular direction each require their own "column" and count as a dimension. Taking these equations from 2d to 3d can increase the "n" by several powers. The curse exists because if it is overcome there is a pot of golden value at the end of the rainbow. It isn't easy - great minds have engaged the problem vigorously. link: https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations
Explain "Curse of dimensionality" to a child There is a classic, textbook, math problem that shows this. Would you rather earn (option 1) 100 pennies a day, every day for a month, or (option 2) a penny doubled every day for a month? You can ask
1,684
Explain "Curse of dimensionality" to a child
Think about cookie attributes (i.e. features, i.e. dimensions) and assume you want to find the one best combination of cookie attributes. With an increasing number of features (dimensions), randomly picking the right one becomes improbable. Put another way: randomly picking a small interval on a line (1D) is easier than picking a small area in 2D, which is easier than picking a small volume in 3D, which is easier than picking a small volume and timepoint in 4D (think about meeting your buddy at the playground, but you don't know when). This trend of hardness continues in higher dimensions.
Explain "Curse of dimensionality" to a child
Think about cookie attributes (i.e. features, i.e. dimensions) and assume you want to find the one best cookie attributes combination. With increasing features (dimensions) randomly picking the right
Explain "Curse of dimensionality" to a child Think about cookie attributes (i.e. features, i.e. dimensions) and assume you want to find the one best cookie attributes combination. With increasing features (dimensions) randomly picking the right becomes improbable. Put in another way: randomly picking a small interval on a line (1D) is easier than picking a small area in (2D) which is easier than picking a small volume in 3D which is easier than picking a small volume and timepoint in 4D (think about meeting your buddy at the play ground but you don't know when). This trend of hardness goes on at higher dimensions.
Explain "Curse of dimensionality" to a child Think about cookie attributes (i.e. features, i.e. dimensions) and assume you want to find the one best cookie attributes combination. With increasing features (dimensions) randomly picking the right
1,685
Simple algorithm for online outlier detection of a generic time series
Here is a simple R function that will find time series outliers (and optionally show them in a plot). It will handle seasonal and non-seasonal time series. The basic idea is to find robust estimates of the trend and seasonal components and subtract them. Then find outliers in the residuals. The test for residual outliers is the same as for the standard boxplot -- points greater than 1.5IQR above or below the upper and lower quartiles are assumed outliers. The number of IQRs above/below these thresholds is returned as an outlier "score". So the score can be any positive number, and will be zero for non-outliers. I realise you are not implementing this in R, but I often find an R function a good place to start. Then the task is to translate this into whatever language is required.

tsoutliers <- function(x, plot = FALSE)
{
  x <- as.ts(x)
  if (frequency(x) > 1)
    resid <- stl(x, s.window = "periodic", robust = TRUE)$time.series[, 3]
  else
  {
    tt <- 1:length(x)
    resid <- residuals(loess(x ~ tt))
  }
  resid.q <- quantile(resid, prob = c(0.25, 0.75))
  iqr <- diff(resid.q)
  limits <- resid.q + 1.5 * iqr * c(-1, 1)
  score <- abs(pmin((resid - limits[1]) / iqr, 0) + pmax((resid - limits[2]) / iqr, 0))
  if (plot)
  {
    plot(x)
    x2 <- ts(rep(NA, length(x)))
    x2[score > 0] <- x[score > 0]
    tsp(x2) <- tsp(x)
    points(x2, pch = 19, col = "red")
    return(invisible(score))
  }
  else
    return(score)
}
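A hypothetical usage example (the simulated monthly series and the spike positions below are made up purely to exercise the function):

set.seed(42)
y <- ts(10 * sin(2 * pi * (1:120) / 12) + rnorm(120), frequency = 12)
y[c(30, 75)] <- y[c(30, 75)] + 12     # inject two artificial spikes
score <- tsoutliers(y, plot = TRUE)
which(score > 0)                      # the injected spikes (and perhaps a stray point) get flagged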
1,686
Simple algorithm for online outlier detection of a generic time series
A good solution will have several ingredients, including: Use a resistant, moving window smooth to remove nonstationarity. Re-express the original data so that the residuals with respect to the smooth are approximately symmetrically distributed. Given the nature of your data, it's likely that their square roots or logarithms would give symmetric residuals. Apply control chart methods, or at least control chart thinking, to the residuals. As far as that last one goes, control chart thinking shows that "conventional" thresholds like 2 SD or 1.5 times the IQR beyond the quartiles work poorly because they trigger too many false out-of-control signals. People usually use 3 SD in control chart work, whence 2.5 (or even 3) times the IQR beyond the quartiles would be a good starting point. I have more or less outlined the nature of Rob Hyndman's solution while adding to it two major points: the potential need to re-express the data and the wisdom of being more conservative in signaling an outlier. I'm not sure that Loess is good for an online detector, though, because it doesn't work well at the endpoints. You might instead use something as simple as a moving median filter (as in Tukey's resistant smoothing). If outliers don't come in bursts, you can use a narrow window (5 data points, perhaps, which will break down only with a burst of 3 or more outliers within a group of 5). Once you have performed the analysis to determine a good re-expression of the data, it's unlikely you'll need to change the re-expression. Therefore, your online detector really only needs to reference the most recent values (the latest window) because it won't use the earlier data at all. If you have really long time series you could go further to analyze autocorrelation and seasonality (such as recurring daily or weekly fluctuations) to improve the procedure.
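A rough sketch of the recipe described here (re-express, resistant moving median, conservative IQR limits). The square-root re-expression, the window of 5 and the factor 3 are assumptions for illustration; the toy series is a stand-in for real hit counts, and runmed() is a centred filter, so a truly online variant would use only past values:

set.seed(2)
y <- rpois(500, lambda = 20)             # stand-in for a web-hit-count series
k <- 5
z <- sqrt(y)                             # re-express counts to symmetrize residuals
resid <- z - runmed(z, k)                # residuals from a resistant moving median
q <- quantile(resid, c(0.25, 0.75))
limits <- q + 3 * IQR(resid) * c(-1, 1)  # conservative control limits
outliers <- which(resid < limits[1] | resid > limits[2])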
1,687
Simple algorithm for online outlier detection of a generic time series
(This answer responded to a duplicate (now closed) question at Detecting outstanding events, which presented some data in graphical form.) Outlier detection depends on the nature of the data and on what you are willing to assume about them. General-purpose methods rely on robust statistics. The spirit of this approach is to characterize the bulk of the data in a way that is not influenced by any outliers and then point to any individual values that do not fit within that characterization. Because this is a time series, it adds the complication of needing to (re)detect outliers on an ongoing basis. If this is to be done as the series unfolds, then we are allowed only to use older data for the detection, not future data! Moreover, as protection against the many repeated tests, we would want to use a method that has a very low false positive rate.

These considerations suggest running a simple, robust moving window outlier test over the data. There are many possibilities, but one simple, easily understood and easily implemented one is based on a running MAD: median absolute deviation from the median. This is a strongly robust measure of variation within the data, akin to a standard deviation. An outlying peak would be several MADs or more greater than the median.

There is still some tuning to be done: how much of a deviation from the bulk of the data should be considered outlying and how far back in time should one look? Let's leave these as parameters for experimentation. Here's an R implementation applied to data $x = (1,2,\ldots,n)$ (with $n=1150$ to emulate the data) with corresponding values $y$:

# Parameters to tune to the circumstances:
window <- 30
threshold <- 5

# An upper threshold ("ut") calculation based on the MAD:
library(zoo) # rollapply()
ut <- function(x) {m = median(x); median(x) + threshold * median(abs(x - m))}
z <- rollapply(zoo(y), window, ut, align="right")
z <- c(rep(z[1], window-1), z) # Use z[1] throughout the initial period
outliers <- y > z

# Graph the data, show the ut() cutoffs, and mark the outliers:
plot(x, y, type="l", lwd=2, col="#E00000", ylim=c(0, 20000))
lines(x, z, col="Gray")
points(x[outliers], y[outliers], pch=19)

Applied to a dataset like the red curve illustrated in the question, it produces this result: The data are shown in red, the 30-day window of median+5*MAD thresholds in gray, and the outliers--which are simply those data values above the gray curve--in black. (The threshold can only be computed beginning at the end of the initial window. For all data within this initial window, the first threshold is used: that's why the gray curve is flat between x=0 and x=30.)

The effects of changing the parameters are (a) increasing the value of window will tend to smooth out the gray curve and (b) increasing threshold will raise the gray curve. Knowing this, one can take an initial segment of the data and quickly identify values of the parameters that best segregate the outlying peaks from the rest of the data. Apply these parameter values to checking the rest of the data. If a plot shows the method is worsening over time, that means the nature of the data are changing and the parameters might need re-tuning.

Notice how little this method assumes about the data: they do not have to be normally distributed; they do not need to exhibit any periodicity; they don't even have to be non-negative. All it assumes is that the data behave in reasonably similar ways over time and that the outlying peaks are visibly higher than the rest of the data.

If anyone would like to experiment (or compare some other solution to the one offered here), here is the code I used to produce data like those shown in the question.

n.length <- 1150
cycle.a <- 11
cycle.b <- 365/12
amp.a <- 800
amp.b <- 8000

set.seed(17)
x <- 1:n.length
baseline <- (1/2) * amp.a * (1 + sin(x * 2*pi / cycle.a)) * rgamma(n.length, 40, scale=1/40)
peaks <- rbinom(n.length, 1, exp(2*(-1 + sin(((1 + x/2)^(1/5) / (1 + n.length/2)^(1/5))*x * 2*pi / cycle.b))*cycle.b))
y <- peaks * rgamma(n.length, 20, scale=amp.b/20) + baseline
1,688
Simple algorithm for online outlier detection of a generic time series
If you're worried about the assumptions behind any particular approach, one option is to train a number of learners on different signals, then use ensemble methods and aggregate over the "votes" from your learners to make the outlier classification. BTW, this might be worth reading or skimming since it references a few approaches to the problem: Online outlier detection over data streams
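A bare-bones sketch of the voting idea in R (the three component detectors and their thresholds below are arbitrary placeholders; any set of scores over the same time points would do):

set.seed(5)
y <- rpois(300, 20); y[150] <- 90               # toy series with one spike
s1 <- abs(y - median(y)) / mad(y)               # robust z-score
s2 <- abs(y - mean(y)) / sd(y)                  # plain z-score
s3 <- abs(y - runmed(y, 11)) / mad(y)           # deviation from a moving median
votes <- (s1 > 4) + (s2 > 4) + (s3 > 4)         # thresholds are arbitrary here
which(votes >= 2)                               # flag points at least 2 of 3 detectors agree on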
1,689
Simple algorithm for online outlier detection of a generic time series
I am guessing a sophisticated time series model will not work for you because of the time it takes to detect outliers using this methodology. Therefore, here is a workaround: First, establish baseline 'normal' traffic patterns for a year based on manual analysis of historical data, accounting for time of the day, weekday vs weekend, month of the year, etc. Use this baseline along with some simple mechanism (e.g., the moving average suggested by Carlos) to detect outliers. You may also want to review the statistical process control literature for some ideas.
1,690
Simple algorithm for online outlier detection of a generic time series
Seasonally adjust the data such that a normal day looks closer to flat. You could take today's 5:00pm sample and subtract or divide out the average of the previous 30 days at 5:00pm. Then look past N standard deviations (measured using pre-adjusted data) for outliers. This could be done separately for weekly and daily "seasons."
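One way this could look in R (the 30-day window and the 4-standard-deviation cutoff are assumptions, and the hourly series is fabricated here only so the sketch runs end to end):

set.seed(1)
hour <- rep(0:23, times = 60)                        # 60 days of hourly data
y <- 100 + 40 * sin(2 * pi * hour / 24) + rnorm(length(hour), sd = 5)
y[1000] <- y[1000] + 50                              # inject one anomaly

prev_mean <- function(v, k = 30)                     # mean of the previous k same-hour values
  sapply(seq_along(v), function(i) if (i > k) mean(v[(i - k):(i - 1)]) else NA)

base <- ave(y, hour, FUN = prev_mean)                # the last 30 days' average at the same hour
dev  <- y - base                                     # seasonally adjusted deviation
flag <- abs(dev) > 4 * sd(dev, na.rm = TRUE)
which(flag)                                          # should include the injected point at 1000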
1,691
Simple algorithm for online outlier detection of a generic time series
An alternative to the approach outlined by Rob Hyndman would be to use Holt-Winters forecasting. The confidence bands derived from Holt-Winters can be used to detect outliers. Here is a paper that describes how to use Holt-Winters for "Aberrant Behavior Detection in Time Series for Network Monitoring". An implementation for RRDTool can be found here.
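A minimal sketch of that idea using base R's HoltWinters (the built-in co2 series and the 99% level are just stand-ins for your own data and tolerance; in practice you would refit or update as new observations arrive):

y <- co2                                           # monthly CO2 measurements, built into R
train <- window(y, end = c(1990, 12))
fit <- HoltWinters(train)
p <- predict(fit, n.ahead = 1, prediction.interval = TRUE, level = 0.99)
next_obs <- window(y, start = c(1991, 1))[1]
next_obs < p[1, "lwr"] || next_obs > p[1, "upr"]   # TRUE would flag an aberrant value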
1,692
Simple algorithm for online outlier detection of a generic time series
Spectral analysis detects periodicity in stationary time series. The frequency domain approach based on spectral density estimation is an approach I would recommend as your first step. If for certain periods irregularity means a much higher peak than is typical for that period, then the series with such irregularities would not be stationary and spectral analysis would not be appropriate. But assuming you have identified the period that has the irregularities, you should be able to determine approximately what the normal peak height would be and then set a threshold at some level above that average to designate the irregular cases.
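For example, identifying the dominant period with base R's spectrum() might look like this (the built-in ldeaths series is only a stand-in for your traffic data):

y <- ldeaths                          # monthly UK lung-disease deaths, built into R
s <- spectrum(y, plot = FALSE)        # raw periodogram
dominant_freq <- s$freq[which.max(s$spec)]
1 / dominant_freq                     # dominant period; for ldeaths roughly 1 year (annual cycle)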
1,693
Simple algorithm for online outlier detection of a generic time series
Since this is time series data, a simple exponential filter http://en.wikipedia.org/wiki/Exponential_smoothing will smooth the data. It is a very good filter since you don't need to accumulate old data points. Compare every newly smoothed data value with its unsmoothed value. Once the deviation exceeds a certain predefined threshold (depending on what you believe an outlier in your data is), then your outlier can be easily detected. In C I will do the following for a real-time 16 bit sample (I believe the idea is explained here: https://dsp.stackexchange.com/questions/378/what-is-the-best-first-order-iir-approximation-to-a-moving-average-filter)

#define BITS2 2   /* roughly log2(1/alpha): larger value -> smoother output */

short Simple_Exp_Filter(int new_sample)
{
    static long filtered_sample = 0;                /* filter state kept in 16.16 fixed point */
    long local_sample = (long) new_sample << 16;    /* we assume a 16-bit sample */

    filtered_sample += (local_sample - filtered_sample) >> BITS2;
    return (short) ((filtered_sample + 0x8000) >> 16);   /* round by adding .5 and truncating */
}

int main(void)
{
    int newly_arrived = function_receive_new_sample();   /* placeholder for your data source */
    int filtered_sample = Simple_Exp_Filter(newly_arrived);

    /* relative deviation of the raw sample from its smoothed value */
    if (newly_arrived != 0 &&
        abs(newly_arrived - filtered_sample) / (double) abs(newly_arrived) > THRESHOLD)
    {
        /* AN OUTLIER HAS BEEN FOUND */
    }
    return 0;
}
1,694
Simple algorithm for online outlier detection of a generic time series
You could use the standard deviation of the last N measurements (you have to pick a suitable N). A good anomaly score would be how many standard deviations a measurement is from the moving average.
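A compact R version of this (the window length of 50, the cutoff of 4 and the toy series are all assumptions for the sketch):

set.seed(7)
y <- rnorm(300, mean = 50, sd = 2); y[200] <- 70    # toy series with one anomaly

N <- 50
score <- rep(NA_real_, length(y))
for (i in (N + 1):length(y)) {
  w <- y[(i - N):(i - 1)]                  # the last N measurements, excluding the current one
  score[i] <- abs(y[i] - mean(w)) / sd(w)  # anomaly score = distance in standard deviations
}
which(score > 4)                           # the injected point at 200 should show up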
1,695
Simple algorithm for online outlier detection of a generic time series
What I do is group the measurements by hour and day of week and compare the standard deviations within each group. It still doesn't correct for things like holidays and summer/winter seasonality, but it's correct most of the time. The downside is that you really need to collect a year or so of data before the standard deviations start making sense.
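A rough sketch of that bucketing approach in Python with pandas (the column names, the simulated data, and the 3-standard-deviation cut-off are assumptions for illustration, not from the answer above):

import numpy as np
import pandas as pd

# Pretend history: about a year of hourly measurements with columns 'ts' and 'value'.
rng = pd.date_range("2023-01-01", periods=24 * 7 * 52, freq="h")
df = pd.DataFrame({"ts": rng, "value": np.random.normal(100, 10, len(rng))})

# Baseline mean/std per (day of week, hour of day) bucket.
df["dow"] = df["ts"].dt.dayofweek
df["hour"] = df["ts"].dt.hour
baseline = df.groupby(["dow", "hour"])["value"].agg(["mean", "std"])

def anomaly_score(ts, value):
    """How many standard deviations the value is from its bucket's mean."""
    mean, std = baseline.loc[(ts.dayofweek, ts.hour)]
    return abs(value - mean) / std if std > 0 else 0.0

# A new measurement gets flagged if it is more than 3 standard deviations out.
new_ts, new_value = pd.Timestamp("2024-01-01 09:00"), 145.0
if anomaly_score(new_ts, new_value) > 3:
    print("possible outlier")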
1,696
Simple algorithm for online outlier detection of a generic time series
Anomaly detection requires the construction of an equation that describes the expectation. Intervention detection is available in both a non-causal and a causal setting. If one has a predictor series like price, then things can get a little complicated. Other responses here don't seem to take into account assignable cause attributable to user-specified predictor series like price, and thus might be flawed. Quantity sold may well depend on price, perhaps on previous prices, and perhaps on quantity sold in the past. The basis for the anomaly detection (pulses, seasonal pulses, level shifts and local time trends) is found in https://pdfs.semanticscholar.org/09c4/ba8dd3cc88289caf18d71e8985bdd11ad21c.pdf
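A deliberately simplified sketch of "model the expectation, then flag large residuals" in Python (this is plain least squares on made-up data, not the intervention-detection procedure from the referenced paper; the chosen lags and the 3-sigma cut-off are assumptions):

import numpy as np

# Made-up series: quantity sold driven by price, with one injected anomaly.
rng = np.random.default_rng(0)
n = 200
price = 10 + rng.normal(0, 1, n)
sales = 50 - 2 * price + rng.normal(0, 3, n)
sales[150] += 25                         # the anomaly we hope to recover

# Expectation model: sales_t ~ const + price_t + price_{t-1} + sales_{t-1}
y = sales[2:]
X = np.column_stack([
    np.ones(n - 2),                      # intercept
    price[2:],                           # current price
    price[1:-1],                         # lagged price
    sales[1:-1],                         # lagged sales
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# Flag observations whose residual is far from what the model expects.
z = (residuals - residuals.mean()) / residuals.std()
print("flagged indices:", np.where(np.abs(z) > 3)[0] + 2)   # shift back to original indexing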
1,697
Simple algorithm for online outlier detection of a generic time series
For the case where one has to compute the outliers quickly, one could use the idea of Rob Hyndman and Mahito Sugiyama (https://github.com/BorgwardtLab/sampling-outlier-detection, R package spoutlier, function qsp) to compute the outliers as follows:

library(spoutlier)  # provides qsp(): outlier scores via random subsampling

rapidtsoutliers <- function(x, plot = FALSE, seed = 123)
{
  set.seed(seed)                         # qsp() subsamples, so fix the seed for reproducibility
  x <- as.numeric(x)
  qspscore <- qsp(x)                     # raw outlier score for every observation
  limit <- quantile(qspscore, probs = 0.95)
  score <- pmax(qspscore - limit, 0)     # keep only scores above the 95th percentile
  if (plot) {
    plot(x, type = "l")
    x2 <- rep(NA, length(x))
    x2[score > 0] <- x[score > 0]
    points(x2, pch = 19, col = "red")    # highlight the flagged points
    return(invisible(score))
  }
  return(score)
}
1,698
US Election results 2016: What went wrong with prediction models?
In short, polling is not always easy. This election may have been the hardest.

Any time we are trying to do statistical inference, a fundamental question is whether our sample is a good representation of the population of interest. A typical assumption required for many types of statistical inference is that our sample is a completely random sample from the population of interest (and often, we also need the samples to be independent). If these assumptions hold true, we typically have good measures of our uncertainty based on statistical theory.

But we definitely do not have these assumptions holding true with polls! We have exactly 0 samples from our population of interest: actual votes cast on election day. In this case, we cannot make any sort of valid inference without further, untestable assumptions about the data. Or at least, untestable until after election day.

Do we completely give up and say "50%-50%!"? Typically, no. We can try to make what we believe are reasonable assumptions about how the votes will be cast. For example, maybe we want to believe that polls are unbiased estimates of the election day votes, plus some unbiased temporal noise (i.e., evolving public opinion as time passes). I'm not an expert on polling methods, but I believe this is the type of model 538 uses. And in 2012, it worked pretty well, so those assumptions were probably pretty reasonable. Unfortunately, there's no real way of evaluating those assumptions outside of strictly qualitative reasoning. For more discussion on a similar topic, see the topic of non-ignorable missingness.

My theory for why polls did so poorly in 2016: the polls were not unbiased estimates of election day behavior. That is, I would guess that Trump supporters (and likely Brexit supporters as well) were much more distrustful of pollsters. Remember that Mr. Trump actively denounced polls. As such, I think Trump supporters were less likely to report their voting intentions to pollsters than supporters of his opponents. I would speculate that this caused an unforeseen heavy bias in the polls.

How could analysts have accounted for this when using the poll data? Based on the poll data alone, there is no real way to do this quantitatively: the poll data does not tell you anything about those who did not participate. However, one may be able to improve the polls qualitatively, by choosing more reasonable (but untestable) assumptions about the relation between polling data and election day behavior. This is non-trivial and the truly difficult part of being a good pollster (note: I am not a pollster). Also note that the results were very surprising to the pundits as well, so it's not like there were obvious signs that the assumptions were wildly off this time. Polling can be hard.
1,699
US Election results 2016: What went wrong with prediction models?
There are a number of sources of polling error:

You find some people hard to reach
This is corrected by doing demographic analysis, then correcting for your sampling bias. If your demographic analysis doesn't reflect the things that make people hard to reach, this correction does not repair the damage.

People lie
You can use historical rates at which people lie to pollsters to inform your model. As an example, historically people state they are going to vote third party far more than they actually do on election day. Your corrections can be wrong here. These lies can also mess up your other corrections; if they lie about voting in the last election, they may be counted as a likely voter even if they are not, for example.

Only the people who vote end up counting
Someone can have lots of support, but if their supporters don't show up on election day, it doesn't count. This is why we have registered voter, likely voter, etc. models. If these models are wrong, things don't work.

Polling costs money
Doing polls is expensive, and if you don't expect (say) Michigan to flip you might not poll it very often. This can lead to surprises where a state you polled 3 weeks before the election looks nothing like it does on election day.

People change their minds
Over minutes, hours, days, weeks or months, people change their minds. Polling about "what you would do now" doesn't help much if they change their minds before it counts. There are models that guess roughly the rate at which people change their minds based on historical polls.

Herding
If everyone else states that Hillary is +3 and you get a poll showing Hillary +11 or Donald +1, you might question it. You might do another pass and see if there is an analysis failure. You might even throw it out and do another poll. When you get a Hillary +2 or +4 poll, you might not do that. Massive outliers, even if the statistical model says they happen sometimes, can make you "look bad". A particularly crappy form of this happened on election day, where everyone who released a poll magically converged to the same value; they were probably outlier polls, but nobody wants to be the one who said (say) Hillary +11 the day before this election. Being wrong in a herd hurts you less.

Expected sampling error
If you have 1 million people and you ask 100 perfectly random people and half say "Apple" and half say "Orange", the expected error you'd get from sampling is +/- 10 or so, even if none of the above problems occur (see the worked calculation at the end of this answer). This last bit is what polls describe as their margin of error. Polls rarely describe what the above correction factors could introduce as error.

Nate Silver at 538 was one of the few polling aggregators that used conservative (cautious) means to handle the possibility of the above kinds of errors. He factored in the possibility of systemic correlated errors in the polling models. While other aggregators were predicting a 90%+ chance HC would be elected, Nate Silver was stating 70%, because the polls were within "normal polling error" of a Donald victory. This was a historical measure of model error, as opposed to raw statistical sampling error: what if the model and the corrections to the model were wrong?

People are still crunching the numbers. But preliminary results indicate a big part of it was turnout models. Donald supporters showed up to the polls in larger numbers, and Hillary supporters in lesser numbers, than the polling models (and exit polls!) indicated. Latinos voted more for Donald than expected. Blacks voted more for Donald than expected. (Most of both voted for Hillary.) White women voted more for Donald than expected (more of them voted for Donald than Hillary, which was not expected). Voter turnout was low in general. Democrats tend to win when there is high voter turnout, and Republicans when there is low.
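As a quick check on that "+/- 10 or so" figure, the standard 95% margin-of-error calculation for a proportion (the 1.96 multiplier is the usual normal-approximation choice, not something stated in the answer above) gives

$$\text{MOE} \approx 1.96\sqrt{\frac{p(1-p)}{n}} = 1.96\sqrt{\frac{0.5 \times 0.5}{100}} \approx 0.098,$$

i.e. roughly plus or minus 10 percentage points for a 50/50 split among 100 respondents.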
1,700
US Election results 2016: What went wrong with prediction models?
This was mentioned in the comments on the accepted answer (hat-tip to Mehrdad), but I think it should be emphasized: 538 actually did this quite well this cycle.*

538 is a polling aggregator that runs models against each state to try to predict the winner. Their final run gave Trump about a 30% chance of winning. That means if you ran three elections with data like this, you'd expect Team Red to win one of them. That isn't really that small a chance. It's certainly a big enough one that I took precautions (e.g., the Friday before, I asked for Wednesday the 9th off at work, considering the likelihood of it being close enough to be a late night).

One thing 538 will tell you if you hang out there is that if polls are off, there's a good chance they will all be off in the same direction. This is for a couple of reasons:

Likely voter models. Polls have to adjust for the types of voters who will actually show up on election day. We have historical models, but this was obviously not your typical pair of candidates, so predicting based on past data was always going to be a bit of a crapshoot.

Late election herding. Nobody wants to be the poll that blew the election the worst. So while they don't mind being an outlier in the middle of a campaign, at the end all the polls tend to tweak themselves so that they say the same thing. This is one of the things that was blamed for the polls being so egregiously off in Eric Cantor's surprise loss in 2014, and for the surprisingly close results of the 2014 Virginia Senate race as well.

* - 538 has now posted their own analysis. It mostly jibes with what is said above, but is worth reading if you want a lot more details.

Now a bit of personal speculation. I was actually skeptical of 538's final percentage chances for its last 3 days. The reason goes back to that second bullet above. Let's take a look at the history of their model for this election (from their website). (Sadly, the labels obscure it, but after this the curves diverged again for the last three days, out to more than a 70% chance for Clinton.)

The pattern we see here is repeated divergence followed by decay back toward a Trump lead. The Clinton bubbles were all caused by events. The first was the conventions (normally there's a couple of days' lag after an event before it starts showing up in the polling). The second seems to have been kicked off by the first debate, likely helped along by the TMZ tape. Then there's the third inflection point I've marked in the picture. It happened on November 5, 3 days before the election. What event caused this? A couple of days before that was another email flare-up, but that shouldn't have worked in Clinton's favor.

The best explanation I could come up with at the time was poll herding. It was only 3 days until the election, 2 days until the final polls, and pollsters would be starting to worry about their final results. The "conventional wisdom" this entire election (as evidenced by the betting models) was an easy Clinton win. So it seemed a distinct possibility that this wasn't a true inflection at all. If that were the case, the true curve from Nov 5 on was quite likely a continuation of the earlier one toward convergence. It would take a better mathematician than I am to estimate the curve forward without this suspicious final inflection point, but eyeballing it I think Nov 8 would have been near the crossover point. In front or behind depends on how much of that curve was actually real.

Now I can't say for sure this is what happened. There are other very plausible explanations (e.g., Trump got his voters out far better than any pollster expected), but it was my theory for what was going on at the time, and it certainly proved predictive.