source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
14,899 | I have built my model. Now I want to draw the network architecture diagram for my research paper. Example is shown below: | I recently found this online tool that produces publication-ready NN-architecture schematics. It is called NN-SVG and made by Alex Lenail . You can easily export these to use in, say, LaTeX for example. Here are a few examples: AlexNet style LeNet style and the good old Fully Connected style | {
"source": [
"https://datascience.stackexchange.com/questions/14899",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/20618/"
]
} |
15,135 | How could I randomly split a data matrix and the corresponding label vector into a X_train , X_test , X_val , y_train , y_test , y_val with scikit-learn? As far as I know, sklearn.model_selection.train_test_split is only capable of splitting into two not into three... | You could just use sklearn.model_selection.train_test_split twice. First to split to train, test and then split train again into validation and train. Something like this: X_train, X_test, y_train, y_test
= train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val
= train_test_split(X_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2 | {
"source": [
"https://datascience.stackexchange.com/questions/15135",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/21560/"
]
} |
15,148 | Suppose, we use the following code to generate scatter plots, function res = plot2features(tset, f1, f2)
% Plots tset samples on a 2-dimensional diagram
% using features f1 and f2
% tset - training set; the first column contains class label
% f1 - index of the first feature (mapped to horizontal axis)
% f2 - index of the second feature (mapped to vertical axis)
%
% res - matrix containing values of f1 and f2 features
% plotting parameters for different classes
% restriction to 8 classes seems reasonable
pattern(1,:) = 'ks';
pattern(2,:) = 'rd';
pattern(3,:) = 'mv';
pattern(4,:) = 'b^';
pattern(5,:) = 'gs';
pattern(6,:) = 'md';
pattern(7,:) = 'mv';
pattern(8,:) = 'g^';
res = tset(:, [f1, f2]);
% extraction of all unique labels used in tset
labels = unique(tset(:,1));
% create diagram and switch to content preserving mode
figure;
hold on;
for i=1:size(labels,1)
idx = tset(:,1) == labels(i);
plot(res(idx,1), res(idx,2), pattern(i,:));
end
hold off;
end The following is its usage, >> plot2features(train, 3,4) This code generates the following image before removing outliers, and the following image after removing outliers. I have the following questions: (1) What does the 1st image tell us about the existence of outliers? I can guess that the point plotted at a distant position is an outlier. But how can I find which row or column is generating the outliers? According to the 1st picture, the outlier is situated at the (27,375) coordinates, but in the actual data it is situated in the train(184:188,:) rows. So why is there that difference? (2) What do the color codes in the second picture represent? (3) Why are the two images so different? Why does removing only 4 rows bring such a radical difference? (4) How can we analyze the existence of outliers using histograms? Please supply me with any study material about outlier detection using histograms. Suppose we have the following training and test data in our hands to be used in testing a Bayes Classifier algorithm: Training data train.txt Test data test.txt The first column represents the class. The rest of the columns represent features. | You could just use sklearn.model_selection.train_test_split twice. First split into train and test, then split train again into validation and train. Something like this: X_train, X_test, y_train, y_test
= train_test_split(X, y, test_size=0.2, random_state=1)
X_train, X_val, y_train, y_val
= train_test_split(X_train, y_train, test_size=0.25, random_state=1) # 0.25 x 0.8 = 0.2 | {
"source": [
"https://datascience.stackexchange.com/questions/15148",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/-1/"
]
} |
15,630 | I am dealing with a highly unbalanced dataset so I used SMOTE to resample it. After SMOTE resampling, I split the resampled dataset into training/test sets using the training set to build a model and the test set to evaluate it. However, I am worried that some data points in the test set might actually be jittered from data points in the training set (i.e. the information is leaking from the training set into the test set) so the test set is not really a clean set for testing. Does anyone have any similar experience? Does the information really leak from the training set into the test set? Or does SMOTE actually take care of this and we do not need to worry about it? | When you use any sampling technique (specifically synthetic) you divide your data first and then apply synthetic sampling on the training data only. After you do the training, you use the test set (which contains only original samples) to evaluate. The risk if you use your strategy is having the original sample in training (testing) and the synthetic sample (that was created based on this original sample) in the test (training) set. | {
"source": [
"https://datascience.stackexchange.com/questions/15630",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/17310/"
]
} |
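A runnable sketch of the split-first workflow recommended in entry 15,630 above. The toy dataset and the imbalanced-learn (imblearn) calls are illustrative assumptions, not part of the original answer:

```python
# Split first, then oversample only the training fold, so the test set
# contains nothing but original samples.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

sm = SMOTE(random_state=42)
X_train_res, y_train_res = sm.fit_resample(X_train, y_train)  # synthetic points live only here

# model.fit(X_train_res, y_train_res)   # train on the resampled fold
# model.score(X_test, y_test)           # evaluate on untouched original samples
```

This keeps any SMOTE-generated neighbour of a training point out of the evaluation data, which is exactly the leakage the answer warns about.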
15,989 | I am trying out a multiclass classification setting with 3 classes. The class distribution is skewed with most of the data falling in 1 of the 3 classes. (class labels being 1,2,3, with 67.28% of the data falling in class label 1, 11.99% data in class 2, and remaining in class 3) I am training a multiclass classifier on this dataset and I am getting the following performance: Precision Recall F1-Score
Micro Average 0.731 0.731 0.731
Macro Average 0.679 0.529 0.565 I am not sure why all Micro average performances are equal and also Macro average performances are low compared to Micro average. | Micro- and macro-averages (for whatever metric) will compute slightly different things, and thus their interpretation differs. A macro-average will compute the metric independently for each class and then take the average (hence treating all classes equally), whereas a micro-average will aggregate the contributions of all classes to compute the average metric. In a multi-class classification setup, micro-average is preferable if you suspect there might be class imbalance (i.e you may have many more examples of one class than of other classes). To illustrate why, take for example precision $Pr=\frac{TP}{(TP+FP)}$. Let's imagine you have a One-vs-All (there is only one correct class output per example) multi-class classification system with four classes and the following numbers when tested: Class A: 1 TP and 1 FP Class B: 10 TP and 90 FP Class C: 1 TP and 1 FP Class D: 1 TP and 1 FP You can see easily that $Pr_A = Pr_C = Pr_D = 0.5$, whereas $Pr_B=0.1$. A macro-average will then compute: $Pr=\frac{0.5+0.1+0.5+0.5}{4}=0.4$ A micro-average will compute: $Pr=\frac{1+10+1+1}{2+100+2+2}=0.123$ These are quite different values for precision. Intuitively, in the macro-average the "good" precision (0.5) of classes A, C and D is contributing to maintain a "decent" overall precision (0.4). While this is technically true (across classes, the average precision is 0.4), it is a bit misleading, since a large number of examples are not properly classified. These examples predominantly correspond to class B, so they only contribute 1/4 towards the average in spite of constituting 94.3% of your test data. The micro-average will adequately capture this class imbalance, and bring the overall precision average down to 0.123 (more in line with the precision of the dominating class B (0.1)). For computational reasons, it may sometimes be more convenient to compute class averages and then macro-average them. If class imbalance is known to be an issue, there are several ways around it. One is to report not only the macro-average, but also its standard deviation (for 3 or more classes). Another is to compute a weighted macro-average, in which each class contribution to the average is weighted by the relative number of examples available for it. In the above scenario, we obtain: $Pr_{macro-mean}={0.25·0.5+0.25·0.1+0.25·0.5+0.25·0.5}=0.4$
$Pr_{macro-stdev}=0.173$ $Pr_{macro-weighted}={0.0189·0.5+0.943·0.1+0.0189·0.5+0.0189·0.5}={0.009+0.094+0.009+0.009}=0.123$ The large standard deviation (0.173) already tells us that the 0.4 average does not stem from a uniform precision among classes, but it might be just easier to compute the weighted macro-average, which in essence is another way of computing the micro-average. | {
"source": [
"https://datascience.stackexchange.com/questions/15989",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/13518/"
]
} |
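A scikit-learn reproduction of the worked example in entry 15,989 above (scikit-learn is an assumption here; the original answer computes the values by hand):

```python
# Rebuild the One-vs-All scenario: class A has 1 TP / 1 FP, class B has 10 TP / 90 FP,
# classes C and D each have 1 TP / 1 FP.
from sklearn.metrics import precision_score

y_pred = ['A'] * 2 + ['B'] * 100 + ['C'] * 2 + ['D'] * 2
y_true = (['A', 'B']                  # 1 TP and 1 FP for A
          + ['B'] * 10 + ['A'] * 90   # 10 TP and 90 FP for B
          + ['C', 'A']                # 1 TP and 1 FP for C
          + ['D', 'A'])               # 1 TP and 1 FP for D

print(precision_score(y_true, y_pred, average='macro'))  # 0.4
print(precision_score(y_true, y_pred, average='micro'))  # ~0.123
```

Note that sklearn's average='weighted' weights by the true class support, so it will not reproduce the prediction-count-weighted 0.123 figure from the answer.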
16,060 | I'm having trouble understanding the difference between equivariant to translation and invariant to translation . In the book Deep Learning . MIT Press, 2016 (I. Goodfellow, A. Courville, and Y. Bengio), one can find on the convolutional networks: [...] the particular form of parameter sharing causes the layer to have a property called equivariance to translation [...] pooling helps to make the representation become approximately invariant to small translations of the input Is there any difference between them or are the terms interchangeably used? | Equivariance and invariance are sometimes used interchangeably in common speech. They have ancient roots in maths and physics. As pointed out by @Xi'an , you can find previous uses (anterior to Convolutional Neural Networks) in the statistical literature, for instance on the notions of the invariant estimator and especially the Pitman estimator . However, I would like to mention that it would be better if both terms keep separate meaning , as the prefix " in- " in invariant is privative (meaning "no variance" at all), while " equi- " in equivariant refers to "varying in a similar or equivalent proportion". In other words, one in- does not vary, the other equi- does . Let us start from simple image features, and suppose that image $I$ has a unique maximum $m$ at spatial pixel location $(x_m,y_m)$ , which is here the main classification feature. In other words: an image and all its translations are "the same" .
An interesting property of classifiers is their ability to classify in the same manner some distorted versions $I'$ of $I$ , for instance translations by all vectors $(u,v)$ . The maximum value $m'$ of $I'$ is invariant : $m'=m$ , the value is the same, while its location will be at $(x'_m,y'_m)=(x_m-u,y_m-v)$ and is equivariant , meaning that it varies "equally" with the distortion . The precise formulations given (in mathematical terms) for equivariance depend on the class of objects and transformations one considers: translation, rotation, scale, shear, shift, etc. So I prefer here to focus on the notion that is most often used in practice (I accept the blame from a theoretical stand-point). Here, translations by vectors $(u,v)$ of the image (or some more generic actions) can be equipped with a structure of composition, like that of a group $G$ (here the group of translations). One specific $g$ denotes a specific element of the translation group ( translational symmetry ). A function or feature $f$ is invariant under the group of actions $G$ if for all images in a class, and for any $g$ , $$f(g(I)) = f(I)\,.$$ In other words: if you change the image by action $g$ , the values for feature or function $f$ are the same. It becomes equivariant if there exists another mathematical structure or action (often a group again) $G'$ that reflects
the
transformations (from $G$ ) in $I$ in a meaningful way . In other words, such that for each $g$ , you have some (unique?) $g' \in G'$ such that $$f(g(I)) = g'(f(I))\,.$$ In the above example on the group of translations, $g$ and $g'$ are the same (and hence $G'=G$ ): an integer translation of the image reflects as the exact same translation of the maximum location. This is sometimes referred to as "same-equivariance". Another common definition is: $$f(g(I)) = g(f(I))\,.$$ I however used potentially different $G$ and $G'$ because sometimes $f(.)$ and $g(.)$ do not lie in the same domain. This happens for instance in multivariate statistics (see e.g. Equivariance and invariance properties of multivariate quantile and related functions, and the role of standardisation ).
But here, the uniqueness of the mapping between $g$ and $g'$ allows one to get back to the original transformation $g$ . Often, people use the term invariance because the equivariance concept is unknown, or everybody else uses invariance, and equivariance would seem more pedantic. For the record, other related notions (esp. in maths and physics) are termed covariance , contravariance , differential invariance . In addition, translation-invariance, at least approximate, or in envelope, has been a quest for several signal and image processing tools. Notably, multi-rate (filter-banks) and multi-scale (wavelets or pyramids) transformations have been designed in the past 25 years, for instance under the hood of shift-invariant, cycle-spinning, stationary, complex, dual-tree wavelet transforms (for a review on 2D wavelets, A panorama on multiscale geometric representations ). The wavelets can absorb a few discrete scale variations. All these (approximate) invariances often come with the price of redundancy in the number of transformed coefficients.
But they are more likely to yield shift-invariant, or shift-equivariant features. | {
"source": [
"https://datascience.stackexchange.com/questions/16060",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/27607/"
]
} |
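A small numerical illustration of the max-value example used in entry 16,060, with a circular shift standing in for translation (the wrap-around at the borders is a simplification I am assuming):

```python
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((8, 8))
shift = (2, 3)                                   # the group element g: translate by (2, 3)
I_shifted = np.roll(I, shift, axis=(0, 1))

print(I.max() == I_shifted.max())                # True: the max value m is invariant

loc = np.unravel_index(I.argmax(), I.shape)
loc_shifted = np.unravel_index(I_shifted.argmax(), I_shifted.shape)
print(loc, loc_shifted)                          # location moves by the shift (modulo wrap-around): equivariant
```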
16,342 | I have 3 classes with this distribution: Class 0: 0.1169
Class 1: 0.7668
Class 2: 0.1163 And I am using xgboost for classification. I know that there is a parameter called scale_pos_weight . But how is it handled for 'multiclass' case, and how can I properly set it? | scale_pos_weight is used for binary classification as you stated. It is a more generalized solution to handle imbalanced classes. A good approach when assigning a value to scale_pos_weight is: sum(negative instances) / sum(positive instances) For your specific case, there is another option in order to weight individual data points and take their weights into account while working with the booster, and let the optimization happen regarding their weights so that each point is represented equally. You just need to simply use: xgboost.DMatrix(..., weight = *weight array for individual weights*) You can define the weights as you like and by doing so, you can even handle imbalances within classes as well as imbalances across different classes. | {
"source": [
"https://datascience.stackexchange.com/questions/16342",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/14434/"
]
} |
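A hedged sketch of the per-instance weight option mentioned in entry 16,342, using the xgboost Python package on toy data (the inverse-frequency weighting scheme is my illustration, not a prescription from the answer):

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification

# Toy 3-class data with roughly the class proportions quoted in the question.
X, y = make_classification(n_samples=2000, n_classes=3, n_informative=5,
                           weights=[0.12, 0.77, 0.11], random_state=0)

freq = np.bincount(y) / len(y)          # observed class frequencies
sample_weight = 1.0 / freq[y]           # rarer class -> larger per-row weight

dtrain = xgb.DMatrix(X, label=y, weight=sample_weight)
params = {'objective': 'multi:softprob', 'num_class': 3}
booster = xgb.train(params, dtrain, num_boost_round=100)
```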
16,463 | I am pretty new to neural networks, but I understand linear algebra and the mathematics of convolution pretty decently. I am trying to understand the example code I find in various places on the net for training a Keras convolutional NN with MNIST data to recognize digits. My expectation would be that when I create a convolutional layer, I would have to specify a filter or set of filters to apply to the input. But the three samples I have found all create a convolutional layer like this: model.add(Convolution2D(nb_filter = 32, nb_row = 3, nb_col = 3,
border_mode='valid',
input_shape=input_shape)) This seems to be applying a total of 32 3x3 filters to the images processed by the CNN. But what are those filters? How would I describe them mathematically? The keras documentation is no help. Thanks in advance, | By default, the filters $W$ are initialised randomly using the glorot_uniform method, which draws values from a uniform distribution with symmetric positive and negative bounds: $$W \sim \mathcal{U}\left(-\sqrt{\frac{6}{n_{in} + n_{out}}},\ \sqrt{\frac{6}{n_{in} + n_{out}}}\right),$$ where $n_{in}$ is the number of units that feed into this unit, and $n_{out}$ is the number of units this result is fed to. When you are using the network to make a prediction, these filters are applied at each layer of the network. That is, a discrete convolution is performed for each filter on each input image, and the results of these convolutions are fed to the next layer of convolutions (or fully connected layer, or whatever else you might have). During training, the values in the filters are optimised with backpropagation with respect to a loss function. For classification tasks such as recognising digits, usually the cross entropy loss is used.
Here's a visualisation of some filters learned in the first layer (top) and the filters learned in the second layer (bottom) of a convolutional network: As you can see, the first layer filters basically all act as simple edge detectors, while the second layer filters are more complex. As you go deeper into a network, the filters are able to detect more complex shapes. It gets a little tricky to visualise though, as these filters act on images that have been convolved many times already, and probably don't look much like the original natural image. | {
"source": [
"https://datascience.stackexchange.com/questions/16463",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28213/"
]
} |
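To make entry 16,463 concrete, here is a sketch (assuming the modern tf.keras API rather than the Keras 1.x syntax in the question) showing that the 32 filters are just a weight tensor you can pull out and inspect:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation='softmax'),
])

kernels, biases = model.layers[0].get_weights()
print(kernels.shape)   # (3, 3, 1, 32): one 3x3x1 kernel per filter, glorot_uniform at this point
# After model.fit(...), calling get_weights() again returns the learned filters.
```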
16,797 | For detection, a common way to determine if one object proposal was right is Intersection over Union (IoU, IU). This takes the set $A$ of proposed object pixels and the set of true object pixels $B$ and calculates: $$IoU(A, B) = \frac{A \cap B}{A \cup B}$$ Commonly, IoU > 0.5 means that it was a hit, otherwise it was a fail. For each class, one can calculate the True Positive ($TP(c)$): a proposal was made for class $c$ and there actually was an object of class $c$ False Positive ($FP(c)$): a proposal was made for class $c$, but there is no object of class $c$ Average Precision for class $c$: $\frac{\#TP(c)}{\#TP(c) + \#FP(c)}$ The mAP (mean average precision) = $\frac{1}{|classes|}\sum_{c \in classes} \frac{\#TP(c)}{\#TP(c) + \#FP(c)}$ If one wants better proposals, one does increase the IoU from 0.5 to a higher value (up to 1.0 which would be perfect). One can denote this with mAP@p, where $p \in (0, 1)$ is the IoU. But what does mAP@[.5:.95] (as found in this paper ) mean? | mAP@[.5:.95] (someone denoted mAP@[.5,.95] ) means average mAP over different IoU thresholds, from 0.5 to 0.95, step 0.05 (0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95). There is
an associated MS COCO challenge with a new evaluation
metric, that averages mAP over different IoU thresholds,
from 0.5 to 0.95 (written as “0.5:0.95”). [ Ref ] We evaluate the
mAP averaged for IoU ∈ [0.5 : 0.05 : 0.95] (COCO’s
standard metric, simply denoted as mAP@[.5, .95])
and [email protected] (PASCAL VOC’s metric). [ Ref ] To evaluate our final detections, we use the official
COCO API [20], which measures mAP averaged over IOU
thresholds in [0.5 : 0.05 : 0.95], amongst other metrics. [ Ref ] BTW, the source code of coco shows exactly what mAP@[.5:.95] is doing: self.iouThrs = np.linspace(.5, 0.95, np.round((0.95 - .5) / .05) + 1, endpoint=True) References cocoapi Inside-Outside Net: Detecting Objects in Context with Skip Pooling and
Recurrent Neural Networks Faster R-CNN: Towards Real-Time Object
Detection with Region Proposal Networks Speed/accuracy trade-offs for modern convolutional object detectors | {
"source": [
"https://datascience.stackexchange.com/questions/16797",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/8820/"
]
} |
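A minimal sketch of what the averaging in mAP@[.5:.95] amounts to; ap_at_threshold is a hypothetical helper and the AP values below are made up for illustration:

```python
import numpy as np

iou_thresholds = np.linspace(0.5, 0.95, 10)   # 0.50, 0.55, ..., 0.95
# aps = [ap_at_threshold(detections, ground_truth, t) for t in iou_thresholds]
aps = [0.72, 0.70, 0.67, 0.63, 0.58, 0.52, 0.45, 0.36, 0.24, 0.10]   # placeholder numbers

map_50_95 = float(np.mean(aps))               # this single number is mAP@[.5:.95]
print(iou_thresholds)
print(map_50_95)
```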
16,807 | I often read that in case of Deep Learning models the usual practice is to apply mini batches (generally a small one, 32/64) over several training epochs. I cannot really fathom the reason behind this. Unless I'm mistaken, the batch size is the number of training instances let seen by the model during a training iteration; and epoch is a full turn when each of the training instances have been seen by the model. If so, I cannot see the advantage of iterate over an almost insignificant subset of the training instances several times in contrast with applying a "max batch" by expose all the available training instances in each turn to the model (assuming, of course, enough the memory). What is the advantage of this approach? | The key advantage of using minibatch as opposed to the full dataset goes back to the fundamental idea of stochastic gradient descent 1 . In batch gradient descent, you compute the gradient over the entire dataset, averaging over potentially a vast amount of information. It takes lots of memory to do that. But the real handicap is the batch gradient trajectory land you in a bad spot (saddle point). In pure SGD, on the other hand, you update your parameters by adding (minus sign) the gradient computed on a single instance of the dataset. Since it's based on one random data point, it's very noisy and may go off in a direction far from the batch gradient. However, the noisiness is exactly what you want in non-convex optimization, because it helps you escape from saddle points or local minima(Theorem 6 in [2]). The disadvantage is it's terribly inefficient and you need to loop over the entire dataset many times to find a good solution. The minibatch methodology is a compromise that injects enough noise to each gradient update, while achieving a relative speedy convergence. 1 Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 (pp. 177-186). Physica-Verlag HD. [2] Ge, R., Huang, F., Jin, C., & Yuan, Y. (2015, June). Escaping From Saddle Points-Online Stochastic Gradient for Tensor Decomposition. In COLT (pp. 797-842). EDIT : I just saw this comment on Yann LeCun's facebook, which gives a fresh perspective on this question (sorry don't know how to link to fb.) Training with large minibatches is bad for your health.
More importantly, it's bad for your test error.
Friends dont let friends use minibatches larger than 32.
Let's face it: the only people have switched to minibatch sizes larger than one since 2012 is because GPUs are inefficient for batch sizes smaller than 32. That's a terrible reason. It just means our hardware sucks. He cited this paper which has just been posted on arXiv few days ago (Apr 2018), which is worth reading, Dominic Masters, Carlo Luschi, Revisiting Small Batch Training for Deep Neural Networks , arXiv:1804.07612v1 From the abstract, While the use of large mini-batches increases the available computational parallelism, small batch training has been shown to provide improved generalization performance ... The best performance has been consistently obtained for mini-batch sizes between m=2 and m=32, which contrasts with recent work advocating the use of mini-batch sizes in the thousands. | {
"source": [
"https://datascience.stackexchange.com/questions/16807",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/21560/"
]
} |
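A bare-bones numpy sketch of the epoch/mini-batch loop discussed in entry 16,807, fitting a linear least-squares model; nothing here is taken from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
lr, batch_size, epochs = 0.1, 32, 20
for epoch in range(epochs):                       # one epoch = one full pass over the data
    idx = rng.permutation(len(X))                 # reshuffle each epoch
    for start in range(0, len(X), batch_size):    # one noisy gradient step per mini-batch
        b = idx[start:start + batch_size]
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)
        w -= lr * grad

print(np.round(w, 2))   # close to true_w
```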
16,904 | I am trying to understand the key differences between GBM and XGBOOST. I tried to google it, but could not find any good answers explaining the differences between the two algorithms and why xgboost almost always performs better than GBM. What makes XGBOOST so fast? | Quote from the author of xgboost : Both xgboost and gbm follows the principle of gradient boosting. There are however, the difference in modeling details. Specifically, xgboost used a more regularized model formalization to control over-fitting, which gives it better performance. We have updated a comprehensive tutorial on introduction to the model, which you might want to take a look at. Introduction to Boosted Trees The name xgboost, though, actually refers to the engineering goal to push the limit of computations resources for boosted tree algorithms. Which is the reason why many people use xgboost. For model, it might be more suitable to be called as regularized gradient boosting. Edit: There's a detailed guide of xgboost which shows more differences. References https://www.quora.com/What-is-the-difference-between-the-R-gbm-gradient-boosting-machine-and-xgboost-extreme-gradient-boosting https://xgboost.readthedocs.io/en/latest/tutorials/model.html | {
"source": [
"https://datascience.stackexchange.com/questions/16904",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28834/"
]
} |
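The "more regularized model formalization" in the quote above shows up in practice as explicit penalty parameters; a hedged sklearn-API sketch (parameter values are arbitrary, not recommendations):

```python
from sklearn.datasets import make_classification
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, random_state=0)
model = XGBClassifier(
    n_estimators=200,
    learning_rate=0.1,
    max_depth=4,
    reg_lambda=1.0,   # L2 penalty on leaf weights
    reg_alpha=0.0,    # L1 penalty on leaf weights
    gamma=0.0,        # minimum loss reduction required to make a further split
)
model.fit(X, y)
```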
17,099 | have been reading up a bit on LSTM's and their use for time series and its been interesting but difficult at the same time. One thing I have had difficulties with understanding is the approach to adding additional features to what is already a list of time series features. Assuming you have your dataset up like this: t-3,t-2,t-1,Output Now lets say you know you have a feature that does affect the output but its not necessarily a time series feature, lets say its the weather outside. Is this something you can just add and the LSTM will be able to distinguish what is the time series aspect and what isnt? | For RNNs (e.g., LSTMs and GRUs), the layer input is a list of timesteps, and each timestep is a feature tensor. That means that you could have a input tensor like this (in Pythonic notation): # Input tensor to RNN
[
# Timestep 1
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
# Timestep 2
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
# Timestep 3
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
...
] So absolutely, you can have multiple features at each timestep. In my mind, weather is a time series feature: where I live, it happens to be a function of time. So it would be quite reasonable to encode weather information as one of your features in each timestep (with an appropriate encoding, like cloudy=0, sunny=1, etc.). If you have non-time-series data, then it doesn't really make sense to pass it through the LSTM, though. Maybe the LSTM will work anyway, but even if it does, it will probably come at the cost of higher loss / lower accuracy per training time. Alternatively, you can introduce this sort of "extra" information into your model outside of the LSTM by means of additional layers. You might have a data flow like this: TIME_SERIES_INPUT ------> LSTM -------\
*---> MERGE ---> [more processing]
AUXILIARY_INPUTS --> [do something] --/ So you would merge your auxiliary inputs into the LSTM outputs, and continue your network from there. Now your model is simply multi-input. For example, let's say that in your particular application, you only keep the last output of the LSTM output sequence. Let's say that it is a vector of length 10. You auxiliary input might be your encoded weather (so a scalar). Your merge layer could simply append the auxiliary weather information onto the end of the LSTM output vector to produce a single vector of length 11. But you don't need to just keep the last LSTM output timestep: if the LSTM outputted 100 timesteps, each with a 10-vector of features, you could still tack on your auxiliary weather information, resulting in 100 timesteps, each consisting of a vector of 11 datapoints. The Keras documentation on its functional API has a good overview of this. In other cases, as @horaceT points out, you may want to condition the LSTM on non-temporal data. For example, predict the weather tomorrow, given location. In this case, here are three suggestions, each with positive/negatives: Have the first timestep contain your conditioning data, since it will effectively "set" the internal/hidden state of your RNN. Frankly, I would not do this, for a bunch of reasons: your conditioning data needs to be the same shape as the rest of your features, makes it harder to create stateful RNNs (in terms of being really careful to track how you feed data into the network), the network may "forget" the conditioning data with enough time (e.g., long training sequences, or long prediction sequences), etc. Include the data as part of the temporal data itself. So each feature vector at a particular timestep includes "mostly" time-series data, but then has the conditioning data appended to the end of each feature vector. Will the network learn to recognize this? Probably, but even then, you are creating a harder learning task by polluting the sequence data with non-sequential information. So I would also discourage this. Probably the best approach would be to directly affect the hidden state of the RNN at time zero. This is the approach taken by Karpathy and Fei-Fei and by Vinyals et al . This is how it works: For each training sample, take your condition variables $\vec{x}$. Transform/reshape your condition variables with an affine transformation to get it into the right shape as the internal state of the RNN: $\vec{v} = \mathbf{W} \vec{x} + \vec{b}$ (these $\mathbf{W}$ and $\vec{b}$ are trainable weights). You can obtain it with a Dense layer in keras. For the very first timestep, add $\vec{v}$ to the hidden state of the RNN when calculating its value. This approach is the most "theoretically" correct, since it properly conditions the RNN on your non-temporal inputs, naturally solves the shape problem, and also avoids polluting your inputs timesteps with additional, non-temporal information. The downside is that this approach often requires graph-level control of your architecture, so if you are using a higher-level abstraction like Keras, you will find it hard to implement unless you add your own layer type. | {
"source": [
"https://datascience.stackexchange.com/questions/17099",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/26800/"
]
} |
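A compact sketch of the MERGE diagram in entry 17,099, written with the Keras functional API (assuming tf.keras; shapes and layer sizes are illustrative only):

```python
import tensorflow as tf
from tensorflow.keras import layers

ts_input = tf.keras.Input(shape=(100, 3), name='time_series')   # 100 timesteps, 3 features each
aux_input = tf.keras.Input(shape=(1,), name='weather')          # non-temporal scalar

x = layers.LSTM(10)(ts_input)                    # last LSTM output: a vector of length 10
merged = layers.Concatenate()([x, aux_input])    # tack the auxiliary scalar on: length 11
hidden = layers.Dense(16, activation='relu')(merged)
out = layers.Dense(1)(hidden)

model = tf.keras.Model(inputs=[ts_input, aux_input], outputs=out)
model.compile(optimizer='adam', loss='mse')
model.summary()
```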
17,114 | In the following hand made charts I show some value for years. In the first chart I've evenly spaced each year. On the second chart I've spaced them relativelly to their actual year value within time (i.e 2016 is closer to 2017 than 2010). Is there a terminology for the spacing of the second chart? Imagine building a software which would have a toggle control to switch the view from A to B. How would you call it? | For RNNs (e.g., LSTMs and GRUs), the layer input is a list of timesteps, and each timestep is a feature tensor. That means that you could have a input tensor like this (in Pythonic notation): # Input tensor to RNN
[
# Timestep 1
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
# Timestep 2
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
# Timestep 3
[ temperature_in_paris, value_of_nasdaq, unemployment_rate ],
...
] So absolutely, you can have multiple features at each timestep. In my mind, weather is a time series feature: where I live, it happens to be a function of time. So it would be quite reasonable to encode weather information as one of your features in each timestep (with an appropriate encoding, like cloudy=0, sunny=1, etc.). If you have non-time-series data, then it doesn't really make sense to pass it through the LSTM, though. Maybe the LSTM will work anyway, but even if it does, it will probably come at the cost of higher loss / lower accuracy per training time. Alternatively, you can introduce this sort of "extra" information into your model outside of the LSTM by means of additional layers. You might have a data flow like this: TIME_SERIES_INPUT ------> LSTM -------\
*---> MERGE ---> [more processing]
AUXILIARY_INPUTS --> [do something] --/ So you would merge your auxiliary inputs into the LSTM outputs, and continue your network from there. Now your model is simply multi-input. For example, let's say that in your particular application, you only keep the last output of the LSTM output sequence. Let's say that it is a vector of length 10. You auxiliary input might be your encoded weather (so a scalar). Your merge layer could simply append the auxiliary weather information onto the end of the LSTM output vector to produce a single vector of length 11. But you don't need to just keep the last LSTM output timestep: if the LSTM outputted 100 timesteps, each with a 10-vector of features, you could still tack on your auxiliary weather information, resulting in 100 timesteps, each consisting of a vector of 11 datapoints. The Keras documentation on its functional API has a good overview of this. In other cases, as @horaceT points out, you may want to condition the LSTM on non-temporal data. For example, predict the weather tomorrow, given location. In this case, here are three suggestions, each with positive/negatives: Have the first timestep contain your conditioning data, since it will effectively "set" the internal/hidden state of your RNN. Frankly, I would not do this, for a bunch of reasons: your conditioning data needs to be the same shape as the rest of your features, makes it harder to create stateful RNNs (in terms of being really careful to track how you feed data into the network), the network may "forget" the conditioning data with enough time (e.g., long training sequences, or long prediction sequences), etc. Include the data as part of the temporal data itself. So each feature vector at a particular timestep includes "mostly" time-series data, but then has the conditioning data appended to the end of each feature vector. Will the network learn to recognize this? Probably, but even then, you are creating a harder learning task by polluting the sequence data with non-sequential information. So I would also discourage this. Probably the best approach would be to directly affect the hidden state of the RNN at time zero. This is the approach taken by Karpathy and Fei-Fei and by Vinyals et al . This is how it works: For each training sample, take your condition variables $\vec{x}$. Transform/reshape your condition variables with an affine transformation to get it into the right shape as the internal state of the RNN: $\vec{v} = \mathbf{W} \vec{x} + \vec{b}$ (these $\mathbf{W}$ and $\vec{b}$ are trainable weights). You can obtain it with a Dense layer in keras. For the very first timestep, add $\vec{v}$ to the hidden state of the RNN when calculating its value. This approach is the most "theoretically" correct, since it properly conditions the RNN on your non-temporal inputs, naturally solves the shape problem, and also avoids polluting your inputs timesteps with additional, non-temporal information. The downside is that this approach often requires graph-level control of your architecture, so if you are using a higher-level abstraction like Keras, you will find it hard to implement unless you add your own layer type. | {
"source": [
"https://datascience.stackexchange.com/questions/17114",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/29265/"
]
} |
17,540 | I create a corr() df out of an original df. The corr() df came out 70 X 70 and it is impossible to visualize the heatmap... sns.heatmap(df) . If I try to display the corr = df.corr() , the table doesn't fit the screen and I can't see all the correlations. Is there a way to either print the entire df regardless of its size or to control the size of the heatmap? | I found out how to increase the size of my plot with the following code... plt.subplots(figsize=(20,15))
sns.heatmap(corr) | {
"source": [
"https://datascience.stackexchange.com/questions/17540",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/29897/"
]
} |
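The two lines in entry 17,540, made self-contained (the imports, the stand-in data and the colour options are my additions):

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.DataFrame(np.random.randn(200, 70))   # stand-in for your 70-column DataFrame
corr = df.corr()

fig, ax = plt.subplots(figsize=(20, 15))
sns.heatmap(corr, ax=ax, cmap='coolwarm', center=0)
plt.show()
```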
17,598 | I hope this question is suitable for this site... In Python, a class name is usually defined with a capital letter as its first character, for example class Vehicle:
... However, in machine learning field, often times train and test data are defined as X and Y - not x and y . For example, I'm now reading this tutorial on Keras , but it uses the X and Y as its variables: from sklearn import datasets
mnist = datasets.load_digits()
X = mnist.data
Y = mnist.target Why are these defined as capital letters? Is there any convention (at least in Python) among machine learning field that it is better to use the capital letter to define these variables? Or maybe do people distinguish the upper vs lower case variables in machine learning? In fact the same tutorial later distinguish these variables like the following: from sklearn.cross_validation import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, Y, train_size=0.7, random_state=0) | The X (and sometimes Y) variables are matrices. In some math notation, it is common practice to write vector variable names as lower case and matrix variable names as upper case. Often these are in bold or have other annotation, but that does not translate well to code. Either way, I believe that the practice has transferred from this notation. You may also notice in code, when the target variable is a single column of values, it is written y , so you have X, y Of course, this has no special semantic meaning in Python and you are free to ignore the convention. However, because it has become a convention, it may be worth maintaining if you share your code. | {
"source": [
"https://datascience.stackexchange.com/questions/17598",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/8432/"
]
} |
17,759 | Is it better to encode features like month and hour as factor or numeric in a machine learning model? On the one hand, I feel numeric encoding might be reasonable, because time is a forward progressing process (the fifth month is followed by the sixth month), but on the other hand I think categorical encoding might be more reasonable because of the cyclic nature of years and days ( the 12th month is followed by the first one). Is there a general solution or convention for this? | Have you considered adding the (sine, cosine) transformation of the time of day variable? This will ensure that the 0 and 23 hour for example are close to each other, thus allowing the cyclical nature of the variable to shine through. ( More Info ) | {
"source": [
"https://datascience.stackexchange.com/questions/17759",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/30238/"
]
} |
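A concrete version of the (sine, cosine) suggestion in entry 17,759 for an hour-of-day column; the pandas/numpy code is my illustration, not from the linked answer:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'hour': range(24)})
df['hour_sin'] = np.sin(2 * np.pi * df['hour'] / 24)
df['hour_cos'] = np.cos(2 * np.pi * df['hour'] / 24)

# Hours 23 and 0 now sit next to each other in (sin, cos) space:
print(df.loc[[23, 0], ['hour_sin', 'hour_cos']])
# For months, divide by 12 instead of 24 so December wraps around to January.
```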
17,769 | Suppose I have a 5*3 data frame in which the third column contains missing values 1 2 3
4 5 NaN
7 8 9
3 2 NaN
5 6 NaN I hope to generate values for the missing entries based on a rule: the first column multiplied by the second column 1 2 3
4 5 20 <--4*5
7 8 9
3 2 6 <-- 3*2
5 6 30 <-- 5*6 How can I do this with a data frame? Thanks. How do I add a condition to calculate the missing value like this? if 1st % 2 == 0 then 3rd = 1st * 2nd
else 3rd = 1st + 2nd 1 2 3
4 5 20 <-- 4*5 because 4%2==0
7 8 9
3 2 5 <-- 3+2 because 3%2==1
5 6 11 <-- 5+6 because 5%2==1 | Assuming the three columns of your dataframe are a , b and c . This is what you want: df['c'] = df.apply(
lambda row: row['a']*row['b'] if np.isnan(row['c']) else row['c'],
axis=1
) Full code: df = pd.DataFrame(
np.array([[1, 2, 3], [4, 5, np.nan], [7, 8, 9], [3, 2, np.nan], [5, 6, np.nan]]),
columns=['a', 'b', 'c']
)
df['c'] = df.apply(
lambda row: row['a']*row['b'] if np.isnan(row['c']) else row['c'],
axis=1
) | {
"source": [
"https://datascience.stackexchange.com/questions/17769",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/29911/"
]
} |
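The accepted snippet in entry 17,769 covers the first rule only; a sketch extending it to the question's second, conditional rule (even first column -> multiply, odd -> add):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.array([[1, 2, 3], [4, 5, np.nan], [7, 8, 9], [3, 2, np.nan], [5, 6, np.nan]]),
    columns=['a', 'b', 'c']
)

def fill_c(row):
    if np.isnan(row['c']):
        return row['a'] * row['b'] if row['a'] % 2 == 0 else row['a'] + row['b']
    return row['c']

df['c'] = df.apply(fill_c, axis=1)
print(df)   # rows with even 'a' get a*b (20), rows with odd 'a' get a+b (5, 11)

# A vectorized alternative to the row-wise apply:
# df['c'] = df['c'].where(df['c'].notna(),
#                         np.where(df['a'] % 2 == 0, df['a'] * df['b'], df['a'] + df['b']))
```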
17,839 | In the context of Machine Learning , I have seen the term Ground Truth used a lot. I have searched a lot and found the following definition in Wikipedia : In machine learning, the term "ground truth" refers to the accuracy of the training set's classification for supervised learning techniques. This is used in statistical models to prove or disprove research hypotheses. The term "ground truthing" refers to the process of gathering the proper objective (provable) data for this test. Compare with gold standard. Bayesian spam filtering is a common example of supervised learning. In this system, the algorithm is manually taught the differences between spam and non-spam. This depends on the ground truth of the messages used to train the algorithm – inaccuracies in the ground truth will correlate to inaccuracies in the resulting spam/non-spam verdicts. The point is that I really can not get what it means. Is that the label used for each data object or the target function which gives a label to each data object , or maybe something else? | The ground truth is what you measured for your target variable for the training and testing examples. Nearly all the time you can safely treat this the same as the label. In some cases it is not precisely the same as the label. For instance if you augment your data set, there is a subtle difference between the ground truth (your actual measurements) and how the augmented examples relate to the labels you have assigned. However, this distinction is not usually a problem. Ground truth can be wrong. It is a measurement, and there can be errors in it. In some ML scenarios it can also be a subjective measurement where it is difficult define an underlying objective truth - e.g. expert opinion or analysis, which you are hoping to automate. Any ML model you train will be limited by the quality of the ground truth used to train and test it, and that is part of the explanation on the Wikipedia quote. It is also why published articles about ML should include full descriptions of how the data was collected. | {
"source": [
"https://datascience.stackexchange.com/questions/17839",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28175/"
]
} |
17,850 | I'm assuming that each time someone trains a model and wants to tweak it/iterate, that they don't have to wait hours and hours for it to learn and then output. So my question is, how do people manage this workflow? | The ground truth is what you measured for your target variable for the training and testing examples. Nearly all the time you can safely treat this the same as the label. In some cases it is not precisely the same as the label. For instance if you augment your data set, there is a subtle difference between the ground truth (your actual measurements) and how the augmented examples relate to the labels you have assigned. However, this distinction is not usually a problem. Ground truth can be wrong. It is a measurement, and there can be errors in it. In some ML scenarios it can also be a subjective measurement where it is difficult define an underlying objective truth - e.g. expert opinion or analysis, which you are hoping to automate. Any ML model you train will be limited by the quality of the ground truth used to train and test it, and that is part of the explanation on the Wikipedia quote. It is also why published articles about ML should include full descriptions of how the data was collected. | {
"source": [
"https://datascience.stackexchange.com/questions/17850",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/30344/"
]
} |
17,854 | I am an R user and I am interested in learning/understanding how Hadoop actually works. For this I previously read about Hadoop but was not able to find a satisfactory answer for my question. Can R + Hadoop overcome R's memory constraints in any case? The answer might be clear as the accepted answer to this question implies but for me it isn't. To be more precise: Can I use R + Hadoop to fit a model to a really big data set at once. I mean that the entire data is needed for the computation without any independent sub processes which could be parallelized in some way? I do not see how this can work when using a computer cluster for the computation. In case it is possible: How does it work? | The ground truth is what you measured for your target variable for the training and testing examples. Nearly all the time you can safely treat this the same as the label. In some cases it is not precisely the same as the label. For instance if you augment your data set, there is a subtle difference between the ground truth (your actual measurements) and how the augmented examples relate to the labels you have assigned. However, this distinction is not usually a problem. Ground truth can be wrong. It is a measurement, and there can be errors in it. In some ML scenarios it can also be a subjective measurement where it is difficult define an underlying objective truth - e.g. expert opinion or analysis, which you are hoping to automate. Any ML model you train will be limited by the quality of the ground truth used to train and test it, and that is part of the explanation on the Wikipedia quote. It is also why published articles about ML should include full descriptions of how the data was collected. | {
"source": [
"https://datascience.stackexchange.com/questions/17854",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/30261/"
]
} |
18,339 | Consider a neural network: For a given set of data, we divide it into training, validation and test set. Suppose we do it in the classic 60:20:20 ratio, then we prevent overfitting by validating the network by checking it on validation set. Then what is the need to test it on the test set to check its performance? Won't the error on the test set be somewhat same as the validation set as for the network it is an unseen data just like the validation set and also both of them are same in number? Instead can't we increase the training set by merging the test set to it so that we have more training data and the network trains better and then use validation set to prevent overfitting?
Why don't we do this? | Let's assume that you are training a model whose performance depends on a set of hyperparameters. In the case of a neural network, these parameters may be for instance the learning rate or the number of training iterations. Given a choice of hyperparameter values, you use the training set to train the model. But, how do you set the values for the hyperparameters? That's what the validation set is for. You can use it to evaluate the performance of your model for different combinations of hyperparameter values (e.g. by means of a grid search process) and keep the best trained model. But, how does your selected model compare to other, different models? Is your neural network performing better than, let's say, a random forest trained with the same combination of training/test data? You cannot compare based on the validation set, because that validation set was part of the fitting of your model. You used it to select the hyperparameter values! The test set allows you to compare different models in an unbiased way, by basing your comparisons on data that were not used in any part of your training/hyperparameter selection process. | {
"source": [
"https://datascience.stackexchange.com/questions/18339",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/27916/"
]
} |
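A minimal sketch of the train/validation/test roles described in entry 18,339, using scikit-learn (the library choice, estimator and numbers are mine, not the answer's):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_trainval, X_test, y_trainval, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# GridSearchCV plays the role of the validation set: it picks hyperparameters
# using splits of X_trainval only.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={'n_estimators': [50, 100], 'max_depth': [3, None]},
                      cv=5)
search.fit(X_trainval, y_trainval)

# The untouched test set is what allows an unbiased comparison between final models.
print(search.best_params_, search.score(X_test, y_test))
```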
18,414 | When training neural networks, one hyperparameter is the size of a minibatch. Common choices are 32, 64, and 128 elements per mini batch. Are there any rules/guidelines on how big a mini-batch should be? Or any publications which investigate the effect on the training? | In On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima there are a couple of intersting statements: It has been observed in practice that
when using a larger batch there is a degradation in the quality of the model, as measured by its ability to generalize [...] large-batch methods tend to converge to sharp minimizers of the
training and testing functions—and as is well known, sharp minima lead to poorer
generalization. In contrast, small-batch methods consistently converge to flat minimizers, and our experiments support a commonly held view that this is due to the inherent noise in the gradient estimation. From my master's thesis : Hence the choice of the mini-batch size influences: Training time until convergence : There seems to be a sweet spot. If the batch size is very small (e.g. 8), this time goes up. If the batch size is huge, it is also higher than the minimum. Training time per epoch : Bigger computes faster (is efficient) Resulting model quality : The lower the better due to better generalization (?) It is important to note hyper-parameter interactions : Batch size may interact with other hyper-parameters, most notably learning rate. In some experiments this interaction may make it hard to isolate the effect of batch size alone on model quality. Another strong interaction is with early stopping for regularisation. See also this nice answer / related question Efficient Mini-batch Training for Stochastic Optimization this RNN study | {
"source": [
"https://datascience.stackexchange.com/questions/18414",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/8820/"
]
} |
18,583 | I thought both, PReLU and Leaky ReLU are $$f(x) = \max(x, \alpha x) \qquad \text{ with } \alpha \in (0, 1)$$ Keras, however, has both functions in the docs . Leaky ReLU Source of LeakyReLU : return K.relu(inputs, alpha=self.alpha) Hence (see relu code ) $$f_1(x) = \max(0, x) - \alpha \max(0, -x)$$ PReLU Source of PReLU : def call(self, inputs, mask=None):
pos = K.relu(inputs)
if K.backend() == 'theano':
neg = (K.pattern_broadcast(self.alpha, self.param_broadcast) *
(inputs - K.abs(inputs)) * 0.5)
else:
neg = -self.alpha * K.relu(-inputs)
return pos + neg Hence $$f_2(x) = \max(0, x) - \alpha \max(0, -x)$$ Question Did I get something wrong? Aren't $f_1$ and $f_2$ equivalent to $f$ (assuming $\alpha \in (0, 1)$ ?) | Straight from wikipedia : Leaky ReLU s allow a small, non-zero gradient when the unit is not active. Parametric ReLU s take this idea further by making the coefficient of leakage into a parameter that is learned along with the other neural network parameters. | {
"source": [
"https://datascience.stackexchange.com/questions/18583",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/8820/"
]
} |
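A pure-numpy check of the equivalence that the 18,583 question asks about, i.e. that for alpha in (0, 1) the two formulas define the same function (no Keras required):

```python
import numpy as np

alpha = 0.3
x = np.linspace(-5, 5, 101)

f = np.maximum(x, alpha * x)                        # the definition in the question
f1 = np.maximum(0, x) - alpha * np.maximum(0, -x)   # what the Keras source computes

print(np.allclose(f, f1))   # True: same function; PReLU simply learns alpha instead of fixing it
```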
18,903 | I'm trying to understand which is better (more accurate, especially in classification problems) I've been searching articles comparing LightGBM and XGBoost but found only two: https://medium.com/implodinggradients/benchmarking-lightgbm-how-fast-is-lightgbm-vs-xgboost-15d224568031 - which is only about speed but not accuracy. https://github.com/Microsoft/LightGBM/wiki/Experiments - which is from the authors of LightGBM and no surprise LightGBM wins there. In my tests I get pretty the same AUC for both algorithms, but LightGBM runs form 2 to 5 times faster. If LGBM is so cool, why don't I hear so much about it here and on Kaggle :) | LightGBM is a great implementation that is similar to XGBoost but varies in a few specific ways, especially in how it creates the trees. It offers some different parameters but most of them are very similar to their XGBoost counterparts. If you use the same parameters, you almost always get a very close score. In most cases, the training will be 2-10 times faster though. Why don't more people use it then? XGBoost has been around longer and is already installed on many machines. LightGBM is rather new and didn't have a Python wrapper at first. The current version is easier to install and use so no obstacles here. Many of the more advanced users on Kaggle and similar sites already use LightGBM and for each new competition, it gets more and more coverage. Still, the starter scripts are often based around XGBoost as people just reuse their old code and adjust a few parameters. I'm sure this will increase once there are a few more tutorials and guides on how to use it (most of the non-ScikitLearn guides currently focus on XGBoost or neural networks). | {
"source": [
"https://datascience.stackexchange.com/questions/18903",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/32148/"
]
} |
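A side-by-side sketch of the two libraries on toy data, in the spirit of the 18,903 answer's point that comparable parameters give comparable scores (both packages, the data and the parameter values are assumptions for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier

X, y = make_classification(n_samples=2000, n_features=40, random_state=0)

xgb_model = XGBClassifier(n_estimators=300, learning_rate=0.05, max_depth=6)
lgb_model = LGBMClassifier(n_estimators=300, learning_rate=0.05, num_leaves=63)

print('xgboost :', cross_val_score(xgb_model, X, y, cv=3, scoring='roc_auc').mean())
print('lightgbm:', cross_val_score(lgb_model, X, y, cv=3, scoring='roc_auc').mean())
```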
18,904 | I'm trying to create a contour map from two variables which store some temperature values and a third variable which is the time stamp.
I used this notebook as a tutorial https://plot.ly/pandas/contour-plots/ I'm not able to convert the pandas dataframe created, into a 1d array. And the kde_scipy doesn't work with a nd-array. I tried converting the dataframe into a 1d array using .as_matrix() but this is the error I am receiving. Degrees of freedom <= 0 for slice How can I convert this CSV file (with 3 columns of data) imported as a dataframe into individual columns of data? Or can I directly import each column of data into a 1d array and use it in the function kde_scipy? | LightGBM is a great implementation that is similar to XGBoost but varies in a few specific ways, especially in how it creates the trees. It offers some different parameters but most of them are very similar to their XGBoost counterparts. If you use the same parameters, you almost always get a very close score. In most cases, the training will be 2-10 times faster though. Why don't more people use it then? XGBoost has been around longer and is already installed on many machines. LightGBM is rather new and didn't have a Python wrapper at first. The current version is easier to install and use so no obstacles here. Many of the more advanced users on Kaggle and similar sites already use LightGBM and for each new competition, it gets more and more coverage. Still, the starter scripts are often based around XGBoost as people just reuse their old code and adjust a few parameters. I'm sure this will increase once there are a few more tutorials and guides on how to use it (most of the non-ScikitLearn guides currently focus on XGBoost or neural networks). | {
"source": [
"https://datascience.stackexchange.com/questions/18904",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/29509/"
]
} |
19,220 | I've seen discussions about the 'overhead' of a GPU, and that for 'small' networks, it may actually be faster to train on a CPU (or network of CPUs) than a GPU. What is meant by 'small'? For example, would a single-layer MLP with 100 hidden units be 'small'? Does our definition of 'small' change for recurrent architectures? Are there any other criteria that should be considered when deciding whether to train on CPU or GPU? EDIT 1: I just found a blog post (possibly outdated? It's from 2014): "...Most network card[s] only work with memory that is registered with the CPU and so the GPU to GPU transfer between two nodes would be like this: GPU 1 to CPU 1 to Network Card 1 to Network Card 2 to CPU 2 to GPU 2. What this means is, if one chooses a slow network card then there might be no speedups over a single computer. Even with fast network cards, if the cluster is large, one does not even get speedups from GPUs when compared to CPUs as the GPUs just work too fast for the network cards to keep up with them. This is the reason why many big companies like Google and Microsoft are using CPU rather than GPU clusters to train their big neural networks. " So at some point, according to this post, it could have been faster to use CPUs. Is this still the case? EDIT 2: Yes, that blog post may very well be outdated because: Now it seems that GPUs within a node are connected via PCIe bus, so communication can happen at about 6GiB/s. (For example: https://www.youtube.com/watch?v=el1iSlP1uOs , about 35 minutes in). The speaker implies that this is faster than going from GPU1 to CPU to GPU2. It would mean the network card is no longer the bottleneck. | Unlike some of the other answers, I would highly advice against always training on GPUs without any second thought. This is driven by the usage of deep learning methods on images and texts, where the data is very rich (e.g. a lot of pixels = a lot of variables) and the model similarly has many millions of parameters. For other domains, this might not be the case. What is meant by 'small'? For example, would a single-layer MLP with 100 hidden units be 'small'? Yes, that is definitely very small by modern standards. Unless you have a GPU suited perfectly for training (e.g. NVIDIA 1080 or NVIDIA Titan), I wouldn't be surprised to find that your CPU was faster. Note that the complexity of your neural network also depends on your number of input features, not just the number of units in your hidden layer. If your hidden layer has 100 units and each observation in your dataset has 4 input features, then your network is tiny (~400 parameters). If each observation instead has 1M input features as in some medical/biotech contexts, then your network is pretty big in terms of number of parameters. For the remainder of my answer I'm assuming you have quite few input features pr. observation. One good example I've found of comparing CPU vs. GPU performance was when I trained a poker bot using reinforcement learning. For reinforcement learning you often don't want that many layers in your neural network and we found that we only needed a few layers with few parameters. Moreover, the number of input features was quite low. Initially I trained on a GPU (NVIDIA Titan), but it was taking a long time as reinforcement learning requires a lot of iterations. Luckily, I found that training on my CPU instead made my training go 10x as fast! This is just to say that CPU's can sometimes be better for training. 
Are there any other criteria that should be considered when deciding whether to train on CPU or GPU? It's important to note that while on a GPU you will always want to fill up the entire GPU memory by increasing your batch size, that is not the case on the CPU. On the CPU an increase in batch size will increase the time per batch. Therefore, if it's important for you to have a very large batch size (e.g. due to a very noisy signal), it can be beneficial to use a GPU. I haven't experienced this in practice though, and normally small batch sizes are preferred. | {
"source": [
"https://datascience.stackexchange.com/questions/19220",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/31434/"
]
} |
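A quick way to sanity-check this advice on your own hardware is to time a short training run of a small network on both devices. The sketch below is only an illustration and assumes TensorFlow 2.x; the layer sizes, batch size and amount of synthetic data are arbitrary choices, not values taken from the answer above.
import time
import numpy as np
import tensorflow as tf

def time_training(device_name, steps=200, batch=64, in_dim=4, hidden=100):
    # Tiny MLP, similar in spirit to the "small" networks discussed above
    with tf.device(device_name):
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(hidden, activation="relu", input_shape=(in_dim,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="sgd", loss="mse")
        x = np.random.randn(batch * steps, in_dim).astype("float32")
        y = np.random.randn(batch * steps, 1).astype("float32")
        start = time.time()
        model.fit(x, y, batch_size=batch, epochs=1, verbose=0)
        return time.time() - start

print("CPU:", time_training("/CPU:0"))
if tf.config.list_physical_devices("GPU"):
    print("GPU:", time_training("/GPU:0"))
For networks this small, per-batch launch and transfer overhead on the GPU often dominates the actual compute, which is exactly the effect described above.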
20,071 | I tried to load fastText pretrained model from here Fasttext model . I am using wiki.simple.en from gensim.models.keyedvectors import KeyedVectors
word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True) But, it shows the following errors Traceback (most recent call last):
File "nltk_check.py", line 28, in <module>
word_vectors = KeyedVectors.load_word2vec_format('wiki.simple.bin', binary=True)
File "P:\major_project\venv\lib\sitepackages\gensim\models\keyedvectors.py",line 206, in load_word2vec_format
header = utils.to_unicode(fin.readline(), encoding=encoding)
File "P:\major_project\venv\lib\site-packages\gensim\utils.py", line 235, in any2unicode
return unicode(text, encoding, errors=errors)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xba in position 0: invalid start byte Question 1 How do I load fasttext model with Gensim? Question 2 Also, after loading the model, I want to find the similarity between two words model.find_similarity('teacher', 'teaches')
# Something like this
Output : 0.99 How do I do this? | Here's the link for the methods available for fasttext implementation in gensim fasttext.py from gensim.models.wrappers import FastText
model = FastText.load_fasttext_format('wiki.simple')
print(model.most_similar('teacher'))
# Output = [('headteacher', 0.8075869083404541), ('schoolteacher', 0.7955552339553833), ('teachers', 0.733420729637146), ('teaches', 0.6839243173599243), ('meacher', 0.6825737357139587), ('teach', 0.6285147070884705), ('taught', 0.6244685649871826), ('teaching', 0.6199781894683838), ('schoolmaster', 0.6037642955780029), ('lessons', 0.5812176465988159)]
print(model.similarity('teacher', 'teaches'))
# Output = 0.683924396754 | {
"source": [
"https://datascience.stackexchange.com/questions/20071",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/33970/"
]
} |
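A note in case you are on a newer gensim release: the gensim.models.wrappers module used above was removed in gensim 4.x. To the best of my knowledge the native loader below is the replacement, but treat the exact function and attribute names as assumptions to check against your installed version.
from gensim.models.fasttext import load_facebook_model

# Load the .bin distributed by fastText (keeps the subword information)
model = load_facebook_model('wiki.simple.bin')

print(model.wv.most_similar('teacher'))
print(model.wv.similarity('teacher', 'teaches'))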
20,074 | I am a college student (rising senior) and became interested in Natural Language Processing last semester. I decided to focus on studying this area this summer and become skilled in it. I wanted to get some advice for studying this particular subject. Right now, I am taking Andrew Ng's Machine Learning course on Coursera to get a sense of how Machine Learning works. After finishing this course, I am planning to take Stanford's CS224n NLP course on Youtube and do its class activities. I am assuming AWS and TensorFlow are also important since they are included as topics of CS224n. I want to know if my summer plan sounds reasonable. If not, could you please give me advice on how to make a better plan? If this sounds reasonable, it would be great if you could add more or specify which part is particularly important in these areas. | Here's the link for the methods available for fasttext implementation in gensim fasttext.py from gensim.models.wrappers import FastText
model = FastText.load_fasttext_format('wiki.simple')
print(model.most_similar('teacher'))
# Output = [('headteacher', 0.8075869083404541), ('schoolteacher', 0.7955552339553833), ('teachers', 0.733420729637146), ('teaches', 0.6839243173599243), ('meacher', 0.6825737357139587), ('teach', 0.6285147070884705), ('taught', 0.6244685649871826), ('teaching', 0.6199781894683838), ('schoolmaster', 0.6037642955780029), ('lessons', 0.5812176465988159)]
print(model.similarity('teacher', 'teaches'))
# Output = 0.683924396754 | {
"source": [
"https://datascience.stackexchange.com/questions/20074",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/34037/"
]
} |
20,179 | While training models in machine learning, why is it sometimes advantageous to keep the batch size to a power of 2? I thought it would be best to use the largest size that fits in your GPU memory / RAM. This answer claims that for some packages, a power of 2 is better as a batch size. Can someone provide a detailed explanation / link to a detailed explanation for this? Is this true for all optimisation algorithms (gradient descent, backpropagation, etc) or only some of them? | This is a problem of alignment of the virtual processors (VP) onto the physical processors (PP) of the GPU. Since the number of PP is often a power of 2, using a number of VP different from a power of 2 leads to poor performance. You can see the mapping of the VP onto the PP as a pile of slices, each of size equal to the number of PP. Say you've got 16 PP. You can map 16 VP onto them: 1 VP is mapped onto 1 PP. You can map 32 VP onto them: 2 slices of 16 VP, and 1 PP will be responsible for 2 VP. Etc.
During execution, each PP will execute the job of the 1st VP it is responsible for, then the job of the 2nd VP, etc. If you use 17 VP, each PP will execute the job of its 1st VP, then 1 PP will execute the job of the 17th AND the other ones will do nothing (explained below). This is due to the SIMD paradigm (called vector processing in the 70s) used by GPUs. This is often called Data Parallelism: all the PP do the same thing at the same time but on different data. See here. More precisely, in the example with 17 VP, once the job of the 1st slice is done (by all the PPs doing the job of their 1st VP), all the PP will do the same job (the 2nd VP), but only one has any data to work on. This has nothing to do with learning; it is purely a programming matter. | {
"source": [
"https://datascience.stackexchange.com/questions/20179",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/27616/"
]
} |
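To make the slicing argument above concrete, here is a small back-of-the-envelope calculation (plain Python) of how many slices a given number of virtual processors needs and how busy the last slice is; the 16 physical processors are just the example figure used in the answer.
import math

def occupancy(num_vp, num_pp=16):
    # Number of passes ("slices") needed, and utilisation of the last pass
    slices = math.ceil(num_vp / num_pp)
    used_in_last = num_vp - (slices - 1) * num_pp
    return slices, used_in_last / num_pp

for vp in (16, 17, 32, 33, 64):
    s, u = occupancy(vp)
    print(f"{vp} VP -> {s} slice(s), last slice {u:.0%} utilised")
With 17 VP you pay for a whole extra slice in which only 1 of the 16 PP has any work, which is the poor-performance case described above.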
20,199 | Fairly new to Python but building out my first RF model based on some classification data. I've converted all of the labels into int64 numerical data and loaded into X and Y as a numpy array, but I am hitting an error when I am trying to train the models. Here is what my arrays look like: >>> X = np.array([[df.tran_cityname, df.tran_signupos, df.tran_signupchannel, df.tran_vmake, df.tran_vmodel, df.tran_vyear]])
>>> Y = np.array(df['completed_trip_status'].values.tolist())
>>> X
array([[[ 1, 1, 2, 3, 1, 1, 1, 1, 1, 3, 1,
3, 1, 1, 1, 1, 2, 1, 3, 1, 3, 3,
2, 3, 3, 1, 1, 1, 1],
[ 0, 5, 5, 1, 1, 1, 2, 2, 0, 2, 2,
3, 1, 2, 5, 5, 2, 1, 2, 2, 2, 2,
2, 4, 3, 5, 1, 0, 1],
[ 2, 2, 1, 3, 3, 3, 2, 3, 3, 2, 3,
2, 3, 2, 2, 3, 2, 2, 1, 1, 2, 1,
2, 2, 1, 2, 3, 1, 1],
[ 0, 0, 0, 42, 17, 8, 42, 0, 0, 0, 22,
0, 22, 0, 0, 42, 0, 0, 0, 0, 11, 0,
0, 0, 0, 0, 28, 17, 18],
[ 0, 0, 0, 70, 291, 88, 234, 0, 0, 0, 222,
0, 222, 0, 0, 234, 0, 0, 0, 0, 89, 0,
0, 0, 0, 0, 40, 291, 131],
[ 0, 0, 0, 2016, 2016, 2006, 2014, 0, 0, 0, 2015,
0, 2015, 0, 0, 2015, 0, 0, 0, 0, 2015, 0,
0, 0, 0, 0, 2016, 2016, 2010]]])
>>> Y
array(['NO', 'NO', 'NO', 'YES', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO',
'NO', 'YES', 'NO', 'NO', 'YES', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO',
'NO', 'NO', 'NO', 'NO', 'NO', 'NO', 'NO'],
dtype='|S3')
>>> X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3) Traceback (most recent call last): File "<stdin>", line 1, in <module>
File "/Library/Python/2.7/site-packages/sklearn/cross_validation.py", line 2039, in train_test_split
arrays = indexable(*arrays)
File "/Library/Python/2.7/site-packages/sklearn/utils/validation.py", line
206, in indexable
check_consistent_length(*result)
File "/Library/Python/2.7/site-packages/sklearn/utils/validation.py", line
181, in check_consistent_length
" samples: %r" % [int(l) for l in lengths]) ValueError: Found input variables with inconsistent numbers of samples: [1, 29] | You are running into that error because your X and Y don't have the same length (which is what train_test_split requires), i.e., X.shape[0] != Y.shape[0] . Given your current code: >>> X.shape
(1, 6, 29)
>>> Y.shape
(29,) To fix this error, either remove the extra list from inside of np.array() when defining X, or remove the extra dimension afterwards with the following command: X = X.reshape(X.shape[1:]). Now the shape of X will be (6, 29). Then transpose X by running X = X.transpose() to get an equal number of samples in X and Y. Now the shape of X will be (29, 6) and the shape of Y will be (29,). | {
"source": [
"https://datascience.stackexchange.com/questions/20199",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/34253/"
]
} |
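Putting the two fixes together, a corrected version of the failing snippet might look like this. It assumes df is the DataFrame from the question and uses the current sklearn.model_selection module; building X without the extra outer list and transposing gives the expected (n_samples, n_features) shape.
import numpy as np
from sklearn.model_selection import train_test_split

# Stack the feature columns, then transpose so that rows are samples: shape (29, 6)
X = np.array([df.tran_cityname, df.tran_signupos, df.tran_signupchannel,
              df.tran_vmake, df.tran_vmodel, df.tran_vyear]).transpose()
Y = np.array(df['completed_trip_status'].values.tolist())   # shape (29,)

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.3)
print(X_train.shape, X_test.shape, Y_train.shape, Y_test.shape)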
20,296 | Suppose I build a neural network for classification. The last layer is a dense layer with Softmax activation. I have five different classes to classify. Suppose for a single training example, the true label is [1 0 0 0 0] while the predictions be [0.1 0.5 0.1 0.1 0.2] . How would I calculate the cross entropy loss for this example? | The cross entropy formula takes in two distributions, $p(x)$ , the true distribution, and $q(x)$ , the estimated distribution, defined over the discrete variable $x$ and is given by $$H(p,q) = -\sum_{\forall x} p(x) \log(q(x))$$ For a neural network, the calculation is independent of the following: What kind of layer was used. What kind of activation was used - although many activations will not be compatible with the calculation because their outputs are not interpretable as probabilities (i.e., their outputs are negative, greater than 1, or do not sum to 1). Softmax is often used for multiclass classification because it guarantees a well-behaved probability distribution function. For a neural network, you will usually see the equation written in a form where $\mathbf{y}$ is the ground truth vector and $\mathbf{\hat{y}}$ (or some other value taken direct from the last layer output) is the estimate. For a single example, it would look like this: $$L = - \mathbf{y} \cdot \log(\mathbf{\hat{y}})$$ where $\cdot$ is the inner product. Your example ground truth $\mathbf{y}$ gives all probability to the first value, and the other values are zero, so we can ignore them, and just use the matching term from your estimates $\mathbf{\hat{y}}$ $L = -(1\times log(0.1) + 0 \times \log(0.5) + ...)$ $L = - log(0.1) \approx 2.303$ An important point from comments That means, the loss would be same no matter if the predictions are $[0.1, 0.5, 0.1, 0.1, 0.2]$ or $[0.1, 0.6, 0.1, 0.1, 0.1]$ ? Yes, this is a key feature of multiclass logloss, it rewards/penalises probabilities of correct classes only. The value is independent of how the remaining probability is split between incorrect classes. You will often see this equation averaged over all examples as a cost function. It is not always strictly adhered to in descriptions, but usually a loss function is lower level and describes how a single instance or component determines an error value, whilst a cost function is higher level and describes how a complete system is evaluated for optimisation. A cost function based on multiclass log loss for data set of size $N$ might look like this: $$J = - \frac{1}{N}\left(\sum_{i=1}^{N} \mathbf{y_i} \cdot \log(\mathbf{\hat{y}_i})\right)$$ Many implementations will require your ground truth values to be one-hot encoded (with a single true class), because that allows for some extra optimisation. However, in principle the cross entropy loss can be calculated - and optimised - when this is not the case. | {
"source": [
"https://datascience.stackexchange.com/questions/20296",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/15412/"
]
} |
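A quick numeric check of the worked example above (ground truth [1, 0, 0, 0, 0], prediction [0.1, 0.5, 0.1, 0.1, 0.2]) using NumPy:
import numpy as np

y_true = np.array([1, 0, 0, 0, 0])
y_pred = np.array([0.1, 0.5, 0.1, 0.1, 0.2])

# Cross entropy: -sum(y * log(y_hat)); only the true-class term survives here
loss = -np.sum(y_true * np.log(y_pred))
print(loss)   # ~2.3026, i.e. -log(0.1)

# Same loss when the remaining probability mass is split differently
y_pred_2 = np.array([0.1, 0.6, 0.1, 0.1, 0.1])
print(-np.sum(y_true * np.log(y_pred_2)))   # also ~2.3026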
20,535 | I've been reading Google's DeepMind Atari paper and I'm trying to understand the concept of "experience replay". Experience replay comes up in a lot of other reinforcement learning papers (particularly, the AlphaGo paper), so I want to understand how it works. Below are some excerpts. First, we used a biologically inspired mechanism termed experience replay that randomizes over the data, thereby removing correlations in the observation sequence and smoothing over changes in the data distribution. The paper then elaborates as follows: While other stable methods exist for training neural networks in the reinforcement learning setting, such as neural fitted Q-iteration, these
methods involve the repeated training of networks de novo hundreds
of iterations. Consequently, these methods, unlike our algorithm, are
too inefficient to be used successfully with large neural networks. We
parameterize an approximate value function $Q(s, a; \theta_i)$ using the deep
convolutional neural network shown in Fig. 1, in which $\theta_i$ are the parameters (that is, weights) of the Q-network at iteration $i$ . To perform
experience replay, we store the agent's experiences $e_t = (s_t, a_t, r_t, s_{t+1})$ at each time-step $t$ in a data set $D_t = \{e_1, \dots, e_t \}$ . During learning, we apply Q-learning updates, on samples (or mini-batches) of experience $(s, a, r, s') \sim U(D)$ , drawn uniformly at random from the pool of stored samples. The Q-learning update at iteration $i$ uses the following loss
function: $$
L_i(\theta_i) = \mathbb{E}_{(s, a, r, s') \sim U(D)} \left[ \left(r + \gamma \max_{a'} Q(s', a'; \theta_i^-) - Q(s, a; \theta_i)\right)^2 \right]
$$ What is experience replay, and what are its benefits, in laymen's terms? | The key part of the quoted text is: To perform experience replay we store the agent's experiences $e_t = (s_t,a_t,r_t,s_{t+1})$ This means instead of running Q-learning on state/action pairs as they occur during simulation or actual experience, the system stores the data discovered for [state, action, reward, next_state] - typically in a large table. Note this does not store associated values - this is the raw data to feed into action-value calculations later. The learning phase is then logically separate from gaining experience, and based on taking random samples from this table. You still want to interleave the two processes - acting and learning - because improving the policy will lead to different behaviour that should explore actions closer to optimal ones, and you want to learn from those. However, you can split this how you like - e.g. take one step, learn from three random prior steps etc. The Q-Learning targets when using experience replay use the same targets as the online version, so there is no new formula for that. The loss formula given is also the one you would use for DQN without experience replay. The difference is only which s, a, r, s', a' you feed into it. In DQN, the DeepMind team also maintained two networks and switched which one was learning and which one feeding in current action-value estimates as "bootstraps". This helped with stability of the algorithm when using a non-linear function approximator. That's what the bar stands for in ${\theta}^{\overline{\space}}_i$ - it denotes the alternate frozen version of the weights. Advantages of experience replay: More efficient use of previous experience, by learning with it multiple times. This is key when gaining real-world experience is costly, you can get full use of it. The Q-learning updates are incremental and do not converge quickly, so multiple passes with the same data is beneficial, especially when there is low variance in immediate outcomes (reward, next state) given the same state, action pair. Better convergence behaviour when training a function approximator. Partly this is because the data is more like i.i.d. data assumed in most supervised learning convergence proofs. Disadvantage of experience replay: It is harder to use multi-step learning algorithms, such as Q($\lambda$), which can be tuned to give better learning curves by balancing between bias (due to bootstrapping) and variance (due to delays and randomness in long-term outcomes). Multi-step DQN with experience-replay DQN is one of the extensions explored in the paper Rainbow: Combining Improvements in Deep Reinforcement Learning . The approach used in DQN is briefly outlined by David Silver in parts of this video lecture (around 01:17:00, but worth seeing sections before it). I recommend watching the whole series, which is a graduate level course on reinforcement learning, if you have time. | {
"source": [
"https://datascience.stackexchange.com/questions/20535",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/12515/"
]
} |
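As an illustration of the mechanism described above (store transitions as they happen, then learn from uniformly sampled mini-batches), here is a minimal replay-buffer sketch; the capacity and batch size are arbitrary, and the Q-network update itself is only indicated in the comments.
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10000):
        # Oldest transitions are discarded once capacity is reached
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform sampling breaks the correlation between consecutive transitions
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)

# Inside the acting loop you would interleave the two processes:
#   buffer.push(s, a, r, s_next, done)
#   if len(buffer) >= 32:
#       states, actions, rewards, next_states, dones = buffer.sample(32)
#       # fit the Q-network towards r + gamma * max_a' Q(s', a'; frozen weights)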
21,877 | I'm currently working with Python and Scikit learn for classification purposes, and doing some reading around GridSearch I thought this was a great way for optimising my estimator parameters to get the best results. My methodology is this: Split my data into training/test. Use GridSearch with 5Fold Cross validation to train and test my estimators(Random Forest, Gradient Boost, SVC amongst others) to get the best estimators with the optimal combination of hyper parameters. I then calculate metrics on each of my estimators such as Precision, Recall, FMeasure and Matthews Correlation Coefficient, using my test set to predict the classifications and compare them to actual class labels. It is at this stage that I see strange behaviour and I'm unsure how to proceed. Do I take the .best_estimator_ from the GridSearch and use this as the 'optimal' output from the grid search , and perform prediction using this estimator? If I do this I find that the stage 3 metrics are usually much lower than if I simply train on all training data and test on the test set. Or, do I simply take the output GridSearchCV object as the new estimator ? If I do this I get better scores for my stage 3 metrics, but it seems odd using a GridSearchCV object instead of the intended classifier (E.g. a random Forest) ... EDIT: So my question is what is the difference between the returned GridSearchCV object and the .best_estimator_ attribute? Which one of these should I use for calculating further metrics? Can I use this output like a regular classifier (e.g. using predict), or else how should I use it? | Decided to go away and find the answers that would satisfy my question, and write them up here for anyone else wondering. The .best_estimator_ attribute is an instance of the specified model type, which has the 'best' combination of given parameters from the param_grid. Whether or not this instance is useful depends on whether the refit parameter is set to True (it is by default). For example: clf = GridSearchCV(estimator=RandomForestClassifier(),
param_grid=parameter_candidates,
cv=5,
refit=True,
error_score=0,
n_jobs=-1)
clf.fit(training_set, training_classifications)
optimised_random_forest = clf.best_estimator_
return optimised_random_forest
Will return a RandomForestClassifier. This is all pretty clear from the [documentation][1]. What isn't clear from the documentation is why most examples don't specifically use the .best_estimator_ and instead do this:
clf = GridSearchCV(estimator=RandomForestClassifier(),
param_grid=parameter_candidates,
cv=5,
refit=True,
error_score=0,
n_jobs=-1)
clf.fit(training_set, training_classifications)
return clf
This second approach returns a GridSearchCV instance, with all the bells and whistles of the GridSearchCV such as .best_estimator_, .best_params_, etc., which itself can be used like a trained classifier because:
Optimised Random Forest Accuracy: 0.916970802919708
[[139 47]
[ 44 866]]
GridSearchCV Accuracy: 0.916970802919708
[[139 47]
[ 44 866]]
It just uses the same best estimator instance when making predictions. So in practice there's no difference between these two unless you specifically only want the estimator instance itself. As a side note, my differences in metrics were unrelated and down to a buggy class weighting function.
[1]: http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html#sklearn.model_selection.GridSearchCV.fit | {
"source": [
"https://datascience.stackexchange.com/questions/21877",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/33603/"
]
} |
22,335 | I was reading this blog post titled: The Financial World Wants to Open AI’s Black Boxes , where the author repeatedly refers to ML models as "black boxes". Similar terminology has been used in several places when referring to ML models. Why is this so? It is not as if ML engineers don't know what goes on inside a neural net. Every layer is selected by the ML engineer knowing what activation function to use, what that type of layer does, how the error is back propagated, etc. | The black box thing has nothing to do with the level of expertise of the audience (as long as the audience is human), but with the explainability of the function modelled by the machine learning algorithm. In logistic regression, there is a very simple relationship between inputs and outputs. You can sometimes understand why a certain sample was incorrectly catalogued (e.g. because the value of a certain component of the input vector was too low). The same applies to decision trees: you can follow the logic applied by the tree and understand why a certain element was assigned to one class or the other. However, deep neural networks are the paradigmatic example of black box algorithms. No one, not even the most expert person in the world, grasps the function that is actually modeled by training a neural network. An insight about this can be provided by adversarial examples: some slight (and unnoticeable by a human) change in a training sample can lead the network to think that it belongs to a totally different label. There are some techniques to create adversarial examples, and some techniques to improve robustness against them. But given that no one actually knows all the relevant properties of the function being modeled by the network, it is always possible to find a novel way to create them. Humans are also black boxes, and we are also susceptible to adversarial examples. | {
"source": [
"https://datascience.stackexchange.com/questions/22335",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/11097/"
]
} |
22,494 | I am playing a little with convnets. Specifically, I am using the kaggle cats-vs-dogs dataset which consists on 25000 images labeled as either cat or dog (12500 each). I've managed to achieve around 85% classification accuracy on my test set, however I set a goal of achieving 90% accuracy. My main problem is overfitting. Somehow it always ends up happening (normally after epoch 8-10). The architecture of my network is loosely inspired by VGG-16, more specifically my images are resized to $128x128x3$ , and then I run: Convolution 1 128x128x32 (kernel size is 3, strides is 1)
Convolution 2 128x128x32 (kernel size is 3, strides is 1)
Max pool 1 64x64x32 (kernel size is 2, strides is 2)
Convolution 3 64x64x64 (kernel size is 3, strides is 1)
Convolution 4 64x64x64 (kernel size is 3, strides is 1)
Max pool 2 32x32x64 (kernel size is 2, strides is 2)
Convolution 5 16x16x128 (kernel size is 3, strides is 1)
Convolution 6 16x16x128 (kernel size is 3, strides is 1)
Max pool 3 8x8x128 (kernel size is 2, strides is 2)
Convolution 7 8x8x256 (kernel size is 3, strides is 1)
Max pool 4 4x4x256 (kernel size is 2, strides is 2)
Convolution 8 4x4x512 (kernel size is 3, strides is 1)
Fully connected layer 1024 (dropout 0.5)
Fully connected layer 1024 (dropout 0.5) All the layers except the last one have relus as activation functions. Note that I have tried different combinations of convolutions (I started with simpler convolutions). Also, I have augmented the dataset by mirroring the images, so that in total I have 50000 images. Also, I am normalizing the images using min max normalization, where X is the image $X = X - 0 / 255 - 0$ The code is written in tensorflow and the batch sizes are 128. The mini-batches of training data end up overfitting and having an accuracy of 100% while the validation data seems to stop learning at around 84-85%. I have also tried to increase/decrease the dropout rate. The optimizer being used is AdamOptimizer with a learning rate of 0.0001 At the moment I have been playing with this problem for the last 3 weeks and 85% seems to have set a barrier in front of me. For the record, I know I could use transfer learning to achieve much higher results, but I am interesting on building this network as a self-learning experience. Update: I am running the SAME network with a different batch size, in this case I am using a much smaller batch size (16 instead of 128) so far I am achieving 87.5% accuracy (instead of 85%). That said, the network ends up overfitting anyway. Still I do not understand how a dropout of 50% of the units is not helping... obviously I am doing something wrong here. Any ideas? Update 2: Seems like the problem had to do with the batch size, as with a smaller size (16 instead of 128) I am achieving now 92.8% accuracy on my test set, with the smaller batch size the network still overfits (the mini batches end up with an accuracy of 100%) however, the loss (error) keeps decreasing and it is in general more stable. The cons are a MUCH slower running time, but it is totally worth the wait. | Ok, so after a lot of experimentation I have managed to get some results/insights. In the first place, everything being equal, smaller batches in the training set help a lot in order to increase the general performance of the network, as a negative side, the training process is muuuuuch slower. Second point, data is important, nothing new here but as I learned while fighting this problem, more data always seems to help a bit. Third point, dropout is useful in large networks with lots of data and lots of iterations, in my network I applied dropout on the final fully connected layers only, convolution layers did not get dropout applied. Fourth point (and this is something I am learning over and over): neural networds take A LOT to train, even on good GPUs (I trained this network on floydhub, which uses quite expensive NVIDIA cards), so PATIENCE is key . Final conclusion: Batch sizes are more important that one might think, apparently it is easier to hit a local minimum when batches are larger. The code I wrote is available as a python notebook , I think it is decently documented. | {
"source": [
"https://datascience.stackexchange.com/questions/22494",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/38339/"
]
} |
22,762 | I'm following this example on the scikit-learn website to perform a multioutput classification with a Random Forest model. from sklearn.datasets import make_classification
from sklearn.multioutput import MultiOutputClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import shuffle
import numpy as np
X, y1 = make_classification(n_samples=5, n_features=5, n_informative=2, n_classes=2, random_state=1)
y2 = shuffle(y1, random_state=1)
Y = np.vstack((y1, y2)).T
forest = RandomForestClassifier(n_estimators=10, random_state=1)
multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1)
multi_target_forest.fit(X, Y).predict(X)
print(multi_target_forest.predict_proba(X)) From this predict_proba I get two 5x2 arrays: [array([[ 0.8, 0.2],
[ 0.4, 0.6],
[ 0.8, 0.2],
[ 0.9, 0.1],
[ 0.4, 0.6]]), array([[ 0.6, 0.4],
[ 0.1, 0.9],
[ 0.2, 0.8],
[ 0.9, 0.1],
[ 0.9, 0.1]])] I was really expecting a n_sample by n_classes matrix. I'm struggling to understand how this relates to the probability of the classes present. The docs for predict_proba states: array of shape = [n_samples, n_classes], or a list of n_outputs such arrays if n_outputs > 1. The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. I'm guessing I have the latter in the description, but I'm still struggling to understand how this relates to my class probabilities. Furthermore, when I attempt to access the classes_ attribute for the forest model I get an AttributeError and this attribute does not exist on the MultiOutputClassifier . How can I relate the classes to the output? print(forest.classes_)
AttributeError: 'RandomForestClassifier' object has no attribute 'classes_' | Assuming your target is (0,1), then the classifier would output a probability matrix of dimension (N,2).
The first index refers to the probability that the data belong to class 0, and the second refers to the probability that the data belong to class 1. These two would sum to 1. You can then output the result by: probability_class_1 = model.predict_proba(X)[:, 1] If you have k classes, the output would be (N,k), you would have to specify the probability of which class you want. | {
"source": [
"https://datascience.stackexchange.com/questions/22762",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/20429/"
]
} |
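Regarding the AttributeError in the question: the forest object is never fitted directly (MultiOutputClassifier fits one clone per output), so it has no classes_ attribute. As far as I know, the class ordering for each of the returned probability arrays has to be read from the corresponding fitted clone, roughly as sketched below, reusing the variables from the question.
# predict_proba returns one (n_samples, n_classes) array per output/target
probas = multi_target_forest.predict_proba(X)

for output_idx, (est, proba) in enumerate(zip(multi_target_forest.estimators_, probas)):
    # est.classes_ gives the column order of the matching probability array
    print("output", output_idx, "classes:", est.classes_)
    print(proba)   # column j is P(class == est.classes_[j]) for this output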
22,781 | I am still finding confusing on look back topic when using LSTM for time-series analysis.
If I have hourly data and I want to predict next 6 hours with multiple predictors, should I look back up to 6 hours when I prepare my training set? OR Should I look back one hour (shift 1) and then predict next hour and take that predicted value and feed it back to predict the value after until next 6 hours? This concept is still a little fuzzy for me in LSTM . Any thoughts would be appreciated. | Assuming your target is (0,1), then the classifier would output a probability matrix of dimension (N,2).
The first index refers to the probability that the data belong to class 0, and the second refers to the probability that the data belong to class 1. These two would sum to 1. You can then output the result by: probability_class_1 = model.predict_proba(X)[:, 1] If you have k classes, the output would be (N,k), you would have to specify the probability of which class you want. | {
"source": [
"https://datascience.stackexchange.com/questions/22781",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/34028/"
]
} |
23,159 | Why use softmax as opposed to standard normalization? In the comment area of the top answer of this question, @Kilian Batzner raised 2 questions which also confuse me a lot. It seems no one gives an explanation except numerical benefits. I get the reasons for using Cross-Entropy Loss, but how does that relate to the softmax? You said "the softmax function can be seen as trying to minimize the cross-entropy between the predictions and the truth". Suppose, I would use standard / linear normalization, but still use the Cross-Entropy Loss. Then I would also try to minimize the Cross-Entropy. So how is the softmax linked to the Cross-Entropy except for the numerical benefits? As for the probabilistic view: what is the motivation for looking at log probabilities? The reasoning seems to be a bit like "We use e^x in the softmax, because we interpret x as log-probabilties". With the same reasoning we could say, we use e^e^e^x in the softmax, because we interpret x as log-log-log-probabilities (Exaggerating here, of course). I get the numerical benefits of softmax, but what is the theoretical motivation for using it? | It is more than just numerical. A quick reminder of the softmax:
$$
P(y=j | x) = \frac{e^{x_j}}{\sum_{k=1}^K e^{x_k}}
$$ Where $x$ is an input vector with length equal to the number of classes $K$. The softmax function has 3 very nice properties: 1. it normalizes your data (outputs a proper probability distribution), 2. is differentiable, and 3. it uses the exp you mentioned. A few important points: The loss function is not directly related to softmax. You can use standard normalization and still use cross-entropy. A "hardmax" function (i.e. argmax) is not differentiable. The softmax gives at least a minimal amount of probability to all elements in the output vector, and so is nicely differentiable, hence the term "soft" in softmax. Now I get to your question. The $e$ in softmax is the natural exponential function. Before we normalize, we transform $x$ as in the graph of $e^x$: If $x$ is 0 then $y=1$, if $x$ is 1, then $y=2.7$, and if $x$ is 2, now $y=7$! A huge step! This is what's called a non-linear transformation of our unnormalized log scores. The interesting property of the exponential function combined with the normalization in the softmax is that high scores in $x$ become much more probable than low scores. An example . Say $K=4$, and your log score $x$ is vector $[2, 4, 2, 1]$. The simple argmax function outputs: $$
[0, 1, 0, 0]
$$ The argmax is the goal, but it's not differentiable and we can't train our model with it :( A simple normalization, which is differentiable, outputs the following probabilities: $$
[0.2222, 0.4444, 0.2222, 0.1111]
$$ That's really far from the argmax! :( Whereas the softmax outputs:
$$
[0.1025, 0.7573, 0.1025, 0.0377]
$$ That's much closer to the argmax! Because we use the natural exponential, we hugely increase the probability of the biggest score and decrease the probability of the lower scores when compared with standard normalization. Hence the "max" in softmax. | {
"source": [
"https://datascience.stackexchange.com/questions/23159",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/39461/"
]
} |
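The worked example above can be reproduced in a few lines of NumPy, which also makes it easy to try other score vectors:
import numpy as np

x = np.array([2.0, 4.0, 2.0, 1.0])

def simple_normalize(v):
    return v / v.sum()

def softmax(v):
    e = np.exp(v - v.max())   # subtracting the max is a common numerical-stability trick
    return e / e.sum()

print(simple_normalize(x))   # [0.2222 0.4444 0.2222 0.1111]
print(softmax(x))            # [0.1025 0.7573 0.1025 0.0377]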
23,183 | If we have a look at 90-99% of the papers published using a CNN (ConvNet), the vast majority of them use odd-numbered filter sizes: {1, 3, 5, 7} are the most used. This situation can lead to some problems: with these filter sizes, usually the convolution operation is not perfect with a padding of 2 (a common padding) and some edges of the input_field get lost in the process... Question 1: Why use only odd numbers for convolution filter sizes? Question 2: Is it actually a problem to omit a small part of the input_field during the convolution? Why or why not? | The convolution operation, simply put, is a combination of element-wise products of two matrices. So long as these two matrices agree in dimensions, there shouldn't be a problem, and so I can understand the motivation behind your query. A.1. However, the intent of convolution is to encode the source data matrix (the entire image) in terms of a filter or kernel. More specifically, we are trying to encode the pixels in the neighborhood of anchor/source pixels. Have a look at the figure below:
The vast majority of them use filter size of odd numbers :{1, 3, 5, 7} for the most used. This situation can lead to some problem: With these filter sizes, usually the convolution operation is not perfect with a padding of 2 (common padding) and some edges of the input_field get lost in the process... Question1: Why using only odd_numbers for convolutions filter sizes ? Question2: Is it actually a problem to omit a small part of the input_field during the convolution ? Why so/not ? | The convolution operation, simply put, is combination of element-wise product of two matrices. So long as these two matrices agree in dimensions, there shouldn't be a problem, and so I can understand the motivation behind your query. A.1. However, the intent of convolution is to encode source data matrix (entire image) in terms of a filter or kernel. More specifically, we are trying to encode the pixels in the neighborhood of anchor/source pixels. Have a look at the figure below: Typically, we consider every pixel of the source image as anchor/source pixel, but we are not constrained to do this. In fact, it is not uncommon to include a stride, where in we anchor/source pixels are separated by a specific number of pixels. Okay, so what is the source pixel? It is the anchor point at which the kernel is centered and we are encoding all the neighboring pixels, including the anchor/source pixel. Since, the kernel is symmetrically shaped (not symmetric in kernel values), there are equal number (n) of pixel on all sides (4- connectivity) of the anchor pixel. Therefore, whatever this number of pixels maybe, the length of each side of our symmetrically shaped kernel is 2*n+1 (each side of the anchor + the anchor pixel), and therefore filter/kernels are always odd sized. What if we decided to break with 'tradition' and used asymmetric kernels? You'd suffer aliasing errors, and so we don't do it. We consider the pixel to be the smallest entity, i.e. there is no sub-pixel concept here. A.2
The boundary problem is dealt with using different approaches: some ignore it, some zero-pad it, some mirror-reflect it. If you are not going to compute an inverse operation, i.e. deconvolution, and are not interested in perfect reconstruction of the original image, then you don't care about either the loss of information or the injection of noise due to the boundary problem. Typically, the pooling operation (average pooling or max pooling) will remove your boundary artifacts anyway. So, feel free to ignore part of your 'input field'; your pooling operation will do so for you. -- Zen of convolution: In the old-school signal processing domain, when an input signal was convolved or passed through a filter, there was no way of judging a priori which components of the convolved/filtered response were relevant/informative and which were not. Consequently, the aim was to preserve the signal components (all of them) in these transformations. These signal components are information. Some components are more informative than others. The only reason for this is that we are interested in extracting higher-level information; information pertinent to some semantic classes. Accordingly, those signal components that do not provide the information we are specifically interested in can be pruned out. Therefore, unlike old-school dogmas about convolution/filtering, we are free to pool/prune the convolution response as we feel like. The way we feel like doing so is to rigorously remove all data components that are not contributing towards improving our statistical model. | {
"source": [
"https://datascience.stackexchange.com/questions/23183",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/39198/"
]
} |
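One more way to see the odd-size point, complementary to the 2*n+1 argument above: for a stride-1 convolution that preserves the input size, the padding needed on each side is (k - 1) / 2, which is only a whole number of pixels when the kernel size k is odd. A tiny check:
for k in range(1, 8):
    pad = (k - 1) / 2
    note = "symmetric padding possible" if pad.is_integer() else "would need uneven padding"
    print(f"kernel {k}x{k}: padding per side = {pad} -> {note}")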
23,493 | Here the answer refers to the vanishing and exploding gradients that occur with sigmoid-like activation functions, but, I guess, ReLU has a disadvantage, and it is its expected value: there is no limit on the output of the ReLU, so its expected value is not zero. I remember the time before the popularity of ReLU when tanh was more popular than sigmoid amongst machine learning experts. The reason was that the expected value of tanh was equal to zero, and it helped learning in deeper layers to be more rapid in a neural net. ReLU does not have this characteristic, so why does it work so well if we put its derivative advantage aside? Moreover, I guess the derivative may also be affected, because the activations (the output of ReLU) are involved in calculating the update rules. | The biggest advantage of ReLU is indeed non-saturation of its gradient, which greatly accelerates the convergence of stochastic gradient descent compared to the sigmoid / tanh functions ( paper by Krizhevsky et al). But it's not the only advantage. Here is a discussion of the sparsity effects of ReLU activations and the induced regularization. Another nice property is that, compared to tanh / sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply thresholding a matrix of activations at zero. But I'm not convinced that the great success of modern neural networks is due to ReLU alone. New initialization techniques, such as Xavier initialization, dropout and (later) batchnorm also played a very important role. For example, the famous AlexNet used ReLU and dropout. So to answer your question: ReLU has very nice properties, though it is not ideal. But it truly proves itself when combined with other great techniques, which, by the way, solve the non-zero-center problem that you've mentioned. UPD: ReLU output is not zero-centered indeed, and it does hurt the NN performance. But this particular issue can be tackled by other regularization techniques, e.g. batchnorm, which normalizes the signal before the activation: We add the BN transform immediately before the nonlinearity, by
normalizing $x = Wu+ b$. ... normalizing it is likely to produce
activations with a stable distribution. | {
"source": [
"https://datascience.stackexchange.com/questions/23493",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28175/"
]
} |
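The "thresholding a matrix of activations at zero" point above is literally a one-liner in NumPy, and its gradient shows the non-saturation property:
import numpy as np

def relu(x):
    return np.maximum(0.0, x)          # thresholding at zero

def relu_grad(x):
    return (x > 0).astype(x.dtype)     # gradient is 1 for every positive input, so it never saturates

a = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(a))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(a))  # [0. 0. 0. 1. 1.]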
23,789 | I wasn't clear on couple of concepts: XGBoost converts weak learners to strong learners. What's the advantage of doing this ? Combining many weak learners instead of just using a single tree ? Random Forest uses various sample from tree to create a tree. What's the advantage of this method instead of just using a singular tree? | It's easier to start with your second question and then go to the first. Bagging Random Forest is a bagging algorithm. It reduces variance. Say that you have very unreliable models, such as Decision Trees. (Why unreliable? Because if you change your data a little bit, the decision tree created can be very different.) In such a case, you can build a robust model (reduce variance) through bagging -- bagging is when you create different models by resampling your data to make the resulting model more robust. Random forest is what we call to bagging applied to decision trees, but it's no different than other bagging algorithm. Why would you want to do this? It depends on the problem. But usually, it is highly desirable for the model to be stable. Boosting Boosting reduces variance, and also reduces bias. It reduces variance because you are using multiple models (bagging). It reduces bias by training the subsequent model by telling him what errors the previous models made (the boosting part). There are two main algorithms: Adaboost: this is the original algorithm; you tell subsequent models to punish more heavily observations mistaken by the previous models Gradient boosting: you train each subsequent model using the residuals (the difference between the predicted and true values) In these ensembles, your base learner must be weak. If it overfits the data, there won't be any residuals or errors for the subsequent models to build upon. Why are these good models? Well, most competitions in websites like Kaggle have been won using gradient boosting trees. Data science is an empirical science, "because it works" is good enough. Anyhow, do notice that boosting models can overfit (albeit empirically it's not very common). Another reason why gradient boosting, in particular, is also pretty cool: because it makes it very easy to use different loss functions, even when the derivative is not convex. For instance, when using probabilistic forecast, you can use stuff such as the pinball function as your loss function; something which is much harder with neural networks (because the derivative is always constant). [Interesting historical note: Boosting was originally a theoretical invention motivated by the question " can we build a stronger model using weaker models "] Notice: People sometimes confuse random forest and gradient boosting trees, just because both use decision trees, but they are two very different families of ensembles. | {
"source": [
"https://datascience.stackexchange.com/questions/23789",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/39331/"
]
} |
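A minimal way to see the two families side by side in scikit-learn; the toy dataset and hyperparameters below are arbitrary illustration choices, not a benchmark.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Bagging of deep trees: mainly variance reduction
rf = RandomForestClassifier(n_estimators=200, random_state=0)

# Boosting of shallow ("weak") trees: reduces bias as well as variance
gb = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    print(name, cross_val_score(model, X, y, cv=5).mean())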
23,895 | How we can program in the Keras library (or TensorFlow) to partition training on multiple GPUs? Let's say that you are in an Amazon ec2 instance that has 8 GPUs and you would like to use all of them to train faster, but your code is just for a single CPU or GPU. | From the Keras FAQs , below is copy-pasted code to enable 'data parallelism'. I.e. having each of your GPUs process a different subset of your data independently. from keras.utils import multi_gpu_model
# Replicates `model` on 8 GPUs.
# This assumes that your machine has 8 available GPUs.
parallel_model = multi_gpu_model(model, gpus=8)
parallel_model.compile(loss='categorical_crossentropy',
optimizer='rmsprop')
# This `fit` call will be distributed on 8 GPUs.
# Since the batch size is 256, each GPU will process 32 samples.
parallel_model.fit(x, y, epochs=20, batch_size=256) Note that this appears to be valid only for the Tensorflow backend at the time of writing. Update (Feb 2018) : Keras now accepts automatic gpu selection using multi_gpu_model, so you don't have to hardcode the number of gpus anymore. Details in this Pull Request . In other words, this enables code that looks like this: try:
model = multi_gpu_model(model)
except:
pass But to be more explicit , you can stick with something like: parallel_model = multi_gpu_model(model, gpus=None) Bonus : To check if you really are utilizing all of your GPUs, specifically NVIDIA ones, you can monitor your usage in the terminal using: watch -n0.5 nvidia-smi References: https://keras.io/utils/#multi_gpu_model https://stackoverflow.com/questions/8223811/top-command-for-gpus-using-cuda | {
"source": [
"https://datascience.stackexchange.com/questions/23895",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/40665/"
]
} |
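On recent TensorFlow releases the multi_gpu_model utility shown above is no longer available; as far as I know the replacement is tf.distribute.MirroredStrategy, roughly as sketched below. The layer sizes are placeholders, x and y stand for your training data as in the snippet above, and the details should be checked against your TF version.
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()   # uses all visible GPUs by default
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(1024, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")

# fit() splits each batch of 256 across the replicas, much like multi_gpu_model did
model.fit(x, y, epochs=20, batch_size=256)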
23,969 | I'm looking to solve the following problem: I have a set of sentences as my dataset, and I want to be able to type a new sentence, and find the sentence that the new one is the most similar to in the dataset. An example would look like: New sentence: " I opened a new mailbox " Prediction based on dataset: Sentence | Similarity
A dog ate poop 0%
A mailbox is good 50%
A mailbox was opened by me 80% I've read that cosine similarity can be used to solve these kinds of issues paired with tf-idf (and RNNs should not bring significant improvements to the basic methods), or also word2vec is used for similar problems. Are those actually viable for use in this specific case, too? Are there any other techniques/algorithms to solve this (preferably with Python and SKLearn, but I'm open to learn about TensorFlow, too)? | Your problem can be solved with Word2vec as well as Doc2vec. Doc2vec would give better results because it takes sentences into account while training the model. Doc2vec solution You can train your doc2vec model following this link . You may want to perform some pre-processing steps like removing all stop words (words like "the", "an", etc. that don't add much meaning to the sentence). Once you trained your model, you can find the similar sentences using following code. import gensim
model = gensim.models.Doc2Vec.load('saved_doc2vec_model')
new_sentence = "I opened a new mailbox".split(" ")
model.docvecs.most_similar(positive=[model.infer_vector(new_sentence)],topn=5) Results: [('TRAIN_29670', 0.6352514028549194),
('TRAIN_678', 0.6344441771507263),
('TRAIN_12792', 0.6202734708786011),
('TRAIN_12062', 0.6163255572319031),
('TRAIN_9710', 0.6056315898895264)] The above results are a list of (label, cosine_similarity_score) tuples. You can map outputs back to sentences by doing train[29670]. Please note that the above approach will only give good results if your doc2vec model contains embeddings for the words found in the new sentence. If you try to get the similarity for some gibberish sentence like sdsf sdf f sdf sdfsdffg, it will give you a few results, but those might not be actually similar sentences, as your trained model may not have seen these gibberish words while training. So try to train your model on as many sentences as possible to incorporate as many words as possible, for better results. Word2vec Solution If you are using word2vec, you need to calculate the average vector for all words in every sentence and use cosine similarity between the vectors.
import numpy as np
def avg_sentence_vector(words, model, num_features, index2word_set):
#function to average all words vectors in a given paragraph
featureVec = np.zeros((num_features,), dtype="float32")
nwords = 0
for word in words:
if word in index2word_set:
nwords = nwords+1
featureVec = np.add(featureVec, model[word])
if nwords>0:
featureVec = np.divide(featureVec, nwords)
    return featureVec
Calculate Similarity
from sklearn.metrics.pairwise import cosine_similarity
#build the vocabulary set expected by avg_sentence_vector (attribute name assumes gensim 3.x; use model.wv.index_to_key on gensim 4.x)
index2word_set = set(word2vec_model.wv.index2word)
#get average vector for sentence 1
sentence_1 = "this is sentence number one"
sentence_1_avg_vector = avg_sentence_vector(sentence_1.split(), model=word2vec_model, num_features=100, index2word_set=index2word_set)
#get average vector for sentence 2
sentence_2 = "this is sentence number two"
sentence_2_avg_vector = avg_sentence_vector(sentence_2.split(), model=word2vec_model, num_features=100, index2word_set=index2word_set)
#cosine_similarity expects 2D inputs, so reshape the 1D averaged vectors
sen1_sen2_similarity = cosine_similarity(sentence_1_avg_vector.reshape(1, -1), sentence_2_avg_vector.reshape(1, -1)) | {
"source": [
"https://datascience.stackexchange.com/questions/23969",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/21254/"
]
} |
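Since the question also mentions tf-idf with cosine similarity in scikit-learn, here is that baseline for comparison with the embedding approaches above, using the tiny example corpus from the question:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = ["A dog ate poop", "A mailbox is good", "A mailbox was opened by me"]
query = "I opened a new mailbox"

vectorizer = TfidfVectorizer()
corpus_vecs = vectorizer.fit_transform(corpus)
query_vec = vectorizer.transform([query])

scores = cosine_similarity(query_vec, corpus_vecs).ravel()
for sentence, score in sorted(zip(corpus, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {sentence}")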
24,081 | I recently came across graph embedding such as DeepWalk and LINE. However, I still do not have a clear idea as what is meant by graph embeddings and when to use it (applications)? Any suggestions are welcome! | Graph embedding learns a mapping from a network to a vector space, while preserving relevant network properties. Vector spaces are more amenable to data science than graphs. Graphs contain edges and nodes, those network relationships can only use a specific subset of mathematics, statistics, and machine learning. Vector spaces have a richer toolset from those domains. Additionally, vector operations are often simpler and faster than the equivalent graph operations. One example is finding nearest neighbors. You can perform "hops" from node to another node in a graph. In many real-world graphs after a couple of hops, there is little meaningful information (e.g., recommendations from friends of friends of friends). However, in vector spaces, you can use distance metrics to get quantitative results (e.g., Euclidian distance or Cosine Similarity). If you have quantitative distance metrics in a meaningful vector space, finding nearest neighbors is straightforward. " Graph Embedding Techniques, Applications, and Performance: A Survey " is an overview article that goes into greater detail. | {
"source": [
"https://datascience.stackexchange.com/questions/24081",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/37986/"
]
} |
24,093 | I developed a machine learning model with Python (Anaconda + Flask) on my workstation and all goes well. Later, I tried to ship this program onto another machine where of course I tried to set up the same environment, but the program fails to run. I copied the program to other machines where it also runs smoothly. I cannot figure out what the problem is in the failed case (both the program code and the error message are copious so I am not able to present them here) but I'm almost certain that it is something with the different versions of the dependencies. So, my question is that given an environment where a certain program runs well, how can I clone it to another where it should run well also? Of course, without the cloning of the full system ;) | First of all this is a Python/Anaconda question and should probably be asked in a different stack exchange subsite. As for the question itself - you can export your Anaconda environment using: conda env export > environment.yml And recreate it using: conda env create -f environment.yml Please note that as others suggested - you should use virtual environments which allows you to create a certain environment that is separated from that of your machine and manage it more easily. To create a virtual environment in Anaconda you can use: conda create -n yourenvname python=x.x anaconda which you activate using: source activate yourenvname | {
"source": [
"https://datascience.stackexchange.com/questions/24093",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/21560/"
]
} |
24,319 | I want to train a deep model with a large amount of training data, but my desktop does not have that power to train such a deep model with these abundant data. I'd like to know whether there are any free cloud services that can be used for training machine learning and deep learning models? I also would like to know if there is a cloud service, where I would be able to track the training results, and the training would continue even if I am not connected to the cloud. | There are no unlimited free services*, but some have starting credit or free offers on initial signup. Here are some suggested to date: AWS: If specifically deep learning on a large data set, then probably AWS is out - their free offer does not cover machines with enough processing power to tackle deep learning projects. Google Cloud might do, the starting credit offer is good enough to do a little deep learning (for maybe a couple of weeks), although they have signup and tax restrictions. Azure have a free tier with limited processing and storage options. Most free offerings appear to follow the "Freemium" model - give you limited service that you can learn to use and maybe like. However not enough to use heavily (for e.g. training an image recogniser or NLP model from scratch) unless you are willing to pay. This best advice is to shop around for a best starting offer and best price. A review of services is not suitable here, as it will get out of date quickly and not a good use of Stack Exchange. But you can find similar questions on Quora and other sites - your best bet is to do a web search for "cloud compute services for deep learning" or similar and expect to spend some time comparing notes. A few specialist deep learning services have popped up recently such as Nimbix or FloydHub , and there are also the big players such as Azure, AWS, Google Cloud. You won't find anything completely free and unencumbered, and if you want to do this routinely and have time to build and maintain hardware then it is cheaper to buy your own equipment in the long run - at least at a personal level. To decide whether to pay for cloud or build your own, then consider a typical price for a cloud machine suitable for performing deep learning at around \$1 per hour (prices do vary a lot though, and it is worth shopping around, if only to find a spec that matches your problem). There may be additional fees for storage and data transfer. Compare that to pre-built deep learning machines costing from \$2000, or building your own for \$1000 - such machines might not be 100% comparable, but if you are working by yourself then the payback point is going to be after only a few months use. Although don't forget the electricity costs - a powerful machine can draw 0.5kW whilst being heavily used, so this adds up to more than you might expect. The advantages of cloud computing are that someone else does the maintenance work and takes on the risk of hardware failure. These are valuable services, and priced accordingly. * But see Jay Speidall's answer about Google's colab service, which appears to be free to use, but may have some T&C limitations which may affect you (for instance I doubt they will be happy for you to run content production of Deep Dream or Style Transfer on it) | {
"source": [
"https://datascience.stackexchange.com/questions/24319",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28175/"
]
} |
24,320 | I have time-series data of a few metrics. I know which metric is the response variable and the independent ones. I need to fit a model between them. The relationship could be linear, quadratic, logarithmic, piecewise linear, multiple linear, etc. Basically, it could be anything. Is there any technique / properties I could use to find the relationship between the metrics and fit a model? Right now, I have written a brute-force script in R. For example, I have response variable A which depends on X1, X2 and X2. A = C1*f1(X1)+C2*f2(X2)+C3*f3(X3) is my model. My script tries all possible combinations of f1, f2 and f3. By combinations, I mean that I initialy start with all linear, then one of them is quadratic, then cubic, then logarithmic, etc. I am using lm(). I then choose the model which has the least AIC as my final model. This obviously takes too long. I definitely need an automated way of finding this model. Could you suggest a better way to do this? | There are no unlimited free services*, but some have starting credit or free offers on initial signup. Here are some suggested to date: AWS: If specifically deep learning on a large data set, then probably AWS is out - their free offer does not cover machines with enough processing power to tackle deep learning projects. Google Cloud might do, the starting credit offer is good enough to do a little deep learning (for maybe a couple of weeks), although they have signup and tax restrictions. Azure have a free tier with limited processing and storage options. Most free offerings appear to follow the "Freemium" model - give you limited service that you can learn to use and maybe like. However not enough to use heavily (for e.g. training an image recogniser or NLP model from scratch) unless you are willing to pay. This best advice is to shop around for a best starting offer and best price. A review of services is not suitable here, as it will get out of date quickly and not a good use of Stack Exchange. But you can find similar questions on Quora and other sites - your best bet is to do a web search for "cloud compute services for deep learning" or similar and expect to spend some time comparing notes. A few specialist deep learning services have popped up recently such as Nimbix or FloydHub , and there are also the big players such as Azure, AWS, Google Cloud. You won't find anything completely free and unencumbered, and if you want to do this routinely and have time to build and maintain hardware then it is cheaper to buy your own equipment in the long run - at least at a personal level. To decide whether to pay for cloud or build your own, then consider a typical price for a cloud machine suitable for performing deep learning at around \$1 per hour (prices do vary a lot though, and it is worth shopping around, if only to find a spec that matches your problem). There may be additional fees for storage and data transfer. Compare that to pre-built deep learning machines costing from \$2000, or building your own for \$1000 - such machines might not be 100% comparable, but if you are working by yourself then the payback point is going to be after only a few months use. Although don't forget the electricity costs - a powerful machine can draw 0.5kW whilst being heavily used, so this adds up to more than you might expect. The advantages of cloud computing are that someone else does the maintenance work and takes on the risk of hardware failure. These are valuable services, and priced accordingly. 
* But see Jay Speidall's answer about Google's colab service, which appears to be free to use, but may have some T&C limitations which may affect you (for instance I doubt they will be happy for you to run content production of Deep Dream or Style Transfer on it) | {
"source": [
"https://datascience.stackexchange.com/questions/24320",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/40584/"
]
} |
24,323 | I have a severely skewed data sets consisting of 20 something classes where the smallest class contains on the order of 1000 samples and the largest several millions. Regarding the validation data, I understand that I should make sure that it represent a similar ratio between classes compared to the one in my original raw data. Hence, I shouldn't do any under- or over-sampling on that validation data, but can do it on the training data. Because I have such greatly skewed data set, is it still viable to add some restriction to the selection of my validation data set? Say I want there to be at least 1000 samples from each class in order to accept it, as I want to have a reasonable accuracy on the metrics of all classes. Would this ruin my validation as the ratio between the largest and smallest class could then go from ~0.01-0.1% to ~1.0%, or is it still safe as the validation data still is significantly skewed? | There are no unlimited free services*, but some have starting credit or free offers on initial signup. Here are some suggested to date: AWS: If specifically deep learning on a large data set, then probably AWS is out - their free offer does not cover machines with enough processing power to tackle deep learning projects. Google Cloud might do, the starting credit offer is good enough to do a little deep learning (for maybe a couple of weeks), although they have signup and tax restrictions. Azure have a free tier with limited processing and storage options. Most free offerings appear to follow the "Freemium" model - give you limited service that you can learn to use and maybe like. However not enough to use heavily (for e.g. training an image recogniser or NLP model from scratch) unless you are willing to pay. This best advice is to shop around for a best starting offer and best price. A review of services is not suitable here, as it will get out of date quickly and not a good use of Stack Exchange. But you can find similar questions on Quora and other sites - your best bet is to do a web search for "cloud compute services for deep learning" or similar and expect to spend some time comparing notes. A few specialist deep learning services have popped up recently such as Nimbix or FloydHub , and there are also the big players such as Azure, AWS, Google Cloud. You won't find anything completely free and unencumbered, and if you want to do this routinely and have time to build and maintain hardware then it is cheaper to buy your own equipment in the long run - at least at a personal level. To decide whether to pay for cloud or build your own, then consider a typical price for a cloud machine suitable for performing deep learning at around \$1 per hour (prices do vary a lot though, and it is worth shopping around, if only to find a spec that matches your problem). There may be additional fees for storage and data transfer. Compare that to pre-built deep learning machines costing from \$2000, or building your own for \$1000 - such machines might not be 100% comparable, but if you are working by yourself then the payback point is going to be after only a few months use. Although don't forget the electricity costs - a powerful machine can draw 0.5kW whilst being heavily used, so this adds up to more than you might expect. The advantages of cloud computing are that someone else does the maintenance work and takes on the risk of hardware failure. These are valuable services, and priced accordingly. 
* But see Jay Speidall's answer about Google's colab service, which appears to be free to use, but may have some T&C limitations which may affect you (for instance I doubt they will be happy for you to run content production of Deep Dream or Style Transfer on it) | {
"source": [
"https://datascience.stackexchange.com/questions/24323",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/31948/"
]
} |
24,452 | I read somewhere that if we have features that are too correlated, we have to remove one, as this may worsen the model. It is clear that correlated features means that they bring the same information, so it is logical to remove one of them. But I can not understand why this can worsen the model. | Correlated features in general don't improve models (although it depends on the specifics of the problem like the number of variables and the degree of correlation), but they affect specific models in different ways and to varying extents: For linear models (e.g., linear regression or logistic regression), multicolinearity can yield solutions that are wildly varying and possibly numerically unstable . Random forests can be good at detecting interactions between different features, but highly correlated features can mask these interactions. More generally, this can be viewed as a special case of Occam's razor . A simpler model is preferable, and, in some sense, a model with fewer features is simpler. The concept of minimum description length makes this more precise. | {
"source": [
"https://datascience.stackexchange.com/questions/24452",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/33386/"
]
} |
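To make the multicollinearity point in the answer above concrete, here is a minimal sketch of a common practical step: dropping one feature from each highly correlated pair before fitting a linear model. The toy feature names and the 0.95 threshold are my own illustrative choices, not taken from the answer.
import numpy as np
import pandas as pd

# Toy data: x2 is a near-copy of x1, so the pair is highly correlated
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
df = pd.DataFrame({
    "x1": x1,
    "x2": x1 + rng.normal(scale=0.01, size=200),  # near-duplicate of x1
    "x3": rng.normal(size=200),                   # independent feature
})

# Absolute correlation matrix, upper triangle only so each pair is counted once
corr = df.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))

# Drop one feature from every pair whose correlation exceeds the threshold
to_drop = [col for col in upper.columns if (upper[col] > 0.95).any()]
reduced = df.drop(columns=to_drop)
print("dropped:", to_drop)   # expected: ['x2']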
24,477 | I have a problem figuring out how to visualize two simple quantities using R. I want to compare their size by visualizing two proportional spatial figures next to each other and the size proportional to their volume, height or whatever. E.g. I have two numbers 45 and 15, after my code execution I would get two spatial figures, but first is three times bigger. https://visual.ly/blog/45-ways-to-communicate-two-quantities/ <- I want to do something like in this article example no.26 "Volumes".
Figures could be like cone, cube or 3d pie. Is is possible to realize that using ggplot2 or any other package? | Correlated features in general don't improve models (although it depends on the specifics of the problem like the number of variables and the degree of correlation), but they affect specific models in different ways and to varying extents: For linear models (e.g., linear regression or logistic regression), multicolinearity can yield solutions that are wildly varying and possibly numerically unstable . Random forests can be good at detecting interactions between different features, but highly correlated features can mask these interactions. More generally, this can be viewed as a special case of Occam's razor . A simpler model is preferable, and, in some sense, a model with fewer features is simpler. The concept of minimum description length makes this more precise. | {
"source": [
"https://datascience.stackexchange.com/questions/24477",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/40898/"
]
} |
24,511 | In machine learning tasks it is common to shuffle data and normalize it. The purpose of normalization is clear (for having same range of feature values). But, after struggling a lot, I did not find any valuable reason for shuffling data. I have read this post here discussing when we need to shuffle data, but it is not obvious why we should shuffle the data. Furthermore, I have frequently seen in algorithms such as Adam or SGD where we need batch gradient descent (data should be separated to mini-batches and batch size has to be specified). It is vital according to this post to shuffle data for each epoch to have different data for each batch. So, perhaps the data is shuffled and more importantly changed. Why do we do this? | Shuffling data serves the purpose of reducing variance and making sure that models remain general and overfit less. The obvious case where you'd shuffle your data is if your data is sorted by their class/target. Here, you will want to shuffle to make sure that your training/test/validation sets are representative of the overall distribution of the data. For batch gradient descent, the same logic applies. The idea behind batch gradient descent is that by calculating the gradient on a single batch, you will usually get a fairly good estimate of the "true" gradient. That way, you save computation time by not having to calculate the "true" gradient over the entire dataset every time. You want to shuffle your data after each epoch because you will always have the risk to create batches that are not representative of the overall dataset, and therefore, your estimate of the gradient will be off. Shuffling your data after each epoch ensures that you will not be "stuck" with too many bad batches. In regular stochastic gradient descent, when each batch has size 1, you still want to shuffle your data after each epoch to keep your learning general. Indeed, if data point 17 is always used after data point 16, its own gradient will be biased with whatever updates data point 16 is making on the model. By shuffling your data, you ensure that each data point creates an "independent" change on the model, without being biased by the same points before them. | {
"source": [
"https://datascience.stackexchange.com/questions/24511",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/28175/"
]
} |
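A minimal sketch of the per-epoch re-shuffling described in the answer above, written in plain NumPy rather than any specific framework; the toy array sizes and batch size are arbitrary choices for illustration.
import numpy as np

rng = np.random.default_rng(42)
X = np.arange(20).reshape(10, 2)   # 10 toy samples, 2 features
y = np.arange(10)                  # toy targets
batch_size, n_epochs = 4, 3

for epoch in range(n_epochs):
    idx = rng.permutation(len(X))          # fresh shuffle at the start of every epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        x_batch, y_batch = X[batch], y[batch]
        # ... compute the gradient on (x_batch, y_batch) and update the model here
    print(f"epoch {epoch}: first batch indices {idx[:batch_size]}")
Because the permutation is redrawn each epoch, the same batch composition is not repeated, which is exactly what keeps particular batches from biasing the gradient estimates in the same way on every pass.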
24,534 | I am wondering whether there is any scenario in which gradient descent does not converge to a minimum. I am aware that gradient descent is not always guaranteed to converge to a global optimum. I am also aware that it might diverge from an optimum if, say, the step size is too big. However, it seems to me that, if it diverges from some optimum, then it will eventually go to another optimum. Hence, gradient descent would be guaranteed to converge to a local or global optimum. Is that right? If not, could you please provide a rough counterexample? | Gradient Descent is an algorithm which is designed to find the optimal points, but these optimal points are not necessarily global. And yes if it happens that it diverges from a local location it may converge to another optimal point but its probability is not too much. The reason is that the step size might be too large that prompts it recede one optimal point and the probability that it oscillates is much more than convergence. About gradient descent there are two main perspectives, machine learning era and deep learning era. During machine learning era it was considered that gradient descent will find the local/global optimum but in deep learning era where the dimension of input features are too much it is shown in practice that the probability that all of the features be located in there optimal value at a single point is not too much and rather seeing to have optimal locations in cost functions, most of the time saddle points are observed. This is one of the reasons that training with lots of data and training epochs cause the deep learning models outperform other algorithms. So if you train your model, it will find a detour or will find its way to go downhill and do not stuck in saddle points, but you have to have appropriate step sizes. For more intuitions I suggest you referring here and here . | {
"source": [
"https://datascience.stackexchange.com/questions/24534",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41658/"
]
} |
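A small numeric sketch of the step-size point made in the answer above: on the convex function f(x) = x**2, plain gradient descent converges for a small learning rate but the iterates grow without bound when the step is too large. The two learning rates are illustrative choices of mine, not values from the answer.
# Gradient descent on f(x) = x**2, whose gradient is f'(x) = 2*x
def gradient_descent(lr, x0=1.0, steps=10):
    x = x0
    trace = [x]
    for _ in range(steps):
        x = x - lr * 2 * x      # one gradient step
        trace.append(x)
    return trace

print(gradient_descent(lr=0.1))   # values shrink toward the minimum at 0
print(gradient_descent(lr=1.1))   # |x| grows by 20% per step: divergence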
25,024 | I'm trying to train a single perceptron (1000 input units, 1 output, no hidden layers) on 64 randomly generated data points. I'm using Pytorch using the Adam optimizer: import torch
from torch.autograd import Variable
torch.manual_seed(545345)
N, D_in, D_out = 64, 1000, 1
x = Variable(torch.randn(N, D_in))
y = Variable(torch.randn(N, D_out))
model = torch.nn.Linear(D_in, D_out)
loss_fn = torch.nn.MSELoss(size_average=False)
optimizer = torch.optim.Adam(model.parameters())
for t in xrange(5000):
    y_pred = model(x)
    loss = loss_fn(y_pred, y)
    print(t, loss.data[0])
    optimizer.zero_grad()
    loss.backward()
    optimizer.step() Initially, the loss quickly decreases, as expected: (0, 91.74887084960938)
(1, 76.85824584960938)
(2, 63.434078216552734)
(3, 51.46927261352539)
(4, 40.942893981933594)
(5, 31.819372177124023) Around 300 iterations, the error reaches near zero: (300, 2.1734419819452455e-12)
(301, 1.90354676465887e-12)
(302, 2.3347573874232808e-12) This goes on for a few thousand iterations. However, after training for too long, the error starts to increase again: (4997, 0.002102422062307596)
(4998, 0.0020302983466535807)
(4999, 0.0017039275262504816) Why is this happening? | This small instability at the end of convergence is a feature of Adam (and RMSProp) due to how it estimates mean gradient magnitudes over recent steps and divides by them. One thing Adam does is maintain a rolling geometric mean of recent gradients and squares of the gradients. The squares of the gradients are used to divide (another rolling mean of) the current gradient to decide the current step. However, when your gradient becomes and stays very close to zero, this will make the squares of the gradient become so low that they either have large rounding errors or are effectively zero, which can introduce instability (for instance a long-term stable gradient in one dimension makes a relatively small step from $10^{-10}$ to $10^{-5}$ due to changes in other params), and the step size will start to jump around, before settling again. This actually makes Adam less stable and worse for your problem than more basic gradient descent, assuming you want to get as numerically close to zero loss as calculations allow for your problem. In practice on deep learning problems, you don't get this close to convergence (and for some regularisation techniques such as early stopping, you don't want to anyway), so it is usually not a practical concern on the types of problem that Adam was designed for. You can actually see this occurring for RMSProp in a comparison of different optimisers (RMSProp is the black line - watch the very last steps just as it reaches the target): You can make Adam more stable and able to get closer to true convergence by reducing the learning rate. E.g. optimizer = torch.optim.Adam(model.parameters(), lr=1e-5) It will take longer to optimise. Using lr=1e-5 you need to train for 20,000+ iterations before you see the instability and the instability is less dramatic, values hover around $10^{-7}$. | {
"source": [
"https://datascience.stackexchange.com/questions/25024",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/42255/"
]
} |
25,119 | How to calculate the mAP (mean Average Precision) for the detection task for the Pascal VOC leaderboards? There said - at page 11 : Average Precision (AP). For the VOC2007 challenge, the interpolated
average precision (Salton and Mcgill 1986) was used to evaluate both
classification and detection. For a given task and class, the
precision/recall curve is computed from a method’s ranked output.
Recall is defined as the proportion of all positive examples ranked
above a given rank. Precision is the proportion of all examples above
that rank which are from the positive class. The AP summarises the
shape of the precision/recall curve, and is defined as the mean
precision at a set of eleven equally spaced recall levels
[0,0.1,...,1]: AP = 1/11 ∑ r∈{0,0.1,...,1} pinterp(r) The precision at each recall level r is interpolated by taking the
maximum precision measured for a method for which the corresponding
recall exceeds r: pinterp(r) = max p(r˜) , where p(r˜) is the measured
precision at recall ˜r About mAP So does it mean that: We calculate Precision and Recall : A) For many different IoU > {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} we calculate True/False Positive/Negative values Where True positive = Number_of_detection with IoU > {0, 0.1,..., 1} , as said here and then we calculate: Precision = True positive / (True positive + False positive) Recall = True positive / (True positive + False negative) B) Or for many different thresholds of detection algorithms we calculate: Precision = True positive / (True positive + False positive) Recall = True positive / (True positive + False negative) Where True positive = Number_of_detection with IoU > 0.5 as said here C) Or for many different thresholds of detection algorithms we calculate: Precision = Intersect / Detected_box Recall = Intersect / Object As shown here ? Then we build Precision-Recall curve , as shown here: Then we calculate AP (average precision) as average of 11 values of Precision at the points where Recall = {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1} , i.e. AP = 1/11 ∑ recall∈{0,0.1,...,1} Precision(Recall) (In general for each point, for example 0.3, we get MAX of Precision for Recall <= 0.3, instead of value of Precision at this point Recall=0.3) And when we calculate AP only for 1 something object class on all images - then we get AP (average precision) for this class, for example, only for air . So AP is a integral ( area under the curve ) But when we calculate AP for all object classes on all images - then we get mAP (mean average precision) for all images dataset. Questions: Is it right, and if it isn't, then how to calculate mAP for Pascal VOC Challenge? And which of the 3 formulas (A, B or C) is correct for calculating Precision and Recall, in paragraph 1? Short answer: mAP = AVG(AP for each object class) AP = AVG(Precision for each of 11 Recalls {precision = 0, 0.1, ..., 1}) PR-curve = Precision and Recall (for each Threshold that is in the Predictions bound-boxes) Precision = TP / (TP + FP) Recall = TP / (TP + FN) TP = number of detections with IoU>0.5 FP = number of detections with IoU<=0.5 or detected more than once FN = number of objects that not detected or detected with IoU<=0.5 | To answer your questions: Yes your approach is right Of A, B and C the right answer is B. The explanation is the following: In order to calculate Mean Average Precision (mAP) in the context of Object Detection you must compute the Average Precision (AP) for each class, and then compute the mean across all classes. The key here is to compute the AP for each class, in general for computing Precision (P) and Recall (R) you must define what are: True Positives (TP), False Positives (FP), True Negative (TN) and False Negative (FN). In the setting of Object Detection of the Pascal VOC Challenge are the following: TP: are the Bounding Boxes (BB) that the intersection over union (IoU) with the ground truth (GT) is above 0.5 FP: two cases (a) BB that the IoU with GT is below 0.5 (b) the BB that have IoU with a GT that has already been detected. TN: there are not true negative, the image are expected to contain at least one object FN: those ground truthes for which the method failed to produce a BB Now each predicted BB have a confidence value for the given class. So the scoring method sort the predictions for decreasing order of confidence and compute the P = TP / (TP + FP) and R = TP / (TP + FN) for each possible rank k = 1 up to the number of predictions. 
So now you have a (P, R) pair for each rank; those pairs form the "raw" Precision-Recall curve. To compute the interpolated P-R curve, for each value of R you select the maximum P that has a corresponding R' >= R. There are two different ways to sample P-R curve points according to the voc devkit doc.
For the VOC Challenge before 2010, we select the maximum P obtained for any R' >= R, where R belongs to {0, 0.1, ..., 1} (eleven points). The AP is then the average of these interpolated precisions over the eleven Recall thresholds. For the VOC Challenge 2010 and after, we still select the maximum P for any R' >= R, but R now ranges over all unique recall values (including 0 and 1). The AP is then the area under the P-R curve. Notice that if no P has a Recall above a given threshold, the interpolated Precision value there is 0. For instance, consider the following output of a method for the class "Aeroplane": BB | confidence | GT
----------------------
BB1 | 0.9 | 1
----------------------
BB2 | 0.9 | 1
----------------------
BB3 | 0.7 | 0
----------------------
BB4 | 0.7 | 0
----------------------
BB5 | 0.7 | 1
----------------------
BB6 | 0.7 | 0
----------------------
BB7 | 0.7 | 0
----------------------
BB8 | 0.7 | 1
----------------------
BB9 | 0.7 | 1
---------- Besides, the method did not detect the objects in two images, so we have FN = 2. The previous table lists the predictions of the method ranked by decreasing confidence value; GT = 1 means the prediction is a TP and GT = 0 means it is an FP. So TP=5 (BB1, BB2, BB5, BB8 and BB9) and FP=5. For the case of rank=3 the precision drops because BB1 was already detected, so even if the object is indeed present it counts as an FP. rank=1 precision=1.00 and recall=0.14
----------
rank=2 precision=1.00 and recall=0.29
----------
rank=3 precision=0.66 and recall=0.29
----------
rank=4 precision=0.50 and recall=0.29
----------
rank=5 precision=0.40 and recall=0.29
----------
rank=6 precision=0.50 and recall=0.43
----------
rank=7 precision=0.43 and recall=0.43
----------
rank=8 precision=0.38 and recall=0.43
----------
rank=9 precision=0.44 and recall=0.57
----------
rank=10 precision=0.50 and recall=0.71
---------- Given the previous results:
If we used the way before voc2010, the interpolated Precision values are 1, 1, 1, 0.5, 0.5, 0.5, 0.5, 0.5, 0, 0, 0. Then AP = 5.5 / 11 = 0.5 for the class of "Aeroplanes".
Else if we used the way since voc2010, the interpolated Precision values are 1, 1, 1, 0.5, 0.5, 0.5, 0 for seven unique recalls that are 0, 0.14, 0.29, 0.43, 0.57, 0.71, 1.Then AP = (0.14-0)*1 + (0.29-0.14)*1 + (0.43-0.29)*0.5 + (0.57-0.43)*0.5 + (0.71-0.57)*0.5 + (1-0.71)*0 = 0.5 for the class of "Aeroplanes". Repeat for each class and then you have the (mAP). More information can be found in the following links 1 , 2 . Also you should check the paper: The PASCAL Visual Object Classes Challenge: A Retrospective for a more detailed explanation. | {
"source": [
"https://datascience.stackexchange.com/questions/25119",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/37736/"
]
} |
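A short sketch of the pre-2010 11-point interpolated AP computation walked through in the answer above, using the precision/recall pairs from the worked "Aeroplane" example; it should reproduce the AP = 5.5 / 11 = 0.5 result.
# Precision/recall pairs for ranks 1..10, copied from the worked example
precisions = [1.00, 1.00, 0.66, 0.50, 0.40, 0.50, 0.43, 0.38, 0.44, 0.50]
recalls    = [0.14, 0.29, 0.29, 0.29, 0.29, 0.43, 0.43, 0.43, 0.57, 0.71]

def voc07_ap(precisions, recalls):
    # 11-point interpolation: average the maximum precision at recall >= r
    ap = 0.0
    for r in [i / 10 for i in range(11)]:            # r = 0.0, 0.1, ..., 1.0
        p_at_r = [p for p, rec in zip(precisions, recalls) if rec >= r]
        ap += max(p_at_r) if p_at_r else 0.0         # 0 when no recall reaches r
    return ap / 11

print(voc07_ap(precisions, recalls))   # 0.5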
26,103 | I am trying to merge two Keras models into a single model and I am unable to accomplish this. For example in the attached Figure, I would like to fetch the middle layer $A2$ of dimension 8, and use this as input to the layer $B1$ (of dimension 8 again) in Model $B$ and then combine both Model $A$ and Model $B$ as a single model. I am using the functional module to create Model $A$ and Model $B$ independently. How can I accomplish this task? Note : $A1$ is the input layer to model $A$ and $B1$ is the input layer to model $B$. | I figured out the answer to my question and here is the code that builds on the above answer. from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import plot_model
A1 = Input(shape=(30,),name='A1')
A2 = Dense(8, activation='relu',name='A2')(A1)
A3 = Dense(30, activation='relu',name='A3')(A2)
B2 = Dense(40, activation='relu',name='B2')(A2)
B3 = Dense(30, activation='relu',name='B3')(B2)
merged = Model(inputs=[A1],outputs=[A3,B3])
plot_model(merged,to_file='demo.png',show_shapes=True) and here is the output structure that I wanted: | {
"source": [
"https://datascience.stackexchange.com/questions/26103",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/43921/"
]
} |
26,124 | I have a SampleData (2653 observation, 11 features) of bank transaction done in the 1-month timeframe. Download Dataset Size=250KB I want to come up with algorithms (Single or Combine) that can segment users into different categories. Since there are mostly categorical features involved except tx_amount , Which algorithm is best suited here, or how should I approach this problem to create user segments? | I figured out the answer to my question and here is the code that builds on the above answer. from keras.layers import Input, Dense
from keras.models import Model
from keras.utils import plot_model
A1 = Input(shape=(30,),name='A1')
A2 = Dense(8, activation='relu',name='A2')(A1)
A3 = Dense(30, activation='relu',name='A3')(A2)
B2 = Dense(40, activation='relu',name='B2')(A2)
B3 = Dense(30, activation='relu',name='B3')(B2)
merged = Model(inputs=[A1],outputs=[A3,B3])
plot_model(merged,to_file='demo.png',show_shapes=True) and here is the output structure that I wanted: | {
"source": [
"https://datascience.stackexchange.com/questions/26124",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/30686/"
]
} |
26,333 | I am trying to convert a list of lists which looks like the following into a Pandas Dataframe [['New York Yankees ', '"Acevedo Juan" ', 900000, ' Pitcher\n'],
['New York Yankees ', '"Anderson Jason"', 300000, ' Pitcher\n'],
['New York Yankees ', '"Clemens Roger" ', 10100000, ' Pitcher\n'],
['New York Yankees ', '"Contreras Jose"', 5500000, ' Pitcher\n']] I am basically trying to convert each item in the array into a pandas data frame which has four columns. What would be the best approach to this as pd.Dataframe does not quite give me what I am looking for. | import pandas as pd
data = [['New York Yankees', 'Acevedo Juan', 900000, 'Pitcher'],
['New York Yankees', 'Anderson Jason', 300000, 'Pitcher'],
['New York Yankees', 'Clemens Roger', 10100000, 'Pitcher'],
['New York Yankees', 'Contreras Jose', 5500000, 'Pitcher']]
df = pd.DataFrame.from_records(data) | {
"source": [
"https://datascience.stackexchange.com/questions/26333",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/44204/"
]
} |
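As a small follow-up to the answer above, from_records also accepts a columns argument, so the four positions can be named directly; the column names used here are just my guesses at what the fields mean.
import pandas as pd

data = [['New York Yankees', 'Acevedo Juan', 900000, 'Pitcher'],
        ['New York Yankees', 'Anderson Jason', 300000, 'Pitcher']]

# 'columns' assigns a name to each position of the inner lists
df = pd.DataFrame.from_records(data, columns=['team', 'player', 'salary', 'position'])
print(df.head())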
26,366 | I am trying to get started learning about RNNs and I'm using Keras. I understand the basic premise of vanilla RNN and LSTM layers, but I'm having trouble understanding a certain technical point for training. In the keras documentation , it says the input to an RNN layer must have shape (batch_size, timesteps, input_dim) . This suggests that all the training examples have a fixed sequence length, namely timesteps . But this is not especially typical, is it? I might want to have the RNN operate on sentences of varying lengths. When I train it on some corpus, I will feed it batches of sentences, all of different lengths. I suppose the obvious thing to do would be to find the max length of any sequence in the training set and zero pad it. But then does that mean I can't make predictions at test time with input length greater than that? This is a question about Keras's particular implementation, I suppose, but I'm also asking for what people typically do when faced with this kind of a problem in general. | This suggests that all the training examples have a fixed sequence length, namely timesteps . That is not quite correct, since that dimension can be None , i.e. variable length. Within a single batch , you must have the same number of timesteps (this is typically where you see 0-padding and masking). But between batches there is no such restriction. During inference, you can have any length. Example code that creates random time-length batches of training data. from keras.models import Sequential
from keras.layers import LSTM, Dense, TimeDistributed
from keras.utils import to_categorical
import numpy as np
model = Sequential()
model.add(LSTM(32, return_sequences=True, input_shape=(None, 5)))
model.add(LSTM(8, return_sequences=True))
model.add(TimeDistributed(Dense(2, activation='sigmoid')))
print(model.summary(90))
model.compile(loss='categorical_crossentropy',
              optimizer='adam')
def train_generator():
    while True:
        sequence_length = np.random.randint(10, 100)
        x_train = np.random.random((1000, sequence_length, 5))
        # y_train will depend on past 5 timesteps of x
        y_train = x_train[:, :, 0]
        for i in range(1, 5):
            y_train[:, i:] += x_train[:, :-i, i]
        y_train = to_categorical(y_train > 2.5)
        yield x_train, y_train
model.fit_generator(train_generator(), steps_per_epoch=30, epochs=10, verbose=1) And this is what it prints. Note the output shapes are (None, None, x) indicating variable batch size and variable timestep size. __________________________________________________________________________________________
Layer (type) Output Shape Param #
==========================================================================================
lstm_1 (LSTM) (None, None, 32) 4864
__________________________________________________________________________________________
lstm_2 (LSTM) (None, None, 8) 1312
__________________________________________________________________________________________
time_distributed_1 (TimeDistributed) (None, None, 2) 18
==========================================================================================
Total params: 6,194
Trainable params: 6,194
Non-trainable params: 0
__________________________________________________________________________________________
Epoch 1/10
30/30 [==============================] - 6s 201ms/step - loss: 0.6913
Epoch 2/10
30/30 [==============================] - 4s 137ms/step - loss: 0.6738
...
Epoch 9/10
30/30 [==============================] - 4s 136ms/step - loss: 0.1643
Epoch 10/10
30/30 [==============================] - 4s 142ms/step - loss: 0.1441 | {
"source": [
"https://datascience.stackexchange.com/questions/26366",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/44256/"
]
} |
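To make the "any length at inference" point of the answer above concrete, here is a small sketch that assumes the model variable from the answer's code is still in scope; each predict call may use a different number of timesteps, because the time dimension of the input shape is None.
import numpy as np

short_seq = np.random.random((1, 7, 5))     # batch of 1, 7 timesteps, 5 features
long_seq = np.random.random((1, 250, 5))    # batch of 1, 250 timesteps, 5 features

print(model.predict(short_seq).shape)   # (1, 7, 2)
print(model.predict(long_seq).shape)    # (1, 250, 2)
Only within a single batch do the sequences need a common length, which is where zero-padding and masking come in.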
26,475 | Activation functions are used to introduce non-linearities in the linear output of the type w * x + b in a neural network. Which I am able to understand intuitively for the activation functions like sigmoid. I understand the advantages of ReLU, which is avoiding dead neurons during backpropagation. However, I am not able to understand why is ReLU used as an activation function if its output is linear? Doesn't the whole point of being the activation function get defeated if it won't introduce non-linearity? | In mathematics (linear algebra) a function is considered linear whenever a function $f: A \rightarrow B$ if for every $x$ and $y$ in the domain $A$ has the following property: $f(x) + f(y) = f(x+y)$ . By definition the ReLU is $max(0,x)$ . Therefore, if we split the domain from $(-\infty, 0]$ or $[0, \infty)$ then the function is linear. However, it's easy to see that $f(-1) + f(1) \neq f(0)$ . Hence by definition ReLU is not linear. Nevertheless, ReLU is so close to linear that this often confuses people and wonder how can it be used as a universal approximator. In my experience, the best way to think about them is like Riemann sums. You can approximate any continuous functions with lots of little rectangles. ReLU activations can produced lots of little rectangles. In fact, in practice, ReLU can make rather complicated shapes and approximate many complicated domains. I also feel like clarifying another point. As pointed out by a previous answer, neurons do not die in Sigmoid, but rather vanish. The reason for this is because at maximum the derivative of the sigmoid function is .25. Hence, after so many layers you end up multiplying these gradients and the product of very small numbers less than 1 tend to go to zero very quickly. Hence if you're building a deep learning network with a lot of layers, your sigmoid functions will essentially stagnant rather quickly and become more or less useless. The key take away is the vanishing comes from multiplying the gradients not the gradients themselves. | {
"source": [
"https://datascience.stackexchange.com/questions/26475",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/16807/"
]
} |
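A two-line numeric check of the linearity argument in the answer above, showing f(-1) + f(1) != f(-1 + 1) for ReLU:
def relu(x):
    return max(0.0, x)

# Linearity would require relu(-1) + relu(1) == relu(-1 + 1)
print(relu(-1) + relu(1))   # 1.0
print(relu(-1 + 1))         # 0.0, so the two sides differ and ReLU is not linear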
26,792 | According to this scintillating blogpost Adam is very similar to RMSProp with momentum. From tensorflow documentation we see that tf.train.RMSPropOptimizer has following parameters __init__(
learning_rate,
decay=0.9,
momentum=0.0,
epsilon=1e-10,
use_locking=False,
centered=False,
name='RMSProp'
) while tf.train.AdamOptimizer : __init__(
learning_rate=0.001,
beta1=0.9,
beta2=0.999,
epsilon=1e-08,
use_locking=False,
name='Adam'
) What is the conceptual difference if we put beta1 = decay and beta2 = momentum ? | (My answer is based mostly on Adam: A Method for Stochastic Optimization (the original Adam paper) and on the implementation of rmsprop with momentum in Tensorflow (which is operator() of struct ApplyRMSProp ), as rmsprop is unpublished - it was described in a lecture by Geoffrey Hinton .) Some Background Adam and rmsprop with momentum are both methods (used by a gradient descent algorithm) to determine the step. Let $\Delta x^{(t)}_j$ be the $j^{\text{th}}$ component of the $t^{\text{th}}$ step. Then: In Adam: $$\Delta x_{j}^{(t)}=-\frac{\text{learning_rate}}{\sqrt{\text{BCMA}\left(g_{j}^{2}\right)}}\cdot\text{BCMA}\left(g_{j}\right)$$ while: $\text{learning_rate}$ is a hyperparameter. $\text{BCMA}$ is short for "bias-corrected (exponential) moving average " (I made up the acronym for brevity). All of the moving averages I am going to talk about are exponential moving averages, so I would just refer to them as "moving averages". $g_j$ is the $j^{\text{th}}$ component of the gradient, and so $\text{BCMA}\left(g_{j}\right)$ is a bias-corrected moving average of the $j^{\text{th}}$ components of the gradients that were calculated. Similarly, $\text{BCMA}\left(g_{j}^{2}\right)$ is a bias-corrected moving average of the squares of the $j^{\text{th}}$ components of the gradients that were calculated. For each moving average, the decay factor (aka smoothing factor) is a hyperparameter. Both the Adam paper and TensorFlow use the following notation: $\beta_1$ is the decay factor for $\text{BCMA}\left(g_{j}\right)$ $\beta_2$ is the decay factor for $\text{BCMA}\left(g^2_{j}\right)$ The denominator is actually $\sqrt{\text{BCMA}\left(g_{j}^{2}\right)}+\epsilon$ , while $\epsilon$ is a small hyperparameter, but I would ignore it for simplicity. In rmsprop with momentum: $$\Delta x_{j}^{\left(t\right)}=\text{momentum_decay_factor}\cdot\Delta x_{j}^{\left(t-1\right)}-\frac{\text{learning_rate}}{\sqrt{\text{MA}\left(g_{j}^{2}\right)}}\cdot g_{j}^{\left(t\right)}$$ while: $\text{momentum_decay_factor}$ is a hyperparameter, and I would assume it is in $(0,1)$ (as it usually is). In TensorFlow , this is the momentum argument of RMSPropOptimizer . $g^{(t)}_j$ is the $j^{\text{th}}$ component of the gradient in the $t^{\text{th}}$ step. $\text{MA}\left(g_{j}^{2}\right)$ is a moving average of the squares of the $j^{\text{th}}$ components of the gradients that were calculated. The decay factor of this moving average is a hyperparameter, and in TensorFlow , this is the decay argument of RMSPropOptimizer . High-Level Comparison Now we are finally ready to talk about the differences between the two. The denominator is quite similar (except for the bias-correction, which I explain about later). However, the momentum-like behavior that both share (Adam due to $\text{BCMA}\left(g_{j}\right)$ , and rmsprop with momentum due to explicitly taking a fraction of the previous step) is somewhat different. E.g. this is how Sebastian Ruder describes this difference in his blog post An overview of gradient descent optimization algorithms : Whereas momentum can be seen as a ball running down a slope, Adam behaves like a heavy ball with friction, which thus prefers flat minima in the error surface [...] This description is based on the paper GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium , so check it out if you want to dive deeper. 
Next, I would describe 2 simple scenarios to demonstrate the difference in the momentum-like behaviors of the methods. Lastly, I would describe the difference with regard to bias-correction. Accumulating Momentum Consider the following scenario: The gradient was constant in every step in the recent past, and $\Delta x_{j}^{(t-1)}=0$ . Also, to keep it simple, $g_{j}>0$ . I.e. we can imagine our algorithm as a stationary ball on a linear slope. What would happen when we use each of the methods? In Adam The gradient was constant in the recent past, so $\text{BCMA}\left(g_{j}^{2}\right)\approx g_{j}^{2}$ and $\text{BCMA}\left(g_{j}\right)\approx g_j$ . Thus we get: $$\begin{gathered}\\
\Delta x_{j}^{(t)}=-\frac{\text{learning_rate}}{\sqrt{g_{j}^{2}}}\cdot g_{j}=-\frac{\text{learning_rate}}{|g_{j}|}\cdot g_{j}\\
\downarrow\\
\Delta x_{j}^{\left(t\right)}=-\text{learning_rate}
\end{gathered}
$$ I.e. the "ball" immediately starts moving downhill in a constant speed. In rmsprop with momentum Similarly, we get: $$\Delta x_{j}^{\left(t\right)}=\text{momentum_decay_factor}\cdot\Delta x_{j}^{\left(t-1\right)}-\text{learning_rate}$$ This case is a little more complicated, but we can see that: $$\begin{gathered}\\
\Delta x_{j}^{\left(t\right)}=-\text{learning_rate}\\
\Delta x_{j}^{\left(t+1\right)}=-\text{learning_rate}\cdot(1+\text{momentum_decay_factor})
\end{gathered}
$$ So the "ball" starts accelerating downhill. Given that the gradient stays constant, you can prove that if: $$-\frac{\text{learning_rate}}{1-\text{momentum_decay_factor}}<\Delta x_{j}^{\left(k\right)}$$ then: $$-\frac{\text{learning_rate}}{1-\text{momentum_decay_factor}}<\Delta x_{j}^{\left(k+1\right)}<\Delta x_{j}^{\left(k\right)}$$ Therefore, we conclude that the step converges, i.e. $\Delta x_{j}^{\left(k\right)}\approx \Delta x_{j}^{\left(k-1\right)}$ for some $k>t$ , and then: $$\begin{gathered}\\
\Delta x_{j}^{\left(k\right)}\approx \text{momentum_decay_factor}\cdot\Delta x_{j}^{\left(k\right)}-\text{learning_rate}\\
\downarrow\\
\Delta x_{j}^{\left(k\right)}\approx -\frac{\text{learning_rate}}{1-\text{momentum_decay_factor}}
\end{gathered}
$$ Thus, the "ball" accelerates downhill and approaches a speed $\frac{1}{1-\text{momentum_decay_factor}}$ times as large as the constant speed of Adam's "ball". (E.g. for a typical $\text{momentum_decay_factor}=0.9$ , it can approach $10\times$ speed!) Changing Direction Now, consider a scenario following the previous one: After going down the slope (in the previous scenario) for quite some time (i.e. enough time for rmsprop with momentum to reach a nearly constant step size), suddenly a slope with an opposite and smaller constant gradient is reached. What would happen when we use each of the methods? This time I would just describe the results of my simulation of the scenario (my Python code is at the end of the answer). Note that I have chosen for Adam's $\text{BCMA}\left(g_{j}\right)$ a decay factor equal to $\text{momentum_decay_factor}$ . Choosing differently would have changed the following results: Adam is slower to change its direction, and then much slower to get back to the minimum. However, rmsprop with momentum reaches much further before it changes direction (when both use the same $\text{learning_rate}$ ). Note that this further reach is because rmsprop with momentum first reaches the opposite slope with much higher speed than Adam. If both reached the opposite slope with the same speed (which would happen if Adam's $\text{learning_rate}$ were $\frac{1}{1-\text{momentum_decay_factor}}$ times as large as that of rmsprop with momentum), then Adam would reach further before changing direction. Bias-Correction What do we mean by a biased/bias-corrected moving average? (Or at least, what does the Adam paper mean by that?) Generally speaking, a moving average is a weighted average of: The moving average of all of the previous terms The current term Then what is the moving average in the first step? A natural choice for a programmer would be to initialize the "moving average of all of the previous terms" to $0$ . We say that in this case the moving average is biased towards $0$ . When you only have one term, by definition the average should be equal to that term. Thus, we say that the moving average is bias-corrected in case the moving average in the first step is the first term (and the moving average works as usual for the rest of the steps). So here is another difference: The moving averages in Adam are bias-corrected, while the moving average in rmsprop with momentum is biased towards $0$ . For more about the bias-correction in Adam, see section 3 in the paper and also this answer . Simulation Python Code import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
###########################################
# script parameters
def f(x):
if x > 0:
return x
else:
return -0.1 * x
def f_grad(x):
if x > 0:
return 1
else:
return -0.1
METHOD_TO_LEARNING_RATE = {
'Adam': 0.01,
'GD': 0.00008,
'rmsprop_with_Nesterov_momentum': 0.008,
'rmsprop_with_momentum': 0.001,
'rmsprop': 0.02,
'momentum': 0.00008,
'Nesterov': 0.008,
'Adadelta': None,
}
X0 = 2
METHOD = 'rmsprop'
METHOD = 'momentum'
METHOD = 'GD'
METHOD = 'rmsprop_with_Nesterov_momentum'
METHOD = 'Nesterov'
METHOD = 'Adadelta'
METHOD = 'rmsprop_with_momentum'
METHOD = 'Adam'
LEARNING_RATE = METHOD_TO_LEARNING_RATE[METHOD]
MOMENTUM_DECAY_FACTOR = 0.9
RMSPROP_SQUARED_GRADS_AVG_DECAY_FACTOR = 0.9
ADADELTA_DECAY_FACTOR = 0.9
RMSPROP_EPSILON = 1e-10
ADADELTA_EPSILON = 1e-6
ADAM_EPSILON = 1e-10
ADAM_SQUARED_GRADS_AVG_DECAY_FACTOR = 0.999
ADAM_GRADS_AVG_DECAY_FACTOR = 0.9
INTERVAL = 9e2
INTERVAL = 1
INTERVAL = 3e2
INTERVAL = 3e1
###########################################
def plot_func(axe, f):
xs = np.arange(-X0 * 0.5, X0 * 1.05, abs(X0) / 100)
vf = np.vectorize(f)
ys = vf(xs)
return axe.plot(xs, ys, color='grey')
def next_color(color, f):
color[1] -= 0.01
if color[1] < 0:
color[1] = 1
return color[:]
def update(frame):
global k, x, prev_step, squared_grads_decaying_avg, \
squared_prev_steps_decaying_avg, grads_decaying_avg
if METHOD in ('momentum', 'Nesterov', 'rmsprop_with_momentum',
'rmsprop_with_Nesterov_momentum'):
step_momentum_portion = MOMENTUM_DECAY_FACTOR * prev_step
if METHOD in ('Nesterov', 'rmsprop_with_Nesterov_momentum'):
gradient = f_grad(x + step_momentum_portion)
else:
gradient = f_grad(x)
if METHOD == 'GD':
step = -LEARNING_RATE * gradient
elif METHOD in ('momentum', 'Nesterov'):
step = step_momentum_portion - LEARNING_RATE * gradient
elif METHOD in ('rmsprop', 'rmsprop_with_momentum',
'rmsprop_with_Nesterov_momentum'):
squared_grads_decaying_avg = (
RMSPROP_SQUARED_GRADS_AVG_DECAY_FACTOR * squared_grads_decaying_avg +
(1 - RMSPROP_SQUARED_GRADS_AVG_DECAY_FACTOR) * gradient ** 2)
grads_rms = np.sqrt(squared_grads_decaying_avg + RMSPROP_EPSILON)
if METHOD == 'rmsprop':
step = -LEARNING_RATE / grads_rms * gradient
else:
assert(METHOD in ('rmsprop_with_momentum',
'rmsprop_with_Nesterov_momentum'))
print(f'LEARNING_RATE / grads_rms * gradient: {LEARNING_RATE / grads_rms * gradient}')
step = step_momentum_portion - LEARNING_RATE / grads_rms * gradient
elif METHOD == 'Adadelta':
gradient = f_grad(x)
squared_grads_decaying_avg = (
ADADELTA_DECAY_FACTOR * squared_grads_decaying_avg +
(1 - ADADELTA_DECAY_FACTOR) * gradient ** 2)
grads_rms = np.sqrt(squared_grads_decaying_avg + ADADELTA_EPSILON)
squared_prev_steps_decaying_avg = (
ADADELTA_DECAY_FACTOR * squared_prev_steps_decaying_avg +
(1 - ADADELTA_DECAY_FACTOR) * prev_step ** 2)
prev_steps_rms = np.sqrt(squared_prev_steps_decaying_avg + ADADELTA_EPSILON)
step = - prev_steps_rms / grads_rms * gradient
elif METHOD == 'Adam':
squared_grads_decaying_avg = (
ADAM_SQUARED_GRADS_AVG_DECAY_FACTOR * squared_grads_decaying_avg +
(1 - ADAM_SQUARED_GRADS_AVG_DECAY_FACTOR) * gradient ** 2)
unbiased_squared_grads_decaying_avg = (
squared_grads_decaying_avg /
(1 - ADAM_SQUARED_GRADS_AVG_DECAY_FACTOR ** (k + 1)))
grads_decaying_avg = (
ADAM_GRADS_AVG_DECAY_FACTOR * grads_decaying_avg +
(1 - ADAM_GRADS_AVG_DECAY_FACTOR) * gradient)
unbiased_grads_decaying_avg = (
grads_decaying_avg /
(1 - ADAM_GRADS_AVG_DECAY_FACTOR ** (k + 1)))
step = - (LEARNING_RATE /
(np.sqrt(unbiased_squared_grads_decaying_avg) + ADAM_EPSILON) *
unbiased_grads_decaying_avg)
x += step
prev_step = step
k += 1
color = next_color(cur_color, f)
print(f'k: {k}\n'
f'x: {x}\n'
f'step: {step}\n'
f'gradient: {gradient}\n')
k_x_marker, = k_and_x.plot(k, x, '.', color=color)
x_y_marker, = x_and_y.plot(x, f(x), '.', color=color)
return k_x_marker, x_y_marker
k = 0
x = X0
cur_color = [0, 1, 1]
prev_step = 0
squared_grads_decaying_avg = 0
squared_prev_steps_decaying_avg = 0
grads_decaying_avg = 0
fig, (k_and_x, x_and_y) = plt.subplots(1, 2, figsize=(9,5))
k_and_x.set_xlabel('k')
k_and_x.set_ylabel('x', rotation=0)
x_and_y.set_xlabel('x')
x_and_y.set_ylabel('y', rotation=0)
plot_func(x_and_y, f)
x_and_y.plot(x, f(x), '.', color=cur_color[:])
k_and_x.plot(k, x, '.', color=cur_color[:])
plt.tight_layout()
ani = FuncAnimation(fig, update, blit=False, repeat=False, interval=INTERVAL)
plt.show() | {
"source": [
"https://datascience.stackexchange.com/questions/26792",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/42903/"
]
} |
26,938 | Apparently, in reinforcement learning, temporal-difference (TD) method is a bootstrapping method. On the other hand, Monte Carlo methods are not bootstrapping methods. What exactly is bootstrapping in RL? What is a bootstrapping method in RL? | Bootstrapping in RL can be read as "using one or more estimated values in the update step for the same kind of estimated value". In most TD update rules, you will see something like this SARSA(0) update: $$Q(s,a) \leftarrow Q(s,a) + \alpha(R_{t+1} + \gamma Q(s',a') - Q(s,a))$$ The value $R_{t+1} + \gamma Q(s',a')$ is an estimate for the true value of $Q(s,a)$ , and also called the TD target. It is a bootstrap method because we are in part using a Q value to update another Q value. There is a small amount of real observed data in the form of $R_{t+1}$ , the immediate reward for the step, and also in the state transition $s \rightarrow s'$ . Contrast with Monte Carlo where the equivalent update rule might be: $$Q(s,a) \leftarrow Q(s,a) + \alpha(G_{t} - Q(s,a))$$ Where $G_{t}$ was the total discounted reward at time $t$ , assuming in this update, that it started in state $s$ , taking action $a$ , then followed the current policy until the end of the episode. Technically, $G_t = \sum_{k=0}^{T-t-1} \gamma^k R_{t+k+1}$ where $T$ is the time step for the terminal reward and state. Notably, this target value does not use any existing estimates (from other Q values) at all, it only uses a set of observations (i.e., rewards) from the environment. As such, it is guaranteed to be unbiased estimate of the true value of $Q(s,a)$ , as it is technically a sample of $Q(s,a)$ . The main disadvantage of bootstrapping is that it is biased towards whatever your starting values of $Q(s',a')$ (or $V(s')$ ) are. Those are are most likely wrong, and the update system can be unstable as a whole because of too much self-reference and not enough real data - this is a problem with off-policy learning (e.g. Q-learning) using neural networks. Without bootstrapping, using longer trajectories, there is often high variance instead, which, in practice, means you need more samples before the estimates converge. So, despite the problems with bootstrapping, if it can be made to work, it may learn significantly faster, and is often preferred over Monte Carlo approaches. You can compromise between Monte Carlo sample based methods and single-step TD methods that bootstrap by using a mix of results from different length trajectories. This is called TD( $\lambda$ ) learning , and there are a variety of specific methods such as SARSA( $\lambda$ ) or Q( $\lambda$ ). | {
"source": [
"https://datascience.stackexchange.com/questions/26938",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/-1/"
]
} |
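A minimal sketch of the two update rules contrasted in the answer above, written for a tabular Q function; everything outside the two update lines (the dictionary, the step size, the function signatures) is scaffolding I am assuming for illustration.
from collections import defaultdict

Q = defaultdict(float)      # tabular action values, keyed by (state, action)
alpha, gamma = 0.1, 0.99

def sarsa0_update(s, a, r, s_next, a_next):
    # TD(0)/SARSA: bootstraps on the current estimate Q[(s_next, a_next)]
    td_target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (td_target - Q[(s, a)])

def monte_carlo_update(s, a, g_return):
    # Monte Carlo: uses the full observed return G_t, no bootstrapping
    Q[(s, a)] += alpha * (g_return - Q[(s, a)])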
27,388 | If I have a 50 dimensional hypercube. And I define it's boundary by $0<x_j<0.05$ or $0.95<x_j<1$ where $x_j$ is dimension of the hypercube. Then calculating the proportion of points on the boundary of the hypercube will be $0.995$. What does it mean? Does it mean that rest of the space is empty? If $99\%$ of the points are at the boundary then the points inside the cube must not be uniformly distributed? | Speaking of ' $99\%$ of the points in a hypercube ' is a bit misleading since a hypercube contains infinitely many points. Let's talk about volume instead. The volume of a hypercube is the product of its side lengths.
For the 50-dimensional unit hypercube we get $$\text{Total volume} = \underbrace{1 \times 1 \times \dots \times 1}_{50 \text{ times}} = 1^{50} = 1.$$ Now let us exclude the boundaries of the hypercube and look at the ' interior ' (I put this in quotation marks because the mathematical term interior has a very different meaning). We only keep the points $x = (x_1, x_2, \dots, x_{50})$ that satisfy $$
0.05 < x_1 < 0.95 \,\text{ and }\, 0.05 < x_2 < 0.95 \,\text{ and }\, \dots
\,\text{ and }\, 0.05 < x_{50} < 0.95.
$$ What is the volume of this ' interior '? Well, the ' interior ' is again a hypercube, and the length of each side is $0.9$ ( $=0.95 - 0.05$ ... it helps to imagine this in two and three dimensions).
So the volume is $$\text{Interior volume} = \underbrace{0.9 \times 0.9 \times \dots \times 0.9}_{50 \text{ times}} = 0.9^{50} \approx 0.005.$$ Conclude that the volume of the ' boundary ' (defined as the unit hypercube without the ' interior ') is $1 - 0.9^{50} \approx 0.995.$ This shows that $99.5\%$ of the volume of a 50-dimensional hypercube is concentrated on its ' boundary '. Follow-up: ignatius raised an interesting question on how this is connected to probability. Here is an example. Say you came up with a (machine learning) model that predicts housing prices based on 50 input parameters. All 50 input parameters are independent and uniformly distributed between $0$ and $1$ . Let us say that your model works very well if none of the input parameters is extreme: As long as every input parameter stays between $0.05$ and $0.95$ , your model predicts the housing price almost perfectly.
But if one or more input parameters are extreme (smaller than $0.05$ or larger than $0.95$ ), the predictions of your model are absolutely terrible. Any given input parameter is extreme with a probability of only $10\%$ . So clearly this is a good model, right?
No! The probability that at least one of the $50$ parameters is extreme is $1 - 0.9^{50} \approx 0.995.$ So in $99.5\%$ of the cases, your model's prediction is terrible. Rule of thumb: In high dimensions, extreme observations are the rule and not the exception. | {
"source": [
"https://datascience.stackexchange.com/questions/27388",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41149/"
]
} |
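The 99.5% figure derived above is easy to confirm numerically; a small Monte Carlo sketch that samples uniform points in the 50-dimensional unit hypercube and counts how many have at least one coordinate outside (0.05, 0.95):
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform(size=(100_000, 50))          # 100k points in [0, 1]^50

# A point is on the 'boundary' if any coordinate is below 0.05 or above 0.95
on_boundary = ((points < 0.05) | (points > 0.95)).any(axis=1)

print(on_boundary.mean())      # close to 0.995
print(1 - 0.9 ** 50)           # the exact value, about 0.9948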
27,586 | I have just completed the machine learning for R course on cognitiveclass.ai and have begun experimenting with randomforests. I have made a model by using the "randomForest" library in R. The model classifies by two classes, good, and bad. I know that when a model is overfit, it performs well on data from its own trainingset but badly on out-of-sample data. To train and test my model I have shuffled and split the complete dataset into 70% for training and 30% for testing. My question: I am getting a 100% accuracy out of the prediction done on the testing set. Is this bad? It seems too good to be true. The objective is waveform recognition on four on each other depending waveforms. The features of the dataset are the cost results of Dynamic Time Warping analysis of waveforms with their target waveform. | High validation scores like accuracy generally mean that you are not overfitting, however it should lead to caution and may indicate something went wrong. It could also mean that the problem is not too difficult and that your model truly performs well. Two things that could go wrong: You didn't split the data properly and the validation data also occured in your training data, meaning it does indicate overfitting because you are not measuring generalization anymore You use some feature engineering to create additional features and you might have introduced some target leakage, where your rows are using information from it's current target, not just from others in your training set | {
"source": [
"https://datascience.stackexchange.com/questions/27586",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/45937/"
]
} |
27,615 | I am doing a project on an author identification problem. I applied the tf-idf normalization to train data and then trained an SVM on that data. Now when using the classifier, should I normalize test data as well. I feel that the basic aim of normalization is to make the learning algorithm give more weight to more important features while learning. So once it has been trained, it already knows which features are important and which are not. So is there any need to apply normalization to test data as well? I am new to this field. So please ignore if the question appears silly? | Yes you need to apply normalisation to test data, if your algorithm works with or needs normalised training data*. That is because your model works on the representation given by its input vectors. The scale of those numbers is part of the representation. This is a bit like converting between feet and metres . . . a model or formula would work with just one type of unit normally. Not only do you need normalisation, but you should apply the exact same scaling as for your training data. That means storing the scale and offset used with your training data, and using that again. A common beginner mistake is to separately normalise your train and test data. In Python and SKLearn, you might normalise your input/X values using the Standard Scaler like this: scaler = StandardScaler()
train_X = scaler.fit_transform( train_X )
test_X = scaler.transform( test_X ) Note how the conversion of train_X using a function which fits (figures out the params) then normalises. Whilst the test_X conversion just transforms, using the same params that it learned from the train data. The tf-idf normalisation you are applying should work similarly, as it learns some parameters from the data set as a whole (frequency of words in all documents), as well as using ratios found in each document. * Some algorithms (such as those based on decision trees) do not need normalised inputs, and can cope with features that have different inherent scales. | {
"source": [
"https://datascience.stackexchange.com/questions/27615",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/37063/"
]
} |
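Since the question above is specifically about tf-idf, here is the same fit-on-train / transform-on-test pattern with scikit-learn's TfidfVectorizer; the toy documents are made up for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = ["the cat sat on the mat", "dogs chase cats"]
test_docs = ["the dog sat"]

vectorizer = TfidfVectorizer()
train_X = vectorizer.fit_transform(train_docs)   # learns vocabulary and idf weights from training text only
test_X = vectorizer.transform(test_docs)         # reuses that vocabulary and those idf weights

print(train_X.shape, test_X.shape)   # both have the same number of columns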
28,006 | What are the differences, if any, between a "data scientist" and a "machine learning engineer"? Over the past year or so "machine learning engineer" has started to show up a lot in job postings. This is particularly noticeable in San Francisco, which is arguably where the term "data scientist" originated. At one point "data scientist" overtook "statistician", and I'm wondering if the same is now slowly beginning to happen to "data scientist". Career advice is listed as off-topic on this site, but I view my question as highly relevant since I'm asking about definitions; I'm not asking about recommendations given my own career trajectory or personal circumstances like other off-topic questions have. This question is on-topic because it might someday have significant implications for many users of this site. In fact, this stack-exchange site might not exist if the "statistician" vs "data scientist" evolution had not occurred. In that sense, this is a rather pertinent, potentially existential question. | Good question. Actually there is a lot of confusion on this subject, mainly because both are quite new jobs. But if we focus on the semantics, the real meaning of the jobs become clear. Beforehand is better to compare apples with apples, talking about a single subject, the Data. Machine Learning and its sub-genre (Deep Learning, etc.) are just one aspect of the Data World, together with the statistic theories, the data acquisition (DAQ), the processing (which can be non-machine learning driven), the interpretation of the results, etc. So, for my explanation, I will broad the Machine Learning Engineer role to the one of Data Engineer. Science is about experiment, trials and fails, theory building, phenomenological understanding.
Engineering is about working on what science already knows, perfecting it and carrying it to the "real world". Think about a proxy: what is the difference between a nuclear scientist and a nuclear engineer? The nuclear scientist is the one who knows the science behind atoms and the interactions between them, the one who wrote the recipe that lets us get energy from the atoms. The nuclear engineer is the one charged with taking the scientist's recipe and carrying it to the real world. So his knowledge of atomic physics may be more limited, but he also knows about materials, buildings, economics, and whatever else is useful to build a proper nuclear plant. Coming back to the Data world, here is another example: the person who developed Convolutional Neural Networks (Yann LeCun) is a Data Scientist, while the person who deploys the model to recognize faces in pictures is a Machine Learning Engineer. The person responsible for the whole process, from the data acquisition to the registration of the .JPG image, is a Data Engineer. So, basically, 90% of the Data Scientists today are actually Data Engineers or Machine Learning Engineers, and 90% of the positions advertised as Data Scientist actually need Engineers. An easy check: in the interview, you will be asked how many ML models you have deployed in production, not how many papers on new methods you have published. Instead, when you see job postings for "Machine Learning Engineer", it means the recruiters are well aware of the difference, and they really need someone able to put a model into production. | {
"source": [
"https://datascience.stackexchange.com/questions/28006",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/12515/"
]
} |
28,158 | I am confused about how I choose the number of folds (in k-fold CV) when I apply cross validation to check the model. Is it dependent on data size or other parameters? | The number of folds is usually determined by the number of instances contained in your dataset. For example, if you have 10 instances in your data, 10-fold cross-validation wouldn't make sense. $k$-fold cross validation is used for two main purposes: to tune hyperparameters and to better evaluate the performance of a model. In both of these cases selecting $k$ depends on the same thing. You must ensure that the training set and testing set are drawn from the same distribution, and that both sets contain sufficient variation such that the underlying distribution is represented. In a 10-fold cross validation with only 10 instances, there would only be 1 instance in the testing set. This instance does not properly represent the variation of the underlying distribution. That being said, selecting $k$ is not an exact science because it's hard to estimate how well your fold represents your overall dataset. I usually use 5-fold cross validation. This means that 20% of the data is used for testing; this is usually pretty accurate. However, if your dataset size increases dramatically, like if you have over 100,000 instances, a 10-fold cross validation would lead to test folds of 10,000 instances. This should be sufficient to reliably test your model. In short, yes, the number of folds depends on the data size. I usually stick with 4- or 5-fold. Make sure to shuffle your data, such that your folds do not contain inherent bias. | {
"source": [
"https://datascience.stackexchange.com/questions/28158",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/46433/"
]
} |
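To make the k-fold answer above concrete, a minimal 5-fold sketch with scikit-learn; the estimator and the X, y arrays are placeholders:
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)  # shuffle to avoid ordering bias
scores = cross_val_score(model, X, y, cv=cv)
print(scores.mean(), scores.std())  # average performance and its spread across the 5 folds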
28,210 | Can someone please post a straightforward example of Keras using a callback to save a model after every epoch? I can find examples of saving weights, but I want to be able to save a completely functioning model after every training epoch. | Setting 'save_weights_only' to False in the Keras callback 'ModelCheckpoint' will save the full model; this example taken from the link above will save a full model every epoch, regardless of performance: keras.callbacks.ModelCheckpoint(filepath, monitor='val_loss', verbose=0, save_best_only=False, save_weights_only=False, mode='auto', period=1) Some more examples are found here , including saving only improved models and loading the saved models. | {
"source": [
"https://datascience.stackexchange.com/questions/28210",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/23240/"
]
} |
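A minimal end-to-end sketch of the callback answer above; the filepath pattern, data names and epoch count are illustrative:
from keras.callbacks import ModelCheckpoint
checkpoint = ModelCheckpoint('model_epoch_{epoch:02d}.h5',
                             save_best_only=False,      # keep every epoch, not just improvements
                             save_weights_only=False,   # save architecture + weights + optimizer state
                             period=1)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=10,
          callbacks=[checkpoint])
Each saved .h5 file can later be restored with keras.models.load_model('model_epoch_05.h5').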
29,006 | Feature extraction and feature selection essentially reduce the dimensionality of the data, but feature extraction also makes the data more separable, if I am right. Which technique would be preferred over the other and when? I was thinking, since feature selection does not modify the original data and its properties, I assume that you will use feature selection when it's important that the features you're training on be unchanged. But I can't imagine why you would want something like this... | Adding to the answer given by Toros, these three are quite similar but have subtle differences (concise and easy to remember): feature extraction and feature engineering : transformation of raw data into features suitable for modeling; feature transformation : transformation of data to improve the accuracy of the algorithm; feature selection : removing unnecessary features. Just to add examples of each: Feature extraction and engineering (we can extract something from them): texts (n-grams, word2vec, tf-idf, etc.), images (CNNs, texts, Q&A), geospatial data (lat, long, etc.), date and time (day, month, week, year, rolling-based), time series, web, etc., dimensionality-reduction techniques (PCA, SVD, eigenfaces, etc.), maybe clustering as well (DBSCAN, etc.), and many others. Feature transformations (transforming them to make sense): normalization and changing the distribution (scaling), interactions, filling in the missing values (median filling, etc.), and many others. Feature selection (building your model on these selected features): statistical approaches, selection by modeling, grid search, cross validation, and many others. Hope this helps... Do look at the links shared by others.
They are quite nice... | {
"source": [
"https://datascience.stackexchange.com/questions/29006",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/32288/"
]
} |
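A small sketch contrasting the two ideas with scikit-learn; the choice of transformers and of k is purely illustrative:
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
# feature extraction: build new features as combinations of the originals
X_extracted = PCA(n_components=10).fit_transform(X)
# feature selection: keep a subset of the original, unchanged columns
selector = SelectKBest(score_func=f_classif, k=10)
X_selected = selector.fit_transform(X, y)
kept_columns = selector.get_support(indices=True)  # indices of the surviving original features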
29,719 | I am starting to learn CNNs using Keras. I am using the theano backend. I don't understand how to set values to: batch_size steps_per_epoch validation_steps What should be the value set to batch_size , steps_per_epoch , and validation_steps , if I have 240,000 samples in the training set and 80,000 in the test set? | batch_size determines the number of samples in each mini batch. Its maximum is the number of all samples, which makes gradient descent accurate; the loss will decrease towards the minimum if the learning rate is small enough, but iterations are slower. Its minimum is 1, resulting in stochastic gradient descent: fast, but the direction of the gradient step is based on only one example, so the loss may jump around. batch_size allows you to adjust between the two extremes: accurate gradient direction and fast iteration. Also, the maximum value for batch_size may be limited if your model + data set does not fit into the available (GPU) memory. steps_per_epoch is the number of batch iterations before a training epoch is considered finished. If you have a training set of fixed size you can ignore it, but it may be useful if you have a huge data set or if you are generating random data augmentations on the fly, i.e. if your training set has a (generated) infinite size. If you have the time to go through your whole training data set I recommend skipping this parameter. validation_steps is similar to steps_per_epoch , but on the validation data set instead of the training data. If you have the time to go through your whole validation data set I recommend skipping this parameter. | {
"source": [
"https://datascience.stackexchange.com/questions/29719",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/49697/"
]
} |
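For the numbers in the question above (240,000 training and 80,000 held-out samples), a hedged sketch of how the three values are typically wired together in Keras; the generators and the batch size are placeholders:
import math
batch_size = 64
steps_per_epoch = math.ceil(240000 / batch_size)    # 3750 batches per training epoch
validation_steps = math.ceil(80000 / batch_size)    # 1250 batches per validation pass
model.fit_generator(train_generator,
                    steps_per_epoch=steps_per_epoch,
                    validation_data=val_generator,
                    validation_steps=validation_steps,
                    epochs=10)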
29,851 | A colleague of mine is having an interesting situation, he has quite a large set of possibilities for a defined categorical feature (+/- 300 different values) The usual data science approach would be to perform a One-Hot Encoding.
However, wouldn't it be a bit extreme to perform One-Hot Encoding with quite a large dictionary (+/- 300 values)? Is there any best practice on when to choose Embedding vectors or One-Hot Encoding? Additional information: how would you handle the previous case if new values can be added to the dictionary? Re-training seems the only solution; however, with One-Hot Encoding the data dimension will simultaneously increase, which may lead to additional trouble, whereas embedding vectors can keep the same dimension even if new values appear. How would you handle such a case? Embedding vectors clearly seem more appropriate to me, however I would like to validate my opinion and check if there is another solution that could be more appropriate. | One-Hot Encoding is a general method that can vectorize any categorical features. It is simple and fast to create and update the vectorization; just add a new entry in the vector with a one for each new category. However, that speed and simplicity also leads to the "curse of dimensionality" by creating a new dimension for each category. Embedding is a method that requires a lot of data, both in the total amount and in repeated occurrences of individual exemplars, as well as long training time. The result is a dense vector with a fixed, arbitrary number of dimensions. They also differ at the prediction stage: a One-Hot Encoding tells you nothing of the semantics of the items; each vectorization is an orthogonal representation in another dimension. Embeddings will group commonly co-occurring items together in the representation space. If you have enough training data, enough training time, and the ability to apply the more complex training algorithm (e.g., word2vec or GloVe), go with Embeddings. Otherwise, fall back to One-Hot Encoding. | {
"source": [
"https://datascience.stackexchange.com/questions/29851",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/39198/"
]
} |
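A brief sketch of both options for the answer above; the column name and the embedding size are invented for illustration:
import pandas as pd
from keras.layers import Embedding
# one-hot: ~300 sparse, mutually orthogonal columns
one_hot = pd.get_dummies(df['category'])
# embedding: each of the ~300 categories is mapped to a dense 16-dimensional vector,
# learned jointly with the rest of the model
embedding_layer = Embedding(input_dim=300, output_dim=16, input_length=1)
A new category still forces retraining of the embedding table, but the output dimensionality stays fixed at 16, whereas the one-hot matrix grows by one column per new value.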
30,344 | It seems the Adaptive Moment Estimation (Adam) optimizer nearly always works better (faster and more reliably reaching a global minimum) when minimising the cost function in training neural nets. Why not always use Adam? Why even bother using RMSProp or momentum optimizers? | Here's a blog post reviewing an article claiming that models trained with SGD generalize better than those trained with Adam. There is often value in using more than one method (an ensemble), because every method has a weakness. | {
"source": [
"https://datascience.stackexchange.com/questions/30344",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41929/"
]
} |
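If you want to compare the optimizers above in practice, swapping them in Keras is a one-line change; the hyper-parameters are just common starting points, not recommendations:
from keras.optimizers import Adam, SGD
model.compile(optimizer=Adam(lr=0.001), loss='categorical_crossentropy', metrics=['accuracy'])
# ...versus plain momentum SGD, which may converge more slowly but sometimes generalizes better
model.compile(optimizer=SGD(lr=0.01, momentum=0.9, nesterov=True), loss='categorical_crossentropy', metrics=['accuracy'])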
30,676 | I try to understand role of derivative of sigmoid function in neural networks. First I plot sigmoid function, and derivative of all points from definition using python. What is the role of this derivative exactly? import numpy as np
import matplotlib.pyplot as plt
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
def derivative(x, step):
    return (sigmoid(x+step) - sigmoid(x)) / step
x = np.linspace(-10, 10, 1000)
y1 = sigmoid(x)
y2 = derivative(x, 0.0000000000001)
plt.plot(x, y1, label='sigmoid')
plt.plot(x, y2, label='derivative')
plt.legend(loc='upper left')
plt.show() | The use of derivatives in neural networks is for the training process called backpropagation . This technique uses gradient descent in order to find an optimal set of model parameters in order to minimize a loss function. In your example you must use the derivative of a sigmoid because that is the activation that your individual neurons are using. The loss function The essence of machine learning is to optimize a cost function such that we can either minimize or maximize some target function. This is typically called the loss or cost funtion. We typically want to minimize this function. The cost function, $C$ , associates some penalty based on the resulting errors when passing data through your model as a function of the model parameters. Let's look at the example where we try to label whether an image contains a cat or a dog. If we have a perfect model, we can give the model a picture and it will tell us if it is a cat or a dog. However, no model is perfect and it will make mistakes. When we train our model to be able to infer meaning from input data we want to minimize the amount of mistakes it makes. So we use a training set, this data contains a lot of pictures of dogs and cats and we have the ground truth label associated with that image. Each time we run a training iteration of the model we calculate the cost (the amount of mistakes) of the model. We will want to minimize this cost. Many cost functions exist each serving their own purpose. A common cost function that is used is the quadratic cost which is defined as $C = \frac{1}{N} \sum_{i=0}^{N}(\hat{y} - y)^2$ . This is the square of the difference between the predicted label and the ground truth label for the $N$ images that we trained over. We will want to minimize this in some way. Minimizing a loss function Indeed most of machine learning is simply a family of frameworks which are capable of determining a distribution by minimizing some cost function. The question we can ask is "how can we minimize a function"? Let's minimize the following function $y = x^2-4x+6$ . If we plot this we can see that there is a minimum at $x = 2$ . To do this analytically we can take the derivative of this function as $\frac{dy}{dx} = 2x - 4 = 0$ $x = 2$ . However, often times finding a global minimum analytically is not feasible. So instead we use some optimization techniques. Here as well many different ways exist such as : Newton-Raphson, grid search, etc. Among these is gradient descent . This is the technique used by neural networks. Gradient Descent Let's use a famously used analogy to understand this. Imagine a 2D minimization problem. This is equivalent of being on a mountainous hike in the wilderness. You want to get back down to the village which you know is at the lowest point. Even if you do not know the cardinal directions of the village. All you need to do is continuously take the steepest way down, and you will eventually get to the village. So we will descend down the surface based on the steepness of the slope. Let's take our function $y = x^2-4x+6$ we will determine the $x$ for which $y$ is minimized. Gradient descent algorithm first says we will pick a random value for $x$ . Let us initialize at $x=8$ . Then the algorithm will do the following iteratively until we reach convergence. $x^{new} = x^{old} - \nu \frac{dy}{dx}$ where $\nu$ is the learning rate, we can set this to whatever value we will like. However there is a smart way to choose this. 
Too big and we will never reach our minimum value, and too small we will waste soooo much time before we get there. It is analogous to the size of the steps you want to take down the steep slope. Small steps and you will die on the mountain, you'll never get down. Too large of a step and you risk over shooting the village and ending up the other side of the mountain. The derivative is the means by which we travel down this slope towards our minimum. $\frac{dy}{dx} = 2x - 4$ $\nu = 0.1$ Iteration 1: $x^{new} = 8 - 0.1(2 * 8 - 4) = 6.8 $ $x^{new} = 6.8 - 0.1(2 * 6.8 - 4) = 5.84 $ $x^{new} = 5.84 - 0.1(2 * 5.84 - 4) = 5.07 $ $x^{new} = 5.07 - 0.1(2 * 5.07 - 4) = 4.45 $ $x^{new} = 4.45 - 0.1(2 * 4.45 - 4) = 3.96 $ $x^{new} = 3.96 - 0.1(2 * 3.96 - 4) = 3.57 $ $x^{new} = 3.57 - 0.1(2 * 3.57 - 4) = 3.25 $ $x^{new} = 3.25 - 0.1(2 * 3.25 - 4) = 3.00 $ $x^{new} = 3.00 - 0.1(2 * 3.00 - 4) = 2.80 $ $x^{new} = 2.80 - 0.1(2 * 2.80 - 4) = 2.64 $ $x^{new} = 2.64 - 0.1(2 * 2.64 - 4) = 2.51 $ $x^{new} = 2.51 - 0.1(2 * 2.51 - 4) = 2.41 $ $x^{new} = 2.41 - 0.1(2 * 2.41 - 4) = 2.32 $ $x^{new} = 2.32 - 0.1(2 * 2.32 - 4) = 2.26 $ $x^{new} = 2.26 - 0.1(2 * 2.26 - 4) = 2.21 $ $x^{new} = 2.21 - 0.1(2 * 2.21 - 4) = 2.16 $ $x^{new} = 2.16 - 0.1(2 * 2.16 - 4) = 2.13 $ $x^{new} = 2.13 - 0.1(2 * 2.13 - 4) = 2.10 $ $x^{new} = 2.10 - 0.1(2 * 2.10 - 4) = 2.08 $ $x^{new} = 2.08 - 0.1(2 * 2.08 - 4) = 2.06 $ $x^{new} = 2.06 - 0.1(2 * 2.06 - 4) = 2.05 $ $x^{new} = 2.05 - 0.1(2 * 2.05 - 4) = 2.04 $ $x^{new} = 2.04 - 0.1(2 * 2.04 - 4) = 2.03 $ $x^{new} = 2.03 - 0.1(2 * 2.03 - 4) = 2.02 $ $x^{new} = 2.02 - 0.1(2 * 2.02 - 4) = 2.02 $ $x^{new} = 2.02 - 0.1(2 * 2.02 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.01 $ $x^{new} = 2.01 - 0.1(2 * 2.01 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ $x^{new} = 2.00 - 0.1(2 * 2.00 - 4) = 2.00 $ And we see that the algorithm converges at $x = 2$ ! We have found the minimum. Applied to neural networks The first neural networks only had a single neuron which took in some inputs $x$ and then provide an output $\hat{y}$ . A common function used is the sigmoid function $\sigma(z) = \frac{1}{1+exp(z)}$ $\hat{y}(w^Tx) = \frac{1}{1+exp(w^Tx + b)}$ where $w$ is the associated weight for each input $x$ and we have a bias $b$ . We then want to minimize our cost function $C = \frac{1}{2N} \sum_{i=0}^{N}(\hat{y} - y)^2$ . How to train the neural network? We will use gradient descent to train the weights based on the output of the sigmoid function and we will use some cost function $C$ and train on batches of data of size $N$ . $C = \frac{1}{2N} \sum_i^N (\hat{y} - y)^2$ $\hat{y}$ is the predicted class obtained from the sigmoid function and $y$ is the ground truth label. We will use gradient descent to minimize the cost function with respect to the weights $w$ . To make life easier we will split the derivative as follows $\frac{\partial C}{\partial w} = \frac{\partial C}{\partial \hat{y}} \frac{\partial \hat{y}}{\partial w}$ . $\frac{\partial C}{\partial \hat{y}} = \hat{y} - y$ and we have that $\hat{y} = \sigma(w^Tx)$ and the derivative of the sigmoid function is $\frac{\partial \sigma(z)}{\partial z} = \sigma(z)(1-\sigma(z))$ thus we have, $\frac{\partial \hat{y}}{\partial w} = \frac{1}{1+exp(w^Tx + b)} (1 - \frac{1}{1+exp(w^Tx + b)})$ . 
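As a quick sanity check of the toy iterations above, a few lines of plain Python reproduce the descent on y = x^2 - 4x + 6 with the same starting point and learning rate (the loop length is arbitrary):
def dy_dx(x):
    return 2 * x - 4
x, lr = 8.0, 0.1
for _ in range(35):
    x = x - lr * dy_dx(x)   # step against the gradient
print(x)   # ~2.0, the minimum found analytically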
So we can then update the weights through gradient descent as $w^{new} = w^{old} - \eta \frac{\partial C}{\partial w}$ where $\eta$ is the learning rate. | {
"source": [
"https://datascience.stackexchange.com/questions/30676",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/49067/"
]
} |
30,686 | I've been combing through this code for a week now trying to figure out why my cost function is increasing as in the following image. Reducing the learning rate does help but very little. Can anyone spot why the cost function isn't working as expected? I realise a CNN would be preferable, but I still want to understand why this simple network is failing.
Please help:) import numpy as np
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import matplotlib.pyplot as plt
mnist = input_data.read_data_sets("MNIST_DATA/",one_hot=True)
def createPlaceholders():
    xph = tf.placeholder(tf.float32, (784, None))
    yph = tf.placeholder(tf.float32, (10, None))
    return xph, yph
def init_param(layers_dim):
    weights = {}
    L = len(layers_dim)
    for l in range(1,L):
        weights['W' + str(l)] = tf.get_variable('W' + str(l), shape=(layers_dim[l],layers_dim[l-1]), initializer= tf.contrib.layers.xavier_initializer())
        weights['b' + str(l)] = tf.get_variable('b' + str(l), shape=(layers_dim[l],1), initializer= tf.zeros_initializer())
    return weights
def forward_prop(X,L,weights):
    parameters = {}
    parameters['A0'] = tf.cast(X,tf.float32)
    for l in range(1,L-1):
        parameters['Z' + str(l)] = tf.add(tf.matmul(weights['W' + str(l)], parameters['A' + str(l-1)]), weights['b' + str(l)])
        parameters['A' + str(l)] = tf.nn.relu(parameters['Z' + str(l)])
    parameters['Z' + str(L-1)] = tf.add(tf.matmul(weights['W' + str(L-1)], parameters['A' + str(L-2)]), weights['b' + str(L-1)])
    return parameters['Z' + str(L-1)]
def compute_cost(ZL,Y):
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = tf.cast(Y,tf.float32), logits = ZL))
    return cost
def randomMiniBatches(X,Y,minibatch_size):
    m = X.shape[1]
    shuffle = np.random.permutation(m)
    temp_X = X[:,shuffle]
    temp_Y = Y[:,shuffle]
    num_complete_minibatches = int(np.floor(m/minibatch_size))
    mini_batches = []
    for batch in range(num_complete_minibatches):
        mini_batches.append((temp_X[:,batch*minibatch_size: (batch+1)*minibatch_size], temp_Y[:,batch*minibatch_size: (batch+1)*minibatch_size]))
    mini_batches.append((temp_X[:,num_complete_minibatches*minibatch_size:], temp_Y[:,num_complete_minibatches*minibatch_size:]))
    return mini_batches
def model(X, Y, layers_dim, learning_rate = 0.001, num_epochs = 20, minibatch_size = 64):
    tf.reset_default_graph()
    costs = []
    xph, yph = createPlaceholders()
    weights = init_param(layers_dim)
    ZL = forward_prop(xph, len(layers_dim), weights)
    cost = compute_cost(ZL,yph)
    optimiser = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    init = tf.global_variables_initializer()
    with tf.Session() as sess:
        sess.run(init)
        for epoch in range(num_epochs):
            minibatches = randomMiniBatches(X,Y,minibatch_size)
            epoch_cost = 0
            for b, mini in enumerate(minibatches,1):
                mini_x, mini_y = mini
                _,c = sess.run([optimiser,cost],feed_dict={xph:mini_x,yph:mini_y})
                epoch_cost += c
            print('epoch: ',epoch+1,'/ ',num_epochs)
            epoch_cost /= len(minibatches)
            costs.append(epoch_cost)
    plt.plot(costs)
    print(costs)
X_train = mnist.train.images.T
n_x = X_train.shape[0]
Y_train = mnist.train.labels.T
n_y = Y_train.shape[0]
layers_dim = [n_x,10,n_y]
model(X_train, Y_train, layers_dim) | {
"source": [
"https://datascience.stackexchange.com/questions/30686",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50992/"
]
} |
30,881 | Can anyone give me some examples where precision is important and some examples where recall is important? | For rare-cancer data modeling, anything that doesn't account for false negatives is a crime, so recall is a better measure than precision. For YouTube recommendations, false negatives are less of a concern, and precision is the better measure there. | {
"source": [
"https://datascience.stackexchange.com/questions/30881",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50247/"
]
} |
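To tie the two cases above back to the definitions, a small sketch with invented counts:
from sklearn.metrics import precision_score, recall_score
# precision = TP / (TP + FP): of everything flagged positive, how much was right
# recall    = TP / (TP + FN): of everything truly positive, how much was found
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 0, 0, 1, 0, 0, 0]
print(precision_score(y_true, y_pred))  # 2 / (2 + 1) = 0.67
print(recall_score(y_true, y_pred))     # 2 / (2 + 2) = 0.50
For the rare-cancer case you would tune the model to push recall up even at the cost of precision; for recommendations the trade-off usually goes the other way.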
30,912 | What does "baseline" mean in the context of machine learning and data science? Someone wrote me: Hint: An appropriate baseline will give an RMSE of approximately 200. I don't get this. Does he mean that if my predictive model on the training data has a RMSE below 500, it's good? And what could be a "baseline approach"? | A baseline is the result of a very basic model/solution. You generally create a baseline and then try to make more complex solutions in order to get a better result.
If you achieve a better score than the baseline, it is good. | {
"source": [
"https://datascience.stackexchange.com/questions/30912",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41968/"
]
} |
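One concrete way to read the hint above: fit the most trivial model you can, measure its RMSE, and require anything you build later to beat that number. A sketch with scikit-learn; the data splits are placeholders:
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.metrics import mean_squared_error
baseline = DummyRegressor(strategy='mean')      # always predicts the training-set mean
baseline.fit(X_train, y_train)
baseline_rmse = np.sqrt(mean_squared_error(y_test, baseline.predict(X_test)))
print(baseline_rmse)   # if this is ~200, a useful model should score clearly below 200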
31,041 | "One common mistake that I would make is adding a non-linearity to my logits output." What does the term "logit" means here or what does it represent ? | Logits interpreted to be the unnormalised (or not-yet normalised) predictions (or outputs) of a model. These can give results, but we don't normally stop with logits, because interpreting their raw values is not easy. Have a look at their definition to help understand how logits are produced. Let me explain with an example: We want to train a model that learns how to classify cats and dogs, using photos that each contain either one cat or one dog. You build a model give it some of the data you have to approximate a mapping between images and predictions. You then give the model some of the unseen photos in order to test its predictive accuracy on new data. As we have a classification problem (we are trying to put each photo into one of two classes), the model will give us two scores for each input image. A score for how likely it believes the image contains a cat, and then a score for its belief that the image contains a dog. Perhaps for the first new image, you get logit values out of 16.917 for a cat and then 0.772 for a dog. Higher means better, or ('more likely'), so you'd say that a cat is the answer. The correct answer is a cat, so the model worked! For the second image, the model may say the logit values are 1.004 for a cat and 0.709 for a dog. So once again, our model says we the image contains a cat. The correct answer is once again a cat, so the model worked again! Now we want to compare the two result. One way to do this is to normalise the scores. That is, we normalise the logits ! Doing this we gain some insight into the confidence of our model. Let's using the softmax , where all results sum to 1 and so allow us to think of them as probabilities: $$\sigma (\mathbf {z} )_{j}={\frac {e^{z_{j}}}{\sum _{k=1}^{K}e^{z_{k}}}} \hspace{20mm} for \hspace{5mm} j = 1, …, K.$$ For the first test image, we get $$prob(cat) = \frac{exp(16.917)}{exp(16.917) + exp(0.772)} = 0.9999$$ $$prob(dog) = \frac{exp(0.772)}{exp(16.917) + exp(0.772)} = 0.0001$$ If we do the same for the second image, we get the results: $$prob(cat) = \frac{exp(1.004)}{exp(1.004) + exp(0.709)} = 0.5732$$ $$prob(dog) = \frac{exp(0.709)}{exp(1.004) + exp(0.709)} = 0.4268$$ The model was not really sure about the second image, as it was very close to 50-50 - a guess! The last part of the quote from your question likely refers to a neural network as the model. The layers of a neural network commonly take input data, multiply that by some parameters (weights) that we want to learn, then apply a non-linearity function, which provides the model with the power to learn non-linear relationships. Without this non-linearity, a neural network would simply be a list of linear operations, performed on some input data, which means it would only be able to learn linear relationships. This would be a massive constraint, meaning the model could always be reduced to a basic linear model.
That being said, it is not considered helpful to apply a non-linearity to the logit outputs of a model, as you are generally going to be cutting out some information, right before a final prediction is made. Have a look for related comments in this thread . | {
"source": [
"https://datascience.stackexchange.com/questions/31041",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50247/"
]
} |
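The normalisation step described above, written out in a few lines of NumPy that reproduce the example's numbers:
import numpy as np
def softmax(z):
    e = np.exp(z - np.max(z))   # subtract the max for numerical stability
    return e / e.sum()
print(softmax(np.array([16.917, 0.772])))  # ~[1.0, 0.0]: a confident "cat"
print(softmax(np.array([1.004, 0.709])))   # ~[0.573, 0.427]: close to a coin flip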
32,126 | Is there a comprehensive open source package (preferably in python or R) that can be used for anomaly detection in time series? There is a one class SVM package in scikit-learn but it is not for the time series data. I’m looking for more sophisticated packages that, for example, use Bayesian networks for anomaly detection. | I know I'm a bit late here, but yes, there is a package for anomaly detection along with outlier combination frameworks. The package is in Python and its name is pyod . It is published in JMLR. It has multiple algorithms for the following individual approaches: Linear Models for Outlier Detection ( PCA, MCD, and One-Class SVM ) Proximity-Based Outlier Detection Models ( LOF, CBLOF, HBOS, KNN, AverageKNN, and MedianKNN ) Probabilistic Models for Outlier Detection ( ABOD and FastABOD ) Outlier Ensembles and Combination Frameworks ( IsolationForest and FeatureBagging ) Neural Networks and Deep Learning Models ( Auto-encoder with fully connected Neural Network ) Finally, if you're looking specifically for time series per se, then this github link will be useful. It has the following list of packages for time-series outlier detection: datastream.io skyline banpei AnomalyDetection | {
"source": [
"https://datascience.stackexchange.com/questions/32126",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/51209/"
]
} |
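As a quick starting point, one of the approaches listed above (an isolation forest) is also available directly in scikit-learn; the sliding-window feature construction below is only one simple way to adapt it to a time series, and the window length is arbitrary:
import numpy as np
from sklearn.ensemble import IsolationForest
window = 24
X = np.array([series[i:i + window] for i in range(len(series) - window)])  # one row per window
clf = IsolationForest(contamination=0.01, random_state=0)
clf.fit(X)
labels = clf.predict(X)                      # -1 marks windows flagged as anomalous
anomalous_starts = np.where(labels == -1)[0]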
32,264 | I used to apply K-fold cross-validation for robust evaluation of my machine learning models. But I'm aware of the existence of the bootstrapping method for this purpose as well. However, I cannot see the main difference between them in terms of performance estimation. As far as I see, bootstrapping is also producing a certain number of random training+testing subsets (albeit in a different way) so what is the point, advantage for using this method over CV? The only thing I could figure out that in case of bootstrapping one could artificially produce virtually arbitrary number of such subsets while for CV the number of instances is a kind of limit for this. But this aspect seems to be a very little nuisance. | Both cross validation and bootstrapping are resampling methods. bootstrap resamples with replacement (and usually produces new "surrogate" data sets with the same number of cases as the original data set). Due to the drawing with replacement, a bootstrapped data set may contain multiple instances of the same original cases, and may completely omit other original cases. cross validation resamples without replacement and thus produces surrogate data sets that are smaller than the original. These data sets are produced in a systematic way so that after a pre-specified number $k$ of surrogate data sets, each of the $n$ original cases has been left out exactly once. This is called k-fold cross validation or leave- x -out cross validation with $x = \frac{n}{k}$ , e.g. leave-one-out cross validation omits 1 case for each surrogate set, i.e. $k = n$ . As the name cross validation suggests, its primary purpose is measuring (generalization) performance of a model. On contrast, bootstrapping is primarily used to establish empirical distribution functions for a widespread range of statistics (widespread as in ranging from, say, the variation of the mean to the variation of models in bagged ensemble models). The leave-one-out analogue of the bootstrap procedure is called jackknifing (and is actually older than bootstrapping). The bootstrap analogue to cross validation estimates of generalization error is called out-of-bootstrap estimate (because the test cases are those that were left out of the bootstrap resampled training set). [cross validation vs. out-of-bootstrap validation] However, I cannot see the main difference between them in terms of performance estimation. That intuition is correct: in practice there's often not much of a difference between iterated $k$ -fold cross validation and out-of-bootstrap. With a similar total number of evaluated surrogate models, total error [of the model prediction error measurement] has been found to be similar, although oob typically has more bias and less variance than the corresponding CV estimates. There are a number of attempts to reduce oob bias (.632-bootstrap, .632+-bootstrap) but whether they will actually improve the situation depends on the situation at hand. Literature: Kohavi, R.: A Study of Cross-Validation and Bootstrap for Accuracy Estimation and Model Selection, Mellish, C. S. (ed.) Artificial Intelligence Proceedings 14 $^th$ International Joint Conference, 20 -- 25. August 1995, Montréal, Québec, Canada, Morgan Kaufmann, USA, , 1137 - 1145 (1995). Kim, J.-H. Estimating classification error rate: Repeated cross-validation, repeated hold-out and bootstrap , Computational Statistics & Data Analysis , 53, 3735 - 3745 (2009). 
DOI: 10.1016/j.csda.2009.04.009 Beleites, C.; Baumgartner, R.; Bowman, C.; Somorjai, R.; Steiner, G.; Salzer, R. & Sowa, M. G. Variance reduction in estimating classification error using sparse datasets, Chemom Intell Lab Syst, 79, 91 - 100 (2005). The only thing I could figure out that in case of bootstrapping one could artificially produce virtually arbitrary number of such subsets while for CV the number of instances is a kind of limit for this. Yes, there are fewer combinations possible for CV than for bootstrapping. But the limit for CV is probably higher than you are aware of.
For a data set with $n$ cases and $k$ -fold cross validation, you have CV $\binom{n}{k}$ combinations without replacement (for k < n that are far more than the $k$ possibilities that are usually evaluated) vs. bootstrap/oob $\binom{2 n - 1}{n}$ combinations with replacement (which are again far more than the, say, 100 or 1000 surrogate models that are typically evaluated) | {
"source": [
"https://datascience.stackexchange.com/questions/32264",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/9011/"
]
} |
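The mechanical difference is easy to see in a few lines of NumPy on a toy data set of 10 indices:
import numpy as np
rng = np.random.RandomState(0)
n = 10
idx = np.arange(n)
# bootstrap: draw n cases WITH replacement; cases never drawn form the out-of-bootstrap test set
boot = rng.choice(idx, size=n, replace=True)
oob = np.setdiff1d(idx, boot)
# 5-fold CV: partition WITHOUT replacement; each case is left out exactly once across the folds
folds = np.array_split(rng.permutation(idx), 5)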
32,557 | I'm studying machine learning from Andrew Ng Stanford lectures and just came across the theory of VC dimensions. According to the lectures and what I understood, the definition of VC dimension can be given as, If you can find a set of $n$ points, so that it can be shattered by the classifier (i.e. classify all possible $2^n$ labeling correctly) and you cannot find any set of $n+1$ points that can be shattered (i.e. for any set of $n+1$ points there is at least one labeling order so that the classifier can not separate all points correctly), then the VC dimension is $n$. Also Professor took an example and explained this nicely. Which is: Let, $H=\{{set\ of\ linear\ classifiers\ in\ 2\ Dimensions \}}$ Then any 3 points can be classified by $H$ correctly with separating hyper plane as shown in the following figure. And that's why the VC dimension of $H$ is 3. Because for any 4 points in 2D plane, a linear classifier can not shatter all the combinations of the points. For example, For this set of points, there is no separating hyper plane can be drawn to classify this set. So the VC dimension is 3. I get the idea till here. But what if we've following type of pattern? Or the pattern where a three points coincides on each other, Here also we can not draw separating hyper plane between 3 points. But still this pattern is not considered in the definition of the VC dimension. Why? The same point is also discussed the lectures I'm watching Here at 16:24 but professor does not mention the exact reason behind this. Any intuitive example of explanation will be appreciated. Thanks | The definition of VC dimension is: if there exists a set of n points that can be shattered by the classifier and there is no set of n+1 points that can be shattered by the classifier, then the VC dimension of the classifier is n. The definition does not say: if any set of n points can be shattered by the classifier... If a classifier's VC dimension is 3, it does not have to shatter all possible arrangements of 3 points. If of all arrangements of 3 points you can find at least one such arrangement that can be shattered by the classifier, and cannot find 4 points that can be shattered, then VC dimension is 3. | {
"source": [
"https://datascience.stackexchange.com/questions/32557",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/53083/"
]
} |
32,651 | I am new to pytorch and started with this github code. I do not understand the comment in line 60-61 in the code "because weights have requires_grad=True, but we don't need to track this in autograd" . I understood that we mention requires_grad=True to the variables which we need to calculate the gradients for using autograd but what does it mean to be "tracked by autograd" ? | The wrapper with torch.no_grad() temporarily sets all of the requires_grad flags to false. An example is from the official PyTorch tutorial . x = torch.randn(3, requires_grad=True)
print(x.requires_grad)
print((x ** 2).requires_grad)
with torch.no_grad():
    print((x ** 2).requires_grad) Output: True
True
False I recommend reading all the tutorials from the link above. In your example: I guess the author does not want PyTorch to calculate the gradients of the newly defined variables w1 and w2, since he just wants to update their values. | {
"source": [
"https://datascience.stackexchange.com/questions/32651",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/41058/"
]
} |
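The weight-update use case mentioned at the end looks roughly like the following hand-written SGD step; learning_rate, w1 and w2 follow the naming of the linked example:
with torch.no_grad():
    # the updates themselves must not be recorded in the computation graph
    w1 -= learning_rate * w1.grad
    w2 -= learning_rate * w2.grad
    w1.grad.zero_()
    w2.grad.zero_()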
32,818 | I have a model that does binary classification. My dataset is highly unbalanced, so I thought that I should balance it by undersampling before I train the model. So balance the dataset and then split it randomly. Is this the right way ? or should I balance also the test and train dataset ? I tried balancing only the whole dataset and I get train accuracy of 80% but then on the test set I have 30% accuracy. This doesn't seem right ? But also I don't think that I should balance the test set because it could be considered as bias. What is the right way to do this? Thanks UPDATE : I have 400 000 samples, 10% are 1s and 90% 0s. I cannot get more data. I tried to keep the whole dataset but I don't know how to split it into train and test set. Do I need the same distribution in the train and test dataset ? | The best way is to collect more data, if you can. Sampling should always be done on the training dataset. If you are using python, scikit-learn has some really cool packages to help you with this. Random sampling is a very bad option for splitting. Try stratified sampling . This splits your classes proportionally between the training and test sets. Run oversampling, undersampling or hybrid techniques on the training set. Again, if you are using scikit-learn and logistic regression, there's a parameter called class-weight . Set this to balanced . Selection of the evaluation metric also plays a very important role in model selection. Accuracy never helps with an imbalanced dataset. Try area under the ROC curve, or precision and recall, depending on your need. Do you want to give more weight to the false positive rate or the false negative rate? | {
"source": [
"https://datascience.stackexchange.com/questions/32818",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/49017/"
]
} |
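The stratified split and the class-weight trick from the answer above, sketched with scikit-learn; X and y are the full feature matrix and labels:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
# stratify keeps the 90/10 class ratio identical in both splits
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=42)
clf = LogisticRegression(class_weight='balanced')   # up-weights errors on the minority class
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))  # look at precision/recall, not accuracy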
33,053 | I would like to compare one column of a df with other df's. The columns are names and last names. I'd like to check if a person in one data frame is in another one. | If you want to check equal values on a certain column, let's say Name , you can merge both DataFrames to a new one: mergedStuff = pd.merge(df1, df2, on=['Name'], how='inner')
mergedStuff.head() I think this is more efficient and faster than where if you have a big data set. | {
"source": [
"https://datascience.stackexchange.com/questions/33053",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/53575/"
]
} |
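Sketched on two made-up frames; isin is a lighter-weight alternative to the merge when you only need a per-row flag:
import pandas as pd
df1 = pd.DataFrame({'Name': ['Ana Silva', 'Bob Lee', 'Carla Ruiz']})
df2 = pd.DataFrame({'Name': ['Bob Lee', 'Dan Wu']})
common = pd.merge(df1, df2, on=['Name'], how='inner')    # rows present in both frames
df1['in_df2'] = df1['Name'].isin(df2['Name'])            # True/False per person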
33,059 | I have an exploratory script running a Databricks notebook that performs a simple arithmetic function (Pythagorean theorem) on all possible pairwise combinations of a list of pairs of floats (akin to coordinates). The values are generated randomly, like so: vals = np.random.rand(num_samples, 2) The list is then converted to 2 RDDs of Rows, like so: rdd = sc.parallelize(vals)
rows_1 = rdd.map(lambda v: Row(x=float(v[0]), y=float(v[1]), join_val=1))
rows_2 = rdd.map(lambda v: Row(x_r=float(v[0]), y_r=float(v[1]), join_val=1)) Which are then registered as tables: sqlContext.createDataFrame(rows_1).registerTempTable('sdf_1')
sqlContext.createDataFrame(rows_2).registerTempTable('sdf_2')
sdf_1 = sqlContext.table('sdf_1')
sdf_2 = sqlContext.table('sdf_2') Each table contains the same content, just different columns names. The two are then joined: sdf_1.join(sdf_2, sdf_1.join_val==sdf_2.join_val).registerTempTable('sdf_join')
sdf_join = sqlContext.table('sdf_join') With the tables joined, the following UDF is defined: def calc_dist(x1, y1, x2, y2):
    return math.sqrt((x1-x2)**2 + (y1-y2)**2)
calc_dist_udf = udf(calc_dist, FloatType()) Finally, the operation is performed on all rows: sdf_join\
.select(calc_dist_udf('x', 'y', 'x_r', 'y_r').alias('dist'))\
.filter('dist<0.05')\
.count() This operation completes successfully, but I have noticed that, as num_samples increases, the execution time increases exponentially. I believe I am failing to correctly parallelize the row-wise operation. Is this assumption correct? How can I achieve parallelization on such an operation? | {
"source": [
"https://datascience.stackexchange.com/questions/33059",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/33166/"
]
} |
34,357 | I've been using SQL since 1996, so I may be biased. I've used MySQL and SQLite 3 extensively, but have also used Microsoft SQL Server and Oracle. The vast majority of the operations I've seen done with Pandas can be done more easily with SQL. This includes filtering a dataset, selecting specific columns for display, applying a function to a values, and so on. SQL has the advantage of having an optimizer and data persistence. SQL also has error messages that are clear and understandable. Pandas has a somewhat cryptic API, in which sometimes it's appropriate to use a single [ stuff ] , other times you need [[ stuff ]] , and sometimes you need a .loc . Part of the complexity of Pandas arises from the fact that there is so much overloading going on. So I'm trying to understand why Pandas is so popular. | The real first question is why are people more productive with DataFrame abstractions than pure SQL abstractions. TLDR; SQL is not geared around the (human) development and debugging process, DataFrames are. The main reason is that DataFrame abstractions allow you to construct SQL statements whilst avoiding verbose and illegible nesting. The pattern of writing nested routines, commenting them out to check them, and then uncommenting them is replaced by single lines of transformation. You can naturally run things line by line in a repl (even in Spark) and view the results. Consider the example, of adding a new transformed (string mangled column) to a table, then grouping by it and doing some aggregations. The SQL gets pretty ugly. Pandas can solve this but is missing some things when it comes to truly big data or in particular partitions (perhaps improved recently). DataFrames should be viewed as a high-level API to SQL routines, even if with pandas they are not at all rendered to some SQL planner. You can probably have many technical discussions around this, but I'm considering the user perspective below. One simple reason why you may see a lot more questions around Pandas data manipulation as opposed to SQL is that to use SQL, by definition, means using a database, and a lot of use-cases these days quite simply require bits of data for 'one-and-done' tasks (from .csv, web api, etc.). In these cases loading, storing, manipulating and extracting from a database is not viable. However, considering cases where the use-case may justify using either Pandas or SQL, you're certainly not wrong. If you want to do many, repetitive data manipulation tasks and persist the outputs, I'd always recommend trying to go via SQL first. From what I've seen the reason why many users, even in these cases, don't go via SQL is two-fold. Firstly, the major advantage pandas has over SQL is that it's part of the wider Python universe, which means in one fell swoop I can load, clean, manipulate, and visualize my data (I can even execute SQL through Pandas...). The other is, quite simply, that all too many users don't know the extent of SQL's capabilities. Every beginner learns the 'extraction syntax' of SQL (SELECT, FROM, WHERE, etc.) as a means to get your data from a DB to the next place. Some may pick up some of the more advance grouping and iteration syntax. But after that there tends to be a pretty significant gulf in knowledge, until you get to the experts (DBA, Data Engineers, etc.). tl;dr: It's often down to the use-case, convenience, or a gap in knowledge around the extent of SQL's capabilities. | {
"source": [
"https://datascience.stackexchange.com/questions/34357",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/13672/"
]
} |
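The 'add a derived column, group by it, aggregate' example from the answer above reads like this on the two sides; the table and column names are invented:
# pandas: one chained expression that is easy to run and inspect step by step
summary = (df.assign(year=df['order_date'].dt.year)
             .groupby('year')['amount']
             .agg(['sum', 'mean']))
# roughly equivalent SQL: the derived column has to be repeated (or pushed into a subquery)
# SELECT EXTRACT(YEAR FROM order_date) AS year, SUM(amount), AVG(amount)
# FROM orders
# GROUP BY EXTRACT(YEAR FROM order_date);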
34,392 | Can anyone suggest a blog where Variational Autoencoder has been used for time series forecasting? | {
"source": [
"https://datascience.stackexchange.com/questions/34392",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/55025/"
]
} |
34,401 | In a neural network there are 4 gates: input, output, forget and a gate whose output performs element wise multiplication with the output of the input gate, which is added to the cell state (I don't know the name of this gate, but it's the one in the below picture with the output C_tilde ). Why is the addition of the C_tilde gate required in the model? In order to allow the input gate to subtract from the cell state, we could change the activation function that results in i_t from sigmoid to tanh and remove the C_tilde gate. My reasoning is that the input gate already has a weight matrix W_i that can is being multiplied to the input gate's input, hence it already does filtering. However, when C_tilde is multiplied with i_t that seems to be another unnecessary filter. My proposed input gate would then be i_t = tanh(W_i * [h_t-1, x_t] + b_i) and i_t would directly be added to C_t ( C_t = f_t * C_t + i_t rather than C_t = f_t * C_t + i_t * C_tilde_t ). | {
"source": [
"https://datascience.stackexchange.com/questions/34401",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/14289/"
]
} |
34,409 | How I can transform my target variable( Y )? As it is list, I cann`t use it for fitting model, because I must use integers for fitting. | {
"source": [
"https://datascience.stackexchange.com/questions/34409",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/55027/"
]
} |
34,422 | I have a labeled training dataset where each observation has a sentence either in English or in French as its predictors and its label (target value) is whether this sentence is English or French. The test set again includes some sentences either in English or in French, but without labels. A friend of mine suggested that we should model this problem by using Bayes' theorem since we have some prior values (labeled observations in the training set). I agree that this can work too but I cannot really understand his argument "we should model this problem by using Bayes' theorem since we have some prior values". This is because in my mind every labeled observation can be considered as a prior value and every prior value can be considered as a labeled observation, so you can also apply any machine learning classification algorithm (e.g. decision trees) in these cases. Is this right in general or at least for this specific problem? Why does Bayes' theorem modeling come up as the best solution for the problem which I described above? | The real first question is why people are more productive with DataFrame abstractions than pure SQL abstractions. TLDR; SQL is not geared around the (human) development and debugging process, DataFrames are. The main reason is that DataFrame abstractions allow you to construct SQL statements whilst avoiding verbose and illegible nesting. The pattern of writing nested routines, commenting them out to check them, and then uncommenting them is replaced by single lines of transformation. You can naturally run things line by line in a repl (even in Spark) and view the results. Consider the example of adding a new transformed (string-mangled) column to a table, then grouping by it and doing some aggregations. The SQL gets pretty ugly. Pandas can solve this but is missing some things when it comes to truly big data or in particular partitions (perhaps improved recently). DataFrames should be viewed as a high-level API to SQL routines, even if with pandas they are not at all rendered to some SQL planner. You can probably have many technical discussions around this, but I'm considering the user perspective below. One simple reason why you may see a lot more questions around Pandas data manipulation as opposed to SQL is that to use SQL, by definition, means using a database, and a lot of use-cases these days quite simply require bits of data for 'one-and-done' tasks (from .csv, web api, etc.). In these cases loading, storing, manipulating and extracting from a database is not viable. However, considering cases where the use-case may justify using either Pandas or SQL, you're certainly not wrong. If you want to do many, repetitive data manipulation tasks and persist the outputs, I'd always recommend trying to go via SQL first. From what I've seen the reason why many users, even in these cases, don't go via SQL is two-fold. Firstly, the major advantage pandas has over SQL is that it's part of the wider Python universe, which means in one fell swoop I can load, clean, manipulate, and visualize my data (I can even execute SQL through Pandas...). The other is, quite simply, that all too many users don't know the extent of SQL's capabilities. Every beginner learns the 'extraction syntax' of SQL (SELECT, FROM, WHERE, etc.) as a means to get your data from a DB to the next place. Some may pick up some of the more advanced grouping and iteration syntax.
But after that there tends to be a pretty significant gulf in knowledge, until you get to the experts (DBA, Data Engineers, etc.). tl;dr: It's often down to the use-case, convenience, or a gap in knowledge around the extent of SQL's capabilities. | {
"source": [
"https://datascience.stackexchange.com/questions/34422",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/45610/"
]
} |
34,444 | What is the difference between fit() and fit_generator() in Keras? When should I use fit() vs fit_generator() ? | In Keras, fit() is very similar to sklearn's fit method, where you pass an array of features as x values and the target as y values. You pass your whole dataset at once in the fit method. Also, use it if you can load the whole dataset into your memory (small dataset). In fit_generator(), you don't pass the x and y directly; instead, they come from a generator. As it is written in the Keras documentation, a generator is used when you want to avoid duplicate data when using multiprocessing. This is for practical purposes, when you have a large dataset. Here is a link to understand more about this: A thing you should know about Keras if you plan to train a deep learning model on a large dataset. For reference you can check this book: https://github.com/hktxt/bookshelf/blob/master/Computer%20Science/Deep%20Learning%20with%20Python%2C%20Fran%C3%A7ois%20Chollet.pdf | {
"source": [
"https://datascience.stackexchange.com/questions/34444",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/51129/"
]
} |
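A minimal sketch of the fit() versus fit_generator() contrast described above, using hypothetical random data and a toy generator (fit_generator() is the older Keras API; newer versions accept generators directly in fit()):
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

model = Sequential([Dense(1, input_dim=4, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')

# fit(): the whole dataset is held in memory as arrays.
x = np.random.rand(100, 4)
y = np.random.randint(0, 2, size=(100, 1))
model.fit(x, y, epochs=2, batch_size=10)

# fit_generator(): batches are produced on the fly, so the full dataset
# never has to sit in memory at once.
def batch_generator(batch_size=10):
    while True:
        xb = np.random.rand(batch_size, 4)
        yb = np.random.randint(0, 2, size=(batch_size, 1))
        yield xb, yb

model.fit_generator(batch_generator(), steps_per_epoch=10, epochs=2)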
34,463 | I ran this code and it doesn't work; I'm using Python 3, by the way, and I have checked the syntax a million times. I have installed all the necessary packages and all of them are up to date. Here is the code I ran: from sklearn import tree
features = [[140, 1], [130, 1], [150, 0], [170, 0]]
labels = [0, 0, 1, 1]
clf = tree.DecisionTreeClassifier()
clf = clf.fit(features, labels)
print(clf.predict([[150, 0]])) Here is the console error message (I don't know what it's exactly called, please tell me if you know): pydev debugger: starting
Traceback (most recent call last):
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\ptvsd_launcher.py", line 111, in <module>
vspd.debug(filename, port_num, debug_id, debug_options, run_as)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\debugger.py", line 36, in debug
run(address, filename, *args, **kwargs)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\_main.py", line 47, in run_file
run(argv, addr, **kwargs)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\_main.py", line 98, in _run
_pydevd.main()
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\pydevd\pydevd.py", line 1628, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\pydevd\pydevd.py", line 1035, in run
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\Extensions\Microsoft\Python\Core\Packages\ptvsd\pydevd\_pydev_imps\_pydev_execfile.py", line 25, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:\Users\Sanjay\Documents\python files\SLNforVt\VisualTest\VisualTest.py", line 6
print(clf.predict([[150, 0]]))
^
SyntaxError: invalid character in identifier I am using Visual Studio here; I do not know if that affects this program in any way, but I also tried it using the Python IDLE. Other Python programs I write work fine on Visual Studio without any errors. | In Keras, fit() is very similar to sklearn's fit method, where you pass an array of features as x values and the target as y values. You pass your whole dataset at once in the fit method. Also, use it if you can load the whole dataset into your memory (small dataset). In fit_generator(), you don't pass the x and y directly; instead, they come from a generator. As it is written in the Keras documentation, a generator is used when you want to avoid duplicate data when using multiprocessing. This is for practical purposes, when you have a large dataset. Here is a link to understand more about this: A thing you should know about Keras if you plan to train a deep learning model on a large dataset. For reference you can check this book: https://github.com/hktxt/bookshelf/blob/master/Computer%20Science/Deep%20Learning%20with%20Python%2C%20Fran%C3%A7ois%20Chollet.pdf | {
"source": [
"https://datascience.stackexchange.com/questions/34463",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/55148/"
]
} |
36,049 | I am just getting in touch with the Multi-layer Perceptron. And I got this accuracy when classifying the DEAP data with MLP. However, I have no idea how to adjust the hyperparameters to improve the result. Here is the detail of my code and result: from sklearn.neural_network import MLPClassifier
import numpy as np
import scipy.io
x_vals = data['all_data'][:,0:320]
y_vals_new = np.array([0 if each=='Neg' else 1 if each =='Neu' else 2 for each in data['all_data'][:,320]])
y_vals_Arousal = np.array([3 if each=='Pas' else 4 if each =='Neu' else 5 for each in data['all_data'][:,321]])
DEAP_x_train = x_vals[:-256] #using 80% of whole data for training
DEAP_x_test = x_vals[-256:] #using 20% of whole data for testing
DEAP_y_train = y_vals_new[:-256] ##Valence
DEAP_y_test = y_vals_new[-256:]
DEAP_y_train_A = y_vals_Arousal[:-256] ### Arousal
DEAP_y_test_A = y_vals_Arousal[-256:]
mlp = MLPClassifier(solver='adam', activation='relu',alpha=1e-4,hidden_layer_sizes=(50,50,50), random_state=1,max_iter=11,verbose=10,learning_rate_init=.1)
mlp.fit(DEAP_x_train, DEAP_y_train)
print (mlp.score(DEAP_x_test,DEAP_y_test))
print (mlp.n_layers_)
print (mlp.n_iter_)
print (mlp.loss_) | If you are using SKlearn, you can use their hyper-parameter optimization tools. For example, you can use: GridSearchCV RandomizedSearchCV If you use GridSearchCV , you can do the following: 1) Choose your classifier from sklearn.neural_network import MLPClassifier
mlp = MLPClassifier(max_iter=100) 2) Define a hyper-parameter space to search. (All the values that you want to try out.) parameter_space = {
'hidden_layer_sizes': [(50,50,50), (50,100,50), (100,)],
'activation': ['tanh', 'relu'],
'solver': ['sgd', 'adam'],
'alpha': [0.0001, 0.05],
'learning_rate': ['constant','adaptive'],
} Note: the max_iter=100 that you defined on the initializer is not in the grid. So, that number will be constant, while the ones in the grid will be searched. 3) Run the search: from sklearn.model_selection import GridSearchCV
clf = GridSearchCV(mlp, parameter_space, n_jobs=-1, cv=3)
clf.fit(DEAP_x_train, DEAP_y_train) Note: the parameter n_jobs is to define how many CPU cores from your computer to use (-1 is for all the cores available). The cv is the number of splits for cross-validation. 4) See the best results: # Best parameter set
print('Best parameters found:\n', clf.best_params_)
# All results
means = clf.cv_results_['mean_test_score']
stds = clf.cv_results_['std_test_score']
for mean, std, params in zip(means, stds, clf.cv_results_['params']):
print("%0.3f (+/-%0.03f) for %r" % (mean, std * 2, params)) 5) Now you can use the clf to make new predictions. For example, check the performance on your test set . y_true, y_pred = DEAP_y_test , clf.predict(DEAP_x_test)
from sklearn.metrics import classification_report
print('Results on the test set:')
print(classification_report(y_true, y_pred)) | {
"source": [
"https://datascience.stackexchange.com/questions/36049",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/56593/"
]
} |
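The answer above lists RandomizedSearchCV as an alternative but only demonstrates GridSearchCV; a minimal sketch of the randomized variant follows (the DEAP_x_train and DEAP_y_train variables are taken from the question and assumed to exist):
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import RandomizedSearchCV

param_distributions = {
    'hidden_layer_sizes': [(50, 50, 50), (50, 100, 50), (100,)],
    'activation': ['tanh', 'relu'],
    'alpha': [0.0001, 0.001, 0.01, 0.05],
}

# Samples n_iter random combinations instead of trying every one,
# which is cheaper when the full grid is large.
search = RandomizedSearchCV(MLPClassifier(max_iter=100), param_distributions,
                            n_iter=5, cv=3, n_jobs=-1, random_state=1)
search.fit(DEAP_x_train, DEAP_y_train)
print(search.best_params_)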
36,450 | What is the difference between Gradient Descent and Stochastic Gradient Descent? I am not very familiar with these; can you describe the difference with a short example? | For a quick simple explanation: In both gradient descent (GD) and stochastic gradient descent (SGD), you update a set of parameters in an iterative manner to minimize an error function. While in GD you have to run through ALL the samples in your training set to do a single update for a parameter in a particular iteration, in SGD, on the other hand, you use ONLY ONE sample or a SUBSET of training samples from your training set to do the update for a parameter in a particular iteration. If you use a SUBSET, it is called Minibatch Stochastic Gradient Descent. Thus, if the number of training samples is large, in fact very large, then using gradient descent may take too long because in every iteration, when you are updating the values of the parameters, you are running through the complete training set. On the other hand, using SGD will be faster because you use only one training sample and it starts improving itself right away from the first sample. SGD often converges much faster compared to GD, but the error function is not as well minimized as in the case of GD. Often, in most cases, the close approximation that you get in SGD for the parameter values is enough because they reach the optimal values and keep oscillating there. If you need an example of this with a practical case, check Andrew Ng's notes here where he clearly shows you the steps involved in both cases. cs229-notes Source: Quora Thread | {
"source": [
"https://datascience.stackexchange.com/questions/36450",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/57082/"
]
} |
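A small NumPy sketch of the difference described above, showing a single parameter update for a toy linear-regression problem (the data and learning rate are made up for illustration):
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1000)
y = 3 * x + 0.1 * rng.standard_normal(1000)

w, lr = 0.0, 0.1

# Gradient descent: one update uses ALL samples.
grad = -2 * np.mean((y - w * x) * x)
w_gd = w - lr * grad

# Stochastic gradient descent: one update uses a SINGLE random sample.
i = rng.integers(len(y))
grad_i = -2 * (y[i] - w * x[i]) * x[i]
w_sgd = w - lr * grad_i

print(w_gd, w_sgd)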
37,021 | If removing some neurons results in a better performing model, why not use a simpler neural network with fewer layers and fewer neurons in the first place? Why build a bigger, more complicated model in the beginning and suppress parts of it later? | The function of dropout is to increase the robustness of the model and also to remove any simple dependencies between the neurons. Neurons are only removed for a single pass forward and backward through the network - meaning their weights are synthetically set to zero for that pass, and so their errors are as well, meaning that the weights are not updated.
Dropout also works as a form of regularisation , as it is penalising the model for its complexity, somewhat. I would recommend having a read of the Dropout section in Michael Nielsen's Deep Learning book (freely available), which gives nice intuition and also has very helpful interactive diagrams/explanations. He explains that: Dropout is a radically different technique for regularization. Unlike L1 and L2 regularization, dropout doesn't rely on modifying the cost function. Instead, in dropout we modify the network itself. Here is a nice summary article . From that article: Some Observations: Dropout forces a neural network to learn more robust features that are useful in conjunction with many different random subsets of the other neurons. Dropout roughly doubles the number of iterations required to converge. However, training time for each epoch is less. With H hidden units, each of which can be dropped, we have
2^H possible models. In the testing phase, the entire network is considered and each activation is reduced by a factor p. Example Imagine I ask you to make me a cup of tea - you might always use your right hand to pour the water, your left eye to measure the level of water and then your right hand again to stir the tea with a spoon. This would mean your left hand and right eye serve little purpose. Using dropout would e.g. tie your right hand behind your back - forcing you to use your left hand. Now, after making me 20 cups of tea, with either one eye or one hand taken out of action, you are better trained at using everything available. Maybe you will later be forced to make tea in a tiny kitchen, where it is only possible to use the kettle with your left arm... and after using dropout, you have experience doing that! You have become more robust to unseen data. | {
"source": [
"https://datascience.stackexchange.com/questions/37021",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/57724/"
]
} |
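A minimal Keras sketch of the idea in the answer above: Dropout layers randomly deactivate units during training only (the architecture and rates here are hypothetical):
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(128, input_dim=20, activation='relu'))
# Randomly zero out half of this layer's activations on each training pass;
# at inference time dropout is switched off automatically.
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])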
37,186 | I am currently training a neural network and I cannot decide which to use to implement my Early Stopping criteria: validation loss or a metric like accuracy/f1score/auc/whatever calculated on the validation set. In my research, I came upon articles defending both standpoints. Keras seems to default to the validation loss but I have also come across convincing answers for the opposite approach (e.g. here). Does anyone have guidance on when to preferably use the validation loss and when to use a specific metric? | TLDR; Monitor the loss rather than the accuracy. I will answer my own question since I think that the answers received missed the point and someone might have the same problem one day. First, let me quickly clarify that using early stopping is perfectly normal when training neural networks (see the relevant sections in Goodfellow et al's Deep Learning book, most DL papers, and the documentation for Keras' EarlyStopping callback). Now, regarding the quantity to monitor: prefer the loss to the accuracy. Why?
The loss quantifies how certain the model is about a prediction (basically having a value close to 1 in the right class and close to 0 in the other classes). The accuracy merely accounts for the number of correct predictions. Similarly, any metric using hard predictions rather than probabilities has the same problem. Obviously, whatever metric you end up choosing, it has to be calculated on a validation set and not a training set (otherwise, you are completely missing the point of using EarlyStopping in the first place) | {
"source": [
"https://datascience.stackexchange.com/questions/37186",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50534/"
]
} |
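A short sketch of monitoring the validation loss with Keras' EarlyStopping callback, as recommended above (the model, x_train, y_train, x_val and y_val objects are assumed to exist; restore_best_weights requires a reasonably recent Keras version):
from keras.callbacks import EarlyStopping

# Stop when the validation loss has not improved for 5 epochs and
# roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
model.fit(x_train, y_train,
          validation_data=(x_val, y_val),
          epochs=100,
          callbacks=[early_stop])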
37,345 | I am familiar with the terms high bias and high variance and their effect on the model. Basically, your model has high variance when it is too complex and sensitive even to outliers. But recently, in an interview, I was asked the meaning of the term variance in a machine learning model. I would like to know what exactly variance means in an ML model and how it gets introduced into the model. I would really appreciate it if someone could explain this with an example. | It is pretty much what you said. Formally you can say: Variance, in the context of Machine Learning, is a type of error that occurs due to a model's sensitivity to small fluctuations in the training set. High variance would cause an algorithm to model the noise in the training set. This is most commonly referred to as overfitting. When discussing variance in Machine Learning, we also refer to bias. Bias, in the context of Machine Learning, is a type of error that occurs due to erroneous assumptions in the learning algorithm. High bias would cause an algorithm to miss relevant relations between the input features and the target outputs. This is sometimes referred to as underfitting. These terms can be decomposed from the expected error of the trained model, given different samples drawn from a training distribution. See here for a brief mathematical explanation of where the terms come from, and how to formally measure variance in the model. Relationship between bias and variance: In most cases, attempting to minimize one of these two errors would lead to increasing the other. Thus the two are usually seen as a trade-off. Cause of high bias/variance in ML: The most common factor that determines the bias/variance of a model is its capacity (think of this as how complex the model is). Low capacity models (e.g. linear regression) might miss relevant relations between the features and targets, causing them to have high bias. This is evident in the left figure above. On the other hand, high capacity models (e.g. high-degree polynomial regression, neural networks with many parameters) might model some of the noise, along with any relevant relations in the training set, causing them to have high variance, as seen in the right figure above. How to reduce the variance in a model? The easiest and most common way of reducing the variance in an ML model is by applying techniques that limit its effective capacity, i.e. regularization. The most common forms of regularization are parameter norm penalties, which limit the parameter updates during the training phase; early stopping, which cuts the training short; pruning for tree-based algorithms; dropout for neural networks, etc. Can a model have both low bias and low variance? Yes. Likewise a model can have both high bias and high variance, as is illustrated in the figure below. How can we achieve both low bias and low variance? In practice, the most common methodology is: Select an algorithm with a high enough capacity to sufficiently model the problem. In this stage we want to minimize the bias, so we aren't concerned about the variance yet. Regularize the model above, to minimize its variance. | {
"source": [
"https://datascience.stackexchange.com/questions/37345",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/44083/"
]
} |
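A small scikit-learn sketch of the capacity/regularization point made above (not from the original answer): the same high-capacity polynomial model with and without an L2 penalty, on made-up toy data:
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = np.sort(rng.random((30, 1)), axis=0)
y = np.sin(2 * np.pi * X[:, 0]) + 0.2 * rng.standard_normal(30)

# Degree-15 polynomial with no penalty: high capacity, prone to high variance.
high_variance = make_pipeline(PolynomialFeatures(15), LinearRegression()).fit(X, y)

# Same capacity plus an L2 parameter-norm penalty: the variance is reduced.
regularized = make_pipeline(PolynomialFeatures(15), Ridge(alpha=1.0)).fit(X, y)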
37,355 | So I have the following problem: I realized (while writing my master thesis) that I am still not sure/have vague descriptions of some of the machine learning principles. I already asked one question regarding definitions that can be found here . Now I stumbled over another definition problem.
Here is an excerpt from my thesis (this is in particular about neural-network classification): If the classes are mutually exclusive (i.e. if a sample $x^{j} = C_{0}$, $x^{j} \neq C_{i}\setminus~C_{0}$ ), the probabilities of all classes add up to one like \begin{equation}
\sum_{i} P(x^{j}=C_{i}) = 1.
\end{equation}
In this case the best practice is to use a softmax activation function for the output neurons.
If the classes are not mutually exclusive it would suffice to use a sigmoid output activation function, as the sigmoid function gets independent probabilities for each class \begin{equation}
\sum_{i} P(x^{j}=C_{i}) \geq 1.
\end{equation} I already found the following link regarding this topic.
However, I know that in practice, if you don't use a softmax activation function in your output layer, the value can be larger than 1, but can a probability be larger than 1? Isn't that against its definition? Is non-mutually exclusive classification really a common case? Can somebody maybe link some cases (papers?) where they needed non-mutually exclusive classification? | It is pretty much what you said. Formally you can say: Variance, in the context of Machine Learning, is a type of error that occurs due to a model's sensitivity to small fluctuations in the training set. High variance would cause an algorithm to model the noise in the training set. This is most commonly referred to as overfitting. When discussing variance in Machine Learning, we also refer to bias. Bias, in the context of Machine Learning, is a type of error that occurs due to erroneous assumptions in the learning algorithm. High bias would cause an algorithm to miss relevant relations between the input features and the target outputs. This is sometimes referred to as underfitting. These terms can be decomposed from the expected error of the trained model, given different samples drawn from a training distribution. See here for a brief mathematical explanation of where the terms come from, and how to formally measure variance in the model. Relationship between bias and variance: In most cases, attempting to minimize one of these two errors would lead to increasing the other. Thus the two are usually seen as a trade-off. Cause of high bias/variance in ML: The most common factor that determines the bias/variance of a model is its capacity (think of this as how complex the model is). Low capacity models (e.g. linear regression) might miss relevant relations between the features and targets, causing them to have high bias. This is evident in the left figure above. On the other hand, high capacity models (e.g. high-degree polynomial regression, neural networks with many parameters) might model some of the noise, along with any relevant relations in the training set, causing them to have high variance, as seen in the right figure above. How to reduce the variance in a model? The easiest and most common way of reducing the variance in an ML model is by applying techniques that limit its effective capacity, i.e. regularization. The most common forms of regularization are parameter norm penalties, which limit the parameter updates during the training phase; early stopping, which cuts the training short; pruning for tree-based algorithms; dropout for neural networks, etc. Can a model have both low bias and low variance? Yes. Likewise a model can have both high bias and high variance, as is illustrated in the figure below. How can we achieve both low bias and low variance? In practice, the most common methodology is: Select an algorithm with a high enough capacity to sufficiently model the problem. In this stage we want to minimize the bias, so we aren't concerned about the variance yet. Regularize the model above, to minimize its variance. | {
"source": [
"https://datascience.stackexchange.com/questions/37355",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/58067/"
]
} |
37,362 | In Keras, there are two methods to reduce over-fitting: L1/L2 regularization or a dropout layer. What are some situations in which to use L1/L2 regularization instead of a dropout layer? What are some situations when a dropout layer is better? | I am unsure there will be a formal way to show which is best in which situations - simply trying out different combinations is likely best! It is worth noting that Dropout actually does a little bit more than just provide a form of regularisation, in that it is really adding robustness to the network, allowing it to try out many, many different networks. This is true because the randomly deactivated neurons are essentially removed for that forward/backward pass, thereby giving the same effect as if you had used a totally different network! Have a look at this post for a few more pointers regarding the beauty of dropout layers. $L_1$ versus $L_2$ is easier to explain, simply by noting that $L_2$ treats outliers a little more thoroughly - returning a larger error for those points. Have a look here for more detailed comparisons. | {
"source": [
"https://datascience.stackexchange.com/questions/37362",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/57724/"
]
} |
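A minimal Keras sketch combining the two options discussed above, a weight-norm penalty on a layer and a Dropout layer (the layer sizes and coefficients are hypothetical):
from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.regularizers import l2

model = Sequential()
# L2 penalty added to the loss for this layer's weights (keras.regularizers.l1 works the same way).
model.add(Dense(64, input_dim=20, activation='relu', kernel_regularizer=l2(0.01)))
# Dropout instead randomly disables units during training.
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy')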
37,378 | I was looking at code and found this: model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu')) I was keen to know about kernel_initializer but wasn't able to understand its significance. | The neural network needs to start with some weights and then iteratively update them to better values. The term kernel_initializer is a fancy term for which statistical distribution or function to use for initialising the weights. In the case of a statistical distribution, the library will generate numbers from that statistical distribution and use them as starting weights. For example, in the above code, a normal distribution will be used to initialise the weights. You can use other functions (constants like 1s or 0s) and distributions (uniform) too. All possible options are documented here . Additional explanation: The term kernel is a carryover from other classical methods like SVM. The idea is to transform data in a given input space to another space where the transformation is achieved using kernel functions. We can think of neural network layers as non-linear maps doing these transformations, so the term kernel is used. | {
"source": [
"https://datascience.stackexchange.com/questions/37378",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/50406/"
]
} |
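A short sketch of a few initializer choices for the Dense layer from the question (the extra layers are only for illustration):
from keras.models import Sequential
from keras.layers import Dense
from keras.initializers import RandomNormal

model = Sequential()
# String shortcut: draw the starting weights from a normal distribution.
model.add(Dense(13, input_dim=13, kernel_initializer='normal', activation='relu'))
# Equivalent explicit form, plus two other common choices.
model.add(Dense(8, kernel_initializer=RandomNormal(mean=0.0, stddev=0.05), activation='relu'))
model.add(Dense(8, kernel_initializer='glorot_uniform', activation='relu'))
model.add(Dense(1, kernel_initializer='zeros'))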
38,395 | When I was reading about using StandardScaler, most of the recommendations were saying that you should use StandardScaler before splitting the data into train/test, but when I was checking some of the code posted online (using sklearn) there were two major uses. Case 1 : Using StandardScaler on all the data. E.g. from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_fit = sc.fit(X)
X_std = X_fit.transform(X) Or from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X = sc.fit(X)
X = sc.transform(X) Or simply from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_std = sc.fit_transform(X) Case 2 : Using StandardScaler on split data. from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test) I would like to standardize my data, but I am confused about which approach is best! | In the interest of preventing information about the distribution of the test set leaking into your model, you should go for option #2 and fit the scaler on your training data only, then standardise both training and test sets with that scaler. By fitting the scaler on the full dataset prior to splitting (option #1), information about the test set is used to transform the training set, which in turn is passed downstream. As an example, knowing the distribution of the whole dataset might influence how you detect and process outliers, as well as how you parameterise your model. Although the data itself is not exposed, information about the distribution of the data is. As a result, your test set performance is not a true estimate of performance on unseen data. Some further discussion you might find useful is on Cross Validated. | {
"source": [
"https://datascience.stackexchange.com/questions/38395",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/59087/"
]
} |
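A sketch of option #2 wrapped in a scikit-learn Pipeline, which guarantees that the scaler is fitted on the training data only (X, y and the choice of LogisticRegression are placeholders):
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# fit() learns the scaling statistics from X_train only; score() reuses them on X_test.
pipe = make_pipeline(StandardScaler(), LogisticRegression())
pipe.fit(X_train, y_train)
print(pipe.score(X_test, y_test))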
38,955 | The validation_split argument of the Keras Sequential model's fit function is documented as follows on https://keras.io/models/sequential/ : validation_split: Float between 0 and 1. Fraction of the training data
to be used as validation data. The model will set apart this fraction
of the training data, will not train on it, and will evaluate the loss
and any model metrics on this data at the end of each epoch. The
validation data is selected from the last samples in the x and y data
provided, before shuffling. Please note the last line: The validation data is selected from the last samples in the x and y
data provided, before shuffling. Does it mean that the validation data is always fixed and taken from the bottom of the main dataset? Is there any way it can be made to randomly select a given fraction of the data from the main dataset? | You actually would not want to resample your validation set after each epoch. If you did this, your model would be trained on every single sample in your dataset, and thus this would cause overfitting. You want to always split your data before the training process, and then the algorithm should only be trained using the subset of the data for training. The function as it is designed ensures that the data is separated in such a way that it always trains on the same portion of the data for each epoch. All shuffling is done within the training sample between epochs, if that option is chosen. However, for some datasets getting the last few instances is not useful, specifically if the dataset is grouped based on class. Then the distribution of your classes will be skewed. Thus you will need some kind of random way to extract a subset of the data to get balanced class distributions in the training and validation set. For this I always like to use the sklearn function as follows from sklearn.model_selection import train_test_split
# Split the data
x_train, x_valid, y_train, y_valid = train_test_split(data, labels, test_size=0.33, shuffle= True) It's a nice easy to use function that does what you want. The variables data and labels are standard numpy matrices with the first dimension being the instances. | {
"source": [
"https://datascience.stackexchange.com/questions/38955",
"https://datascience.stackexchange.com",
"https://datascience.stackexchange.com/users/49700/"
]
} |
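After the shuffled split shown in the answer above, the validation set can be passed to Keras explicitly instead of relying on validation_split taking the last rows (the model object and the x_train/x_valid/y_train/y_valid variables from the answer are assumed to exist):
model.fit(x_train, y_train,
          validation_data=(x_valid, y_valid),
          epochs=20, batch_size=32)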