The figure above makes it clear: in terms of validation loss, this careful initialization gives a clear advantage on this problem.

Train the model
model = make_model()
model.load_weights(initial_weights)
baseline_history = model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_features, val_labels))
Train on 182276 samples, validate on 45569 samples Epoch 1/100 182276/182276 [==============================] - 3s 16us/sample - loss: 0.0256 - tp: 64.0000 - fp: 745.0000 - tn: 181227.0000 - fn: 240.0000 - accuracy: 0.9946 - precision: 0.0791 - recall: 0.2105 - auc: 0.8031 - val_loss: 0.0079 - val_tp: 17.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 66.0000 - val_accuracy: 0.9984 - val_precision: 0.7083 - val_recall: 0.2048 - val_auc: 0.9377 Epoch 2/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0100 - tp: 111.0000 - fp: 131.0000 - tn: 181841.0000 - fn: 193.0000 - accuracy: 0.9982 - precision: 0.4587 - recall: 0.3651 - auc: 0.8758 - val_loss: 0.0056 - val_tp: 40.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 43.0000 - val_accuracy: 0.9989 - val_precision: 0.8511 - val_recall: 0.4819 - val_auc: 0.9422 Epoch 3/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0075 - tp: 148.0000 - fp: 57.0000 - tn: 181915.0000 - fn: 156.0000 - accuracy: 0.9988 - precision: 0.7220 - recall: 0.4868 - auc: 0.9206 - val_loss: 0.0048 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9382 Epoch 4/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0065 - tp: 157.0000 - fp: 48.0000 - tn: 181924.0000 - fn: 147.0000 - accuracy: 0.9989 - precision: 0.7659 - recall: 0.5164 - auc: 0.9210 - val_loss: 0.0045 - val_tp: 52.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 31.0000 - val_accuracy: 0.9992 - val_precision: 0.8814 - val_recall: 0.6265 - val_auc: 0.9387 Epoch 5/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0058 - tp: 172.0000 - fp: 43.0000 - tn: 181929.0000 - fn: 132.0000 - accuracy: 0.9990 - precision: 0.8000 - recall: 0.5658 - auc: 0.9246 - val_loss: 0.0042 - val_tp: 51.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 32.0000 - val_accuracy: 0.9991 - val_precision: 0.8793 - val_recall: 0.6145 - val_auc: 0.9390 Epoch 6/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 169.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 135.0000 - accuracy: 0.9991 - precision: 0.8579 - recall: 0.5559 - auc: 0.9210 - val_loss: 0.0039 - val_tp: 56.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 27.0000 - val_accuracy: 0.9993 - val_precision: 0.8889 - val_recall: 0.6747 - val_auc: 0.9391 Epoch 7/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0054 - tp: 167.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 137.0000 - accuracy: 0.9991 - precision: 0.8350 - recall: 0.5493 - auc: 0.9224 - val_loss: 0.0038 - val_tp: 60.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 23.0000 - val_accuracy: 0.9993 - val_precision: 0.8955 - val_recall: 0.7229 - val_auc: 0.9392 Epoch 8/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0050 - tp: 182.0000 - fp: 28.0000 - tn: 181944.0000 - fn: 122.0000 - accuracy: 0.9992 - precision: 0.8667 - recall: 0.5987 - auc: 0.9215 - val_loss: 0.0038 - val_tp: 62.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 21.0000 - val_accuracy: 0.9994 - val_precision: 0.8986 - val_recall: 0.7470 - val_auc: 0.9332 Epoch 9/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0047 - tp: 186.0000 - fp: 36.0000 - tn: 181936.0000 - fn: 118.0000 - accuracy: 0.9992 - precision: 0.8378 - recall: 0.6118 - auc: 0.9238 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 
45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332 Epoch 10/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0048 - tp: 176.0000 - fp: 33.0000 - tn: 181939.0000 - fn: 128.0000 - accuracy: 0.9991 - precision: 0.8421 - recall: 0.5789 - auc: 0.9208 - val_loss: 0.0036 - val_tp: 63.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9000 - val_recall: 0.7590 - val_auc: 0.9332 Epoch 11/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 180.0000 - fp: 32.0000 - tn: 181940.0000 - fn: 124.0000 - accuracy: 0.9991 - precision: 0.8491 - recall: 0.5921 - auc: 0.9341 - val_loss: 0.0035 - val_tp: 64.0000 - val_fp: 7.0000 - val_tn: 45479.0000 - val_fn: 19.0000 - val_accuracy: 0.9994 - val_precision: 0.9014 - val_recall: 0.7711 - val_auc: 0.9331 Epoch 12/100 169984/182276 [==========================>...] - ETA: 0s - loss: 0.0045 - tp: 175.0000 - fp: 30.0000 - tn: 169674.0000 - fn: 105.0000 - accuracy: 0.9992 - precision: 0.8537 - recall: 0.6250 - auc: 0.9306Restoring model weights from the end of the best epoch. 182276/182276 [==============================] - 1s 4us/sample - loss: 0.0045 - tp: 188.0000 - fp: 31.0000 - tn: 181941.0000 - fn: 116.0000 - accuracy: 0.9992 - precision: 0.8584 - recall: 0.6184 - auc: 0.9326 - val_loss: 0.0034 - val_tp: 63.0000 - val_fp: 6.0000 - val_tn: 45480.0000 - val_fn: 20.0000 - val_accuracy: 0.9994 - val_precision: 0.9130 - val_recall: 0.7590 - val_auc: 0.9332 Epoch 00012: early stopping
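For reference, the `early_stopping` callback passed to `model.fit` above was defined in an earlier cell that falls outside this excerpt. In the standard version of this tutorial it looks roughly like the sketch below (treat the exact arguments as an assumption here); the `restore_best_weights=True` setting is what produces the "Restoring model weights from the end of the best epoch" message in the log.

# Assumed definition of the callback used above; the real one was created earlier in the notebook.
early_stopping = tf.keras.callbacks.EarlyStopping(
    monitor='val_auc',
    verbose=1,
    patience=10,
    mode='max',
    restore_best_weights=True)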
Check training history

In this section, you will produce plots of your model's accuracy and loss on the training and validation set. These are useful to check for overfitting, which you can learn more about in this [tutorial](https://www.tensorflow.org/tutorials/keras/overfit_and_underfit). Additionally, you can produce these plots for any of the metrics you created above. False negatives are included as an example.
def plot_metrics(history):
    metrics = ['loss', 'auc', 'precision', 'recall']
    for n, metric in enumerate(metrics):
        name = metric.replace("_", " ").capitalize()
        plt.subplot(2, 2, n + 1)
        plt.plot(history.epoch, history.history[metric], color=colors[0], label='Train')
        plt.plot(history.epoch, history.history['val_' + metric],
                 color=colors[0], linestyle="--", label='Val')
        plt.xlabel('Epoch')
        plt.ylabel(name)
        if metric == 'loss':
            plt.ylim([0, plt.ylim()[1]])
        elif metric == 'auc':
            plt.ylim([0.8, 1])
        else:
            plt.ylim([0, 1])
        plt.legend()

plot_metrics(baseline_history)
Note that the validation curve generally performs better than the training curve. This is mainly caused by the fact that the dropout layer is not active when evaluating the model.

Evaluate metrics

You can use a [confusion matrix](https://developers.google.com/machine-learning/glossary/confusion_matrix) to summarize the actual vs. predicted labels, where the X axis is the predicted label and the Y axis is the actual label.
train_predictions_baseline = model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_baseline = model.predict(test_features, batch_size=BATCH_SIZE)

def plot_cm(labels, predictions, p=0.5):
    cm = confusion_matrix(labels, predictions > p)
    plt.figure(figsize=(5, 5))
    sns.heatmap(cm, annot=True, fmt="d")
    plt.title('Confusion matrix @{:.2f}'.format(p))
    plt.ylabel('Actual label')
    plt.xlabel('Predicted label')

    print('Legitimate Transactions Detected (True Negatives): ', cm[0][0])
    print('Legitimate Transactions Incorrectly Detected (False Positives): ', cm[0][1])
    print('Fraudulent Transactions Missed (False Negatives): ', cm[1][0])
    print('Fraudulent Transactions Detected (True Positives): ', cm[1][1])
    print('Total Fraudulent Transactions: ', np.sum(cm[1]))
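As a side note, `sklearn.metrics.confusion_matrix` indexes rows by the actual label and columns by the predicted label, which is why `plot_cm` reads the cells the way it does. A minimal, self-contained sketch (not part of the original notebook):

from sklearn.metrics import confusion_matrix

# actual:    [0, 0, 1, 1]
# predicted: [0, 1, 0, 1]
cm = confusion_matrix([0, 0, 1, 1], [0, 1, 0, 1])
# cm == [[TN, FP],     == [[1, 1],
#        [FN, TP]]         [1, 1]]
print(cm)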
Evaluate your model on the test dataset and display the results for the metrics you created above.
baseline_results = model.evaluate(test_features, test_labels,
                                  batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(model.metrics_names, baseline_results):
    print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_baseline)
loss : 0.005941324691873794 tp : 55.0 fp : 12.0 tn : 56845.0 fn : 50.0 accuracy : 0.99891156 precision : 0.8208955 recall : 0.52380955 auc : 0.9390888 Legitimate Transactions Detected (True Negatives): 56845 Legitimate Transactions Incorrectly Detected (False Positives): 12 Fraudulent Transactions Missed (False Negatives): 50 Fraudulent Transactions Detected (True Positives): 55 Total Fraudulent Transactions: 105
If the model had predicted everything perfectly, this would be a [diagonal matrix](https://en.wikipedia.org/wiki/Diagonal_matrix) where values off the main diagonal, indicating incorrect predictions, would be zero. In this case the matrix shows that you have relatively few false positives, meaning that relatively few legitimate transactions were incorrectly flagged. However, you would likely want to have even fewer false negatives despite the cost of increasing the number of false positives. This trade-off may be preferable because false negatives would allow fraudulent transactions to go through, whereas false positives may cause an email to be sent to a customer to ask them to verify their card activity.

Plot the ROC

Now plot the [ROC](https://developers.google.com/machine-learning/glossary#ROC). This plot is useful because it shows, at a glance, the range of performance the model can reach just by tuning the output threshold.
def plot_roc(name, labels, predictions, **kwargs):
    fp, tp, _ = sklearn.metrics.roc_curve(labels, predictions)
    plt.plot(100 * fp, 100 * tp, label=name, linewidth=2, **kwargs)
    plt.xlabel('False positives [%]')
    plt.ylabel('True positives [%]')
    plt.xlim([-0.5, 20])
    plt.ylim([80, 100.5])
    plt.grid(True)
    ax = plt.gca()
    ax.set_aspect('equal')

plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0])
plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--')
plt.legend(loc='lower right')
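The threshold trade-off that the ROC curve summarizes can also be explored directly with the `plot_cm` helper defined above, which takes a probability cutoff `p`. A small sketch (the 0.1 cutoff is just an illustrative value, not from the original notebook):

# Lowering the decision threshold trades more false positives for fewer false negatives.
plot_cm(test_labels, test_predictions_baseline, p=0.1)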
It looks like the precision is relatively high, but the recall and the area under the ROC curve (AUC) aren't as high as you might like. Classifiers often face challenges when trying to maximize both precision and recall, which is especially true when working with imbalanced datasets. It is important to consider the costs of different types of errors in the context of the problem you care about. In this example, a false negative (a fraudulent transaction is missed) may have a financial cost, while a false positive (a transaction is incorrectly flagged as fraudulent) may decrease user happiness.

Class weights

Calculate class weights

The goal is to identify fraudulent transactions, but you don't have very many of those positive samples to work with, so you would want the classifier to heavily weight the few examples that are available. You can do this by passing Keras weights for each class through a parameter. These will cause the model to "pay more attention" to examples from an under-represented class.
# Scaling by total/2 helps keep the loss to a similar magnitude.
# The sum of the weights of all examples stays the same.
weight_for_0 = (1 / neg) * (total) / 2.0
weight_for_1 = (1 / pos) * (total) / 2.0

class_weight = {0: weight_for_0, 1: weight_for_1}

print('Weight for class 0: {:.2f}'.format(weight_for_0))
print('Weight for class 1: {:.2f}'.format(weight_for_1))
Weight for class 0: 0.50 Weight for class 1: 289.44
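Under the hood, `class_weight` simply scales each example's loss term by the weight of its class. A minimal NumPy sketch of that weighted binary cross-entropy, using the weights printed above (illustrative only; Keras applies this internally):

import numpy as np

def weighted_bce(y_true, y_pred, w0, w1, eps=1e-7):
    # Per-example binary cross-entropy, scaled by the weight of each example's class.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_example = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    weights = np.where(y_true == 1, w1, w0)
    return np.mean(weights * per_example)

# Misclassifying a rare positive example now costs far more than misclassifying a negative one.
print(weighted_bce(np.array([1.0]), np.array([0.1]), w0=0.50, w1=289.44))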
Train a model with class weights

Now try re-training and evaluating the model with class weights to see how that affects the predictions.

Note: Using `class_weights` changes the range of the loss. This may affect the stability of the training depending on the optimizer. Optimizers whose step size is dependent on the magnitude of the gradient, like `optimizers.SGD`, may fail. The optimizer used here, `optimizers.Adam`, is unaffected by the scaling change. Also note that because of the weighting, the total losses are not comparable between the two models.
weighted_model = make_model()
weighted_model.load_weights(initial_weights)

weighted_history = weighted_model.fit(
    train_features,
    train_labels,
    batch_size=BATCH_SIZE,
    epochs=EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_features, val_labels),
    # The class weights go here
    class_weight=class_weight)
WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] WARNING:tensorflow:sample_weight modes were coerced from ... to ['...'] Train on 182276 samples, validate on 45569 samples Epoch 1/100 182276/182276 [==============================] - 3s 19us/sample - loss: 1.0524 - tp: 138.0000 - fp: 2726.0000 - tn: 179246.0000 - fn: 166.0000 - accuracy: 0.9841 - precision: 0.0482 - recall: 0.4539 - auc: 0.8321 - val_loss: 0.4515 - val_tp: 59.0000 - val_fp: 432.0000 - val_tn: 45054.0000 - val_fn: 24.0000 - val_accuracy: 0.9900 - val_precision: 0.1202 - val_recall: 0.7108 - val_auc: 0.9492 Epoch 2/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.5537 - tp: 216.0000 - fp: 3783.0000 - tn: 178189.0000 - fn: 88.0000 - accuracy: 0.9788 - precision: 0.0540 - recall: 0.7105 - auc: 0.9033 - val_loss: 0.3285 - val_tp: 69.0000 - val_fp: 514.0000 - val_tn: 44972.0000 - val_fn: 14.0000 - val_accuracy: 0.9884 - val_precision: 0.1184 - val_recall: 0.8313 - val_auc: 0.9605 Epoch 3/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.4178 - tp: 238.0000 - fp: 4540.0000 - tn: 177432.0000 - fn: 66.0000 - accuracy: 0.9747 - precision: 0.0498 - recall: 0.7829 - auc: 0.9237 - val_loss: 0.2840 - val_tp: 69.0000 - val_fp: 570.0000 - val_tn: 44916.0000 - val_fn: 14.0000 - val_accuracy: 0.9872 - val_precision: 0.1080 - val_recall: 0.8313 - val_auc: 0.9669 Epoch 4/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.3848 - tp: 247.0000 - fp: 5309.0000 - tn: 176663.0000 - fn: 57.0000 - accuracy: 0.9706 - precision: 0.0445 - recall: 0.8125 - auc: 0.9292 - val_loss: 0.2539 - val_tp: 71.0000 - val_fp: 622.0000 - val_tn: 44864.0000 - val_fn: 12.0000 - val_accuracy: 0.9861 - val_precision: 0.1025 - val_recall: 0.8554 - val_auc: 0.9709 Epoch 5/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.3596 - tp: 254.0000 - fp: 6018.0000 - tn: 175954.0000 - fn: 50.0000 - accuracy: 0.9667 - precision: 0.0405 - recall: 0.8355 - auc: 0.9323 - val_loss: 0.2363 - val_tp: 72.0000 - val_fp: 713.0000 - val_tn: 44773.0000 - val_fn: 11.0000 - val_accuracy: 0.9841 - val_precision: 0.0917 - val_recall: 0.8675 - val_auc: 0.9725 Epoch 6/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.3115 - tp: 255.0000 - fp: 6366.0000 - tn: 175606.0000 - fn: 49.0000 - accuracy: 0.9648 - precision: 0.0385 - recall: 0.8388 - auc: 0.9477 - val_loss: 0.2243 - val_tp: 72.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 11.0000 - val_accuracy: 0.9829 - val_precision: 0.0857 - val_recall: 0.8675 - val_auc: 0.9728 Epoch 7/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.3179 - tp: 258.0000 - fp: 6804.0000 - tn: 175168.0000 - fn: 46.0000 - accuracy: 0.9624 - precision: 0.0365 - recall: 0.8487 - auc: 0.9435 - val_loss: 0.2165 - val_tp: 72.0000 - val_fp: 812.0000 - val_tn: 44674.0000 - val_fn: 11.0000 - val_accuracy: 0.9819 - val_precision: 0.0814 - val_recall: 0.8675 - val_auc: 0.9739 Epoch 8/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2880 - tp: 260.0000 - fp: 6669.0000 - tn: 175303.0000 - fn: 44.0000 - accuracy: 0.9632 - precision: 0.0375 - recall: 0.8553 - auc: 0.9530 - val_loss: 0.2122 - val_tp: 72.0000 - val_fp: 783.0000 - val_tn: 44703.0000 - val_fn: 11.0000 - val_accuracy: 0.9826 - val_precision: 0.0842 - val_recall: 0.8675 - val_auc: 0.9769 Epoch 9/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2676 - tp: 262.0000 - fp: 6904.0000 - 
tn: 175068.0000 - fn: 42.0000 - accuracy: 0.9619 - precision: 0.0366 - recall: 0.8618 - auc: 0.9594 - val_loss: 0.2056 - val_tp: 72.0000 - val_fp: 855.0000 - val_tn: 44631.0000 - val_fn: 11.0000 - val_accuracy: 0.9810 - val_precision: 0.0777 - val_recall: 0.8675 - val_auc: 0.9750 Epoch 10/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2498 - tp: 266.0000 - fp: 6833.0000 - tn: 175139.0000 - fn: 38.0000 - accuracy: 0.9623 - precision: 0.0375 - recall: 0.8750 - auc: 0.9593 - val_loss: 0.2001 - val_tp: 73.0000 - val_fp: 840.0000 - val_tn: 44646.0000 - val_fn: 10.0000 - val_accuracy: 0.9813 - val_precision: 0.0800 - val_recall: 0.8795 - val_auc: 0.9761 Epoch 11/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2681 - tp: 262.0000 - fp: 6845.0000 - tn: 175127.0000 - fn: 42.0000 - accuracy: 0.9622 - precision: 0.0369 - recall: 0.8618 - auc: 0.9559 - val_loss: 0.1964 - val_tp: 73.0000 - val_fp: 865.0000 - val_tn: 44621.0000 - val_fn: 10.0000 - val_accuracy: 0.9808 - val_precision: 0.0778 - val_recall: 0.8795 - val_auc: 0.9768 Epoch 12/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2406 - tp: 268.0000 - fp: 7070.0000 - tn: 174902.0000 - fn: 36.0000 - accuracy: 0.9610 - precision: 0.0365 - recall: 0.8816 - auc: 0.9646 - val_loss: 0.1940 - val_tp: 73.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 10.0000 - val_accuracy: 0.9812 - val_precision: 0.0793 - val_recall: 0.8795 - val_auc: 0.9771 Epoch 13/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2285 - tp: 269.0000 - fp: 6976.0000 - tn: 174996.0000 - fn: 35.0000 - accuracy: 0.9615 - precision: 0.0371 - recall: 0.8849 - auc: 0.9680 - val_loss: 0.1930 - val_tp: 73.0000 - val_fp: 857.0000 - val_tn: 44629.0000 - val_fn: 10.0000 - val_accuracy: 0.9810 - val_precision: 0.0785 - val_recall: 0.8795 - val_auc: 0.9772 Epoch 14/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2322 - tp: 268.0000 - fp: 6718.0000 - tn: 175254.0000 - fn: 36.0000 - accuracy: 0.9629 - precision: 0.0384 - recall: 0.8816 - auc: 0.9644 - val_loss: 0.1915 - val_tp: 73.0000 - val_fp: 808.0000 - val_tn: 44678.0000 - val_fn: 10.0000 - val_accuracy: 0.9820 - val_precision: 0.0829 - val_recall: 0.8795 - val_auc: 0.9781 Epoch 15/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2631 - tp: 267.0000 - fp: 6578.0000 - tn: 175394.0000 - fn: 37.0000 - accuracy: 0.9637 - precision: 0.0390 - recall: 0.8783 - auc: 0.9551 - val_loss: 0.1900 - val_tp: 73.0000 - val_fp: 803.0000 - val_tn: 44683.0000 - val_fn: 10.0000 - val_accuracy: 0.9822 - val_precision: 0.0833 - val_recall: 0.8795 - val_auc: 0.9781 Epoch 16/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2314 - tp: 266.0000 - fp: 6644.0000 - tn: 175328.0000 - fn: 38.0000 - accuracy: 0.9633 - precision: 0.0385 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 806.0000 - val_tn: 44680.0000 - val_fn: 10.0000 - val_accuracy: 0.9821 - val_precision: 0.0830 - val_recall: 0.8795 - val_auc: 0.9784 Epoch 17/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2152 - tp: 271.0000 - fp: 6663.0000 - tn: 175309.0000 - fn: 33.0000 - accuracy: 0.9633 - precision: 0.0391 - recall: 0.8914 - auc: 0.9687 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 754.0000 - val_tn: 44732.0000 - val_fn: 10.0000 - val_accuracy: 0.9832 - val_precision: 0.0883 - val_recall: 0.8795 - val_auc: 0.9785 Epoch 18/100 182276/182276 
[==============================] - 1s 4us/sample - loss: 0.2420 - tp: 264.0000 - fp: 6535.0000 - tn: 175437.0000 - fn: 40.0000 - accuracy: 0.9639 - precision: 0.0388 - recall: 0.8684 - auc: 0.9610 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 749.0000 - val_tn: 44737.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0888 - val_recall: 0.8795 - val_auc: 0.9786 Epoch 19/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2279 - tp: 268.0000 - fp: 6443.0000 - tn: 175529.0000 - fn: 36.0000 - accuracy: 0.9645 - precision: 0.0399 - recall: 0.8816 - auc: 0.9672 - val_loss: 0.1895 - val_tp: 73.0000 - val_fp: 763.0000 - val_tn: 44723.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0873 - val_recall: 0.8795 - val_auc: 0.9788 Epoch 20/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2247 - tp: 267.0000 - fp: 6596.0000 - tn: 175376.0000 - fn: 37.0000 - accuracy: 0.9636 - precision: 0.0389 - recall: 0.8783 - auc: 0.9684 - val_loss: 0.1896 - val_tp: 73.0000 - val_fp: 760.0000 - val_tn: 44726.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0876 - val_recall: 0.8795 - val_auc: 0.9797 Epoch 21/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2296 - tp: 269.0000 - fp: 6562.0000 - tn: 175410.0000 - fn: 35.0000 - accuracy: 0.9638 - precision: 0.0394 - recall: 0.8849 - auc: 0.9656 - val_loss: 0.1889 - val_tp: 73.0000 - val_fp: 750.0000 - val_tn: 44736.0000 - val_fn: 10.0000 - val_accuracy: 0.9833 - val_precision: 0.0887 - val_recall: 0.8795 - val_auc: 0.9797 Epoch 22/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1982 - tp: 271.0000 - fp: 6583.0000 - tn: 175389.0000 - fn: 33.0000 - accuracy: 0.9637 - precision: 0.0395 - recall: 0.8914 - auc: 0.9756 - val_loss: 0.1879 - val_tp: 73.0000 - val_fp: 764.0000 - val_tn: 44722.0000 - val_fn: 10.0000 - val_accuracy: 0.9830 - val_precision: 0.0872 - val_recall: 0.8795 - val_auc: 0.9777 Epoch 23/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2154 - tp: 273.0000 - fp: 6552.0000 - tn: 175420.0000 - fn: 31.0000 - accuracy: 0.9639 - precision: 0.0400 - recall: 0.8980 - auc: 0.9682 - val_loss: 0.1882 - val_tp: 73.0000 - val_fp: 762.0000 - val_tn: 44724.0000 - val_fn: 10.0000 - val_accuracy: 0.9831 - val_precision: 0.0874 - val_recall: 0.8795 - val_auc: 0.9779 Epoch 24/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1861 - tp: 272.0000 - fp: 6248.0000 - tn: 175724.0000 - fn: 32.0000 - accuracy: 0.9655 - precision: 0.0417 - recall: 0.8947 - auc: 0.9779 - val_loss: 0.1885 - val_tp: 73.0000 - val_fp: 772.0000 - val_tn: 44714.0000 - val_fn: 10.0000 - val_accuracy: 0.9828 - val_precision: 0.0864 - val_recall: 0.8795 - val_auc: 0.9785 Epoch 25/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1953 - tp: 270.0000 - fp: 6501.0000 - tn: 175471.0000 - fn: 34.0000 - accuracy: 0.9641 - precision: 0.0399 - recall: 0.8882 - auc: 0.9751 - val_loss: 0.1877 - val_tp: 73.0000 - val_fp: 768.0000 - val_tn: 44718.0000 - val_fn: 10.0000 - val_accuracy: 0.9829 - val_precision: 0.0868 - val_recall: 0.8795 - val_auc: 0.9786 Epoch 26/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1704 - tp: 277.0000 - fp: 6215.0000 - tn: 175757.0000 - fn: 27.0000 - accuracy: 0.9658 - precision: 0.0427 - recall: 0.9112 - auc: 0.9808 - val_loss: 0.1903 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 
- val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9788 Epoch 27/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1946 - tp: 271.0000 - fp: 6036.0000 - tn: 175936.0000 - fn: 33.0000 - accuracy: 0.9667 - precision: 0.0430 - recall: 0.8914 - auc: 0.9748 - val_loss: 0.1908 - val_tp: 73.0000 - val_fp: 692.0000 - val_tn: 44794.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0954 - val_recall: 0.8795 - val_auc: 0.9786 Epoch 28/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2115 - tp: 271.0000 - fp: 5873.0000 - tn: 176099.0000 - fn: 33.0000 - accuracy: 0.9676 - precision: 0.0441 - recall: 0.8914 - auc: 0.9688 - val_loss: 0.1914 - val_tp: 73.0000 - val_fp: 691.0000 - val_tn: 44795.0000 - val_fn: 10.0000 - val_accuracy: 0.9846 - val_precision: 0.0955 - val_recall: 0.8795 - val_auc: 0.9785 Epoch 29/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2237 - tp: 266.0000 - fp: 6047.0000 - tn: 175925.0000 - fn: 38.0000 - accuracy: 0.9666 - precision: 0.0421 - recall: 0.8750 - auc: 0.9672 - val_loss: 0.1909 - val_tp: 73.0000 - val_fp: 698.0000 - val_tn: 44788.0000 - val_fn: 10.0000 - val_accuracy: 0.9845 - val_precision: 0.0947 - val_recall: 0.8795 - val_auc: 0.9784 Epoch 30/100 182276/182276 [==============================] - 1s 4us/sample - loss: 0.2232 - tp: 272.0000 - fp: 5990.0000 - tn: 175982.0000 - fn: 32.0000 - accuracy: 0.9670 - precision: 0.0434 - recall: 0.8947 - auc: 0.9668 - val_loss: 0.1919 - val_tp: 73.0000 - val_fp: 642.0000 - val_tn: 44844.0000 - val_fn: 10.0000 - val_accuracy: 0.9857 - val_precision: 0.1021 - val_recall: 0.8795 - val_auc: 0.9785 Epoch 31/100 178176/182276 [============================>.] - ETA: 0s - loss: 0.2022 - tp: 273.0000 - fp: 5659.0000 - tn: 172216.0000 - fn: 28.0000 - accuracy: 0.9681 - precision: 0.0460 - recall: 0.9070 - auc: 0.9705Restoring model weights from the end of the best epoch. 182276/182276 [==============================] - 1s 4us/sample - loss: 0.1989 - tp: 276.0000 - fp: 5796.0000 - tn: 176176.0000 - fn: 28.0000 - accuracy: 0.9680 - precision: 0.0455 - recall: 0.9079 - auc: 0.9708 - val_loss: 0.1920 - val_tp: 73.0000 - val_fp: 626.0000 - val_tn: 44860.0000 - val_fn: 10.0000 - val_accuracy: 0.9860 - val_precision: 0.1044 - val_recall: 0.8795 - val_auc: 0.9788 Epoch 00031: early stopping
Check training history
plot_metrics(weighted_history)
Evaluate metrics
train_predictions_weighted = weighted_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_weighted = weighted_model.predict(test_features, batch_size=BATCH_SIZE)

weighted_results = weighted_model.evaluate(test_features, test_labels,
                                           batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(weighted_model.metrics_names, weighted_results):
    print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_weighted)
loss : 0.06950428275801711 tp : 94.0 fp : 905.0 tn : 55952.0 fn : 11.0 accuracy : 0.9839191 precision : 0.0940941 recall : 0.8952381 auc : 0.9844724 Legitimate Transactions Detected (True Negatives): 55952 Legitimate Transactions Incorrectly Detected (False Positives): 905 Fraudulent Transactions Missed (False Negatives): 11 Fraudulent Transactions Detected (True Positives): 94 Total Fraudulent Transactions: 105
Here you can see that with class weights the accuracy and precision are lower because there are more false positives, but conversely the recall and AUC are higher because the model also found more true positives. Despite having lower accuracy, this model has higher recall (and identifies more fraudulent transactions). Of course, there is a cost to both types of error (you wouldn't want to bug users by flagging too many legitimate transactions as fraudulent, either). Carefully consider the trade-offs between these different types of errors for your application.

Plot the ROC
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plt.legend(loc='lower right')
Oversampling

Oversample the minority class

A related approach would be to resample the dataset by oversampling the minority class.
pos_features = train_features[bool_train_labels]
neg_features = train_features[~bool_train_labels]

pos_labels = train_labels[bool_train_labels]
neg_labels = train_labels[~bool_train_labels]
Using NumPy

You can balance the dataset manually by choosing the right number of random indices from the positive examples:
ids = np.arange(len(pos_features))
choices = np.random.choice(ids, len(neg_features))

res_pos_features = pos_features[choices]
res_pos_labels = pos_labels[choices]

res_pos_features.shape

resampled_features = np.concatenate([res_pos_features, neg_features], axis=0)
resampled_labels = np.concatenate([res_pos_labels, neg_labels], axis=0)

order = np.arange(len(resampled_labels))
np.random.shuffle(order)
resampled_features = resampled_features[order]
resampled_labels = resampled_labels[order]

resampled_features.shape
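As a quick sanity check (a small addition; it assumes the `resampled_labels` array built above), counting the labels should show the two classes at roughly equal size:

# Count how many examples of each class the resampled training set contains.
unique, counts = np.unique(resampled_labels, return_counts=True)
print(dict(zip(unique, counts)))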
Using `tf.data`

If you're using `tf.data`, the easiest way to produce balanced examples is to start with a `positive` and a `negative` dataset, and merge them. See [the tf.data guide](../../guide/data.ipynb) for more examples.
BUFFER_SIZE = 100000

def make_ds(features, labels):
    ds = tf.data.Dataset.from_tensor_slices((features, labels))  # .cache()
    ds = ds.shuffle(BUFFER_SIZE).repeat()
    return ds

pos_ds = make_ds(pos_features, pos_labels)
neg_ds = make_ds(neg_features, neg_labels)
Each dataset provides `(feature, label)` pairs:
for features, label in pos_ds.take(1):
    print("Features:\n", features.numpy())
    print()
    print("Label: ", label.numpy())
Features: [-2.46955933 3.42534191 -4.42937043 3.70651659 -3.17895499 -1.30458304 -5. 2.86676917 -4.9308611 -5. 3.58555137 -5. 1.51535494 -5. 0.01049775 -5. -5. -5. 2.02380731 0.36595419 1.61836304 -1.16743779 0.31324117 -0.35515978 -0.62579636 -0.55952005 0.51255883 1.15454727 0.87478003] Label: 1
Merge the two together using `experimental.sample_from_datasets`:
resampled_ds = tf.data.experimental.sample_from_datasets([pos_ds, neg_ds], weights=[0.5, 0.5])
resampled_ds = resampled_ds.batch(BATCH_SIZE).prefetch(2)

for features, label in resampled_ds.take(1):
    print(label.numpy().mean())
0.48974609375
To use this dataset, you'll need the number of steps per epoch. The definition of "epoch" in this case is less clear. Say it's the number of batches required to see each negative example once:
resampled_steps_per_epoch = np.ceil(2.0 * neg / BATCH_SIZE)
resampled_steps_per_epoch
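To make the definition concrete: with `BATCH_SIZE = 2048` and roughly 284,315 negative examples, the values used in the standard version of this notebook (both defined earlier and assumed here), this works out to `ceil(2 * 284315 / 2048) = 278` steps, matching the "Train for 278.0 steps" line in the log below.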
Train on the oversampled data

Now try training the model with the resampled data set instead of using class weights to see how these methods compare.

Note: Because the data was balanced by replicating the positive examples, the total dataset size is larger, and each epoch runs for more training steps.
resampled_model = make_model()
resampled_model.load_weights(initial_weights)

# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])

val_ds = tf.data.Dataset.from_tensor_slices((val_features, val_labels)).cache()
val_ds = val_ds.batch(BATCH_SIZE).prefetch(2)

resampled_history = resampled_model.fit(
    resampled_ds,
    epochs=EPOCHS,
    steps_per_epoch=resampled_steps_per_epoch,
    callbacks=[early_stopping],
    validation_data=val_ds)
Train for 278.0 steps, validate for 23 steps Epoch 1/100 278/278 [==============================] - 13s 48ms/step - loss: 0.4624 - tp: 267186.0000 - fp: 124224.0000 - tn: 160439.0000 - fn: 17495.0000 - accuracy: 0.7511 - precision: 0.6826 - recall: 0.9385 - auc: 0.9268 - val_loss: 0.3299 - val_tp: 79.0000 - val_fp: 2825.0000 - val_tn: 42661.0000 - val_fn: 4.0000 - val_accuracy: 0.9379 - val_precision: 0.0272 - val_recall: 0.9518 - val_auc: 0.9799 Epoch 2/100 278/278 [==============================] - 11s 39ms/step - loss: 0.2362 - tp: 264077.0000 - fp: 26654.0000 - tn: 257570.0000 - fn: 21043.0000 - accuracy: 0.9162 - precision: 0.9083 - recall: 0.9262 - auc: 0.9708 - val_loss: 0.1926 - val_tp: 75.0000 - val_fp: 1187.0000 - val_tn: 44299.0000 - val_fn: 8.0000 - val_accuracy: 0.9738 - val_precision: 0.0594 - val_recall: 0.9036 - val_auc: 0.9779 Epoch 3/100 278/278 [==============================] - 11s 40ms/step - loss: 0.1887 - tp: 263490.0000 - fp: 12935.0000 - tn: 271381.0000 - fn: 21538.0000 - accuracy: 0.9395 - precision: 0.9532 - recall: 0.9244 - auc: 0.9804 - val_loss: 0.1373 - val_tp: 75.0000 - val_fp: 1064.0000 - val_tn: 44422.0000 - val_fn: 8.0000 - val_accuracy: 0.9765 - val_precision: 0.0658 - val_recall: 0.9036 - val_auc: 0.9778 Epoch 4/100 278/278 [==============================] - 11s 41ms/step - loss: 0.1605 - tp: 263933.0000 - fp: 10513.0000 - tn: 274505.0000 - fn: 20393.0000 - accuracy: 0.9457 - precision: 0.9617 - recall: 0.9283 - auc: 0.9866 - val_loss: 0.1078 - val_tp: 75.0000 - val_fp: 1070.0000 - val_tn: 44416.0000 - val_fn: 8.0000 - val_accuracy: 0.9763 - val_precision: 0.0655 - val_recall: 0.9036 - val_auc: 0.9783 Epoch 5/100 278/278 [==============================] - 11s 39ms/step - loss: 0.1423 - tp: 265715.0000 - fp: 9592.0000 - tn: 275145.0000 - fn: 18892.0000 - accuracy: 0.9500 - precision: 0.9652 - recall: 0.9336 - auc: 0.9901 - val_loss: 0.0928 - val_tp: 75.0000 - val_fp: 1051.0000 - val_tn: 44435.0000 - val_fn: 8.0000 - val_accuracy: 0.9768 - val_precision: 0.0666 - val_recall: 0.9036 - val_auc: 0.9762 Epoch 6/100 278/278 [==============================] - 11s 40ms/step - loss: 0.1297 - tp: 267181.0000 - fp: 8944.0000 - tn: 275445.0000 - fn: 17774.0000 - accuracy: 0.9531 - precision: 0.9676 - recall: 0.9376 - auc: 0.9920 - val_loss: 0.0847 - val_tp: 75.0000 - val_fp: 1077.0000 - val_tn: 44409.0000 - val_fn: 8.0000 - val_accuracy: 0.9762 - val_precision: 0.0651 - val_recall: 0.9036 - val_auc: 0.9748 Epoch 7/100 278/278 [==============================] - 11s 39ms/step - loss: 0.1203 - tp: 267440.0000 - fp: 8606.0000 - tn: 276459.0000 - fn: 16839.0000 - accuracy: 0.9553 - precision: 0.9688 - recall: 0.9408 - auc: 0.9933 - val_loss: 0.0775 - val_tp: 75.0000 - val_fp: 1003.0000 - val_tn: 44483.0000 - val_fn: 8.0000 - val_accuracy: 0.9778 - val_precision: 0.0696 - val_recall: 0.9036 - val_auc: 0.9742 Epoch 8/100 278/278 [==============================] - 11s 40ms/step - loss: 0.1132 - tp: 268799.0000 - fp: 8165.0000 - tn: 276260.0000 - fn: 16120.0000 - accuracy: 0.9573 - precision: 0.9705 - recall: 0.9434 - auc: 0.9941 - val_loss: 0.0716 - val_tp: 75.0000 - val_fp: 927.0000 - val_tn: 44559.0000 - val_fn: 8.0000 - val_accuracy: 0.9795 - val_precision: 0.0749 - val_recall: 0.9036 - val_auc: 0.9713 Epoch 9/100 278/278 [==============================] - 11s 40ms/step - loss: 0.1074 - tp: 269627.0000 - fp: 7971.0000 - tn: 276559.0000 - fn: 15187.0000 - accuracy: 0.9593 - precision: 0.9713 - recall: 0.9467 - auc: 0.9947 - val_loss: 0.0670 - val_tp: 75.0000 - val_fp: 
880.0000 - val_tn: 44606.0000 - val_fn: 8.0000 - val_accuracy: 0.9805 - val_precision: 0.0785 - val_recall: 0.9036 - val_auc: 0.9713 Epoch 10/100 278/278 [==============================] - 11s 39ms/step - loss: 0.1017 - tp: 270359.0000 - fp: 7590.0000 - tn: 277311.0000 - fn: 14084.0000 - accuracy: 0.9619 - precision: 0.9727 - recall: 0.9505 - auc: 0.9952 - val_loss: 0.0629 - val_tp: 75.0000 - val_fp: 848.0000 - val_tn: 44638.0000 - val_fn: 8.0000 - val_accuracy: 0.9812 - val_precision: 0.0813 - val_recall: 0.9036 - val_auc: 0.9717 Epoch 11/100 276/278 [============================>.] - ETA: 0s - loss: 0.0977 - tp: 269672.0000 - fp: 7408.0000 - tn: 274621.0000 - fn: 13547.0000 - accuracy: 0.9629 - precision: 0.9733 - recall: 0.9522 - auc: 0.9955Restoring model weights from the end of the best epoch. 278/278 [==============================] - 11s 39ms/step - loss: 0.0978 - tp: 271609.0000 - fp: 7474.0000 - tn: 276625.0000 - fn: 13636.0000 - accuracy: 0.9629 - precision: 0.9732 - recall: 0.9522 - auc: 0.9955 - val_loss: 0.0615 - val_tp: 75.0000 - val_fp: 841.0000 - val_tn: 44645.0000 - val_fn: 8.0000 - val_accuracy: 0.9814 - val_precision: 0.0819 - val_recall: 0.9036 - val_auc: 0.9637 Epoch 00011: early stopping
If the training process were considering the whole dataset on each gradient update, this oversampling would be basically identical to the class weighting. But when training the model batch-wise, as you did here, the oversampled data provides a smoother gradient signal: instead of each positive example being shown in one batch with a large weight, they're shown in many different batches each time with a small weight. This smoother gradient signal makes it easier to train the model.

Check training history

Note that the distributions of metrics will be different here, because the training data has a totally different distribution from the validation and test data.
plot_metrics(resampled_history)
Re-train

Because training is easier on the balanced data, the above training procedure may overfit quickly. So break up the epochs to give the `callbacks.EarlyStopping` finer control over when to stop training.
resampled_model = make_model()
resampled_model.load_weights(initial_weights)

# Reset the bias to zero, since this dataset is balanced.
output_layer = resampled_model.layers[-1]
output_layer.bias.assign([0])

resampled_history = resampled_model.fit(
    resampled_ds,
    # These are not real epochs
    steps_per_epoch=20,
    epochs=10 * EPOCHS,
    callbacks=[early_stopping],
    validation_data=(val_ds))
Train for 20 steps, validate for 23 steps Epoch 1/1000 20/20 [==============================] - 4s 181ms/step - loss: 0.8800 - tp: 18783.0000 - fp: 16378.0000 - tn: 4036.0000 - fn: 1763.0000 - accuracy: 0.5571 - precision: 0.5342 - recall: 0.9142 - auc: 0.7752 - val_loss: 1.3661 - val_tp: 83.0000 - val_fp: 40065.0000 - val_tn: 5421.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1208 - val_precision: 0.0021 - val_recall: 1.0000 - val_auc: 0.9425 Epoch 2/1000 20/20 [==============================] - 1s 35ms/step - loss: 0.7378 - tp: 19613.0000 - fp: 15282.0000 - tn: 5187.0000 - fn: 878.0000 - accuracy: 0.6055 - precision: 0.5621 - recall: 0.9572 - auc: 0.8680 - val_loss: 1.1629 - val_tp: 83.0000 - val_fp: 36851.0000 - val_tn: 8635.0000 - val_fn: 0.0000e+00 - val_accuracy: 0.1913 - val_precision: 0.0022 - val_recall: 1.0000 - val_auc: 0.9580 Epoch 3/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.6431 - tp: 19522.0000 - fp: 13990.0000 - tn: 6558.0000 - fn: 890.0000 - accuracy: 0.6367 - precision: 0.5825 - recall: 0.9564 - auc: 0.8950 - val_loss: 0.9853 - val_tp: 82.0000 - val_fp: 32268.0000 - val_tn: 13218.0000 - val_fn: 1.0000 - val_accuracy: 0.2919 - val_precision: 0.0025 - val_recall: 0.9880 - val_auc: 0.9660 Epoch 4/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.5563 - tp: 19488.0000 - fp: 12475.0000 - tn: 8032.0000 - fn: 965.0000 - accuracy: 0.6719 - precision: 0.6097 - recall: 0.9528 - auc: 0.9135 - val_loss: 0.8430 - val_tp: 82.0000 - val_fp: 26633.0000 - val_tn: 18853.0000 - val_fn: 1.0000 - val_accuracy: 0.4155 - val_precision: 0.0031 - val_recall: 0.9880 - val_auc: 0.9713 Epoch 5/1000 20/20 [==============================] - 1s 37ms/step - loss: 0.4984 - tp: 19489.0000 - fp: 11049.0000 - tn: 9377.0000 - fn: 1045.0000 - accuracy: 0.7047 - precision: 0.6382 - recall: 0.9491 - auc: 0.9242 - val_loss: 0.7307 - val_tp: 82.0000 - val_fp: 20850.0000 - val_tn: 24636.0000 - val_fn: 1.0000 - val_accuracy: 0.5424 - val_precision: 0.0039 - val_recall: 0.9880 - val_auc: 0.9753 Epoch 6/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.4463 - tp: 19305.0000 - fp: 9622.0000 - tn: 10895.0000 - fn: 1138.0000 - accuracy: 0.7373 - precision: 0.6674 - recall: 0.9443 - auc: 0.9336 - val_loss: 0.6405 - val_tp: 82.0000 - val_fp: 15843.0000 - val_tn: 29643.0000 - val_fn: 1.0000 - val_accuracy: 0.6523 - val_precision: 0.0051 - val_recall: 0.9880 - val_auc: 0.9773 Epoch 7/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.4121 - tp: 19365.0000 - fp: 8524.0000 - tn: 11931.0000 - fn: 1140.0000 - accuracy: 0.7641 - precision: 0.6944 - recall: 0.9444 - auc: 0.9411 - val_loss: 0.5691 - val_tp: 82.0000 - val_fp: 11981.0000 - val_tn: 33505.0000 - val_fn: 1.0000 - val_accuracy: 0.7371 - val_precision: 0.0068 - val_recall: 0.9880 - val_auc: 0.9787 Epoch 8/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.3784 - tp: 19242.0000 - fp: 7375.0000 - tn: 13072.0000 - fn: 1271.0000 - accuracy: 0.7889 - precision: 0.7229 - recall: 0.9380 - auc: 0.9461 - val_loss: 0.5120 - val_tp: 80.0000 - val_fp: 9309.0000 - val_tn: 36177.0000 - val_fn: 3.0000 - val_accuracy: 0.7957 - val_precision: 0.0085 - val_recall: 0.9639 - val_auc: 0.9794 Epoch 9/1000 20/20 [==============================] - 1s 45ms/step - loss: 0.3551 - tp: 19106.0000 - fp: 6529.0000 - tn: 13989.0000 - fn: 1336.0000 - accuracy: 0.8080 - precision: 0.7453 - recall: 0.9346 - auc: 0.9495 - val_loss: 0.4657 - val_tp: 80.0000 - val_fp: 7354.0000 - val_tn: 38132.0000 - val_fn: 
3.0000 - val_accuracy: 0.8386 - val_precision: 0.0108 - val_recall: 0.9639 - val_auc: 0.9799 Epoch 10/1000 20/20 [==============================] - 1s 38ms/step - loss: 0.3350 - tp: 19149.0000 - fp: 5794.0000 - tn: 14698.0000 - fn: 1319.0000 - accuracy: 0.8263 - precision: 0.7677 - recall: 0.9356 - auc: 0.9535 - val_loss: 0.4275 - val_tp: 80.0000 - val_fp: 5832.0000 - val_tn: 39654.0000 - val_fn: 3.0000 - val_accuracy: 0.8720 - val_precision: 0.0135 - val_recall: 0.9639 - val_auc: 0.9802 Epoch 11/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.3168 - tp: 19224.0000 - fp: 5013.0000 - tn: 15322.0000 - fn: 1401.0000 - accuracy: 0.8434 - precision: 0.7932 - recall: 0.9321 - auc: 0.9552 - val_loss: 0.3969 - val_tp: 80.0000 - val_fp: 4730.0000 - val_tn: 40756.0000 - val_fn: 3.0000 - val_accuracy: 0.8961 - val_precision: 0.0166 - val_recall: 0.9639 - val_auc: 0.9805 Epoch 12/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.3077 - tp: 19028.0000 - fp: 4564.0000 - tn: 16058.0000 - fn: 1310.0000 - accuracy: 0.8566 - precision: 0.8065 - recall: 0.9356 - auc: 0.9593 - val_loss: 0.3695 - val_tp: 80.0000 - val_fp: 3819.0000 - val_tn: 41667.0000 - val_fn: 3.0000 - val_accuracy: 0.9161 - val_precision: 0.0205 - val_recall: 0.9639 - val_auc: 0.9804 Epoch 13/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.2936 - tp: 19047.0000 - fp: 4028.0000 - tn: 16444.0000 - fn: 1441.0000 - accuracy: 0.8665 - precision: 0.8254 - recall: 0.9297 - auc: 0.9597 - val_loss: 0.3461 - val_tp: 79.0000 - val_fp: 3149.0000 - val_tn: 42337.0000 - val_fn: 4.0000 - val_accuracy: 0.9308 - val_precision: 0.0245 - val_recall: 0.9518 - val_auc: 0.9802 Epoch 14/1000 20/20 [==============================] - 1s 38ms/step - loss: 0.2829 - tp: 19087.0000 - fp: 3596.0000 - tn: 16855.0000 - fn: 1422.0000 - accuracy: 0.8775 - precision: 0.8415 - recall: 0.9307 - auc: 0.9619 - val_loss: 0.3266 - val_tp: 79.0000 - val_fp: 2691.0000 - val_tn: 42795.0000 - val_fn: 4.0000 - val_accuracy: 0.9409 - val_precision: 0.0285 - val_recall: 0.9518 - val_auc: 0.9803 Epoch 15/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.2748 - tp: 19020.0000 - fp: 3174.0000 - tn: 17283.0000 - fn: 1483.0000 - accuracy: 0.8863 - precision: 0.8570 - recall: 0.9277 - auc: 0.9627 - val_loss: 0.3095 - val_tp: 79.0000 - val_fp: 2360.0000 - val_tn: 43126.0000 - val_fn: 4.0000 - val_accuracy: 0.9481 - val_precision: 0.0324 - val_recall: 0.9518 - val_auc: 0.9797 Epoch 16/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.2666 - tp: 18890.0000 - fp: 2889.0000 - tn: 17757.0000 - fn: 1424.0000 - accuracy: 0.8947 - precision: 0.8673 - recall: 0.9299 - auc: 0.9653 - val_loss: 0.2945 - val_tp: 78.0000 - val_fp: 2101.0000 - val_tn: 43385.0000 - val_fn: 5.0000 - val_accuracy: 0.9538 - val_precision: 0.0358 - val_recall: 0.9398 - val_auc: 0.9796 Epoch 17/1000 20/20 [==============================] - 1s 38ms/step - loss: 0.2583 - tp: 18959.0000 - fp: 2517.0000 - tn: 17973.0000 - fn: 1511.0000 - accuracy: 0.9017 - precision: 0.8828 - recall: 0.9262 - auc: 0.9657 - val_loss: 0.2817 - val_tp: 78.0000 - val_fp: 1929.0000 - val_tn: 43557.0000 - val_fn: 5.0000 - val_accuracy: 0.9576 - val_precision: 0.0389 - val_recall: 0.9398 - val_auc: 0.9794 Epoch 18/1000 20/20 [==============================] - 1s 46ms/step - loss: 0.2511 - tp: 19104.0000 - fp: 2344.0000 - tn: 18043.0000 - fn: 1469.0000 - accuracy: 0.9069 - precision: 0.8907 - recall: 0.9286 - auc: 0.9678 - val_loss: 0.2704 - val_tp: 78.0000 - 
val_fp: 1787.0000 - val_tn: 43699.0000 - val_fn: 5.0000 - val_accuracy: 0.9607 - val_precision: 0.0418 - val_recall: 0.9398 - val_auc: 0.9793 Epoch 19/1000 20/20 [==============================] - 1s 40ms/step - loss: 0.2445 - tp: 19183.0000 - fp: 2087.0000 - tn: 18215.0000 - fn: 1475.0000 - accuracy: 0.9130 - precision: 0.9019 - recall: 0.9286 - auc: 0.9693 - val_loss: 0.2598 - val_tp: 78.0000 - val_fp: 1665.0000 - val_tn: 43821.0000 - val_fn: 5.0000 - val_accuracy: 0.9634 - val_precision: 0.0448 - val_recall: 0.9398 - val_auc: 0.9791 Epoch 20/1000 20/20 [==============================] - 1s 39ms/step - loss: 0.2373 - tp: 18995.0000 - fp: 1906.0000 - tn: 18602.0000 - fn: 1457.0000 - accuracy: 0.9179 - precision: 0.9088 - recall: 0.9288 - auc: 0.9712 - val_loss: 0.2500 - val_tp: 78.0000 - val_fp: 1587.0000 - val_tn: 43899.0000 - val_fn: 5.0000 - val_accuracy: 0.9651 - val_precision: 0.0468 - val_recall: 0.9398 - val_auc: 0.9788 Epoch 21/1000 19/20 [===========================>..] - ETA: 0s - loss: 0.2378 - tp: 18121.0000 - fp: 1821.0000 - tn: 17599.0000 - fn: 1371.0000 - accuracy: 0.9180 - precision: 0.9087 - recall: 0.9297 - auc: 0.9714Restoring model weights from the end of the best epoch. 20/20 [==============================] - 1s 40ms/step - loss: 0.2376 - tp: 19083.0000 - fp: 1918.0000 - tn: 18513.0000 - fn: 1446.0000 - accuracy: 0.9179 - precision: 0.9087 - recall: 0.9296 - auc: 0.9714 - val_loss: 0.2401 - val_tp: 78.0000 - val_fp: 1485.0000 - val_tn: 44001.0000 - val_fn: 5.0000 - val_accuracy: 0.9673 - val_precision: 0.0499 - val_recall: 0.9398 - val_auc: 0.9785 Epoch 00021: early stopping
Re-check training history
plot_metrics(resampled_history)
Evaluate metrics
train_predictions_resampled = resampled_model.predict(train_features, batch_size=BATCH_SIZE)
test_predictions_resampled = resampled_model.predict(test_features, batch_size=BATCH_SIZE)

resampled_results = resampled_model.evaluate(test_features, test_labels,
                                             batch_size=BATCH_SIZE, verbose=0)
for name, value in zip(resampled_model.metrics_names, resampled_results):
    print(name, ': ', value)
print()

plot_cm(test_labels, test_predictions_resampled)
loss : 0.3960801533448772 tp : 99.0 fp : 5892.0 tn : 50965.0 fn : 6.0 accuracy : 0.8964573 precision : 0.016524788 recall : 0.94285715 auc : 0.9804354 Legitimate Transactions Detected (True Negatives): 50965 Legitimate Transactions Incorrectly Detected (False Positives): 5892 Fraudulent Transactions Missed (False Negatives): 6 Fraudulent Transactions Detected (True Positives): 99 Total Fraudulent Transactions: 105
Plot the ROC
plot_roc("Train Baseline", train_labels, train_predictions_baseline, color=colors[0]) plot_roc("Test Baseline", test_labels, test_predictions_baseline, color=colors[0], linestyle='--') plot_roc("Train Weighted", train_labels, train_predictions_weighted, color=colors[1]) plot_roc("Test Weighted", test_labels, test_predictions_weighted, color=colors[1], linestyle='--') plot_roc("Train Resampled", train_labels, train_predictions_resampled, color=colors[2]) plot_roc("Test Resampled", test_labels, test_predictions_resampled, color=colors[2], linestyle='--') plt.legend(loc='lower right')
Assignment 01: Evaluate the FAA Dataset

*The comments/sections provided are your cues to perform the assignment. You don't need to limit yourself to the number of rows/cells provided. You can add additional rows in each section to add more lines of code.*

*If at any point in time you need help on solving this assignment, view our demo video to understand the different steps of the code.*

***Happy coding!***

* * *

1: View and import the dataset
# Import necessary libraries
import pandas as pd

# Import the FAA (Federal Aviation Authority) dataset
df_faa_dataset = pd.read_csv("D:/COURSES/Artificial Intellegence Engineer/Data Analytics With Python/Analyse the Federal Aviation Authority Dataset using Pandas/WORK DONE/faa_ai_prelim.csv")
2: View and understand the dataset
# View the dataset shape
df_faa_dataset.shape

# View the first five observations
df_faa_dataset.head()

# View all the columns present in the dataset
df_faa_dataset.columns
3: Extract the following attributes from the dataset:
1. Aircraft make name
2. State name
3. Aircraft model name
4. Text information
5. Flight phase
6. Event description type
7. Fatal flag
# Create a new dataframe with only the required columns
df_analyze_dataset = df_faa_dataset[['LOC_STATE_NAME', 'RMK_TEXT', 'EVENT_TYPE_DESC',
                                     'ACFT_MAKE_NAME', 'ACFT_MODEL_NAME', 'FLT_PHASE',
                                     'FATAL_FLAG']]

# View the type of the object
type(df_analyze_dataset)

# Check if the dataframe contains all the required attributes
df_analyze_dataset.head()
4. Clean the dataset and replace the fatal flag NaN with “No”
# Replace all Fatal Flag missing values with the required output
df_analyze_dataset['FATAL_FLAG'].fillna(value="No", inplace=True)

# Verify if the missing values are replaced
df_analyze_dataset.head()

# Check the number of observations
df_analyze_dataset.shape
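One caveat, added here as a note: because `df_analyze_dataset` was created by selecting columns from `df_faa_dataset`, the in-place `fillna` above can trigger pandas' `SettingWithCopyWarning`. An equivalent, warning-free sketch:

# Work on an explicit copy and assign the filled column back.
df_analyze_dataset = df_analyze_dataset.copy()
df_analyze_dataset['FATAL_FLAG'] = df_analyze_dataset['FATAL_FLAG'].fillna("No")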
5. Remove all the observations where aircraft names are not available
# Drop the unwanted values/observations from the dataset
df_final_dataset = df_analyze_dataset.dropna(subset=['ACFT_MAKE_NAME'])
6. Find the aircraft types and their occurrences in the dataset
# Check the number of observations now to compare it with the original dataset
# and see how many values have been dropped
df_final_dataset.shape

# Group the dataset by aircraft name
aircraftType = df_final_dataset.groupby('ACFT_MAKE_NAME')

# View the number of times each aircraft type appears in the dataset (Hint: use the size() method)
aircraftType.size()
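For comparison, the same counts can be obtained in one line with `value_counts` (an equivalent alternative to the groupby/size approach above, shown only as an aside):

# Count of reported accidents per aircraft make, sorted in descending order.
df_final_dataset['ACFT_MAKE_NAME'].value_counts()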
7: Display the observations where fatal flag is “Yes”
# Group the dataset by fatal flag
fatalAccedents = df_final_dataset.groupby('FATAL_FLAG')

# View the total number of fatal and non-fatal accidents
fatalAccedents.size()

# Create a new dataframe to view only the fatal accidents (Fatal Flag values = Yes)
accidents_with_fatality = fatalAccedents.get_group('Yes')
accidents_with_fatality.head()
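Boolean indexing gives the same subset without the intermediate groupby; shown here only as an alternative sketch:

# Keep only the rows whose fatal flag is "Yes".
df_final_dataset[df_final_dataset['FATAL_FLAG'] == 'Yes'].head()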
1D Numpy in Python

Estimated time needed: **30** minutes

Objectives

After completing this lab you will be able to:
- Import and use `numpy` library
- Perform operations with `numpy`

Table of Contents
- Preparation
- What is Numpy?
- Type
- Assign Value
- Slicing
- Assign Value with List
- Other Attributes
- Numpy Array Operations
- Array Addition
- Array Multiplication
- Product of Two Numpy Arrays
- Dot Product
- Adding Constant to a Numpy Array
- Mathematical Functions
- Linspace

Preparation
# Import the libraries
import time
import sys
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

# Plotting functions
def Plotvec1(u, z, v):
    ax = plt.axes()
    ax.arrow(0, 0, *u, head_width=0.05, color='r', head_length=0.1)
    plt.text(*(u + 0.1), 'u')
    ax.arrow(0, 0, *v, head_width=0.05, color='b', head_length=0.1)
    plt.text(*(v + 0.1), 'v')
    ax.arrow(0, 0, *z, head_width=0.05, head_length=0.1)
    plt.text(*(z + 0.1), 'z')
    plt.ylim(-2, 2)
    plt.xlim(-2, 2)

def Plotvec2(a, b):
    ax = plt.axes()
    ax.arrow(0, 0, *a, head_width=0.05, color='r', head_length=0.1)
    plt.text(*(a + 0.1), 'a')
    ax.arrow(0, 0, *b, head_width=0.05, color='b', head_length=0.1)
    plt.text(*(b + 0.1), 'b')
    plt.ylim(-2, 2)
    plt.xlim(-2, 2)
Create a Python List as follows:
# Create a python list
a = ["0", 1, "two", "3", 4]
We can access the data via an index. We can access each element using square brackets as follows:
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
a[0]: 0 a[1]: 1 a[2]: two a[3]: 3 a[4]: 4
What is Numpy?

A numpy array is similar to a list. It's usually fixed in size and each element is of the same type. We can cast a list to a numpy array by first importing numpy:
# import numpy library
import numpy as np
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We then cast the list as follows:
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Each element is of the same type, in this case integers. As with lists, we can access each element using square brackets:
# Print each element
print("a[0]:", a[0])
print("a[1]:", a[1])
print("a[2]:", a[2])
print("a[3]:", a[3])
print("a[4]:", a[4])
a[0]: 0 a[1]: 1 a[2]: 2 a[3]: 3 a[4]: 4
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Type

If we check the type of the array, we get numpy.ndarray:
# Check the type of the array
type(a)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
As numpy arrays contain data of the same type, we can use the attribute "dtype" to obtain the Data-type of the array’s elements. In this case a 64-bit integer:
# Check the type of the values stored in numpy array
a.dtype
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
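The dtype can also be chosen explicitly when the array is created, which is useful when the default inference is not what you want. A small sketch (not part of the original lab):

```python
import numpy as np

# Force a specific dtype at creation time
a_float = np.array([0, 1, 2, 3, 4], dtype=np.float64)
print(a_float.dtype)  # float64
print(a_float)        # [0. 1. 2. 3. 4.]
```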
We can create a numpy array with real numbers:
# Create a numpy array
b = np.array([3.1, 11.02, 6.2, 213.2, 5.2])
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
When we check the type of the array we get numpy.ndarray:
# Check the type of array
type(b)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
If we examine the attribute dtype we see float64, as the elements are not integers:
# Check the value type
b.dtype
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Assign Value

We can change the values of an array; consider the array c:
# Create numpy array
c = np.array([20, 1, 2, 3, 4])
c
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can change the first element of the array to 100 as follows:
# Assign the first element to 100
c[0] = 100
c
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can change the 5th element of the array to 0 as follows:
# Assign the 5th element to 0
c[4] = 0
c
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Slicing

Like lists, we can slice a numpy array. For example, we can select the elements at indexes 1 to 3 and assign them to a new numpy array d as follows:
# Slicing the numpy array
d = c[1:4]
d
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
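One point worth keeping in mind, which the cell above does not show: a numpy slice is a view of the original array, not a copy, so assigning through the slice also changes the source array. A minimal sketch:

```python
import numpy as np

c = np.array([20, 1, 2, 3, 4])
d = c[1:4]      # d is a view into c
d[0] = 999      # this also modifies c
print(c)        # [ 20 999   2   3   4]

d_copy = c[1:4].copy()  # use .copy() when an independent array is needed
```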
We can assign the corresponding indexes to new values as follows:
# Set the fourth element and fifth element to 300 and 400
c[3:5] = 300, 400
c
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Assign Value with List

Similarly, we can use a list to select specific indexes. The list `select` contains several values:
# Create the index list
select = [0, 2, 3]
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can use the list as an argument in the brackets. The output is the elements corresponding to the particular index:
# Use List to select elements
d = c[select]
d
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can assign the specified elements to a new value. For example, we can assign the values to 100 000 as follows:
# Assign the specified elements to new value
c[select] = 100000
c
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
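Closely related to index lists, boolean masks can also be used to select and assign elements. A short sketch, starting from the values that c holds at this point in the lab:

```python
import numpy as np

c = np.array([100000, 1, 100000, 100000, 400])

# Select elements with a boolean condition
print(c[c > 1000])   # [100000 100000 100000]

# Assign through the same mask
c[c > 1000] = 0
print(c)             # [  0   1   0   0 400]
```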
Other Attributes

Let's review some basic array attributes using the array a:
# Create a numpy array
a = np.array([0, 1, 2, 3, 4])
a
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The attribute size is the number of elements in the array:
# Get the size of numpy array
a.size
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The next two attributes will make more sense when we get to higher dimensions but let's review them. The attribute ndim represents the number of array dimensions or the rank of the array, in this case, one:
# Get the number of dimensions of numpy array
a.ndim
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The attribute shape is a tuple of integers indicating the size of the array in each dimension:
# Get the shape/size of numpy array
a.shape

# Create a numpy array
a = np.array([1, -1, 1, -1])

# Get the mean of numpy array
mean = a.mean()
mean

# Get the standard deviation of numpy array
standard_deviation = a.std()
standard_deviation

# Create a numpy array
b = np.array([-1, 2, 3, 4, 5])
b

# Get the biggest value in the numpy array
max_b = b.max()
max_b

# Get the smallest value in the numpy array
min_b = b.min()
min_b
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
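Since size, ndim and shape become more interesting in higher dimensions, here is a small sketch with a 2D array for comparison (this goes slightly beyond the 1D scope of this lab):

```python
import numpy as np

m = np.array([[1, 2, 3],
              [4, 5, 6]])
print(m.size)   # 6 -> total number of elements
print(m.ndim)   # 2 -> two dimensions (rank 2)
print(m.shape)  # (2, 3) -> 2 rows, 3 columns
```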
Numpy Array Operations

Array Addition

Consider the numpy array u:
u = np.array([1, 0])
u
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Consider the numpy array v:
v = np.array([0, 1])
v
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can add the two arrays and assign it to z:
# Numpy Array Addition
z = u + v
z
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The operation is equivalent to vector addition:
# Plot numpy arrays
Plotvec1(u, z, v)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Array Multiplication

Consider the vector numpy array y:
# Create a numpy array
y = np.array([1, 2])
y
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can multiply every element in the array by 2:
# Numpy Array Multiplication
z = 2 * y
z
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
This is equivalent to multiplying a vector by a scalar.

Product of Two Numpy Arrays

Consider the following array u:
# Create a numpy array
u = np.array([1, 2])
u
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Consider the following array v:
# Create a numpy array
v = np.array([3, 2])
v
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The product of the two numpy arrays u and v is given by:
# Calculate the product of the two numpy arrays
z = u * v
z
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Dot Product

The dot product of the two numpy arrays u and v is given by:
# Calculate the dot product
np.dot(u, v)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
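The dot product is simply the sum of the element-wise products, so the two operations above are directly related. A short sketch:

```python
import numpy as np

u = np.array([1, 2])
v = np.array([3, 2])

print(np.dot(u, v))   # 7
print(np.sum(u * v))  # 7, the same value computed from the element-wise product
print(u @ v)          # 7, the @ operator is equivalent for 1D arrays
```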
Adding Constant to a Numpy Array

Consider the following array:
# Create a numpy array
u = np.array([1, 2, 3, -1])
u
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Adding the constant 1 to each element in the array:
# Add the constant to array
u + 1
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
The constant is broadcast to each element of the array.

Mathematical Functions

We can access the value of pi in numpy as follows:
# The value of pi
np.pi
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can create the following numpy array in Radians:
# Create the numpy array in radians
x = np.array([0, np.pi/2, np.pi])
x
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can apply the function sin to the array x and assign the values to the array y; this applies the sine function to each element in the array:
# Calculate the sin of each element
y = np.sin(x)
y
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Linspace

A useful function for plotting mathematical functions is linspace, which returns evenly spaced numbers over a specified interval. We specify the starting point and the ending point of the sequence, and the parameter "num" indicates the number of samples to generate, in this case 5:
# Make up a numpy array within [-2, 2] with 5 elements
np.linspace(-2, 2, num=5)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
If we change the parameter num to 9, we get 9 evenly spaced numbers over the interval from -2 to 2:
# Make up a numpy array within [-2, 2] with 9 elements
np.linspace(-2, 2, num=9)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
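With the endpoint included, the spacing works out to (stop - start) / (num - 1); linspace can also return that step directly via the retstep argument. A small sketch:

```python
import numpy as np

samples, step = np.linspace(-2, 2, num=9, retstep=True)
print(samples)  # [-2.  -1.5 -1.  -0.5  0.   0.5  1.   1.5  2. ]
print(step)     # 0.5 == (2 - (-2)) / (9 - 1)
```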
We can use the function linspace to generate 100 evenly spaced samples from the interval 0 to 2π:
# Make up a numpy array within [0, 2π] with 100 elements
x = np.linspace(0, 2*np.pi, num=100)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
We can apply the sine function to each element in the array x and assign it to the array y:
# Calculate the sine of the x values
y = np.sin(x)

# Plot the result
plt.plot(x, y)
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Quiz on 1D Numpy Array

Implement the following vector subtraction in numpy: u - v
# Write your code below and press Shift+Enter to execute
u = np.array([1, 0])
v = np.array([0, 1])
u - v
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
u - v
```

Multiply the numpy array z by -2:
# Write your code below and press Shift+Enter to execute
z = np.array([2, 4])
-2 * z
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
-2 * z
```

Consider the lists [1, 2, 3, 4, 5] and [1, 0, 1, 0, 1], cast both lists to numpy arrays, then multiply them together:
# Write your code below and press Shift+Enter to execute
a = np.array([1, 2, 3, 4, 5])
b = np.array([1, 0, 1, 0, 1])
a * b
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
a = np.array([1, 2, 3, 4, 5])
b = np.array([1, 0, 1, 0, 1])
a * b
```

Convert the lists [-1, 1] and [1, 1] to numpy arrays a and b. Then plot the arrays as vectors using the function Plotvec2 and find the dot product:
# Write your code below and press Shift+Enter to execute
a = np.array([-1, 1])
b = np.array([1, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
The dot product is 0
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
a = np.array([-1, 1])
b = np.array([1, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
```

Convert the lists [1, 0] and [0, 1] to numpy arrays a and b. Then plot the arrays as vectors using the function Plotvec2 and find the dot product:
# Write your code below and press Shift+Enter to execute
a = np.array([1, 0])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
The dot product is 0
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
a = np.array([1, 0])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
```

Convert the lists [1, 1] and [0, 1] to numpy arrays a and b. Then plot the arrays as vectors using the function Plotvec2 and find the dot product:
# Write your code below and press Shift+Enter to execute
a = np.array([1, 1])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
The dot product is 1
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
Solution:

```python
a = np.array([1, 1])
b = np.array([0, 1])
Plotvec2(a, b)
print("The dot product is", np.dot(a, b))
```

Why are the results of the dot product for [-1, 1] and [1, 1] and the dot product for [1, 0] and [0, 1] zero, but the dot product for [1, 1] and [0, 1] is not? Hint: study the corresponding figures and pay attention to the directions the arrows point in.
# Write your code below and press Shift+Enter to execute
_____no_output_____
MIT
material/PY0101EN-5-1-Numpy1D.ipynb
sergiodealencar/courses
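One way to see why this happens (a sketch, not part of the original quiz): the dot product equals |a||b|cos(θ), so it is zero exactly when the two vectors are perpendicular. Computing the angles makes that explicit:

```python
import numpy as np

def angle_deg(a, b):
    # Angle between two vectors from the formula a·b = |a||b|cos(theta)
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(cos_theta))

print(angle_deg(np.array([-1, 1]), np.array([1, 1])))  # 90.0 -> dot product 0
print(angle_deg(np.array([1, 0]),  np.array([0, 1])))  # 90.0 -> dot product 0
print(angle_deg(np.array([1, 1]),  np.array([0, 1])))  # 45.0 -> dot product 1
```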
Semi-Monocoque Theory: corrective solutions
from pint import UnitRegistry
import sympy
import networkx as nx
import numpy as np
import matplotlib.pyplot as plt
import sys
%matplotlib inline
from IPython.display import display
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Import **Section** class, which contains all calculations
from Section import Section
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Initialization of the **sympy** symbolic tool and of **pint** for dimensional analysis (not fully implemented yet, as it is not directly compatible with sympy)
ureg = UnitRegistry()
sympy.init_printing()
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Define **sympy** parameters used for geometric description of sections
A, A0, t, t0, a, b, h, L, E, G = sympy.symbols('A A_0 t t_0 a b h L E G', positive=True)
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
We also define numerical values for each **symbol** in order to plot scaled section and perform calculations
values = [(A, 150 * ureg.millimeter**2), (A0, 250 * ureg.millimeter**2), (a, 80 * ureg.millimeter),
          (b, 20 * ureg.millimeter), (h, 35 * ureg.millimeter), (L, 2000 * ureg.millimeter),
          (t, 0.8 * ureg.millimeter), (E, 72e3 * ureg.MPa), (G, 27e3 * ureg.MPa)]

datav = [(v[0], v[1].magnitude) for v in values]
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
First example: simple rectangular symmetric section

Define the graph describing the section:

1) **stringers** are **nodes** with parameters:
- **x** coordinate
- **y** coordinate
- **Area**

2) **panels** are **oriented edges** with parameters:
- **thickness**
- **length**, which is automatically calculated
stringers = {1: [(2*a, h), A],
             2: [(a, h), A],
             3: [(sympy.Integer(0), h), A],
             4: [(sympy.Integer(0), sympy.Integer(0)), A],
             5: [(2*a, sympy.Integer(0)), A]}
             # 5: [(sympy.Rational(1,2)*a, h), A]}

panels = {(1, 2): t, (2, 3): t, (3, 4): t, (4, 5): t, (5, 1): t}
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Define section and perform first calculations
S1 = Section(stringers, panels)
S1.cycles
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Plot of the **S1** section in the original reference frame

Define a dictionary of coordinates used by **Networkx** to plot the section as a directed graph. Note that the arrows are actually just thicker stubs.
start_pos = {ii: [float(S1.g.node[ii]['ip'][i].subs(datav)) for i in range(2)]
             for ii in S1.g.nodes()}

plt.figure(figsize=(12, 8), dpi=300)
nx.draw(S1.g, with_labels=True, arrows=True, pos=start_pos)
plt.arrow(0, 0, 20, 0)
plt.arrow(0, 0, 0, 20)
# plt.text(0, 0, 'CG', fontsize=24)
plt.axis('equal')
plt.title("Section in starting reference Frame", fontsize=16);
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Plot of the **S1** section in the inertial reference frame

The section is plotted with respect to the **center of gravity** and rotated (if necessary) so that *x* and *y* are principal axes. The **Center of Gravity** and **Shear Center** are drawn.
positions = {ii: [float(S1.g.node[ii]['pos'][i].subs(datav)) for i in range(2)]
             for ii in S1.g.nodes()}

x_ct, y_ct = S1.ct.subs(datav)

plt.figure(figsize=(12, 8), dpi=300)
nx.draw(S1.g, with_labels=True, pos=positions)
plt.plot([0], [0], 'o', ms=12, label='CG')
plt.plot([x_ct], [y_ct], '^', ms=12, label='SC')
# plt.text(0, 0, 'CG', fontsize=24)
# plt.text(x_ct, y_ct, 'SC', fontsize=24)
plt.legend(loc='lower right', shadow=True)
plt.axis('equal')
plt.title("Section in principal reference Frame", fontsize=16);
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Compute **L** matrix: with 5 nodes we expect 2 **dofs**, one with _symmetric load_ and one with _antisymmetric load_
S1.compute_L()
S1.L
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Compute **H** matrix
S1.compute_H()
S1.H
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Compute $\tilde{K}$ and $\tilde{M}$ as:

$$\tilde{K} = L^T \cdot \left[ \frac{A}{A_0} \right] \cdot L$$

$$\tilde{M} = H^T \cdot \left[ \frac{l}{l_0}\frac{t_0}{t} \right] \cdot L$$
S1.compute_KM(A, h, t)
S1.Ktilde
S1.Mtilde
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Compute **eigenvalues** and **eigenvectors** as:

$$\left| \mathbf{I} \cdot \beta^2 - \mathbf{\tilde{K}}^{-1} \cdot \mathbf{\tilde{M}} \right| = 0$$

We substitute some numerical values to simplify the expressions.
sol_data = (S1.Ktilde.inv()*(S1.Mtilde.subs(datav))).eigenvects()
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
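As a cross-check on the symbolic result, one could evaluate the same eigenproblem numerically with numpy. This is only a sketch and assumes that, once `datav` is substituted, both matrices contain plain numbers:

```python
import numpy as np

K_num = np.array(S1.Ktilde.subs(datav), dtype=float)
M_num = np.array(S1.Mtilde.subs(datav), dtype=float)

# Numerical eigenvalues of K^-1 * M should match the symbolic beta^2 values
beta2_num, _ = np.linalg.eig(np.linalg.inv(K_num) @ M_num)
print(np.sort(beta2_num))
```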
**Eigenvalues** correspond to $\beta^2$
β2 = [sol[0] for sol in sol_data]
β2
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
**Eigenvectors** are orthogonal as expected
X = [sol[2][0] for sol in sol_data]
X
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
From $\beta_i^2$ we compute:

$$\lambda_i = \sqrt{\frac{E A_0 l_0}{G t_0} \beta_i^2}$$

substituting numerical values.
λ = [sympy.N(sympy.sqrt(E*A*h/(G*t)*βi).subs(datav)) for βi in β2]
λ
_____no_output_____
MIT
04_CorrectiveSolutions.ipynb
Ccaccia73/semimonocoque
Examining Total Category List
# All restaurant categories in the dataset
print(sorted(set(', '.join(dataset.category_alias).split(', '))))

yelp_categories = pd.read_json("../../data/raw/categories.json")

# Select only categories with a single parent
yelp_categories = yelp_categories[yelp_categories.parents.apply(lambda x: len(x) == 1) == True]
res_list = yelp_categories[yelp_categories.parents.apply(lambda x: len([i for i in x if i in 'restaurants']) > 0)]
res_list = res_list.alias
print(set(res_list))

'arabian' in res_list.values

dataset.category_alias.head()

df_res_list = dataset[dataset.category_alias.apply(lambda x: len([i for i in x.split(', ') if i in res_list.values]) > 0)]
len(df_res_list)
df_res_list.head()

from itertools import chain
set(chain(*[i.split(',') for i in set(dataset.transactions)]))

df_transactions = dataset[dataset.transactions.apply(lambda x: len(x) > 0)]
len(df_transactions)
df_transactions_not = dataset[dataset.transactions.apply(lambda x: len(x) == 0)]
len(df_transactions_not)

# restaurants? [BUSINESSES with TRANSACTION]
print(set(', '.join(df_transactions.category_alias).split(', ')))
# not restaurants? [BUSINESSES w/o TRANSACTION]
print(set(', '.join(df_transactions_not.category_alias).split(', ')))

print(f"{len(dataset)}")
print(f"{len(dataset.alias.unique())}")

dataset[dataset.alias == "kimos-maui-lahaina"]
dataset[(dataset.dist_to_alias == "kimos-maui-lahaina") & (dataset.distance < 50)]

pd_location.columns = 'loc_' + pd_location.columns
pd_location.columns

import matplotlib.pyplot as plt
dataset.review_count.plot()
plt.show()

yelp_branches = [
    'kimos-maui-lahaina',
    'sunnyside-tahoe-city-2',
    'dukes-huntington-beach-huntington-beach-2',
    'dukes-la-jolla-la-jolla',
    'dukes-malibu-malibu-2',
    'dukes-beach-house-lahaina',
    'dukes-kauai-lihue-3',
    'dukes-waikiki-honolulu-2',
    'hula-grill-waikiki-honolulu-3',
    'hula-grill-kaanapali-lahaina-2',
    'keokis-paradise-koloa',
    'leilanis-lahaina-2'
]
[i for i in dataset.alias.values if i in yelp_branches]
_____no_output_____
MIT
notebooks/eda/businesses.ipynb
metinsenturk/semantic-analysis
Exploratory Data Analysis
print(dataset.loc[dataset.alias.isin(yelp_branches)].rating.sum())
print(dataset.loc[dataset.alias.isin(yelp_branches)].rating.mean())

len(dataset)
dataset.is_closed[dataset.is_closed == True].count()
dataset.price.value_counts()

print(f"sum : {dataset.review_count.sum()}")
print(f"mean: {dataset.review_count.mean()}")
print(f"sum : {dataset.rating.sum()}")
print(f"mean: {dataset.rating.mean()}")

dataset.loc[dataset.alias.isin(yelp_branches)].price.value_counts()
print(dataset.loc[dataset.alias.isin(yelp_branches)].review_count.sum())
print(dataset.loc[dataset.alias.isin(yelp_branches)].review_count.mean())

import math

def distance(origin, destination):
    """
    Calculate the Haversine distance.

    Parameters
    ----------
    origin : tuple of float
        (lat, long)
    destination : tuple of float
        (lat, long)

    Returns
    -------
    distance_in_km : float

    Examples
    --------
    >>> origin = (48.1372, 11.5756)       # Munich
    >>> destination = (52.5186, 13.4083)  # Berlin
    >>> round(distance(origin, destination), 1)
    504.2
    """
    lat1, lon1 = origin
    lat2, lon2 = destination
    radius = 6371  # km

    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = (math.sin(dlat / 2) * math.sin(dlat / 2) +
         math.cos(math.radians(lat1)) * math.cos(math.radians(lat2)) *
         math.sin(dlon / 2) * math.sin(dlon / 2))
    c = 2 * math.atan2(math.sqrt(a), math.sqrt(1 - a))
    d = radius * c

    return d

origin = dataset.iloc[0].coordinate_latitude, dataset.iloc[0].coordinate_longitude
destination = dataset.iloc[4].coordinate_latitude, dataset.iloc[4].coordinate_longitude
distance(origin, destination) * 1000

bins = [0, 100, 500, 1000, 2000, 3000, 5000, 10000, 20000]
lbls = [1, 2, 3, 4, 5, 6, 7, 8, 9]
pd_bins = pd.cut(dataset.review_count, bins, labels=lbls).value_counts()
pd_bins.plot(title='Binned Review Count').tick_params(axis='x', labelrotation=45)

bins = [0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5]
pd_bins = pd.cut(dataset.rating, bins).value_counts()
pd_bins.plot(title='Binned Rating').tick_params(axis='x', labelrotation=45)
pd_bins
_____no_output_____
MIT
notebooks/eda/businesses.ipynb
metinsenturk/semantic-analysis
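The `distance` helper above works on one pair of points at a time; when distances are needed for a whole column it can be handy to vectorize the same Haversine formula with numpy. A sketch, assuming the `coordinate_latitude` / `coordinate_longitude` columns used earlier in this notebook:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    # Vectorized Haversine distance in km; accepts scalars or numpy/pandas arrays
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 6371 * 2 * np.arcsin(np.sqrt(a))

# Example: distance (in km) of every business to the first one in the dataset
ref_lat = dataset.iloc[0].coordinate_latitude
ref_lon = dataset.iloc[0].coordinate_longitude
dists = haversine_km(dataset.coordinate_latitude, dataset.coordinate_longitude, ref_lat, ref_lon)
```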
With additional features
models_path = "/home/soufiane.oualil/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/prod_ai_models/ML/full_binary_0/" dataset = "UNSW-NB15" dataset_path = f"/home/soufiane.oualil/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/preprocessed_datasets/{dataset}/flow_features/multi/" data = pd.read_pickle(f'{dataset_path}/test.p') data.head() y = data['Attack'].values data = data.drop(columns_to_delete + ['Attack'], axis = 1) data.head() for column in columns_to_encode: le = load(f'{models_path}encoders/{column}.joblib') data[column] = le.transform(data[column]) le_labels = load(f'{models_path}/encoders/attack_encoder.joblib') data.head() clf = load(f'{models_path}/clf_model.joblib') preds = clf.predict(data.values) preds_labels = le_labels.inverse_transform(preds) print(preds_labels[:20])
['Benign' 'Benign' 'Benign' 'Malign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign']
BSD-3-Clause
Prod_ML_IDS_model.ipynb
sooualil/atlas-plugin-sample
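Since the true labels `y` were kept aside above, a natural next step would be to score the predictions against them. A sketch, assuming scikit-learn is available and that `y` uses the same label vocabulary that `le_labels` was fitted on (if it holds multi-class attack names instead of Benign/Malign, it would need to be mapped first):

```python
from sklearn.metrics import accuracy_score, classification_report

# Encode the held-out labels with the same encoder used at training time
y_encoded = le_labels.transform(y)

print("Accuracy:", accuracy_score(y_encoded, preds))
print(classification_report(y_encoded, preds, target_names=le_labels.classes_))
```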
Without additional features
models_path = "/home/abdellah.elmekki/lustre/data_sec-um6p-st-sccs-6sevvl76uja/IDS/prod_ai_models/ML/full_binary_1/" data = pd.read_pickle(f'{dataset_path}/test.p') y = data['Attack'].values data = data.drop(columns_to_delete + additional_columns + ['Attack'], axis = 1) for column in columns_to_encode: le = load(f'{models_path}/encoders/{column}.joblib') data[column] = le.transform(data[column]) le_labels = load(f'{models_path}/encoders/attack_encoder.joblib') clf = load(f'{models_path}/clf_model.joblib') preds = clf.predict(data.values) print(preds_labels[:20])
['Benign' 'Benign' 'Benign' 'Malign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign' 'Benign']
BSD-3-Clause
Prod_ML_IDS_model.ipynb
sooualil/atlas-plugin-sample
Install python libraries

The following cell can be used to ensure that the python libraries used in this test notebook are installed. These may be pre-installed in future notebook images. Once this cell has been run, it need not be re-run unless you have restarted your jupyter server.
# Install the library dependencies used in this notebook
# (comment this out if you prefer to not re-run this cell)
%pip install trino python-dotenv
Requirement already satisfied: trino in /opt/app-root/lib/python3.8/site-packages (0.306.0) Requirement already satisfied: python-dotenv in /opt/app-root/lib/python3.8/site-packages (0.19.1) Requirement already satisfied: requests in /opt/app-root/lib/python3.8/site-packages (from trino) (2.25.1) Requirement already satisfied: chardet<5,>=3.0.2 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (4.0.0) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (1.26.4) Requirement already satisfied: certifi>=2017.4.17 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (2020.12.5) Requirement already satisfied: idna<3,>=2.5 in /opt/app-root/lib/python3.8/site-packages (from requests->trino) (2.10) WARNING: You are using pip version 21.1; however, version 21.3 is available. You should consider upgrading via the '/opt/app-root/bin/python3.8 -m pip install --upgrade pip' command. Note: you may need to restart the kernel to use updated packages.
FTL
notebooks/test-trino-access.ipynb
os-climate/data-platform-demo
Loading credentials

The following cell finds a `credentials.env` file at the jupyter "home" (top level) directory. Values in this `dotenv` file are loaded into the `os.environ` table, as if they were regular environment variables. Credentials are stored in `dotenv` files so that they can be referred to by standard environment variable names and do not appear in notebooks or other code, which would be a security leak.
from dotenv import dotenv_values, load_dotenv
import os
import pathlib

dotenv_dir = os.environ.get('CREDENTIAL_DOTENV_DIR', os.environ.get('PWD', '/opt/app-root/src'))
dotenv_path = pathlib.Path(dotenv_dir) / 'credentials.env'
if os.path.exists(dotenv_path):
    load_dotenv(dotenv_path=dotenv_path, override=True)
_____no_output_____
FTL
notebooks/test-trino-access.ipynb
os-climate/data-platform-demo
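With the credentials loaded into `os.environ`, they can then be used to open a Trino connection. The exact environment variable names depend on what `credentials.env` defines, so the ones used below (`TRINO_HOST`, `TRINO_PORT`, `TRINO_USER`, `TRINO_PASSWD`) are only illustrative:

```python
import os
from trino.dbapi import connect
from trino.auth import BasicAuthentication

# Sketch: open a Trino connection using the credentials loaded above
# (the environment variable names here are hypothetical)
conn = connect(
    host=os.environ['TRINO_HOST'],
    port=int(os.environ.get('TRINO_PORT', 443)),
    user=os.environ['TRINO_USER'],
    http_scheme='https',
    auth=BasicAuthentication(os.environ['TRINO_USER'], os.environ['TRINO_PASSWD']),
)

cur = conn.cursor()
cur.execute('SHOW CATALOGS')
print(cur.fetchall())
```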