notebooks/intermediate_net_in_keras.ipynb | ###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/shallow_net_in_keras.ipynb) from Lesson 1 by applying the theory we have covered since. Set seed for reproducibility
###Code
import numpy as np
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
_____no_output_____
###Markdown
Load data
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_test = X_test.reshape(10000, 784).astype('float32')
X_train /= 255
X_test /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
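###Markdown
As a quick sanity check, each Dense layer's parameter count in the summary above is (number of inputs × number of units) + number of biases:
###Code
# Reproducing the parameter counts reported by model.summary() by hand
print((784 * 64) + 64)  # dense_1: 50,240
print((64 * 64) + 64)   # dense_2: 4,160
print((64 * 10) + 10)   # dense_3: 650
###Output
_____no_output_____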
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=256, epochs=200, verbose=1, validation_data=(X_test, y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/200
60000/60000 [==============================] - 3s 49us/step - loss: 0.6447 - acc: 0.8220 - val_loss: 0.3204 - val_acc: 0.9042
Epoch 2/200
60000/60000 [==============================] - 2s 36us/step - loss: 0.2877 - acc: 0.9166 - val_loss: 0.2666 - val_acc: 0.9219
Epoch 3/200
60000/60000 [==============================] - 2s 39us/step - loss: 0.2328 - acc: 0.9322 - val_loss: 0.2062 - val_acc: 0.9378
Epoch 4/200
60000/60000 [==============================] - 2s 29us/step - loss: 0.1978 - acc: 0.9428 - val_loss: 0.1897 - val_acc: 0.9424
Epoch 5/200
60000/60000 [==============================] - 2s 26us/step - loss: 0.1736 - acc: 0.9503 - val_loss: 0.1754 - val_acc: 0.9498
Epoch 6/200
60000/60000 [==============================] - 1s 23us/step - loss: 0.1542 - acc: 0.9556 - val_loss: 0.1665 - val_acc: 0.9499
Epoch 7/200
60000/60000 [==============================] - 1s 24us/step - loss: 0.1398 - acc: 0.9596 - val_loss: 0.1379 - val_acc: 0.9582
Epoch 8/200
60000/60000 [==============================] - 1s 24us/step - loss: 0.1279 - acc: 0.9625 - val_loss: 0.1332 - val_acc: 0.9592
Epoch 9/200
60000/60000 [==============================] - 1s 23us/step - loss: 0.1172 - acc: 0.9663 - val_loss: 0.1267 - val_acc: 0.9635
Epoch 10/200
60000/60000 [==============================] - 1s 22us/step - loss: 0.1079 - acc: 0.9683 - val_loss: 0.1460 - val_acc: 0.9557
Epoch 11/200
60000/60000 [==============================] - 1s 24us/step - loss: 0.1012 - acc: 0.9705 - val_loss: 0.1099 - val_acc: 0.9703
Epoch 12/200
60000/60000 [==============================] - 1s 24us/step - loss: 0.0941 - acc: 0.9722 - val_loss: 0.1150 - val_acc: 0.9662
Epoch 13/200
60000/60000 [==============================] - 1s 23us/step - loss: 0.0870 - acc: 0.9750 - val_loss: 0.1140 - val_acc: 0.9661
Epoch 14/200
60000/60000 [==============================] - 2s 26us/step - loss: 0.0826 - acc: 0.9756 - val_loss: 0.1050 - val_acc: 0.9691
Epoch 15/200
60000/60000 [==============================] - 2s 28us/step - loss: 0.0778 - acc: 0.9774 - val_loss: 0.1013 - val_acc: 0.9691
Epoch 16/200
60000/60000 [==============================] - 2s 25us/step - loss: 0.0732 - acc: 0.9789 - val_loss: 0.1084 - val_acc: 0.9681
Epoch 17/200
60000/60000 [==============================] - 2s 26us/step - loss: 0.0701 - acc: 0.9798 - val_loss: 0.0982 - val_acc: 0.9699
Epoch 18/200
60000/60000 [==============================] - 2s 27us/step - loss: 0.0658 - acc: 0.9809 - val_loss: 0.0951 - val_acc: 0.9718
Epoch 19/200
60000/60000 [==============================] - 2s 29us/step - loss: 0.0626 - acc: 0.9818 - val_loss: 0.0886 - val_acc: 0.9730
Epoch 20/200
17920/60000 [=======>......................] - ETA: 1s - loss: 0.0578 - acc: 0.9827
###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/shallow_net_in_keras.ipynb) by applying the theory we have covered since. [](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/intermediate_net_in_keras.ipynb) Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
_____no_output_____
###Markdown
Load data
###Code
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_valid = X_valid.reshape(10000, 784).astype('float32')
X_train /= 255
X_valid /= 255
from tensorflow.keras.utils import to_categorical
n_classes = 10
y_train = to_categorical(y_train, n_classes)
y_valid = to_categorical(y_valid, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_6 (Dense) (None, 64) 50240
_________________________________________________________________
dense_7 (Dense) (None, 64) 4160
_________________________________________________________________
dense_8 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(learning_rate=0.01), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=128, epochs=40, verbose=1, validation_data=(X_valid, y_valid))
model.evaluate(X_valid, y_valid)
y_valid[0]
model.predict(X_valid[0].reshape(1,-1))
###Output
_____no_output_____
###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/shallow_net_in_keras.ipynb) by applying the theory we have covered since. [](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/intermediate_net_in_keras.ipynb) Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load data
###Code
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_valid = X_valid.reshape(10000, 784).astype('float32')
X_train /= 255
X_valid /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_valid = keras.utils.to_categorical(y_valid, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(X_valid, y_valid))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 1s 15us/step - loss: 0.4744 - acc: 0.8637 - val_loss: 0.2686 - val_acc: 0.9234
Epoch 2/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.2414 - acc: 0.9289 - val_loss: 0.2004 - val_acc: 0.9404
Epoch 3/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1871 - acc: 0.9452 - val_loss: 0.1578 - val_acc: 0.9521
Epoch 4/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1538 - acc: 0.9551 - val_loss: 0.1435 - val_acc: 0.9574
Epoch 5/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1311 - acc: 0.9616 - val_loss: 0.1258 - val_acc: 0.9616
Epoch 6/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1148 - acc: 0.9659 - val_loss: 0.1245 - val_acc: 0.9641
Epoch 7/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1017 - acc: 0.9700 - val_loss: 0.1066 - val_acc: 0.9683
Epoch 8/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0914 - acc: 0.9728 - val_loss: 0.1029 - val_acc: 0.9672
Epoch 9/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0821 - acc: 0.9760 - val_loss: 0.0942 - val_acc: 0.9709
Epoch 10/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0738 - acc: 0.9785 - val_loss: 0.1035 - val_acc: 0.9691
Epoch 11/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0672 - acc: 0.9796 - val_loss: 0.1000 - val_acc: 0.9710
Epoch 12/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0617 - acc: 0.9820 - val_loss: 0.0913 - val_acc: 0.9735
Epoch 13/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0570 - acc: 0.9835 - val_loss: 0.0817 - val_acc: 0.9754
Epoch 14/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0526 - acc: 0.9844 - val_loss: 0.0917 - val_acc: 0.9729
Epoch 15/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0477 - acc: 0.9861 - val_loss: 0.0822 - val_acc: 0.9752
Epoch 16/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0450 - acc: 0.9868 - val_loss: 0.0845 - val_acc: 0.9752
Epoch 17/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0413 - acc: 0.9878 - val_loss: 0.0842 - val_acc: 0.9741
Epoch 18/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0384 - acc: 0.9887 - val_loss: 0.0833 - val_acc: 0.9752
Epoch 19/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0356 - acc: 0.9903 - val_loss: 0.0803 - val_acc: 0.9760
Epoch 20/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0332 - acc: 0.9906 - val_loss: 0.0821 - val_acc: 0.9759
###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/shallow_net_in_keras.ipynb) from Lesson 1 by applying the theory we have covered since. Set seed for reproducibility
###Code
import numpy as np
np.random.seed(42)
###Output
_____no_output_____
###Markdown
Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load data
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_test = X_test.reshape(10000, 784).astype('float32')
X_train /= 255
X_test /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=128, epochs=200, verbose=1, validation_data=(X_test, y_test))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/200
60000/60000 [==============================] - 1s - loss: 0.4785 - acc: 0.8642 - val_loss: 0.2507 - val_acc: 0.9255
Epoch 2/200
60000/60000 [==============================] - 1s - loss: 0.2245 - acc: 0.9354 - val_loss: 0.1930 - val_acc: 0.9436
Epoch 3/200
60000/60000 [==============================] - 1s - loss: 0.1716 - acc: 0.9500 - val_loss: 0.1506 - val_acc: 0.9547
Epoch 4/200
60000/60000 [==============================] - 1s - loss: 0.1415 - acc: 0.9586 - val_loss: 0.1313 - val_acc: 0.9602
Epoch 5/200
60000/60000 [==============================] - 1s - loss: 0.1201 - acc: 0.9651 - val_loss: 0.1280 - val_acc: 0.9614
Epoch 6/200
60000/60000 [==============================] - 1s - loss: 0.1045 - acc: 0.9697 - val_loss: 0.1061 - val_acc: 0.9669
Epoch 7/200
60000/60000 [==============================] - 1s - loss: 0.0927 - acc: 0.9726 - val_loss: 0.0984 - val_acc: 0.9697
Epoch 8/200
60000/60000 [==============================] - 1s - loss: 0.0826 - acc: 0.9759 - val_loss: 0.0926 - val_acc: 0.9719
Epoch 9/200
60000/60000 [==============================] - 1s - loss: 0.0758 - acc: 0.9774 - val_loss: 0.0904 - val_acc: 0.9732
Epoch 10/200
60000/60000 [==============================] - 1s - loss: 0.0683 - acc: 0.9797 - val_loss: 0.0963 - val_acc: 0.9705
Epoch 11/200
60000/60000 [==============================] - 1s - loss: 0.0631 - acc: 0.9810 - val_loss: 0.0856 - val_acc: 0.9752
Epoch 12/200
60000/60000 [==============================] - 1s - loss: 0.0575 - acc: 0.9832 - val_loss: 0.0839 - val_acc: 0.9749
Epoch 13/200
60000/60000 [==============================] - 1s - loss: 0.0523 - acc: 0.9846 - val_loss: 0.0881 - val_acc: 0.9733
Epoch 14/200
60000/60000 [==============================] - 1s - loss: 0.0488 - acc: 0.9859 - val_loss: 0.0828 - val_acc: 0.9759
Epoch 15/200
60000/60000 [==============================] - 1s - loss: 0.0454 - acc: 0.9869 - val_loss: 0.0844 - val_acc: 0.9735
Epoch 16/200
60000/60000 [==============================] - 1s - loss: 0.0420 - acc: 0.9875 - val_loss: 0.0864 - val_acc: 0.9749
Epoch 17/200
60000/60000 [==============================] - 1s - loss: 0.0400 - acc: 0.9886 - val_loss: 0.0848 - val_acc: 0.9746
Epoch 18/200
60000/60000 [==============================] - 1s - loss: 0.0364 - acc: 0.9894 - val_loss: 0.0748 - val_acc: 0.9775
Epoch 19/200
60000/60000 [==============================] - 1s - loss: 0.0336 - acc: 0.9902 - val_loss: 0.0824 - val_acc: 0.9756
Epoch 20/200
60000/60000 [==============================] - 1s - loss: 0.0315 - acc: 0.9911 - val_loss: 0.0802 - val_acc: 0.9772
Epoch 21/200
60000/60000 [==============================] - 1s - loss: 0.0304 - acc: 0.9914 - val_loss: 0.0791 - val_acc: 0.9759
Epoch 22/200
60000/60000 [==============================] - 1s - loss: 0.0274 - acc: 0.9923 - val_loss: 0.0769 - val_acc: 0.9777
Epoch 23/200
60000/60000 [==============================] - 1s - loss: 0.0255 - acc: 0.9930 - val_loss: 0.0776 - val_acc: 0.9781
Epoch 24/200
60000/60000 [==============================] - 1s - loss: 0.0241 - acc: 0.9937 - val_loss: 0.0783 - val_acc: 0.9771
Epoch 25/200
60000/60000 [==============================] - 1s - loss: 0.0226 - acc: 0.9936 - val_loss: 0.0824 - val_acc: 0.9764
Epoch 26/200
60000/60000 [==============================] - 1s - loss: 0.0212 - acc: 0.9945 - val_loss: 0.0812 - val_acc: 0.9774
Epoch 27/200
60000/60000 [==============================] - 1s - loss: 0.0190 - acc: 0.9954 - val_loss: 0.0795 - val_acc: 0.9784
Epoch 28/200
60000/60000 [==============================] - 1s - loss: 0.0177 - acc: 0.9958 - val_loss: 0.0829 - val_acc: 0.9759
Epoch 29/200
60000/60000 [==============================] - 1s - loss: 0.0166 - acc: 0.9962 - val_loss: 0.0808 - val_acc: 0.9779
Epoch 30/200
60000/60000 [==============================] - 1s - loss: 0.0147 - acc: 0.9970 - val_loss: 0.0836 - val_acc: 0.9774
Epoch 31/200
60000/60000 [==============================] - 1s - loss: 0.0143 - acc: 0.9967 - val_loss: 0.0811 - val_acc: 0.9778
Epoch 32/200
60000/60000 [==============================] - 1s - loss: 0.0127 - acc: 0.9976 - val_loss: 0.0823 - val_acc: 0.9786
Epoch 33/200
60000/60000 [==============================] - 1s - loss: 0.0117 - acc: 0.9977 - val_loss: 0.0843 - val_acc: 0.9772
Epoch 34/200
60000/60000 [==============================] - 1s - loss: 0.0112 - acc: 0.9978 - val_loss: 0.0842 - val_acc: 0.9776
Epoch 35/200
60000/60000 [==============================] - 1s - loss: 0.0104 - acc: 0.9981 - val_loss: 0.0907 - val_acc: 0.9756
Epoch 36/200
60000/60000 [==============================] - 1s - loss: 0.0098 - acc: 0.9981 - val_loss: 0.0853 - val_acc: 0.9775
Epoch 37/200
60000/60000 [==============================] - 1s - loss: 0.0090 - acc: 0.9984 - val_loss: 0.0861 - val_acc: 0.9770
Epoch 38/200
60000/60000 [==============================] - 1s - loss: 0.0081 - acc: 0.9989 - val_loss: 0.0872 - val_acc: 0.9764
Epoch 39/200
60000/60000 [==============================] - 1s - loss: 0.0074 - acc: 0.9991 - val_loss: 0.0918 - val_acc: 0.9768
Epoch 40/200
60000/60000 [==============================] - 1s - loss: 0.0069 - acc: 0.9990 - val_loss: 0.0898 - val_acc: 0.9771
Epoch 41/200
60000/60000 [==============================] - 1s - loss: 0.0068 - acc: 0.9990 - val_loss: 0.0882 - val_acc: 0.9765
Epoch 42/200
60000/60000 [==============================] - 1s - loss: 0.0063 - acc: 0.9993 - val_loss: 0.0909 - val_acc: 0.9765
Epoch 43/200
60000/60000 [==============================] - 1s - loss: 0.0057 - acc: 0.9995 - val_loss: 0.0904 - val_acc: 0.9780
Epoch 44/200
60000/60000 [==============================] - 1s - loss: 0.0051 - acc: 0.9996 - val_loss: 0.0905 - val_acc: 0.9776
Epoch 45/200
60000/60000 [==============================] - 1s - loss: 0.0050 - acc: 0.9996 - val_loss: 0.0917 - val_acc: 0.9773
Epoch 46/200
60000/60000 [==============================] - 1s - loss: 0.0045 - acc: 0.9997 - val_loss: 0.0917 - val_acc: 0.9773
Epoch 47/200
60000/60000 [==============================] - 1s - loss: 0.0043 - acc: 0.9997 - val_loss: 0.0912 - val_acc: 0.9777
Epoch 48/200
60000/60000 [==============================] - 1s - loss: 0.0039 - acc: 0.9998 - val_loss: 0.0943 - val_acc: 0.9769
Epoch 49/200
60000/60000 [==============================] - 1s - loss: 0.0037 - acc: 0.9999 - val_loss: 0.0959 - val_acc: 0.9764
Epoch 50/200
60000/60000 [==============================] - 1s - loss: 0.0036 - acc: 0.9999 - val_loss: 0.0939 - val_acc: 0.9780
Epoch 51/200
60000/60000 [==============================] - 1s - loss: 0.0032 - acc: 0.9999 - val_loss: 0.0928 - val_acc: 0.9774
Epoch 52/200
60000/60000 [==============================] - 1s - loss: 0.0032 - acc: 0.9999 - val_loss: 0.0958 - val_acc: 0.9767
Epoch 53/200
60000/60000 [==============================] - 1s - loss: 0.0031 - acc: 0.9999 - val_loss: 0.0953 - val_acc: 0.9779
Epoch 54/200
60000/60000 [==============================] - 1s - loss: 0.0029 - acc: 0.9999 - val_loss: 0.0965 - val_acc: 0.9768
Epoch 55/200
60000/60000 [==============================] - 1s - loss: 0.0027 - acc: 0.9999 - val_loss: 0.0965 - val_acc: 0.9779
Epoch 56/200
60000/60000 [==============================] - 1s - loss: 0.0026 - acc: 0.9999 - val_loss: 0.0954 - val_acc: 0.9776
Epoch 57/200
60000/60000 [==============================] - 1s - loss: 0.0024 - acc: 0.9999 - val_loss: 0.0961 - val_acc: 0.9781
Epoch 58/200
60000/60000 [==============================] - 1s - loss: 0.0024 - acc: 0.9999 - val_loss: 0.0963 - val_acc: 0.9778
Epoch 59/200
60000/60000 [==============================] - 1s - loss: 0.0023 - acc: 1.0000 - val_loss: 0.0983 - val_acc: 0.9775
Epoch 60/200
60000/60000 [==============================] - 1s - loss: 0.0022 - acc: 1.0000 - val_loss: 0.0989 - val_acc: 0.9776
Epoch 61/200
60000/60000 [==============================] - 1s - loss: 0.0021 - acc: 1.0000 - val_loss: 0.0995 - val_acc: 0.9772
Epoch 62/200
60000/60000 [==============================] - 1s - loss: 0.0021 - acc: 1.0000 - val_loss: 0.1005 - val_acc: 0.9770
Epoch 63/200
60000/60000 [==============================] - 1s - loss: 0.0020 - acc: 1.0000 - val_loss: 0.1007 - val_acc: 0.9772
Epoch 64/200
###Markdown
Intermediate Neural Network in Keras Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load data
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess Data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_test = X_test.reshape(10000, 784).astype('float32')
X_train /= 255
X_test /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_test = keras.utils.to_categorical(y_test, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
(64*784)+64  # dense_1 parameters: 784 inputs * 64 units + 64 biases = 50,240
(64*64)+64   # dense_2 parameters: 64 inputs * 64 units + 64 biases = 4,160
(10*64)+10   # dense_3 parameters: 64 inputs * 10 units + 10 biases = 650
###Output
_____no_output_____
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.01), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train
###Code
model.fit(X_train, y_train, validation_data=(X_test, y_test), batch_size=128, epochs=10, verbose=1)
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/10
60000/60000 [==============================] - 1s - loss: 1.3285 - acc: 0.6546 - val_loss: 0.6498 - val_acc: 0.8395
Epoch 2/10
60000/60000 [==============================] - 1s - loss: 0.5260 - acc: 0.8626 - val_loss: 0.4210 - val_acc: 0.8868
Epoch 3/10
60000/60000 [==============================] - 1s - loss: 0.4026 - acc: 0.8880 - val_loss: 0.3545 - val_acc: 0.9000
Epoch 4/10
60000/60000 [==============================] - 1s - loss: 0.3545 - acc: 0.8998 - val_loss: 0.3221 - val_acc: 0.9092
Epoch 5/10
60000/60000 [==============================] - 1s - loss: 0.3262 - acc: 0.9074 - val_loss: 0.2993 - val_acc: 0.9140
Epoch 6/10
60000/60000 [==============================] - 1s - loss: 0.3059 - acc: 0.9134 - val_loss: 0.2845 - val_acc: 0.9186
Epoch 7/10
60000/60000 [==============================] - 1s - loss: 0.2902 - acc: 0.9177 - val_loss: 0.2723 - val_acc: 0.9233
Epoch 8/10
60000/60000 [==============================] - 1s - loss: 0.2768 - acc: 0.9213 - val_loss: 0.2619 - val_acc: 0.9239
Epoch 9/10
60000/60000 [==============================] - 1s - loss: 0.2658 - acc: 0.9247 - val_loss: 0.2521 - val_acc: 0.9265
Epoch 10/10
60000/60000 [==============================] - 1s - loss: 0.2558 - acc: 0.9276 - val_loss: 0.2432 - val_acc: 0.9297
###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/TensorFlow-LiveLessons/blob/master/notebooks/shallow_net_in_keras.ipynb) from Lesson 1 by applying the theory we have covered since. Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load data
###Code
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_valid = X_valid.reshape(10000, 784).astype('float32')
X_train /= 255
X_valid /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_valid = keras.utils.to_categorical(y_valid, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(X_valid, y_valid))
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/20
60000/60000 [==============================] - 1s 15us/step - loss: 0.4744 - acc: 0.8637 - val_loss: 0.2686 - val_acc: 0.9234
Epoch 2/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.2414 - acc: 0.9289 - val_loss: 0.2004 - val_acc: 0.9404
Epoch 3/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1871 - acc: 0.9452 - val_loss: 0.1578 - val_acc: 0.9521
Epoch 4/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1538 - acc: 0.9551 - val_loss: 0.1435 - val_acc: 0.9574
Epoch 5/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1311 - acc: 0.9616 - val_loss: 0.1258 - val_acc: 0.9616
Epoch 6/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1148 - acc: 0.9659 - val_loss: 0.1245 - val_acc: 0.9641
Epoch 7/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.1017 - acc: 0.9700 - val_loss: 0.1066 - val_acc: 0.9683
Epoch 8/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0914 - acc: 0.9728 - val_loss: 0.1029 - val_acc: 0.9672
Epoch 9/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0821 - acc: 0.9760 - val_loss: 0.0942 - val_acc: 0.9709
Epoch 10/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0738 - acc: 0.9785 - val_loss: 0.1035 - val_acc: 0.9691
Epoch 11/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0672 - acc: 0.9796 - val_loss: 0.1000 - val_acc: 0.9710
Epoch 12/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0617 - acc: 0.9820 - val_loss: 0.0913 - val_acc: 0.9735
Epoch 13/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0570 - acc: 0.9835 - val_loss: 0.0817 - val_acc: 0.9754
Epoch 14/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0526 - acc: 0.9844 - val_loss: 0.0917 - val_acc: 0.9729
Epoch 15/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0477 - acc: 0.9861 - val_loss: 0.0822 - val_acc: 0.9752
Epoch 16/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0450 - acc: 0.9868 - val_loss: 0.0845 - val_acc: 0.9752
Epoch 17/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0413 - acc: 0.9878 - val_loss: 0.0842 - val_acc: 0.9741
Epoch 18/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0384 - acc: 0.9887 - val_loss: 0.0833 - val_acc: 0.9752
Epoch 19/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0356 - acc: 0.9903 - val_loss: 0.0803 - val_acc: 0.9760
Epoch 20/20
60000/60000 [==============================] - 1s 12us/step - loss: 0.0332 - acc: 0.9906 - val_loss: 0.0821 - val_acc: 0.9759
###Markdown
Intermediate Neural Network in Keras In this notebook, we improve our [introductory shallow net](https://github.com/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/shallow_net_in_keras.ipynb) by applying the theory we have covered since. [](https://colab.research.google.com/github/the-deep-learners/deep-learning-illustrated/blob/master/notebooks/intermediate_net_in_keras.ipynb) Load dependencies
###Code
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD
###Output
Using TensorFlow backend.
###Markdown
Load data
###Code
(X_train, y_train), (X_valid, y_valid) = mnist.load_data()
###Output
_____no_output_____
###Markdown
Preprocess data
###Code
X_train = X_train.reshape(60000, 784).astype('float32')
X_valid = X_valid.reshape(10000, 784).astype('float32')
X_train /= 255
X_valid /= 255
n_classes = 10
y_train = keras.utils.to_categorical(y_train, n_classes)
y_valid = keras.utils.to_categorical(y_valid, n_classes)
###Output
_____no_output_____
###Markdown
Design neural network architecture
###Code
model = Sequential()
model.add(Dense(64, activation='relu', input_shape=(784,)))
model.add(Dense(64, activation='relu'))
model.add(Dense(10, activation='softmax'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 64) 50240
_________________________________________________________________
dense_2 (Dense) (None, 64) 4160
_________________________________________________________________
dense_3 (Dense) (None, 10) 650
=================================================================
Total params: 55,050
Trainable params: 55,050
Non-trainable params: 0
_________________________________________________________________
###Markdown
Configure model
###Code
model.compile(loss='categorical_crossentropy', optimizer=SGD(lr=0.1), metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train!
###Code
model.fit(X_train, y_train, batch_size=128, epochs=20, verbose=1, validation_data=(X_valid, y_valid))
###Output
_____no_output_____ |
examples/simulating_a_predefined_model.ipynb | ###Markdown
Simulate a Predefined Model Example created by Wilson Rocha Lacerda Junior
###Code
pip install sysidentpy
import numpy as np
import pandas as pd
from sysidentpy.simulation import SimulateNARMAX
from sysidentpy.metrics import root_relative_squared_error
from sysidentpy.utils.generate_data import get_siso_data
from sysidentpy.basis_function._basis_function import Polynomial
from sysidentpy.utils.display_results import results
from sysidentpy.utils.plotting import plot_residues_correlation, plot_results
from sysidentpy.residues.residues_correlation import compute_residues_autocorrelation, compute_cross_correlation
###Output
_____no_output_____
###Markdown
Generating 1 input 1 output sample data The data is generated by simulating the following model: $y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-2} + e_{k}$ If *colored_noise* is set to True: $e_{k} = 0.8\nu_{k-1} + \nu_{k}$ where $x$ is a uniformly distributed random variable and $\nu$ is a Gaussian distributed variable with $\mu=0$ and $\sigma=0.1$. In the next example we will generate a dataset with 1000 samples with white noise, selecting 90% of the data to train the model.
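A minimal sketch of simulating this generating equation directly with NumPy (hypothetical variable names; the data actually used below comes from `get_siso_data`):
###Code
# Hypothetical manual simulation of y_k = 0.2*y[k-1] + 0.1*y[k-1]*x[k-1] + 0.9*x[k-2] + e_k
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x = rng.uniform(-1, 1, n)    # uniformly distributed input
nu = rng.normal(0, 0.1, n)   # Gaussian noise, mu=0, sigma=0.1
y = np.zeros(n)
for k in range(2, n):
    # white-noise case: e_k = nu_k
    y[k] = 0.2 * y[k - 1] + 0.1 * y[k - 1] * x[k - 1] + 0.9 * x[k - 2] + nu[k]
###Output
_____no_output_____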
###Code
x_train, x_test, y_train, y_test = get_siso_data(
n=1000,
colored_noise=False,
sigma=0.001,
train_percentage=90
)
###Output
_____no_output_____
###Markdown
Defining the model We already know that the generated data is a result of the model $y_k = 0.2y_{k-1} + 0.1y_{k-1}x_{k-1} + 0.9x_{k-2} + e_{k}$. Thus, we can create a model with those regressors following a codification pattern:- $0$ is the constant term,- $[1001] = y_{k-1}$- $[100n] = y_{k-n}$- $[200n] = x1_{k-n}$- $[300n] = x2_{k-n}$- $[1011, 1001] = y_{k-11} \times y_{k-1}$- $[100n, 100m] = y_{k-n} \times y_{k-m}$- $[12001, 1003, 1001] = x11_{k-1} \times y_{k-3} \times y_{k-1}$- and so on Important Note The order of the arrays matters. If you use [2001, 1001], it will work, but [1001, 2001] will not (the regressor will be ignored). Always put the highest value first:- $[2003, 2001]$ **works**- $[2001, 2003]$ **does not work** We will handle this limitation in an upcoming update.
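For instance, a hypothetical model $y_k = 0.5y_{k-2} + 0.3x2_{k-1}x1_{k-1}$ could be encoded following this pattern (single regressors padded with 0, larger code first in each row):
###Code
import numpy as np

# Hypothetical encoding of y(k-2) and x2(k-1)*x1(k-1)
example_model = np.array(
    [
        [1002, 0],     # y(k-2)
        [3001, 2001],  # x2(k-1)x1(k-1)
    ]
)
example_theta = np.array([[0.5, 0.3]]).T
###Output
_____no_output_____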
###Code
s = SimulateNARMAX(basis_function=Polynomial(), calculate_err=True, estimate_parameter=False, extended_least_squares=True)
# the model must be a numpy array
model = np.array(
[
[1001, 0], # y(k-1)
[2001, 1001], # x1(k-1)y(k-1)
[2002, 0], # x1(k-2)
]
)
# theta must be a numpy array of shape (n, 1) where n is the number of regressors
theta = np.array([[0.2, 0.9, 0.1]]).T
###Output
_____no_output_____
###Markdown
Simulating the model After defining the model and theta, we just need to use the simulate method. The simulate method returns the predicted values and the results, where we can look at the regressors, parameters, and ERR values.
###Code
yhat = s.simulate(
X_test=x_test,
y_test=y_test,
model_code=model,
theta=theta,
)
r = pd.DataFrame(
results(
s.final_model, s.theta, s.err,
s.n_terms, err_precision=8, dtype='sci'
),
columns=['Regressors', 'Parameters', 'ERR'])
print(r)
plot_results(y=y_test, yhat = yhat, n=1000)
ee = compute_residues_autocorrelation(y_test, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_test, yhat, x_test)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
###Output
Regressors Parameters ERR
0 y(k-1) 2.0000E-01 0.00000000E+00
1 x1(k-2) 9.0000E-01 0.00000000E+00
2 x1(k-1)y(k-1) 1.0000E-01 0.00000000E+00
###Markdown
Options You can set the `steps_ahead` argument to run the prediction/simulation:
###Code
yhat = s.simulate(
X_test=x_test,
y_test=y_test,
model_code=model,
theta=theta,
steps_ahead=1,
)
rrse = root_relative_squared_error(y_test, yhat)
print(rrse)
yhat = s.simulate(
X_test=x_test,
y_test=y_test,
model_code=model,
theta=theta,
steps_ahead=21,
)
rrse = root_relative_squared_error(y_test, yhat)
print(rrse)
###Output
0.0018387456847899486
###Markdown
Estimating the parameters If you have only the model structure, you can create an object with `estimate_parameter=True` and choose the method for estimation using `estimator`. In this case, you have to pass the training data for parameter estimation. When `estimate_parameter=True`, we also compute the ERR considering only the regressors defined by the user.
###Code
s = SimulateNARMAX(basis_function=Polynomial(), estimate_parameter=True, estimator='least_squares', calculate_err=True)
yhat = s.simulate(
X_train=x_train,
y_train=y_train,
X_test=x_test,
y_test=y_test,
model_code=model,
# theta will be estimated using the defined estimator
)
r = pd.DataFrame(
results(
s.final_model, s.theta, s.err,
s.n_terms, err_precision=8, dtype='sci'
),
columns=['Regressors', 'Parameters', 'ERR'])
print(r)
plot_results(y=y_test, yhat = yhat, n=1000)
ee = compute_residues_autocorrelation(y_test, yhat)
plot_residues_correlation(data=ee, title="Residues", ylabel="$e^2$")
x1e = compute_cross_correlation(y_test, yhat, x_test)
plot_residues_correlation(data=x1e, title="Residues", ylabel="$x_1e$")
###Output
Regressors Parameters ERR
0 y(k-1) 2.0006E-01 9.56312958E-01
1 x1(k-2) 8.9993E-01 4.04769137E-02
2 x1(k-1)y(k-1) 9.9979E-02 3.20650789E-03
|
lab/lab04/lab04.ipynb | ###Markdown
Lab 4: Functions and Visualizations Welcome to Lab 4! This week, we'll learn about functions, table methods such as `apply`, and how to generate visualizations! Recommended Reading:* [Applying a Function to a Column](https://www.inferentialthinking.com/chapters/08/1/applying-a-function-to-a-column.html)* [Visualizations](https://www.inferentialthinking.com/chapters/07/visualization.html)First, set up the notebook by running the cell below.
###Code
import numpy as np
from datascience import *
# These lines set up graphing capabilities.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
**Deadline**: If you are not attending lab physically, you have to complete this lab and submit by Wednesday, February 12th before 8:59 A.M. in order to receive lab credit. Otherwise, please attend the lab you are enrolled in, get checked off with your (u)GSI or learning assistant **AND** submit this assignment by the end of the lab section (with whatever progress you've made) to receive lab credit. **Submission**: Once you're finished, select "Save and Checkpoint" in the File menu and then execute the submit cell at the end. The result will contain a link that you can use to check that your assignment has been submitted successfully. 1. Defining functions Let's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of `to_percentage(.5)` should be the number 50 (no percent sign). A function definition has a few parts. `def` It always starts with `def` (short for **def**ine): def Name Next comes the name of the function. Like other names we've defined, it can't start with a number or contain spaces. Let's call our function `to_percentage`: def to_percentage Signature Next comes something called the *signature* of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. A function can have any number of arguments (including 0!). `to_percentage` should take one argument, and we'll call that argument `proportion` since it should be a proportion. def to_percentage(proportion) If we want our function to take more than one argument, we add a comma between each argument name. Note that if we had zero arguments, we'd still place the parentheses () after the name. We put a colon after the signature to tell Python it's over. If you're getting a syntax error after defining a function, check to make sure you remembered the colon! def to_percentage(proportion): Documentation Functions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing an **indented** triple-quoted string: def to_percentage(proportion): """Converts a proportion to a percentage.""" Body Now we start writing code that runs when the function is called. This is called the *body* of the function and every line **must be indented with a tab**. Any lines that are *not* indented and left-aligned with the def statement are considered outside the function. Some notes about the body of the function:- We can write code that we would write anywhere else. - We use the arguments defined in the function signature. We can do this because we assume that when we call the function, values are already assigned to those arguments.- We generally avoid referencing variables defined *outside* the function. If you would like to reference variables outside of the function, pass them through as arguments! Now, let's give a name to the number we multiply a proportion by to get a percentage: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 `return` The special instruction `return` is part of the function's body and tells Python to make the value of the function call equal to whatever comes right after `return`.
We want the value of `to_percentage(.5)` to be the proportion .5 times the factor 100, so we write: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 return proportion * factor `return` only makes sense in the context of a function, and **can never be used outside of a function**. `return` is always the last line of the function because Python stops executing the body of a function once it hits a `return` statement.*Note:* `return` inside a function tells Python what value the function evaluates to. However, there are other functions, like `print`, that have no `return` value. For example, `print` simply prints a certain value out to the console. `return` and `print` are **very** different. **Question 1.1.** Define `to_percentage` in the cell below. Call your function to convert the proportion .2 to a percentage. Name that percentage `twenty_percent`.<!--BEGIN QUESTIONname: q11-->
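If it helps, here is the same pattern assembled into one complete, unrelated function (a hypothetical example, not the answer to Question 1.1):
###Code
# A hypothetical example showing def, signature, docstring, body, and return together
def to_fahrenheit(celsius):
    """Converts a temperature from Celsius to Fahrenheit."""
    factor = 9 / 5
    return celsius * factor + 32

to_fahrenheit(100)  # evaluates to 212.0
###Output
_____no_output_____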
###Code
...
"""" Converts a proportion to a percentage"""
factor = ...
...
twenty_percent = ...
twenty_percent
grader.check("q11")
###Output
_____no_output_____
###Markdown
Like you’ve done with built-in functions in previous labs (max, abs, etc.), you can pass in named values as arguments to your function.**Question 1.2.** Use `to_percentage` again to convert the proportion named `a_proportion` (defined below) to a percentage called `a_percentage`.*Note:* You don't need to define `to_percentage` again! Like other named values, functions stick around after you define them.<!--BEGIN QUESTIONname: q12-->
###Code
a_proportion = 2**(.5) / 2
a_percentage = ...
a_percentage
grader.check("q12")
###Output
_____no_output_____
###Markdown
Here's something important about functions: the names assigned *within* a function body are only accessible within the function body. Once the function has returned, those names are gone. So even if you created a variable called `factor` and defined `factor = 100` inside of the body of the `to_percentage` function and then called `to_percentage`, `factor` would not have a value assigned to it outside of the body of `to_percentage`:
###Code
# You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor
###Output
_____no_output_____
###Markdown
As we've seen with built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.**Question 1.3.** Define a function called `disemvowel`. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters "a", "e", "i", "o", and "u".) You can use as many lines inside of the function to do this as you’d like.*Hint:* To remove all the "a"s from a string, you can use `that_string.replace("a", "")`. The `.replace` method for strings returns a new string, so you can call `replace` multiple times, one after the other. <!--BEGIN QUESTIONname: q13-->
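To see how the hint's chaining works on an unrelated string (not the answer to the question):
###Code
# Each .replace call returns a new string, so calls can be chained one after the other
"banana".replace("a", "").replace("n", "")  # evaluates to 'b'
###Output
_____no_output_____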
###Code
def disemvowel(a_string):
"""Removes all vowels from a string."""
...
# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
grader.check("q13")
###Output
_____no_output_____
###Markdown
Calls on calls on calls Just as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written. If a function is like a recipe, defining a function in terms of other functions is like having a recipe for cake telling you to follow another recipe to make the frosting, and another to make the jam filling. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming. For example, suppose you want to count the number of characters *that aren't vowels* in a piece of text. One way to do that is to remove all the vowels and count the size of the remaining string.**Question 1.4.** Write a function called `num_non_vowels`. It should take a string as its argument and return a number. That number should be the number of characters in the argument string that aren't vowels. You should use the `disemvowel` function you wrote above inside of the `num_non_vowels` function.*Hint:* The function `len` takes a string as its argument and returns the number of characters in it.<!--BEGIN QUESTIONname: q14-->
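As a small, hypothetical illustration of one function calling another (unrelated to the question):
###Code
# A hypothetical example of building one function on top of another
def shout(text):
    """Returns the text in upper case with an exclamation point."""
    return text.upper() + "!"

def shout_twice(text):
    """Calls shout and repeats its result."""
    return shout(text) + " " + shout(text)

shout_twice("hi")  # evaluates to 'HI! HI!'
###Output
_____no_output_____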
###Code
def num_non_vowels(a_string):
"""The number of characters in a string, minus the vowels."""
...
# Try calling your function yourself to make sure the output is what
# you expect. You can also use the interact function in the next cell if you'd like.
grader.check("q14")
###Output
_____no_output_____
###Markdown
Functions can also encapsulate code that *displays output* instead of computing a value. For example, if you call `print` inside a function, and then call that function, something will get printed.The `movies_by_year` dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:
###Code
movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)
###Output
_____no_output_____
###Markdown
After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.**Question 1.5.** Write a function called `print_kth_top_movie_year`. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above. *Note:* Your function shouldn't have a `return` statement.<!--BEGIN QUESTIONname: q15-->
###Code
...
print(...)
...
# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
grader.check("q15")
# interact also allows you to pass in an array for a function argument. It will
# then present a dropdown menu of options.
_ = interact(print_kth_top_movie_year, k=np.arange(1, 10))
###Output
_____no_output_____
###Markdown
`print` is not the same as `return` The `print_kth_top_movie_year(k)` function prints the total gross movie sales for the year that was provided! However, since we did not return any value in this function, we cannot use its result after we call it. Let's look at an example of another function that prints a value but does not return it.
###Code
def print_number_five():
print(5)
print_number_five()
###Output
_____no_output_____
###Markdown
However, if we try to use the output of `print_number_five()`, we see that the value `5` is printed but we get a TypeError when we try to add the number 2 to it!
###Code
print_number_five_output = print_number_five()
print_number_five_output + 2
###Output
_____no_output_____
###Markdown
It may seem that `print_number_five()` is returning a value, 5. In reality, it just displays the number 5 to you without giving you the actual value! If your function prints out a value without returning it and you try to use that value, you will run into errors, so be careful!Explain to your neighbor how you might add a line of code to the `print_number_five` function (after `print(5)`) so that the code `print_number_five_output + 5` would result in the value `10`, rather than an error. 2. Functions and CEO IncomesIn this question, we'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data was compiled from a [Los Angeles Times analysis](http://spreadsheets.latimes.com/california-ceo-compensation/), and ultimately came from [filings](https://www.sec.gov/answers/proxyhtf.htm) mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.We've copied the raw data from the LA Times page into a file called `raw_compensation.csv`. (The page notes that all dollar amounts are in **millions of dollars**.)
###Code
raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation
###Output
_____no_output_____
###Markdown
We want to compute the average of the CEOs' pay. Try running the cell below.
###Code
np.average(raw_compensation.column("Total Pay"))
###Output
_____no_output_____
###Markdown
You should see a TypeError. Let's examine why this error occurred by looking at the values in the `Total Pay` column. **Question 2.1.** Use the `type` function and set `total_pay_type` to the type of the first value in the "Total Pay" column.<!--BEGIN QUESTIONname: q21-->
###Code
total_pay_type = ...
total_pay_type
grader.check("q21")
###Output
_____no_output_____
###Markdown
**Question 2.2.** You should have found that the values in the `Total Pay` column are strings. It doesn't make sense to take the average of string values, so we need to convert them to numbers if we want to do this. Extract the first value in `Total Pay`. It's Mark Hurd's pay in 2015, in *millions* of dollars. Call it `mark_hurd_pay_string`.<!--BEGIN QUESTIONname: q22-->
###Code
mark_hurd_pay_string = ...
mark_hurd_pay_string
grader.check("q22")
###Output
_____no_output_____
###Markdown
**Question 2.3.** Convert `mark_hurd_pay_string` to a number of *dollars*. Some hints, as this question requires multiple steps:- The string method `strip` will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of `"100%".strip("%")` is the string `"100"`. - You'll also need the function `float`, which converts a string that looks like a number to an actual number. - Finally, remember that the answer should be in dollars, not millions of dollars.<!--BEGIN QUESTIONname: q23-->
###Code
mark_hurd_pay = ...
mark_hurd_pay
grader.check("q23")
###Output
_____no_output_____
###Markdown
To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times. This is where functions come in. First, we'll define a new function, giving a name to the expression that converts "total pay" strings to numeric values. Later in this lab, we'll see the payoff: we can call that function on every pay string in the dataset at once. Section 1 of this lab explained how to define a function. For now, just fill in the ellipses in the cell below. **Question 2.4.** Copy the expression you used to compute `mark_hurd_pay`, and use it as the return expression of the function below. But make sure you replace the specific `mark_hurd_pay_string` with the generic `pay_string` name specified in the first line in the `def` statement.*Hint*: When dealing with functions, you should generally not be referencing any variable outside of the function. Usually, you want to be working with the arguments that are passed into it, such as `pay_string` for this function. If you're using `mark_hurd_pay_string` within your function, you're referencing an outside variable!<!--BEGIN QUESTIONname: q24-->
###Code
def convert_pay_string_to_number(pay_string):
"""Converts a pay string like '$100' (in millions) to a number of
dollars."""
...
grader.check("q24")
###Output
_____no_output_____
###Markdown
Running that cell doesn't convert any particular pay string. Instead, it creates a function called `convert_pay_string_to_number` that can convert *any* string with the right format to a number representing millions of dollars.We can call our function just like we call the built-in functions we've seen. It takes one argument -- a string -- and it returns a float.
###Code
convert_pay_string_to_number('$42')
convert_pay_string_to_number(mark_hurd_pay_string)
# We can also compute Safra Catz's pay in the same way:
convert_pay_string_to_number(raw_compensation.where("Name", are.containing("Safra")).column("Total Pay").item(0))
###Output
_____no_output_____
###Markdown
So, what have we gained by defining the `convert_pay_string_to_number` function? Well, without it, we'd have to copy the code `10**6 * float(some_pay_string.strip("$"))` each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing. 3. `apply`ing functionsDefining a function is a lot like giving a name to a value with `=`. In fact, a function is a value just like the number 1 or the text "data"!For example, we can make a new name for the built-in function `max` if we want:
###Code
our_name_for_max = max
our_name_for_max(2, 6)
###Output
_____no_output_____
###Markdown
The old name for `max` is still around:
###Code
max(2, 6)
###Output
_____no_output_____
###Markdown
Try just writing `max` or `our_name_for_max` (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.
###Code
max
###Output
_____no_output_____
###Markdown
Now try writing `?max` or `?our_name_for_max` (or the name of any other function) in a cell, and run that cell. An information box should show up at the bottom of your screen with a longer description of the function. *Note: You can also press Shift+Tab after clicking on a name to see similar information!*
###Code
?our_name_for_max
###Output
_____no_output_____
###Markdown
Let's look at what happens when we set `max` to a non-function value. You'll notice that a TypeError will occur when you try calling `max`. Things like integers and strings are not callable. Look out for any functions that might have been renamed when you encounter this type of error.
###Code
max = 6
max(2, 6)
# This cell resets max to the built-in function. Just run this cell, don't change its contents
import builtins
max = builtins.max
###Output
_____no_output_____
###Markdown
Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.
###Code
make_array(max, np.average, are.equal_to)
###Output
_____no_output_____
###Markdown
**Question 3.1.** Make an array containing any 3 other functions you've seen. Call it `some_functions`.<!--BEGIN QUESTIONname: q31-->
###Code
some_functions = ...
some_functions
grader.check("q31")
###Output
_____no_output_____
###Markdown
Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why the following code works. Check your explanation with a neighbor or a staff member.
###Code
make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)
###Output
_____no_output_____
###Markdown
A more useful example of passing functions to other functions as arguments is the table method `apply`.`apply` calls a function many times, once on *each* element in a column of a table. It produces an *array* of the results. Here we use `apply` to convert every CEO's pay to a number, using the function you defined:
###Code
raw_compensation.apply(convert_pay_string_to_number, "Total Pay")
###Output
_____no_output_____
###Markdown
Here's an illustration of what that did: Note that we didn't write `raw_compensation.apply(convert_pay_string_to_number(), "Total Pay")` or `raw_compensation.apply(convert_pay_string_to_number("Total Pay"))`. We just passed the name of the function, with no parentheses, to `apply`, because all we want to do is let `apply` know the name of the function we'd like to use and the name of the column we'd like to use it on. `apply` will then call the function `convert_pay_string_to_number` on each value in the column for us!**Question 3.2.** Using `apply`, make a table that's a copy of `raw_compensation` with one additional column called `Total Pay ($)`. That column should contain the result of applying `convert_pay_string_to_number` to the `Total Pay` column (as we did above). Call the new table `compensation`.<!--BEGIN QUESTIONname: q32-->
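Before you fill in the cell below, here is the general pattern on a small made-up table (a sketch for illustration only, not the graded answer; the table `toy` and the function `double` are invented for this example):

    toy = Table().with_columns("x", make_array(1, 2, 3))
    def double(value):
        return 2 * value
    toy.with_column("2x", toy.apply(double, "x"))   # adds a column containing 2, 4, 6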
###Code
compensation = raw_compensation.with_column(
"Total Pay ($)",
...
)
compensation
grader.check("q32")
###Output
_____no_output_____
###Markdown
Now that we have all the pays as numbers, we can learn more about them through computation.**Question 3.3.** Compute the average total pay of the CEOs in the dataset.<!--BEGIN QUESTIONname: q33-->
###Code
average_total_pay = ...
average_total_pay
grader.check("q33")
###Output
_____no_output_____
###Markdown
**Question 3.4.** Companies pay executives in a variety of ways: in cash, by granting stock or other equity in the company, or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)*Note:* When you answer this question, you'll encounter a red box appearing below your code cell that says something like `RuntimeWarning: invalid value encountered in true_divide`. Don't worry too much about the message. Warnings are raised by Python when it encounters an unusual condition in your code, but the condition is not severe enough to warrant throwing an error. The warning below is Python's cryptic way of telling you that you're dividing a number by zero. If you extract the values in `Total Pay ($)` as an array, you'll see that the last element is 0.<!--BEGIN QUESTIONname: q34-->
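If you want to see where that warning comes from before computing your answer, here is a tiny made-up example (a sketch, not the graded answer) that triggers the same kind of message, since the last entry divides zero by zero:

    # Elementwise division of two made-up arrays; the 0/0 entry becomes nan,
    # and NumPy raises a RuntimeWarning about an invalid value in the division.
    np.array([1., 2., 0.]) / np.array([2., 4., 0.])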
###Code
cash_proportion = ...
cash_proportion
grader.check("q34")
###Output
_____no_output_____
###Markdown
Check out the `% Change` column in `compensation`. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are *strings*, not numbers, so like the `Total Pay` column, it's not usable without a bit of extra work.Given your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is $\$120$ this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\frac{\$120}{1 + \frac{50}{100}}$, or \$80.**Question 3.5.** Create a new table called `with_previous_compensation`. It should be a copy of `compensation`, but with the "(No previous year)" CEOs filtered out, and with an extra column called `2014 Total Pay ($)`. That column should have each CEO's pay in 2014.*Hint 1:* You can print out your results after each step to make sure you're on the right track.*Hint 2:* We've provided a structure that you can use to get to the answer. However, if it's confusing, feel free to delete the current structure and approach the problem your own way!<!--BEGIN QUESTIONname: q35-->
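Before filling in the cell below, here is a quick numeric check of that formula, using the same made-up numbers as the example above (a sketch, not part of the graded answer):

    current_pay = 120          # this year's pay, in dollars
    percent_increase = 50      # percent change from the previous year
    current_pay / (1 + percent_increase / 100)   # previous year's pay: 80.0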
###Code
# Definition to turn percent to number
def percent_string_to_num(percent_string):
"""Converts a percentage string to a number."""
return ...
# Compensation table where there is a previous year
having_previous_year = ...
# Get the percent changes as numbers instead of strings
# We're still working off the table having_previous_year
percent_changes = ...
# Calculate the previous year's pay
# We're still working off the table having_previous_year
previous_pay = ...
# Put the previous pay column into the having_previous_year table
with_previous_compensation = ...
with_previous_compensation
grader.check("q35")
###Output
_____no_output_____
###Markdown
**Question 3.6.** What was the average pay of these CEOs in 2014?<!--BEGIN QUESTIONname: q36-->
###Code
average_pay_2014 = np.average(with_previous_compensation.column("2014 Total Pay ($)"))
average_pay_2014
grader.check("q36")
###Output
_____no_output_____
###Markdown
**Why is `apply` useful?**For operations like arithmetic, or the functions in the NumPy library, you don't need to use `apply`, because they automatically work on each element of an array. But there are many things that don't. The string manipulation we did in today's lab is one example. Since you can write any code you want in a function, `apply` gives you total control over how you operate on data. 4. HistogramsEarlier, we computed the average pay among the CEOs in our 102-CEO dataset. The average doesn't tell us everything about the amounts CEOs are paid, though. Maybe just a few CEOs make the bulk of the money, even among these 102.We can use a *histogram* method to display the *distribution* of a set of numbers. The table method `hist` takes a single argument, the name of a column of numbers. It produces a histogram of the numbers in that column.**Question 4.1.** Make a histogram of the total pay of the CEOs in `compensation`. Check with your neighbor or a staff member to make sure you have the right plot.<!--BEGIN QUESTIONname: q41-->
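If you'd like to see the method in action first, here is a sketch on a tiny made-up table (the table `toy` and its values are invented for illustration and have nothing to do with the compensation data):

    toy = Table().with_columns("values", make_array(1, 2, 2, 3, 3, 3, 8))
    toy.hist("values")   # draws a histogram of the "values" column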
###Code
...
###Output
_____no_output_____
###Markdown
**Question 4.2.** How many CEOs made more than $30 million in total pay? Find the value using code, then check that the value you found is consistent with what you see in the histogram.*Hint:* Use the table method `where` and the property `num_rows`.<!--BEGIN QUESTIONname: q42-->
###Code
num_ceos_more_than_30_million_2 = compensation.where("Total Pay ($)", are.above(30000000)).num_rows
num_ceos_more_than_30_million_2
grader.check("q42")
###Output
_____no_output_____
###Markdown
5. Project 1 Partner FormProject 1 will be released this Friday! You have the option of working with a partner that is enrolled in your lab. Your GSI will be sending out a form to match you up with a partner for this project. You may also indicate if you're working alone or have already found a partner and do not need to be paired up. This form is **mandatory** - please fill it out before submitting your lab. Set `submitted` to `True` once you've submitted the form.Note: If you are completing this lab before the early submission deadline, the form may not have been sent out yet. Set `submitted` to `True` for now, and keep an eye out for an email from your GSI later this week.<!--BEGIN QUESTIONname: q5-->
###Code
submitted = ...
grader.check("q5")
###Output
_____no_output_____
###Markdown
Great job! You're finished with lab 4! Be sure to...* **run all the tests** (the next cell has a shortcut for that),* **Save and Checkpoint** from the File menu,* **run the last cell to submit your work**,* and **ask one of the staff members to check you off**. ---To double-check your work, the cell below will rerun all of the autograder tests.
###Code
grader.check_all()
###Output
_____no_output_____
###Markdown
SubmissionMake sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
###Code
# Save your notebook first, then run this cell to export your submission.
grader.export(pdf=False)
###Output
_____no_output_____
###Markdown
Lab 4: Principal Component AnalysisIn this lab assignment, we will walk through two examples of Principal Component Analysis (PCA).The first is on the classic handwriting digits dataset to show the immediate utility that PCA can provide.In the second example, we will take a closer look at how PCA works via a diabetes dataset. Due DateThis assignment is due **Wednesday, May 1st at 11:59pm PST**.**Collaboration Policy**Data science is a collaborative activity. While you may talk with others about the homework, we ask that you **write your solutions individually**. If you do discuss the assignments with others please **include their names** in the cell below. **Collaborators:** ... Handwriting Digits The handwriting section of this notebook was taken from materials here from Jake VanderPlas: https://jakevdp.github.io/PythonDataScienceHandbook/05.09-principal-component-analysis.html
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns; sns.set()
from sklearn.decomposition import PCA
###Output
_____no_output_____
###Markdown
Let's load the handwriting digits and look at the shape:
###Code
from sklearn.datasets import load_digits
digits = load_digits()
digits.data.shape
###Output
_____no_output_____
###Markdown
Note that there are 1797 images and each one is 8x8, or 64 pixels. Let's take a look at the handwriting digits dataset:
###Code
# set up the figure
fig = plt.figure(figsize=(6, 6)) # figure size in inches
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
# plot the digits: each image is 8x8 pixels
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
###Output
_____no_output_____
###Markdown
The digits themselves are 64-dimensional since they are 8x8. Let's use PCA to project the digits into two dimensions and look at the representation of the digits we get.Note that the dimensionality changes: we go from 64-dimensional data to 2-dimensional data.
###Code
pca = PCA(2) # project from 64 to 2 dimensions
projected = pca.fit_transform(digits.data)
print(digits.data.shape)
print(projected.shape)
plt.scatter(projected[:, 0], projected[:, 1],
c=digits.target, edgecolor='none', alpha=0.5,
cmap=plt.cm.get_cmap('nipy_spectral', 10))
plt.xlabel('component 1')
plt.ylabel('component 2')
plt.colorbar();
###Output
_____no_output_____
###Markdown
Note that in two dimensions we can get an interesting visualization of the digits. Without doing any supervised learning - without clustering at all - we see the digits basically separate themselves into different regions.This is one of the main advantages of PCA. Our data began as 64-dimensional, but using simple techniques we were able to reduce it into the two dimensions that explain most of the variation in the data.In fact, let's do PCA, return the first 20 components, and examine a cumulative variance plot.
###Code
pca = PCA(20).fit(digits.data)
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance');
###Output
_____no_output_____
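###Markdown
As an optional aside (a sketch for intuition, not one of the lab's questions): scikit-learn's `PCA` also provides an `inverse_transform` method that maps a low-dimensional projection back into the original 64-pixel space. Comparing a digit reconstructed from a handful of components to the original gives a feel for how much information those components keep. The choice of 10 components below is arbitrary, just for illustration.
###Code
# Sketch: reconstruct the first digit from its first 10 principal components.
pca_small = PCA(10).fit(digits.data)
reconstructed = pca_small.inverse_transform(pca_small.transform(digits.data))
fig, axes = plt.subplots(1, 2, figsize=(4, 2))
axes[0].imshow(digits.images[0], cmap=plt.cm.binary)                # original 8x8 digit
axes[1].imshow(reconstructed[0].reshape(8, 8), cmap=plt.cm.binary)  # 10-component reconstruction
axes[0].set_title("original")
axes[1].set_title("reconstructed");
###Output
_____no_output_____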
###Markdown
In the cell above, we plot the cumulative explained variance against the number of components. You can see that with the first 20 components we can explain about 90% of the variance in the data. But the previous plot shows us that even with two components we can get a good representation of our digits.PCA-type methods can be useful in storing images. Rather than store the entire image, your phone/computer can store the PCA representation of it and preserve most of the quality. Now we'll take a closer look at PCA using a diabetes dataset.
###Code
from sklearn.datasets import load_diabetes
import pandas as pd
from scipy import stats
%matplotlib inline
diabetes_data = load_diabetes() # Loading the dataset
###Output
_____no_output_____
###Markdown
Let's take a look at the description of the diabetes dataset. Access the `.DESCR` attribute of `diabetes_data` to learn about the dataset. Use the `print` function to make it look nice.<!--BEGIN QUESTIONname: q0a-->
###Code
...
###Output
_____no_output_____
###Markdown
From the description above, we learn that there are 10 columns of numeric predictive values. Column 11 is the target value. Let's grab these from the data and make new variables for them. In the cell below, create a new variable `diabetes_features` that gets the `data` attribute of `diabetes_data`. Similarly, make a new variable `diabetes_target` that gets the `target` attribute of `diabetes_data`.<!--BEGIN QUESTIONname: q0b-->
###Code
# Grab the feature names
diabetes_feature_names = diabetes_data['feature_names']
# Unpacking the data into new variables
diabetes_features = ...
diabetes_target = ...
###Output
_____no_output_____
###Markdown
Last, let's look at some summary statistics of `diabetes_target.`
###Code
# Look at the summary statistics of numpy array diabetes_target
stats.describe(diabetes_target)
###Output
_____no_output_____
###Markdown
We see that the mean is about 152. Let's make a new variable called `diabetes_class` that has value `Above152` if the target value is above 152 and `Below152` otherwise.
###Code
# Run a loop to make a class variable for the target
diabetes_class = []
for i in range(0,442):
# Get current value of list
current_num = diabetes_target[i]
# If the current value exceeds 152, add "Above152" to the list
if current_num > 152:
diabetes_class.append("Above152")
# If it doesn't add "Below152"
else:
diabetes_class.append("Below152")
diabetes_class
###Output
_____no_output_____
###Markdown
Next, assign `diabetes_class` to `diabetes_target` so that we can use `diabetes_target` for visualization.
###Code
diabetes_target = diabetes_class
###Output
_____no_output_____
###Markdown
Question 1Let's explore the data by creating a scatter matrix of our diabetes features. To do this, we'll create 2D scatter plots for nine of our features, excluding sex.Complete the code below using `sns.pairplot` to create the scatter matrix of `diabetes_df`. Specify the `vars` to be all of the columns except for `sex`.**Hint:** Use the `hue` argument of `sns.pairplot` to color the points by `target`. A legend should then appear automatically on the right side of the figure.<!--BEGIN QUESTIONname: q1a-->
###Code
# Create a Pandas dataframe of the features
diabetes_df = pd.DataFrame(diabetes_features, columns = ['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6'])
# Add the target column to the data frame
diabetes_df['target'] = diabetes_target
# Make the plot using the instructions above
...
###Output
_____no_output_____
###Markdown
Are there any interesting relationships that you see? List at least two relationships you find notable.<!--BEGIN QUESTIONname: q1b--> *Write your answer here, replacing this text.* Question 2aTo apply PCA, we will first need to "center" the data so that the mean of each feature is 0. Additionally, we will need to scale the centered data by $\frac{1}{\sqrt n}$, where $n$ is the number of samples (rows) we have in our dataset. **Do you know why it is important to center and scale the data before applying PCA? Ask a tutor or TA if you are unsure.**<!--BEGIN QUESTIONname: q2a--> *Write your answer here, replacing this text.* Question 2bCompute the columnwise mean of `diabetes_features` in the cell below and store it in `diabetes_mean` (should be a numpy array of 10 means, 1 for each attribute). Then, subtract `diabetes_mean` from `diabetes_features`, divide the result by the $\sqrt n$, and save the result in `normalized_features`.**Hints:** * Use `np.mean` or `np.average` to compute `diabetes_mean`, and pay attention to the `axis` argument.* If you are confused about how numpy deals with arithmetic operations between arrays of different shapes, see this note about [broadcasting](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) for explanations/examples.<!--BEGIN QUESTIONname: q2b-->
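If the broadcasting hint is confusing, here is a tiny made-up example (a sketch, unrelated to the diabetes data and not the graded answer) showing that subtracting a length-2 array of column means from a 3x2 array subtracts columnwise:

    toy = np.array([[1., 2.], [3., 4.], [5., 6.]])
    toy_means = np.mean(toy, axis=0)   # array([3., 4.]), one mean per column
    toy - toy_means                    # each column of the result now has mean 0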
###Code
n = diabetes_features.shape[0] # should be 442
diabetes_mean = ...
normalized_features = ...
ok.grade("q2b");
###Output
_____no_output_____
###Markdown
Question 2cAs you may recall from lecture, PCA is a specific application of the singular value decomposition (SVD) for matrices. In the following cell, let's use the [`np.linalg.svd`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.linalg.svd.html) function compute the SVD of our `normalized_features`. Store the left singular vectors, singular values, and right singular vectors in `u`, `s`, and `vt` respectively.**Hint:** Set the `full_matrices` argument of `np.linalg.svd` to `False`.<!--BEGIN QUESTIONname: q2c-->
###Code
...
u.shape, s, vt.shape
ok.grade("q2c");
###Output
_____no_output_____
###Markdown
Question 2dWhat can we learn from the singular values in `s`? First, we can compute the total variance of the data by summing the squared singular values. We will later be able to use this value to determine the variance captured by a subset of our principal components.Compute the total variance below by summing the square of `s` and store the result in the variable `total_variance`.<!--BEGIN QUESTIONname: q2d-->
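For intuition about why this works (a short derivation under the $\frac{1}{\sqrt n}$ scaling used above, not something you need to submit): writing the centered, scaled data as $\tilde{X} = \frac{1}{\sqrt{n}}(X - \bar{X})$ with SVD $\tilde{X} = U S V^T$, we have $\sum_i s_i^2 = \|\tilde{X}\|_F^2 = \frac{1}{n}\sum_{j}\sum_{k}(X_{kj} - \bar{X}_j)^2 = \sum_j \mathrm{Var}(X_j)$, which is why `total_variance` should match the sum of the feature variances printed by the cell below.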
###Code
total_variance = ...
print("total_variance: {:.3f} should approximately equal the sum of feature variances: {:.3f}"
.format(total_variance, np.sum(np.var(diabetes_features, axis=0))))
ok.grade("q2d");
###Output
_____no_output_____
###Markdown
Question 3aLet's now use only the first two principal components to see what a 2D version of our diabetes data looks like.First, construct the 2D version of the diabetes data by matrix-multiplying our `normalized_features` by the first two right singular vectors in `v`. This will project the diabetes data down from a 10D subspace to a 2D subspace, and the first two right singular vectors are directions for the first two principal components.**Hints:*** To matrix multiply two numpy arrays, use @ or np.dot.* The first two right singular vectors in `v` will be the first two columns of `v`, or the first two rows of `vt` (transposed to be column vectors instead of row vectors). * Since we want to obtain a 2D version of our diabetes dataset, the shape of `diabetes_2d` should be (442, 2).<!--BEGIN QUESTIONname: q3a-->
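If the shapes are confusing, here is a toy check with a small made-up matrix (a sketch, not the graded answer): multiplying an $n \times d$ matrix by the first two right singular vectors (a $d \times 2$ matrix) gives an $n \times 2$ projection.

    toy = np.arange(12).reshape(4, 3) / 10          # a made-up 4 x 3 matrix
    tu, ts, tvt = np.linalg.svd(toy, full_matrices=False)
    (toy @ tvt[:2].T).shape                         # (4, 2): one 2D point per row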
###Code
diabetes_2d = ...
diabetes_2d[0]
ok.grade("q3a");
###Output
_____no_output_____
###Markdown
Now, run the cell below to create the scatter plot of our 2D version of the diabetes data, `diabetes_2d`.
###Code
plt.figure(figsize=(9, 6))
plt.title("PC2 vs. PC1 for Diabetes Data")
plt.xlabel("Diabetes PC1")
plt.ylabel("Diabetes PC2")
sns.scatterplot(diabetes_2d[:, 0], diabetes_2d[:, 1], hue=diabetes_target);
###Output
_____no_output_____
###Markdown
Question 3bWhat do you observe about the plot above? What value of PC1 would you use as a cutoff to distinguish between `Above152` and `Below152`?<!--BEGIN QUESTIONname: q3b--> *Write your answer here, replacing this text.* Question 3cWhat proportion of the total variance is accounted for when we project the diabetes data down to two dimensions? Compute this quantity in the cell below by dividing the sum of the first two squared singular values (also known as component scores) in `s` by the `total_variance` you calculated previously. Store the result in `two_dim_variance`.**Hint:** You can use the code from before where you calculated total variance, but this time, only sum the first two components.<!--BEGIN QUESTIONname: q3c-->
###Code
two_dim_variance = ...
two_dim_variance
ok.grade("q3c");
###Output
_____no_output_____
###Markdown
Question 4As a last step, let's create a [scree plot](https://en.wikipedia.org/wiki/Scree_plot) to visualize the weight of each principal component. In the cell below, create a scree plot by plotting a line plot of the square of the singular values in `s` vs. the principal component number (1st, 2nd, 3rd, and so on).<!--BEGIN QUESTIONname: q4-->
###Code
...
###Output
_____no_output_____
###Markdown
You have completed Lab 4! SubmitMake sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output.**Please save before submitting!**
###Code
# Save your notebook first, then run this cell to submit.
ok.submit()
###Output
_____no_output_____
###Markdown
Lab 4 - Dictionaries and NumPy ArraysThis week we will be going over a few new Python data structures we can use: **dictionaries** and **NumPy** data structures. Dictionaries**Dictionaries** can be very useful. They store key/value pairs that can be used to map one value to another. You can think of a dictionary as a list where the indexes (locations) of the values of the list are no longer their integer locations, but rather their keys.In a list, you access the first item with `my_list[0]`.In a dictionary, you access the "key-th" item with `my_dictionary[key]`.If we think of list items as having their "address" be their location in the list, then a dictionary value's "address" is its key.Some important properties of dictionaries to note:- The key and value **do not** have to be of the same type- We designate a new key/value entry in a dictionary in this format: *key* **:** *value*- We store all these key/value entries in a dictionaries with braces `{}` around the ends (like `[]` with a list) and commas separating the entries - `new_dictionary` = {"a": 100, "b": 200, "c": 300} Let's take a closer look at a dictionary in practice:
###Code
my_dictionary = {"a": 100, "b": 200, "c": 300}
print("The value 'a' maps to the value:", my_dictionary["a"])
print("The value 'b' maps to the value:", my_dictionary["b"])
print("The value 'c' maps to the value:", my_dictionary["c"])
###Output
_____no_output_____
###Markdown
We can't access a dictionary's values like we can access a list's values. If we want the "first" item in a dictionary, we cannot ask for `my_dictionary[0]`, because this request is really asking "What does the key 0 map to in this dictionary?". If your dictionary does not have a value associated with the key 0, you will get an error.
###Code
my_dictionary[0]
###Output
_____no_output_____
###Markdown
A `KeyError` means that you asked for a key that is not in your dictionary. This may happen when you are writing a function with a dictionary, so if you see it, this is what it means. We can add the key/value pair `(key, value)` with the following syntax: `my_dictionary[key] = value`
###Code
my_dictionary["d"] = 400 # Add the key/value pair ("d", 400) to our dictionary
my_dictionary
###Output
_____no_output_____
###Markdown
We can use **any** data type we know as a value in a dictionary...
###Code
# Here, the value we add is a list!
my_dictionary["grocery list"] = ["apples", "bananas", "carrots"]
my_dictionary
###Output
_____no_output_____
###Markdown
...including even having a **dictionary itself** as a value!
###Code
my_dictionary["squares"] = {1: 1, 2: 4, 3: 9, 4: 16}
my_dictionary
###Output
_____no_output_____
###Markdown
We can get a list of a dictionary's keys with the `.keys()` function.
###Code
my_keys = my_dictionary.keys()
my_keys
# Note the type of this list of keys
type(my_keys)
# To convert this to a list, we use the list() function
list(my_keys)
###Output
_____no_output_____
###Markdown
You can also call `list()` directly on the dictionary, and it will give you the same list of keys:
###Code
list(my_dictionary)
###Output
_____no_output_____
###Markdown
To iterate over the keys in a dictionary, we can use a `for` loop!
###Code
for key in my_dictionary:
print("I am a key, and my name is:", key)
###Output
_____no_output_____
###Markdown
We can also get a list of a dictionary's values with the `.values()` function.
###Code
my_values = my_dictionary.values()
my_values
# For the same reason as with the keys, we have to convert this to a list before we use it
list(my_values)
###Output
_____no_output_____
###Markdown
We can't use the same `list(my_dictionary)` trick we used for the keys, since it only gives us the keys; instead, we can iterate over the values of a dictionary like this:
###Code
for value in list(my_dictionary.values()):
print("I am a value, and my name is:", value)
###Output
_____no_output_____
###Markdown
We don't use this list of dictionary values terribly often, but it's something you should know in case you ever need it when writing your own code.We can use this to do cool things like change all the values in a dictionary!
###Code
def add_one_to_dictionary_values(dictionary):
for key in dictionary:
dictionary[key] += 1
return dictionary
new_dictionary = {"data": 94, "cs": 61, "poli sci": 1}
modified_dictionary = add_one_to_dictionary_values(new_dictionary)
modified_dictionary
###Output
_____no_output_____
###Markdown
Question 1Let's try writing a function that uses a dictionary that can help us make up a whole new language so we can communicate in secret with our friends!We want to convert all of our text messages to our new language, which we call *Fake-lish*.Fake-lish converts all letters in a message to another letter. We make a dictionary that maps every letter to another letter, which makes our message impossible to read for anyone other than other people who have the *Fake-lish* dictionary!Spaces should be preserved by this function, so leave spaces as spaces when we convert the message to *Fake-lish*<!--BEGIN QUESTIONname: q1points: 0-->
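If you're not sure where to start, here is the core idea on a tiny made-up mapping (a sketch for illustration; `toy_map` is invented and is not the `fld` dictionary used below): loop over the characters of a string, look each one up in a dictionary, and build up a new string.

    toy_map = {"a": "x", "b": "y", "c": "z"}
    converted = ""
    for letter in "cab":
        converted += toy_map[letter]
    converted   # 'zxy'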
###Code
def fake_lish(text, fake_lish_dictionary):
output_text = ...
for letter in text:
if letter != " ":
converted_letter = ...
output_text += converted_letter
else:
output_text += " "
return output_text
# This is the fake-lish dictionary we will use for this question
# You do not need to know how this works, and you do not need to touch it
fld = {}
for char in list(map(chr, range(97,123))):
fld[char] = chr((ord(char) - 97 + 13) % 26 + 97)
fld
grader.check("q1")
###Output
_____no_output_____
###Markdown
Now we can use this function to send messages that nobody will understand (unless they crack our code...)!
###Code
fake_lish("hello world", fld)
fake_lish("i am speaking in secret hehe", fld)
###Output
_____no_output_____
###Markdown
Here's a cool property of the dictionary we chose to use: look what happens when we encrypt one of our messages... we can use the function again to *decrypt* the messages too!
###Code
fake_lish("hello can you hear me", fld)
fake_lish("uryyb pna lbh urne zr", fld)
###Output
_____no_output_____
###Markdown
Now we can talk in secret! See the **Extra Practice Problems** section to see how this can be useful in a cool way! NumPyYou may have seen in lecture that we can use a whole new family of functions using **NumPy**. We will talk about a few of them in this notebook, but we should make something clear before we get started.**NumPy** is what is known as a *library*. This means that its functions are not automatically present when you start a new Python environment, so we have to **import** it before we can use it.Let's see what happens if we try to use a NumPy function before we import it:
###Code
np.array([1, 2, 3])
###Output
_____no_output_____
###Markdown
We have to use an **import** statement to load in everything NumPy has to offer:
###Code
import numpy as np
our_array = np.array([1, 2, 3])
our_array
###Output
_____no_output_____
###Markdown
We can do all sorts of operations using NumPy arrays. They are similar to Python lists, so we can still do all of those operations:
###Code
for item in our_array:
print(item)
our_array[1]
our_array[1:]
###Output
_____no_output_____
###Markdown
However, NumPy arrays have a few more features we can use. Arithmetic operations with NumPy arrays are slightly different than arithmetic with Python lists. Let's see some examples:
###Code
our_list = [1, 2, 3]
print("Multiplying a Python list does this:", our_list * 2)
print("Multiplying a NumPy array does this:", our_array * 2)
###Output
_____no_output_____
###Markdown
Note the difference: multiplying a Python list by two puts two copies of the list together, whereas multiplying a NumPy array by two multiplies each number in the array by two. This applies to all arithmetic operations.
###Code
print(our_array + 10)
print(our_array - 10)
print(our_array / 2)
print(our_array ** 3)
###Output
_____no_output_____
###Markdown
Some of these arithmetic operations are not allowed with Python lists because lists cannot interact as easily with non-list types. For example, addition (and subtraction) causes an error because the *list* does not understand the meaning of *adding 1*.
###Code
our_list + 1
###Output
_____no_output_____
###Markdown
We can also do arithmetic operations between two arrays:
###Code
np.array([2, 3, 4]) * np.array([10, 20, 30])
###Output
_____no_output_____
###Markdown
If you remember the problem from last week's lab about **pairwise multiplication**, you may recognize that this is exactly how NumPy arrays behave with each other!There is also a NumPy equivalent of the `range()` function, `np.arange()`. You can think of this as "array" range, and it returns a NumPy array instead of a Python list! It has the same end-exclusive behavior as `range`, and overall it behaves in a very similar way to a Python `range()` call.
###Code
np.arange(10)
# We can use it in a for loop, just as we would expect!
for number in np.arange(10):
print(number)
###Output
_____no_output_____
###Markdown
There is one more very important property of NumPy arrays, and it involves something we talked about earlier in the course. NumPy arrays can be created with items of different types, but NumPy automatically casts them all to one type. This is called **type coercion** because all values of the list are *coerced* to become the same type.- **Booleans** cast to **integers** (True -> 1, False -> 0)- **Integers** cast to **strings** (1 -> "1", 2 -> "2") - By extension, **Booleans** also cast to **strings** (True -> "True")Let's look at some examples:
###Code
array1 = np.array([10, 20, True, 40, False])
array1
array2 = np.array([False, 200, 300, 400, True])
array2
array1 + array2
###Output
_____no_output_____
###Markdown
We get this resulting array because:- 10 + (False -> 0) = `10`- 20 + 200 = `220`- (True -> 1) + 300 = `301`- 40 + 400 = `440`- (False -> 0) + (True -> 1) = `1`We see the same behavior when adding strings into the mix:
###Code
array3 = np.array(["data", "science", 15, "cool", True])
array3
###Output
_____no_output_____
###Markdown
Notice how all the values become strings!There are ways to have other more complex data types (lists, dictionaries, etc.) made into NumPy arrays, but they are out of scope as far as this class is concerned. Feel free to try out different types on your own, but you will not be tested on it in this class. Done! 😇That's it! There's nowhere for you to submit this, as labs are not assignments. However, please ask any questions you have with this notebook in lab or on Ed.If you want some extra practice, you may proceed onto the next section, which contains practice problems for this week. Extra Practice Problems Question 2aIn this problem, we will be using dictionaries to implement a login system. For **new accounts**, we **create** a new username with the password given, and for **existing accounts**, we **log in** if the password given matches the correct password for the given username. If a user tries to make an account with a username that **already exists**, we **do not** allow them to make that new account, and if the password **does not** match the username's password, login **fails**.Here is what the function should return:- It should return `"New account"` when a new account is successfully created- It should return `"No new account"` when a new account is not successfully created- It should return `"Successful login"` when login is successful- It should return `"No successful login"` when login is not successfulYou will write two parts of this function:- You must add a new username/password pair to the `accounts` dictionary when a new account is being created- You must check if the given password is correct for an existing account in `accounts`You can see that the argument `new_account` appears as `new_account=False`. This makes `new_account` an *optional* argument, and if no third argument is given to `login`, the default value with be `False`. If you want the value of `new_account` to be `True`, you must put `True` in as the third argument (ex. `login("data94student", "1234", True)`).
###Code
accounts = {}
def login(username, password, new_account=False):
if new_account:
if username not in accounts:
...
print("Account with username:", username, "created with password:", password)
result = ...
return result
else:
print("Username already exists, please select another username")
result = ...
return result
    elif password == ...:
print("Successfully logged in as user:", username)
result = ...
return result
else:
print("Incorrect password, please try again")
result = ...
return result
grader.check("q2a")
###Output
_____no_output_____
###Markdown
Solution (for after you have tried yourself)

    def login(username, password, new_account=False):
        if new_account:
            if username not in accounts:
                accounts[username] = password
                print("Account with username:", username, "created with password:", password)
                result = "New account"
                return result
            else:
                print("Username already exists, please select another username")
                result = "No new account"
                return result
        elif password == accounts[username]:
            print("Successfully logged in as user:", username)
            result = "Successful login"
            return result
        else:
            print("Incorrect password, please try again")
            result = "No successful login"
            return result

Let's look at this function at work:
###Code
login("suraj", "suraj12345", True)
login("isaac", "isaac9876", True)
login("angela", "angela4567", True)
login("suraj", "suraj12345")
login("isaac", "isaac9876")
login("angela", "angela4567")
login("suraj", "password")
login("isaac", "password")
login("angela", "password")
###Output
_____no_output_____
###Markdown
Now if we take a look at our accounts dictionary, we can see that the username/password pairs we have for login are here!
###Code
accounts
# Use this cell to explore how the login function works!
# Try and make your own accounts to see how the dictionary helps us log in!
###Output
_____no_output_____
###Markdown
Question 2bImagine our `accounts` dictionary has been obtained by some people who want to hack into our login system. They have access to all the passwords! We should figure out a way to make sure that even if people have access to the `accounts` dictionary, they still cannot steal people's passwords. We can do this using our `fake_lish` function from earlier! Modify the `login` function in `login_secure` so that it not only stores passwords in fake-lish, but also converts from fake-lish back to English while logging someone in!*Remember*: you have to pass in `fld` as the second input to `fake_lish` for it to work properly.
###Code
accounts_secure = {}
def login_secure(username, password, new_account=False):
if new_account:
if username not in accounts_secure:
password_fake_lish = ...
...
print("Account with username:", username, "created with secure password:", password)
result = ...
return result
else:
print("Username already exists, please select another username")
result = ...
return result
...
print("Successfully logged in as user:", username)
result = ...
return result
else:
print("Incorrect password, please try again")
result = ...
return result
grader.check("q2b")
###Output
_____no_output_____
###Markdown
Solution (for after you have tried yourself)

    def login_secure(username, password, new_account=False):
        if new_account:
            if username not in accounts_secure:
                password_fake_lish = ...
                ...
                print("Account with username:", username, "created with secure password:", password)
                result = ...
                return result
            else:
                print("Username already exists, please select another username")
                result = ...
                return result
        ...
            print("Successfully logged in as user:", username)
            result = ...
            return result
        else:
            print("Incorrect password, please try again")
            result = ...
            return result
###Code
login_secure("suraj", "berkeley", True)
login_secure("isaac", "datascience", True)
login_secure("angela", "iscool", True)
login_secure("suraj", "berkeley")
login_secure("isaac", "datascience")
login_secure("angela", "iscool")
login_secure("suraj", "password")
login_secure("isaac", "password")
login_secure("angela", "password")
###Output
_____no_output_____
###Markdown
Now if we look at our accounts dictionary, it is useless to those hackers!
###Code
accounts_secure
###Output
_____no_output_____
###Markdown
If they try to use these passwords to log in, they won't work! Go cybersecurity!
###Code
login_secure("data94admin", "tbbqcnffjbeq")
login_secure("suraj", "orexryrl")
login_secure("isaac", "qngnfpvrapr")
login_secure("angela", "vfpbby")
###Output
_____no_output_____
###Markdown
---To double-check your work, the cell below will rerun all of the autograder tests.
###Code
grader.check_all()
###Output
_____no_output_____
###Markdown
SubmissionMake sure you have run all cells in your notebook in order before running the cell below, so that all images/graphs appear in the output. The cell below will generate a zip file for you to submit. **Please save before exporting!**
###Code
# Save your notebook first, then run this cell to export your submission.
grader.export(pdf=False)
###Output
_____no_output_____
###Markdown
Lab 4: Functions and Visualizations Welcome to Lab 4! This week, we'll learn about functions, table methods such as `apply`, and how to generate visualizations! Recommended Reading:* [Applying a Function to a Column](https://www.inferentialthinking.com/chapters/08/1/applying-a-function-to-a-column.html)* [Visualizations](https://www.inferentialthinking.com/chapters/07/visualization.html)First, set up the notebook by running the cell below.
###Code
import numpy as np
from datascience import *
# These lines set up graphing capabilities.
import matplotlib
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('fivethirtyeight')
import warnings
warnings.simplefilter('ignore', FutureWarning)
from ipywidgets import interact, interactive, fixed, interact_manual
import ipywidgets as widgets
# When you log-in please hit return (not shift + return) after typing in your email
from client.api.notebook import Notebook
ok = Notebook('lab04.ok')
_ = ok.submit()
###Output
_____no_output_____
###Markdown
1. Defining functionsLet's write a very simple function that converts a proportion to a percentage by multiplying it by 100. For example, the value of `to_percentage(.5)` should be the number 50 (no percent sign).A function definition has a few parts. `def`It always starts with `def` (short for **def**ine): def NameNext comes the name of the function. Like other names we've defined, it can't start with a number or contain spaces. Let's call our function `to_percentage`: def to_percentage SignatureNext comes something called the *signature* of the function. This tells Python how many arguments your function should have, and what names you'll use to refer to those arguments in the function's code. A function can have any number of arguments (including 0!). `to_percentage` should take one argument, and we'll call that argument `proportion` since it should be a proportion. def to_percentage(proportion) If we want our function to take more than one argument, we add a comma between each argument name.We put a colon after the signature to tell Python it's over. If you're getting a syntax error after defining a function, check to make sure you remembered the colon! def to_percentage(proportion): DocumentationFunctions can do complicated things, so you should write an explanation of what your function does. For small functions, this is less important, but it's a good habit to learn from the start. Conventionally, Python functions are documented by writing an **indented** triple-quoted string: def to_percentage(proportion): """Converts a proportion to a percentage.""" BodyNow we start writing code that runs when the function is called. This is called the *body* of the function and every line **must be indented with a tab**. Any lines that are *not* indented and left-aligned with the def statement is considered outside the function. Some notes about the body of the function:- We can write code that we would write anywhere else. - We use the arguments defined in the function signature. We can do this because we assume that when we call the function, values are already assigned to those arguments.- We generally avoid referencing variables defined *outside* the function. If you would like to reference variables outside of the function, pass them through as arguments!Now, let's give a name to the number we multiply a proportion by to get a percentage: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 `return`The special instruction `return` is part of the function's body and tells Python to make the value of the function call equal to whatever comes right after `return`. We want the value of `to_percentage(.5)` to be the proportion .5 times the factor 100, so we write: def to_percentage(proportion): """Converts a proportion to a percentage.""" factor = 100 return proportion * factor `return` only makes sense in the context of a function, and **can never be used outside of a function**. `return` is always the last line of the function because Python stops executing the body of a function once it hits a `return` statement.*Note:* `return` inside a function tells Python what value the function evaluates to. However, there are other functions, like `print`, that have no `return` value. For example, `print` simply prints a certain value out to the console. `return` and `print` are **very** different. **Question 1.1.** Define `to_percentage` in the cell below. Call your function to convert the proportion .2 to a percentage. 
Name that percentage `twenty_percent`.<!--BEGIN QUESTIONname: q11-->
###Code
def ...
''' ... '''
... = ...
return ...
twenty_percent = ...
twenty_percent
ok.grade("q11");
###Output
_____no_output_____
###Markdown
Like you’ve done with built-in functions in previous labs (max, abs, etc.), you can pass in named values as arguments to your function.**Question 1.2.** Use `to_percentage` again to convert the proportion named `a_proportion` (defined below) to a percentage called `a_percentage`.*Note:* You don't need to define `to_percentage` again! Like other named values, functions stick around after you define them.<!--BEGIN QUESTIONname: q12-->
###Code
a_proportion = 2**(.5) / 2
a_percentage = ...
a_percentage
ok.grade("q12");
###Output
_____no_output_____
###Markdown
Here's something important about functions: the names assigned *within* a function body are only accessible within the function body. Once the function has returned, those names are gone. So even if you created a variable called `factor` and defined `factor = 100` inside of the body of the `to_percentage` function and then called `to_percentage`, `factor` would not have a value assigned to it outside of the body of `to_percentage`:
###Code
# You should see an error when you run this. (If you don't, you might
# have defined factor somewhere above.)
factor
###Output
_____no_output_____
###Markdown
As we've seen with built-in functions, functions can also take strings (or arrays, or tables) as arguments, and they can return those things, too.**Question 1.3.** Define a function called `disemvowel`. It should take a single string as its argument. (You can call that argument whatever you want.) It should return a copy of that string, but with all the characters that are vowels removed. (In English, the vowels are the characters "a", "e", "i", "o", and "u".) You can use as many lines inside of the function to do this as you’d like.*Hint:* To remove all the "a"s from a string, you can use `that_string.replace("a", "")`. The `.replace` method for strings returns a new string, so you can call `replace` multiple times, one after the other. <!--BEGIN QUESTIONname: q13-->
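If the hint is unclear, here is a tiny example of chaining `.replace` calls (a sketch for illustration, not the graded answer):

    "banana".replace("a", "").replace("n", "")   # 'b'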
###Code
def disemvowel(a_string):
...
...
# An example call to your function. (It's often helpful to run
# an example call from time to time while you're writing a function,
# to see how it currently works.)
disemvowel("Can you read this without vowels?")
ok.grade("q13");
###Output
_____no_output_____
###Markdown
Calls on calls on callsJust as you write a series of lines to build up a complex computation, it's useful to define a series of small functions that build on each other. Since you can write any code inside a function's body, you can call other functions you've written.If a function is a like a recipe, defining a function in terms of other functions is like having a recipe for cake telling you to follow another recipe to make the frosting, and another to make the jam filling. This makes the cake recipe shorter and clearer, and it avoids having a bunch of duplicated frosting recipes. It's a foundation of productive programming.For example, suppose you want to count the number of characters *that aren't vowels* in a piece of text. One way to do that is this to remove all the vowels and count the size of the remaining string.**Question 1.4.** Write a function called `num_non_vowels`. It should take a string as its argument and return a number. That number should be the number of characters in the argument string that aren't vowels. You should use the `disemvowel` function you wrote above inside of the `num_non_vowels` function.*Hint:* The function `len` takes a string as its argument and returns the number of characters in it.<!--BEGIN QUESTIONname: q14-->
###Code
def num_non_vowels(a_string):
"""The number of characters in a string, minus the vowels."""
...
# Try calling your function yourself to make sure the output is what
# you expect. You can also use the interact function in the next cell if you'd like.
ok.grade("q14");
###Output
_____no_output_____
###Markdown
Functions can also encapsulate code that *displays output* instead of computing a value. For example, if you call `print` inside a function, and then call that function, something will get printed.The `movies_by_year` dataset in the textbook has information about movie sales in recent years. Suppose you'd like to display the year with the 5th-highest total gross movie sales, printed in a human-readable way. You might do this:
###Code
movies_by_year = Table.read_table("movies_by_year.csv")
rank = 5
fifth_from_top_movie_year = movies_by_year.sort("Total Gross", descending=True).column("Year").item(rank-1)
print("Year number", rank, "for total gross movie sales was:", fifth_from_top_movie_year)
###Output
_____no_output_____
###Markdown
After writing this, you realize you also wanted to print out the 2nd and 3rd-highest years. Instead of copying your code, you decide to put it in a function. Since the rank varies, you make that an argument to your function.**Question 1.5.** Write a function called `print_kth_top_movie_year`. It should take a single argument, the rank of the year (like 2, 3, or 5 in the above examples). It should print out a message like the one above. *Note:* Your function shouldn't have a `return` statement.<!--BEGIN QUESTIONname: q15-->
###Code
def print_kth_top_movie_year(k):
...
print(...)
# Example calls to your function:
print_kth_top_movie_year(2)
print_kth_top_movie_year(3)
ok.grade("q15");
# interact also allows you to pass in an array for a function argument. It will
# then present a dropdown menu of options.
_ = interact(print_kth_top_movie_year, k=np.arange(1, 10))
###Output
_____no_output_____
###Markdown
`print` is not the same as `return`The `print_kth_top_movie_year(k)` function prints the total gross movie sales for the year that was provided! However, since we did not return any value in this function, we can not use it after we call it. Let's look at an example of another function that prints a value but does not return it.
###Code
def print_number_five():
print(5)
print_number_five()
###Output
_____no_output_____
###Markdown
However, if we try to use the output of `print_number_five()`, we see that the value `5` is printed but we get a TypeError when we try to add the number 2 to it!
###Code
print_number_five_output = print_number_five()
print_number_five_output + 2
###Output
_____no_output_____
###Markdown
It may seem that `print_number_five()` is returning a value, 5. In reality, it just displays the number 5 to you without giving you the actual value! If your function prints out a value without returning it and you try to use that value, you will run into errors, so be careful!Explain to your neighbor how you might add a line of code to the `print_number_five` function (after `print(5)`) so that the code `print_number_five_output + 5` would result in the value `10`, rather than an error. 2. Functions and CEO IncomesIn this question, we'll look at the 2015 compensation of CEOs at the 100 largest companies in California. The data was compiled from a [Los Angeles Times analysis](http://spreadsheets.latimes.com/california-ceo-compensation/), and ultimately came from [filings](https://www.sec.gov/answers/proxyhtf.htm) mandated by the SEC from all publicly-traded companies. Two companies have two CEOs, so there are 102 CEOs in the dataset.We've copied the raw data from the LA Times page into a file called `raw_compensation.csv`. (The page notes that all dollar amounts are in **millions of dollars**.)
###Code
raw_compensation = Table.read_table('raw_compensation.csv')
raw_compensation
###Output
_____no_output_____
###Markdown
We want to compute the average of the CEOs' pay. Try running the cell below.
###Code
np.average(raw_compensation.column("Total Pay"))
###Output
_____no_output_____
###Markdown
You should see a TypeError. Let's examine why this error occurred by looking at the values in the `Total Pay` column. **Question 2.1.** Use the `type` function and set `total_pay_type` to the type of the first value in the "Total Pay" column.<!--BEGIN QUESTIONname: q21-->
###Code
total_pay_type = ...
total_pay_type
ok.grade("q21");
###Output
_____no_output_____
###Markdown
**Question 2.2.** You should have found that the values in the `Total Pay` column are strings. It doesn't make sense to take the average of string values, so we need to convert them to numbers if we want to do this. Extract the first value in `Total Pay`. It's Mark Hurd's pay in 2015, in *millions* of dollars. Call it `mark_hurd_pay_string`.<!--BEGIN QUESTIONname: q22-->
###Code
mark_hurd_pay_string = ...
mark_hurd_pay_string
ok.grade("q22");
###Output
_____no_output_____
###Markdown
**Question 2.3.** Convert `mark_hurd_pay_string` to a number of *dollars*. Some hints, as this question requires multiple steps:- The string method `strip` will be useful for removing the dollar sign; it removes a specified character from the start or end of a string. For example, the value of `"100%".strip("%")` is the string `"100"`. - You'll also need the function `float`, which converts a string that looks like a number to an actual number. - Finally, remember that the answer should be in dollars, not millions of dollars.<!--BEGIN QUESTIONname: q23-->
###Code
mark_hurd_pay = ...
mark_hurd_pay
ok.grade("q23");
###Output
_____no_output_____
###Markdown
To compute the average pay, we need to do this for every CEO. But that looks like it would involve copying this code 102 times.This is where functions come in. First, we'll define a new function, giving a name to the expression that converts "total pay" strings to numeric values. Later in this lab, we'll see the payoff: we can call that function on every pay string in the dataset at once.The next section of this lab explains how to define a function. For now, just fill in the ellipses in the cell below.**Question 2.4.** Copy the expression you used to compute `mark_hurd_pay`, and use it as the return expression of the function below. But make sure you replace the specific `mark_hurd_pay_string` with the generic `pay_string` name specified in the first line in the `def` statement.*Hint*: When dealing with functions, you should generally not be referencing any variable outside of the function. Usually, you want to be working with the arguments that are passed into it, such as `pay_string` for this function. If you're using `mark_hurd_pay_string` within your function, you're referencing an outside variable!<!--BEGIN QUESTIONname: q24-->
###Code
def convert_pay_string_to_number(pay_string):
"""Converts a pay string like '$100' (in millions) to a number of dollars."""
...
ok.grade("q24");
###Output
_____no_output_____
###Markdown
Running that cell doesn't convert any particular pay string. Instead, it creates a function called `convert_pay_string_to_number` that can convert *any* string with the right format to a number representing millions of dollars.We can call our function just like we call the built-in functions we've seen. It takes one argument -- a string -- and it returns a float.
###Code
convert_pay_string_to_number('$42')
convert_pay_string_to_number(mark_hurd_pay_string)
# We can also compute Safra Catz's pay in the same way:
convert_pay_string_to_number(raw_compensation.where("Name", are.containing("Safra")).column("Total Pay").item(0))
###Output
_____no_output_____
###Markdown
So, what have we gained by defining the `convert_pay_string_to_number` function? Well, without it, we'd have to copy the code `10**6 * float(some_pay_string.strip("$"))` each time we wanted to convert a pay string. Now we just call a function whose name says exactly what it's doing. 3. `apply`ing functionsDefining a function is a lot like giving a name to a value with `=`. In fact, a function is a value just like the number 1 or the text "data"!For example, we can make a new name for the built-in function `max` if we want:
###Code
our_name_for_max = max
our_name_for_max(2, 6)
###Output
_____no_output_____
###Markdown
The old name for `max` is still around:
###Code
max(2, 6)
###Output
_____no_output_____
###Markdown
Try just writing `max` or `our_name_for_max` (or the name of any other function) in a cell, and run that cell. Python will print out a (very brief) description of the function.
###Code
max
###Output
_____no_output_____
###Markdown
Now try writing `?max` or `?our_name_for_max` (or the name of any other function) in a cell, and run that cell. An information box should show up at the bottom of your screen with a longer description of the function.
###Code
?our_name_for_max
###Output
_____no_output_____
###Markdown
Let's look at what happens when we set `max` to a non-function value. You'll notice that a TypeError will occur when you try calling `max`. Things like integers and strings are not callable. Look out for any functions that might have been renamed when you encounter this type of error.
###Code
max = 6
max(2, 6)
# This cell resets max to the built-in function. Just run this cell, don't change its contents
import builtins
max = builtins.max
###Output
_____no_output_____
###Markdown
Why is this useful? Since functions are just values, it's possible to pass them as arguments to other functions. Here's a simple but not-so-practical example: we can make an array of functions.
###Code
make_array(max, np.average, are.equal_to)
###Output
_____no_output_____
###Markdown
**Question 3.1.** Make an array containing any 3 other functions you've seen. Call it `some_functions`.<!--BEGIN QUESTIONname: q31-->
###Code
some_functions = ...
some_functions
ok.grade("q31");
###Output
_____no_output_____
###Markdown
Working with functions as values can lead to some funny-looking code. For example, see if you can figure out why the following code works. Check your explanation with a neighbor or a staff member.
###Code
make_array(max, np.average, are.equal_to).item(0)(4, -2, 7)
###Output
_____no_output_____
###Markdown
A more useful example of passing functions to other functions as arguments is the table method `apply`.`apply` calls a function many times, once on *each* element in a column of a table. It produces an *array* of the results. Here we use `apply` to convert every CEO's pay to a number, using the function you defined:
###Code
raw_compensation.apply(convert_pay_string_to_number, "Total Pay")
###Output
_____no_output_____
###Markdown
Here's an illustration of what that did: Note that we didn't write `raw_compensation.apply(convert_pay_string_to_number(), "Total Pay")` or `raw_compensation.apply(convert_pay_string_to_number("Total Pay"))`. We just passed the name of the function, with no parentheses, to `apply`, because all we want to do is let `apply` know the name of the function we'd like to use and the name of the column we'd like to use it on. `apply` will then call the function `convert_pay_string_to_number` on each value in the column for us!**Question 3.2.** Using `apply`, make a table that's a copy of `raw_compensation` with one additional column called `Total Pay ($)`. That column should contain the result of applying `convert_pay_string_to_number` to the `Total Pay` column (as we did above). Call the new table `compensation`.<!--BEGIN QUESTIONname: q32-->
###Code
compensation = raw_compensation.with_column(
"Total Pay ($)",
...
)
compensation
ok.grade("q32");
###Output
_____no_output_____
###Markdown
Now that we have all the pays as numbers, we can learn more about them through computation.**Question 3.3.** Compute the average total pay of the CEOs in the dataset.<!--BEGIN QUESTIONname: q33-->
###Code
average_total_pay = ...
average_total_pay
ok.grade("q33");
###Output
_____no_output_____
###Markdown
**Question 3.4.** Companies pay executives in a variety of ways: in cash, by granting stock or other equity in the company, or with ancillary benefits (like private jets). Compute the proportion of each CEO's pay that was cash. (Your answer should be an array of numbers, one for each CEO in the dataset.)*Note:* When you answer this question, you'll encounter a red box appearing below your code cell that says something like `RuntimeWarning: invalid value encountered in true_divide`. Don't worry too much about the message. Warnings are raised by Python when it encounters an unusual condition in your code, but the condition is not severe enough to warrant throwing an error. The warning below is Python's cryptic way of telling you that you're dividing a number by zero. If you extract the values in `Total Pay ($)` as an array, you'll see that the last element is 0.<!--BEGIN QUESTIONname: q34-->
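As a small aside, here is a toy sketch (with made-up numbers, not the dataset) of the warning described above: dividing 0 by 0 element-wise gives `nan` and a `RuntimeWarning`, but NumPy keeps going instead of raising an error.
###Code
# Toy example only: the second element divides 0.0 by 0.0, producing nan and a warning.
import numpy as np
np.array([5.0, 0.0]) / np.array([10.0, 0.0])
###Output
_____no_output_____
###Markdown
Now compute the cash proportions in the cell below.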
###Code
cash_proportion = ...
cash_proportion
ok.grade("q34");
###Output
_____no_output_____
###Markdown
Check out the `% Change` column in `compensation`. It shows the percentage increase in the CEO's pay from the previous year. For CEOs with no previous year on record, it instead says "(No previous year)". The values in this column are *strings*, not numbers, so like the `Total Pay` column, it's not usable without a bit of extra work.Given your current pay and the percentage increase from the previous year, you can compute your previous year's pay. For example, if your pay is $\$120$ this year, and that's an increase of 50% from the previous year, then your previous year's pay was $\frac{\$120}{1 + \frac{50}{100}}$, or \$80.**Question 3.5.** Create a new table called `with_previous_compensation`. It should be a copy of `compensation`, but with the "(No previous year)" CEOs filtered out, and with an extra column called `2014 Total Pay ($)`. That column should have each CEO's pay in 2014.*Hint 1:* You can print out your results after each step to make sure you're on the right track.*Hint 2:* We've provided a structure that you can use to get to the answer. However, if it's confusing, feel free to delete the current structure and approach the problem your own way!<!--BEGIN QUESTIONname: q35-->
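Here is a minimal sketch of that arithmetic with made-up numbers (not the dataset), just to check the formula:
###Code
# Hypothetical values for illustration: recover last year's pay from this year's
# pay and the percent increase over last year.
current_pay = 120
percent_increase = 50
previous_pay_example = current_pay / (1 + percent_increase / 100)
previous_pay_example
###Output
_____no_output_____
###Markdown
The same division, applied element-wise to the percent-change values, is what the skeleton below asks for.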
###Code
# Definition to turn percent to number
def percent_string_to_num(percent_string):
"""Converts a percentage string to a number."""
return ...
# Compensation table where there is a previous year
having_previous_year = ...
# Get the percent changes as numbers instead of strings
# We're still working off the table having_previous_year
percent_changes = ...
# Calculate the previous year's pay
# We're still working off the table having_previous_year
previous_pay = ...
# Put the previous pay column into the having_previous_year table
with_previous_compensation = ...
with_previous_compensation
ok.grade("q35");
###Output
_____no_output_____
###Markdown
**Question 3.6.** What was the average pay of these CEOs in 2014?<!--BEGIN QUESTIONname: q36-->
###Code
average_pay_2014 = ...
average_pay_2014
ok.grade("q36");
###Output
_____no_output_____
###Markdown
**Why is `apply` useful?**For operations like arithmetic, or the functions in the NumPy library, you don't need to use `apply`, because they automatically work on each element of an array. But there are many things that don't. The string manipulation we did in today's lab is one example. Since you can write any code you want in a function, `apply` gives you total control over how you operate on data. 4. HistogramsEarlier, we computed the average pay among the CEOs in our 102-CEO dataset. The average doesn't tell us everything about the amounts CEOs are paid, though. Maybe just a few CEOs make the bulk of the money, even among these 102.We can use a *histogram* method to display the *distribution* of a set of numbers. The table method `hist` takes a single argument, the name of a column of numbers. It produces a histogram of the numbers in that column.**Question 4.1.** Make a histogram of the total pay of the CEOs in `compensation`. Check with your neighbor or a staff member to make sure you have the right plot.
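As a quick illustration of the method (using a hypothetical table of made-up numbers, and assuming the `datascience` module used throughout this lab), a call might look like this:
###Code
# Sketch only: build a tiny table and draw a histogram of its single numerical column.
from datascience import Table
Table().with_column("Pay ($)", [3, 5, 5, 8, 21, 40]).hist("Pay ($)")
###Output
_____no_output_____
###Markdown
Now make the histogram for the real `compensation` table in the cell below.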
###Code
...
###Output
_____no_output_____
###Markdown
**Question 4.2.** How many CEOs made more than $30 million in total pay? Find the value using code, then check that the value you found is consistent with what you see in the histogram.*Hint:* Use the table method `where` and the property `num_rows`.<!--BEGIN QUESTIONname: q42-->
###Code
num_ceos_more_than_30_million_2 = ...
num_ceos_more_than_30_million_2
ok.grade("q42");
###Output
_____no_output_____
###Markdown
Great job! You're finished with lab 4! Be sure to...* **run all the tests** (the next cell has a shortcut for that),* **Save and Checkpoint** from the File menu,* **download the .ipynb file and submit on Gradescope**,
###Code
# For your convenience, you can run this cell to run all the tests at once!
import os
_ = [ok.grade(q[:-3]) for q in os.listdir("tests") if q.startswith('q')]
_ = ok.submit()
###Output
_____no_output_____ |
.ipynb_checkpoints/A- Mapping health research effort - NbPatients-checkpoint.ipynb | ###Markdown
Mapping health research effort - RCTs-------------------------------------Database: 1. All RCTs registered at WHO ICTRP by Jan 1st 2016, 2. with start date between 2006 and 2015 3. with study type and design corresponding to RCT 4. with at least one country location among the 187 countries included in the GBD2010 study 5. with sample size information and sample size between 10 and 150,000We will: 1. Compute general numbers of patients across regions and over time 2. Create replicates of the mapping of patients across diseases and evaluate the uncertainty intervals of the local share of patients across diseases within regions
###Code
#Upload database
data <- read.table("/media/igna/Elements/HotelDieu/Cochrane/Mapping_Cancer/Flowchart/database_all_diseases_final_ok.txt")
data <- data[!is.na(data$Sample),]
data <- data[data$Sample>=10 & data$Sample<=150000,]
N <- nrow(data)
names(data)
###Output
_____no_output_____
###Markdown
- TrialID: unique trial ID from WHOICTRP- Regions: 7 epidemiological regions from GBD 2010 study- GBD28: classification according to 28 categories defined in Atal et al. BMC Bioinformatics (2016): This classification includes the injuries category, we exclude it
###Code
#Upload traduction names/label categories
Mgbd <- read.table("/home/igna/Desktop/Programs GBD/Classifier_Trial_GBD/Databases/Taxonomy_DL/GBD_data/GBD_ICD.txt")
grep("Injur",Mgbd$cause_name)
#We supress from GBD28 the label 28
GBD27 <- sapply(strsplit(as.character(data$GBD28),"&"),function(x){paste(x[x!="28"],collapse="&")})
data$GBD27 <- GBD27
#Number of trials relevant to the burden of diseases
table(GBD27=="")
###Output
_____no_output_____
###Markdown
1- Number Patients per region and over time
###Code
regs <- sort(unique(unlist(strsplit(as.character(data$Regions),"&"))))
nb_ctrs <- lapply(strsplit(as.character(data$Nb_ctr_per_reg),'&'),as.numeric)
RGs <-strsplit(as.character(data$Regions),'&')
pats <- data.frame(TrialID = rep(data$TrialID,sapply(nb_ctrs,length)),
Nb_ctrs = unlist(nb_ctrs),
Region = unlist(RGs),
Tot_sample = rep(data$Sample,sapply(nb_ctrs,length)))
pats$tot_ctrs <- rep(sapply(nb_ctrs,sum),sapply(nb_ctrs,length))
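# split each trial's total sample size across its regions, proportionally to the number of centres per region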
pats$sample_per_reg <- pats$Tot_sample*pats$Nb_ctrs/pats$tot_ctrs
Lgbd <- lapply(as.character(data$GBD27),function(x){as.numeric(unlist(strsplit(x,"&")))})
tot <- tapply(pats$sample_per_reg,pats$Region,sum)
tot
sum(tot)
sum(data$Sample)
#Distribution sample sizes
summary(data$Sample)
spl_qt <- c(10,20,40,60,100,200,400,1000,2000,10000,20000,100000,200000)
data$Sple_cl <- cut(data$Sample,spl_qt,right=F)
data$Sple_cl <- as.character(data$Sple_cl)
#Base$IF_classe<-factor(Base$IF_classe,levels=c("[10,56)","[5,10)","[0,5)","No IF"),labels=c("IF greater or equal than 10","IF between 5 and 10","IF less than 5","No IF"))
data$Sple_cl <- as.factor(data$Sple_cl)
library(gdata)
levels(data$Sple_cl)
data$Sple_cl <- reorder(data$Sple_cl,new.order=c('[10,20)',
'[20,40)',
'[40,60)',
'[60,100)',
'[100,200)',
'[200,400)',
'[400,1e+03)',
'[1e+03,2e+03)',
'[2e+03,1e+04)',
'[1e+04,2e+04)',
'[2e+04,1e+05)',
'[1e+05,2e+05)'
))
barplot(table(data$Sple_cl))
DRY <- do.call('cbind',tapply(regs,data$year,function(x){table(unlist(x))}))
DRY <- DRY[order(apply(DRY,1,sum)),]
DRY
barplot(DRY[rownames(DRY)!="High-income",])
###Output
_____no_output_____
###Markdown
2- Estimation of the number of RCTs per region and diseaseFor each disease, we simulate what the mapping of RCTs across regions would have been if the misclassification of RCTs into groups of diseases were corrected, given the sensitivities and specificities of the classifier for identifying each group of diseases.To estimate the performance of the classifier for each group of diseases, we have a test set of 2,763 trials manually classified according to the 27-class grouping of diseases used in this work. The test set is described in Atal et al. BMC Bioinformatics (2016).The method is based on the approach presented in Fox et al. Int J Epidemiol 2005.For each disease for which we found a local research gap we will:1. Calculate the sensitivity and specificity of the classifier for identifying the disease and for identifying other studies relevant to the burden of diseases, together with the numbers of successes and numbers of trials needed to derive beta distributions2. Repeat the following simulation N=10k times * Randomly draw a sensitivity and specificity from the beta distributions, both for identifying the disease and for identifying another disease (assuming no correlation between sensitivity and specificity, nor between the disease and the other-disease classifications) * Derive Positive and Negative Predictive Values (PPV and NPV) for each * Simulate the correction of the classification based on the PPVs and NPVs * Derive the proportion of RCTs concerning the disease among all RCTs concerning the burden of diseases in the region3. Derive the 95% upper-bound simulation interval of the proportion of RCTs concerning the disease among all RCTs concerning the burden of diseases Construction of replicates
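As a sketch of the algebra implemented in the code below (our reading of the Fox et al. approach, written with the names used in the code): if $a$ trials out of $N$ are classified as concerning the disease, and a sensitivity $se$ and specificity $sp$ are drawn from their beta distributions, the corrected count of trials truly concerning the disease is estimated as $A = \frac{a - (1-sp)N}{se - (1-sp)}$, and the predictive values used to reclassify individual trials are $PPV = \frac{se \cdot A}{se \cdot A + (1-sp)(N-A)}$ and $NPV = \frac{sp \cdot (N-A)}{sp \cdot (N-A) + (1-se)A}$.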
###Code
regs <- sort(unique(unlist(strsplit(as.character(data$Regions),"&"))))
LR <- lapply(regs,function(x){1:nrow(data)%in%grep(x,data$Regions)})
LR <- do.call('cbind',LR)
Lgbd <- lapply(as.character(data$GBD28),function(x){as.numeric(unlist(strsplit(x,"&")))})
Lgbd <- lapply(Lgbd,function(x){x[x!=28]})
PERF <- read.csv('Tables/Performances_per_27disease_data.csv')
NK <- 10000
set.seed(7212)
#For all diseases, we will simulate the mapping across regions of trials concerning
#the disease or concerning other diseases
dis <- 1:27
#For each disease
t0 <- proc.time()
for(g in dis){
PERF_g <- PERF[PERF$dis==g,]
#which trials concern the disease
is_dis <- sapply(Lgbd,function(x){g%in%x})
#which trials concern another disease
is_oth <- sapply(Lgbd,function(x){sum(setdiff(1:27,g)%in%x)>0})
#PPV et NPVs for finding the disease
sens_r <- PERF_g$TP_Dis
sens_n <- PERF_g$TP_Dis + PERF_g$FN_Dis
spec_r <- PERF_g$TN_Dis
spec_n <- PERF_g$TN_Dis + PERF_g$FP_Dis
sens <- rbeta(NK,sens_r+1,sens_n-sens_r+1)
spec <- rbeta(NK,spec_r+1,spec_n-spec_r+1)
a_dis <- sum(is_dis)
b_dis <- N-a_dis
As <- (a_dis-(1-spec)*N)/(sens - (1-spec))
Bs <- N-As
T1 <- sens*As
T0 <- spec*Bs
F1 <- (1-spec)*Bs
F0 <- (1-sens)*As
PPV_dis <- T1/(T1+F1)
NPV_dis <- T0/(T0+F0)
#PPV and NPVs for finding another disease
sens_r <- PERF_g$TP_Oth
sens_n <- PERF_g$TP_Oth + PERF_g$FN_Oth
spec_r <- PERF_g$TN_Oth
spec_n <- PERF_g$TN_Oth + PERF_g$FP_Oth
sens <- rbeta(NK,sens_r+1,sens_n-sens_r+1)
spec <- rbeta(NK,spec_r+1,spec_n-spec_r+1)
a_oth <- sum(is_oth)
b_oth <- N-a_oth
As <- (a_oth-(1-spec)*N)/(sens - (1-spec))
Bs <- N-As
T1 <- sens*As
T0 <- spec*Bs
F1 <- (1-spec)*Bs
F0 <- (1-sens)*As
PPV_oth <- T1/(T1+F1)
NPV_oth <- T0/(T0+F0)
#Some values of sens and spec may lead to impossible values of PPV or NPV (>1 or <0)
#We supress and count them. If the total of suppressed iterations is higher than 10% of total iterations we
#will modify the distributions for Specificity and Sensitivity
false_it <- PPV_dis<0 | PPV_dis>1 |
NPV_dis<0 | NPV_dis>1 |
PPV_oth<0 | PPV_oth>1 |
NPV_oth<0 | NPV_oth>1
print(paste(c(g,"has",sum(false_it),"suppressed false iterations"
),collapse=" "))
PPV_dis <- PPV_dis[!false_it]
NPV_dis <- NPV_dis[!false_it]
PPV_oth <- PPV_oth[!false_it]
NPV_oth <- NPV_oth[!false_it]
L <- list()
#Simulation: reclassifying each trial
for(k in 1:length(PPV_dis)){
AR <- matrix(0, nrow=length(regs)+1, ncol=2)
tp_dis <- runif(a_dis)
tn_dis <- runif(b_dis)
recl_dis <- is_dis
recl_dis[recl_dis==TRUE][tp_dis>PPV_dis[k]] <- FALSE
recl_dis[recl_dis==FALSE][tn_dis>NPV_dis[k]] <- TRUE
#Rq: we count all trials (even those with more than 3 diseases)
#it is a conservative choice
rt <- as.numeric(recl_dis)
if(sum(recl_dis)==0) AR[,1] <- c(rep(0,length(regs)+1))
else{ if(sum(recl_dis)==1) AR[,1] <- c(as.numeric(LR[recl_dis,]),1)
else AR[,1] <- c(apply(LR[recl_dis,],2,sum),sum(recl_dis))
}
#Oth_dis
tp_oth <- runif(a_oth)
tn_oth <- runif(b_oth)
recl_oth <- is_oth
recl_oth[recl_oth==TRUE][tp_oth>PPV_oth[k]] <- FALSE
recl_oth[recl_oth==FALSE][tn_oth>NPV_oth[k]] <- TRUE
rt <- rt + as.numeric(recl_oth)
if(sum(rt)==0) AR[,2] <- c(rep(0,length(regs)+1))
else{ if(sum(rt)==1) AR[,2] <- c(as.numeric(LR[rt!=0,]),1)
else AR[,2] <- c(apply(LR[rt!=0,],2,sum),sum(rt))
}
L[[k]] <- AR
}
T <- do.call('rbind',L)
write.table(T,paste(c("/media/igna/Elements/HotelDieu/Cochrane/Mapping_Cancer/Incertitude_mapping/Simulations/Total_simulation_",as.character(PERF_g$dis),".txt"),collapse=""))
}
t1 <- proc.time()
print(t1-t0)/60
###Output
[1] "1 has 90 suppressed false iterations"
[1] "2 has 0 suppressed false iterations"
[1] "3 has 0 suppressed false iterations"
[1] "4 has 3 suppressed false iterations"
[1] "5 has 0 suppressed false iterations"
[1] "6 has 0 suppressed false iterations"
[1] "7 has 5 suppressed false iterations"
[1] "8 has 0 suppressed false iterations"
[1] "9 has 1019 suppressed false iterations"
[1] "10 has 0 suppressed false iterations"
[1] "11 has 3108 suppressed false iterations"
[1] "12 has 0 suppressed false iterations"
[1] "13 has 0 suppressed false iterations"
[1] "14 has 0 suppressed false iterations"
[1] "15 has 3 suppressed false iterations"
[1] "16 has 0 suppressed false iterations"
[1] "17 has 0 suppressed false iterations"
[1] "18 has 0 suppressed false iterations"
[1] "19 has 0 suppressed false iterations"
[1] "20 has 0 suppressed false iterations"
[1] "21 has 3355 suppressed false iterations"
[1] "22 has 0 suppressed false iterations"
[1] "23 has 1324 suppressed false iterations"
[1] "24 has 0 suppressed false iterations"
[1] "25 has 0 suppressed false iterations"
[1] "26 has 0 suppressed false iterations"
[1] "27 has 9347 suppressed false iterations"
user system elapsed
17872.129 19.203 17911.586
###Markdown
It took 5h
###Code
# Diseases with more than 10% of suppressed iterations:
Mgbd$cause_name[c(9,11,21,23,27)]
###Output
_____no_output_____
###Markdown
We will re-simulate only for diseases corresponding to more than 1% of local burden in a region
###Code
set.seed(7212)
g <- 0
#For all diseases, we estimate number of RCTs relevant to the burden
PERF_g <- PERF[PERF$dis==0,]
#which trials are relevant to the burden
is_dis <- sapply(Lgbd,length)==1
#PPV et NPVs for finding the disease
sens_r <- PERF_g$TP_Dis
sens_n <- PERF_g$TP_Dis + PERF_g$FN_Dis
spec_r <- PERF_g$TN_Dis
spec_n <- PERF_g$TN_Dis + PERF_g$FP_Dis
sens <- rbeta(NK,sens_r+1,sens_n-sens_r+1)
spec <- rbeta(NK,spec_r+1,spec_n-spec_r+1)
a_dis <- sum(is_dis)
b_dis <- N-a_dis
As <- (a_dis-(1-spec)*N)/(sens - (1-spec))
Bs <- N-As
T1 <- sens*As
T0 <- spec*Bs
F1 <- (1-spec)*Bs
F0 <- (1-sens)*As
PPV_dis <- T1/(T1+F1)
NPV_dis <- T0/(T0+F0)
false_it <- PPV_dis<0 | PPV_dis>1 |
NPV_dis<0 | NPV_dis>1
print(paste(c(g,"has",sum(false_it),"suppressed false iterations"
),collapse=" "))
PPV_dis <- PPV_dis[!false_it]
NPV_dis <- NPV_dis[!false_it]
L <- data.frame()
#Simulation: reclassifying each trial
for(k in 1:length(PPV_dis)){
AR <- rep(0,length(regs)+1)
tp_dis <- runif(a_dis)
tn_dis <- runif(b_dis)
recl_dis <- is_dis
recl_dis[recl_dis==TRUE][tp_dis>PPV_dis[k]] <- FALSE
recl_dis[recl_dis==FALSE][tn_dis>NPV_dis[k]] <- TRUE
if(sum(recl_dis)==0) AR <- c(rep(0,length(regs)+1))
else{ if(sum(recl_dis)==1) AR <- c(as.numeric(LR[recl_dis,]),1)
else AR <- c(apply(LR[recl_dis,],2,sum),sum(recl_dis))
}
L <- rbind(L,AR)
}
write.table(L,paste(c("/media/igna/Elements/HotelDieu/Cochrane/Mapping_Cancer/Incertitude_mapping/Simulations/Total_simulation_",as.character(PERF_g$dis),".txt"),collapse=""))
t1 <- proc.time()-t0
t1/60
###Output
[1] "0 has 0 suppressed false iterations"
###Markdown
Deriving 95% uncertainty intervals
###Code
SM <- data.frame(Region = rep(c(regs,"All"),each=nrow(Mgbd)+1),
Disease = rep(c(as.character(Mgbd$cause_name),"All"),times=length(regs)+1))
SM$SimMn_NbRCTs <- NA
SM$SimMed_NbRCTs <- NA
SM$Sim95low_NbRCTs <- NA
SM$Sim95up_NbRCTs <- NA
SM$SimMn_PrRCTs <- NA
SM$SimMed_PrRCTs <- NA
SM$Sim95low_PrRCTs <- NA
SM$Sim95up_PrRCTs <- NA
for(g in dis){
T <- tryCatch(read.table(paste(c("/media/igna/Elements/HotelDieu/Cochrane/Mapping_Cancer/Incertitude_mapping/Simulations/Total_simulation_",
as.character(g),".txt"),collapse="")),error=NULL)
if(length(T)!=0){
#Mean, median and 95% uncertainty interval for the number of RCTs
M <- matrix(T[,1],ncol=8,byrow=TRUE)
SM$Sim95up_NbRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.975)})
SM$Sim95low_NbRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.025)})
SM$SimMed_NbRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.5)})
SM$SimMn_NbRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,mean)
#Mean and 95% upper-bound proportion of RCTs by simulation
M <- matrix(T[,1]/T[,2],ncol=8,byrow=TRUE)
SM$Sim95up_PrRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.975)})
SM$Sim95low_PrRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.025)})
SM$SimMed_PrRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,function(x){quantile(x,0.5)})
SM$SimMn_PrRCTs[SM$Disease==as.character(Mgbd$cause_name[g])] <- apply(M,2,mean)
}
}
#All diseases
g <- 0
T <- tryCatch(read.table(paste(c("/media/igna/Elements/HotelDieu/Cochrane/Mapping_Cancer/Incertitude_mapping/Simulations/Total_simulation_",
as.character(g),".txt"),collapse="")),error=NULL)
SM$Sim95up_NbRCTs[SM$Disease=="All"] <- apply(T,2,function(x){quantile(x,0.975)})
SM$Sim95low_NbRCTs[SM$Disease=="All"] <- apply(T,2,function(x){quantile(x,0.025)})
SM$SimMed_NbRCTs[SM$Disease=="All"] <- apply(T,2,function(x){quantile(x,0.5)})
SM$SimMn_NbRCTs[SM$Disease=="All"] <- apply(T,2,mean)
SM[SM$Dis=="All",]
write.table(SM,'Data/Simulations_Alldis_NbProp_MedMn95Int_RCTs.txt')
###Output
_____no_output_____ |
week_03/inclass/.ipynb_checkpoints/W3_InClass_Earthquakes_instructor-checkpoint.ipynb | ###Markdown
Earthquakes and Tectonic Plate Boundaries**Our goals for today:**- Review topography and seafloor age.- Load and visualize an earthquake catalog.- Plot histograms of earthquake magnitude and depth.- Think about these data in terms of plate tectonics. SetupRun this cell as it is to setup your environment and login to okpy.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import cm
from cartopy import config
import cartopy.crs as ccrs
from client.api.notebook import Notebook
ok = Notebook('ic03.ok')
_ = ok.auth(inline=True,force=True)
###Output
_____no_output_____
###Markdown
Review topography looking at North AtlanticLet us use the `Robinson` projection to look at the large underwater mountain range of the Mid-Atlantic ridge.
###Code
lats = np.loadtxt('../../week_02/inclass/data/etopo20lats.txt')
lons = np.loadtxt('../../week_02/inclass/data/etopo20lons.txt')
topo_grid = np.loadtxt('../../week_02/inclass/data/etopo20data.txt')
# repeat the array of latitudes 1081 times
latitude = np.repeat(lats,1081)
# reshape that (583740,1) element array to (540,1081)
lat_grid = latitude.reshape(540,1081)
# repeat the array of longtitudes 540 times as rows
lon_grid = np.tile(lons,(540,1))
plt.figure(1,(10,10))
ax = plt.axes(projection=ccrs.Robinson(central_longitude=-40.0))
ax.set_global()
plt.contourf(lon_grid, lat_grid, topo_grid,100,vmax=0,vmin=-6000,
cmap=cm.viridis,transform=ccrs.PlateCarree())
ax.coastlines()
ax.gridlines()
color_ax = plt.axes([0.95, 0.3, 0.05, 0.35])
plt.colorbar(cax=color_ax)
plt.title('elevation/depth (m)');
plt.show()
###Output
_____no_output_____
###Markdown
Review seafloor age looking at North Atlantic
###Code
seafloor_age_data = pd.read_csv('../../week_02/assignment/data/age.csv')
age_longitude = np.asarray(seafloor_age_data['longitude'])
age_latitude = np.asarray(seafloor_age_data['latitude'])
age = np.asarray(seafloor_age_data['age_Ma'])
age_grid = age.reshape((901,1801))
age_long_grid = age_longitude.reshape((901,1801))
age_lat_grid = age_latitude.reshape((901,1801))
plt.figure(1,(10,10))
ax = plt.axes(projection=ccrs.Robinson(central_longitude=-40.0))
ax.set_global()
plt.contourf(age_long_grid, age_lat_grid, age_grid,30,
cmap=cm.magma_r,vmax=200,transform=ccrs.PlateCarree())
ax.coastlines()
ax.gridlines()
color_ax = plt.axes([0.95, 0.3, 0.05, 0.35])
plt.colorbar(cax=color_ax)
plt.title('Age, Myr');
plt.show()
###Output
_____no_output_____
###Markdown
What patterns do you observe? Where is the youngest seafloor in relation to the seafloor ridges we observed in our map of topography? Where is the oldest seafloor? Load the Earthquake Catalog Go to https://earthquake.usgs.gov/earthquakes/search/Download a .csv data file of all the earthquakes of magnitude 5.0 and higher from the past 10 years. To get a .csv, rather than a map, click on output options. Alternatively, you could use the USGS API to access the data as we did in the first in-class period by modifying this URL with the right dates:https://earthquake.usgs.gov/fdsnws/event/1/query.csv?starttime=2010-09-15%2000:00:00&endtime=2020-09-14%2023:59:59&minmagnitude=4.5&orderby=magnitude
###Code
start_day = '2010-09-15'
end_day = '2020-09-11'
standard_url = 'https://earthquake.usgs.gov/fdsnws/event/1/query?format=csv&orderby=magnitude'
query_url = standard_url + '&starttime=' + start_day + '&endtime=' + end_day + '&minmagnitude=5.0'
EQ_data = pd.read_csv(query_url)
EQ_data.head()
###Output
_____no_output_____
###Markdown
Recall from the homework that Pandas dataframe columns can be accessed using bracket notation with the name of the column as a string:
###Code
EQ_data['mag']
###Output
_____no_output_____
###Markdown
Largest Earthquake in CatalogWhat is the largest magnitude earthquake in our catalog?**_Code for you to write_**Use the `np.max()` function on the `EQ_data['mag']` column to answer this question in the code block below.
###Code
np.max(EQ_data['mag'])
###Output
_____no_output_____
###Markdown
Write the magnitude in this cell: Determining when and where the largest Earthquake happenedTo determine when this earthquake happened we need to find the data associated with this magnitude event. Pandas has really nice filtering functions built in. They take a while to get comfortable with, but can help us answer this question. Define a variable `largest_magnitude` that is the largest magnitude and then execute the cell below to get the date and time.
###Code
largest_magnitude = np.max(EQ_data['mag'])
largest_magnitude
largest_eq_date = EQ_data['time'][EQ_data['mag'] == largest_magnitude]
print(largest_eq_date)
###Output
0 2011-03-11T05:46:24.120Z
Name: time, dtype: object
###Markdown
To determine where the earthquake happened we can use similar filtering. Replace the xxx with a conditional statement to get the latitude and longitude:
###Code
largest_eq_lon = EQ_data['longitude'][EQ_data['mag'] == largest_magnitude]
largest_eq_lat = EQ_data['latitude'][EQ_data['mag'] == largest_magnitude]
###Output
_____no_output_____
###Markdown
Let's plot a red square at the location of the largest earthquake in our catalog. To the `plt.scatter` function, add `s=100` to adjust the size of the marker. Add `color='red'` to change the color. Add `marker='s'` to make it a square. Colors can be specified as detailed here: https://matplotlib.org/2.0.2/api/colors_api.html (html color names work: https://www.w3schools.com/colors/colors_names.asp).Marker options are:```markers = {'.': 'point', ',': 'pixel', 'o': 'circle', 'v': 'triangle_down', '^': 'triangle_up', '<': 'triangle_left', '>': 'triangle_right', '1': 'tri_down', '2': 'tri_up', '3': 'tri_left', '4': 'tri_right', '8': 'octagon', 's': 'square', 'p': 'pentagon', '*': 'star', 'h': 'hexagon1', 'H': 'hexagon2', '+': 'plus', 'x': 'x', 'D': 'diamond', 'd': 'thin_diamond', '|': 'vline', '_': 'hline', 'P': 'plus_filled', 'X': 'x_filled', 0: 'tickleft', 1: 'tickright', 2: 'tickup', 3: 'tickdown', 4: 'caretleft', 5: 'caretright', 6: 'caretup', 7: 'caretdown', 8: 'caretleftbase', 9: 'caretrightbase', 10: 'caretupbase', 11: 'caretdownbase', 'None': 'nothing', None: 'nothing', ' ': 'nothing', '': 'nothing'}```
###Code
plt.figure(1,(15,15))
ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
plt.scatter(largest_eq_lon,largest_eq_lat,color='red',marker='s',s=100,transform=ccrs.PlateCarree())
plt.title('2011 Tōhoku earthquake')
ax.coastlines()
ax.stock_img()
ax.gridlines()
plt.show()
###Output
_____no_output_____
###Markdown
**_Discussion question:_** *What were the effects of this earthquake?*https://youtu.be/lyqUhAq3oWo Plot histogram of Earthquake MagnitudeHow often do large earthquakes occur? To start addressing this question, let's plot a histogram of earthquake magnitudes.**_Code for you to write_**You have made a histogram before (such as in class last week) so go and ahead and write the code to make one in the cell below that plots up the `EQ_data['mag']`.
###Code
plt.hist(EQ_data['mag'],bins=10,label='mag')
plt.xlabel('Magnitude')
plt.ylabel('number of points')
plt.title('Earthquake magnitudes 2010-2020')
plt.show()
###Output
_____no_output_____
###Markdown
There are so many small earthquakes that we can't even see a bin for the Tohoku quake. Let's make the histogram on a log-scale. For any function, we can put a question mark after it to get its docstring. Let's do this for `plt.hist`. Once you execute the cell below, you will see that there are a lot of options (which you can also view here: https://matplotlib.org/api/_as_gen/matplotlib.pyplot.hist.html). One of the options is to make the plot be on a log scale by setting `log=True`.
###Code
plt.hist?
###Output
_____no_output_____
###Markdown
**_Make a histogram of the Earthquake magnitude data on a log-scale_** Set `log=True` within the `plt.hist` function.
###Code
plt.hist(EQ_data['mag'],bins=10,label='Magnitude',log=True)
plt.xlabel('Magnitude')
plt.ylabel('number of events')
plt.title('Earthquake Magnitudes $\geq$5 (2010-2020)')
plt.show()
###Output
_____no_output_____
###Markdown
Let's change the features of the plot from the defaults to improve our figure.
###Code
plt.figure(1,(6,6))
plt.hist(EQ_data['mag'],bins=10,label='Magnitude',log=True)
plt.xlabel('Magnitude, Mw', fontsize=16)
plt.ylabel('Number of Events', fontsize=16)
plt.title('Earthquake Magnitudes $\geq$5 (2010-2020)', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.xlim([5, 9])
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
Plot histogram of Earthquake DepthsLet's see the range and frequency of depths where earthquakes occur. **_Make a histogram of earthquake depth_**
###Code
plt.figure(1,(6,6))
plt.hist(EQ_data['depth'],bins=10,label='Depth')
plt.xlabel('Earthquake Depth, km', fontsize=16)
plt.ylabel('Number of Events', fontsize=16)
plt.title('Depth of Earthquakes with $M_{W}\geq$5 (2010-2020)', fontsize=18)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plt.grid(True)
plt.show()
###Output
_____no_output_____
###Markdown
**_Discussion question:_** **_At what depth are the majority of earthquakes? How deep do they extend? How does that compare to the typical depth of the lithosphere (~100 km)?_**_Write your answer here._ Map of Earthquake Epicenters Now let's plot the epicenters of the earthquakes on a Robinson projection. Replace the xxx's with longitude and latitude in order to make the map.
###Code
plt.figure(figsize=(15,15))
ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
plt.scatter(EQ_data['longitude'],EQ_data['latitude'],marker='.',color='black',transform=ccrs.PlateCarree())
plt.title('Earthquake Epicenters 2010-2020')
ax.coastlines()
ax.stock_img()
ax.gridlines()
plt.show()
###Output
_____no_output_____
###Markdown
Small Group DiscussionGet into groups of three and discuss these questions while inspecting your maps:**_Where do the majority of earthquakes occur?_****_What do those locations correspond with?_****_What properties of these earthquakes should we investigate to learn more about the nature of plate tectonics in different places?_** _Write a summary of your discussion here._ Maps of Earthquake DepthThe map we made above is nice, but it doesn't tell us everything about our data such as the depth of the earthquakes. Let's color code the earthquakes by depth when we map them.To do this, use the same `plt.scatter()` function, but add the option to set the color by depth. You can do this by having `c=EQ_data['Depth']` within the function. You can customize the output by setting the minimum value for the color bar `vmin=0` and the maximum value `vmax=200`. You can also customize the colormap. A perceptually uniform sequential color map like `cmap='magma'` or `cmap='viridis'` works well (https://matplotlib.org/tutorials/colors/colormaps.html). I think it also is nice to make the points partially see through by setting `alpha=0.5`. All of these customizations can be made by adding these arguments within the `plt.scatter()` function.**_Make a map that colors points by depth by inserting these arguments in the plt.scatter() function in the code block below._**
###Code
fig = plt.figure(figsize=(15,15))
ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
plt.scatter(EQ_data['longitude'],EQ_data['latitude'],c=EQ_data['depth'],vmin=0,vmax=200,cmap='viridis',alpha=0.5,transform=ccrs.PlateCarree())
plt.title('Earthquake Epicenters 2010-2020')
ax.coastlines()
ax.gridlines(draw_labels=True)
plt.colorbar(shrink=0.4,label='depth (km)')
plt.show()
###Output
_____no_output_____
###Markdown
**_What depth of earthquakes occur at mid-ocean ridges?_***Write your answer here.* The earthquakes at trenches (like around the Pacific ocean's 'ring of fire') get deeper in a systematic way. The deepest earthquakes are the farthest from the trench. This reveals the location of the downgoing slabs. > A cross-section through a subduction zone. Red points are earthquake focus points. The most active region is the zone of contact between the plates. There is a back-arc seismic zone in the overriding plate. Below ~70 km depth earthquakes occur within the subducting plate, this region is call the Wadati-Benioff seosmic zone. What direction is subduction occuring below South America? Japan? _Write your answer here._ Andean subduction Let's look at a subset of this earthquake catalog across the Andes in South America. The code below is filtering the data frame to only include those between 20ºS and 25ºS latitude and 75ºW and 60ºW longitude.
###Code
selected_quakes = EQ_data[(EQ_data['latitude']>-25)&(EQ_data['latitude']<-20)
&(EQ_data['longitude']> -75)&(EQ_data['longitude']< -60)]
plt.figure(1,(10,10)) # make a big figure
ax = plt.axes(projection=ccrs.Robinson(central_longitude=-60))
ax.set_extent([-80, -40, -50, 5], crs=ccrs.PlateCarree())
plt.scatter(selected_quakes['longitude'],selected_quakes['latitude'],marker='.',
c=selected_quakes['depth'],transform=ccrs.PlateCarree())
plt.title('Earthquake Epicenters 2010-2020')
ax.coastlines()
ax.stock_img()
ax.gridlines(draw_labels=True)
plt.show()
###Output
_____no_output_____
###Markdown
Let's take all of the earthquakes within that region and plot earthquake depth on the y-axis and earthquake location on the x-axis. **Labeling axes is super important in science! Don't make plots without labeled axes!**
###Code
plt.scatter(selected_quakes['longitude'],-1*selected_quakes['depth'])
plt.xlabel('Longitude (°)')
plt.ylabel('Depth (km)')
plt.show()
###Output
_____no_output_____
###Markdown
Pick and plot two other locations of interestFilter the earthquake catalog by a latitude and longitude range like we did above the South America example. Plot the earthquakes on a map and make a similar depth vs. longitude plot (or depth vs latitude plot) for another region.
###Code
japan_selected_quakes = EQ_data[(EQ_data['latitude']>37)&(EQ_data['latitude']<44)
&(EQ_data['longitude']> 130)&(EQ_data['longitude']< 142)]
plt.figure(1,(10,10)) # make a big figure
ax = plt.axes(projection=ccrs.Robinson(central_longitude=130))
ax.set_extent([120, 170, 30, 60], crs=ccrs.PlateCarree())
plt.scatter(japan_selected_quakes['longitude'],japan_selected_quakes['latitude'],marker='.',
c=japan_selected_quakes['depth'],transform=ccrs.PlateCarree())
plt.title('Earthquake Epicenters 2010-2020')
ax.coastlines()
ax.stock_img()
ax.gridlines(draw_labels=True)
plt.show()
plt.scatter(japan_selected_quakes['longitude'],-1*japan_selected_quakes['depth'])
plt.xlabel('Longitude (°)')
plt.ylabel('Depth (km)')
plt.show()
alaska_selected_quakes = EQ_data[(EQ_data['latitude']>50)&(EQ_data['latitude']<60)
&(EQ_data['longitude']> -180)&(EQ_data['longitude']< -175)]
plt.figure(1,(10,10)) # make a big figure
ax = plt.axes(projection=ccrs.Mollweide(central_longitude=-170))
ax.set_extent([-180, -140, 45, 80], ccrs.PlateCarree())
plt.scatter(alaska_selected_quakes['longitude'],alaska_selected_quakes['latitude'],marker='.',
c=alaska_selected_quakes['depth'],transform=ccrs.PlateCarree())
plt.title('Earthquake Epicenters 2010-2020')
ax.coastlines()
ax.stock_img()
ax.gridlines(draw_labels=True)
plt.show()
plt.scatter(alaska_selected_quakes['latitude'],-1*alaska_selected_quakes['depth'])
plt.xlabel('Latitude (°)')
plt.ylabel('Depth (km)')
plt.show()
###Output
_____no_output_____
###Markdown
**You can take some time to explore different regions. We will then have a couple people come up and write the code to plot earthquake depths in other regions.** Turn in this notebookSave your completed notebook then run:
###Code
_ = ok.submit()
###Output
_____no_output_____ |
20pytorch/sequence_models_tutorial.ipynb | ###Markdown
Sequence Models and LSTM Networks (Long Short-Term Memory Networks)===================================================So far we have seen a variety of feed-forward networks, that is, networks that keep no state at all. Sometimes that is not what we want. Sequence models are central to natural language processing (NLP): they are models whose inputs depend on time. A classic sequence model is the Hidden Markov Model (HMM); another example is the Conditional Random Field (CRF).A recurrent neural network is a network that maintains some kind of state. For example, its output at one time step can be fed in as part of the input at the next time step, so information can propagate along the sequence through the network. In the case of an LSTM (Long Short-Term Memory network), each element of the sequence has a corresponding hidden state $h_t$, which in principle can contain information from any earlier point in the sequence. We can use the hidden state to predict words in a language model, part-of-speech tags, and many other things.LSTMs in Pytorch~~~~~~~~~~~~~~~~~~Before starting the example, note a few things. Pytorch's LSTM expects all of its inputs to be 3D tensors. The semantics of each axis are fixed and must not be mixed up: the first axis is the sequence itself, the second indexes instances in the mini-batch, and the third indexes elements of the input. We haven't discussed mini-batching, so let's just assume the second axis always has dimension 1. If we want to run a sequence model over the sentence "The cow jumped", the input would look like:\begin{align}\begin{bmatrix} \overbrace{q_\text{The}}^\text{row vector} \\ q_\text{cow} \\ q_\text{jumped} \end{bmatrix}\end{align}except that there is an additional second axis of size 1.In addition, you can feed the sequence into the network one element at a time, in which case the first axis will also have size 1.Let's look at a quick example.
###Code
# Author: Robert Guthrie
import torch
import torch.autograd as autograd
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
torch.manual_seed(1)
lstm = nn.LSTM(3, 3)  # input dim is 3, output dim is 3
inputs = [autograd.Variable(torch.randn((1, 3)))
for _ in range(5)]  # make a sequence of length 5
# initialize the hidden state
hidden = (autograd.Variable(torch.randn(1, 1, 3)),
autograd.Variable(torch.randn((1, 1, 3))))
for i in inputs:
# step through the sequence one element at a time;
# after each step, hidden contains the hidden state information
out, hidden = lstm(i.view(1, 1, -1), hidden)
# Alternatively, we can process the entire sequence at once. The first value
# returned by the LSTM is all of the hidden states along the sequence; the second
# is just the most recent hidden state (so the last value of "out" below equals "hidden").
# The reason for this split is that "out" gives access to all hidden states in the
# sequence, while "hidden" lets you continue the sequence and backpropagate later,
# by passing it as an argument to the LSTM at a later time.
# add the extra 2nd dimension
inputs = torch.cat(inputs).view(len(inputs), 1, -1)
hidden = (autograd.Variable(torch.randn(1, 1, 3)), autograd.Variable(
torch.randn((1, 1, 3))))  # reset the hidden state
out, hidden = lstm(inputs, hidden)
print(out)
print(hidden)
###Output
tensor([[[-0.0187, 0.1713, -0.2944]],
[[-0.3521, 0.1026, -0.2971]],
[[-0.3191, 0.0781, -0.1957]],
[[-0.1634, 0.0941, -0.1637]],
[[-0.3368, 0.0959, -0.0538]]], grad_fn=<CatBackward>)
(tensor([[[-0.3368, 0.0959, -0.0538]]], grad_fn=<ViewBackward>), tensor([[[-0.9825, 0.4715, -0.0633]]], grad_fn=<ViewBackward>))
###Markdown
Example: An LSTM for Part-of-Speech Tagging~~~~~~~~~~~~~~~~~~~~~~~~~~~~In this section we will use an LSTM to do part-of-speech tagging. We will not use the Viterbi algorithm, the forward-backward algorithm, or anything of that sort here; that is left as a (challenging) exercise for the reader, to attempt after working through this section and thinking about how Viterbi could be combined with an LSTM.The model is set up as follows: let the input sentence be $w_1, \dots, w_M$, where each word $w_i$ comes from our vocabulary, let $T$ be the tag set, and let $y_i$ be the true tag of word $w_i$. Denote the predicted tag of word $w_i$ by $\hat{y}_i$.This is a structured prediction model: the output is a sequence $\hat{y}_1, \dots, \hat{y}_M$, where $\hat{y}_i \in T$.To make a prediction, pass each word of the sentence through an LSTM, and write $h_i$ for the hidden state at time step $i$. Also assign each tag a unique index (just like word\_to\_ix in the word embeddings part). The prediction rule for $\hat{y}_i$ is then:\begin{align}\hat{y}_i = \text{argmax}_j \ (\log \text{Softmax}(Ah_i + b))_j\end{align}That is, apply an affine map to the hidden state, take the log softmax, and the predicted tag is the tag with the largest value in this vector. Note this implies that the dimensionality of the target space of $A$ is $|T|$.Prepare the data:
###Code
def prepare_sequence(seq, to_ix):
idxs = [to_ix[w] for w in seq]
tensor = torch.LongTensor(idxs)
return autograd.Variable(tensor)
training_data = [
("The dog ate the apple".split(), ["DET", "NN", "V", "DET", "NN"]),
("Everybody read that book".split(), ["NN", "V", "DET", "NN"])
]
word_to_ix = {}
for sent, tags in training_data:
for word in sent:
if word not in word_to_ix:
word_to_ix[word] = len(word_to_ix)
print(word_to_ix)
tag_to_ix = {"DET": 0, "NN": 1, "V": 2}
# These would usually be more like 32 or 64 dimensional in practice.
# We keep them small here so it is easy to see how the weights change during training.
EMBEDDING_DIM = 6
HIDDEN_DIM = 6
###Output
{'The': 0, 'dog': 1, 'ate': 2, 'the': 3, 'apple': 4, 'Everybody': 5, 'read': 6, 'that': 7, 'book': 8}
###Markdown
Create the model:
###Code
class LSTMTagger(nn.Module):
def __init__(self, embedding_dim, hidden_dim, vocab_size, tagset_size):
super(LSTMTagger, self).__init__()
self.hidden_dim = hidden_dim
self.word_embeddings = nn.Embedding(vocab_size, embedding_dim)
# The LSTM takes word embeddings as inputs and outputs hidden states of size hidden_dim.
self.lstm = nn.LSTM(embedding_dim, hidden_dim)
# The linear layer maps from hidden state space to tag space.
self.hidden2tag = nn.Linear(hidden_dim, tagset_size)
self.hidden = self.init_hidden()
def init_hidden(self):
# Before we've done anything, we don't have a hidden state.
# Refer to the Pytorch documentation for details on this dimensionality.
# The axes semantics are (num_layers, minibatch_size, hidden_dim)
return (autograd.Variable(torch.zeros(1, 1, self.hidden_dim)),
autograd.Variable(torch.zeros(1, 1, self.hidden_dim)))
def forward(self, sentence):
embeds = self.word_embeddings(sentence)
lstm_out, self.hidden = self.lstm(
embeds.view(len(sentence), 1, -1), self.hidden)
tag_space = self.hidden2tag(lstm_out.view(len(sentence), -1))
tag_scores = F.log_softmax(tag_space, dim=1)
return tag_scores
###Output
_____no_output_____
###Markdown
Train the model:
###Code
model = LSTMTagger(EMBEDDING_DIM, HIDDEN_DIM, len(word_to_ix), len(tag_to_ix))
loss_function = nn.NLLLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)
# See what the scores are before training.
# Note: element i,j of the output is the score for tag j for word i.
inputs = prepare_sequence(training_data[0][0], word_to_ix)
tag_scores = model(inputs)
print(tag_scores)
for epoch in range(300):  # again, you would normally not train for 300 epochs; this is toy data
for sentence, tags in training_data:
# Step 1. Remember that Pytorch accumulates gradients;
# we need to clear them out before each instance.
model.zero_grad()
# We also need to clear out the hidden state of the LSTM,
# detaching it from its history on the previous instance.
model.hidden = model.init_hidden()
# Step 2. Get the inputs ready for the network, i.e. turn them into Variables of word indices.
sentence_in = prepare_sequence(sentence, word_to_ix)
targets = prepare_sequence(tags, tag_to_ix)
# Step 3. Run the forward pass.
tag_scores = model(sentence_in)
# Step 4. Compute the loss and gradients, and update the parameters by calling optimizer.step().
loss = loss_function(tag_scores, targets)
loss.backward()
optimizer.step()
# See what the scores are after training.
inputs = prepare_sequence(training_data[0][0], word_to_ix)
tag_scores = model(inputs)
# The sentence is "the dog ate the apple". i,j corresponds to the score for tag j for word i.
# We take the tag with the highest score as the prediction. The predicted sequence here
# is 0 1 2 0 1: 0 is the index of the maximum value in row 1, 1 is the index of the
# maximum value in row 2, and so on. So the predicted tags are DET NOUN VERB DET NOUN,
# i.e. the whole sequence is correct!
print(tag_scores)
###Output
tensor([[-1.1389, -1.2024, -0.9693],
[-1.1065, -1.2200, -0.9834],
[-1.1286, -1.2093, -0.9726],
[-1.1190, -1.1960, -0.9916],
[-1.0137, -1.2642, -1.0366]], grad_fn=<LogSoftmaxBackward>)
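###Markdown
As a final check, here is a short sketch (not part of the original tutorial) of how to turn the log-softmax scores into actual tag labels by taking the argmax along the tag dimension.
###Code
# torch.max along dim=1 returns (values, indices); the indices are the predicted tags.
_, predicted_indices = torch.max(tag_scores, dim=1)
ix_to_tag = {ix: tag for tag, ix in tag_to_ix.items()}
print([ix_to_tag[ix.item()] for ix in predicted_indices])
###Output
_____no_output_____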
###Markdown
Exercise: Augmenting the LSTM part-of-speech tagger with character-level features~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~In the example above, each word has an embedding that serves as the input to the sequence model. Let's now augment the word embeddings with a character-level representation of each word. We expect this to help significantly, since character-level information such as affixes has a large bearing on part of speech. For example, words with the affix *-ly* are almost always tagged as adverbs.Concretely, let $c_w$ be the character-level representation of word $w$, and, as before, let $x_w$ be the word embedding. The input to the sequence model is then the concatenation of $x_w$ and $c_w$. So if $x_w$ has dimension 5 and $c_w$ has dimension 3, the input to our LSTM has dimension 8.To get the character-level representation, run each character of the word through an LSTM, and let $c_w$ be the final hidden state of that LSTM. Hints:* The new model will need two LSTMs: one like before, which outputs the POS tag scores, and a new one that produces the character-level representation of each word.* To run a sequence model over the characters, you will need to embed the characters; the character embeddings are the input to the character LSTM. A small sketch of the character-level piece is given right below.
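One possible sketch, not a full solution: a minimal character-level LSTM that turns a single word into a vector $c_w$ (its final hidden state). The names `CHAR_EMBEDDING_DIM`, `CHAR_HIDDEN_DIM`, `char_to_ix` and `char_rep` are hypothetical choices made here, not part of the tutorial above.
###Code
# Sketch only: build a character vocabulary from the words we have already seen,
# embed each character, run a character LSTM, and keep the final hidden state as c_w.
CHAR_EMBEDDING_DIM = 3
CHAR_HIDDEN_DIM = 3
char_to_ix = {ch: i for i, ch in enumerate(sorted(set("".join(word_to_ix))))}
char_embeddings = nn.Embedding(len(char_to_ix), CHAR_EMBEDDING_DIM)
char_lstm = nn.LSTM(CHAR_EMBEDDING_DIM, CHAR_HIDDEN_DIM)

def char_rep(word):
    idxs = autograd.Variable(torch.LongTensor([char_to_ix[ch] for ch in word]))
    embeds = char_embeddings(idxs).view(len(word), 1, -1)
    _, (h_n, _) = char_lstm(embeds)
    return h_n.view(-1)  # a CHAR_HIDDEN_DIM-dimensional vector to concatenate with x_w

char_rep("apple")
###Output
_____no_output_____
###Markdown
Your own implementation can go in the empty cell below.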
###Code
###Output
_____no_output_____ |
notebooks/SVM.ipynb | ###Markdown
Some necessary library imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Some data preprocessing
###Code
data = pd.read_csv('sbi_l.csv')
data.drop(['Unnamed: 0'],axis = 1, inplace = True)
y = pd.DataFrame(data['CloseNext'])
X = data.drop(['CloseNext'], axis = 1)
X = np.array(X)
y = np.array(y)
#X = X[1700:2030,:]
#y = y[1700:2030,:]
y = y.flatten()
###Output
_____no_output_____
###Markdown
Feature scaling
###Code
scaled = StandardScaler()
scaled.fit(X)
X = scaled.transform(X)
###Output
_____no_output_____
###Markdown
Train Test Split
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)
###Output
_____no_output_____
###Markdown
Support Vector Machine Regression Performing Grid Search for finding best parameters for SVM Regression
###Code
parameters = {'kernel':('poly', 'rbf'), 'C':[10000,20000,30000,50000,100000],'degree':[1,2],'epsilon':[0.1,1,2,4,5],'tol': [0.1,0.5]}
sv = SVR(gamma = 'auto')
grid_search = GridSearchCV(sv, parameters, verbose = 3 ,cv = 3)
#grid_search.fit(X_train,y_train)
#grid_search.best_estimator_
svm_regression = SVR(C=30000, cache_size=200, coef0=0.0, degree=1, epsilon=5, gamma='auto',
                     kernel='poly', max_iter=-1, shrinking=True, tol=0.1, verbose=False)
svm_regression.fit(X_train, y_train)
svm_predict = svm_regression.predict(X_test)
###Output
_____no_output_____
###Markdown
Testing
###Code
error = mean_absolute_error(svm_predict,y_test)
per_err = (error/np.mean(y_test)) * 100
print('The mean absolute error is {} and percentage error is {}.'.format(error,per_err))
fig=plt.figure(figsize=(30, 15), dpi= 80, facecolor='w', edgecolor='k')
plt.plot(svm_predict[0:50],label = "Predicted values")
plt.plot(y_test[0:50], label = "Actual values")
plt.xlabel('time in day', fontsize = 'xx-large')
plt.ylabel('Stock prices in INR', fontsize = 'xx-large')
plt.title('Comparing predicted and actual values', fontsize = 'xx-large')
plt.legend(fontsize = 'xx-large')
plt.show()
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis
###Code
print(titanic_train.shape)
print(titanic_test.shape)
print(titanic_result.shape)
#Combine train and test data set to cleanup and transform togather
titanic_data= pd.concat([titanic_train,titanic_test], axis=0)
PassengerId=titanic_test['PassengerId']
titanic_data.info()
titanic_data.head()
titanic_data.describe()
sb.countplot(x="Survived",data=titanic_train)
sb.countplot(x="Survived", hue="Sex", data=titanic_train)
sb.countplot(x="Survived", hue="Pclass", data=titanic_train)
titanic_train["Age"].plot.hist()
titanic_train["Fare"].plot.hist(bins=20, figsize=(10,4))
sb.countplot("Survived",hue="Embarked",data=titanic_train)
###Output
_____no_output_____
###Markdown
Data Wrangling
###Code
titanic_data.isnull().sum()
sb.boxplot(x="Pclass",y="Age", data=titanic_data)
def name_extract(word):
return word.split(',')[1].split('.')[0].strip()
titanic_data['Salutation']=titanic_data['Name'].apply(name_extract)
titanic_data.groupby('Salutation').PassengerId.count()
def group_salutation(Salutation):
    if Salutation == "Mr":
        return "Mr"
    elif Salutation == "Miss":
        return "Miss"
    elif Salutation == "Mrs":
        return "Mrs"
    elif Salutation == "Master":
        return "Master"
    else:
        return "Others"
titanic_data.Salutation=titanic_data.Salutation.apply(group_salutation)
#meanAge=np.mean(titanic_data.Age)
#titanic_data["Age"]=titanic_data.Age.fillna(meanAge)
missing_ages = titanic_data[titanic_data['Age'].isnull()]
# determine mean age based on Sex and Pclass
mean_ages = titanic_data.groupby(['Sex','Pclass'])['Age'].mean()
def remove_na_ages(df):
if pd.isnull(df['Age']):
return mean_ages[df['Sex'],df['Pclass']]
else:
return df['Age']
titanic_data['Age'] =titanic_data.apply(remove_na_ages, axis=1)
titanic_data.head()
#Assign value 1 for passengers with cabin
titanic_data.loc[titanic_data['Cabin'].notnull(), 'Cabin'] = 1
#Assign value 0 for passengers without cabin
titanic_data.loc[titanic_data['Cabin'].isnull(), 'Cabin'] = 0
titanic_data.head()
titanic_data=pd.get_dummies(titanic_data, columns=['Sex','Pclass','Embarked','Salutation'])
#sex=pd.get_dummies(titanic_data["Sex"],drop_first=True)
titanic_data.head()
###Output
_____no_output_____
###Markdown
def Age_trans(age): if(age<=5.0): return 'Infant' elif(age>5.0 and age<13.0): return 'Toddler' elif(age >=13.0 and age<18.0): return 'Teenage' elif(age >=18.0 and age<45.0): return 'Adult' else: return 'Old' titanic_data['Age']=titanic_data['Age'].apply(Age_trans)
###Code
titanic_data.shape
titanic_data.drop(['Name','Ticket','PassengerId'],axis=1,inplace=True)
#Split the records into train and test.
titanic_test=titanic_data[titanic_data.Source=='test']
titanic_train=titanic_data[titanic_data.Source=='train']
print(titanic_test.shape)
print(titanic_train.shape)
titanic_train=titanic_train.drop('Source',axis=1)
titanic_test=titanic_test.drop('Source',axis=1)
type(titanic_result.Survived.values)
y_train=titanic_train['Survived']
x_train=titanic_train.drop(['Survived'],axis=1).values
x_test=titanic_test.drop(['Survived'],axis=1).values
y_test=titanic_result.Survived
from sklearn.model_selection import GridSearchCV
from sklearn import svm
model=svm.SVC()
#Hyper Parameters Set
params = [{'kernel': ['rbf'],'gamma':[1e-4,1e-3,0.01,0.1,0.2,0.5,1,2,5,10],
'C': [1,10,100,1000]},
{'kernel':['linear'],'C':[1,10,100,1000]}]
model1 = GridSearchCV(model, param_grid=params, cv=3, n_jobs=-1)
###Output
_____no_output_____
###Markdown
Experiment 3.4 - SVM
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# import load_data function from helper file
%load_ext autoreload
%autoreload 2
# fix system path
import sys
sys.path.append("/home/jovyan/work")
from src.features.helper_functions import load_sets
X_train, y_train, X_val, y_val, X_test = load_sets()
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# fit and transform training set
X_train = scaler.fit_transform(X_train)
# only transform validation set
X_val = scaler.transform(X_val)
from sklearn.svm import SVC
svclassifier = SVC(kernel='linear', probability=True)
svclassifier.fit(X_train, y_train)
# predict class
y_train_preds = svclassifier.predict(X_train)
y_val_preds = svclassifier.predict(X_val)
# predict proabilities
y_train_preds_prob = svclassifier.predict_proba(X_train)
y_val_preds_prob = svclassifier.predict_proba(X_val)
from src.features.helper_functions import result_metrics
result_metrics(y_train, y_train_preds,y_train_preds_prob)
result_metrics(y_val, y_val_preds,y_val_preds_prob)
###Output
Accuracy: 83.94%
Precision: 83.94%
Recall: 100.00%
AUC using prediction probabilities: 46.567%
precision recall f1-score support
0 0.00 0.00 0.00 257
1 0.84 1.00 0.91 1343
accuracy 0.84 1600
macro avg 0.42 0.50 0.46 1600
weighted avg 0.70 0.84 0.77 1600
Confusion Matrix
[[ 0 257]
[ 0 1343]]
###Markdown
Not very good, perhaps a different kernel would do better
###Code
svclassifier_rbf = SVC(kernel='rbf', probability=True)
svclassifier_rbf.fit(X_train, y_train)
# predict class
y_train_preds = svclassifier_rbf.predict(X_train)
y_val_preds = svclassifier_rbf.predict(X_val)
# predict proabilities
y_train_preds_prob = svclassifier_rbf.predict_proba(X_train)
y_val_preds_prob = svclassifier_rbf.predict_proba(X_val)
from src.features.helper_functions import result_metrics
result_metrics(y_train, y_train_preds,y_train_preds_prob)
result_metrics(y_val, y_val_preds,y_val_preds_prob)
###Output
Accuracy: 83.94%
Precision: 83.94%
Recall: 100.00%
AUC using prediction probabilities: 60.510%
precision recall f1-score support
0 0.00 0.00 0.00 257
1 0.84 1.00 0.91 1343
accuracy 0.84 1600
macro avg 0.42 0.50 0.46 1600
weighted avg 0.70 0.84 0.77 1600
Confusion Matrix
[[ 0 257]
[ 0 1343]]
###Markdown
Bit better, but not much, try sigmoid kernel
###Code
svclassifier_sig = SVC(kernel='sigmoid', probability=True)
svclassifier_sig.fit(X_train, y_train)
# predict class
y_train_preds = svclassifier_sig.predict(X_train)
y_val_preds = svclassifier_sig.predict(X_val)
# predict proabilities
y_train_preds_prob = svclassifier_sig.predict_proba(X_train)
y_val_preds_prob = svclassifier_sig.predict_proba(X_val)
from src.features.helper_functions import result_metrics
result_metrics(y_train, y_train_preds,y_train_preds_prob)
result_metrics(y_val, y_val_preds,y_val_preds_prob)
###Output
Accuracy: 74.56%
Precision: 84.67%
Recall: 85.11%
AUC using prediction probabilities: 57.998%
precision recall f1-score support
0 0.20 0.19 0.20 257
1 0.85 0.85 0.85 1343
accuracy 0.75 1600
macro avg 0.52 0.52 0.52 1600
weighted avg 0.74 0.75 0.74 1600
Confusion Matrix
[[ 50 207]
[ 200 1143]]
###Markdown
Training SVM baseline on Feit-Thompson datasetWe train an SVM baseline on a set of heuristic features like context size, goal size, number of hypotheses in the context, and the smallest edit distance between a hypothesis and the context.
###Code
cd ..
import argparse
import pickle
import torch
import numpy as np
from ml.fold_model import TacStModel
from ml.fold_train import TacStTrainer
# from ipdb import launch_ipdb_on_exception
# from ml.rewrite.solver import to_goalattn_dataset, run
from ml.rewrite.simprw import run_end2end
from ml.rewrite.dataset_prep import to_goalattn_dataset
from ml.tacst_prep import Dataset, TacStPt
from coq.tactics import TACTICS_EQUIV
###Output
_____no_output_____
###Markdown
Loading dataset. Note that this is slightly smaller than the full dataset as the edit-distance calculation timed out on the biggest trees.
###Code
with open("tacst_edit.pickle", "rb") as f:
tacst_dataset, kern_tokens_to_idx, mid_tokens_to_idx = pickle.load(f)
print("Points Train={} Val={} Test={}".format(len(tacst_dataset.train), len(tacst_dataset.val), len(tacst_dataset.test)))
###Output
Points Train=61297 Val=7503 Test=7622
###Markdown
Fitting SVM modelsWe note that the features look approximately Poisson, while the log features look approximately Gaussian, hence we train the SVM models on the log features.
###Code
from sklearn import svm
def svm_models(dataset):
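# For each tree representation ("mid_noimp" and "kern") and each target ("subtr_bin",
# "tac_bin"), fit an SVC on log(1 + x) of the heuristic features (sizes, context
# length, string edit distance) and report train/val/test accuracy.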
for typ in ["mid_noimp", "kern"]:
for targ in ["subtr_bin", "tac_bin"]:
size_features = ['%s_concl_size' % typ, '%s_ctx_size' % typ]
len_features = ['len_ctx']
edit_dist_features = ['%s_str_dist' % typ]
features = size_features + len_features + edit_dist_features
def _get_features(pt, features = features):
return [getattr(pt, f) for f in features]
def _get_targets(pt, targ = targ):
return getattr(pt, targ)
def get_xy(dataset):
X = np.asarray([_get_features(pt) for (tactr_id,pt) in dataset])
Y = np.asarray([_get_targets(pt) for (tactr_id,pt) in dataset])
return X,Y
x_t, y_t = get_xy(dataset.train)
x_v, y_v = get_xy(dataset.val)
x_s, y_s = get_xy(dataset.test)
clf = svm.SVC()
clf.fit(np.log(1+x_t), y_t)
score_t = clf.score(np.log(1+x_t), y_t)
score_v = clf.score(np.log(1+x_v), y_v)
score_s = clf.score(np.log(1+x_s), y_s)
print(typ, targ, "%.4f %.4f %.4f" % (score_t, score_v, score_s))
svm_models(tacst_dataset)
###Output
mid_noimp subtr_bin 0.5994 0.6008 0.5752
mid_noimp tac_bin 0.4933 0.5119 0.4945
kern subtr_bin 0.5975 0.6039 0.5737
kern tac_bin 0.4899 0.5045 0.4894
###Markdown
Project files
###Code
%load_ext autoreload
%autoreload 2
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append('/content/drive/MyDrive/TFG/implementations/machine_learning_tfg/')
import pandas as pd
from sklearn import svm
from sklearn.model_selection import train_test_split, cross_validate, StratifiedShuffleSplit
from src.utils.model_metrics_generator import ModelMetricsGenerator
from src.utils.cross_validation_utils import CrossValidationMetricsResultPrinter
from src.utils.my_metrics import accuracy_precision_recall_specifity_f2_score
###Output
_____no_output_____
###Markdown
Load data
###Code
input_data = pd.read_excel('/content/drive/MyDrive/TFG/implementations/machine_learning_tfg/data/prepared/prepared_ICU_Prediction.xlsx')
ground_truth = input_data['ICU']
sample_data = input_data.drop('ICU', axis=1)
train_data, test_data, train_truth, test_truth = train_test_split(sample_data, ground_truth, test_size=0.2, shuffle=True, random_state=42)
###Output
_____no_output_____
###Markdown
Linear SVM
###Code
def linear_model(max_iter=1000):
""" Instantiate, train and predict a linear SVM model
"""
linear_model = svm.LinearSVC(max_iter=max_iter)
metric_generator = ModelMetricsGenerator(linear_model, test_truth)
metric_generator.fit_and_predict_model(train_data, train_truth, test_data)
metric_generator.print_results()
linear_model()
linear_model(10000)
def weighted_linear_model(max_iter=10000, weights = {0:1, 1:1}):
""" Instantiate, apply cross validation, train and predict a weighted linear SVM model
"""
metrics = accuracy_precision_recall_specifity_f2_score()
sskfold = StratifiedShuffleSplit(random_state=1)
linear_model = svm.LinearSVC(max_iter=max_iter, class_weight=weights)
#cross validation
results = cross_validate(linear_model, train_data, train_truth, cv=sskfold, scoring=metrics, n_jobs=-1)
printer = CrossValidationMetricsResultPrinter()
printer.print_metrics_report(results)
#fit and predict for getting measures
metric_generator = ModelMetricsGenerator(linear_model, test_truth)
metric_generator.fit_and_predict_model(train_data, train_truth, test_data)
metric_generator.print_results()
weighted_linear_model(max_iter=10000, weights={1:10})
weighted_linear_model(max_iter=30000, weights={1:6})
weighted_linear_model(max_iter=30000, weights={1:10})
weighted_linear_model(max_iter=50000, weights={1:10})
###Output
Valores medios:
Fit time: 22.8887
Test time: 0.0078
Accuracy: 70.91
Precision: 52.65
Recall: 92.86
Specificity: 60.67
F2 score: 80.46
22.8887
0.0078
70.91
52.65
92.86
60.67
80.46
Indicadores rendimiento:
Fit time: 16.8787
Predict time: 0.0017
Accuracy: 71.27
Precision: 56.98
Recall: 95.15
Specificity: 56.98
F2-score: 83.9
16.8787
0.0017
71.27
56.98
95.15
56.98
83.9
###Markdown
Non linear
###Code
def weighted_non_linear_model(max_iter=10000, weights = {0:1, 1:1}, kernel='rbf'):
""" Instantiate, apply cross validation, train and predict a weighted non linear SVC model
"""
metrics = accuracy_precision_recall_specifity_f2_score()
sskfold = StratifiedShuffleSplit(random_state=1)
model = svm.SVC(max_iter=max_iter, class_weight=weights, kernel=kernel)
#cross validation
results = cross_validate(model, train_data, train_truth, cv=sskfold, scoring=metrics, n_jobs=-1)
printer = CrossValidationMetricsResultPrinter()
printer.print_metrics_report(results)
#fit and predict for getting measures
metric_generator = ModelMetricsGenerator(model, test_truth)
metric_generator.fit_and_predict_model(train_data, train_truth, test_data)
metric_generator.print_results()
weighted_non_linear_model(max_iter=1000)
weighted_non_linear_model(weights={1:6}, kernel='poly')
weighted_non_linear_model(weights={1:6}, kernel='sigmoid')
weighted_non_linear_model(weights={1:3}, kernel='sigmoid')
###Output
Valores medios:
Fit time: 0.4239
Test time: 0.0507
Accuracy: 76.0
Precision: 60.25
Recall: 73.43
Specificity: 77.2
F2 score: 70.23
0.4239
0.0507
76.0
60.25
73.43
77.2
70.23
Indicadores rendimiento:
Fit time: 0.453
Predict time: 0.1017
Accuracy: 73.82
Precision: 62.6
Recall: 74.76
Specificity: 73.26
F2-score: 71.96
0.453
0.1017
73.82
62.6
74.76
73.26
71.96
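###Markdown
Rather than sweeping `max_iter`, `class_weight` and the kernel by hand as above, the same search can be automated. The cell below is only a sketch of that idea and is not part of the original experiments: the parameter grid is an arbitrary assumption, and `scoring='recall'` is used as a simple stand-in for the custom F2-based metrics.
###Code
# Hypothetical sketch: grid-search class weights and C for the linear SVM,
# reusing the train_data/train_truth split defined above.
from sklearn.model_selection import GridSearchCV

param_grid = {'class_weight': [{1: w} for w in (2, 4, 6, 8, 10)],
              'C': [0.1, 1.0, 10.0]}
search = GridSearchCV(svm.LinearSVC(max_iter=30000), param_grid,
                      scoring='recall', cv=5, n_jobs=-1)
search.fit(train_data, train_truth)
print(search.best_params_, search.best_score_)
###Output
_____no_output_____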
###Markdown
Final Notebook - using SVM Add `archive.zip` (dataset) to your Google Drive
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
_____no_output_____
###Markdown
Unzipping `archive.zip` from drive
###Code
!unzip drive/My\ Drive/archive.zip > /dev/null #-d drive/My\ Drive/enron
###Output
_____no_output_____
###Markdown
Combining all the datasets into one directory for final export of model
###Code
%%bash
cat << EOF > s.sh
#!/bin/bash
mkdir enron/{ham,spam} -p
for i in \$(ls | grep "enron[1-6]")
do
cp \$i/ham/* enron/ham
cp \$i/spam/* enron/spam
done
EOF
chmod +x s.sh
./s.sh
import os
import pandas as pd
import sklearn as sk
from sklearn.svm import LinearSVC
from sklearn.metrics import confusion_matrix, accuracy_score
from sklearn.model_selection import train_test_split
import nltk
nltk.download("stopwords")
nltk.download('punkt')
# Change the folder accordingly
dir = "enron1"
spam_list = [os.path.join(dir+"/spam",f) for f in os.listdir(dir+"/spam")]
ham_list = [os.path.join(dir+"/ham",f) for f in os.listdir(dir+"/ham")]
allHamData, allSpamData = [], []
for obj in ham_list:
with open(obj,encoding='latin1') as ip:
allHamData.append(" ".join(ip.readlines()))
for obj in spam_list:
with open(obj,encoding='latin1') as ip:
allSpamData.append(" ".join(ip.readlines()))
allHamData = list(set(allHamData))
allSpamData = list(set(allSpamData))
hamPlusSpamData = allHamData + allSpamData
labels = ["ham"]*len(allHamData) + ["spam"]*len(allSpamData)
df = pd.DataFrame({"email": hamPlusSpamData, "label": labels})
cv_vec = sk.feature_extraction.text.TfidfVectorizer(tokenizer = nltk.word_tokenize, stop_words = nltk.corpus.stopwords.words("english"))
X = cv_vec.fit_transform(df.email)
label_encoder = sk.preprocessing.LabelEncoder()
y = label_encoder.fit_transform(df.label)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=8, test_size=0.2)
model = LinearSVC()
model.fit(X_train,y_train)
result = model.predict(X_test)
print(confusion_matrix(y_test,result))
print("SVM Accuracy: ",accuracy_score(y_test,result)*100,"%")
# SVM Accuracy of enron1: 98.7987987987988 %
# SVM Accuracy of enron2: 99.14163090128756 %
# SVM Accuracy of enron3: 99.0521327014218 %
# SVM Accuracy of enron4: 98.80341880341881 %
# SVM Accuracy of enron5: 99.21798631476051 %
# SVM Accuracy of enron6: 98.41402337228715 %
###Output
_____no_output_____
###Markdown
Combining all the datasets ( preparing for export)
###Code
cv_vec = sk.feature_extraction.text.TfidfVectorizer(tokenizer = nltk.word_tokenize, stop_words = nltk.corpus.stopwords.words("english"))
X = cv_vec.fit_transform(df.email)
label_encoder = sk.preprocessing.LabelEncoder()
y = label_encoder.fit_transform(df.label)
model = LinearSVC()
model.fit(X,y)
!pip freeze | grep pic
###Output
cloudpickle==1.3.0
pickleshare==0.7.5
portpicker==1.3.9
###Markdown
Save Model Using `pickle`
###Code
import pickle
pickle.dump(model, open('SVM.sav', 'wb'))
pickle.dump(cv_vec,open('vectorizer.pk', 'wb'))
# load the model from disk
loaded_model = pickle.load(open('SVM.sav', 'rb'))
fin = pickle.load(open('vectorizer.pk', 'rb'))
text = '''Subject: re : 2 . 882 s - > np np
> deat : sun , 15 dec 91 2 : 25 : 2 est > : michael < mmorse @ vm1 . yorku . ca > > subject : re : 2 . 864 query > > wlodek zadrozny ask " anything interest " > construction " s > np np " . . . second , > much relate : consider construction form > discuss list late reduplication ? > logical sense " john mcnamara name " tautologous thus , > level , indistinguishable " , , here ? " . ' john mcnamara name ' tautologous support those logic-base semantics irrelevant'''
X = fin.transform([text])
result = loaded_model.predict(X)
print(result)
###Output
_____no_output_____
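###Markdown
The prediction above comes back as an encoded integer (the output of `LabelEncoder`). A minimal sketch of mapping it back to a readable label, assuming the `label_encoder` fitted earlier in this notebook is still in scope:
###Code
# Decode the numeric prediction back to "ham"/"spam" (illustrative only).
print(label_encoder.inverse_transform(result))
###Output
_____no_output_____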
###Markdown
Support vector machines Kyle Willett 28 Jul 2016 Support vector machines are a general class of supervised learning models. They can be used for at least three different basic tasks in ML:* classification (assigning a label to data)* regression (predicting a continuous value associated with data)* detecting outliersIn the first two cases, SVMs compete with many other machine learning techniques that have a range of implementations. Here are some reasons why an SVM might be an appropriate tool for a problem:* The core of SVM lies in finding the extremum (usually the maximum) of the margin for the separating hyperplane. Since there is a particular value associated with that, it can be more robust than simple regression. * SVM is not limited to simple boundaries like those used in linear regression. * SVM uses the *kernel trick*, which maps input features into high-dimensional spaces with relatively low computational costs. This allows a variety of different kernels to be implemented. * A subset of the data (the support vectors themselves) are ultimately used as the predictor, so the technique can be more memory-efficient compared to (kernel) logistic regression. * Several parameters can be tuned to adjust for overfitting, including the cost $C$ and kernel parameters (type, $\sigma,\gamma$, etc).* SVM is by definition a convex optimization problem, so optimizing the margin does not risk being caught in local minima. The solution is thus always unique.* *Anecdotally*, SVM has given slightly better performance than regression in many real-world problems (eg, solvency analysis of Auria & Moro 2008).Reasons against using SVMs:* Linear (if applicable) or logistic regression can often provide comparable performance. * If $N_{features}\gg N_{samples}$, the margin will be strongly set by the high-dimensional spaces where there will be fewer support vectors, and so performance may suffer. (*It's not clear what better methods exist in that case.*)* Probability estimates come from $k$-fold cross-validation in scikit-learn, which is computationally expensive. * Doesn't work with categorical data unless one-hot encoded.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn import preprocessing, svm
from sklearn.cross_validation import train_test_split
# Try it out with some data; start with Titanic survivors.
titanic = pd.read_csv("../dc/titanic_train.csv")
titanic.head()
# Encode the categorical variables
# Sex (binary)
le_Sex = preprocessing.LabelEncoder()
le_Sex.fit(list(set(titanic.Sex)))
titanic['Sex_int'] = le_Sex.transform(titanic.Sex)
# Embarked (three sets)
embarked_filled = titanic.Embarked.fillna("N")
le_Embarked = preprocessing.LabelEncoder()
le_Embarked.fit(list(set(embarked_filled)))
titanic['Embarked_int'] = le_Embarked.transform(embarked_filled)
# Since there are still NaNs in the frame, impute missing values
tvar = ['Pclass', u'Sex_int', u'Age', u'SibSp', u'Parch',
u'Fare', u'Embarked_int']
imp = preprocessing.Imputer(missing_values="NaN",strategy="mean")
imp.fit(titanic[tvar])
imputed = imp.transform(titanic[tvar])
titanic['Survived'].values.shape
# Split into test and training data
X = imputed
y = titanic['Survived'].values
scaler = preprocessing.StandardScaler().fit(X)
X_scaled = scaler.transform(X)
X_train, X_test, y_train, y_test = train_test_split(X,y,train_size = 0.70,random_state=51)
# Load the SVM classifier
clf = svm.SVC(kernel='linear',C=1.0)
clf.fit(X_train,y_train);
clf.score(X_test,y_test)
y_score = clf.decision_function(X_test)
from sklearn.metrics import roc_curve,auc
fpr, tpr, thresholds = roc_curve(y_test,y_score)
roc_auc = auc(fpr,tpr)
fig,ax = plt.subplots(1,1,figsize=(8,6))
ax.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % roc_auc)
ax.plot([0, 1], [0, 1], 'k--')
ax.set_xlim([0.0, 1.0])
ax.set_ylim([0.0, 1.05])
ax.set_xlabel('False Positive Rate')
ax.set_ylabel('True Positive Rate')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Not great. $AUC=0.80$ could be much better, although it's a significant improvement over random.
###Code
print("This result used {} support vectors from the {}-sized training sample.".format(
    clf.support_vectors_.shape[0], X_train.shape[0]))
###Output
This result used 276 support vectors from the 623-sized training sample.
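###Markdown
One direction for improving on $AUC=0.80$ is to train on the scaled features computed above (the classifier was fit on the unscaled matrix) and to search over the kernel and its parameters. The cell below is only a sketch of that idea and not part of the original analysis; the grid values are arbitrary assumptions.
###Code
# Illustrative sketch: grid-search kernel, C and gamma on the scaled features.
# Assumes a scikit-learn version with GridSearchCV in sklearn.model_selection.
from sklearn.model_selection import GridSearchCV

param_grid = {'kernel': ['linear', 'rbf'],
              'C': [0.1, 1.0, 10.0],
              'gamma': ['auto', 0.01, 0.1]}
grid = GridSearchCV(svm.SVC(), param_grid, cv=5)
grid.fit(scaler.transform(X_train), y_train)
print(grid.best_params_)
print("Test accuracy: {:.3f}".format(grid.score(scaler.transform(X_test), y_test)))
###Output
_____no_output_____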
###Markdown
First, a few manipulations of the dataset so that it fits the model better.
###Code
iris = tc.SFrame.read_csv("Iris.csv")
def func1(x):
if x['Species']!='' :
return x['Species']
else :
return None
iris['Species'] = iris.apply(func1)
iris = iris.dropna(how ='any')
cor = {'Iris-setosa' : 'purple','Iris-virginica':'red'}
###Output
_____no_output_____
###Markdown
Our dataset is divided into two classes: 1) Iris-setosa, shown in purple in the plots. 2) Iris-virginica, shown in red in the plots.
###Code
mpl.scatter(iris['SepalLengthCm'],iris['Id'],c = iris['Species'].apply(lambda x: cor[x]))
mpl.scatter(iris['SepalWidthCm'],iris['Id'],c = iris['Species'].apply(lambda x: cor[x]))
mpl.scatter(iris['PetalLengthCm'],iris['Id'],c = iris['Species'].apply(lambda x: cor[x]))
mpl.scatter(iris['PetalWidthCm'],iris['Id'],c = iris['Species'].apply(lambda x: cor[x]))
###Output
_____no_output_____
###Markdown
From the plots above we can see that every feature in our dataset points to a trivial, linear separation between the elements of one class and the other.* Therefore, we will use an SVM to build an efficient linear classification model. *Here I use an extended version of the original dataset, for the sake of a better demonstration.*
###Code
irisExtendido = tc.SFrame.read_csv('IrisExtendido.csv')
irisExtendido = irisExtendido.dropna()
irisExtendido
dataTreino, dataTeste = irisExtendido.random_split(0.8,seed = 666)
modelo = tc.svm_classifier.create(dataTreino,target = 'Species', features = ['SepalLengthCm','SepalWidthCm','PetalLengthCm','PetalWidthCm'])
###Output
_____no_output_____
###Markdown
Here we see that, even though it is a model best suited to binary, separable classification data, the SVM is tremendously successful at classifying the data efficiently. Note in the 'accuracy' field that the accuracy was 100%, i.e., our model predicted the class of every test sample correctly.
###Code
modelo.evaluate(dataTeste)
###Output
_____no_output_____
###Markdown
Below we have the linear coefficients used to trace the hyperplane for each of the features received by our model.
###Code
modelo.coefficients
###Output
_____no_output_____ |
demos/datasets/PDBMetaDataDemo.ipynb | ###Markdown
PDB Meta Data Demo This demo shows how to query metadata from the PDB archive. This example queries the \_citation category. Each category represents a table, and fields represent database columns. [Available tables and columns](https://pdbj.org/mine-rdb-docs) Example data from 100D.cif * _citation.id primary * _citation.title Crystal structure of ... * _citation.journal_abbrev 'Nucleic Acids Res.' * _citation.journal_volume 22 * _citation.page_first 5466 * _citation.page_last 5476 * _citation.year 1994 * _citation.journal_id_ASTM NARHAD * _citation.country UK * _citation.journal_id_ISSN 0305-1048 * _citation.journal_id_CSD 0389 * _citation.book_publisher ? * _citation.pdbx_database_id_PubMed 7816639 * _citation.pdbx_database_id_DOI 10.1093/nar/22.24.5466 Data are provided through [Mine 2 SQL](https://pdbj.org/help/mine2-sql) Queries can be designed with the interactive [PDBj Mine 2 query service](https://pdbj.org/help/mine2-sql) Imports
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from mmtfPyspark.datasets import pdbjMineDataset
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Configure Spark
###Code
spark = SparkSession.builder.appName("PDBMetaDataDemo").getOrCreate()
###Output
2022-01-23 16:41:52 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
###Markdown
Query PDBj Mine Query the following fields from the \_citation category using PDBj's Mine 2 web service: * journal_abbrev * pdbx_database_id_PubMed * year Note: mixed case column names must be quoted and escaped with \
###Code
sqlQuery = "SELECT pdbid, journal_abbrev, \"pdbx_database_id_PubMed\", year from citation WHERE id = 'primary'"
ds = pdbjMineDataset.get_dataset(sqlQuery)
###Output
_____no_output_____
###Markdown
Show first 10 results from query
###Code
ds.show(10, False)
###Output
+-----------+------------------+-----------------------+----+
|structureId|journal_abbrev |pdbx_database_id_PubMed|year|
+-----------+------------------+-----------------------+----+
|101M |Thesis, Rice |-1 |1999|
|100D |Nucleic Acids Res.|7816639 |1994|
|101D |Biochemistry |7711020 |1995|
|107D |J.Mol.Biol. |7731041 |1995|
|107L |Science |8503008 |1993|
|108D |Biochemistry |7612596 |1995|
|107M |Thesis, Rice |-1 |1999|
|108L |Science |8503008 |1993|
|102M |Thesis, Rice |-1 |1999|
|102D |J.Med.Chem. |7608897 |1995|
+-----------+------------------+-----------------------+----+
only showing top 10 rows
###Markdown
Filter out unpublished entries Published entries contain the word "published" in various upper/lower case combinations
###Code
ds = ds.filter("UPPER(journal_abbrev) NOT LIKE '%PUBLISHED%'")
###Output
_____no_output_____
###Markdown
Show the top 10 journals that publish PDB structures
###Code
ds.groupBy("journal_abbrev").count().sort(col("count").desc()).show(10,False)
###Output
[Stage 4:============================================> (160 + 8) / 200]
###Markdown
Filter out entries without a PubMed Id (-1 if PubMed Id is not available)
###Code
ds = ds.filter("pdbx_database_id_PubMed > 0")
print(f"Entries with PubMed Ids: {ds.count()}")
###Output
Entries with PubMed Ids: 153325
###Markdown
Show growth of papers in PubMed
###Code
print("PubMed Ids per year: ")
idsPerYear = ds.groupBy("year").count().sort(col("year").desc())
idsPerYear.show(10, False)
###Output
PubMed Ids per year:
###Markdown
Make scatter plot for growth of papers in PubMed
###Code
# Get year and publications as list
year = idsPerYear.select("year").collect()
publications = idsPerYear.select("count").collect()
# Make scatter plot with matplotlib
plt.scatter(year, publications)
plt.xlabel("year")
plt.ylabel("papers")
plt.title("Growth of papers in PubMed each year")
###Output
###Markdown
Terminate Spark
###Code
spark.stop()
###Output
_____no_output_____
###Markdown
PDB Meta Data Demo This demo shows how to query metadata from the PDB archive. This example queries the \_citation category. Each category represents a table, and fields represent database columns. [Available tables and columns](https://pdbj.org/mine-rdb-docs) Example data from 100D.cif * _citation.id primary * _citation.title Crystal structure of ... * _citation.journal_abbrev 'Nucleic Acids Res.' * _citation.journal_volume 22 * _citation.page_first 5466 * _citation.page_last 5476 * _citation.year 1994 * _citation.journal_id_ASTM NARHAD * _citation.country UK * _citation.journal_id_ISSN 0305-1048 * _citation.journal_id_CSD 0389 * _citation.book_publisher ? * _citation.pdbx_database_id_PubMed 7816639 * _citation.pdbx_database_id_DOI 10.1093/nar/22.24.5466 Data are provided through [Mine 2 SQL](https://pdbj.org/help/mine2-sql) Queries can be designed with the interactive [PDBj Mine 2 query service](https://pdbj.org/help/mine2-sql) Imports
###Code
from pyspark.sql import SparkSession
from pyspark.sql.functions import col
from mmtfPyspark.datasets import pdbjMineDataset
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Configure Spark
###Code
spark = SparkSession.builder.appName("PDBMetaDataDemo").getOrCreate()
###Output
_____no_output_____
###Markdown
Query PDBj Mine Query the following fields from the \_citation category using PDBj's Mine 2 web service: * journal_abbrev * pdbx_database_id_PubMed * year Note: mixed case column names must be quoted and escaped with \
###Code
sqlQuery = "SELECT pdbid, journal_abbrev, \"pdbx_database_id_PubMed\", year from citation WHERE id = 'primary'"
ds = pdbjMineDataset.get_dataset(sqlQuery)
###Output
_____no_output_____
###Markdown
Show first 10 results from query
###Code
ds.show(10, False)
###Output
+-----------+------------------+-----------------------+----+
|structureId|journal_abbrev |pdbx_database_id_PubMed|year|
+-----------+------------------+-----------------------+----+
|100D |Nucleic Acids Res.|7816639 |1994|
|101D |Biochemistry |7711020 |1995|
|101M |Thesis, Rice |-1 |1999|
|102D |J.Med.Chem. |7608897 |1995|
|102L |Nature |8429913 |1993|
|102M |Thesis, Rice |-1 |1999|
|103D |J.Mol.Biol. |7966337 |1994|
|103L |Nature |8429913 |1993|
|103M |Thesis, Rice |-1 |1999|
|104D |Biochemistry |7857947 |1995|
+-----------+------------------+-----------------------+----+
only showing top 10 rows
###Markdown
Filter out unpublished entries Published entries contain the word "published" in various upper/lower case combinations
###Code
ds = ds.filter("UPPER(journal_abbrev) NOT LIKE '%PUBLISHED%'")
###Output
_____no_output_____
###Markdown
Show the top 10 journals that publish PDB structures
###Code
ds.groupBy("journal_abbrev").count().sort(col("count").desc()).show(10,False)
###Output
+------------------------+-----+
|journal_abbrev |count|
+------------------------+-----+
|J.Biol.Chem. |10416|
|J.Mol.Biol. |10258|
|Biochemistry |10254|
|Proc.Natl.Acad.Sci.USA |5597 |
|Structure |5502 |
|Acta Crystallogr.,Sect.D|4172 |
|J.Med.Chem. |3985 |
|Nature |3575 |
|Nat Commun |3382 |
|Protein Sci. |2500 |
+------------------------+-----+
only showing top 10 rows
###Markdown
Filter out entries without a PubMed Id (-1 if PubMed Id is not available)
###Code
ds = ds.filter("pdbx_database_id_PubMed > 0")
print(f"Entries with PubMed Ids: {ds.count()}")
###Output
Entries with PubMed Ids: 124598
###Markdown
Show growth of papers in PubMed
###Code
print("PubMed Ids per year: ")
idsPerYear = ds.groupBy("year").count().sort(col("year").desc())
idsPerYear.show(10, False)
###Output
PubMed Ids per year:
+----+-----+
|year|count|
+----+-----+
|2019|2998 |
|2018|8747 |
|2017|9244 |
|2016|9057 |
|2015|8323 |
|2014|7577 |
|2013|7717 |
|2012|7208 |
|2011|6192 |
|2010|6062 |
+----+-----+
only showing top 10 rows
###Markdown
Make scatter plot for growth of papers in PubMed
###Code
# Get year and publications as list
year = idsPerYear.select("year").collect()
publications = idsPerYear.select("count").collect()
# Make scatter plot with matplotlib
plt.scatter(year, publications)
plt.xlabel("year")
plt.ylabel("papers")
plt.title("Growth of papers in PubMed each year")
###Output
_____no_output_____
###Markdown
Terminate Spark
###Code
spark.stop()
###Output
_____no_output_____ |
paper/Cable_equation/old/.ipynb_checkpoints/cable_data_10elements-checkpoint.ipynb | ###Markdown
Configuration of the library function: We select the library with a 2D spatial input. Note that the max differential order has been pre-determined here out of convenience. So, for poly_order 1 the library contains the following 12 terms:* [$1, u_x, u_{xx}, u_{xxx}, u, u u_{x}, u u_{xx}, u u_{xxx}, u^2, u^2 u_{x}, u^2 u_{xx}, u^2 u_{xxx}$]
###Code
library = Library1D(poly_order=1, diff_order=2)
###Output
_____no_output_____
###Markdown
Configuration of the sparsity estimator and sparsity scheduler used. In this case we use the most basic threshold-based Lasso estimator and a scheduler that assesses the validation loss after a given patience. If that value is smaller than 1e-5, the algorithm is considered converged.
###Code
estimator = Threshold(0.1)
sparsity_scheduler = TrainTestPeriodic(periodicity=50, patience=10, delta=1e-5)
###Output
_____no_output_____
###Markdown
Configuration of the constraint
###Code
constraint = LeastSquares()
# Configuration of the sparsity scheduler
###Output
_____no_output_____
###Markdown
Now we instantiate the model and select the optimizer
###Code
model = DeepMoD(network, library, estimator, constraint)
# Defining optimizer
optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.99), amsgrad=True, lr=1e-3)
###Output
_____no_output_____
###Markdown
Run DeepMoD We can now run DeepMoD using all the options we have set and the training data:* The directory where the tensorboard file is written (log_dir)* The ratio of train/test set used (split)* The maximum number of iterations performed (max_iterations)* The absolute change in L1 norm considered converged (delta)* The amount of epochs over which the absolute change in L1 norm is calculated (patience)
###Code
train(model, X_train, y_train, optimizer,sparsity_scheduler, log_dir='runs/Akshay_small/', split=0.8, max_iterations=100000, delta=0.1e-6, patience=100)
# Configuring model
network = NN(2, [30, 30, 30, 30, 30], 1) # Function approximator
library = Library1D(poly_order=1, diff_order=2) # Library function
estimator = Threshold(0.01) # Sparse estimator
constraint = LeastSquares() # How to constrain
model = DeepMoD(network, library, estimator, constraint) # Putting it all in the model
# Running model
sparsity_scheduler = Periodic(periodicity=100) # Defining when to apply sparsity
optimizer = torch.optim.Adam(model.parameters(), betas=(0.99, 0.99), amsgrad=True) # Defining optimizer
train(model, X_train, y_train, optimizer, sparsity_scheduler,delta=0.002) # Running
train(model, X_train, y_train, optimizer, sparsity_scheduler,delta=0.0001, max_iterations = 100000) # Running
train(model, X_train, y_train, optimizer, sparsity_scheduler,delta=0.0001, max_iterations = 100000) # Running
train(model, X_train, y_train, optimizer, sparsity_scheduler,delta=0.0001, max_iterations = 100000) # Running
train(model, X_train, y_train, optimizer, sparsity_scheduler,delta=0.0001, max_iterations = 100000) # Running
###Output
| Iteration | Progress | Time remaining | Loss | MSE | Reg | L1 norm |
86050 86.05% 617s -1.50e+01 8.71e-05 1.56e-06 2.75e+01 |
Metody Numeryczne 2021/13. Rozwiazywanie rownan rozniczkowych/part_1.ipynb | ###Markdown
Numerical Methods Solving ordinary differential equations dr hab. inż. Jerzy Baranowski, Prof. AGH
###Code
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import matplotlib as mpl
plt.style.context('seaborn-white')
mpl.rcParams['figure.dpi']= 200
def euler(rhs, x0, t0, tk, n_steps):
    t = np.linspace(t0, tk, n_steps)
    h = t[1] - t[0]
    x = np.zeros(n_steps)
    x[0] = x0
    for k in range(1, n_steps):
        x[k] = x[k-1] + h * rhs(x[k-1], t[k-1])  # explicit (forward) Euler step
    return t, x
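# Quick illustrative check (not from the original notebook): integrate
# dx/dt = -x with x(0) = 1 and compare against the exact solution exp(-t).
t, x = euler(lambda x, t: -x, 1.0, 0.0, 5.0, 50)
plt.plot(t, x, label='Euler')
plt.plot(t, np.exp(-t), '--', label='exact')
plt.legend()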
###Output
_____no_output_____ |
CNN Architectures/2014_VGG.ipynb | ###Markdown
Model Implementation Below is the architecture of VGG
###Code
from tensorflow import keras
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout, Flatten, Conv2D, MaxPooling2D, BatchNormalization
VGG = Sequential()
# The first part has two conv64 with a pooling layer
VGG.add(Conv2D(input_shape=(224,224,3),filters=64,kernel_size=(3,3),padding="same", activation="relu"))
VGG.add(Conv2D(filters=64, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
# The second part has two conv128 with a pooling layer
VGG.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=128, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
# The Third part has three conv256 with a pooling layer
VGG.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=256, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
# The fourth part has three conv512 with a pooling layer
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
# The fifth part has three conv512 with a pooling layer
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(Conv2D(filters=512, kernel_size=(3,3), padding="same", activation="relu"))
VGG.add(MaxPooling2D(pool_size=(2,2),strides=(2,2)))
# At last, We have Two Dense layers with 4096 nodes and an output layer
VGG.add(Flatten())
VGG.add(Dense(units=4096, activation="relu"))
VGG.add(Dense(units=4096, activation="relu"))
VGG.add(Dense(units=1000, activation="softmax"))
VGG.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 224, 224, 64) 1792
_________________________________________________________________
conv2d_1 (Conv2D) (None, 224, 224, 64) 36928
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 112, 112, 64) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 112, 112, 128) 73856
_________________________________________________________________
conv2d_3 (Conv2D) (None, 112, 112, 128) 147584
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 56, 56, 128) 0
_________________________________________________________________
conv2d_4 (Conv2D) (None, 56, 56, 256) 295168
_________________________________________________________________
conv2d_5 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
conv2d_6 (Conv2D) (None, 56, 56, 256) 590080
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 28, 28, 256) 0
_________________________________________________________________
conv2d_7 (Conv2D) (None, 28, 28, 512) 1180160
_________________________________________________________________
conv2d_8 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
conv2d_9 (Conv2D) (None, 28, 28, 512) 2359808
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 14, 14, 512) 0
_________________________________________________________________
conv2d_10 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv2d_11 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
conv2d_12 (Conv2D) (None, 14, 14, 512) 2359808
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 512) 0
_________________________________________________________________
flatten (Flatten) (None, 25088) 0
_________________________________________________________________
dense (Dense) (None, 4096) 102764544
_________________________________________________________________
dense_1 (Dense) (None, 4096) 16781312
_________________________________________________________________
dense_2 (Dense) (None, 1000) 4097000
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________
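###Markdown
With the architecture defined, the model still has to be compiled before it can be trained. The cell below is only a sketch: SGD with momentum mirrors the original VGG training setup, but the exact learning rate, loss and metric here are assumptions for illustration, and no training data is prepared in this notebook.
###Code
# Illustrative compile step (hypothetical hyperparameters).
VGG.compile(optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
            loss='categorical_crossentropy',
            metrics=['accuracy'])
###Output
_____no_output_____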
|
nbs/04_text.inference.ipynb | ###Markdown
text.inference> Provides inference scripts specifically for text modules
###Code
#hide
from nbdev.showdoc import *
#export
from fastai2.text.all import *
from fastinference.inference.inference import _fully_decode, _decode_loss
import matplotlib.cm as cm
import html
from IPython.display import display, HTML
#export
@patch
def get_preds(x:LMLearner, ds_idx=1, dl=None, raw_outs=False, decoded_loss=True, fully_decoded=False, concat_dim=0,
**kwargs):
"Get predictions with possible decoding"
inps, outs, dec_out, raw = [], [], [], []
if dl is None: dl = x.dls[ds_idx].new(shuffle=False, drop_last=False)
x.model.eval()
for batch in dl:
with torch.no_grad():
inps.append(batch[:x.dls.n_inp])
if decoded_loss or fully_decoded:
out = x.model(*batch[:x.dls.n_inp])[0]
raw.append(out)
dec_out.append(x.loss_func.decodes(out))
else:
raw.append(x.model(*batch[:x.dls.n_inp])[0])
raw = torch.cat(raw, dim=concat_dim).cpu().numpy()
if decoded_loss or fully_decoded:
dec_out = torch.cat(dec_out, dim=0)
if not raw_outs:
try: outs.insert(0, x.loss_func.activation(tensor(raw)).numpy())
except: outs.insert(0, dec_out)
else:
outs.insert(0, raw)
if fully_decoded: outs = _fully_decode(x.dls, inps, outs, dec_out, False)
if decoded_loss: outs = _decode_loss(x.dls.categorize.vocab, dec_out, outs)
return outs
show_doc(LMLearner.get_preds)
#export
@patch
def get_preds(x:TextLearner, ds_idx=1, dl=None, raw_outs=False, decoded_loss=True, fully_decoded=False,
**kwargs):
"Get predictions with possible decoding"
inps, outs, dec_out, raw = [], [], [], []
if dl is None: dl = x.dls[ds_idx].new(shuffle=False, drop_last=False)
x.model.eval()
for batch in dl:
with torch.no_grad():
inps.append(batch[:x.dls.n_inp])
if decoded_loss or fully_decoded:
out = x.model(*batch[:x.dls.n_inp])[0]
raw.append(out)
dec_out.append(x.loss_func.decodes(out))
else:
raw.append(x.model(*batch[:x.dls.n_inp])[0])
raw = torch.cat(raw, dim=0).cpu().numpy()
if decoded_loss or fully_decoded:
dec_out = torch.cat(dec_out, dim=0)
if not raw_outs:
try: outs.insert(0, x.loss_func.activation(tensor(raw)).numpy())
except: outs.insert(0, dec_out)
else:
outs.insert(0, raw)
if fully_decoded: outs = _fully_decode(x.dls, inps, outs, dec_out, False)
if decoded_loss: outs = _decode_loss(x.dls.categorize.vocab, dec_out, outs)
return outs
show_doc(TextLearner.get_preds)
#export
@patch
def predict(x:LMLearner, text, n_words=1, no_unk=True, temperature=1., min_p=None,
decoder=decode_spec_tokens, only_last_word=False):
"Predict `n_words` from `text`"
x.model.reset()
idxs = idxs_all = x.dls.test_dl([text]).items[0].to(x.dls.device)
unk_idx = x.dls.vocab.index(UNK)
for _ in (range(n_words)):
preds = x.get_preds(dl=[(idxs[None],)], decoded_loss=False)
res = preds[0][0][-1]
if no_unk: res[unk_idx] = 0.
if min_p is not None:
if (res >= min_p).float().sum() == 0:
warn(f"There is no item with probability >= {min_p}, try a lower value.")
else: res[res < min_p] = 0.
if temperature != 1.: res.pow_(1 / temperature)
res = tensor(res)
idx = torch.multinomial(res, 1).item()
idxs = idxs_all = torch.cat([idxs_all, idxs.new([idx])])
if only_last_word: idxs = idxs[-1][None]
decoder=decode_spec_tokens
num = x.dls.train_ds.numericalize
tokens = [num.vocab[i] for i in idxs_all if num.vocab[i] not in [BOS, PAD]]
sep = x.dls.train_ds.tokenizer[-1].sep
return sep.join(decoder(tokens))
show_doc(LMLearner.predict)
path = untar_data(URLs.IMDB_SAMPLE)
df = pd.read_csv(path/'texts.csv')
data_lm = TextDataLoaders.from_csv(path, 'texts.csv', text_col='text', is_lm=True)
lm_learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3)
lm_learn.save_encoder('fine_tuned')
blocks = (TextBlock.from_df('text', seq_len=data_lm.seq_len, vocab=data_lm.vocab), CategoryBlock())
imdb_clas = DataBlock(blocks=blocks,
get_x=ColReader('text'),
get_y=ColReader('label'),
splitter=ColSplitter())
dls = imdb_clas.dataloaders(df, bs=64)
lm_learn.predict('my name is', n_words=2)
learn = text_classifier_learner(dls, AWD_LSTM, drop_mult=0.5, metrics=accuracy)
learn.path = path
learn.load_encoder('fine_tuned');
dl = learn.dls.test_dl(df.iloc[:1])
name, probs, full_dec = learn.get_preds(dl=dl, fully_decoded=True)
name, probs
full_dec
#export
def _value2rgba(x, cmap=cm.RdYlGn, alpha_mult=1.0):
"Convert a value `x` from 0 to 1 (inclusive) to an RGBA tuple according to `cmap` times transparency `alpha_mult`."
c = cmap(x)
rgb = (np.array(c[:-1]) * 255).astype(int)
a = c[-1] * alpha_mult
return tuple(rgb.tolist() + [a])
#export
def _eval_dropouts(mod):
module_name = mod.__class__.__name__
if 'Dropout' in module_name or 'BatchNorm' in module_name: mod.training = False
for module in mod.children(): _eval_dropouts(module)
#export
def _piece_attn_html(pieces, attns, sep=' ', **kwargs):
html_code,spans = ['<span style="font-family: monospace;">'], []
for p, a in zip(pieces, attns):
p = html.escape(p)
c = str(_value2rgba(a, alpha_mult=0.5, **kwargs))
spans.append(f'<span title="{a:.3f}" style="background-color: rgba{c};">{p}</span>')
html_code.append(sep.join(spans))
html_code.append('</span>')
return ''.join(html_code)
def _show_piece_attn(*args, **kwargs):
from IPython.display import display, HTML
display(HTML(_piece_attn_html(*args, **kwargs)))
#export
def _intrinsic_attention(learn, text, class_id=None):
"Calculate the intrinsic attention of the input w.r.t to an output `class_id`, or the classification given by the model if `None`."
learn.model.train()
_eval_dropouts(learn.model)
learn.model.zero_grad()
learn.model.reset()
dl = learn.dls.test_dl([text])
batch = next(iter(dl))[0]
emb = learn.model[0].module.encoder(batch).detach().requires_grad_(True)
lstm = learn.model[0].module(emb, True)
learn.model.eval()
cl = learn.model[1]((lstm, torch.zeros_like(batch).bool(),))[0].softmax(dim=-1)
if class_id is None: class_id = cl.argmax()
cl[0][class_id].backward()
attn = emb.squeeze().abs().sum(dim=-1)
attn /= attn.max()
tok, _ = learn.dls.decode_batch((*tuplify(batch), *tuplify(cl)))[0]
return tok, attn
#export
@patch
def intrinsic_attention(x:TextLearner, text:str, class_id:int=None, **kwargs):
"Shows the `intrinsic attention for `text`, optional `class_id`"
if isinstance(x, LMLearner): raise Exception("Language models are not supported")
text, attn = _intrinsic_attention(x, text, class_id)
return _show_piece_attn(text.split(), to_np(attn), **kwargs)
show_doc(TextLearner.intrinsic_attention)
learn.intrinsic_attention('Batman is rich')
###Output
_____no_output_____ |
examples_ja/tutorial014_set_packing.ipynb | ###Markdown
The Set Packing problem This problem is similar to the [Exact Cover](tutorial007_exact_cover.ipynb) problem. Consider a set U of natural numbers and several groups $V_{1}, V_{2}, \ldots, V_{N}$ containing those numbers; a single number may belong to more than one group. The Exact Cover problem asked us to pick some of the groups $V_{i}$ so that no number appears more than once and the picked groups together give exactly the numbers in U. The Set Packing problem instead asks us to pick the groups so that the number of selected groups is as large as possible. Preparation Let us solve this using wildqat. If wildqat is not installed, install it for your environment as follows:```bash pip install wildqat ``` Import the required libraries and instantiate a wildqat object.
###Code
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
import wildqat as wq
###Output
_____no_output_____
###Markdown
Building the QUBO The cost function of the problem we want to solve is $E = E_{A} + E_{B}$, where $E_{A}$ and $E_{B}$ are defined as$E_{A} = A \sum_{i,j: V_{i} \cap V_{j} \neq \emptyset} x_{i} x_{j}$$E_{B} = - B \sum_{i} x_{i}$First, $E_{A}$ imposes a penalty when two selected groups $V_{i}, V_{j}$ with $i \neq j$ contain the same natural number. $E_{B}$ makes the cost lower the more groups are selected. For the coefficients $A, B$, we want avoiding overlaps (one overlap raises the cost by $A$) to take priority over selecting one more group (which lowers the cost by $B$), so we need $A > B$. In code this looks as follows.
###Code
A = 1.0
B = 0.9
def get_qubo(V):
Q = np.zeros( (len(V), len(V)) )
for i in range(len(V)):
for j in range(i, len(V)):
if i == j:
Q[i][j] += -B
elif len(V[i]) + len(V[j]) != len( set(V[i]) | set(V[j]) ):
Q[i][j] += A
return Q
###Output
_____no_output_____
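###Markdown
Before running the annealer it can help to inspect the QUBO on a tiny, made-up instance: overlapping groups receive the off-diagonal penalty $A$, while every group gets the diagonal reward $-B$. The toy `toy_V` below is purely illustrative and unrelated to the problem solved later.
###Code
# Toy check: V[0] and V[1] overlap (both contain 2), V[2] is disjoint from both.
toy_V = [[1, 2], [2, 3], [4]]
print(get_qubo(toy_V))
###Output
_____no_output_____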
###Markdown
Let us also define a function to display the results.
###Code
def show_answer(list_x, energies = None, show_graph = False):
print("Result x:", list_x)
text = ""
for i in range(len(list_x)):
if(list_x[i]):
text += str(V[i])
print("Picked {} group(s): {}".format(sum(list_x), text))
if energies is not None:
print("Energy:", a.E[-1])
if show_graph:
plt.plot(a.E)
plt.show()
###Output
_____no_output_____
###Markdown
Let us run it as follows.
###Code
V = [ [1,2], [3,4,5,6], [7,8,9,10], [1,3,5], [10], [7,9], [2,4,6,8], [1,2,3,4,5,6,8,10] ]
for i in range(5):
print("---{}回目".format(i+1))
a = wq.opt()
a.qubo = get_qubo(V)
answer = a.sa()
show_answer(answer, a.E)
###Output
---1回目
1.804011583328247
Result x: [0, 0, 0, 1, 1, 1, 1, 0]
Picked 4 group(s): [1, 3, 5][10][7, 9][2, 4, 6, 8]
Energy: -3.6000000000000005
---2回目
1.720658779144287
Result x: [1, 1, 0, 0, 1, 1, 0, 0]
Picked 4 group(s): [1, 2][3, 4, 5, 6][10][7, 9]
Energy: -3.5999999999999996
---3回目
1.751054048538208
Result x: [0, 0, 0, 1, 1, 1, 1, 0]
Picked 4 group(s): [1, 3, 5][10][7, 9][2, 4, 6, 8]
Energy: -3.6000000000000005
---4回目
2.076296091079712
Result x: [0, 0, 0, 1, 1, 1, 1, 0]
Picked 4 group(s): [1, 3, 5][10][7, 9][2, 4, 6, 8]
Energy: -3.6000000000000005
---5回目
2.0937278270721436
Result x: [1, 1, 0, 0, 1, 1, 0, 0]
Picked 4 group(s): [1, 2][3, 4, 5, 6][10][7, 9]
Energy: -3.5999999999999996
|
jnotebook/mlperf-inference-v0.5/results.ipynb | ###Markdown
[MLPerf Inference Results v0.5](https://github.com/mlperf/inference/tree/master/v0.5) Automatic results table generation (c) [dividiti](http://dividiti.com/) Includes
###Code
import os
import re
import json
from pprint import pprint
import IPython as ip
import pandas as pd
import numpy as np
import matplotlib as mp
# import seaborn as sb
print ('IPython version: %s' % ip.__version__)
print ('Pandas version: %s' % pd.__version__)
print ('NumPy version: %s' % np.__version__)
print ('Matplotlib version: %s' % mp.__version__)
# print ('Seaborn version: %s' % sb.__version__)
import matplotlib.pyplot as plt
from matplotlib import cm
%matplotlib inline
default_dpi = 300
default_fontsize = 12
mp.rcParams['figure.dpi'] = default_dpi
mp.rcParams['font.size'] = default_fontsize
from IPython.display import Image, display
def display_in_full(df):
pd.options.display.max_columns = len(df.columns)
pd.options.display.max_rows = len(df.index)
display(df)
###Output
_____no_output_____
###Markdown
Definitions Path to the repository with results
###Code
# Clone the results directory:
# git clone https://github.com/mlperf/inference_results_v0.5 <results_path>
# or
# git clone https://github.com/dividiti/inference_results_v0.5 <results_path>
results_path = '/home/anton/projects/mlperf/inference_results_v0.5_dividiti'
#results_path = '/home/anton/projects/mlperf/inference_results_v0.5_plus'
###Output
_____no_output_____
###Markdown
Path to the cache
###Code
cache_name = 'mlperf-inference-v0.5-results.zip'
cache_compression = 'zip'
cache_protocol = 2 # Supported since Python 2.3
import ck.kernel as ck
repo_uoa = 'ck-mlperf'
module_uoa = 'module'
data_uoa = 'mlperf.inference'
r = ck.access({'action':'find', 'repo_uoa':repo_uoa, 'module_uoa':module_uoa, 'data_uoa':data_uoa})
if r['return']>0:
print('Error: %s' % r['error'])
exit(1)
cache_path = os.path.join(r['path'], cache_name)
cache_path
###Output
_____no_output_____
###Markdown
Divisions
###Code
divisions = [ 'closed', 'open' ]
###Output
_____no_output_____
###Markdown
Maps for DataFrame construction
###Code
# Lowercase or camelcase or camelcase with space to camelcase.
scenario_to_str = {
# SingleStream.
'singlestream' : 'SingleStream',
'SingleStream' : 'SingleStream',
'Single Stream' : 'SingleStream',
# MultiStream.
'multistream' : 'MultiStream',
'MultiStream' : 'MultiStream',
'Multi Stream' : 'MultiStream',
# Server.
'server' : 'Server',
'Server' : 'Server',
# Offline.
'offline' : 'Offline',
'Offline' : 'Offline',
}
division_to_str = {
# Open.
'open' : 'Open',
'Open' : 'Open',
# Closed.
'closed' : 'Closed',
'Closed' : 'Closed'
}
# dividiti-specific.
system_id_to_processor = {
'firefly' : 'Rockchip RK3399',
'hikey960' : 'HiSilicon Kirin960',
'mate10pro' : 'HiSilicon Kirin970',
'rpi4' : 'Broadcom BCM2711B0',
}
accelerator_name_to_accelerator = {
'NVIDIA Tesla T4': 'NVIDIA Tesla T4',
'Nvidia Tesla T4': 'NVIDIA Tesla T4',
'Tesla T4': 'NVIDIA Tesla T4',
'Nvidia Tesla V100 SXM3': 'NVIDIA Tesla V100 SXM3',
'tpu-v3.8': 'Google TPU v3-8', # NB: 8 TPU v3?
'HanGuang 800': 'Alibaba HanGuang 800',
'Goya': 'Habana Goya',
}
###Output
_____no_output_____
###Markdown
Metrics for DataFrame construction
###Code
# Performance metrics (Stream in ms; MultiStream in #streams; Server in QPS; Offline in inputs/s).
performance_columns = [
'P_{}_{}'.format(task, scenario)
for task in ['IC1','IC2','OD1','OD2','NMT']
for scenario in ['SS','MS','S','O']
]
# Accuracy metrics: Image Classification in Top1, %; Object Detection in mAP, %; Machine Translation in BLEU.
accuracy_columns = [
'A_{}_{}'.format(task, scenario)
for task in ['IC1','IC2','OD1','OD2','NMT']
for scenario in ['SS','MS','S','O']
]
# Score columns.
score_columns = performance_columns + accuracy_columns
###Output
_____no_output_____
###Markdown
Non-imagenet benchmarks
###Code
non_imagenet_benchmarks = {
# Non-ImageNet benchmarks from the closed division.
'ssd-small': {
"name" : "SSD-MobileNet-v1",
"width" : 300,
"height": 300,
},
'ssd-large': {
"name" : "SSD-ResNet34",
"width" : 1200,
"height": 1200,
},
'gnmt' : {
"name" : "GNMT",
"width" : -1,
"height": -1,
},
# Non-ImageNet benchmarks from the open division.
'rcnn-nas-lowproposals' : {
"name" : "Faster-RCNN-NAS lowproposals",
"url" : "http://download.tensorflow.org/models/object_detection/faster_rcnn_nas_lowproposals_coco_2018_01_28.tar.gz",
"width" : 1200,
"height" : 1200,
},
'rcnn-resnet50-lowproposals' : {
"name" : "Faster-RCNN-ResNet50 lowproposals",
"url" : "http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet50_lowproposals_coco_2018_01_28.tar.gz",
"width" : 1024,
"height" : 600,
},
'rcnn-resnet101-lowproposals' : {
"name" : "Faster-RCNN-ResNet101 lowproposals",
"url" : "http://download.tensorflow.org/models/object_detection/faster_rcnn_resnet101_lowproposals_coco_2018_01_28.tar.gz",
"width" : 1024,
"height" : 600,
},
'rcnn-inception-resnet-v2-lowproposals' : {
"name" : "Faster-RCNN-Inception-ResNet-v2 lowproposals",
"url" : "http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_resnet_v2_atrous_lowproposals_coco_2018_01_28.tar.gz",
"width" : 1024,
"height" : 600,
},
'rcnn-inception-v2' : {
"name" : "Faster-RCNN Inception-v2",
"url" : "http://download.tensorflow.org/models/object_detection/faster_rcnn_inception_v2_coco_2018_01_28.tar.gz",
"width" : 1024,
"height" : 600,
},
'ssd-inception-v2' : {
"name" : "SSD-Inception-v2",
"url" : "http://download.tensorflow.org/models/object_detection/ssd_inception_v2_coco_2018_01_28.tar.gz",
"width" : 300,
"height" : 300,
},
'ssd-mobilenet-v1-quantized-mlperf' : {
"name" : "SSD-MobileNet-v1",
"url" : "https://zenodo.org/record/3361502/files/ssd_mobilenet_v1_coco_2018_01_28.tar.gz",
"width" : 300,
"height" : 300,
"provenance" : "Google",
},
'ssd-mobilenet-v1-non-quantized-mlperf' : {
"name" : "SSD-MobileNet-v1 quantized",
"url" : "https://zenodo.org/record/3252084/files/mobilenet_v1_ssd_8bit_finetuned.tar.gz",
"width" : 300,
"height" : 300,
"provenance" : "Habana"
},
'ssd-mobilenet-v1-fpn' : {
"name" : "SSD-MobileNet-v1 FPN SBP",
"url" : "http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz",
"width" : 640,
"height" : 640,
},
'ssd-resnet50-fpn' : {
"name" : "SSD-ResNet50-v1 FPN SBP",
"url" : "http://download.tensorflow.org/models/object_detection/ssd_resnet50_v1_fpn_shared_box_predictor_640x640_coco14_sync_2018_07_03.tar.gz",
"width" : 640,
"height" : 640,
},
'ssdlite-mobilenet-v2' : {
"name" : "SSDLite-MobileNet-v2",
"url" : "http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz",
"width" : 300,
"height" : 300,
},
'yolo-v3' : {
"name" : "YOLO-v3",
"url" : "https://zenodo.org/record/3386327/files/yolo_v3_coco.tar.gz",
"width" : 416,
"height" : 416,
"provenance" : "https://github.com/YunYang1994/tensorflow-yolov3/"
}
}
###Output
_____no_output_____
###Markdown
Code
###Code
# We use two modes: the 'spreadsheet' mode tries to mimic the official submission table as much as possible;
# the 'dashboard' mode uses a more appropriate layout for the CK dashboard.
def get_data(results_path=results_path, mode='spreadsheet'):
dfs = []
# FOR EACH division.
for division in divisions:
#if division == 'open': continue # skip
# FOR EACH submitter.
submitters_dir = os.path.join(results_path, division)
submitters = [ fn for fn in os.listdir(submitters_dir) if os.path.isdir(os.path.join(submitters_dir, fn)) ]
for submitter in submitters:
# Selectively filter out submitters.
#all_submitters_closed = [ 'Alibaba', 'CentaurTechnology', 'DellEMC', 'dividiti', 'FuriosaAI', 'Google', 'Habana', 'Hailo', 'Intel', 'NVIDIA', 'Qualcomm', 'Tencent' ]
#if division == 'closed' and submitter not in all_submitters_closed: continue
#all_submitters_open = [ 'dividiti', 'Habana', 'Inspur', 'NVIDIA', 'Qualcomm' ]
#if division == 'open' and submitter not in all_submitters_open: continue
# FOR EACH system.
results_dir = os.path.join(submitters_dir, submitter, 'results')
systems = [ fn for fn in os.listdir(results_dir) if os.path.isdir(os.path.join(results_dir, fn)) ]
for system in systems:
system_dir = os.path.join(results_dir, system)
system_json_name = system + '.json'
system_json_path = os.path.join(submitters_dir, submitter, 'systems', system_json_name)
with open(system_json_path) as system_json_file:
system_json = json.load(system_json_file)
# Category.
if system_json['status'] in [ 'available', 'Available' ]:
category = 'Available'
elif system_json['status'] in [ 'preview', 'Preview' ]:
category = 'Preview'
elif system_json['status'] in [ 'rdi', 'RDI', 'rdo', 'RDO' ]:
category = 'Research, Development, Other'
elif system_json['status'] in [ 'Unofficial', 'unofficial' ]:
category = 'Unofficial'
else:
raise Exception("Unsupported category '%s'!" % (system_json['status']))
# System details.
system_name = system_json['system_name']
system_list = system.split('-')
system_id = system_list[0]
# Processor (CPU).
processor = system_id_to_processor.get(system_id, system_json.get('host_processor_model_name', 'N/A'))
processor_num = int(system_json.get('host_processors_per_node', 0))
# Accelerator.
# Tencent: https://github.com/mlperf/submissions_inference_0_5/issues/285
accelerator_name = system_json.get('accelerator_model_name', 'N/A')
accelerator_num = int(system_json.get('accelerators_per_node', 0))
accelerator = accelerator_name_to_accelerator.get(accelerator_name, accelerator_name)
# Software (framework).
software = system_json['framework']
# Default form factors and notes.
# NB: Using space rather than empty string turns out better for dashboard.
ff_m = ff_d = ff_s = ff_e = ' '
notes = ' '
# Submitter-specific form factors and notes.
submitter_str = submitter
if submitter == 'dividiti':
# Form factors.
if system_id in [ 'hikey960', 'firefly', 'rpi4' ]: ff_e = 'x'
if system_id in [ 'mate10pro', 'hikey960' ]: ff_m = 'x'
if system_id in [ 'velociti' ]: ff_d = 'x'
# Notes.
if system_id == 'hikey960':
notes = 'Mobile chip in embedded form factor (development board).'
if division == 'open':
# Object Detection is collaboration between dividiti and Politecnico di Milano.
if system_id == 'velociti': submitter_str = 'dividiti + PoliMi'
if system == 'velociti-tensorflow-v1.14-cpu':
notes = 'In the Other category, since this Intel CPU is no longer available (end-of-life).'
elif submitter == 'Alibaba':
ff_s = 'x'
if system_id == 'alibaba_cloud_t4':
notes = 'ECC off'
elif submitter == 'DellEMC':
ff_s = 'x'
if system_id == 'R740_T4x4_tensorrt':
notes = 'ECC off'
elif submitter == 'Google':
ff_s = 'x'
system_name = '{:d}x Cloud {:s}'.format(int(accelerator_num/8), accelerator)
elif submitter == 'Habana':
ff_d = ff_s = ff_e = 'x'
if division == 'open':
if system_id == 'Goya_fast_latency':
notes = 'Low latency results ...'
if system_id == 'Goya_med_latency':
notes = 'Medium latency results ...'
elif submitter == 'Intel':
if system_id == 'ICL':
ff_m = 'x'
else:
ff_s = 'x'
elif submitter == 'NVIDIA':
if system_id == 'Xavier':
ff_e = 'x'
if division == 'closed':
notes = 'GPU and both DLAs are used in Offline and MultiStream'
elif system_id == 'TitanRTXx4':
ff_e = ff_s = ff_d = 'x'
elif system_id == 'T4x8':
ff_e = ff_s = 'x'
elif system_id == 'T4x20':
ff_s = 'x'
else:
raise Exception("Unsupported NVIDIA system '%s'!" % system_id)
elif submitter == 'Qualcomm':
ff_m = 'x'
if division == 'open':
notes = 'Median latency. MultiStream: Both Hexagon Vector Extensions (HVX) and Hexagon Tensor Accelerator (HTA).'
if division == 'closed':
notes = 'Hexagon Vector Extensions being used.'
elif submitter == 'Tencent':
ff_s = 'x'
# Preview only.
elif submitter == 'CentaurTechnology':
ff_d = ff_s = ff_e = 'x'
elif submitter == 'Hailo':
ff_d = ff_e = 'x'
# RDO only.
elif submitter == 'FuriosaAI':
ff_d = ff_s = ff_e = 'x'
# Open only.
elif submitter == 'Inspur':
ff_s = 'x'
else:
raise Exception("Unsupported division/submitter combination '%s'/'%s'!" % (division, submitter))
# Create DataFrame for each row of the final table based on the division, submitter and system.
data = [{
#
'ID' : '-', # TODO: Fill in later.
'Submitter' : submitter_str,
'System' : system_name,
'Benchmark' : '-', # TODO: Fill in later.
# Processor.
'Processor' : processor,
'Processor #' : processor_num,
# Accelerator.
'Accelerator' : accelerator,
'Accelerator #' : accelerator_num if accelerator_num != '0' else '',
# Software.
'Software' : software,
# Form factor.
'FF_M' : ff_m,
'FF_D' : ff_d,
'FF_S' : ff_s,
'FF_E' : ff_e,
# Details. Code. Notes.
'Details' : 'https://github.com/mlperf/inference_results_v0.5/blob/master/{}/{}/systems/{}'. \
format(division, submitter, system_json_name),
'Code' : 'https://github.com/mlperf/inference_results_v0.5/tree/master/{}/{}/code'. \
format(division, submitter),
'Notes' : notes,
# Misc.
'Division' : division_to_str.get(division, division),
'Category' : category,
'Task' : '-', # TODO: Fill in later.
'Scenario' : '-', # TODO: Fill in later.
}]
# NB: 'Accelerator #' is important to sort Google's submissions correctly (not lexicographically).
index = [
'Division', 'Category', 'Submitter', 'Accelerator #', 'System', 'Software', 'Benchmark' #, 'Task', 'Scenario'
]
# Reset all scores.
if mode == 'spreadsheet':
data[0].update({ score : '' for score in score_columns })
# FOR EACH benchmark.
benchmarks = [ fn for fn in os.listdir(system_dir) if os.path.isdir(os.path.join(system_dir, fn)) ]
for (benchmark, benchmark_idx) in zip(benchmarks, range(len(benchmarks))):
is_last_benchmark = (benchmark_idx == len(benchmarks) - 1)
# Tencent and Inspur use resnet50.
benchmark_name = 'resnet' if benchmark == 'resnet50' else benchmark
# Benchmark (with notes).
benchmark_dict = non_imagenet_benchmarks.get(benchmark_name)
if benchmark_dict:
width = benchmark_dict['width']
height = benchmark_dict['height']
else:
if benchmark_name.endswith('96'):
side = 96
elif benchmark_name.endswith('128'):
side = 128
elif benchmark_name.endswith('160'):
side = 160
elif benchmark_name.endswith('192'):
side = 192
else:
side = 224
width = side
height = side
if width != -1 and height != -1:
# Benchmark (width x height).
benchmark_with_notes = '{} ({}x{})'.format(benchmark_name, width, height)
else:
# GNMT.
benchmark_with_notes = benchmark_name
# TODO: Rename to 'Model used, if not Closed Division default' for Open.
data[0]['Benchmark'] = benchmark_with_notes
# FOR EACH scenario.
benchmark_dir = os.path.join(system_dir, benchmark)
scenarios = [ fn for fn in os.listdir(benchmark_dir) if os.path.isdir(os.path.join(benchmark_dir, fn)) ]
for scenario in scenarios:
if mode != 'spreadsheet':
data[0].update({ score : '' for score in score_columns })
scenario_str = scenario_to_str.get(scenario,'')
if scenario_str not in [ 'SingleStream', 'MultiStream', 'Server', 'Offline' ]: continue
experiment_dir = os.path.join(benchmark_dir, scenario)
# Extract accuracy.
if submitter == 'Hailo' and benchmark == 'ssd-small':
# https://github.com/mlperf/submissions_inference_0_5/issues/287
task = 'OD'
accuracy = 21.920 # ssd-small/SingleStream/accuracy/results.json
else:
accuracy_dir = os.path.join(experiment_dir, 'accuracy')
with open(os.path.join(accuracy_dir, 'accuracy.txt'), 'r') as accuracy_file:
accuracy_txt = accuracy_file.readlines()
accuracy_line = accuracy_txt[-1]
if accuracy_line.startswith('mAP'):
task = 'OD'
match = re.match('mAP\=([\d\.]+)\%', accuracy_line)
accuracy = float(match.group(1))
elif accuracy_line.startswith('accuracy'):
task = 'IC'
match = re.match('accuracy=(.+)%, good=(\d+), total=(\d+)', accuracy_line)
accuracy = float(match.group(1))
elif accuracy_line.startswith('BLEU'):
task = 'MT'
match = re.match('BLEU:\s*(.+)', accuracy_line)
accuracy = float(match.group(1))
else:
pprint(accuracy_txt)
raise Exception('Failed to extract accuracy information from "%s"' % accuracy_line)
data[0]['Task'] = { 'IC': 'Image Classification', 'OD': 'Object Detection', 'MT': 'Machine Translation' }.get(task)
if scenario_str in [ 'SingleStream', 'MultiStream', 'Offline', 'Server' ]:
data[0]['Scenario'] = scenario_to_str.get(scenario, scenario)
if submitter == 'Tencent' and scenario_str in [ 'SingleStream', 'Offline' ]:
# https://github.com/mlperf/submissions_inference_0_5/issues/286
performance_dir = os.path.join(experiment_dir, 'performance')
else:
# TODO: Iterate over 5 runs for Server.
performance_dir = os.path.join(experiment_dir, 'performance', 'run_1')
with open(os.path.join(performance_dir, 'mlperf_log_summary.txt'), 'r') as summary_file:
summary_txt = summary_file.readlines()
for line in summary_txt:
if re.match("Scenario", line):
# NB: LoadGen scenario strings have spaces between 'Single'/'Multi' and 'Stream'.
loadgen_scenario = line.split(": ",1)[1].strip()
loadgen_scenario_str = scenario_to_str[loadgen_scenario]
if loadgen_scenario_str != scenario_str:
raise Exception("Expected '%s', parsed '%s'!" % (scenario_str, loadgen_scenario_str ))
continue
if scenario_str == "SingleStream":
if re.match("90th percentile latency", line):
score = line.split(": ",1)[1].strip()
continue
if scenario_str == "MultiStream":
if re.match("Samples per query", line):
score = line.split(": ",1)[1].strip()
continue
if scenario_str == "Server":
if re.match("Scheduled samples per second", line):
score = line.split(": ",1)[1].strip()
continue
if scenario_str == "Offline":
if re.match("Samples per second", line):
score = line.split(": ",1)[1].strip()
continue
if scenario_str == 'SingleStream':
time_ns = int(score)
time_ms = time_ns * 1e-6
elif scenario_str == 'MultiStream':
num_streams = int(score)
elif scenario_str == 'Server':
queries_per_second = float(score)
elif scenario_str == 'Offline':
samples_per_second = float(score)
else:
raise Exception("Unsupported scenario '%s'!" % scenario_str)
# Tasks.
if mode == 'spreadsheet':
ic1 = (task=='IC' and benchmark.startswith('mobilenet'))
ic2 = (task=='IC' and benchmark.startswith('resnet'))
od1 = (task=='OD' and benchmark=='ssd-small')
od2 = (task=='OD' and (benchmark=='ssd-large' or system_id=='velociti'))
nmt = (task=='MT')
else:
ic1 = (task=='IC')
ic2 = False
od1 = (task=='OD')
od2 = False
nmt = (task=='MT')
if scenario_str == 'SingleStream':
performance_str = '{:.03f}'.format(time_ms)
accuracy_str = '{:.03f}'.format(accuracy)
if ic1:
data[0]['A_IC1_SS'] = accuracy_str
data[0]['P_IC1_SS'] = performance_str
elif ic2:
data[0]['A_IC2_SS'] = accuracy_str
data[0]['P_IC2_SS'] = performance_str
elif od1:
data[0]['A_OD1_SS'] = accuracy_str
data[0]['P_OD1_SS'] = performance_str
elif od2:
data[0]['A_OD2_SS'] = accuracy_str
data[0]['P_OD2_SS'] = performance_str
elif nmt:
data[0]['A_NMT_SS'] = accuracy_str
data[0]['P_NMT_SS'] = performance_str
elif scenario_str == 'MultiStream':
performance_str = '{:d}'.format(num_streams)
accuracy_str = '{:.03f}'.format(accuracy)
if ic1:
data[0]['A_IC1_MS'] = accuracy_str
data[0]['P_IC1_MS'] = performance_str
elif ic2:
data[0]['A_IC2_MS'] = accuracy_str
data[0]['P_IC2_MS'] = performance_str
elif od1:
data[0]['A_OD1_MS'] = accuracy_str
data[0]['P_OD1_MS'] = performance_str
elif od2:
data[0]['A_OD2_MS'] = accuracy_str
data[0]['P_OD2_MS'] = performance_str
elif nmt:
data[0]['A_NMT_MS'] = accuracy_str
data[0]['P_NMT_MS'] = performance_str
elif scenario_str == 'Server':
performance_str = '{:.03f}'.format(queries_per_second)
accuracy_str = '{:.03f}'.format(accuracy)
if ic1:
data[0]['A_IC1_S'] = accuracy_str
data[0]['P_IC1_S'] = performance_str
elif ic2:
data[0]['A_IC2_S'] = accuracy_str
data[0]['P_IC2_S'] = performance_str
elif od1:
data[0]['A_OD1_S'] = accuracy_str
data[0]['P_OD1_S'] = performance_str
elif od2:
data[0]['A_OD2_S'] = accuracy_str
data[0]['P_OD2_S'] = performance_str
elif nmt:
data[0]['A_NMT_S'] = accuracy_str
data[0]['P_NMT_S'] = performance_str
elif scenario_str == 'Offline':
performance_str = '{:.03f}'.format(samples_per_second)
accuracy_str = '{:.03f}'.format(accuracy)
if ic1:
data[0]['A_IC1_O'] = accuracy_str
data[0]['P_IC1_O'] = performance_str
elif ic2:
data[0]['A_IC2_O'] = accuracy_str
data[0]['P_IC2_O'] = performance_str
elif od1:
data[0]['A_OD1_O'] = accuracy_str
data[0]['P_OD1_O'] = performance_str
elif od2:
data[0]['A_OD2_O'] = accuracy_str
data[0]['P_OD2_O'] = performance_str
elif nmt:
data[0]['A_NMT_O'] = accuracy_str
data[0]['P_NMT_O'] = performance_str
else:
print('Skipping unsupported task/scenario combination!')
continue
if mode != 'spreadsheet':
df = pd.DataFrame(data)
df = df.set_index(index)
dfs.append(df)
# END OF FOR EACH scenario
if mode == 'spreadsheet':
# For closed, multiple benchmarks can share the same row, so the Benchmark field can be misleading.
if division == 'closed': data[0]['Benchmark'] = ''
if is_last_benchmark or (division == 'open' and submitter == 'dividiti'):
df = pd.DataFrame(data)
df = df.set_index(index)
dfs.append(df)
# For the spreadsheet mode, include multiple benchmarks per row.
# END OF FOR EACH benchmark
# END OF FOR EACH system
# END OF FOR EACH submitter
# END OF FOR EACH division
# Concatenate all thus constructed DataFrames (i.e. stack on top of each other).
df = pd.concat(dfs)
# Temporarily capitalize the first letter in 'dividiti' for correct sorting and then back.
df = df \
.rename(index={'dividiti':'Dividiti', 'dividiti + PoliMi':'Dividiti + PoliMi'}) \
.sort_index(ascending=True) \
        .rename(index={'Dividiti':'dividiti', 'Dividiti + PoliMi':'dividiti + PoliMi'})
# Reset the index, but keep Division and Category there.
df = df.reset_index(level=index[2:])
df['ID'] = [ 'Inf-0.5-{:03d}'.format(ID) for ID in range(1, len(df)+1) ]
# Mimic the official template.
columns = [ 'ID', 'Submitter', 'System', 'Benchmark' ]
columns += score_columns
columns += [ 'Processor', 'Processor #', 'Accelerator', 'Accelerator #', 'Software',
'FF_M', 'FF_D', 'FF_S', 'FF_E', 'Details', 'Code', 'Notes' ]
# Finalize the table.
if mode == 'spreadsheet':
df = df[columns]
else:
df = df.reset_index().set_index(keys=[ 'ID', 'Division', 'Category', 'Submitter', 'System', 'Benchmark' ], drop=False)
df[score_columns] = df[score_columns].apply(pd.to_numeric).astype('float32')
return df
df = get_data(results_path=results_path, mode='spreadsheet')
display_in_full(df)
###Output
_____no_output_____
###Markdown
Dump the table for the CK dashboard
###Code
cache_path
# Always clean the cache while in the development mode.
!rm -f $cache_path
results_path
if os.path.exists(cache_path):
# Load the table from cache.
print('Loading the results table from cache ...')
    df = pd.read_pickle(cache_path)
else:
# Store the table in a simplified format.
print('Storing the results table to cache ...')
df = get_data(results_path=results_path, mode='dashboard')
df.to_pickle(path=cache_path, protocol=cache_protocol, compression=cache_compression)
display_in_full(df)
###Output
_____no_output_____
###Markdown
Dump the table into Excel (with separate sheets for Division / Category)
###Code
# Create a Pandas Excel writer using XlsxWriter as the engine.
from pandas import ExcelWriter
# NB: Cannot use dot for 'v0.5', as otherwise the engine complains about an unknown extension.
xlsx_filename = 'MLPerf Inference v0_5 - Results (Automatically Generated).xlsx'
xlsx_writer = ExcelWriter(xlsx_filename, engine='xlsxwriter', options={'strings_to_urls': True})
df_ = df.droplevel('ID')
for division in df.index.unique(level='Division'):
df_d = df_.loc[division]
for category in df_d.index.unique(level='Category'):
df_dc = df_d.loc[category]
if division == 'Open':
df_xlsx = df_dc
elif division == 'Closed':
df_xlsx = df_dc.drop(labels=['Benchmark']+accuracy_columns, axis=1)
else:
continue
# Write different division and category results to separate sheets. Omit index.
print('*' * 100)
print('* Division / Category: %s / %s' % (division, category))
print('*' * 100)
        if category == 'Research, Development, Other': category = 'RDO' # NB: sheet_name must be <= 31 symbols.
df_xlsx.to_excel(xlsx_writer, sheet_name='{} - {}'.format(division, category), index=False)
display_in_full(df_xlsx)
print('')
xlsx_writer.save()
!cp "$xlsx_filename" ~/Downloads
###Output
_____no_output_____
###Markdown
Statistics Total number of results
###Code
# Performance columns are strings for formatting reasons. Convert the strings to numbers (with NaNs for empty strings),
# then count the numbers across the columns and finally sum.
print("#Results: %d" % df[performance_columns].apply(pd.to_numeric).count(numeric_only=True, axis=0).sum())
#print("#Results/Closed: %d" % df.loc['Closed'][performance_columns].apply(pd.to_numeric).count(numeric_only=True, axis=0).sum())
#print("#Results/Open: %d" % df.loc['Open'][performance_columns].apply(pd.to_numeric).count(numeric_only=True, axis=0).sum())
###Output
_____no_output_____
###Markdown
Number of results per division per submitter per benchmark
###Code
indices = [ 'Division', 'Submitter' ]
df_num_results_per_division_per_submitter_per_benchmark = df \
.reset_index(drop=True) \
[indices + performance_columns] \
.set_index(indices) \
.apply(pd.to_numeric) \
.groupby(level=indices).count()
# display_in_full(df_num_results_per_division_per_submitter_per_benchmark)
df_num_results_per_division_per_submitter = df_num_results_per_division_per_submitter_per_benchmark.sum(axis=1)
df_num_results_per_division_per_submitter
###Output
_____no_output_____
###Markdown
Pie charts
###Code
def plot_num_results(df_num_results_per_submitter, autopct='%1.0f%%', pctdistance=0.8, labeldistance=1.1, topN=5,
                     explode_submitters=['dividiti'], explode_distance=0.05, startangle=35, shadow=False,
title='MLPerf Inference v0.5 - Results per Submitter', fname=None, ftype='jpg', color='darkgray'):
df_num_results_per_submitter_descending = pd.DataFrame({
'Submitter' : df_num_results_per_submitter.index,
'#Results' : df_num_results_per_submitter.values}) \
.sort_values('#Results', ascending=False)
df_num_results_per_submitter_topN = df_num_results_per_submitter_descending[:topN].copy()
df_num_results_per_submitter_others = pd.DataFrame(data = {
'Submitter' : ['Others'],
'#Results' : [df_num_results_per_submitter_descending['#Results'][topN:].sum()]})
df_num_results_per_submitter_topN_and_others = \
pd.concat([df_num_results_per_submitter_topN, df_num_results_per_submitter_others]) \
.set_index('Submitter') \
.sort_values('Submitter', ascending=False)
results = df_num_results_per_submitter_topN_and_others['#Results']
submitters = df_num_results_per_submitter_topN_and_others.index
explode = [ explode_distance if submitter in explode_submitters else 0 for submitter in submitters ]
mp.rcParams['figure.dpi'] = default_dpi
plt.pie(results, labels=submitters, autopct=autopct,
pctdistance=pctdistance, labeldistance=labeldistance,
            explode=explode, startangle=startangle, shadow=shadow)
plt.title(title)
plt.tight_layout()
if fname is not None:
# A lazy way to use the default file name.
if fname == '': fname = '{}.{}'.format(title, ftype)
plt.savefig(fname=fname, format=ftype, dpi=200, quality=95, optimize=True, bbox_inches='tight',
facecolor=color, edgecolor=color, transparent=True)
plt.show()
###Output
_____no_output_____
###Markdown
Plot by division
###Code
# for division, topN in zip([ 'Closed', 'Open' ], [ 10, 3 ]):
# explode_submitters = [] if division == 'Open' else ['dividiti']
# plot_num_results(
# df_num_results_per_division_per_submitter.loc[division],
# title='MLPerf Inference v0.5 - {} Division - Results per Submitter'.format(division),
# topN=topN, explode_submitters=explode_submitters
# )
###Output
_____no_output_____
###Markdown
Plot all
###Code
plot_num_results(
df_num_results_per_division_per_submitter.droplevel('Division').groupby(level='Submitter').sum(),
topN=8, explode_submitters=['dividiti', 'dividiti + PoliMi'], color='white', fname='')
###Output
_____no_output_____
###Markdown
Display HTML with embedded links (TODO)
###Code
# df = df.set_index(['Submitter', 'System', 'Benchmark', 'Software'], append=True)
# def link_code(url): return '<a target="_blank" href="{}">Code</a>'.format(url)
# def link_details(url): return '<a target="_blank" href="{}">Details</a>'.format(url)
# display_in_full(df.style.format({'Code': link_code, 'Details': link_details}))
###Output
_____no_output_____ |
homework07-fitting/homework07-ungermichael.ipynb | ###Markdown
Homework 07 Preparation...Run this code from the lecture to be ready for the exercises below!
###Code
import glob
import os.path
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn import datasets, linear_model, ensemble, neural_network
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from pathlib import Path
CONFIG_FILE = './entsoe-data.config'
if not os.path.exists(CONFIG_FILE):
download_dir = input('Path to ENTSO-E data folder: ')
if not os.path.isdir(download_dir):
raise RuntimeError(f'Invalid download_dir, please run cell again: {download_dir}')
with open(CONFIG_FILE, 'w') as f:
f.write(download_dir)
else:
with open(CONFIG_FILE) as f:
download_dir = f.read()
# Clear the output after this cell if you want to avoid having your path in the notebook (or execute it twice)!
def read_single_csv_entso_e(file):
return pd.read_csv(file, sep='\t', encoding='utf-16', parse_dates=["DateTime"])
def load_complete_entso_e_data(directory):
pattern = Path(directory) / '*.csv'
files = glob.glob(str(pattern))
if not files:
raise ValueError(f"No files found when searching in {pattern}, wrong directory?")
print(f'Concatenating {len(files)} csv files...')
each_csv_file = [read_single_csv_entso_e(file) for file in files]
data = pd.concat(each_csv_file, ignore_index=True)
data = data.sort_values(by=["AreaName", "DateTime"])
data = data.set_index("DateTime")
print("Loading done.")
return data
power_demand = load_complete_entso_e_data(download_dir)
def get_hourly_country_data(data, country):
ret_data = data[data["AreaName"] == country].interpolate()
#ret_data = ret_data.set_index("DateTime")
ret_data = ret_data.resample("1h").mean().interpolate()
return ret_data
power_demand_at_hourly = get_hourly_country_data(power_demand, "Austria")["2015-01-01":"2019-12-31"]
###Output
_____no_output_____
###Markdown
Exercise 1 Explain the following terms:**Input feature:** machine learning algorithms need input data to create an output. This input data needs to have special features - usually structured columns - in order to work. **Output feature:** you can train machine learning algorithms with input data in order to create predictions. These predictions are called the output. **Fit a function to data:** data consists of discrete values. To make a prediction, we want to create a function, so we can input an arbitrary future date and predict. Fitting a function is the process of decreasing the error between the function that approximates the datapoints and the datapoints themselves. **Training data:** data that helps the algorithm learn to predict the outcome. **Test data:** data that will help the user to see if the algorithm predicts correctly. The basic rule is 80% training data, 20% testing data from your data set. Exercise 2 In lecture07 we created a plot of the ratio of actual load and predicted load for Austria step by step (Exercise 04). Now put all of this together in one function which takes one parameter `country` as input and calculates and plots the figure of Exercise 04 for this country! The model should be trained on 2015-2019 data and then you should predict for 2020 and compare it to observations. Also do a training/test split and print the R2 for both datasets. Apply the function to the following countries and show the results in one plot: Austria, Germany, Switzerland, Italy, Spain, Sweden, United Kingdom. (1) Print the country name. Get the data for the specific country using get_hourly_country_data from the lecture and extract two periods, i.e. 2015-2019 and 2020, in two separate dataframes. (2) Define X (the input features, i.e. the indicators for time) and Y (i.e. the output feature, the electricity load). Observe that for training, we use the period 2015-2019. (3) Do a train/test split. (4) Fit the input features to the output feature using a ```RandomForestRegressor```. (5) Predict the output with the training data and the test data and compute the R^2 for both! (6) Print the R^2. (7) Create a new variable ```X_2020``` which contains the input features for the year 2020. (8) Predict with your model the load for 2020. (9) Assign your prediction back to the dataframe in a new column and calculate the monthly mean for prediction and for observed load. You might need to copy the dataframe first by doing something like `power_demand_hourly = power_demand_hourly.copy()` (otherwise it might be just a slice of the complete time range and then you can't add a column for some rows only). (10) Plot the ratio of load and prediction. With ```label=country```, you can add a label to your plot for making a legend. (11) Execute the function for the following countries: Austria, Germany, Switzerland, Italy, Spain, Sweden, United Kingdom. (12) After calling the functions, use ```plt.legend()``` to show a legend.
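As a tiny, self-contained illustration of these terms (an aside added for clarity, not part of the graded homework; the data below is synthetic):
###Code
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# Synthetic input feature X and output feature Y with a roughly linear relationship.
rng = np.random.RandomState(0)
X_demo = rng.uniform(0, 10, size=(100, 1))
Y_demo = 3.0 * X_demo[:, 0] + rng.normal(scale=1.0, size=100)
# 80% training data / 20% test data, as described above.
X_tr, X_te, Y_tr, Y_te = train_test_split(X_demo, Y_demo, test_size=0.2)
# "Fitting a function to data": the regression minimizes the error between
# the fitted line and the training points.
demo_model = LinearRegression().fit(X_tr, Y_tr)
print(demo_model.score(X_te, Y_te))  # R^2 on the unseen test data
###Output
_____no_output_____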
###Code
def plot_ratio_for_country(country):
# (1)
print('Country name: ' + country)
power_demand_hourly_period1 = get_hourly_country_data(power_demand, country)["2015-01-01" : "2019-12-31"]
power_demand_hourly_period2 = get_hourly_country_data(power_demand, country)["2020-01-01" : "2020-12-31"]
# (2)
X = np.array([power_demand_hourly_period1.index.month.values, power_demand_hourly_period1.index.weekday.values, power_demand_hourly_period1.index.hour.values]).T
Y = power_demand_hourly_period1["TotalLoadValue"].values
model = ensemble.RandomForestRegressor()
# (3)
X_training, X_test, Y_training, Y_test = train_test_split(X, Y, test_size=0.2)
# (4)
model.fit(X_training, Y_training)
# (5)
prediction_test = model.predict(X_test)
prediction_training = model.predict(X_training)
r2_score_test = r2_score(Y_test, prediction_test)
r2_score_training = r2_score(Y_training, prediction_training)
# (6)
print('Period 1 r2 score test: ', r2_score_test)
print('Period 1 r2 score training: ', r2_score_training)
# (7)
X_2020 = np.array([power_demand_hourly_period2.index.month.values, power_demand_hourly_period2.index.weekday.values, power_demand_hourly_period2.index.hour.values]).T
# (8)
prediction_2020 = model.predict(X_2020)
# (9)
    # We assign the hourly ratio back, not the average, since the assignment reads 'power_demand_hourly = power_demand_hourly.copy()'
power_demand_hourly_period2['pred_2020'] = power_demand_hourly_period2['TotalLoadValue'].values/prediction_2020
# (10)
plot = power_demand_hourly_period2.resample('1m').mean()['pred_2020'].plot()
plt.legend(countries)
countries = ['Austria', 'Germany', 'Switzerland', 'Italy', 'Spain', 'Sweden', 'United Kingdom']
for country in countries:
plot_ratio_for_country(country)
###Output
Country name: Austria
Period 1 r2 score test: 0.8294793414795285
Period 1 r2 score training: 0.8441846088331973
Country name: Germany
Period 1 r2 score test: 0.8847746083953704
Period 1 r2 score training: 0.8858446198744911
Country name: Switzerland
Period 1 r2 score test: 0.6384532815943642
Period 1 r2 score training: 0.664796519620709
Country name: Italy
Period 1 r2 score test: 0.8426541690061254
Period 1 r2 score training: 0.8612228185669872
Country name: Spain
Period 1 r2 score test: 0.8348691128211834
Period 1 r2 score training: 0.8710142816348845
Country name: Sweden
Period 1 r2 score test: 0.8327011267407192
Period 1 r2 score training: 0.8627758943842865
Country name: United Kingdom
Period 1 r2 score test: 0.6947334293630847
Period 1 r2 score training: 0.7252248815878022
###Markdown
Exercise 3 Answer the following questions: (1) Which country had the strongest decline in electricity consumption? Without looking at the mean over the whole period, it seems that Italy, UK, and Spain have the biggest divergence from our expected consumption. (2) For which country does the fit work best? Germany, because we have the highest r2 value for both the test and training set. (3) Where is the difference of R2 between training data and test data the largest? What does that mean? Spain has the highest difference. That means that we might have overfitted the model, since we do better on the training set than on the test set. (4) Look into the data of the country with the largest difference in the R2 of the training and the test data. Can you explain what is happening there? Might this affect our model?
###Code
power_demand_hourly_period1_spain = get_hourly_country_data(power_demand, 'Spain')["2015-01-01" : "2019-12-31"]
power_demand_hourly_period2_spain = get_hourly_country_data(power_demand, 'Spain')["2020-01-01" : "2020-12-31"]
X_spain = np.array([power_demand_hourly_period1_spain.index.month.values, power_demand_hourly_period1_spain.index.weekday.values, power_demand_hourly_period1_spain.index.hour.values]).T
Y_spain = power_demand_hourly_period1_spain["TotalLoadValue"].values
model = ensemble.RandomForestRegressor()
X_training_spain, X_test_spain, Y_training_spain, Y_test_spain = train_test_split(X_spain, Y_spain, test_size=0.2)
plt.plot(Y_training_spain, label="Training Spain")
plt.plot(Y_test_spain, label="Test Spain", alpha=0.5)
plt.xlabel("Time")
plt.ylabel("Load (MW)")
plt.legend()
power_demand_hourly_period1_germany = get_hourly_country_data(power_demand, 'Germany')["2015-01-01" : "2019-12-31"]
power_demand_hourly_period2_germany = get_hourly_country_data(power_demand, 'Germany')["2020-01-01" : "2020-12-31"]
X_germany = np.array([power_demand_hourly_period1_germany.index.month.values, power_demand_hourly_period1_germany.index.weekday.values, power_demand_hourly_period1_germany.index.hour.values]).T
Y_germany = power_demand_hourly_period1_germany["TotalLoadValue"].values
model = ensemble.RandomForestRegressor()
X_training_germany, X_test_germany, Y_training_germany, Y_test_germany = train_test_split(X_germany, Y_germany, test_size=0.2)
plt.plot(Y_training_germany, label="Training Germany")
plt.plot(Y_test_germany, label="Test Germany", alpha=0.5)
plt.xlabel("Time")
plt.ylabel("Load (MW)")
plt.legend()
###Output
_____no_output_____
###Markdown
As we can see in the plots above, the test data set seems to vary to a higher degree for Spain than it does for Germany. Due to that fact, it makes sense that the model has more difficulty predicting for Spain's test set. Exercise 4 The difference between model prediction and actual observation may help in understanding how people behaved during the lockdown. In this exercise, you should come up with your own hypothesis of how people behaved and how this affected power consumption. You may, e.g., look into demand on different weekdays or in different hours. Once you have a hypothesis and a theory, why this hypothesis may be valid, test it with the model: is your hypothesis covered by what you observe in the load data? Hypothesis: during lockdown, people tend to do more online activities because they can't meet (e.g. Netflix). Therefore, the power consumption in the evening hours actually increased.
###Code
def check_evening_consumption():
power_demand_at_hourly_evening_p1 = power_demand_at_hourly.copy()
    # Keep only the evening hours (20:00-23:59); drop() returns a new DataFrame, so reassign it.
    power_demand_at_hourly_evening_p1 = power_demand_at_hourly_evening_p1.drop(power_demand_at_hourly_evening_p1.between_time("00:00", "19:59").index)
power_demand_at_hourly_evening_p2 = get_hourly_country_data(power_demand, "Austria")["2020-01-01":"2020-12-31"]
    power_demand_at_hourly_evening_p2 = power_demand_at_hourly_evening_p2.drop(power_demand_at_hourly_evening_p2.between_time("00:00", "19:59").index)
X = np.array([power_demand_at_hourly_evening_p1.index.month.values, power_demand_at_hourly_evening_p1.index.weekday.values, power_demand_at_hourly_evening_p1.index.hour.values]).T
Y = power_demand_at_hourly_evening_p1["TotalLoadValue"].values
model = ensemble.RandomForestRegressor()
X_training, X_test, Y_training, Y_test = train_test_split(X, Y, test_size=0.2)
model.fit(X_training, Y_training)
prediction_test = model.predict(X_test)
prediction_training = model.predict(X_training)
r2_score_test = r2_score(Y_test, prediction_test)
r2_score_training = r2_score(Y_training, prediction_training)
print('Period 1 r2 score test: ', r2_score_test)
print('Period 1 r2 score training: ', r2_score_training)
X_2020 = np.array([power_demand_at_hourly_evening_p2.index.month.values, power_demand_at_hourly_evening_p2.index.weekday.values, power_demand_at_hourly_evening_p2.index.hour.values]).T
prediction_2020 = model.predict(X_2020)
power_demand_at_hourly_evening_p2['pred_2020'] = power_demand_at_hourly_evening_p2['TotalLoadValue'].values/prediction_2020
plot = power_demand_at_hourly_evening_p2.resample('1m').mean()['pred_2020'].plot()
    plt.legend(['Austria'])
check_evening_consumption()
###Output
Period 1 r2 score test: 0.8323949953465842
Period 1 r2 score training: 0.8437147096530475
###Markdown
Result: still less consumption than predicted :( let's hope people read books instead. Exercise 5Download ERA5 temperature data for the next lecture.First install necessary dependencies `xarray` and `cdsapi`:
###Code
conda install --yes xarray
conda install --yes -c conda-forge cdsapi
###Output
Collecting package metadata (current_repodata.json): done
Solving environment: done
## Package Plan ##
environment location: /home/lukas/miniconda3
added / updated specs:
- cdsapi
The following packages will be downloaded:
package | build
---------------------------|-----------------
cdsapi-0.2.7 | py_0 14 KB conda-forge
------------------------------------------------------------
Total: 14 KB
The following NEW packages will be INSTALLED:
cdsapi conda-forge/noarch::cdsapi-0.2.7-py_0
The following packages will be UPDATED:
ca-certificates pkgs/main::ca-certificates-2020.1.1-0 --> conda-forge::ca-certificates-2020.4.5.1-hecc5488_0
conda pkgs/main::conda-4.8.3-py37_0 --> conda-forge::conda-4.8.3-py37hc8dfbb8_1
The following packages will be SUPERSEDED by a higher-priority channel:
certifi pkgs/main::certifi-2020.4.5.1-py37_0 --> conda-forge::certifi-2020.4.5.1-py37hc8dfbb8_0
openssl pkgs/main::openssl-1.1.1g-h7b6447c_0 --> conda-forge::openssl-1.1.1g-h516909a_0
Downloading and Extracting Packages
cdsapi-0.2.7 | 14 KB | ##################################### | 100%
Preparing transaction: done
Verifying transaction: done
Executing transaction: done
Note: you may need to restart the kernel to use updated packages.
###Markdown
The [Copernicus Climate Data Store](https://cds.climate.copernicus.eu/) provides [reanalysis climate data](https://cds.climate.copernicus.eu/cdsapp!/search?type=dataset&keywords=((%20%22Product%20type:%20Reanalysis%22%20))). We are going to download [ERA5](https://cds.climate.copernicus.eu/cdsapp!/dataset/reanalysis-era5-land?tab=form) data and use the [temperature 2m above ground values](https://apps.ecmwf.int/codes/grib/param-db?id=167). Register for the CDS API and install the API key by following [this guide](https://cds.climate.copernicus.eu/api-how-to). You don't need to run `pip install cdsapi`, this has been done in the cell above already using conda.
###Code
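# Note (an assumption based on the CDS guide linked above, not part of the original
# notebook): cdsapi.Client() reads credentials from a ~/.cdsapirc file, which
# typically contains two lines of the form
#   url: https://cds.climate.copernicus.eu/api/v2
#   key: <your-UID>:<your-API-key>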
import cdsapi
c = cdsapi.Client()
# Add the path to the lecture repository here:
PATH_TO_LECTURE_REPO = '../..'
if not os.path.isdir(Path(PATH_TO_LECTURE_REPO) / 'lecture00-introduction'):
raise RuntimeError(f"Wrong path to lecture repository: PATH_TO_LECTURE_REPO = {PATH_TO_LECTURE_REPO}")
###Output
_____no_output_____
###Markdown
We'll download data from 2015 to 2020 in a bounding box which covers all countries we used so far for our analysis.To make the download a bit faster, we'll use a [0.5° grid](https://confluence.ecmwf.int/display/CKB/ERA5%3A+Web+API+to+CDS+API) instead of the 0.1° grid. This will download approximately 500MB. The download might take a couple of hours, because the data is prepared on their servers before it can be downloaded.
###Code
filename = Path(PATH_TO_LECTURE_REPO) / 'data' / 'temperatures_era5.nc'
north, west, south, east = 70.,-13.5, 35.5, 24.5
c.retrieve(
'reanalysis-era5-land',
{
'format': 'netcdf',
'variable': '2m_temperature',
'area': [
north, west, south, east
],
'grid': [0.5, 0.5], # grid in 0.5deg steps in longitude/latitude
'day': [f"{day:02d}" for day in range(1, 32)],
'time': [f"{hour:02d}:00" for hour in range(24)],
'month': [f"{month:02d}" for month in range(1, 13)],
'year': [str(year) for year in range(2015, 2021)],
},
f"{filename}.part")
# this prevents you from accidentally using broken files:
os.rename(f"{filename}.part", filename)
###Output
2020-06-03 14:58:12,973 INFO Welcome to the CDS
2020-06-03 14:58:12,974 INFO Sending request to https://cds.climate.copernicus.eu/api/v2/resources/reanalysis-era5-land
2020-06-03 14:58:13,665 INFO Downloading http://136.156.132.201/cache-compute-0004/cache/data2/adaptor.mars.internal-1591183051.143155-14109-29-85466677-b2b7-422c-8304-0a46d57630dc.nc to /mnt/c/Users/Lukas/Desktop/Herzblatt/data/temperatures_era5.nc.part (473.2M)
2020-06-03 15:00:44,325 INFO Download rate 3.1M/s
###Markdown
Exercise 6Load the file downloaded in exercise 3 and plot the temperature for one location. This is also a test if the download was successful. To load the file import the library `xarray`. Typically it is imported by using `import xarray as xr`. Then load the file using the command `xr.load_dataset(filename)`. Check the type of the return value. Then select the data variable `t2m` (temperature at 2m), select the values for `longitude=16.5` and `latitude=48` by using `temperatures_dataset.t2m.sel(longitude=16.5, latitude=48.)`. Then plot the result by calling `.plot()` on the resulting object.Does the result look reasonable?
###Code
import xarray as xr
dataset = xr.load_dataset(filename)
dataset.t2m.sel(longitude=16.5, latitude=48.).plot()
###Output
_____no_output_____ |
analysis/Basic Population Stats.ipynb | ###Markdown
Categorical data from pre-screen survey
###Code
%pylab inline
import pandas as pd
import os
from collections import Counter
from scipy.stats import stats
data = pd.read_csv("Crowdclass_Data.csv")
data =data.dropna(how="all")
def basic_stats_categorical(field):
print "---------------------"
print field
cnt = Counter(data[field])
# print cnt
percentage = np.array(cnt.values())/float(sum(cnt.values()))*100
for category,percent in zip(cnt.keys(),percentage):
print "{} : %.2f ".format(str(category).ljust(5)) %percent ,"%"
field_lst = ["Age", "Highest Level of education","Area of Study/Professional Interest",\
"What is your experience with citizen science? (e.g. Zooniverse, Volunteer Computing, community sensing)",\
"Geographic Location"]
for field in field_lst:
basic_stats_categorical(field)
###Output
---------------------
Age
18-21 : 4.30 %
30-39 : 31.18 %
60+ : 4.30 %
40-59 : 31.18 %
22-29 : 29.03 %
---------------------
Highest Level of education
Professional (MS, PhD, JD ..etc) : 26.88 %
Bachelor / Associate Degree : 46.24 %
High School or Equivalent : 9.68 %
Some college. : 17.20 %
---------------------
Area of Study/Professional Interest
STEM : 46.24 %
Others : 53.76 %
---------------------
What is your experience with citizen science? (e.g. Zooniverse, Volunteer Computing, community sensing)
Yes : 17.20 %
No : 82.80 %
---------------------
Geographic Location
Other : 31.18 %
India : 18.28 %
USA : 50.54 %
###Markdown
Fisher's Exact: Making sure Group A and Group B demographics are similar http://yatani.jp/teaching/doku.php?id=hcistats:chisquare
###Code
users = data
A_summary_stats = users[users["Group"]=="A"]
B_summary_stats = users[users["Group"]=="B"]
def Fisher_Test(field,abbrev = "",output="p-value"):
cntA = Counter(A_summary_stats[field])
cntB = Counter(B_summary_stats[field])
# print "# Categories: ", sorted(cntA.keys())
    # Pad categories that are missing in one of the groups with zero counts,
    # so that both groups cover the same set of categories.
    for l in set(cntA) - set(cntB):
        cntB[l] = 0
    for l in set(cntB) - set(cntA):
        cntA[l] = 0
    ncol = len(cntA.keys())
alist = [cntA[i] for i in sorted(cntA.keys())]
blist = [cntB[i] for i in sorted(cntB.keys())]
alist.extend(blist)
# print len(alist)
# print "data <- matrix(c("+ ','.join(str(p) for p in alist)+"), ncol=5, byrow=T)"
# print "fisher.test(data)"
f = open("Fisher.r", "w")
f.write("data <- matrix(c("+ ','.join(str(p) for p in alist)+"), ncol={}, byrow=T) \n".format(ncol))
f.write("# Contingency Table \n")
# f.write("data \n")
f.write("fisher.test(data) \n")
f.write("library(vcd) \n")
f.write("assocstats(data) \n")
f.close()
# batcmd=os.getcwd()
# result = subprocess.check_output('dir/', shell=True)
os.system("r -f Fisher.r > Fisher_{}.out".format(abbrev))
f = open("Fisher_{}.out".format(abbrev), 'r')
    lines = f.readlines()[18:] # suppress header outputs
if output=="full":
for l in lines:
if l!='\n':
print l
elif output=="p-value":
for l in lines:
if l[:7]=='p-value':
p = float(l.split()[-1])
print "{0} : p ={1} ---> {2}".format(field,p,pcheck(p,"Independence"))
f.close()
###Output
_____no_output_____
###Markdown
Null hypothesis: the occurrence of the outcomes for the two groups is equal --> independence. If we look at the output (`cat Fisher_age.out`), we see that assocstats also prints out the contingency coefficient, Pearson's coefficient, Cramer's V, etc., which measure the strength of association between the categorical frequencies. That info is a bit excessive, but it's there if we need it.
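As a side note (an illustration added for reference, not used in the analysis above): for a plain 2x2 contingency table, the same test can also be run directly in Python with SciPy instead of shelling out to R. A minimal sketch with made-up counts:
###Code
from scipy.stats import fisher_exact
# Hypothetical 2x2 table: rows = groups A/B, columns = e.g. Yes/No answers.
demo_table = [[8, 38], [8, 39]]
demo_odds_ratio, demo_p = fisher_exact(demo_table)
print(demo_p)  # p > 0.05 would again indicate independence
###Output
_____no_output_____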
###Code
from stats_helper import *
Fisher_Test("Age",abbrev="age",output="p-value")
Fisher_Test("Highest Level of education",abbrev="Edu",output="p-value")
Fisher_Test("Area of Study/Professional Interest",abbrev="Interest",output="p-value")
Fisher_Test("What is your experience with citizen science? (e.g. Zooniverse, Volunteer Computing, community sensing)",abbrev="CS_exp",output="p-value")
Fisher_Test("Geographic Location",abbrev="Geo",output="p-value")
###Output
Age : p =0.4704 ---> Independence
Highest Level of education : p =0.2127 ---> Independence
Area of Study/Professional Interest : p =0.8364 ---> Independence
What is your experience with citizen science? (e.g. Zooniverse, Volunteer Computing, community sensing) : p =1.0 ---> Independence
Geographic Location : p =0.3882 ---> Independence
###Markdown
Fisher's Exact test shows that Group A and B are independent in these variables of interest measured in our pre-screening survey. Quantitative data from pre-screen survey - 10-point Likert scale data from pre-screen survey
###Code
def kolmogorov_smirnov(data1,data2,name):
result = stats.ks_2samp(data1,data2)
print "{0} : D = {1} ; p ={2} ---> {3}".format(name,np.around(result[0],2),np.around(result[1],2),pcheck(result[1],"from same distribution"))
def pcheck(p,null_hyp):
'''
    if p > 0.05, we fail to reject the null hypothesis (result is consistent with it)
'''
if p>0.05:
return null_hyp
else:
return "NOT "+null_hyp
def basic_stats_quantitative(field,plot_hist = False):
print "---------------------"
# print field
Adata = np.array(A_summary_stats[field])
Bdata = np.array(B_summary_stats[field])
# print Adata
# print Bdata
# print "Check that they come from the same distribution with KS test"
kolmogorov_smirnov(Adata,Bdata,field)
if plot_hist:
plt.figure()
plt.title(field,fontsize=14)
plt.hist(Adata,label="A",bins =10)
plt.hist(Bdata,label="B",bins =10)
plt.xlim(0,10)
plt.legend(loc = "upper left")
print "For A: "
print "mean = ", mean(Adata)
print "std = ", std(Adata)
print "For B: "
print "mean = ", mean(Bdata)
print "std = ", std(Bdata)
qfield_lst = ["Level of knowledge in astronomy","Level of interest in astronomy", "Level of interest in science"]
for field in qfield_lst:
basic_stats_quantitative(field)
###Output
---------------------
Level of knowledge in astronomy : D = 0.16 ; p =0.59 ---> from same distribution
For A:
mean = 3.71739130435
std = 2.40165743021
For B:
mean = 3.23404255319
std = 2.10584562241
---------------------
Level of interest in astronomy : D = 0.12 ; p =0.85 ---> from same distribution
For A:
mean = 6.95652173913
std = 2.73429524931
For B:
mean = 6.48936170213
std = 2.7895880915
---------------------
Level of interest in science : D = 0.1 ; p =0.97 ---> from same distribution
For A:
mean = 8.39130434783
std = 1.87057606646
For B:
mean = 8.04255319149
std = 2.13339652087
|
stock_prices_using_python/getting_indices_data.ipynb | ###Markdown
Importing initial libraries
###Code
import investpy as inv
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['figure.figsize'] = (16, 8)
import pandas as pd
import seaborn as sns
%config Completer.use_jedi = False # this speeds up autocomplete
###Output
_____no_output_____
###Markdown
Initial search
###Code
inv.get_indices_list(country= 'brazil')
###Output
_____no_output_____
###Markdown
Getting Bovespa, IFIX, Electrical Energy, Real Estate, Basic Materials and Industrial Sector
###Code
# Ibovespa
ibov = inv.get_index_historical_data('Bovespa', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
ibov = ibov[['Close']]
# IFIX
ifix = inv.get_index_historical_data('BM&FBOVESPA Real Estate IFIX', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
ifix = ifix[['Close']]
# Electrical Energy
elec_energy = inv.get_index_historical_data('Electrical Energy', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
elec_energy = elec_energy[['Close']]
# Real Estate
real_estate = inv.get_index_historical_data('Real Estate', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
real_estate = real_estate[['Close']]
# Basic Materials
basic_materials = inv.get_index_historical_data('Basic Materials', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
basic_materials = basic_materials[['Close']]
# Industrial Sector
industrial = inv.get_index_historical_data('Industrial Sector', country = 'brazil',
from_date = '01/01/2014', to_date = '31/12/2020',
interval = 'Monthly')
industrial = industrial[['Close']]
###Output
_____no_output_____
###Markdown
Joining the data
###Code
indices_data = pd.DataFrame()
indices_data['ibov'] = ibov['Close']
indices_data['ifix'] = ifix['Close']
indices_data['elec_energy'] = elec_energy['Close']
indices_data['real_estate'] = real_estate['Close']
indices_data['basic_materials'] = basic_materials['Close']
indices_data['industrial'] = industrial['Close']
indices_data
plt.plot(indices_data)
plt.show()
# Creating correlation object: corr
corr = indices_data.corr('spearman')
# Without values
sns.heatmap(corr,
xticklabels = corr.columns.values,
yticklabels = corr.columns.values,
cmap = "PiYG",
vmin = -1, vmax = 1,
annot = False # this argument must be 'False'
)
# With values
sns.heatmap(corr,
xticklabels = corr.columns.values,
yticklabels = corr.columns.values,
cmap = "PiYG",
vmin = -1, vmax = 1,
annot = True # this argument must be 'True'
)
###Output
_____no_output_____ |
analysis/evaluate-trained-models.ipynb | ###Markdown
ResNet18
###Code
rn18_audio = resnet.resnet_18(num_classes=4)
rn18_audio.build(input_shape=(None, 64, 64, 1))
rn18_audio.load_weights(PATH_RN18_AUDIO)
rn18_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
rn18_nsa = resnet.resnet_18(num_classes=4)
rn18_nsa.build(input_shape=(None, 64, 64, 1))
rn18_nsa.load_weights(PATH_RN18_NSA)
rn18_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
rn18_test_acc=[evaluate_model(rn18_audio, X_test_mic[..., None], Y_test_mic), evaluate_model(rn18_nsa, X_test_nsa[..., None], Y_test_nsa)],
rn18_inf_time_cpu=[get_inference_time(rn18_audio, image_size=(1, 64, 64, 1), gpu=False), get_inference_time(rn18_nsa, image_size=(1, 64, 64, 1), gpu=False)],
rn18_inf_time_gpu=[get_inference_time(rn18_audio, image_size=(1, 64, 64, 1), gpu=True), get_inference_time(rn18_nsa, image_size=(1, 64, 64, 1), gpu=True)],
rn18_preds=[rn18_audio.predict(X_test_mic[..., None]), rn18_nsa.predict(X_test_nsa[..., None])],
rn18_nb_params=rn18_audio.count_params())
###Output
_____no_output_____
###Markdown
ResNet34
###Code
rn34_audio = resnet.resnet_34(num_classes=4)
rn34_audio.build(input_shape=(None, 64, 64, 1))
rn34_audio.load_weights(PATH_RN34_AUDIO)
rn34_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
rn34_nsa = resnet.resnet_34(num_classes=4)
rn34_nsa.build(input_shape=(None, 64, 64, 1))
rn34_nsa.load_weights(PATH_RN34_NSA)
rn34_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
rn34_test_acc=[evaluate_model(rn34_audio, X_test_mic[..., None], Y_test_mic), evaluate_model(rn34_nsa, X_test_nsa[..., None], Y_test_nsa)],
rn34_inf_time_cpu=[get_inference_time(rn34_audio, image_size=(1, 64, 64, 1), gpu=False), get_inference_time(rn34_nsa, image_size=(1, 64, 64, 1), gpu=False)],
rn34_inf_time_gpu=[get_inference_time(rn34_audio, image_size=(1, 64, 64, 1), gpu=True), get_inference_time(rn34_nsa, image_size=(1, 64, 64, 1), gpu=True)],
rn34_preds=[rn34_audio.predict(X_test_mic[..., None]), rn34_nsa.predict(X_test_nsa[..., None])],
rn34_nb_params=rn34_audio.count_params())
###Output
_____no_output_____
###Markdown
EfficientNetB0
###Code
efn_audio = efn.EfficientNetB0(input_shape=(64, 64, 1),
include_top=True,
weights=PATH_EFNB0_AUDIO,
classes=NB_CLASSES)
efn_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
efn_nsa = efn.EfficientNetB0(input_shape=(64, 64, 1),
include_top=True,
weights=PATH_EFNB0_NSA,
classes=NB_CLASSES)
efn_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
efn_test_acc=[evaluate_model(efn_audio, X_test_mic[..., None], Y_test_mic), evaluate_model(efn_nsa, X_test_nsa[..., None], Y_test_nsa)],
efn_inf_time_cpu=[get_inference_time(efn_audio, image_size=(1, 64, 64, 1), gpu=False), get_inference_time(efn_nsa, image_size=(1, 64, 64, 1), gpu=False)],
efn_inf_time_gpu=[get_inference_time(efn_audio, image_size=(1, 64, 64, 1), gpu=True), get_inference_time(efn_nsa, image_size=(1, 64, 64, 1), gpu=True)],
efn_preds=[efn_audio.predict(X_test_mic[..., None]), efn_nsa.predict(X_test_nsa[..., None])],
efn_nb_params=efn_audio.count_params())
###Output
_____no_output_____
###Markdown
MobileNetV2
###Code
mnet_audio = tf.keras.applications.MobileNetV2(input_shape=(64, 64, 1),
include_top=True,
weights=PATH_MNV2_AUDIO,
classes=NB_CLASSES)
mnet_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
mnet_nsa = tf.keras.applications.MobileNetV2(input_shape=(64, 64, 1),
include_top=True,
weights=PATH_MNV2_NSA,
classes=NB_CLASSES)
mnet_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
mnet_test_acc=[evaluate_model(mnet_audio, X_test_mic[..., None], Y_test_mic), evaluate_model(mnet_nsa, X_test_nsa[..., None], Y_test_nsa)],
mnet_inf_time_cpu=[get_inference_time(mnet_audio, image_size=(1, 64, 64, 1), gpu=False), get_inference_time(mnet_nsa, image_size=(1, 64, 64, 1), gpu=False)],
mnet_inf_time_gpu=[get_inference_time(mnet_audio, image_size=(1, 64, 64, 1), gpu=True), get_inference_time(mnet_nsa, image_size=(1, 64, 64, 1), gpu=True)],
mnet_preds=[mnet_audio.predict(X_test_mic[..., None]), mnet_nsa.predict(X_test_nsa[..., None])],
mnet_nb_params=mnet_audio.count_params())
###Output
_____no_output_____
###Markdown
RNN Amoh (https://ieeexplore.ieee.org/abstract/document/7570164)
###Code
rnn_amoh_audio = tf.keras.Sequential([
tf.keras.layers.GRU(128, input_shape=(64, 64), return_sequences=True),
tf.keras.layers.GRU(64, return_sequences=True),
tf.keras.layers.GRU(32, return_sequences=True),
tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64)),
tf.keras.layers.LSTM(64),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(4, activation='softmax')
])
rnn_amoh_nsa = tf.keras.Sequential([
tf.keras.layers.GRU(128, input_shape=(64, 64), return_sequences=True),
tf.keras.layers.GRU(64, return_sequences=True),
tf.keras.layers.GRU(32, return_sequences=True),
tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(64)),
tf.keras.layers.LSTM(64),
tf.keras.layers.Dense(256, activation='relu'),
tf.keras.layers.Dense(4, activation='softmax')
])
rnn_amoh_audio.load_weights(PATH_RNNAMOH_AUDIO)
rnn_amoh_nsa.load_weights(PATH_RNNAMOH_NSA)
rnn_amoh_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
rnn_amoh_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
rnn_amoh_test_acc=[evaluate_model(rnn_amoh_audio, X_test_mic, Y_test_mic), evaluate_model(rnn_amoh_nsa, X_test_nsa, Y_test_nsa)],
rnn_amoh_inf_time_cpu=[get_inference_time(rnn_amoh_audio, image_size=(1, 64, 64), gpu=False), get_inference_time(rnn_amoh_nsa, image_size=(1, 64, 64), gpu=False)],
rnn_amoh_inf_time_gpu=[get_inference_time(rnn_amoh_audio, image_size=(1, 64, 64), gpu=True), get_inference_time(rnn_amoh_nsa, image_size=(1, 64, 64), gpu=True)],
rnn_amoh_preds=[rnn_amoh_audio.predict(X_test_mic), rnn_amoh_nsa.predict(X_test_nsa)],
rnn_amoh_nb_params=rnn_amoh_audio.count_params())
###Output
_____no_output_____
###Markdown
RNN Basic
###Code
rnn_basic_audio = tf.keras.Sequential([
tf.keras.layers.LSTM(128, input_shape=(64, 64), return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(32),
tf.keras.layers.Dense(NB_CLASSES, activation='softmax')
])
rnn_basic_nsa = tf.keras.Sequential([
tf.keras.layers.LSTM(128, input_shape=(64, 64), return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(32),
tf.keras.layers.Dense(NB_CLASSES, activation='softmax')
])
rnn_basic_audio.load_weights(PATH_RNNBASIC_AUDIO)
rnn_basic_nsa.load_weights(PATH_RNNBASIC_NSA)
rnn_basic_audio.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
rnn_basic_nsa.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
d = dict(d,
rnn_basic_test_acc=[evaluate_model(rnn_basic_audio, X_test_mic, Y_test_mic), evaluate_model(rnn_basic_nsa, X_test_nsa, Y_test_nsa)],
rnn_basic_inf_time_cpu=[get_inference_time(rnn_basic_audio, image_size=(1, 64, 64), gpu=False), get_inference_time(rnn_basic_nsa, image_size=(1, 64, 64), gpu=False)],
rnn_basic_inf_time_gpu=[get_inference_time(rnn_basic_audio, image_size=(1, 64, 64), gpu=True), get_inference_time(rnn_basic_nsa, image_size=(1, 64, 64), gpu=True)],
rnn_basic_preds=[rnn_basic_audio.predict(X_test_mic), rnn_basic_nsa.predict(X_test_nsa)],
rnn_basic_nb_params=rnn_basic_audio.count_params())
###Output
_____no_output_____
###Markdown
Save results using 'flammkuchen'
###Code
fl.save("results_evaluate_models.vfp", d)
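# Note (an assumption about the flammkuchen API, added for reference): the saved
# dictionary can later be restored with the matching load call, e.g.
#   d = fl.load("results_evaluate_models.vfp")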
###Output
_____no_output_____ |
HW2/HW2 option 1.ipynb | ###Markdown
Import the dataset
###Code
buildings = pd.read_csv('building_inventory.csv')
buildings.head(10)
###Output
_____no_output_____
###Markdown
Relationship between the year acquired and the year constructed
###Code
buildings.plot(x="Year Acquired", y="Year Constructed",figsize=(10,6),kind='scatter')
###Output
_____no_output_____
###Markdown
By default, the graph generated a line plot and it displayed a rectangle pattern. The graph did not show any relationship between these two variables, so I plotted a scatter plot instead. Then I noticed four clusters based on the result. Each case represents a different scenario. We can see that in most cases, buildings are acquired in a year after construction (top right corner). The top left corner represents a few ongoing constructions where year acquired is 0. The bottom left data points show that both year constructed and year acquired are missing. The bottom right points represent buildings whose year constructed information is missing. Although we can see the four types of relationships between the two variables, one drawback is that we cannot obtain the exact information of each data point from this plot. Total square footage as a function of congressional district
###Code
re_dist = buildings.groupby("Congress Dist")["Square Footage"].sum()
re_dist.plot(figsize=(10,6))
###Output
_____no_output_____
###Markdown
1. try normal 2. groupby 3. try adding x and y in the plot function. Intuitively, we should use "nominal" since "Congress Dist" is categorical data. However, when I change the type of x from quantitative to nominal, the ordering of x gets jumbled. It looks like the JavaScript treats nominal data as strings when sorting, so the order would be "0,1,10,11……" rather than "0,1,2,3……". Average square footage per floor as a function of congressional district
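A quick way to see this string-sorting behaviour (a small plain-Python aside, not part of the assignment):
###Code
# Strings sort lexicographically, so '10' and '11' come before '2'.
print(sorted(['0', '1', '2', '3', '10', '11']))  # ['0', '1', '10', '11', '2', '3']
print(sorted([0, 1, 2, 3, 10, 11]))              # [0, 1, 2, 3, 10, 11]
###Output
_____no_output_____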
###Code
buildings['Average Square Footage'] = buildings['Square Footage']/buildings['Total Floors']
av_dist = buildings.groupby("Congress Dist")["Average Square Footage"].sum()
av_dist.plot(figsize=(10,6))
###Output
_____no_output_____ |
notebooks/introduction_to_tensorflow/labs/3_keras_sequential_api_vertex.ipynb | ###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras sequential API](https://keras.io/models/sequential/) allows you to create Tensorflow models layer-by-layer. This is useful for building most kinds of machine learning models but it does not allow you to create models that share layers, re-use layers or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras sequential api and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab.
###Code
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
# take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
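For orientation, one possible shape of such a dictionary comprehension (a hedged sketch using a hypothetical two-column list; the actual exercise below should use `INPUT_COLS`):
###Code
# Sketch only: one numeric feature column per raw input column.
demo_cols = ["pickup_longitude", "pickup_latitude"]
demo_feature_columns = {
    colname: tf.feature_column.numeric_column(colname) for colname in demo_cols
}
###Output
_____no_output_____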
###Code
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
###Output
_____no_output_____
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
###Output
_____no_output_____
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
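For reference, a minimal sketch of such a root-mean-squared-error metric (an illustration only; wiring it into `compile` is part of the exercise below):
###Code
# Sketch only: RMSE as a custom Keras metric.
def rmse_example(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# It could then be passed to compile via, e.g., metrics=["mse", rmse_example].
###Output
_____no_output_____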
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
###Output
_____no_output_____
###Markdown
Train the model To train your model, Keras provides three functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.fit_generator()` for training a model on data yielded batch-by-batch by a generator 3. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc) you will need to use `.fit_generator()` instead. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so we could use `.fit` to train our model. Our `create_dataset` function above generates batches of training examples, so we could also use `.fit_generator`. In fact, when calling `.fit` the method inspects the data, and if it's a generator (as our dataset is) it will automatically invoke `.fit_generator` for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Modelfit). Here `x` specifies the input data which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the `callback` argument we specify a Tensorboard callback so we can inspect Tensorboard after training. **Exercise.** In the cell below, you will train your model. First, define the `steps_per_epoch` then train your model using `.fit()`, saving the model training output to a variable called `history`.
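Following the virtual-epochs idea above, the number of batch steps per evaluation can be derived from the constants already defined (a sketch; the actual `.fit()` call is left to the exercise below):
###Code
# Sketch: one "virtual epoch" covers NUM_TRAIN_EXAMPLES / NUM_EVALS examples,
# so the number of batches per epoch would be:
steps_per_epoch_example = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
print(steps_per_epoch_example)
# A fit call could then look roughly like this (hypothetical shape, see the exercise):
# model.fit(x=trainds, steps_per_epoch=steps_per_epoch_example, epochs=NUM_EVALS,
#           validation_data=evalds, callbacks=[TensorBoard(LOGDIR)])
###Output
_____no_output_____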
###Code
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
###Output
_____no_output_____
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Modelpredict), passing to it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Here since we have just one example, we set `steps=1` (setting `steps=None` would also work). Note, however, that if x is a `tf.data` dataset or a dataset iterator, and steps is set to None, predict will run until the input dataset is exhausted.
###Code
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
###Output
_____no_output_____
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to a Tensorflow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below.Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_modeloverview_of_commands).
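For orientation, a hedged sketch of the two steps described above (the path is hypothetical; `serve` and `serving_default` are the usual SavedModel defaults):
###Code
# Sketch only (hypothetical path, illustrating the API shape):
# tf.saved_model.save(model, "./export/savedmodel/<timestamp>")
# !saved_model_cli show --dir ./export/savedmodel/<timestamp> \
#     --tag_set serve --signature_def serving_default
###Output
_____no_output_____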
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(
# TODO: Your code here
)
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
_____no_output_____
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
###Output
_____no_output_____
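###Markdown
For reference, a minimal sketch of how the two missing arguments could be filled in is shown below; it assumes the SavedModel was copied to `gs://BUCKET/MODEL_DISPLAYNAME` as in the `gsutil cp` cell above.
###Code
# Sketch: upload the exported SavedModel from GCS with a pre-built serving container.
uploaded_model = aiplatform.Model.upload(
    display_name=MODEL_DISPLAYNAME,
    artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
    serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
_____no_output_____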
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict(
# TODO: Your code here
)
###Output
_____no_output_____
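###Markdown
As a reference, online prediction expects a list of instances; a minimal sketch using the single `instance` dictionary defined above would be:
###Code
# Sketch: send the single example above to the deployed endpoint.
endpoint.predict(instances=[instance])
###Output
_____no_output_____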
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
_____no_output_____
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
_____no_output_____
###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras Sequential API](https://keras.io/models/sequential/) allows you to create TensorFlow models layer-by-layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab. If you cannot import `aiplatform`, uncomment and run the cell below to install it, then restart the kernel.
###Code
# pip install google-cloud-aiplatform
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
UNWANTED_COLS = ['pickup_datetime', 'key']
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch the next batch while the current one is consumed; tf.data.AUTOTUNE would tune this automatically
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
###Code
INPUT_COLS = [
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
###Output
_____no_output_____
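###Markdown
For reference, one possible completion is sketched below: a dictionary comprehension that builds one numeric feature column per entry of `INPUT_COLS`, keyed by column name.
###Code
# Sketch: one numeric feature column per raw input column.
feature_columns = {
    colname: tf.feature_column.numeric_column(colname)
    for colname in INPUT_COLS
}
###Output
_____no_output_____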
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
###Output
_____no_output_____
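###Markdown
For reference, a minimal sketch of one possible architecture is shown below: a `DenseFeatures` layer turns the feature-column dictionary into a dense tensor, followed by two small ReLU hidden layers and a single linear output for the fare. The layer sizes and names are illustrative choices, not the only valid ones.
###Code
# Sketch: small DNN regressor built with the Sequential API.
model = Sequential([
    DenseFeatures(feature_columns=feature_columns.values()),
    Dense(32, activation="relu", name="h1"),
    Dense(8, activation="relu", name="h2"),
    Dense(1, activation="linear", name="output"),
])
###Output
_____no_output_____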
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
###Output
_____no_output_____
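###Markdown
For reference, a minimal sketch: `rmse` is just the square root of the mean squared error between the labels and the predictions, and the model is compiled with the Adam optimizer and an MSE loss.
###Code
# Sketch: custom RMSE metric and model compilation.
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____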
###Markdown
Train the model To train your model, Keras provides three functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.fit_generator()` for training a model on data yielded batch-by-batch by a generator. 3. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc.) you will need to use `.fit_generator()` instead. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so we could use `.fit` to train our model. Our `create_dataset` function above generates batches of training examples, so we could also use `.fit_generator`. In fact, when calling `.fit`, the method inspects the data and, if it's a generator (as our dataset is), it will automatically invoke `.fit_generator` for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-train*',
batch_size=TRAIN_BATCH_SIZE,
mode='train')
evalds = create_dataset(
pattern='../data/taxi-valid*',
batch_size=1000,
mode='eval').take(NUM_EVAL_EXAMPLES//1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit). Here `x` specifies the input data, which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the `callbacks` argument we specify a TensorBoard callback so we can inspect TensorBoard after training. **Exercise.** In the cell below, you will train your model. First, define `steps_per_epoch`, then train your model using `.fit()`, saving the model training output to a variable called `history`.
###Code
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
###Output
_____no_output_____
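###Markdown
For reference, a minimal sketch of the training call is shown below. With a repeating training dataset, `steps_per_epoch` is chosen so that each "epoch" consumes `NUM_TRAIN_EXAMPLES / NUM_EVALS` examples, which makes the number of epochs equal to the number of evaluations.
###Code
# Sketch: train with a TensorBoard callback logging to LOGDIR.
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
    x=trainds,
    steps_per_epoch=steps_per_epoch,
    epochs=NUM_EVALS,
    validation_data=evalds,
    callbacks=[TensorBoard(LOGDIR)],
)
###Output
_____no_output_____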
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ['loss', 'val_loss']
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict), passing it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Since we have just one example here, we set `steps=1` (setting `steps=None` would also work). Note, however, that if `x` is a `tf.data` dataset or a dataset iterator and `steps` is set to `None`, predict will run until the input dataset is exhausted.
###Code
model.predict(x={"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0])},
steps=1)
###Output
_____no_output_____
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to the TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to the TensorFlow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_model#overview_of_commands).
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save( # TODO: Your code here
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
_____no_output_____
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-east1"
MODEL_DISPLAYNAME = "taxifare-" + TIMESTAMP
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest'
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH/* gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0
}
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict( # TODO: Your code here
###Output
_____no_output_____
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
_____no_output_____
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
_____no_output_____
###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras Sequential API](https://keras.io/models/sequential/) allows you to create TensorFlow models layer-by-layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab.
###Code
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
###Output
2.6.2
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
==> ../data/taxi-test.csv <==
6.0,2013-03-27 03:35:00 UTC,-73.977672,40.784052,-73.965332,40.801025,2,0
19.3,2012-05-10 18:43:16 UTC,-73.954366,40.778924,-74.004094,40.723104,1,1
7.5,2014-05-20 23:09:00 UTC,-73.999165,40.738377,-74.003473,40.723862,2,2
12.5,2015-02-23 19:51:31 UTC,-73.9652099609375,40.76948165893555,-73.98949432373047,40.739742279052734,1,3
10.9,2011-03-19 03:32:00 UTC,-73.99259,40.742957,-73.989908,40.711053,1,4
7.0,2012-09-18 12:51:11 UTC,-73.971195,40.751566,-73.975922,40.756361,1,5
19.0,2014-05-20 23:09:00 UTC,-73.998392,40.74517,-73.939845,40.74908,1,6
8.9,2012-07-18 08:46:08 UTC,-73.997638,40.756541,-73.973303,40.762019,1,7
4.5,2010-07-11 20:39:08 UTC,-73.976738,40.751321,-73.986671,40.74883,1,8
7.0,2013-12-12 02:16:40 UTC,-73.985024,40.767537,-73.981273,40.779302,1,9
==> ../data/taxi-train.csv <==
11.3,2011-01-28 20:42:59 UTC,-73.999022,40.739146,-73.990369,40.717866,1,0
7.7,2011-06-27 04:28:06 UTC,-73.987443,40.729221,-73.979013,40.758641,1,1
10.5,2011-04-03 00:54:53 UTC,-73.982539,40.735725,-73.954797,40.778388,1,2
16.2,2009-04-10 04:11:56 UTC,-74.001945,40.740505,-73.91385,40.758559,1,3
33.5,2014-02-24 18:22:00 UTC,-73.993372,40.753382,-73.8609,40.732897,2,4
6.9,2011-12-10 00:25:23 UTC,-73.996237,40.721848,-73.989416,40.718052,1,5
6.1,2012-09-01 14:30:19 UTC,-73.977048,40.758461,-73.984899,40.744693,2,6
9.5,2012-11-08 13:28:07 UTC,-73.969402,40.757545,-73.950049,40.776079,1,7
9.0,2014-07-15 11:37:25 UTC,-73.979318,40.760949,-73.95767,40.773724,1,8
3.3,2009-11-09 18:06:58 UTC,-73.955675,40.779154,-73.961172,40.772368,1,9
==> ../data/taxi-valid.csv <==
5.3,2012-01-03 19:21:35 UTC,-73.962627,40.763214,-73.973485,40.753353,1,0
25.3,2010-09-27 07:30:15 UTC,-73.965799,40.794243,-73.927134,40.852261,3,1
27.5,2015-05-19 00:40:02 UTC,-73.86344146728516,40.76899719238281,-73.96058654785156,40.76129913330078,1,2
5.7,2010-04-29 12:28:00 UTC,-73.989255,40.738912,-73.97558,40.749172,1,3
11.5,2013-06-23 06:08:09 UTC,-73.99731,40.763735,-73.955657,40.768141,1,4
18.0,2014-10-14 18:52:03 UTC,-73.997995,40.761638,-74.008985,40.712442,1,5
4.9,2010-04-29 12:28:00 UTC,-73.977315,40.766182,-73.970845,40.761462,5,6
32.33,2014-02-24 18:22:00 UTC,-73.985358,40.761352,-73.92427,40.699145,1,7
17.0,2015-03-26 02:48:58 UTC,-73.93981170654297,40.846473693847656,-73.97361755371094,40.786983489990234,1,8
12.5,2013-04-09 09:39:13 UTC,-73.977323,40.753934,-74.00719,40.741472,1,9
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch the next batch while the current one is consumed; tf.data.AUTOTUNE would tune this automatically
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
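###Markdown
As a quick sanity check (optional), we can pull a single small batch from `create_dataset` and print the feature dictionary and label tensor it yields.
###Code
# Optional sanity check (sketch): inspect one batch of two examples.
tmp_ds = create_dataset("../data/taxi-train*", batch_size=2, mode="train")
for features, label in tmp_ds.take(1):
    for name, tensor in features.items():
        print(name, tensor.numpy())
    print("fare_amount (label):", label.numpy())
###Output
_____no_output_____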
###Markdown
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
###Code
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = [tf.feature_column.numeric_column(col) for col in INPUT_COLS]
###Output
_____no_output_____
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
feature_layer = DenseFeatures(feature_columns)
model = tf.keras.Sequential([
feature_layer,
Dense(32, activation='relu'),
Dense(8, activation='relu'),
Dense(1, activation='linear')
])
###Output
2021-12-02 16:01:38.216741: I tensorflow/core/common_runtime/process_util.cc:146] Creating new thread pool with default inter op setting: 2. Tune using inter_op_parallelism_threads for best performance.
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____
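###Markdown
As a quick optional check that the custom metric behaves as expected: for predictions [3, 5] against labels [1, 5], the squared errors are 4 and 0, so the RMSE should be sqrt(2) ≈ 1.414.
###Code
# Optional check (sketch): rmse on constant tensors.
y_true = tf.constant([[1.0], [5.0]])
y_pred = tf.constant([[3.0], [5.0]])
print(rmse(y_true, y_pred).numpy())  # expect ~1.4142
###Output
_____no_output_____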
###Markdown
Train the model To train your model, Keras provides three functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.fit_generator()` for training a model on data yielded batch-by-batch by a generator. 3. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc.) you will need to use `.fit_generator()` instead. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so we could use `.fit` to train our model. Our `create_dataset` function above generates batches of training examples, so we could also use `.fit_generator`. In fact, when calling `.fit`, the method inspects the data and, if it's a generator (as our dataset is), it will automatically invoke `.fit_generator` for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit). Here `x` specifies the input data, which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the `callbacks` argument we specify a TensorBoard callback so we can inspect TensorBoard after training. **Exercise.** In the cell below, you will train your model. First, define `steps_per_epoch`, then train your model using `.fit()`, saving the model training output to a variable called `history`.
###Code
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)]
)
###Output
2021-12-02 16:01:38.403543: I tensorflow/core/profiler/lib/profiler_session.cc:131] Profiler session initializing.
2021-12-02 16:01:38.403579: I tensorflow/core/profiler/lib/profiler_session.cc:146] Profiler session started.
2021-12-02 16:01:38.404180: I tensorflow/core/profiler/lib/profiler_session.cc:164] Profiler session tear down.
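###Markdown
Because the `TensorBoard` callback above logs to `./taxi_trained`, the training and validation curves can also be inspected interactively. A minimal sketch, assuming the TensorBoard notebook extension is available in this environment:
###Code
# Optional (sketch): open TensorBoard on the logs written during training.
%load_ext tensorboard
%tensorboard --logdir ./taxi_trained
###Output
_____no_output_____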
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_features (DenseFeature multiple 0
_________________________________________________________________
dense (Dense) multiple 192
_________________________________________________________________
dense_1 (Dense) multiple 264
_________________________________________________________________
dense_2 (Dense) multiple 9
=================================================================
Total params: 465
Trainable params: 465
Non-trainable params: 0
_________________________________________________________________
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict), passing it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Since we have just one example here, we set `steps=1` (setting `steps=None` would also work). Note, however, that if `x` is a `tf.data` dataset or a dataset iterator and `steps` is set to `None`, predict will run until the input dataset is exhausted.
###Code
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
###Output
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'pickup_longitude': <tf.Tensor 'ExpandDims_4:0' shape=(1, 1) dtype=float32>, 'pickup_latitude': <tf.Tensor 'ExpandDims_3:0' shape=(1, 1) dtype=float32>, 'dropoff_longitude': <tf.Tensor 'ExpandDims_1:0' shape=(1, 1) dtype=float32>, 'dropoff_latitude': <tf.Tensor 'ExpandDims:0' shape=(1, 1) dtype=float32>, 'passenger_count': <tf.Tensor 'ExpandDims_2:0' shape=(1, 1) dtype=float32>}
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'pickup_longitude': <tf.Tensor 'ExpandDims_4:0' shape=(1, 1) dtype=float32>, 'pickup_latitude': <tf.Tensor 'ExpandDims_3:0' shape=(1, 1) dtype=float32>, 'dropoff_longitude': <tf.Tensor 'ExpandDims_1:0' shape=(1, 1) dtype=float32>, 'dropoff_latitude': <tf.Tensor 'ExpandDims:0' shape=(1, 1) dtype=float32>, 'passenger_count': <tf.Tensor 'ExpandDims_2:0' shape=(1, 1) dtype=float32>}
Consider rewriting this model with the Functional API.
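###Markdown
The warnings above are raised because a `Sequential` model nominally expects a single input tensor rather than a dictionary of features. A rough, illustrative sketch of an equivalent Functional API model, with one named `keras.Input` per raw feature, could look like the following (the variable names are made up for illustration):
###Code
# Sketch: Functional API equivalent with explicit named inputs,
# which avoids the Sequential-model warning for dict inputs.
inputs = {
    colname: keras.Input(shape=(1,), name=colname, dtype="float32")
    for colname in INPUT_COLS
}
x = DenseFeatures(feature_columns)(inputs)
x = Dense(32, activation="relu")(x)
x = Dense(8, activation="relu")(x)
output = Dense(1, activation="linear")(x)
functional_model = keras.Model(inputs=inputs, outputs=output)
functional_model.compile(optimizer="adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____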
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to the TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to the TensorFlow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_model#overview_of_commands).
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(model, EXPORT_PATH)
!saved_model_cli show \
    --tag_set serve \
    --signature_def serving_default \
    --dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
Traceback (most recent call last):
File "/opt/conda/bin/saved_model_cli", line 10, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_cli.py", line 1204, in main
args.func(args)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_cli.py", line 738, in show
_show_signature_def_map_keys(args.dir, args.tag_set)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_cli.py", line 92, in _show_signature_def_map_keys
signature_def_map = get_signature_def_map(saved_model_dir, tag_set)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_cli.py", line 348, in get_signature_def_map
meta_graph = saved_model_utils.get_meta_graph_def(saved_model_dir, tag_set)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_utils.py", line 117, in get_meta_graph_def
saved_model = read_saved_model(saved_model_dir)
File "/opt/conda/lib/python3.7/site-packages/tensorflow/python/tools/saved_model_utils.py", line 55, in read_saved_model
raise IOError("SavedModel file does not exist at: %s" % saved_model_dir)
OSError: SavedModel file does not exist at: EXPORT_PATH
./export/savedmodel/20211202160231
./export/savedmodel/20211202160231/assets
./export/savedmodel/20211202160231/saved_model.pb
./export/savedmodel/20211202160231/variables
./export/savedmodel/20211202160231/variables/variables.index
./export/savedmodel/20211202160231/variables/variables.data-00000-of-00001
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
Copying file://./export/savedmodel/20211202160231/saved_model.pb [Content-Type=application/octet-stream]...
Copying file://./export/savedmodel/20211202160231/variables/variables.index [Content-Type=application/octet-stream]...
Copying file://./export/savedmodel/20211202160231/variables/variables.data-00000-of-00001 [Content-Type=application/octet-stream]...
/ [3 files][205.2 KiB/205.2 KiB]
Operation completed over 3 objects/205.2 KiB.
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
    artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
    serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict(instances=[instance])
###Output
_____no_output_____
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
INFO:google.cloud.aiplatform.models:Undeploying Endpoint model: projects/961854438162/locations/us-central1/endpoints/1653815021756481536
INFO:google.cloud.aiplatform.models:Undeploy Endpoint model backing LRO: projects/961854438162/locations/us-central1/endpoints/1653815021756481536/operations/1584109351809843200
INFO:google.cloud.aiplatform.models:Endpoint model undeployed. Resource name: projects/961854438162/locations/us-central1/endpoints/1653815021756481536
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
INFO:google.cloud.aiplatform.base:Deleting Endpoint : projects/961854438162/locations/us-central1/endpoints/1653815021756481536
INFO:google.cloud.aiplatform.base:Delete Endpoint backing LRO: projects/961854438162/locations/us-central1/operations/552785037141999616
INFO:google.cloud.aiplatform.base:Endpoint deleted. . Resource name: projects/961854438162/locations/us-central1/endpoints/1653815021756481536
###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras Sequential API](https://keras.io/models/sequential/) allows you to create TensorFlow models layer-by-layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab. If you cannot import `aiplatform`, uncomment and run the cell below to install it, then restart the kernel.
###Code
# pip install google-cloud-aiplatform
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
###Output
2.3.4
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
==> ../data/taxi-test.csv <==
6.0,2013-03-27 03:35:00 UTC,-73.977672,40.784052,-73.965332,40.801025,2,0
19.3,2012-05-10 18:43:16 UTC,-73.954366,40.778924,-74.004094,40.723104,1,1
7.5,2014-05-20 23:09:00 UTC,-73.999165,40.738377,-74.003473,40.723862,2,2
12.5,2015-02-23 19:51:31 UTC,-73.9652099609375,40.76948165893555,-73.98949432373047,40.739742279052734,1,3
10.9,2011-03-19 03:32:00 UTC,-73.99259,40.742957,-73.989908,40.711053,1,4
7.0,2012-09-18 12:51:11 UTC,-73.971195,40.751566,-73.975922,40.756361,1,5
19.0,2014-05-20 23:09:00 UTC,-73.998392,40.74517,-73.939845,40.74908,1,6
8.9,2012-07-18 08:46:08 UTC,-73.997638,40.756541,-73.973303,40.762019,1,7
4.5,2010-07-11 20:39:08 UTC,-73.976738,40.751321,-73.986671,40.74883,1,8
7.0,2013-12-12 02:16:40 UTC,-73.985024,40.767537,-73.981273,40.779302,1,9
==> ../data/taxi-train.csv <==
11.3,2011-01-28 20:42:59 UTC,-73.999022,40.739146,-73.990369,40.717866,1,0
7.7,2011-06-27 04:28:06 UTC,-73.987443,40.729221,-73.979013,40.758641,1,1
10.5,2011-04-03 00:54:53 UTC,-73.982539,40.735725,-73.954797,40.778388,1,2
16.2,2009-04-10 04:11:56 UTC,-74.001945,40.740505,-73.91385,40.758559,1,3
33.5,2014-02-24 18:22:00 UTC,-73.993372,40.753382,-73.8609,40.732897,2,4
6.9,2011-12-10 00:25:23 UTC,-73.996237,40.721848,-73.989416,40.718052,1,5
6.1,2012-09-01 14:30:19 UTC,-73.977048,40.758461,-73.984899,40.744693,2,6
9.5,2012-11-08 13:28:07 UTC,-73.969402,40.757545,-73.950049,40.776079,1,7
9.0,2014-07-15 11:37:25 UTC,-73.979318,40.760949,-73.95767,40.773724,1,8
3.3,2009-11-09 18:06:58 UTC,-73.955675,40.779154,-73.961172,40.772368,1,9
==> ../data/taxi-valid.csv <==
5.3,2012-01-03 19:21:35 UTC,-73.962627,40.763214,-73.973485,40.753353,1,0
25.3,2010-09-27 07:30:15 UTC,-73.965799,40.794243,-73.927134,40.852261,3,1
27.5,2015-05-19 00:40:02 UTC,-73.86344146728516,40.76899719238281,-73.96058654785156,40.76129913330078,1,2
5.7,2010-04-29 12:28:00 UTC,-73.989255,40.738912,-73.97558,40.749172,1,3
11.5,2013-06-23 06:08:09 UTC,-73.99731,40.763735,-73.955657,40.768141,1,4
18.0,2014-10-14 18:52:03 UTC,-73.997995,40.761638,-74.008985,40.712442,1,5
4.9,2010-04-29 12:28:00 UTC,-73.977315,40.766182,-73.970845,40.761462,5,6
32.33,2014-02-24 18:22:00 UTC,-73.985358,40.761352,-73.92427,40.699145,1,7
17.0,2015-03-26 02:48:58 UTC,-73.93981170654297,40.846473693847656,-73.97361755371094,40.786983489990234,1,8
12.5,2013-04-09 09:39:13 UTC,-73.977323,40.753934,-74.00719,40.741472,1,9
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
'fare_amount',
'pickup_datetime',
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
'key'
]
LABEL_COLUMN = 'fare_amount'
DEFAULTS = [[0.0], ['na'], [0.0], [0.0], [0.0], [0.0], [0.0], ['na']]
UNWANTED_COLS = ['pickup_datetime', 'key']
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode='eval'):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS)
dataset = dataset.map(features_and_labels)
if mode == 'train':
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch the next batch while the current one is consumed; tf.data.AUTOTUNE would tune this automatically
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Build a simple keras DNN model We will use feature columns to connect our raw data to our keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
###Code
INPUT_COLS = [
'pickup_longitude',
'pickup_latitude',
'dropoff_longitude',
'dropoff_latitude',
'passenger_count',
]
# Create input layer of feature columns
feature_columns = {
colname: tf.feature_column.numeric_column(colname)
for colname in INPUT_COLS
}
feature_columns
###Output
_____no_output_____
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
model = tf.keras.models.Sequential([
DenseFeatures(feature_columns=feature_columns.values()),
tf.keras.layers.Dense(28, activation='relu', name='hidden1'),
tf.keras.layers.Dense(1, activation='linear', name='output')
])
###Output
_____no_output_____
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))
# Compile the keras model
model.compile("adam", loss="mse", metrics=[rmse, "mse"])
###Output
_____no_output_____
###Markdown
Train the model To train your model, Keras provides three functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.fit_generator()` for training a model on data yielded batch-by-batch by a generator. 3. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc.) you will need to use `.fit_generator()` instead. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so we could use `.fit` to train our model. Our `create_dataset` function above generates batches of training examples, so we could also use `.fit_generator`. In fact, when calling `.fit`, the method inspects the data and, if it's a generator (as our dataset is), it will automatically invoke `.fit_generator` for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern='../data/taxi-train*',
batch_size=TRAIN_BATCH_SIZE,
mode='train')
evalds = create_dataset(
pattern='../data/taxi-valid*',
batch_size=1000,
mode='eval').take(NUM_EVAL_EXAMPLES//1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit). Here `x` specifies the input data, which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for NUM_EVALS epochs. Lastly, for the `callbacks` argument we specify a TensorBoard callback so we can inspect TensorBoard after training. **Exercise.** In the cell below, you will train your model. First, define `steps_per_epoch`, then train your model using `.fit()`, saving the model training output to a variable called `history`.
###Code
%%time
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
LOGDIR = "./taxi_trained"
history = model.fit(
x=trainds,
steps_per_epoch=steps_per_epoch,
epochs=NUM_EVALS,
validation_data=evalds,
callbacks=[TensorBoard(LOGDIR)]
)
###Output
Epoch 1/50
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'collections.OrderedDict'> input: OrderedDict([('pickup_longitude', <tf.Tensor 'ExpandDims_4:0' shape=(1000, 1) dtype=float32>), ('pickup_latitude', <tf.Tensor 'ExpandDims_3:0' shape=(1000, 1) dtype=float32>), ('dropoff_longitude', <tf.Tensor 'ExpandDims_1:0' shape=(1000, 1) dtype=float32>), ('dropoff_latitude', <tf.Tensor 'ExpandDims:0' shape=(1000, 1) dtype=float32>), ('passenger_count', <tf.Tensor 'ExpandDims_2:0' shape=(1000, 1) dtype=float32>)])
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'collections.OrderedDict'> input: OrderedDict([('pickup_longitude', <tf.Tensor 'ExpandDims_4:0' shape=(1000, 1) dtype=float32>), ('pickup_latitude', <tf.Tensor 'ExpandDims_3:0' shape=(1000, 1) dtype=float32>), ('dropoff_longitude', <tf.Tensor 'ExpandDims_1:0' shape=(1000, 1) dtype=float32>), ('dropoff_latitude', <tf.Tensor 'ExpandDims:0' shape=(1000, 1) dtype=float32>), ('passenger_count', <tf.Tensor 'ExpandDims_2:0' shape=(1000, 1) dtype=float32>)])
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'collections.OrderedDict'> input: OrderedDict([('pickup_longitude', <tf.Tensor 'ExpandDims_4:0' shape=(1000, 1) dtype=float32>), ('pickup_latitude', <tf.Tensor 'ExpandDims_3:0' shape=(1000, 1) dtype=float32>), ('dropoff_longitude', <tf.Tensor 'ExpandDims_1:0' shape=(1000, 1) dtype=float32>), ('dropoff_latitude', <tf.Tensor 'ExpandDims:0' shape=(1000, 1) dtype=float32>), ('passenger_count', <tf.Tensor 'ExpandDims_2:0' shape=(1000, 1) dtype=float32>)])
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'collections.OrderedDict'> input: OrderedDict([('pickup_longitude', <tf.Tensor 'ExpandDims_4:0' shape=(1000, 1) dtype=float32>), ('pickup_latitude', <tf.Tensor 'ExpandDims_3:0' shape=(1000, 1) dtype=float32>), ('dropoff_longitude', <tf.Tensor 'ExpandDims_1:0' shape=(1000, 1) dtype=float32>), ('dropoff_latitude', <tf.Tensor 'ExpandDims:0' shape=(1000, 1) dtype=float32>), ('passenger_count', <tf.Tensor 'ExpandDims_2:0' shape=(1000, 1) dtype=float32>)])
Consider rewriting this model with the Functional API.
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_features_3 (DenseFeatu multiple 0
_________________________________________________________________
hidden1 (Dense) multiple 168
_________________________________________________________________
output (Dense) multiple 29
=================================================================
Total params: 197
Trainable params: 197
Non-trainable params: 0
_________________________________________________________________
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ['rmse', 'val_rmse']
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ['loss', 'val_loss']
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict), passing it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Since we have just one example here, we set `steps=1` (setting `steps=None` would also work). Note, however, that if `x` is a `tf.data` dataset or a dataset iterator and `steps` is set to `None`, predict will run until the input dataset is exhausted.
###Code
model.predict(x={"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0])},
steps=1)
###Output
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'pickup_longitude': <tf.Tensor 'ExpandDims_4:0' shape=(1, 1) dtype=float32>, 'pickup_latitude': <tf.Tensor 'ExpandDims_3:0' shape=(1, 1) dtype=float32>, 'dropoff_longitude': <tf.Tensor 'ExpandDims_1:0' shape=(1, 1) dtype=float32>, 'dropoff_latitude': <tf.Tensor 'ExpandDims:0' shape=(1, 1) dtype=float32>, 'passenger_count': <tf.Tensor 'ExpandDims_2:0' shape=(1, 1) dtype=float32>}
Consider rewriting this model with the Functional API.
WARNING:tensorflow:Layers in a Sequential model should only have a single input tensor, but we receive a <class 'dict'> input: {'pickup_longitude': <tf.Tensor 'ExpandDims_4:0' shape=(1, 1) dtype=float32>, 'pickup_latitude': <tf.Tensor 'ExpandDims_3:0' shape=(1, 1) dtype=float32>, 'dropoff_longitude': <tf.Tensor 'ExpandDims_1:0' shape=(1, 1) dtype=float32>, 'dropoff_latitude': <tf.Tensor 'ExpandDims:0' shape=(1, 1) dtype=float32>, 'passenger_count': <tf.Tensor 'ExpandDims_2:0' shape=(1, 1) dtype=float32>}
Consider rewriting this model with the Functional API.
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to a TensorFlow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_model#overview_of_commands).
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(model, EXPORT_PATH)
!saved_model_cli show \
--tag_set serve \
--signature_def serving_default \
--dir {EXPORT_PATH}
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
The given SavedModel SignatureDef contains the following input(s):
inputs['dropoff_latitude'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_dropoff_latitude:0
inputs['dropoff_longitude'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_dropoff_longitude:0
inputs['passenger_count'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_passenger_count:0
inputs['pickup_latitude'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_pickup_latitude:0
inputs['pickup_longitude'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: serving_default_pickup_longitude:0
The given SavedModel SignatureDef contains the following output(s):
outputs['output_1'] tensor_info:
dtype: DT_FLOAT
shape: (-1, 1)
name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
./export/savedmodel/20211006191449
./export/savedmodel/20211006191449/assets
./export/savedmodel/20211006191449/saved_model.pb
./export/savedmodel/20211006191449/variables
./export/savedmodel/20211006191449/variables/variables.index
./export/savedmodel/20211006191449/variables/variables.data-00000-of-00001
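###Markdown
As an aside on the point above about client code instantiating the model from the exported file: here is a minimal sketch (not part of the original lab) of loading the SavedModel back and inspecting its serving signature.
###Code
# Sketch only: reload the exported SavedModel and look at its serving signature
reloaded = tf.saved_model.load(EXPORT_PATH)
serving_fn = reloaded.signatures["serving_default"]
print(serving_fn.structured_input_signature)
###Output
_____no_output_____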
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-east1"
MODEL_DISPLAYNAME = "taxifare-" + TIMESTAMP
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = 'us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest'
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
## Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH/* gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
Copying file://./export/savedmodel/20211006191449/saved_model.pb [Content-Type=application/octet-stream]...
Copying file://./export/savedmodel/20211006191449/variables/variables.index [Content-Type=application/octet-stream]...
Copying file://./export/savedmodel/20211006191449/variables/variables.data-00000-of-00001 [Content-Type=application/octet-stream]...
/ [3 files][182.4 KiB/182.4 KiB]
Operation completed over 3 objects/182.4 KiB.
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=f"{MODEL_DISPLAYNAME}",
artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0
}
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict([instance])
###Output
_____no_output_____
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
INFO:google.cloud.aiplatform.models:Undeploying Endpoint model: projects/432069008306/locations/us-central1/endpoints/224599439229059072
INFO:google.cloud.aiplatform.models:Undeploy Endpoint model backing LRO: projects/432069008306/locations/us-central1/endpoints/224599439229059072/operations/3199323475252609024
INFO:google.cloud.aiplatform.models:Endpoint model undeployed. Resource name: projects/432069008306/locations/us-central1/endpoints/224599439229059072
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
INFO:google.cloud.aiplatform.base:Deleting Endpoint : projects/432069008306/locations/us-central1/endpoints/224599439229059072
INFO:google.cloud.aiplatform.base:Delete Endpoint backing LRO: projects/432069008306/locations/us-central1/operations/7648879907094659072
INFO:google.cloud.aiplatform.base:Endpoint deleted. . Resource name: projects/432069008306/locations/us-central1/endpoints/224599439229059072
###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras sequential API](https://keras.io/models/sequential/) allows you to create TensorFlow models layer-by-layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab.
###Code
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.models import Sequential
print(tf.__version__)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch so input preparation overlaps with training (tf.data.AUTOTUNE would let TF tune this)
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Build a simple Keras DNN model We will use feature columns to connect our raw data to our Keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings, and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
###Code
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
###Output
_____no_output_____
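###Markdown
Here is a minimal sketch of one possible way to fill in the exercise above (the name `example_feature_columns` is just for illustration): a dictionary comprehension over `INPUT_COLS` where each value is a numeric feature column.
###Code
# Hedged sketch: map each raw column name to a numeric feature column of the same name
example_feature_columns = {
    colname: tf.feature_column.numeric_column(colname) for colname in INPUT_COLS
}
###Output
_____no_output_____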
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
###Output
_____no_output_____
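###Markdown
As a hedged sketch (the layer sizes here are illustrative, not a prescribed answer), one way to assemble the network is to start with a `DenseFeatures` layer that consumes the feature columns, follow it with one or more `Dense` hidden layers, and end with a single linear output unit for the fare prediction.
###Code
# Sketch only: builds on the example_feature_columns dictionary from the sketch above
example_model = Sequential([
    DenseFeatures(feature_columns=example_feature_columns.values()),
    Dense(32, activation="relu", name="hidden1"),
    Dense(8, activation="relu", name="hidden2"),
    Dense(1, activation="linear", name="output"),
])
###Output
_____no_output_____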
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
###Output
_____no_output_____
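###Markdown
A minimal sketch of the metric and the compile call (the optimizer and loss choices below are illustrative assumptions, not the only valid ones). Note that a metric's function name is what shows up in the training history, so in the exercise the function should be named `rmse` if you want the `rmse`/`val_rmse` keys used in the plotting cells later.
###Code
# Sketch: root mean squared error built from TensorFlow ops
def example_rmse(y_true, y_pred):
    return tf.sqrt(tf.reduce_mean(tf.square(y_pred - y_true)))

# Sketch: compile with a standard regression loss; "adam" is an assumption here
example_model.compile(optimizer="adam", loss="mse", metrics=[example_rmse])
###Output
_____no_output_____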
###Markdown
Train the model To train your model, Keras provides two functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works for various formats of data, such as NumPy arrays, lists of Tensors, `tf.data` datasets, and Python generators. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. Our `create_dataset` function above generates batches of training examples, so we can use `.fit`. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit). Here `x` specifies the input data, which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for `NUM_EVALS` epochs. Lastly, for the `callbacks` argument we specify a TensorBoard callback so we can inspect TensorBoard after training. **Exercise.** In the cell below, you will train your model. First, define the `steps_per_epoch`, then train your model using `.fit()`, saving the model training output to a variable called `history`.
###Code
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
###Output
_____no_output_____
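###Markdown
A hedged sketch of what this cell might look like (names prefixed with `example_` are illustrative): `steps_per_epoch` spreads `NUM_TRAIN_EXAMPLES` across `NUM_EVALS` "virtual epochs" of batches of size `TRAIN_BATCH_SIZE`, and a TensorBoard callback logs to the `./taxi_trained` directory.
###Code
# Sketch: 50,000 examples / (1,000 per batch * 50 evaluations) = 1 step per "epoch" here
example_steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)

example_history = example_model.fit(
    x=trainds,
    steps_per_epoch=example_steps_per_epoch,
    epochs=NUM_EVALS,
    validation_data=evalds,
    callbacks=[TensorBoard("./taxi_trained")],
    verbose=2,
)
###Output
_____no_output_____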
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict), passing to it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Here, since we have just one example, we set `steps=1` (setting `steps=None` would also work). Note, however, that if `x` is a `tf.data` dataset or a dataset iterator, and `steps` is set to `None`, predict will run until the input dataset is exhausted.
###Code
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
###Output
_____no_output_____
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to a TensorFlow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_model#overview_of_commands).
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(
# TODO: Your code here
)
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
_____no_output_____
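###Markdown
For reference, a sketch consistent with the completed run shown earlier in this document (it uses the `example_model` from the sketches above; in the exercise you would pass your own `model`): hand `tf.saved_model.save` the model object and the export path, then point `saved_model_cli show` at that directory with the `serve` tag-set and the `serving_default` signature.
###Code
# Sketch of the export and inspection steps
tf.saved_model.save(example_model, EXPORT_PATH)

!saved_model_cli show \
    --tag_set serve \
    --signature_def serving_default \
    --dir {EXPORT_PATH}
###Output
_____no_output_____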
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = f"taxifare-{TIMESTAMP}"
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
###Output
_____no_output_____
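###Markdown
A sketch of the two arguments the exercise asks for, consistent with the completed run earlier in this document: the artifact URI is the GCS folder the SavedModel was copied into, and the serving container is the pre-built TF2 CPU image defined above.
###Code
# Sketch only: one way to fill in the TODOs from the cell above
example_uploaded_model = aiplatform.Model.upload(
    display_name=MODEL_DISPLAYNAME,
    artifact_uri=f"gs://{BUCKET}/{MODEL_DISPLAYNAME}",
    serving_container_image_uri=SERVING_CONTAINER_IMAGE_URI,
)
###Output
_____no_output_____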
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict(
# TODO: Your code here
)
###Output
_____no_output_____
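###Markdown
Consistent with the completed run earlier in this document, `Endpoint.predict` takes a list of instances, so a one-line sketch is enough here.
###Code
# Sketch: send the single instance defined above to the deployed endpoint
example_prediction = endpoint.predict([instance])
print(example_prediction)
###Output
_____no_output_____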
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
_____no_output_____
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
_____no_output_____
###Markdown
Introducing the Keras Sequential API **Learning Objectives** 1. Build a DNN model using the Keras Sequential API 1. Learn how to use feature columns in a Keras model 1. Learn how to train a model with Keras 1. Learn how to save/load, and deploy a Keras model on GCP 1. Learn how to deploy and make predictions with the Keras model Introduction The [Keras sequential API](https://keras.io/models/sequential/) allows you to create TensorFlow models layer-by-layer. This is useful for building most kinds of machine learning models, but it does not allow you to create models that share layers, re-use layers, or have multiple inputs or outputs. In this lab, we'll see how to build a simple deep neural network model using the Keras Sequential API and feature columns. Once we have trained our model, we will deploy it using Vertex AI and see how to call our model for online prediction. Start by importing the necessary libraries for this lab. If you cannot import `aiplatform`, uncomment and run the cell below to install it, and restart the kernel.
###Code
# pip install google-cloud-aiplatform
import datetime
import os
import shutil
import numpy as np
import pandas as pd
import tensorflow as tf
from google.cloud import aiplatform
from matplotlib import pyplot as plt
from tensorflow import keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, DenseFeatures
from tensorflow.keras.callbacks import TensorBoard
print(tf.__version__)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Load raw data We will use the taxifare dataset, using the CSV files that we created in the first notebook of this sequence. Those files have been saved into `../data`.
###Code
!ls -l ../data/*.csv
!head ../data/taxi*.csv
###Output
_____no_output_____
###Markdown
Use tf.data to read the CSV files We wrote these functions for reading data from the csv files above in the [previous notebook](./2a_dataset_api.ipynb).
###Code
CSV_COLUMNS = [
"fare_amount",
"pickup_datetime",
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
"key",
]
LABEL_COLUMN = "fare_amount"
DEFAULTS = [[0.0], ["na"], [0.0], [0.0], [0.0], [0.0], [0.0], ["na"]]
UNWANTED_COLS = ["pickup_datetime", "key"]
def features_and_labels(row_data):
label = row_data.pop(LABEL_COLUMN)
features = row_data
for unwanted_col in UNWANTED_COLS:
features.pop(unwanted_col)
return features, label
def create_dataset(pattern, batch_size=1, mode="eval"):
dataset = tf.data.experimental.make_csv_dataset(
pattern, batch_size, CSV_COLUMNS, DEFAULTS
)
dataset = dataset.map(features_and_labels)
if mode == "train":
dataset = dataset.shuffle(buffer_size=1000).repeat()
    # prefetch 1 batch so input preparation overlaps with training (tf.data.AUTOTUNE would let TF tune this)
dataset = dataset.prefetch(1)
return dataset
###Output
_____no_output_____
###Markdown
Build a simple Keras DNN model We will use feature columns to connect our raw data to our Keras DNN model. Feature columns make it easy to perform common types of feature engineering on your raw data. For example, you can one-hot encode categorical data, create feature crosses, embeddings, and more. We'll cover these in more detail later in the course, but if you want a sneak peek, browse the official TensorFlow [feature columns guide](https://www.tensorflow.org/guide/feature_columns). In our case we won't do any feature engineering. However, we still need to create a list of feature columns to specify the numeric values which will be passed on to our model. To do this, we use `tf.feature_column.numeric_column()`. We use a Python dictionary comprehension to create the feature columns for our model, which is just an elegant alternative to a for loop. **Exercise.** Create a feature column dictionary that we will use when building our deep neural network below. The keys should be the elements of the `INPUT_COLS` list, while the values should be numeric feature columns.
###Code
INPUT_COLS = [
"pickup_longitude",
"pickup_latitude",
"dropoff_longitude",
"dropoff_latitude",
"passenger_count",
]
# Create input layer of feature columns
feature_columns = # TODO: Your code here
###Output
_____no_output_____
###Markdown
Next, we create the DNN model. The Sequential model is a linear stack of layers and when building a model using the Sequential API, you configure each layer of the model in turn. Once all the layers have been added, you compile the model. **Exercise.** Create a deep neural network using Keras's Sequential API. In the cell below, use the `tf.keras.layers` library to create all the layers for your deep neural network.
###Code
# Build a keras DNN model using Sequential API
model = # TODO: Your code here
###Output
_____no_output_____
###Markdown
Next, to prepare the model for training, you must configure the learning process. This is done using the compile method. The compile method takes three arguments:* An optimizer. This could be the string identifier of an existing optimizer (such as `rmsprop` or `adagrad`), or an instance of the [Optimizer class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/optimizers).* A loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function from the [Losses class](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/losses) (such as categorical_crossentropy or mse), or it can be a custom objective function.* A list of metrics. For any machine learning problem you will want a set of metrics to evaluate your model. A metric could be the string identifier of an existing metric or a custom metric function.We will add an additional custom metric called `rmse` to our list of metrics which will return the root mean square error. **Exercise.** Compile the model you created above. Create a custom loss function called `rmse` which computes the root mean squared error between `y_true` and `y_pred`. Pass this function to the model as an evaluation metric.
###Code
# Create a custom evaluation metric
def rmse(y_true, y_pred):
return # TODO: Your code here
# Compile the keras model
# TODO: Your code here
###Output
_____no_output_____
###Markdown
Train the model To train your model, Keras provides three functions that can be used: 1. `.fit()` for training a model for a fixed number of epochs (iterations on a dataset). 2. `.fit_generator()` for training a model on data yielded batch-by-batch by a generator. 3. `.train_on_batch()` runs a single gradient update on a single batch of data. The `.fit()` function works well for small datasets which can fit entirely in memory. However, for large datasets (or if you need to manipulate the training data on the fly via data augmentation, etc.) you will need to use `.fit_generator()` instead. The `.train_on_batch()` method is for more fine-grained control over training and accepts only a single batch of data. The taxifare dataset we sampled is small enough to fit in memory, so we could use `.fit` to train our model. Our `create_dataset` function above generates batches of training examples, so we could also use `.fit_generator`. In fact, when calling `.fit` the method inspects the data, and if it's a generator (as our dataset is) it will automatically invoke `.fit_generator` for training. We start by setting up some parameters for our training job and create the data generators for the training and validation data. We refer you to the blog post [ML Design Pattern 3: Virtual Epochs](https://medium.com/google-cloud/ml-design-pattern-3-virtual-epochs-f842296de730) for further details on why we express the training in terms of `NUM_TRAIN_EXAMPLES` and `NUM_EVALS` and why, in this training code, the number of epochs is really equal to the number of evaluations we perform.
###Code
TRAIN_BATCH_SIZE = 1000
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset will repeat, wrap around
NUM_EVALS = 50 # how many times to evaluate
NUM_EVAL_EXAMPLES = 10000 # enough to get a reasonable sample
trainds = create_dataset(
pattern="../data/taxi-train*", batch_size=TRAIN_BATCH_SIZE, mode="train"
)
evalds = create_dataset(
pattern="../data/taxi-valid*", batch_size=1000, mode="eval"
).take(NUM_EVAL_EXAMPLES // 1000)
###Output
_____no_output_____
###Markdown
There are various arguments you can set when calling the [.fit method](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/Model#fit). Here `x` specifies the input data, which in our case is a `tf.data` dataset returning a tuple of (inputs, targets). The `steps_per_epoch` parameter is used to mark the end of training for a single epoch. Here we are training for `NUM_EVALS` epochs. Lastly, for the `callbacks` argument we specify a TensorBoard callback so we can inspect TensorBoard after training. **Exercise.** In the cell below, you will train your model. First, define the `steps_per_epoch`, then train your model using `.fit()`, saving the model training output to a variable called `history`.
###Code
%%time
steps_per_epoch = # TODO: Your code here
LOGDIR = "./taxi_trained"
history = # TODO: Your code here
###Output
_____no_output_____
###Markdown
High-level model evaluation Once we've run data through the model, we can call `.summary()` on the model to get a high-level summary of our network. We can also plot the training and evaluation curves for the metrics we computed above.
###Code
model.summary()
###Output
_____no_output_____
###Markdown
Running `.fit` (or `.fit_generator`) returns a History object which collects all the events recorded during training. Similar to Tensorboard, we can plot the training and validation curves for the model loss and rmse by accessing these elements of the History object.
###Code
RMSE_COLS = ["rmse", "val_rmse"]
pd.DataFrame(history.history)[RMSE_COLS].plot()
LOSS_COLS = ["loss", "val_loss"]
pd.DataFrame(history.history)[LOSS_COLS].plot()
###Output
_____no_output_____
###Markdown
Making predictions with our model To make predictions with our trained model, we can call the [predict method](https://www.tensorflow.org/api_docs/python/tf/keras/Model#predict), passing to it a dictionary of values. The `steps` parameter determines the total number of steps before declaring the prediction round finished. Here, since we have just one example, we set `steps=1` (setting `steps=None` would also work). Note, however, that if `x` is a `tf.data` dataset or a dataset iterator, and `steps` is set to `None`, predict will run until the input dataset is exhausted.
###Code
model.predict(
x={
"pickup_longitude": tf.convert_to_tensor([-73.982683]),
"pickup_latitude": tf.convert_to_tensor([40.742104]),
"dropoff_longitude": tf.convert_to_tensor([-73.983766]),
"dropoff_latitude": tf.convert_to_tensor([40.755174]),
"passenger_count": tf.convert_to_tensor([3.0]),
},
steps=1,
)
###Output
_____no_output_____
###Markdown
Export and deploy our model Of course, making individual predictions is not realistic, because we can't expect client code to have a model object in memory. For others to use our trained model, we'll have to export our model to a file, and expect client code to instantiate the model from that exported file. We'll export the model to a TensorFlow SavedModel format. Once we have a model in this format, we have lots of ways to "serve" the model, from a web application, from JavaScript, from mobile applications, etc. **Exercise.** Use `tf.saved_model.save` to export the trained model to a TensorFlow SavedModel format. Reference the [documentation for `tf.saved_model.save`](https://www.tensorflow.org/api_docs/python/tf/saved_model/save) as you fill in the code for the cell below. Next, print the signature of your saved model using the SavedModel Command Line Interface command `saved_model_cli`. You can read more about the command line interface and the `show` and `run` commands it supports in the [documentation here](https://www.tensorflow.org/guide/saved_model#overview_of_commands).
###Code
OUTPUT_DIR = "./export/savedmodel"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
TIMESTAMP = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
EXPORT_PATH = os.path.join(OUTPUT_DIR, TIMESTAMP)
tf.saved_model.save(
# TODO: Your code here
)
!saved_model_cli show \
--tag_set # TODO: Your code here
--signature_def # TODO: Your code here
--dir # TODO: Your code here
!find {EXPORT_PATH}
os.environ['EXPORT_PATH'] = EXPORT_PATH
###Output
_____no_output_____
###Markdown
Deploy our model to Vertex AI Finally, we will deploy our trained model to Vertex AI and see how we can make online predictions.
###Code
PROJECT = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT = PROJECT[0]
BUCKET = PROJECT
REGION = "us-central1"
MODEL_DISPLAYNAME = "taxifare-" + TIMESTAMP
print(f"MODEL_DISPLAYNAME: {MODEL_DISPLAYNAME}")
# from https://cloud.google.com/vertex-ai/docs/predictions/pre-built-containers
SERVING_CONTAINER_IMAGE_URI = (
"us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-3:latest"
)
os.environ["BUCKET"] = BUCKET
os.environ["REGION"] = REGION
%%bash
# Create GCS bucket if it doesn't exist already...
exists=$(gsutil ls -d | grep -w gs://${BUCKET}/)
if [ -n "$exists" ]; then
echo -e "Bucket exists, let's not recreate it."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET}
echo "\nHere are your current buckets:"
gsutil ls
fi
!gsutil cp -R $EXPORT_PATH gs://$BUCKET/$MODEL_DISPLAYNAME
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to upload and deploy your trained model to Vertex AI using the `Model.upload` method. Have a look at [the documentation](https://googleapis.dev/python/aiplatform/latest/aiplatform.html#google.cloud.aiplatform.Model).
###Code
uploaded_model = aiplatform.Model.upload(
display_name=MODEL_DISPLAYNAME,
artifact_uri= # TODO: Your code here
serving_container_image_uri= # TODO: Your code here
)
MACHINE_TYPE = "n1-standard-2"
endpoint = uploaded_model.deploy(
machine_type=MACHINE_TYPE,
accelerator_type=None,
accelerator_count=None,
)
instance = {
"pickup_longitude": -73.982683,
"pickup_latitude": 40.742104,
"dropoff_longitude": -73.983766,
"dropoff_latitude": 40.755174,
"passenger_count": 3.0,
}
###Output
_____no_output_____
###Markdown
**Exercise.** Complete the code in the cell below to call prediction on your deployed model for the example you just created in the `instance` variable above.
###Code
endpoint.predict(
# TODO: Your code here
)
###Output
_____no_output_____
###Markdown
Cleanup When deploying a model to an endpoint for online prediction, the minimum `min-replica-count` is 1, and it is charged per node hour. So let's delete the endpoint to reduce unnecessary charges. Before we can delete the endpoint, we first undeploy all attached models...
###Code
endpoint.undeploy_all()
###Output
_____no_output_____
###Markdown
...then delete the endpoint.
###Code
endpoint.delete()
###Output
_____no_output_____ |
content/03/04e-visualEDA.ipynb | ###Markdown
Visual EDA The [first page of this chapter](04b-whyplot) discussed the reasons we plot our data. 1. Data cleaning: To find issues in the data that need to get fixed before we can do larger analysis 2. Data exploration: Learning about each of the variables, how they covary, and what further questions you can ask of the data 3. Analysis and presentation EDA on a classic firm financial dataset In [the Pandas EDA](02e_eda_golden) page, I explored Compustat by producing summary stats to get a sense of the variables involved, look for missing values, and look for problematic outliers. We noted that some variables, like $delaycon$, had a lot of missing values and decided we'd look into it. Let's continue exploring that dataset. First, let's download our slice of it. The variables are listed and described in a csv file in the [repo's data folder.](https://github.com/LeDataSciFi/ledatascifi-2021/tree/main/data)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
url = 'https://github.com/LeDataSciFi/ledatascifi-2021/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
###Output
_____no_output_____
###Markdown
The mystery of the poorly populated variables Again, there are some variables with lots of missing values.
###Code
(
( # these lines do the calculation - what % of missing values are there for each var
ccm.isna() # ccm.isna() TURNS every obs/variable = 1 when its missing and 0 else
.sum(axis=0) # count the number of na for each variable (now data is 1 obs per column = # missing)
/len(ccm) # convert # missing to % missing
*100 # report as percentage
)
# you can stop here and report this...
# but I wanted to format it a bit...
.sort_values(ascending=False)[:13]
.to_frame(name='% missing') # the next line only works on a frame, and because pandas sees only 1 variable at this pt
.style.format("{:.1f}") # in the code, it calls this a "series" type object, so convert it to dataframe type object
)
#
###Output
_____no_output_____
###Markdown
When variables are missing that much in a dataset, something systematic is going on and you need to figure it out. One thing you could investigate is why those variables are missing. Maybe it's a data issue, e.g. some data for variable $x$ simply isn't available in all years, perhaps because it isn't collected before a certain year. A way you could get a start on that is to plot the % missing by year for each variable. The legend is UGGGGLY, because the plot has several series, which is why it's a **spaghetti chart**. It would take extra work to unravel the spaghetti and figure out which variables are which. But CLEARLY some variables only become available in 1997, so they can be used after that.
###Code
(
ccm
.groupby('fyear')
[['privdelaycon','debtdelaycon','equitydelaycon','delaycon',
'prodmktfluid','tnic3tsimm','tnic3hhi']]
.apply(lambda x: 100*(x.isna().sum(axis=0)) / len(x) )
.plot.line(title="These variables didn't exist before 1997!",
ylabel="Fraction of missing observations")
)
plt.show()
###Output
_____no_output_____
###Markdown
Exploring covariances/relationships between variables To get a quick sense of relationships, I like to use `pairplot` and `heatmap`. Getting the big picture with [**Pairplot**](https://seaborn.pydata.org/generated/seaborn.pairplot.html) I like passing `corner=True` or using the `x_vars` and `y_vars` parameters to make the info shown more usable. ```{warning}With pairplot, 1. Use 7 or fewer variables at a time. If your dataset has a lot of variables, do them part by part. 2. Don't plot all of the data points! This will oversaturate your graphs and make it harder to draw any conclusions. Below, I randomly sample a piece of the dataset. ``` **It's clear from running these two plots that some extreme outliers are hiding patterns by messing with the scales and influencing the regression lines.** (We should deal with these outliers later.)
###Code
# every time you run this, you'll get diff figures... why?!
f1 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='reg',
corner=True)
f2 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='hist',
corner=True) # hist handles a lot of datapoints well
###Output
_____no_output_____
###Markdown
Getting the big picture with [**Heatmap with correlations**](https://seaborn.pydata.org/generated/seaborn.heatmap.html) After some pairplots (and often before), I like to look at correlations. ```{warning}This analysis step doesn't help for categorical variables! Make sure you don't include categorical variables that are numbers! (E.g. industry classifications are numbers that have no meaning.)``` Seeing the correlations between variables is nice. A correlation table is ugly and hard to work with:
###Code
ccm.corr()
###Output
_____no_output_____
###Markdown
But a lazily made figure of that exact same info is somewhat workable:
###Code
f3 = sns.heatmap(ccm.corr()) # v1, use the nicer version below!
###Output
_____no_output_____
###Markdown
Cleaning that and making it more useful is easy: 1. Drop the numerical variables that don't make sense in a correlation matrix 2. Make the figure large enough to see 3. Colors: cold for negative corr, hot for positive corr
###Code
# dont plot identifying type info or categorical vars
corr = ccm.drop(columns=['gvkey','lpermno','sic3','fyear','sic']).corr()
fig, ax = plt.subplots(figsize=(9,9)) # make a big space for the figure
ax = sns.heatmap(corr,
# cmap for the colors,
center=0,square=True,
cmap=sns.diverging_palette(230, 20, as_cmap=True),
# mask to hide the upper diag (redundant)
mask=np.triu(np.ones_like(corr, dtype=bool)),
# shrink the heat legend
cbar_kws={"shrink": .5},
#optional: vmax and vmin will "cap" the color range
)
###Output
_____no_output_____
###Markdown
That is an information DENSE figure, but we somehow managed to get it on screen decently! Still, it's a ton of variables, and doing this in parts would be a good idea. ```{tip}If you're feeling frisky, and your data is in good shape, you can push this farther by using [`sns.clustermap`](https://seaborn.pydata.org/generated/seaborn.clustermap.html) to find clusters of similar variables. ``` Also, don't take these correlations as gospel yet: they should *point* you towards further relationships to explore, which you should do one plot at a time. Digging in with [**lmplot**](https://seaborn.pydata.org/generated/seaborn.lmplot.html) and [**Jointplot**](https://seaborn.pydata.org/generated/seaborn.jointplot.html) These are good for digging into the relationships between two continuous variables. Let's dig into a strong correlation suggested by our heatmap. ```{warning}Jointplot can be slow - it's doing a lot. Again, don't plot all of the data points! As your sample size goes up, either randomly sample data, or use "hex" style graphs. ```
###Code
f1 = sns.jointplot(data=ccm.query('xrd_a<.4').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# notice: most firms have 0 R&D!
f2 = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# set_title doesn't work with jointplots
f1.fig.suptitle('Strongly positive, even with zero R&D firms in sample')
f1.fig.subplots_adjust(top=0.95) # Reduce plot to make room
f2.fig.suptitle('Among R&D firms, even stronger relationship')
f2.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____
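###Markdown
The warning above mentions "hex" style graphs as an alternative to sampling when there are many points. A minimal sketch of that option (the `dropna` call is a defensive assumption so the hex-binning only sees non-missing pairs):
###Code
# Hex-bin jointplot: bins points into hexagons, so it stays readable with many observations
f_hex = sns.jointplot(
    data=ccm.query('xrd_a<.4 & xrd_a > 0').dropna(subset=["prodmktfluid", "xrd_a"]),
    x="prodmktfluid", y="xrd_a", kind="hex"
)
###Output
_____no_output_____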
###Markdown
I'd pencil this in as a relationship to look into more (Do firms do more R&D _**because**_ of the fluidity of their product market?) and then continue exploring. `lmplot` will plot regressions as well, but it makes it easy to add facets to see if the relationship depends on a third (categorical) variable with the `hue`, `col`, and `row` parameters. (And you can combine `hue`, `col`, and `row` to see several cuts!)
###Code
f3 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", hue='div_d')
f4 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", col='div_d')
###Output
_____no_output_____
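###Markdown
To illustrate the "several cuts" point above, here is a sketch that combines `col` and `hue` in one `lmplot`. The `high_leverage` dummy is constructed on the fly purely for illustration (it is not a variable in the original dataset).
###Code
# Sketch: facet columns by dividend payer status, color by an above-median-leverage dummy
sub = ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000).copy()
sub["high_leverage"] = (sub["td_a"] > sub["td_a"].median()).astype(int)
f5 = sns.lmplot(data=sub, x="prodmktfluid", y="xrd_a",
                col="div_d", hue="high_leverage")
###Output
_____no_output_____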
###Markdown
Visual EDA The [first page of this chapter](04b-whyplot) discussed the reasons we plot our data. 1. Data cleaning: To find issues in the data that need to get fixed before we can do larger analysis 2. Data exploration: Learning about each of the variables, how they covary, and what further questions you can ask of the data 3. Analysis and presentation EDA on a classic firm financial dataset In [the Pandas EDA](02e_eda_golden) page, I explored Compustat by producing summary stats to get a sense of the variables involved, look for missing values, and look for problematic outliers. We noted that some variables, like $delaycon$, had a lot of missing values and decided we'd look into it. Let's continue exploring that dataset. First, let's download our slice of it. The variables are listed and described in a csv file in the [repo's data folder.](https://github.com/LeDataSciFi/ledatascifi-2022/tree/main/data)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
url = 'https://github.com/LeDataSciFi/ledatascifi-2022/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
###Output
_____no_output_____
###Markdown
The mystery of the poorly populated variables Again, there are some variables with lots of missing values.
###Code
(
( # these lines do the calculation - what % of missing values are there for each var
ccm.isna() # ccm.isna() TURNS every obs/variable = 1 when its missing and 0 else
.sum(axis=0) # count the number of na for each variable (now data is 1 obs per column = # missing)
/len(ccm) # convert # missing to % missing
*100 # report as percentage
)
# you can stop here and report this...
# but I wanted to format it a bit...
.sort_values(ascending=False)[:13]
.to_frame(name='% missing') # the next line only works on a frame, and because pandas sees only 1 variable at this pt
.style.format("{:.1f}") # in the code, it calls this a "series" type object, so convert it to dataframe type object
)
#
###Output
_____no_output_____
###Markdown
When variables are missing that much in a dataset, something systematic is going on and you need to figure it out. One thing you could investigate is why those variables are missing. Maybe it's a data issue, e.g. some data for variable $x$ simply isn't available in all years, perhaps because it isn't collected before a certain year. A way you could get a start on that is to plot the % missing by year for each variable. The legend is UGGGGLY, because the plot has several series, which is why it's a **spaghetti chart**. It would take extra work to unravel the spaghetti and figure out which variables are which. But CLEARLY some variables only become available in 1997, so they can be used after that.
###Code
(
ccm
.groupby('fyear')
[['privdelaycon','debtdelaycon','equitydelaycon','delaycon',
'prodmktfluid','tnic3tsimm','tnic3hhi']]
.apply(lambda x: 100*(x.isna().sum(axis=0)) / len(x) )
.plot.line(title="These variables didn't exist before 1997!",
ylabel="Fraction of missing observations")
)
plt.show()
###Output
_____no_output_____
###Markdown
Distributions Among the first things I do with new data, besides the statistical EDA [we covered in the Pandas section](02e_eda_golden), is plot the distribution of each variable to get a sense of the data. I generally want to know 1. If a variable is numerical 1. How many observations are missing and is it systematic (like in the example above)? 1. Is it continuous or discrete? 1. What is the shape of a distribution (normal, binary, skewed left, skewed right, fat-tailed, etc.)? 2. How prevalent are outliers? 2. If a variable is categorical 1. What are the common values? 2. Are the averages of numerical variables different for different categories? ```{warning}Remember the `gsector` variable! Just because a variable is a "number" doesn't mean the numbers have a mathematical meaning! ``` Four functions come in handy as a starting point, and you should look at their documentation and the example galleries: [`sns.displot`](https://seaborn.pydata.org/generated/seaborn.displot.html), [`sns.boxplot`](https://seaborn.pydata.org/generated/seaborn.boxplot.html), [`sns.catplot`](https://seaborn.pydata.org/generated/seaborn.catplot.html), and the built-in pandas plot function [`df[].plot()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html). ```{tip}1. Quick syntax help: Remember to type SHIFT + TAB when the cursor is inside a function! 2. Better syntax help: Go to the official seaborn page for your function and look at the examples to figure out what argument creates the change you want.```
###Code
sns.displot(data=ccm,
x='td_a',
kind='kde').set(title='A density (kind="kde") graph')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
kind='kde').set(title='Used .query() to filter outliers')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
# kind=hist is the default, so I'm not even typing it
kde=True).set(title='kind="hist", kde=True --> histogram + kde' )
plt.show()
sns.boxplot(data=ccm,
x='td_a').set(title='Outliers can distort graph until useless')
plt.show()
sns.boxplot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a').set(title='With query, outliers are filtered, main patterns visible')
plt.show()
sns.displot(data = ccm,
x = 'div_d', kde=True
).set(title='div_d is a binary variable: Does the firm pay dividends?')
plt.show()
###Output
_____no_output_____
###Markdown
To visualize the counts of categorical variables, I'd just use pandas:
###Code
# sns.catplot is powerful, but it's overkill for a categorical count
sns.catplot(data=ccm,
x='gsector',
kind='count',
order = ccm['gsector'].value_counts().index)
plt.show()
# pandas built in plot is much easier:
ccm['gsector'].value_counts().plot(kind='bar')
plt.show()
###Output
_____no_output_____
###Markdown
But `sns.catplot` is a really useful function to look at other distributional statistics for different groups! Covariances and relationships between variables To get a quick sense of relationships, I like to use `pairplot` and `heatmap`. Getting the big picture with [**Pairplot**](https://seaborn.pydata.org/generated/seaborn.pairplot.html) I like passing `corner=True` or using the `x_vars` and `y_vars` parameters to make the info shown more usable. ```{warning}With pairplot, 1. Use 7 or fewer variables at a time. If your dataset has a lot of variables, do them part by part. 2. Don't plot all of the data points! This will oversaturate your graphs and make it harder to draw any conclusions. Below, I randomly sample a piece of the dataset. ``` **It's clear from running these two plots that some extreme outliers are hiding patterns by messing with the scales and influencing the regression lines.** (We should deal with these outliers later.)
###Code
# every time you run this, you'll get diff figures... why?!
f1 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='reg',
corner=True)
f2 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='hist',
corner=True) # hist handles a lot of datapoints well
###Output
_____no_output_____
###Markdown
Getting the big picture with [**Heatmap with correlations**](https://seaborn.pydata.org/generated/seaborn.heatmap.html) After some pairplots (and often before), I like to look at correlations. ```{warning}This analysis step doesn't help for categorical variables! Make sure you don't include categorical variables that are numbers! (E.g. industry classifications are numbers that have no meaning.)``` Seeing the correlations between variables is nice. A correlation table is ugly and hard to work with:
###Code
ccm.corr()
###Output
_____no_output_____
###Markdown
But a lazily made figure of that exact same info is somewhat workable:
###Code
f3 = sns.heatmap(ccm.corr()) # v1, use the nicer version below!
###Output
_____no_output_____
###Markdown
Cleaning that and making it more useful is easy: 1. Drop the numerical variables that don't make sense in a correlation matrix 2. Make the figure large enough to see 3. Colors: cold for negative corr, hot for positive corr
###Code
# dont plot identifying type info or categorical vars
corr = ccm.drop(columns=['gvkey','lpermno','sic3','fyear','sic']).corr()
fig, ax = plt.subplots(figsize=(9,9)) # make a big space for the figure
ax = sns.heatmap(corr,
# cmap for the colors,
center=0,square=True,
cmap=sns.diverging_palette(230, 20, as_cmap=True),
# mask to hide the upper diag (redundant)
mask=np.triu(np.ones_like(corr, dtype=bool)),
# shrink the heat legend
cbar_kws={"shrink": .5},
#optional: vmax and vmin will "cap" the color range
)
###Output
_____no_output_____
###Markdown
That is an information DENSE figure, but we somehow managed to get it on screen decently! Still, it's a ton of variables, and doing this in parts would be a good idea. ```{tip}If you're feeling frisky, and your data is in good shape, you can push this farther by using [`sns.clustermap`](https://seaborn.pydata.org/generated/seaborn.clustermap.html) to find clusters of similar variables. ``` Also, don't take these correlations as gospel yet: they should *point* you towards further relationships to explore, which you should do one plot at a time. Digging in with [**lmplot**](https://seaborn.pydata.org/generated/seaborn.lmplot.html) and [**Jointplot**](https://seaborn.pydata.org/generated/seaborn.jointplot.html) These are good for digging into the relationships between two continuous variables. Let's dig into a strong correlation suggested by our heatmap. ```{warning}Jointplot can be slow - it's doing a lot. Again, don't plot all of the data points! As your sample size goes up, either randomly sample data, or use "hex" style graphs. ```
###Code
f1 = sns.jointplot(data=ccm.query('xrd_a<.4').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# notice: most firms have 0 R&D!
f2 = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# set_title doesn't work with jointplots
f1.fig.suptitle('Strongly positive, even with zero R&D firms in sample')
f1.fig.subplots_adjust(top=0.95) # Reduce plot to make room
f2.fig.suptitle('Among R&D firms, even stronger relationship')
f2.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____
###Markdown
I'd pencil this in as a relationship to look into more (Do firms do more R&D _**because**_ of the fluidity of their product market?) and then continue exploring. `lmplot` will plot regressions as well, but it makes it easy to add [facets](04d-whichplot) to see if the relationship depends on a third (categorical) variable with the `hue`, `col`, and `row` parameters. (And you can combine `hue`, `col`, and `row` to see several cuts!)
###Code
f3 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", hue='div_d')
f4 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", col='div_d')
###Output
_____no_output_____
###Markdown
Visual EDA The [first page of this chapter](04b-whyplot) discussed the reasons we plot our data. 1. Data cleaning: To find issues in the data that need to get fixed before we can do larger analysis 2. Data exploration: Learning about each of the variables, how they covary, and what further questions you can ask of the data 3. Analysis and presentation EDA on a classic firm financial dataset In [the Pandas EDA](02e_eda_golden) page, I explored Compustat by producing summary stats to get a sense of the variables involved, look for missing values, and look for problematic outliers. We noted that some variables, like $delaycon$, had a lot of missing values and decided we'd look into it. Let's continue exploring that dataset. First, let's download our slice of it. The variables are listed and described in a csv file in the [repo's data folder.](https://github.com/LeDataSciFi/ledatascifi-2022/tree/main/data)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
url = 'https://github.com/LeDataSciFi/ledatascifi-2022/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
###Output
_____no_output_____
###Markdown
The mystery of the poorly populated variablesAgain, there are some variables with lots of missing values.
###Code
(
( # these lines do the calculation - what % of missing values are there for each var
ccm.isna() # ccm.isna() TURNS every obs/variable = 1 when its missing and 0 else
.sum(axis=0) # count the number of na for each variable (now data is 1 obs per column = # missing)
/len(ccm) # convert # missing to % missing
*100 # report as percentage
)
# you can stop here and report this...
# but I wanted to format it a bit...
.sort_values(ascending=False)[:13]
.to_frame(name='% missing') # the next line only works on a frame, and because pandas sees only 1 variable at this pt
.style.format("{:.1f}") # in the code, it calls this a "series" type object, so convert it to dataframe type object
)
#
###Output
_____no_output_____
###Markdown
When variables are missing that much in a dataset, something systematic is going on and you need to figure it out. One thing you could investigate is why those variables are missing. Maybe it's a data issue, as in some data for variable $x$ isn't available in all years. E.g. perhaps a variable isn't available before 1997 for some reason. A way you could get a start on that is to plot the % missing by year for each variable. If you plot every variable, the legend is UGGGGLY, because the plot has 40+ series, which is why it's a **spaghetti chart**. It would take extra work to unravel the spaghetti and figure out which variables are which. But CLEARLY some variables only become available in 1997, so they can only be used after that.
###Code
(
ccm
.groupby('fyear')
[['privdelaycon','debtdelaycon','equitydelaycon','delaycon',
'prodmktfluid','tnic3tsimm','tnic3hhi']]
.apply(lambda x: 100*(x.isna().sum(axis=0)) / len(x) )
.plot.line(title="These variables didn't exist before 1997!",
ylabel="Fraction of missing observations")
)
plt.show()
###Output
_____no_output_____
###Markdown
Distributions Among the first things I do with new data, besides the statistical EDA [we covered in the Pandas section](02e_eda_golden), is plot the distribution of each variable to get a sense of the data. I generally want to know 1. If a variable is numerical 1. How many observations are missing and is it systematic (like in the example above)? 1. Is it continuous or discrete? 1. What is the shape of a distribution (normal, binary, skewed left, skewed right, fat-tailed, etc.)? 2. How prevalent are outliers?2. If a variable is categorical 1. What are the common values? 2. Are the averages of numerical variables different for different categories? ```{warning}Remember the `gsector` variable! Just because a variable is a "number" doesn't mean the numbers have a mathematical meaning! ```Four functions come in handy as a starting point, and you should look at their documentation and the example galleries: [`sns.displot`](https://seaborn.pydata.org/generated/seaborn.displot.html), [`sns.boxplot`](https://seaborn.pydata.org/generated/seaborn.boxplot.html), [`sns.catplot`](https://seaborn.pydata.org/generated/seaborn.catplot.html), and the built in pandas plot function [`df[].plot()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)```{tip}1. Quick syntax help: Remember to type SHIFT + TAB when the cursor is inside a function!2. Better syntax help: Go to the official seaborn page for your function and look at the examples to figure out what argument creates the change you want.```
###Code
sns.displot(data=ccm,
x='td_a',
kind='kde').set(title='A density (kind="kde") graph')
plt.show()
###Output
_____no_output_____
###Markdown
This time, let's query the dataset to eliminate outliers that distort the graph:
###Code
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
kind='kde').set(title='Used .query() to filter outliers')
plt.show()
###Output
_____no_output_____
###Markdown
And now I'll add a histogram on top:
###Code
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
# kind=hist is the default, so I'm not even typing it
kde=True).set(title='kind="hist", kde=True --> histogram + kde' )
plt.show()
###Output
_____no_output_____
###Markdown
Let's look at the distribution of leverage (`td_a`):
###Code
sns.boxplot(data=ccm,
x='td_a').set(title='Outliers can distort graph until useless')
plt.show()
###Output
_____no_output_____
###Markdown
Again, let's query the dataset to eliminate outliers that distort the graph:
###Code
sns.boxplot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a').set(title='With query, outliers are filtered, main patterns visible')
plt.show()
sns.displot(data = ccm,
x = 'div_d', kde=True
).set(title='div_d is a binary variable: Does the firm pay dividends?')
plt.show()
###Output
_____no_output_____
###Markdown
The counts of categorical variables are easy to visualize with pandas. - Note 1: You need to manually count the variable before calling the plot.- Note 2: What is industry 40? It's a great idea to replace the numeric category labels with the corresponding industry names (40 is Finance); a sketch of this follows the next code cell.- Note 3: I like horizontal bars, because if you replace the industry code with the industry name, it's easier to read
###Code
# pandas built in plot
ccm['gsector'].value_counts().plot(kind='barh')
plt.show()
# sns.catplot is powerful, but it's overkill for a categorical count
sns.catplot(data=ccm,
y='gsector',
kind='count',
# sns wants to use all diff colors, which is distracting
# on this particular graph with no value.
# so I force all colors to grey by adding this:
color = 'grey',
# to sort the categories, need to tell sns the order to sort:
order = ccm['gsector'].value_counts().index)
plt.show()
###Output
_____no_output_____
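###Markdown
Below is a sketch of Note 2: swap the numeric GICS sector codes for readable names before counting. The code-to-name pairs are my assumption (standard GICS sector labels), so verify them against the data dictionary before relying on them.
###Code
# hypothetical GICS code-to-name mapping; verify against the data dictionary
gics_names = {'10': 'Energy', '15': 'Materials', '20': 'Industrials',
              '25': 'Consumer Discretionary', '30': 'Consumer Staples',
              '35': 'Health Care', '40': 'Financials', '45': 'Information Technology',
              '50': 'Communication Services', '55': 'Utilities', '60': 'Real Estate'}
(
    ccm['gsector']
    .astype(str).str[:2]   # handles codes stored as floats ("40.0" becomes "40")
    .map(gics_names)
    .value_counts()
    .plot(kind='barh', title='Firm-year counts by GICS sector name')
)
plt.show()
###Output
_____no_output_____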
###Markdown
Covariances and relationships between variablesTo get a quick sense of relationships, I like to use `pairplot` and `heatmap`. Getting the big picture with [**Pairplot**](https://seaborn.pydata.org/generated/seaborn.pairplot.html)I like passing `corner=True` or using the `x_vars` and `y_vars` parameters to make the info shown more usable. ```{warning}With pairplot, 1. Use 7 or fewer variables at a time. If your dataset has a lot of variables, do them part by part.2. Don't plot all of the data points! This will oversaturate your graphs and make it harder to draw any conclusions. Below, I randomly sample a piece of the dataset. ```**It's clear from running these two plots that some extreme outliers are hiding patterns by messing with the scales and influencing the regression lines.**(We should deal with these outliers later.)
###Code
# every time you run this, you'll get diff figures... why?!
f1 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='reg',
corner=True)
###Output
_____no_output_____
###Markdown
This time the "kind" is a histogram. In 2D, a histogram is squares with colors corresponding to frequency at that spot.
###Code
f2 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='hist',
corner=True) # hist handles a lot of datapoints well
###Output
_____no_output_____
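###Markdown
A sketch of the `x_vars` / `y_vars` idea mentioned above: instead of the full grid, plot only the handful of comparisons you care about. The variable choices here are just for illustration.
###Code
# x_vars / y_vars restrict the pairplot grid to the comparisons of interest
f_sub = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
                     x_vars=['cash_a','td_a'],
                     y_vars=['capx_a','xrd_a'],
                     kind='reg')
###Output
_____no_output_____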
###Markdown
Getting the big picture with [**Heatmap with correlations**](https://seaborn.pydata.org/generated/seaborn.heatmap.html)After some pairplots (and often before), I like to look at correlations.```{warning}This analysis step doesn't help for categorical variables!Make sure you don't include categorical variables that are numbers!(E.g. industry classifications are numbers that have no meaning.)```Seeing the correlations between variables is nice. A correlation table is ugly and hard to work with:
###Code
ccm.corr()
###Output
_____no_output_____
###Markdown
But a lazily made figure of that exact same info is somewhat workable:
###Code
f3 = sns.heatmap(ccm.corr()) # v1, use the nicer version below!
###Output
_____no_output_____
###Markdown
Cleaning that and making it more useful is easy:1. Drop the numerical variables that don't make sense in a correlation matrix2. Make the figure large enough to see3. Colors: cold for negative corr, hot for positive corr
###Code
# dont plot identifying type info or categorical vars
corr = ccm.drop(columns=['gvkey','lpermno','sic3','fyear','sic']).corr()
fig, ax = plt.subplots(figsize=(9,9)) # make a big space for the figure
ax = sns.heatmap(corr,
# cmap for the colors,
center=0,square=True,
cmap=sns.diverging_palette(230, 20, as_cmap=True),
# mask to hide the upper diag (redundant)
mask=np.triu(np.ones_like(corr, dtype=bool)),
# shrink the heat legend
cbar_kws={"shrink": .5},
#optional: vmax and vmin will "cap" the color range
)
###Output
_____no_output_____
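###Markdown
As the tip below mentions, `sns.clustermap` can reorder the correlation matrix so that blocks of similar variables sit next to each other. Here is a minimal sketch that reuses the `corr` matrix from the cell above; the `fillna(0)` is a simplification because the clustering step cannot handle missing correlations.
###Code
# sketch: hierarchically cluster the correlation matrix computed above
cg = sns.clustermap(corr.fillna(0),   # fillna(0) is a simplification for any missing correlations
                    center=0,
                    cmap=sns.diverging_palette(230, 20, as_cmap=True),
                    figsize=(9,9))
###Output
_____no_output_____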
###Markdown
That is an information DENSE figure, but we somehow managed to get it on screen decently! Still, it's a ton of variables, and doing this in parts would be a good idea.```{tip}If you're feeling frisky, and your data is in good shape, you can push this farther by using [`sns.clustermap`](https://seaborn.pydata.org/generated/seaborn.clustermap.html) to find clusters of similar variables. ```Also - don't take these correlations as gospel yet: They should *point* you towards further relationships to explore, which you should do one plot at a time. Digging in with [**lmplot**](https://seaborn.pydata.org/generated/seaborn.lmplot.html) and [**jointplot**](https://seaborn.pydata.org/generated/seaborn.jointplot.html)These are good for digging into the relationships between two continuous variables. Let's dig into a strong correlation suggested by our heatmap.```{warning}Jointplot can be slow - it's doing a lot. Again, don't plot all of the data points! As your sample size goes up, either randomly sample data, or use "hex" style graphs. ```
###Code
f1 = sns.jointplot(data=ccm.query('xrd_a<.4').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# notice: most firms have 0 R&D!
# set_title doesn't work with jointplots
f1.fig.suptitle('Strongly positive, even with zero R&D firms in sample')
f1.fig.subplots_adjust(top=0.95) # Reduce plot to make room
f2 = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
f2.fig.suptitle('Among R&D firms, even stronger relationship')
f2.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____
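###Markdown
A sketch of the "hex" suggestion from the warning above: with `kind='hex'`, the full filtered sample can be plotted without oversaturating the figure, so no random sampling is needed.
###Code
# "hex" style handles many points well, so no .sample() here
f_hex = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0'),
                      x="prodmktfluid", y="xrd_a", kind='hex')
f_hex.fig.suptitle('Hexbin version scales to the full filtered sample')
f_hex.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____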
###Markdown
I'd pencil this as a relationship to look into more (Do firms do more R&D _**because**_ of the fluidity of their product market?) and then continue exploring. `lmplot` will plot regressions as well, but it makes it easy to add [facets](04d-whichplot) to see if the relationship depends on a third (categorical) variable with the `hue`, `col`, and `row` parameters. (And you can combine `hue`, `col`, and `row` to see several cuts!)
###Code
f3 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", hue='div_d')
###Output
_____no_output_____
###Markdown
Using `col` instead of `hue` "works":
###Code
f4 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", col='div_d')
###Output
_____no_output_____
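###Markdown
And a sketch of combining `hue` and `col` to see several cuts at once; the pairing of `div_d` with `gsector` here is purely illustrative, and `col_wrap` keeps the many sector panels readable.
###Code
# illustrative only: hue + col (+ col_wrap) gives several cuts in one figure
f5 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
                x="prodmktfluid", y="xrd_a",
                hue='div_d',   # color: dividend payer or not
                col='gsector', # one panel per sector code (illustrative choice)
                col_wrap=4)
###Output
_____no_output_____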
###Markdown
Visual EDAThe [first page of this chapter](04b-whyplot) discussed the reasons we plot our data. 1. Data cleaning: To find issues in the data that need to get fixed before we can do larger analysis2. Data exploration: Learning about each of the variables, how they covary, and what further questions you can ask of the data3. Analysis and presentation EDA on a classic firm financial datasetIn [the Pandas EDA](02e_eda_golden) page, I explored Compustat by producing summary stats to get a sense of the variables involved, look for missing values, and look for problematic outliers. We noted that some variables, like $delaycon$, had a lot of missing values and decided we'd look into it. Let's continue exploring that dataset. First, let's download our slice of it. The variables are listed and described in a csv file in the [repo's data folder.](https://github.com/LeDataSciFi/ledatascifi-2022/tree/main/data)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
url = 'https://github.com/LeDataSciFi/ledatascifi-2022/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
###Output
_____no_output_____
###Markdown
The mystery of the poorly populated variablesAgain, there are some variables with lots of missing values.
###Code
(
( # these lines do the calculation - what % of missing values are there for each var
ccm.isna() # ccm.isna() TURNS every obs/variable = 1 when its missing and 0 else
.sum(axis=0) # count the number of na for each variable (now data is 1 obs per column = # missing)
/len(ccm) # convert # missing to % missing
*100 # report as percentage
)
# you can stop here and report this...
# but I wanted to format it a bit...
.sort_values(ascending=False)[:13]
.to_frame(name='% missing') # the next line only works on a frame, and because pandas sees only 1 variable at this pt
.style.format("{:.1f}") # in the code, it calls this a "series" type object, so convert it to dataframe type object
)
#
###Output
_____no_output_____
###Markdown
When variables are missing that much in a dataset, something systematic is going on and you need to figure it out. One thing you could investigate is why those variables are missing. Maybe it's a data issue, as in some data for variable $x$ isn't available in all years. E.g. perhaps a variable isn't available before 1997 for some reason. A way you could get a start on that is to plot the % missing by year for each variable. If you plot every variable, the legend is UGGGGLY, because the plot has 40+ series, which is why it's a **spaghetti chart**. It would take extra work to unravel the spaghetti and figure out which variables are which. But CLEARLY some variables only become available in 1997, so they can only be used after that.
###Code
(
ccm
.groupby('fyear')
[['privdelaycon','debtdelaycon','equitydelaycon','delaycon',
'prodmktfluid','tnic3tsimm','tnic3hhi']]
.apply(lambda x: 100*(x.isna().sum(axis=0)) / len(x) )
.plot.line(title="These variables didn't exist before 1997!",
ylabel="Fraction of missing observations")
)
plt.show()
###Output
_____no_output_____
###Markdown
Distributions Among the first things I do with new data, besides the statistical EDA [we covered in the Pandas section](02e_eda_golden), is plot the distribution of each variable to get a sense of the data. I generally want to know 1. If a variable is numerical 1. How many observations are missing and is it systematic (like in the example above)? 1. Is it continuous or discrete? 1. What is the shape of a distribution (normal, binary, skewed left, skewed right, fat-tailed, etc.)? 2. How prevalent are outliers?2. If a variable is categorical 1. What are the common values? 2. Are the averages of numerical variables different for different categories? ```{warning}Remember the `gsector` variable! Just because a variable is a "number" doesn't mean the numbers have a mathematical meaning! ```Four functions come in handy as a starting point, and you should look at their documentation and the example galleries: [`sns.displot`](https://seaborn.pydata.org/generated/seaborn.displot.html), [`sns.boxplot`](https://seaborn.pydata.org/generated/seaborn.boxplot.html), [`sns.catplot`](https://seaborn.pydata.org/generated/seaborn.catplot.html), and the built in pandas plot function [`df[].plot()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)```{tip}1. Quick syntax help: Remember to type SHIFT + TAB when the cursor is inside a function!2. Better syntax help: Go to the official seaborn page for your function and look at the examples to figure out what argument creates the change you want.```
###Code
sns.displot(data=ccm,
x='td_a',
kind='kde').set(title='A density (kind="kde") graph')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
kind='kde').set(title='Used .query() to filter outliers')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
# kind=hist is the default, so I'm not even typing it
kde=True).set(title='kind="hist", kde=True --> histogram + kde' )
plt.show()
sns.boxplot(data=ccm,
x='td_a').set(title='Outliers can distort graph until useless')
plt.show()
sns.boxplot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a').set(title='With query, outliers are filtered, main patterns visible')
plt.show()
sns.displot(data = ccm,
x = 'div_d', kde=True
).set(title='div_d is a binary variable: Does the firm pay dividends?')
plt.show()
###Output
_____no_output_____
###Markdown
To visualize the counts of categorical variables, I'd just use pandas:
###Code
# sns.catplot is powerful, but it's overkill for a categorical count
sns.catplot(data=ccm,
x='gsector',
kind='count',
order = ccm['gsector'].value_counts().index)
plt.show()
# pandas built in plot is much easier:
ccm['gsector'].value_counts().plot(kind='bar')
plt.show()
###Output
_____no_output_____
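###Markdown
While pandas is simpler for raw counts, `sns.catplot` earns its keep when comparing a numeric distribution across groups, as the next note points out. A quick sketch using leverage for dividend payers vs. non-payers, with outliers filtered as before:
###Code
# compare the distribution of leverage across dividend payers and non-payers
sns.catplot(data=ccm.query('td_a < 1 & td_a > 0'),
            x='div_d', y='td_a', kind='box')
plt.show()
###Output
_____no_output_____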
###Markdown
But `sns.catplot` is a really useful function to look at other distributional statistics for different groups! Covariances and relationships between variablesTo get a quick sense of relationships, I like to use `pairplot` and `heatmap`. Getting the big picture with [**Pairplot**](https://seaborn.pydata.org/generated/seaborn.pairplot.html)I like passing `corner=True` or using the `x_vars` and `y_vars` parameters to make the info shown more usable. ```{warning}With pairplot, 1. Use 7 or fewer variables at a time. If your dataset has a lot of variables, do them part by part.2. Don't plot all of the data points! This will oversaturate your graphs and make it harder to draw any conclusions. Below, I randomly sample a piece of the dataset. ```**It's clear from running these two plots that some extreme outliers are hiding patterns by messing with the scales and influencing the regression lines.**(We should deal with these outliers later.)
###Code
# every time you run this, you'll get diff figures... why?!
f1 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='reg',
corner=True)
f2 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='hist',
corner=True) # hist handles a lot of datapoints well
###Output
_____no_output_____
###Markdown
Getting the big picture with [**Heatmap with correlations**](https://seaborn.pydata.org/generated/seaborn.heatmap.html)After some pairplots (and often before), I like to look at correlations.```{warning}This analysis step doesn't help for categorical variables!Make sure you don't include categorical variables that are numbers!(E.g. industry classifications are numbers that have no meaning.)```Seeing the correlations between variables is nice. A correlation table is ugly and hard to work with:
###Code
ccm.corr()
###Output
_____no_output_____
###Markdown
But a lazily made figure of that exact same info is somewhat workable:
###Code
f3 = sns.heatmap(ccm.corr()) # v1, use the nicer version below!
###Output
_____no_output_____
###Markdown
Cleaning that and making it more useful is easy:1. Drop the numerical variables that don't make sense in a correlation matrix2. Make the figure large enough to see3. Colors: cold for negative corr, hot for positive corr
###Code
# dont plot identifying type info or categorical vars
corr = ccm.drop(columns=['gvkey','lpermno','sic3','fyear','sic']).corr()
fig, ax = plt.subplots(figsize=(9,9)) # make a big space for the figure
ax = sns.heatmap(corr,
# cmap for the colors,
center=0,square=True,
cmap=sns.diverging_palette(230, 20, as_cmap=True),
# mask to hide the upper diag (redundant)
mask=np.triu(np.ones_like(corr, dtype=bool)),
# shrink the heat legend
cbar_kws={"shrink": .5},
#optional: vmax and vmin will "cap" the color range
)
###Output
_____no_output_____
###Markdown
That is an information DENSE figure, but we somehow managed to get it on screen decently! Still, it's a ton of variables, and doing this in parts would be a good idea.```{tip}If you're feeling frisky, and your data is in good shape, you can push this farther by using [`sns.clustermap`](https://seaborn.pydata.org/generated/seaborn.clustermap.html) to find clusters of similar variables. ```Also - don't take these correlations as gospel yet: They should *point* you towards further relationships to explore, which you should do one plot at a time. Digging in with [**lmplot**](https://seaborn.pydata.org/generated/seaborn.lmplot.html) and [**jointplot**](https://seaborn.pydata.org/generated/seaborn.jointplot.html)These are good for digging into the relationships between two continuous variables. Let's dig into a strong correlation suggested by our heatmap.```{warning}Jointplot can be slow - it's doing a lot. Again, don't plot all of the data points! As your sample size goes up, either randomly sample data, or use "hex" style graphs. ```
###Code
f1 = sns.jointplot(data=ccm.query('xrd_a<.4').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# notice: most firms have 0 R&D!
f2 = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# set_title doesn't work with jointplots
f1.fig.suptitle('Strongly positive, even with zero R&D firms in sample')
f1.fig.subplots_adjust(top=0.95) # Reduce plot to make room
f2.fig.suptitle('Among R&D firms, even stronger relationship')
f2.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____
###Markdown
I'd pencil this as a relationship to look into more (Do firms do more R&D _**because**_ of the fluidity of their product market?) and then continue exploring. `lmplot` will plot regressions as well, but it makes it easy to add [facets](04d-whichplot) to see if the relationship depends on a third (categorical) variable with the `hue`, `col`, and `row` parameters. (And you can combine `hue`, `col`, and `row` to see several cuts!)
###Code
f3 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", hue='div_d')
f4 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", col='div_d')
###Output
_____no_output_____
###Markdown
Visual EDAThe [first page of this chapter](04b-whyplot) discussed the reasons we plot our data. 1. Data cleaning: To find issues in the data that need to get fixed before we can do larger analysis2. Data exploration: Learning about each of the variables, how they covary, and what further questions you can ask of the data3. Analysis and presentation EDA on a classic firm financial datasetIn [the Pandas EDA](02e_eda_golden) page, I explored Compustat by producing summary stats to get a sense of the variables involved, look for missing values, and look for problematic outliers. We noted that some variables, like $delaycon$, had a lot of missing values and decided we'd look into it. Let's continue exploring that dataset. First, let's download our slice of it. The variables are listed and described in a csv file in the [repo's data folder.](https://github.com/LeDataSciFi/ledatascifi-2021/tree/main/data)
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
# these three are used to download the file
from io import BytesIO
from zipfile import ZipFile
from urllib.request import urlopen
url = 'https://github.com/LeDataSciFi/ledatascifi-2021/blob/main/data/CCM_cleaned_for_class.zip?raw=true'
#firms = pd.read_stata(url)
# <-- that code would work, but GH said it was too big and
# forced me to zip it, so here is the work around to download it:
with urlopen(url) as request:
data = BytesIO(request.read())
with ZipFile(data) as archive:
with archive.open(archive.namelist()[0]) as stata:
ccm = pd.read_stata(stata)
###Output
_____no_output_____
###Markdown
The mystery of the poorly populated variablesAgain, there are some variables with lots of missing values.
###Code
(
( # these lines do the calculation - what % of missing values are there for each var
ccm.isna() # ccm.isna() TURNS every obs/variable = 1 when its missing and 0 else
.sum(axis=0) # count the number of na for each variable (now data is 1 obs per column = # missing)
/len(ccm) # convert # missing to % missing
*100 # report as percentage
)
# you can stop here and report this...
# but I wanted to format it a bit...
.sort_values(ascending=False)[:13]
.to_frame(name='% missing') # the next line only works on a frame, and because pandas sees only 1 variable at this pt
.style.format("{:.1f}") # in the code, it calls this a "series" type object, so convert it to dataframe type object
)
#
###Output
_____no_output_____
###Markdown
When variables are missing that much in a dataset, something systematic is going on and you need to figure it out. One thing you could investigate is why those variables are missing. Maybe it's a data issue, as in some data for variable $x$ isn't available in all years. E.g. perhaps a variable isn't available before 1997 for some reason. A way you could get a start on that is to plot the % missing by year for each variable. If you plot every variable, the legend is UGGGGLY, because the plot has 40+ series, which is why it's a **spaghetti chart**. It would take extra work to unravel the spaghetti and figure out which variables are which. But CLEARLY some variables only become available in 1997, so they can only be used after that.
###Code
(
ccm
.groupby('fyear')
[['privdelaycon','debtdelaycon','equitydelaycon','delaycon',
'prodmktfluid','tnic3tsimm','tnic3hhi']]
.apply(lambda x: 100*(x.isna().sum(axis=0)) / len(x) )
.plot.line(title="These variables didn't exist before 1997!",
ylabel="Fraction of missing observations")
)
plt.show()
###Output
_____no_output_____
###Markdown
Distributions Among the first things I do with new data, besides the statistical EDA [we covered in the Pandas section](02e_eda_golden), is plot the distribution of each variable to get a sense of the data. I generally want to know 1. If a variable is numerical 1. How many observations are missing and is it systematic (like in the example above)? 1. Is it continuous or discrete? 1. What is the shape of a distribution (normal, binary, skewed left, skewed right, fat-tailed, etc.)? 2. How prevalent are outliers?2. If a variable is categorical 1. What are the common values? 2. Are the averages of numerical variables different for different categories? ```{warning}Remember the `gsector` variable! Just because a variable is a "number" doesn't mean the numbers have a mathematical meaning! ```Four functions come in handy as a starting point, and you should look at their documentation and the example galleries: [`sns.displot`](https://seaborn.pydata.org/generated/seaborn.displot.html), [`sns.boxplot`](https://seaborn.pydata.org/generated/seaborn.boxplot.html), [`sns.catplot`](https://seaborn.pydata.org/generated/seaborn.catplot.html), and the built in pandas plot function [`df[].plot()`](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.plot.html)```{tip}1. Quick syntax help: Remember to type SHIFT + TAB when the cursor is inside a function!2. Better syntax help: Go to the official seaborn page for your function and look at the examples to figure out what argument creates the change you want.```
###Code
sns.displot(data=ccm,
x='td_a',
kind='kde').set(title='A density (kind="kde") graph')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
kind='kde').set(title='Used .query() to filter outliers')
plt.show()
sns.displot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a',
# kind=hist is the default, so I'm not even typing it
kde=True).set(title='kind="hist", kde=True --> histogram + kde' )
plt.show()
sns.boxplot(data=ccm,
x='td_a').set(title='Outliers can distort graph until useless')
plt.show()
sns.boxplot(data=ccm.query('td_a < 1 & td_a > 0'),
x='td_a').set(title='With query, outliers are filtered, main patterns visible')
plt.show()
sns.displot(data = ccm,
x = 'div_d', kde=True
).set(title='div_d is a binary variable: Does the firm pay dividends?')
plt.show()
###Output
_____no_output_____
###Markdown
To visualize the counts of categorical variables, I'd just use pandas:
###Code
# sns.catplot is powerful, but it's overkill for a categorical count
sns.catplot(data=ccm,
x='gsector',
kind='count',
order = ccm['gsector'].value_counts().index)
plt.show()
# pandas built in plot is much easier:
ccm['gsector'].value_counts().plot(kind='bar')
plt.show()
###Output
_____no_output_____
###Markdown
But `sns.catplot` is a really useful function to look at other distributional statistics for different groups! Covariances/relationships between variablesTo get a quick sense of relationships, I like to use `pairplot` and `heatmap`. Getting the big picture with [**Pairplot**](https://seaborn.pydata.org/generated/seaborn.pairplot.html)I like passing `corner=True` or using the `x_vars` and `y_vars` parameters to make the info shown more usable. ```{warning}With pairplot, 1. Use 7 or fewer variables at a time. If your dataset has a lot of variables, do them part by part.2. Don't plot all of the data points! This will oversaturate your graphs and make it harder to draw any conclusions. Below, I randomly sample a piece of the dataset. ```**It's clear from running these two plots that some extreme outliers are hiding patterns by messing with the scales and influencing the regression lines.**(We should deal with these outliers later.)
###Code
# every time you run this, you'll get diff figures... why?!
f1 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='reg',
corner=True)
f2 = sns.pairplot(ccm[['capx_a', 'xrd_a', 'cash_a','td_a']].sample(500),
kind='hist',
corner=True) # hist handles a lot of datapoints well
###Output
_____no_output_____
###Markdown
Getting the big picture with [**Heatmap with correlations**](https://seaborn.pydata.org/generated/seaborn.heatmap.html)After some pairplots (and often before), I like to look at correlations.```{warning}This analysis step doesn't help for categorical variables!Make sure you don't include categorical variables that are numbers!(E.g. industry classifications are numbers that have no meaning.)```Seeing the correlations between variables is nice. A correlation table is ugly and hard to work with:
###Code
ccm.corr()
###Output
_____no_output_____
###Markdown
But a lazily made figure of that exact same info is somewhat workable:
###Code
f3 = sns.heatmap(ccm.corr()) # v1, use the nicer version below!
###Output
_____no_output_____
###Markdown
Cleaning that and making it more useful is easy:1. Drop the numerical variables that don't make sense in a correlation matrix2. Make the figure large enough to see3. Colors: cold for negative corr, hot for positive corr
###Code
# dont plot identifying type info or categorical vars
corr = ccm.drop(columns=['gvkey','lpermno','sic3','fyear','sic']).corr()
fig, ax = plt.subplots(figsize=(9,9)) # make a big space for the figure
ax = sns.heatmap(corr,
# cmap for the colors,
center=0,square=True,
cmap=sns.diverging_palette(230, 20, as_cmap=True),
# mask to hide the upper diag (redundant)
mask=np.triu(np.ones_like(corr, dtype=bool)),
# shrink the heat legend
cbar_kws={"shrink": .5},
#optional: vmax and vmin will "cap" the color range
)
###Output
_____no_output_____
###Markdown
That is an information DENSE figure, but we somehow managed to get it on screen decently! Still, it's a ton of variables, and doing this in parts would be a good idea.```{tip}If you're feeling frisky, and your data is in good shape, you can push this farther by using [`sns.clustermap`](https://seaborn.pydata.org/generated/seaborn.clustermap.html) to find clusters of similar variables. ```Also - don't take these correlations as gospel yet: They should *point* you towards further relationships to explore, which you should do one plot at a time. Digging in with [**lmplot**](https://seaborn.pydata.org/generated/seaborn.lmplot.html) and [**jointplot**](https://seaborn.pydata.org/generated/seaborn.jointplot.html)These are good for digging into the relationships between two continuous variables. Let's dig into a strong correlation suggested by our heatmap.```{warning}Jointplot can be slow - it's doing a lot. Again, don't plot all of the data points! As your sample size goes up, either randomly sample data, or use "hex" style graphs. ```
###Code
f1 = sns.jointplot(data=ccm.query('xrd_a<.4').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# notice: most firms have 0 R&D!
f2 = sns.jointplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", kind='reg')
# set_title doesn't work with jointplots
f1.fig.suptitle('Strongly positive, even with zero R&D firms in sample')
f1.fig.subplots_adjust(top=0.95) # Reduce plot to make room
f2.fig.suptitle('Among R&D firms, even stronger relationship')
f2.fig.subplots_adjust(top=0.95) # Reduce plot to make room
###Output
_____no_output_____
###Markdown
I'd pencil this as a relationship to look into more (Do firms do more R&D _**because**_ of the fluidity of their product market?) and then continue exploring. `lmplot` will plot regressions as well, but it makes it easy to add [facets](04d-whichplot) to see if the relationship depends on a third (categorical) variable with the `hue`, `col`, and `row` parameters. (And you can combine `hue`, `col`, and `row` to see several cuts!)
###Code
f3 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", hue='div_d')
f4 = sns.lmplot(data=ccm.query('xrd_a<.4 & xrd_a > 0').sample(1000),
x="prodmktfluid", y="xrd_a", col='div_d')
###Output
_____no_output_____ |
marketing_campaign_optimization/marketing_campaign_optimization_gcl.ipynb | ###Markdown
Marketing Campaign Optimization Objective and PrerequisitesCompanies across almost every industry are looking to optimize their marketing campaigns. In this Jupyter Notebook, we’ll explore a marketing campaign optimization problem that is common in the banking and financial services industry, which involves determining which products to offer to individual customers in order to maximize total expected profit while satisfying various business constraints. You’ll learn how to formulate a mathematical optimization model of the problem (using machine learning predictive response models as parameters) and solve it using the Gurobi Optimizer.This modeling example is at the beginner level, where we assume that you know Python and that you have some knowledge about building mathematical optimization models. The reader should also consult the [documentation](https://www.gurobi.com/resources/?category-filter=documentation) of the Gurobi Python API.**Download the Repository** You can download the repository containing this and other examples by clicking [here](https://github.com/Gurobi/modeling-examples/archive/master.zip). MotivationThe main goal of marketing in the banking and financial services industry is to offer "the right product to the right customer at the right time". However, actually being able to achieve this goal is a complicated and challenging undertaking. What makes this particularly difficult is that companies have multiple products and operate under a complex set of business constraints. Choosing which products to offer to which customers in order to maximize the marketing return on investment and satisfy the business constraints is enormously complex.Consider a major bank that has made a deliberate effort to become a customer-focused institution, as opposed to a vertical product driven company. The goal of the bank is "to be the best at helping customers become financially better off by providing relevant solutions to their unique needs". A direct consequence of this goal is that marketing campaigns are multiple product campaigns as opposed to single product campaigns. This transforms the data science and campaign targeting process from a fairly simple application of individual response models into a significantly more complex process that involves choosing which product to offer to which customer and through which channel.The marketing team of the bank is used to applying business rules to target customers directly. For example, they target customers solely on their product gaps or on marketers' business intuition. The bank's marketers have also applied RFM type analysis where general recency, frequency, and monetary measurements as well as product gaps are used to target customers for specific offers. The marketing team's current approach, which is widely used, relies on predictive response models to target customers for offers. These models estimate the probability that a customer will respond to a specific offer and can significantly increase the response rate to a product offering. However, simply knowing a customer's probability of responding to a particular offer is not enough when a company has several products to promote and other business constraints to consider in its marketing planning.Generally speaking, marketing teams also face the problem of knowing which product to offer to a customer, not just which customer to offer a product. 
In practice, many ad hoc rules are used:* prioritization rules based on response rates or estimated expected profitability measures* business rules to prioritize products that can be marketed* product response models to select customers for a particular campaign. One approach that is easily implemented but may not produce optimal customer contact plans relies on a measure of expected offer profitability to choose which products to offer customers. However, a shortcoming of this approach is its inability to effectively handle complex constraints on the customer contact plan.To address this marketing campaign optimization problem, M. D. Cohen [1] proposed a MIP approach with data from Scotiabank. The marketing campaign optimization problem considered eleven unique offers: five investment, three lending, and three day-to-day banking offers. The investment offers included Guaranteed Investment Certificates (GICs), mutual funds, Registered Education Savings Program (RESP) and two unique discount brokerage offers. The lending offers included a mortgage and two credit card offers. The day-to-day banking offers included one of two Scotia online banking service offers and a deposit account acquisition. The term campaign is used here to imply one large pro-active customer contact campaign that is comprised of eleven distinct offers; it can be thought of as eleven single product campaigns that are being offered at generally the same time to a non-overlapping set of customers. Approximately 2.5 million customers were included in the potential target market for the campaign.In this Jupyter Notebook, we will use this MIP approach to address the bank’s marketing campaign optimization problem. It should be noted that this approach could be used by virtually any company across various industries to optimize their marketing campaigns, while taking into account their business constraints. Problem DescriptionThe bank's marketing team needs to determine what products to offer to each customer in a way that maximizes the marketing campaign return on investment while considering the following constraints: * limits on funding available for the campaign.* restrictions on the minimum number of product offers that can be made in a campaign.* campaign return-on-investment hurdle rates that must be met. Solution ApproachMathematical programming is a declarative approach where the modeler formulates a mathematical optimization model that captures the key aspects of a complex decision problem. The Gurobi Optimizer solves such models using state-of-the-art mathematics and computer science.A mathematical optimization model has five components, namely:* Sets and indices.* Parameters.* Decision variables.* Objective function(s).* Constraints.We now present a MIP approach for this marketing campaign optimization problem.The MIP solution approach proposed by Cohen [1] is an improvement over the traditional myopicapproach of picking the customers that have the largest expected value for a particular product because itproduces a globally optimal solution from the viewpoint of the bank and allows for the effective implementation of business constraints across customers and business units. The approach accounts for limited resources and other business constraints. We assume that the estimates for customer/offer expected incremental profit, costs, and business constraints serve as inputs to the marketing campaign optimization approach. The optimization phase is independent of the construction of these inputs. 
The MIP approach involves a tactical and an operational problem. For the tactical problem, we aggregate customers based on the individual expected profit parameters. The estimated individual expected profit can be determined with data science techniques such as predictive response models. The key idea is to cluster the estimated individual expected profits and then consider the cluster centroids as representative of the data for all the individual customers within a single cluster. This aggregation enables the problem to be formulated as a linear programming problem so that rather than assigning offers to individual customers, the model identifies proportions within each cluster for each product offer that maximize the marketing campaign return on investment while considering the business constraints. Typically, the number of customers in a cluster will be in the hundreds of thousands, and the number of customers offered each product is the main decision variable of the tactical problem; consequently these variables can be considered as continuous, so the linear programming approach is justified.The operational problem can be formulated as a MIP model, where the estimated individual expected profits and the output of the tactical model can be used as inputs to assign *product offers* to individual customers of each cluster in such a way that the total marketing campaign return on investment is maximized. Tactical Model Formulation Sets and Indices$k \in K$: Index and set of clusters.$j \in J$: Index and set of products. Parameters$\pi_{k,j}$: Expected profit to the bank from the offer of product $j \in J$ to an average customer of cluster $k \in K$.$\nu_{k,j}$: Average variable cost associated with the offer of product $j \in J$ to an average customer of cluster $k \in K$. $N_{k}$: Number of customers in cluster $k \in K$.$Q_{j}$: Minimum number of offers of product $j \in J$. $R$: Corporate hurdle rate. This hurdle rate is used for the ROI calculation of the marketing campaign.$B$: Marketing campaign budget.$M$: Big M penalty. This penalty is associated with corrections on the budget that are necessary to satisfy other business constraints. Decision Variables$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$z \geq 0$: Increase in budget in order to have a feasible campaign. Objective Function- **Total profit**. Maximize total expected profit from marketing campaign and heavily penalize any correction to the budget.\begin{equation}\text{Max} \quad Z = \sum_{k \in K} \sum_{j \in J} \pi_{k,j} \cdot y_{k,j} - M \cdot z\tag{0}\end{equation} Constraints- **Number of offers**. Maximum number of offers of products for each cluster is limited by the number of customers in the cluster.\begin{equation}\sum_{j \in J} y_{k,j} \leq N_{k} \quad \forall k \in K\tag{1}\end{equation}- **Budget**. The marketing campaign budget constraint enforces that the total cost of the campaign should be less than the campaign budget. There is the possibility of increasing the budget to ensure the feasibility of the model, since the minimum number of offers for all the products may require this increase in the budget.\begin{equation}\sum_{k \in K} \sum_{j \in J} \nu_{k,j} \cdot y_{k,j} \leq B + z\tag{2}\end{equation}- **Offers limit**. Minimum number of offers of each product.\begin{equation}\sum_{k \in K} y_{k,j} \geq Q_{j} \quad \forall j \in J\tag{3}\end{equation}- **ROI**. 
The minimum ROI constraint ensures that the ratio of total profits over cost is at least one plus the corporate hurdle rate.\begin{equation}\sum_{k \in K} \sum_{j \in J} \pi_{k,j} \cdot y_{k,j} \geq (1+R) \cdot \sum_{k \in K} \sum_{j \in J} \nu_{k,j} \cdot y_{k,j}\tag{4}\end{equation} Operational Model FormulationOnce the optimal values $y_{k,j}$, for all $j \in J$ and $k \in K$, of the Tactical model have been found, we should determine which individual customers in cluster $k$ should get an offer of a product. Suppose that for a given cluster $k \in K$, the allocation of offers of product $j_1$ and $j_2$ are positive, i.e. $y_{k,j_1} > 0$ and $y_{k,j_2} > 0$. Then, $y_{k,j_1}$ and $y_{k,j_2}$ of customers in cluster $k$ must be offered product $j_1$ and $j_2$, respectively. The optimal way to do that is to solve an assignment problem using the estimated expected profit for the individual customers and not the one for clusters.We now provide a formulation of the operational problem. Sets and Indices$i \in I^{k}$: Index and set of customers in cluster $k \in K$.$j \in J^{k}$: Index and subset of products offered to customers in cluster $k \in K$ , where $J^{k} = \{ j \in J: y_{k,j} > 0 \}$ . Parameters$r_{k,i,j}$: Expected individual profit of customer $i \in I^{k}$ from offer of product $j \in J^{k}$. $Y_{k,j} = \lfloor y_{k,j} \rfloor $: Number of customers in cluster k that will get an offer of product $j \in J^{k}$. Decision Variables$x_{k,i,j} \in \{0,1 \}$: This variable is equal to 1, if product $j \in J^{k}$ is offered to customer $i \in I^{k}$, and 0 otherwise. Objective Function- **Total profit**. Maximize total individual profit.\begin{equation}\text{Max} \quad Z = \sum_{k \in K} \sum_{i \in I^{k}} \sum_{j \in J^{k}} r_{k,i,j} \cdot x_{k,i,j}\tag{0}\end{equation} Constraints- **Product offers**. Allocate offers of a product to customers of each cluster.\begin{equation}\sum_{i \in I^{k}} x_{k,i,j} = Y_{k,j} \quad \forall j \in J^{k}, k \in K\tag{1}\end{equation}- **Offers limit**. At most one product may be offered to a customer of a cluster.\begin{equation}\sum_{j \in J^{k}} x_{k,i,j} \leq 1 \quad \forall i \in I^{k}, k \in K\tag{2}\end{equation}- **Binary constraints**. Either a product offer is given to a customer of cluster k or not.\begin{equation}x_{k,i,j} \in \{0,1 \} \quad \forall i \in I^{k}, j \in J^{k}, k \in K\tag{3}\end{equation} Problem InstanceWe consider two products, ten customers, and two clusters of customers. The corporate hurdle-rate is twenty percent. Tactical problem dataThe following table defines the expected profit of an average customer in each cluster when offered a product.| | Product 1 | Product 2 || --- | --- | --- || cluster 1 | $\$2,000$ | $\$1,000$ || cluster 2 | $\$3,000$ | $\$2,000$ |The expected cost of offering a product to an average customer in a cluster is determined by the following table.| | Product 1 | Product 2 || --- | --- | --- || cluster 1 | $\$200$ | $\$100$ || cluster 2 | $\$300$ | $\$200$ |The budget available for the marketing campaign is $\$200$.The number of customers in each cluster is given by the following table.| | Num. 
Customers | | --- | --- || cluster 1 | 5 || cluster 2 | 5 | The minimum number of offers of each product is provided in the following table,| | Min Offers | | --- | --- || product 1 | 2 || product 2 | 2 | Operational problem dataThe following table shows the expected profit of each customer in each cluster when offered a product.| | Product 1 | Product 2 || --- | --- | --- || cluster 1, customer 1 | $\$2,050$ | $\$1,050$ || cluster 1, customer 2 | $\$1,950$ | $\$950$ || cluster 1, customer 3 | $\$2,000$ | $\$1,000$ || cluster 1, customer 4 | $\$2,100$ | $\$1,100$ || cluster 1, customer 5 | $\$1,900$ | $\$900$ || cluster 2, customer 6 | $\$3,000$ | $\$2,000$ || cluster 2, customer 7 | $\$2,900$ | $\$1,900$ || cluster 2, customer 8 | $\$3,050$ | $\$2,050$ || cluster 2, customer 9 | $\$3,100$ | $\$2,100$ || cluster 2, customer 10 | $\$2,950$ | $\$1,950$ |The following table shows the cost of offering a product to a customer in a cluster.| | Product 1 | Product 2 || --- | --- | --- || cluster 1, customer 1 | $\$205$ | $\$105$ || cluster 1, customer 2 | $\$195$ | $\$95$ || cluster 1, customer 3 | $\$200$ | $\$100$ || cluster 1, customer 4 | $\$210$ | $\$110$ || cluster 1, customer 5 | $\$190$ | $\$90$ || cluster 2, customer 6 | $\$300$ | $\$200$ || cluster 2, customer 7 | $\$290$ | $\$190$ || cluster 2, customer 8 | $\$305$ | $\$205$ || cluster 2, customer 9 | $\$310$ | $\$210$ || cluster 2, customer 10 | $\$295$ | $\$195$ | Python ImplementationWe now import the Gurobi Python Module. Then, we initialize the data structures with the given data.
###Code
%pip install gurobipy
import gurobipy as gp
from gurobipy import GRB
# tested with Gurobi v9.0.0 and Python 3.7.0
### SETS
products = ['p1', 'p2']
clusters = ['k1', 'k2']
###Output
_____no_output_____
###Markdown
Expected profitThe following tables shows the expected profit of a customer in each cluster when offered a product.| | Product 1 | Product 2 || --- | --- | --- || cluster 1 | $\$2,000$ | $\$1,000$ || cluster 2 | $\$3,000$ | $\$2,000$ |
###Code
### Parameters
# Expected profit
cp, expected_profit = gp.multidict({
('k1', 'p1'): 2000,
('k1', 'p2'): 1000,
('k2', 'p1'): 3000,
('k2', 'p2'): 2000
})
###Output
_____no_output_____
###Markdown
Expected costThe expected cost of offering a product to a customer in a cluster is shown in the following table.| | Product 1 | Product 2 || --- | --- | --- || cluster 1 | $\$200$ | $\$100$ || cluster 2 | $\$300$ | $\$200$ |
###Code
# Expected cost
cp, expected_cost = gp.multidict({
('k1', 'p1'): 200,
('k1', 'p2'): 100,
('k2', 'p1'): 300,
('k2', 'p2'): 200
})
###Output
_____no_output_____
###Markdown
Number of customersThe number of customers in each cluster can be seen in the following table.| | Num. Customers | | --- | --- || cluster 1 | 5 || cluster 2 | 5 |
###Code
# Num of customers in each cluster
clusters, number_customers = gp.multidict({
('k1'): 5,
('k2'): 5
})
###Output
_____no_output_____
###Markdown
Minimum number of offersThe minimum number of offers of each product is provided in the following table.| | Min Offers | | --- | --- || product 1 | 2 || product 2 | 2 |
###Code
# Minimum number offers for each product
products, min_offers = gp.multidict({
('p1'): 2,
('p2'): 2
})
###Output
_____no_output_____
###Markdown
ScalarsThe corporate hurdle-rate is twenty percent ($R = 0.20$).The budget available for the marketing campaign is $\$200$.
###Code
# Scalars
R = 0.20
#tight budget
budget = 200
###Output
_____no_output_____
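###Markdown
The cluster structure and the expected profit/cost parameters above are taken as given. In practice they would come out of a predictive-response-model plus clustering step, as described earlier. The cell below is only a minimal sketch of that aggregation idea, using made-up per-customer profit estimates and scikit-learn's KMeans (assumed to be installed); it is not part of the original example and the numbers are purely illustrative.
###Code
# Sketch only (not part of the original example): build cluster-level parameters
# from hypothetical per-customer expected-profit estimates.
import numpy as np
from sklearn.cluster import KMeans   # assumes scikit-learn is available
rng = np.random.default_rng(0)
# hypothetical expected profit of 1,000 customers for products p1 and p2
est_profit = rng.normal(loc=[2500.0, 1500.0], scale=[400.0, 300.0], size=(1000, 2))
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(est_profit)
print(kmeans.cluster_centers_)       # would play the role of the expected profit parameters
print(np.bincount(kmeans.labels_))   # would play the role of the cluster sizes N_k
###Output
_____no_output_____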
###Markdown
Tactical Model Formulation Decision Variables$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$z \geq 0$: Increase in budget in order to have a feasible campaign.
###Code
# Declare and initialize model
mt = gp.Model('Tactical')
### Decisions variables
# Allocation of product offers to customers in clusters.
y = mt.addVars(cp, name="allocate")
# Budget correction
z = mt.addVar(name="budget_correction")
###Output
Using license file c:\gurobi\gurobi.lic
###Markdown
Constraints- **Number of offers**. Maximum number of offers of products for each cluster.\begin{equation}\sum_{j \in J} y_{k,j} \leq N_{k} \quad \forall k \in K\tag{1}\end{equation}Where$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$N_{k}$: Number of customers in cluster $k \in K$.
###Code
### Constraints
# Constraint on number of offers at each cluster
maxOffers_cons = mt.addConstrs((y.sum(k,'*') <= number_customers[k] for k in clusters), name='maxOffers')
###Output
_____no_output_____
###Markdown
Constraints- **Budget**. The marketing campaign budget constraint enforces that the total cost of the campaign should be less than the campaign budget. There is the possibility of increasing the budget to ensure the feasibility of the model, since the minimum number of offers for all the products may require this increase in the budget.\begin{equation}\sum_{k \in K} \sum_{j \in J} \nu_{k,j} \cdot y_{k,j} \leq B + z\tag{2}\end{equation}Where$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$z \geq 0$: Increase in budget in order to have a feasible campaign.$\nu_{k,j}$: Average variable cost associated with the offer of product $j \in J$ to an average customer of cluster $k \in K$.$B$: Marketing campaign budget.
###Code
# Budget constraint
budget_con = mt.addConstr((y.prod(expected_cost) - z <= budget), name='budget')
###Output
_____no_output_____
###Markdown
Constraints- **Offers limit**. Minimum number of offers of each product.\begin{equation}\sum_{k \in K} y_{k,j} \geq Q_{j} \quad \forall j \in J\tag{3}\end{equation}Where$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$Q_{j}$: Minimum number of offers of product $j \in J$.
###Code
# Constraints on min number of offers of each product
minOffers_cons = mt.addConstrs( (y.sum('*',j) >= min_offers[j] for j in products), name='min_offers')
###Output
_____no_output_____
###Markdown
Constraints- **ROI**. The minimum ROI constraint ensures that the ratio of total profits over cost is at least one plus the corporate hurdle rate.\begin{equation}\sum_{k \in K} \sum_{j \in J} \pi_{k,j} \cdot y_{k,j} \geq (1+R) \cdot \sum_{k \in K} \sum_{j \in J} \nu_{k,j} \cdot y_{k,j}\tag{4}\end{equation}Where$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$\pi_{k,j}$: Expected profit to the bank from the offer of product $j \in J$ to an average customer of cluster $k \in K$.$\nu_{k,j}$: Average variable cost associated with the offer of product $j \in J$ to an average customer of cluster $k \in K$.$R$: Corporate hurdle rate.
###Code
# Constraint to ensure minimum ROI
ROI_con = mt.addConstr((y.prod(expected_profit) - (1 + R)*y.prod(expected_cost) >= 0), name='ROI')
###Output
_____no_output_____
###Markdown
Objective Function- **Total profit**. Maximize total expected profit from marketing campaign and heavily penalize any correction to the budget.\begin{equation}\text{Max} \quad Z = \sum_{k \in K} \sum_{j \in J} \pi_{k,j} \cdot y_{k,j} - M \cdot z\tag{0}\end{equation}Where$y_{k,j} \geq 0$: Number of customers in cluster $k \in K$ that are offered product $j \in J$.$z \geq 0$: Increase in budget in order to have a feasible campaign.$\pi_{k,j}$: Expected profit to the bank from the offer of product $j \in J$ to an average customer of cluster $k \in K$.**Note:** The value of $M$ should be higher than any of the expected profits, so that the budget correction $z$ is used only when the model would otherwise be infeasible.
###Code
### Objective function
# Maximize total expected profit
M = 10000
mt.setObjective(y.prod(expected_profit) -M*z, GRB.MAXIMIZE)
# Verify model formulation
mt.write('tactical.lp')
# Run optimization engine
mt.optimize()
### Output Reports
# Optimal allocation of product offers to clusters
total_expected_profit = 0
total_expected_cost = 0
print("\nOptimal allocation of product offers to clusters.")
print("___________________________________________________")
for k,p in cp:
if y[k,p].x > 1e-6:
#print(y[k,p].varName, y[k,p].x)
print(f"The number of customers in cluster {k} that gets an offer of product {p} is: {y[k,p].x}")
total_expected_profit += expected_profit[k,p]*y[k,p].x
total_expected_cost += expected_cost[k,p]*y[k,p].x
increased_budget = '${:,.2f}'.format(z.x)
print(f"\nThe increase correction in the campaign budget is {increased_budget}.")
# Financial reports
optimal_ROI = round(100*total_expected_profit/total_expected_cost,2)
min_ROI = round(100*(1+R),2)
money_expected_profit = '${:,.2f}'.format(total_expected_profit)
money_expected_cost = '${:,.2f}'.format(total_expected_cost)
money_budget = '${:,.2f}'.format(budget)
print(f"\nFinancial reports.")
print("___________________________________________________")
print(f"Optimal total expected profit is {money_expected_profit}.")
print(f"Optimal total expected cost is {money_expected_cost} with a budget of {money_budget} and an extra amount of {increased_budget}.")
print(f"Optimal ROI is {optimal_ROI}% with a minimum ROI of {min_ROI}%.")
###Output
Optimal allocation of product offers to clusters.
___________________________________________________
The number of customers in cluster k1 that gets an offer of product p1 is: 2.0
The number of customers in cluster k1 that gets an offer of product p2 is: 2.0
The increase correction in the campaign budget is $400.00.
Financial reports.
___________________________________________________
Optimal total expected profit is $6,000.00.
Optimal total expected cost is $600.00 with a budget of $200.00 and an extra amount of $400.00.
Optimal ROI is 1000.0% with a minimum ROI of 120.0%.
###Markdown
AnalysisThe allocation of products to clusters required a budget increase of $\$400$. The total expected profit is $\$6,000$. The total expected cost is $\$600$, which equals the original budget of $\$200$ plus the increase of $\$400$. The expected ROI is 1,000%, which is much higher than the minimum ROI required. Operational Model Formulation Customer expected profit
###Code
### Sets
customers = ['c1', 'c2','c3','c4','c5','c6','c7','c8','c9','c10']
### Parameters
# Expected profit from a product offering for each customer in each cluster
ccp, customer_profit = gp.multidict({
('k1', 'c1', 'p1'): 2050,
('k1', 'c1', 'p2'): 1050,
('k1', 'c2', 'p1'): 1950,
('k1', 'c2', 'p2'): 950,
('k1', 'c3', 'p1'): 2000,
('k1', 'c3', 'p2'): 1000,
('k1', 'c4', 'p1'): 2100,
('k1', 'c4', 'p2'): 1100,
('k1', 'c5', 'p1'): 1900,
('k1', 'c5', 'p2'): 900,
('k2', 'c6', 'p1'): 3000,
('k2', 'c6', 'p2'): 2000,
('k2', 'c7', 'p1'): 2900,
('k2', 'c7', 'p2'): 1900,
('k2', 'c8', 'p1'): 3050,
('k2', 'c8','p2'): 2050,
('k2', 'c9', 'p1'): 3100,
('k2', 'c9', 'p2'): 3100,
('k2', 'c10', 'p1'): 2950,
('k2', 'c10', 'p2'): 2950
})
###Output
_____no_output_____
###Markdown
Customer offering cost
###Code
# Customer cost of offering a product at a cluster
ccp, customer_cost = gp.multidict({
('k1', 'c1', 'p1'): 205,
('k1', 'c1', 'p2'): 105,
('k1', 'c2', 'p1'): 195,
('k1', 'c2', 'p2'): 95,
('k1', 'c3', 'p1'): 200,
('k1', 'c3', 'p2'): 100,
('k1', 'c4', 'p1'): 210,
('k1', 'c4', 'p2'): 110,
('k1', 'c5', 'p1'): 190,
('k1', 'c5', 'p2'): 90,
('k2', 'c6', 'p1'): 300,
('k2', 'c6', 'p2'): 200,
('k2', 'c7', 'p1'): 290,
('k2', 'c7', 'p2'): 190,
('k2', 'c8', 'p1'): 305,
('k2', 'c8','p2'): 205,
('k2', 'c9', 'p1'): 310,
('k2', 'c9', 'p2'): 310,
('k2', 'c10', 'p1'): 295,
('k2', 'c10', 'p2'): 295
})
###Output
_____no_output_____
###Markdown
Operational Model Formulation Decision Variables$x_{k,i,j} \in \{0,1 \}$: This variable is equal to 1, if product $j \in J^{k}$ is offered to customer $i \in I^{k}$, and 0 otherwise.
###Code
# Declare and initialize model
mo = gp.Model('Operational')
### Decision variables
x = mo.addVars(ccp, vtype=GRB.BINARY, name="assign")
###Output
_____no_output_____
###Markdown
Constraints- **Product offers**. Allocate offers of a product to customers of each cluster.\begin{equation}\sum_{i \in I^{k}} x_{k,i,j} = Y_{k,j} \quad \forall j \in J^{k}, k \in K\tag{1}\end{equation}Where$x_{k,i,j} \in \{0,1 \}$: This variable is equal to 1, if product $j \in J^{k}$ is offered to customer $i \in I^{k}$, and 0 otherwise.$Y_{k,j} = \lfloor y_{k,j} \rfloor $: Number of customers in cluster k that will get an offer of product $j \in J^{k}$.
###Code
# Product offers constraint
productOffers = {}
for k in clusters:
for j in products:
productOffers[k,j] = mo.addConstr(gp.quicksum(x[k,i,j] for kk,i,jj in ccp if (kk ==k and jj == j)) ==
int(y[k,j].x), name='prodOffers_' + str(k) + ',' + str(j) )
###Output
_____no_output_____
###Markdown
Constraints- **Offers limit**. At most one product may be offered to a customer of a cluster.\begin{equation}\sum_{j \in J^{k}} x_{k,i,j} \leq 1 \quad \forall i \in I^{k}, k \in K\tag{2}\end{equation}Where$x_{k,i,j} \in \{0,1 \}$: This variable is equal to 1, if product $j \in J^{k}$ is offered to customer $i \in I^{k}$, and 0 otherwise.
###Code
# limit on the number of offers to each customer in a cluster.
ki = [('k1', 'c1'),
('k1', 'c2'),
('k1', 'c3'),
('k1', 'c4'),
('k1', 'c5'),
('k2', 'c6'),
('k2', 'c7'),
('k2', 'c8'),
('k2', 'c9'),
('k2', 'c10')]
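# Sketch (illustration only; ki_derived is a new name not used below): the same cluster-customer
# pairs can be derived from the ccp key list defined above instead of typed by hand, so the
# pairs stay in sync if customers are added or removed.
ki_derived = set((k, i) for k, i, j in ccp)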
customerOffers = {}
for k,i in ki:
customerOffers[k,i] = mo.addConstr(gp.quicksum(x[k,i,j] for kk,ii,j in ccp if (kk == k and ii == i) ) <= 1,
name ='custOffers_' + str(k) + ',' + str(i) )
###Output
_____no_output_____
###Markdown
Objective Function- **Total profit**. Maximize total individual expected profit.\begin{equation}\text{Max} \quad Z = \sum_{k \in K} \sum_{i \in I^{k}} \sum_{j \in J^{k}} r_{k,i,j} \cdot x_{k,i,j}\tag{0}\end{equation}Where$x_{k,i,j} \in \{0,1 \}$: This variable is equal to 1, if product $j \in J^{k}$ is offered to customer $i \in I^{k}$, and 0 otherwise.$r_{k,i,j}$: Expected individual profit of customer $i \in I^{k}$ from offer of product $j \in J^{k}$.
###Code
### Objective function
# Maximize total profit
mo.setObjective(x.prod(customer_profit), GRB.MAXIMIZE)
# Verify model formulation
mo.write('operational.lp')
# Run optimization engine
mo.optimize()
### Output Reports
# Optimal assignment of product offers to customers
total_customer_profit = 0
total_customer_cost = 0
kvalue = None
first = True
num_assignments = 0
print("\nOptimal assignment of product offers to customers.")
print("___________________________________________________")
for k,i,j in ccp:
if k != kvalue:
prevk = kvalue
kvalue = k
if not first:
print("___________________________________________________")
print(f"Number of assignments in cluster {prevk} is {num_assignments}")
print("___________________________________________________")
num_assignments = 0
if first:
first = False
if x[k,i,j].x > 0.5:
#print(x[k,i,j].varName, x[k,i,j].x)
profit = '${:,.2f}'.format(customer_profit[k,i,j])
cost = '${:,.2f}'.format(customer_cost[k,i,j])
print(f"Customer {i} in cluster {k} gets an offer of product {j}:")
print(f"The expected profit is {profit} at a cost of {cost}")
total_customer_profit += customer_profit[k,i,j]*x[k,i,j].x
total_customer_cost += customer_cost[k,i,j]*x[k,i,j].x
num_assignments += 1
print("___________________________________________________")
print(f"Number of assignments in cluster {kvalue} is {num_assignments}")
print("___________________________________________________\n")
# Financial reports
customers_ROI = round(100*total_customer_profit/total_customer_cost,2)
money_customers_profit = '${:,.2f}'.format(total_customer_profit)
money_customers_cost = '${:,.2f}'.format(total_customer_cost)
print(f"\nFinancial reports.")
print("___________________________________________________")
print(f"Optimal total customers profit is {money_customers_profit}.")
print(f"Optimal total customers cost is {money_customers_cost} with a budget of {money_budget} and an extra amount of {increased_budget}.")
print(f"Optimal ROI is {customers_ROI}% with a minimum ROI of {min_ROI}%.")
###Output
Optimal assignment of product offers to customers.
___________________________________________________
Customer c1 in cluster k1 gets an offer of product p2:
The expected profit is $1,050.00 at a cost of $105.00
Customer c2 in cluster k1 gets an offer of product p2:
The expected profit is $950.00 at a cost of $95.00
Customer c3 in cluster k1 gets an offer of product p1:
The expected profit is $2,000.00 at a cost of $200.00
Customer c4 in cluster k1 gets an offer of product p1:
The expected profit is $2,100.00 at a cost of $210.00
___________________________________________________
Number of assignments in cluster k1 is 4
___________________________________________________
___________________________________________________
Number of assignments in cluster k2 is 0
___________________________________________________
Financial reports.
___________________________________________________
Optimal total customers profit is $6,100.00.
Optimal total customers cost is $610.00 with a budget of $200.00 and an extra amount of $400.00.
Optimal ROI is 1000.0% with a minimum ROI of 120.0%.
###Markdown
AnalysisEach customer got, at most, one product offer. Product p2 is offered to customers c1 and c2, and product p1 is offered to customers c3 and c4. Products p1 and p2 are offered to at least two customers (a constraint inherited from the tactical model). Observe that to satisfy these hard business constraints, the budget needs to be increased by $\$400$. The cost of assigning products to customers is $\$610$, which slightly violates the total available budget of $\$600$. The total customer profit is $\$6,100$. The ROI is 1,000%, which is much higher than the minimum ROI required.If the total available budget needs to be enforced, the following constraint can be added to the operational model:- **Budget**. Enforce budget constraint.\begin{equation}\sum_{k \in K} \sum_{i \in I^{k}} \sum_{j \in J^{k}} c_{k,i,j} \cdot x_{k,i,j} \leq B'\tag{3}\end{equation}The new budget is the original budget plus the correction, that is $B' = B + z$. Scenario 1 Enforce the total available budget constraint. In this case, the operational model is: Objective function- **Total profit**. Maximize total individual expected profit.\begin{equation}\text{Max} \quad Z = \sum_{k \in K} \sum_{i \in I^{k}} \sum_{j \in J^{k}} r_{k,i,j} \cdot x_{k,i,j}\tag{0}\end{equation} Constraints- **Product offers**. Allocate offers of a product to customers of each cluster.\begin{equation}\sum_{i \in I^{k}} x_{k,i,j} = Y_{k,j} \quad \forall j \in J^{k}, k \in K\tag{1}\end{equation}- **Offers limit**. At most one product may be offered to a customer in each cluster.\begin{equation}\sum_{j \in J^{k}} x_{k,i,j} \leq 1 \quad \forall i \in I^{k}, k \in K\tag{2}\end{equation}- **Budget**. Enforce budget constraint.\begin{equation}\sum_{k \in K} \sum_{i \in I^{k}} \sum_{j \in J^{k}} c_{k,i,j} \cdot x_{k,i,j} \leq B'\tag{3}\end{equation}
###Code
### Operational model enforcing constraint for total budget available
# Declare and initialize model
mob = gp.Model('OperationalB')
### Decision variables
xb = mob.addVars(ccp, vtype=GRB.BINARY, name="assign")
# Product offers constraint
productOffersb = {}
for k in clusters:
for j in products:
productOffersb[k,j] = mob.addConstr(gp.quicksum(xb[k,i,j] for kk,i,jj in ccp if (kk ==k and jj == j)) ==
int(y[k,j].x), name='prodOffersb_' + str(k) + ',' + str(j) )
# limit on the number of offers to each customer in a cluster.
customerOffersb = {}
for k,i in ki:
customerOffersb[k,i] = mob.addConstr(gp.quicksum(xb[k,i,j] for kk,ii,j in ccp if (kk == k and ii == i) ) <= 1,
name ='custOffersb_' + str(k) + ',' + str(i) )
# budget constraint
# New budget
new_budget = budget + z.x
totBudget = mob.addConstr(xb.prod(customer_cost) <= new_budget, name='total_budget')
### Objective function
# Maximize total profit
mob.setObjective(xb.prod(customer_profit), GRB.MAXIMIZE)
# Verify model formulation
mob.write('operationalB.lp')
# Run optimization engine
mob.optimize()
### Output Reports
# Optimal assignment of product offers to customers
total_customer_profitb = 0
total_customer_costb = 0
kvalueb = None
firstb = True
num_assignmentsb = 0
print("\nOptimal assignment of product offers to customers.")
print("___________________________________________________")
for k,i,j in ccp:
if k != kvalueb:
prevkb = kvalueb
kvalueb = k
if not firstb:
print("___________________________________________________")
print(f"Number of assignments in cluster {prevkb} is {num_assignmentsb}")
print("___________________________________________________")
num_assignmentsb = 0
if firstb:
firstb = False
if xb[k,i,j].x > 0.5:
#print(x[k,i,j].varName, x[k,i,j].x)
profitb = '${:,.2f}'.format(customer_profit[k,i,j])
costb = '${:,.2f}'.format(customer_cost[k,i,j])
print(f"Customer {i} in cluster {k} gets an offer of product {j}:")
print(f"The expected profit is {profitb} at a cost of {costb}")
total_customer_profitb += customer_profit[k,i,j]*xb[k,i,j].x
total_customer_costb += customer_cost[k,i,j]*xb[k,i,j].x
num_assignmentsb += 1
print("___________________________________________________")
print(f"Number of assignments in cluster {kvalueb} is {num_assignmentsb}")
print("___________________________________________________\n")
# Financial reports
customers_ROIb = round(100*total_customer_profitb/total_customer_costb,2)
money_customers_profitb = '${:,.2f}'.format(total_customer_profitb)
money_customers_costb = '${:,.2f}'.format(total_customer_costb)
print(f"\nFinancial reports.")
print("___________________________________________________")
print(f"Optimal total customers profit is {money_customers_profitb}.")
print(f"Optimal total customers cost is {money_customers_costb} with a budget of {money_budget} and an extra amount of {increased_budget}.")
print(f"Optimal ROI is {customers_ROIb}% with a minimum ROI of {min_ROI}%.")
###Output
Optimal assignment of product offers to customers.
___________________________________________________
Customer c1 in cluster k1 gets an offer of product p2:
The expected profit is $1,050.00 at a cost of $105.00
Customer c2 in cluster k1 gets an offer of product p1:
The expected profit is $1,950.00 at a cost of $195.00
Customer c4 in cluster k1 gets an offer of product p1:
The expected profit is $2,100.00 at a cost of $210.00
Customer c5 in cluster k1 gets an offer of product p2:
The expected profit is $900.00 at a cost of $90.00
___________________________________________________
Number of assignments in cluster k1 is 4
___________________________________________________
___________________________________________________
Number of assignments in cluster k2 is 0
___________________________________________________
Financial reports.
___________________________________________________
Optimal total customers profit is $6,000.00.
Optimal total customers cost is $600.00 with a budget of $200.00 and an extra amount of $400.00.
Optimal ROI is 1000.0% with a minimum ROI of 120.0%.
|
m04_machine_learning/m04_c06_ml_workflow/m04_c06_ml_workflow.ipynb | ###Markdown
MAT281 Aplicaciones de la Matemática en la Ingeniería Module 04 Class 06: Machine Learning Projects Objectives* Summarize what was learned in the module.* Get to know the _workflow_ of a _machine learning_ project. Contents* [Estimators](estimator)* [Pre-Processing](preprocessing)* [Pipelines](pipelines)* [Model Evaluation](model_evaluation)* [Hyper-Parameter Search](hyperparameter_search) Estimators We already know that `scikit-learn` provides many Machine Learning algorithms and models, which are officially called **estimators**. Each _estimator_ can be fitted using suitable data. For example, as motivation, __Ridge Regression__ is a type of regression that adds a regularization parameter; in particular, it minimizes a penalized sum of squared residuals, that is:$$\min_\beta \vert \vert y - X \beta \vert \vert_2^2 + \alpha \vert \vert \beta \vert \vert_2^2$$The hyper-parameter $\alpha > 0$ is usually known as the ridge penalty parameter. In the statistical literature it is denoted by $\lambda$, but since in `python` the name lambda is reserved for anonymous functions, `scikit-learn` chose another Greek letter. Ridge regression is a popular alternative for dealing with collinearity. The `Ridge` estimator is found in `sklearn.linear_model`.
###Code
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.datasets import load_boston
X, y = load_boston(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
rr_est = Ridge(alpha=0.1)
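# Illustration only (beta_closed_form is a new name, not used below): the penalized objective
# above has the closed-form solution beta = (X'X + alpha*I)^(-1) X'y (intercept ignored here),
# which can be checked directly with numpy.
import numpy as np
beta_closed_form = np.linalg.solve(X.T @ X + 0.1 * np.eye(X.shape[1]), X.T @ y)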
###Output
_____no_output_____
###Markdown
The `fit` method typically accepts two inputs:* The design matrix `X`, a two-dimensional array that is typically `(n_samples, n_features)`.* The _target_ values `y`. - In regression tasks they correspond to real numbers. - In classification tasks they correspond to integers (or another discrete set of elements). - For unsupervised learning this input is not needed.
###Code
rr_est.fit(X, y)
rr_est.coef_
rr_est.intercept_
###Output
_____no_output_____
###Markdown
The `predict` method needs a two-dimensional array as input. To illustrate, we can use the same training data.
###Code
rr_est.predict(X)[:10]
###Output
_____no_output_____
###Markdown
In a standard workflow we would fit on the training data, predict on the test data, and then compute some metric, for example, in a regression setting, the mean squared error.
###Code
rr_est.fit(X_train, y_train)
y_pred = rr_est.predict(X_test)
from sklearn.metrics import mean_squared_error
mean_squared_error(y_pred, y_test)
###Output
_____no_output_____
###Markdown
Pre-Processing In the typical workflow of a machine learning project it is usual to process and transform the data. In `scikit-learn`, pre-processing and transformation follow the same API as _estimator_ objects, but they are called _transformers_. However, they do not have a `predict` method; instead they have a transformation method, `transform`. We will illustrate with the typical standardization.
###Code
from sklearn.preprocessing import StandardScaler
#StandardScaler?
###Output
_____no_output_____
###Markdown
Usually the same data is fitted and transformed, so the methods are applied in a chain.
###Code
StandardScaler().fit(X).transform(X)
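# Equivalent by hand (illustration only): standardization subtracts each column's mean
# and divides by its standard deviation.
(X - X.mean(axis=0)) / X.std(axis=0)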
###Output
_____no_output_____
###Markdown
However, many of these objects (if not all of them) have the `fit_transform` method.
###Code
StandardScaler().fit_transform(X)
###Output
_____no_output_____
###Markdown
Pipelines `Scikit-learn` lets us combine _transformers_ and _estimators_ by joining them through "pipes", an object called a _pipeline_. Again, the API is consistent with an _estimator_, both for fitting and for predicting.
###Code
from sklearn.pipeline import make_pipeline
pipe = make_pipeline(
StandardScaler(),
Ridge(alpha=0.1)
)
pipe.fit(X_train, y_train)
pipe.predict(X_test)[:10]
mean_squared_error(pipe.predict(X_test), y_test)
###Output
_____no_output_____
###Markdown
Model Evaluation We already know that fitting a model on known data does not imply that it will behave well on new data, which is why we have tools such as _cross validation_ to evaluate models on the known data.
###Code
from sklearn.model_selection import cross_validate
result = cross_validate(rr_est, X_train, y_train, cv=5)  # 5-fold CV (also the default)
result.keys()
result["test_score"]
###Output
_____no_output_____
###Markdown
Hyper-Parameter Search For ridge regression, the penalty parameter is a hyper-parameter that needs to be chosen by some procedure. Believe it or not, `scikit-learn` also provides tools to choose this kind of hyper-parameter automatically. For example, `GridSearchCV` performs an exhaustive search over the specified candidate values of the hyper-parameters.
###Code
import numpy as np
from sklearn.model_selection import GridSearchCV
param_grid = {"alpha": np.arange(0, 1, 0.1)}
search = GridSearchCV(
estimator=rr_est,
param_grid=param_grid
)
search.fit(X_train, y_train)
search.best_params_
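# Besides best_params_, the fitted GridSearchCV also exposes the refitted best model and the
# full cross-validation results (standard scikit-learn attributes; shown for illustration only).
search.best_estimator_
search.cv_results_["mean_test_score"]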
###Output
_____no_output_____
###Markdown
The `search` object is now equivalent to a `Ridge` estimator but with the best parameters found (`alpha` = 0).
###Code
search.score(X_test, y_test)
###Output
_____no_output_____
Classifier Non-ANN (Project UTS)/SVM Classifier.ipynb | ###Markdown
SVM Arief Saferman - 1806148656 What is SVM? SVM is a machine learning algorithm (a supervised learning method) that can be used for classification or regression problems. The algorithm uses a technique called a kernel to transform the data and then finds an optimal boundary between the possible outputs. In short, SVM performs some fairly complex data transformations and then figures out how to separate the data according to the labels we have defined. Support Vector Machine A Support Vector Machine performs classification by finding the hyperplane that maximizes the margin between two classes. The vectors (cases) that define the hyperplane are the support vectors used for the classification. The algorithm defines an optimal hyperplane that maximizes the margin so that the data types can be separated, either linearly or non-linearly.
###Code
# Import all required libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
plt.style.use('grayscale')
from sklearn.model_selection import GridSearchCV
###Output
_____no_output_____
###Markdown
Plotting Functions
###Code
# PlotBar function to measure the per-class accuracy
def PlotBar(yPred, yTest):
names = ['1', '2', '3']
height = []
for i in range(1, 4):
pred = yPred == i
real = yTest == i
accuracy = np.sum(pred == real) / len(real) * 100.0
height.append(accuracy)
plt.bar(names, height)
plt.suptitle('Classification Accuracy')
plt.show()
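# Alternative sketch (illustration only; per_class_recall is a new helper not used below):
# per-class recall from the confusion-matrix diagonal, which only counts samples that truly
# belong to each class, unlike the indicator-based accuracy computed in PlotBar above.
def per_class_recall(yPred, yTest):
    from sklearn.metrics import confusion_matrix
    cm = confusion_matrix(yTest, yPred, labels=[1, 2, 3])
    return cm.diagonal() / cm.sum(axis=1) * 100.0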
# Function to plot the average test and train accuracy
def PlotTrainTestAcc(listTrainAcc, listTestAcc, xLabel):
ind = np.arange(len(listTrainAcc))
width = 0.35
fig = plt.figure(figsize=(8,5))
ax = fig.add_axes([0,0,1,1])
diffTest = [listTestAcc[i]-listTrainAcc[i] for i in range(len(listTestAcc))]
ax.bar(ind, listTrainAcc, width)
ax.bar(ind, diffTest, width, bottom=listTrainAcc)
ax.legend(labels=['Test', 'Train'])
plt.xticks(ind, xLabel)
plt.xlabel('Kernel Tuning')
plt.ylabel('Percentage')
ax.set_title('Average Test and Train Accuracy Comparison')
# Load the seeds dataset
seeds = pd.read_csv(r'D:\Kuliah\Semester 6\Tugas\AI\seeds.csv')
print(seeds.head())
# Separate the independent variables from the dependent variable
# x = the independent variables
# y = the dependent variable
x = seeds.iloc[:, :-1]
y = seeds.iloc[:, 7]
###Output
Area Perimeter Compactness Kernel.Length Kernel.Width \
0 15.26 14.84 0.8710 5.763 3.312
1 14.88 14.57 0.8811 5.554 3.333
2 14.29 14.09 0.9050 5.291 3.337
3 13.84 13.94 0.8955 5.324 3.379
4 16.14 14.99 0.9034 5.658 3.562
Asymmetry.Coeff Kernel.Groove Type
0 2.221 5.220 1
1 1.018 4.956 1
2 2.699 4.825 1
3 2.259 4.805 1
4 1.355 5.175 1
###Markdown
SVM Parameter Settings gamma = [0.1, 0.01, 0.001, 0.0001] C = [1, 10, 100, 1000] Kernel = [Linear, RBF, Poly]
###Code
param_grid = [{'kernel' : ['poly'], 'C' : (1, 10, 100, 1000), 'gamma' : (0.1, 0.01, 0.001, 0.0001)}]
param_grid
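# Sketch (illustration only; param_grid_all_kernels is a new name, not used below): all three
# kernels from the experiment plan could be searched in a single grid instead of editing the
# kernel by hand for each run.
param_grid_all_kernels = [{'kernel': ['linear', 'rbf', 'poly'],
                           'C': (1, 10, 100, 1000),
                           'gamma': (0.1, 0.01, 0.001, 0.0001)}]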
# As a best practice and for consistency we should convert the pandas DataFrame to a numpy ndarray,
# because pandas and sklearn are built on top of numpy arrays,
# so it is better to convert first before processing
x = x.to_numpy()
y = y.to_numpy()
###Output
_____no_output_____
###Markdown
Training and Fitting the Model We will now split the data that has already been converted to an ndarray (n-dimensional array). We will use 50% of the data for training and 50% for testing.
###Code
# Import the SVC class from sklearn and initialize the SVM model
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
# change the kernel for each of the 3 experiments (linear, rbf, and poly)
model_1 = SVC(kernel = 'poly')
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.5)
grid_search = GridSearchCV(model_1, param_grid, cv = 10, verbose = 10)
grid_search.fit(x_train, y_train)
grid_search.best_params_
###Output
_____no_output_____
###Markdown
Tuning hyperparameters with GridSearchCV A hyperparameter is a parameter we want to optimize in order to obtain the best accuracy. In scikit-learn they are arguments of the GridSearchCV constructor. A grid search methodically builds and evaluates a model for every combination of algorithm parameters specified in a grid. GridSearchCV lets us combine an estimator with a grid search to tune the hyper-parameters. When tuning, several parameters need to be considered: - Kernels: the main job of the kernel is to take a low-dimensional input space and transform it into a higher-dimensional space. This is mostly useful for non-linear classification problems. - C (Regularization): a penalty parameter that represents misclassification or error. It tells the SVM optimization how much error can be tolerated, which gives us control over the trade-off between the decision boundary and the classification error. When C is given a high value the algorithm classifies the data points correctly, but there is also a chance that it will overfit. - Gamma: a parameter that determines how far the influence of a single training point reaches when drawing a sensible separating boundary. When gamma is large, only the nearest data points have a strong influence; when gamma is small, far-away data points also help determine the decision boundary. [Reference](https://www.vebuso.com/2020/03/svm-hyperparameter-tuning-using-gridsearchcv/) Use the best parameters produced by the GridSearchCV tuning above
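Before applying those best parameters, here is a small hedged sketch of the C trade-off just described, reusing the train/test split created in the previous cells (the two C values are arbitrary illustrative choices, not tuned ones):
```python
# Compare a small and a large penalty C on the same split (illustration only).
for C_value in (0.1, 1000):
    svc = SVC(kernel='rbf', gamma=0.01, C=C_value)
    svc.fit(x_train, y_train)
    print('C = {:>6}: train acc = {:.3f}, test acc = {:.3f}'.format(
        C_value, svc.score(x_train, y_train), svc.score(x_test, y_test)))
```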
###Code
model_1 = SVC(C=100, kernel='linear', degree=3, gamma=0.1, coef0=0.0, shrinking=True,
probability=False, tol=0.001, cache_size=200, class_weight=None,
verbose=0, max_iter=-1, decision_function_shape="ovr", random_state = 0)
# Train the model initialized above using the fit function
model_1.fit(x_train, y_train)
# Test the trained model
pred = model_1.predict(x_test)
# Import the accuracy metric for testing
from sklearn.metrics import accuracy_score
acc1a = model_1.score(x_test, y_test) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_1.score(x_train, y_train) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_1.score(x_test, y_test) * 100))
print('Accuracy of linear SVC on test set: %.2f' % (acc1a))
PlotBar(pred, y_test)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix1a = plot_confusion_matrix(model_1, x_test, y_test, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix1a.ax_.set_title(title)
print(title)
print(matrix1a.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[24 3 2]
[ 0 37 0]
[ 2 0 32]]
Normalized confusion matrix
[[0.82758621 0.10344828 0.06896552]
[0. 1. 0. ]
[0.05882353 0. 0.94117647]]
###Markdown
Cross Validation: swap the test and train data
###Code
model_1.fit(x_test, y_test)
pred = model_1.predict(x_train)
acc1b = model_1.score(x_train, y_train) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_1.score(x_test, y_test) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_1.score(x_train, y_train) * 100))
print('Accuracy of linear SVC on test set: %.2f' % (acc1b))
PlotBar(pred, y_train)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix1b = plot_confusion_matrix(model_1, x_train, y_train, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix1b.ax_.set_title(title)
print(title)
print(matrix1b.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[35 0 2]
[ 2 29 0]
[ 0 0 31]]
Normalized confusion matrix
[[0.94594595 0. 0.05405405]
[0.06451613 0.93548387 0. ]
[0. 0. 1. ]]
###Markdown
Analysis of the First Model In this first model I used a linear kernel. The linear hyperplane works quite well on the seeds data: the accuracy is around 95% for each class with this linear model. Experiment Variation 2: trial with the RBF kernel
###Code
model_2 = SVC(C=100, kernel='rbf', degree=3, gamma=0.01, coef0=0.0, shrinking=True,
probability=False, tol=0.001, cache_size=200, class_weight=None,
verbose=0, max_iter=-1, decision_function_shape="ovr", random_state = 0)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.5)
model_2.fit(x_train, y_train)
pred = model_2.predict(x_test)
acc2a = model_2.score(x_test, y_test) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_2.score(x_train, y_train) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_2.score(x_test, y_test) * 100))
print('Accuracy of linear SVC on test set: %.2f' % (acc2a))
PlotBar(pred, y_test)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix2a = plot_confusion_matrix(model_2, x_test, y_test, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix2a.ax_.set_title(title)
print(title)
print(matrix2a.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[28 0 1]
[ 0 37 0]
[ 4 0 30]]
Normalized confusion matrix
[[0.96551724 0. 0.03448276]
[0. 1. 0. ]
[0.11764706 0. 0.88235294]]
###Markdown
Swap Data
###Code
model_2.fit(x_test, y_test)
pred = model_2.predict(x_train)
acc2b = model_2.score(x_train, y_train) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_2.score(x_test, y_test) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_2.score(x_train, y_train) * 100))
print('Accuracy of linear SVC on test set: %.2f' % (acc2b))
PlotBar(pred, y_train)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix2b = plot_confusion_matrix(model_2, x_train, y_train, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix2b.ax_.set_title(title)
print(title)
print(matrix2b.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[33 3 1]
[ 1 30 0]
[ 2 0 29]]
Normalized confusion matrix
[[0.89189189 0.08108108 0.02702703]
[0.03225806 0.96774194 0. ]
[0.06451613 0. 0.93548387]]
###Markdown
Experiment Variation 3: trial with the polynomial kernel
###Code
model_3 = SVC(C=1, kernel='poly', degree=3, gamma=0.01, coef0=0.0, shrinking=True,
probability=False, tol=0.001, cache_size=200, class_weight=None,
verbose=0, max_iter=-1, decision_function_shape="ovr", random_state = 0)
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.5)
model_3.fit(x_train, y_train)
pred = model_3.predict(x_test)
acc3a = model_3.score(x_test, y_test) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_3.score(x_train, y_train) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_3.score(x_test, y_test) * 100))
print('Accuracy of linear SVC on test set: %.2f' % (acc3a))
PlotBar(pred, y_test)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix3a = plot_confusion_matrix(model_3, x_test, y_test, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix3a.ax_.set_title(title)
print(title)
print(matrix3a.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[27 0 2]
[ 1 36 0]
[ 1 0 33]]
Normalized confusion matrix
[[0.93103448 0. 0.06896552]
[0.02702703 0.97297297 0. ]
[0.02941176 0. 0.97058824]]
###Markdown
Swap Data
###Code
model_3.fit(x_test, y_test)
pred = model_3.predict(x_train)
acc3b = model_3.score(x_train, y_train) * 100
print('Accuracy of linear SVC on training set: {:.2f}'.format(model_3.score(x_test, y_test) * 100))
# print('Accuracy of linear SVC on test set: {:.2f}'.format(model_3.score(x_train, y_train) * 100))
# acc3b = "{:.2f}".format(acc3b)
print('Accuracy of linear SVC on test set: %.2f' % (acc3b))
PlotBar(pred, y_train)
from sklearn.metrics import plot_confusion_matrix
class_names = ['1', '2', '3']
titles_options = [("Confusion matrix, without normalization", None),
("Normalized confusion matrix", 'true')]
for title, normalize in titles_options:
matrix3b = plot_confusion_matrix(model_3, x_train, y_train, display_labels = class_names,
cmap = plt.cm.gray, normalize = normalize)
matrix3b.ax_.set_title(title)
print(title)
print(matrix3b.confusion_matrix)
plt.show()
###Output
Confusion matrix, without normalization
[[27 5 5]
[ 4 27 0]
[ 0 0 31]]
Normalized confusion matrix
[[0.72972973 0.13513514 0.13513514]
[0.12903226 0.87096774 0. ]
[0. 0. 1. ]]
###Markdown
Visualize the Dataset
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets
def make_meshgrid(x, y, h = .02):
x_min, x_max = x.min() - 1, x.max() + 1
y_min, y_max = y.min() - 1, y.max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
return xx, yy
def plot_contours(ax, clf, xx, yy, **params):
z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
z = z.reshape(xx.shape)
out = ax.contourf(xx, yy, z, **params)
return out
# Put the train, test, and xLabel results into lists
train = [94.95, 90.91, 100.00]
test = [90.00, 89.00, 96.00]
xlabel = ['linear', 'rbf', 'polinomial']
PlotTrainTestAcc(train, test, xlabel)
xlabel = ['linear', 'rbf', 'polinomial']
PlotTrainTestAcc(test, train, xlabel)
x = seeds.iloc[:, :2]
y = seeds.iloc[:, 7]
c = 1.0
models = (svm.SVC(kernel = 'linear', C = c),
svm.SVC(kernel = 'rbf', gamma = 'auto', C = c),
svm.SVC(kernel = 'poly', degree = 3, gamma = 'auto', C = c))
models = (clf.fit(x, y) for clf in models)
titles = ('SVC with linear kernel',
'SVC with RBF kernel',
'SVC with polynomial kernel')
fig, sub = plt.subplots(3)
plt.subplots_adjust(left = 5, right = 10, bottom = 4, top = 10)
x0, x1 = x.iloc[:, 0], x.iloc[:, 1]
xx, yy = make_meshgrid(x0, x1)
c_map = plt.cm.coolwarm
for clf, title, ax in zip(models, titles, sub.flatten()):
plot_contours(ax, clf, xx, yy, cmap = c_map, alpha = 0.8)
ax.scatter(x0, x1, c = y, cmap = c_map, s = 400, edgecolors = 'k')
ax.set_xlim(xx.min(), xx.max())
ax.set_ylim(yy.min(), yy.max())
ax.set_xlabel('Area', fontsize = 36)
ax.set_ylabel('Perimeter', fontsize = 36)
ax.set_xticks(())
ax.set_yticks(())
ax.set_title(title, fontsize = 36)
plt.show()
# seeds = pd.read_csv(r'D:\Kuliah\Semester 6\Tugas\AI\seeds.csv')
x = seeds.iloc[:, :-1]
colu = seeds[['Area', 'Perimeter', 'Compactness', 'Kernel.Length', 'Kernel.Width', 'Asymmetry.Coeff', 'Kernel.Groove']]
col = colu.columns
dout = x
dout['Type'] = model_1.predict(x)
print(dout)
dout.to_csv('SVM.csv')
###Output
Area Perimeter Compactness Kernel.Length Kernel.Width \
0 15.26 14.84 0.8710 5.763 3.312
1 14.88 14.57 0.8811 5.554 3.333
2 14.29 14.09 0.9050 5.291 3.337
3 13.84 13.94 0.8955 5.324 3.379
4 16.14 14.99 0.9034 5.658 3.562
.. ... ... ... ... ...
194 12.19 13.20 0.8783 5.137 2.981
195 11.23 12.88 0.8511 5.140 2.795
196 13.20 13.66 0.8883 5.236 3.232
197 11.84 13.21 0.8521 5.175 2.836
198 12.30 13.34 0.8684 5.243 2.974
Asymmetry.Coeff Kernel.Groove Type
0 2.221 5.220 1
1 1.018 4.956 1
2 2.699 4.825 1
3 2.259 4.805 1
4 1.355 5.175 1
.. ... ... ...
194 3.631 4.870 3
195 4.325 5.003 3
196 8.315 5.056 3
197 3.598 5.044 3
198 5.637 5.063 3
[199 rows x 8 columns]
###Markdown
Bar plot of the accuracy for each class
###Code
data1a = [matrix1a.confusion_matrix[0][0],
matrix2a.confusion_matrix[0][0],
matrix3a.confusion_matrix[0][0]]
data2a = [matrix1a.confusion_matrix[1][1],
matrix2a.confusion_matrix[1][1],
matrix3a.confusion_matrix[1][1]]
data3a = [matrix1a.confusion_matrix[2][2],
matrix2a.confusion_matrix[2][2],
matrix3a.confusion_matrix[2][2]]
width = 0.3
fig = plt.figure(figsize= (8,5))
ax = fig.add_axes([0,0,1,1])
ax.grid(zorder = 0, color = 'gray')
# Show data in plots
ax.bar(np.arange(len(data1a)), data1a, width=width)
ax.bar(np.arange(len(data2a))+ width, data2a, width=width)
ax.bar(np.arange(len(data3a))+ 2*width, data3a, width=width)
# add axis labels
xLabel = ['Linear', 'RBF', 'Poly']
ax.legend(labels=['class 1', 'class 2', 'class 3'])
plt.xticks(np.arange(len(data1a)), xLabel)
plt.xlabel('Hyperparameter variation')
plt.ylabel('Percentage')
ax.set_title('Accuracy per class')
plt.show()
###Output
_____no_output_____
###Markdown
Bar plot of the accuracy for each class with swapped data
###Code
data1a = [matrix1b.confusion_matrix[0][0],
matrix2b.confusion_matrix[0][0],
matrix3b.confusion_matrix[0][0]]
data2a = [matrix1b.confusion_matrix[1][1],
matrix2b.confusion_matrix[1][1],
matrix3b.confusion_matrix[1][1]]
data3a = [matrix1b.confusion_matrix[2][2],
matrix2b.confusion_matrix[2][2],
matrix3b.confusion_matrix[2][2]]
width = 0.3
fig = plt.figure(figsize= (8,5))
ax = fig.add_axes([0,0,1,1])
ax.grid(zorder = 0, color = 'gray')
# Show data in plots
ax.bar(np.arange(len(data1a)), data1a, width=width)
ax.bar(np.arange(len(data2a))+ width, data2a, width=width)
ax.bar(np.arange(len(data3a))+ 2*width, data3a, width=width)
# add axis labels
xLabel = ['Linear', 'RBF', 'Poly']
ax.legend(labels=['class 1', 'class 2', 'class 3'])
plt.xticks(np.arange(len(data1a)), xLabel)
plt.xlabel('Hyperparameter variation')
plt.ylabel('Percentage')
ax.set_title('Accuracy per class (cross validation)')
plt.show()
import matplotlib.pyplot as plt
# line 1 points
x1 = ['Linear','RBF','Poly']
y1 = [acc1a,acc2a,acc3a]
# plotting the line 1 points
plt.plot(x1, y1, label = "Experiment 1")
# line 2 points
y2 = [acc1b,acc2b,acc3b]
# plotting the line 2 points
plt.plot(x1, y2, label = "Experiment 2 (Cross Validation)")
plt.xlabel('Kernel')
# Set the y axis label of the current axis.
plt.ylabel('Accuracy (%)')
# Set a title of the current axes.
plt.title('Test accuracy per kernel for both experiments')
# show a legend on the plot
plt.legend()
# Display a figure.
plt.ylim(bottom=0, top=100)
plt.show()
###Output
_____no_output_____ |
5. transfer-learning/Transfer_Learning_Exercise.ipynb | ###Markdown
Transfer LearningMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. > Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using [VGGNet](https://arxiv.org/pdf/1409.1556.pdf) trained on the [ImageNet dataset](http://www.image-net.org/) as a feature extractor. Below is a diagram of the VGGNet architecture, with a series of convolutional and maxpooling layers, then three fully-connected layers at the end that classify the 1000 classes found in the ImageNet database.VGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but **replace the final fully-connected layer** with our own classifier. This way we can use VGGNet as a _fixed feature extractor_ for our images then easily train a simple classifier on top of that. * Use all but the last fully-connected layer as a fixed feature extractor.* Define a new, final classification layer and apply it to a task of our choice!You can read more about transfer learning from [the CS231n Stanford course notes](http://cs231n.github.io/transfer-learning/).--- Flower powerHere we'll be using VGGNet to classify images of flowers. We'll start, as usual, by importing our usual resources. And checking if we can train our model on GPU. Download DataDownload the flower data from [this link](https://s3.amazonaws.com/video.udacity-data.com/topher/2018/September/5baa60a0_flower-photos/flower-photos.zip), save it in the home directory of this notebook and extract the zip file to get the directory `flower_photos/`. **Make sure the directory has this exact name for accessing data: flower_photos**.
###Code
import os
import numpy as np
import torch
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
%matplotlib inline
# check if CUDA is available
train_on_gpu = torch.cuda.is_available()
if not train_on_gpu:
print('CUDA is not available. Training on CPU ...')
else:
print('CUDA is available! Training on GPU ...')
###Output
CUDA is not available. Training on CPU ...
###Markdown
Load and Transform our DataWe'll be using PyTorch's [ImageFolder](https://pytorch.org/docs/stable/torchvision/datasets.htmlimagefolder) class which makes it very easy to load data from a directory. For example, the training images are all stored in a directory path that looks like this:```root/class_1/xxx.pngroot/class_1/xxy.pngroot/class_1/xxz.pngroot/class_2/123.pngroot/class_2/nsdf3.pngroot/class_2/asd932_.png```Where, in this case, the root folder for training is `flower_photos/train/` and the classes are the names of flower types.
###Code
# define training and test data directories
data_dir = 'flower_photos/'
train_dir = os.path.join(data_dir, 'train/')
test_dir = os.path.join(data_dir, 'test/')
# classes are folders in each directory with these names
classes = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
###Output
_____no_output_____
###Markdown
Transforming the DataWhen we perform transfer learning, we have to shape our input data into the shape that the pre-trained model expects. VGG16 expects `224`-dim square images as input and so, we resize each flower image to fit this mold.
###Code
# load and transform data using ImageFolder
# VGG-16 Takes 224x224 images as input, so we resize all of them
data_transform = transforms.Compose([transforms.RandomResizedCrop(224),
transforms.ToTensor()])
train_data = datasets.ImageFolder(train_dir, transform=data_transform)
test_data = datasets.ImageFolder(test_dir, transform=data_transform)
# print out some data stats
print('Num training images: ', len(train_data))
print('Num test images: ', len(test_data))
###Output
Num training images: 3130
Num test images: 540
###Markdown
DataLoaders and Data Visualization
###Code
# define dataloader parameters
batch_size = 20
num_workers=0
# prepare data loaders
train_loader = torch.utils.data.DataLoader(train_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=batch_size,
num_workers=num_workers, shuffle=True)
# Visualize some sample data
# obtain one batch of training images
dataiter = iter(train_loader)
images, labels = dataiter.next()
images = images.numpy() # convert images to numpy for display
# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
plt.imshow(np.transpose(images[idx], (1, 2, 0)))
ax.set_title(classes[labels[idx]])
###Output
_____no_output_____
###Markdown
--- Define the ModelTo define a model for training we'll follow these steps:1. Load in a pre-trained VGG16 model2. "Freeze" all the parameters, so the net acts as a fixed feature extractor 3. Remove the last layer4. Replace the last layer with a linear classifier of our own**Freezing simply means that the parameters in the pre-trained model will *not* change during training.**
###Code
# Load the pretrained model from pytorch
vgg16 = models.vgg16(pretrained=True)
# print out the model structure
#print(vgg16)
print(vgg16.classifier[6].in_features)
print(vgg16.classifier[6].out_features)
# Freeze training for all "features" layers
for param in vgg16.features.parameters():
param.requires_grad = False
###Output
_____no_output_____
###Markdown
--- Final Classifier LayerOnce you have the pre-trained feature extractor, you just need to modify and/or add to the final, fully-connected classifier layers. In this case, we suggest that you replace the last layer in the vgg classifier group of layers. > This layer should see as input the number of features produced by the portion of the network that you are not changing, and produce an appropriate number of outputs for the flower classification task.You can access any layer in a pretrained network by name and (sometimes) number, i.e. `vgg16.classifier[6]` is the last layer in the group of layers named "classifier". TODO: Replace the last fully-connected layer with one that produces the appropriate number of class scores.
###Code
from torch import nn, optim
vgg16.classifier[6] = nn.Linear(4096,5)
vgg16.classifier
## TODO: add a last linear layer that maps n_inputs -> 5 flower classes
## new layers automatically have requires_grad = True
# after completing your model, if GPU is available, move the model to GPU
if train_on_gpu:
vgg16.cuda()
###Output
_____no_output_____
###Markdown
Specify [Loss Function](http://pytorch.org/docs/stable/nn.htmlloss-functions) and [Optimizer](http://pytorch.org/docs/stable/optim.html)Below we'll use cross-entropy loss and stochastic gradient descent with a small learning rate. Note that the optimizer accepts as input _only_ the trainable parameters `vgg.classifier.parameters()`.
###Code
import torch.optim as optim
# specify loss function (categorical cross-entropy)
criterion = nn.CrossEntropyLoss()
# specify optimizer (stochastic gradient descent) and learning rate = 0.001
optimizer = optim.SGD(vgg16.classifier.parameters(), lr=0.001)
###Output
_____no_output_____
###Markdown
--- TrainingHere, we'll train the network.> **Exercise:** So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.
###Code
# number of epochs to train the model
n_epochs = 5
valid_loss_min = np.Inf
## TODO complete epoch and training batch loops
## These loops should update the classifier-weights of this model
## And track (and print out) the training loss over time
for e in range(n_epochs):
train_loss = 0
training_accuracy = 0
vgg16.train()
for images,labels in train_loader:
if train_on_gpu:
images,labels = images.cuda(),labels.cuda()
optimizer.zero_grad()
output = vgg16(images)
loss = criterion(output,labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
ps = torch.exp(output)
_, top_class = ps.topk(1,dim = 1)
equals = top_class == labels.view(*top_class.shape)
training_accuracy += torch.mean(equals.type(torch.FloatTensor))
print('Epoch: {} \tTraining Loss: {:.6f} \tTraining Accuracy: {:.6f}'.format(
e+1, train_loss/len(train_loader),training_accuracy/len(train_loader)))
###Output
Epoch: 1 Training Loss: 0.010992 Training Accuracy: 0.001274
Epoch: 1 Training Loss: 0.022572 Training Accuracy: 0.001911
Epoch: 1 Training Loss: 0.033703 Training Accuracy: 0.002866
Epoch: 1 Training Loss: 0.043895 Training Accuracy: 0.004459
Epoch: 1 Training Loss: 0.053228 Training Accuracy: 0.006688
Epoch: 1 Training Loss: 0.063686 Training Accuracy: 0.007643
Epoch: 1 Training Loss: 0.073803 Training Accuracy: 0.008917
Epoch: 1 Training Loss: 0.084359 Training Accuracy: 0.010510
Epoch: 1 Training Loss: 0.094299 Training Accuracy: 0.013376
Epoch: 1 Training Loss: 0.104727 Training Accuracy: 0.014650
Epoch: 1 Training Loss: 0.114119 Training Accuracy: 0.016561
Epoch: 1 Training Loss: 0.123342 Training Accuracy: 0.018790
Epoch: 1 Training Loss: 0.133252 Training Accuracy: 0.020701
Epoch: 1 Training Loss: 0.143096 Training Accuracy: 0.021975
Epoch: 1 Training Loss: 0.153130 Training Accuracy: 0.023248
Epoch: 1 Training Loss: 0.163294 Training Accuracy: 0.024841
Epoch: 1 Training Loss: 0.173527 Training Accuracy: 0.025796
Epoch: 1 Training Loss: 0.183290 Training Accuracy: 0.028344
Epoch: 1 Training Loss: 0.192647 Training Accuracy: 0.030573
Epoch: 1 Training Loss: 0.202000 Training Accuracy: 0.033439
Epoch: 1 Training Loss: 0.211445 Training Accuracy: 0.036306
Epoch: 1 Training Loss: 0.220078 Training Accuracy: 0.039172
Epoch: 1 Training Loss: 0.229055 Training Accuracy: 0.041720
Epoch: 1 Training Loss: 0.239193 Training Accuracy: 0.043312
Epoch: 1 Training Loss: 0.247206 Training Accuracy: 0.047452
Epoch: 1 Training Loss: 0.256199 Training Accuracy: 0.050000
Epoch: 1 Training Loss: 0.265263 Training Accuracy: 0.052229
Epoch: 1 Training Loss: 0.274348 Training Accuracy: 0.054459
Epoch: 1 Training Loss: 0.282661 Training Accuracy: 0.057325
Epoch: 1 Training Loss: 0.291916 Training Accuracy: 0.059873
Epoch: 1 Training Loss: 0.300713 Training Accuracy: 0.063057
Epoch: 1 Training Loss: 0.310125 Training Accuracy: 0.065287
Epoch: 1 Training Loss: 0.318582 Training Accuracy: 0.068790
Epoch: 1 Training Loss: 0.326081 Training Accuracy: 0.073248
Epoch: 1 Training Loss: 0.335219 Training Accuracy: 0.075478
Epoch: 1 Training Loss: 0.343426 Training Accuracy: 0.078981
Epoch: 1 Training Loss: 0.352420 Training Accuracy: 0.081847
Epoch: 1 Training Loss: 0.360302 Training Accuracy: 0.085987
Epoch: 1 Training Loss: 0.367779 Training Accuracy: 0.090127
Epoch: 1 Training Loss: 0.375542 Training Accuracy: 0.093631
Epoch: 1 Training Loss: 0.384594 Training Accuracy: 0.096815
Epoch: 1 Training Loss: 0.392012 Training Accuracy: 0.100318
Epoch: 1 Training Loss: 0.400713 Training Accuracy: 0.103185
Epoch: 1 Training Loss: 0.409764 Training Accuracy: 0.106369
Epoch: 1 Training Loss: 0.418772 Training Accuracy: 0.108599
Epoch: 1 Training Loss: 0.427288 Training Accuracy: 0.111783
Epoch: 1 Training Loss: 0.436025 Training Accuracy: 0.114650
Epoch: 1 Training Loss: 0.443359 Training Accuracy: 0.119108
Epoch: 1 Training Loss: 0.451425 Training Accuracy: 0.123248
Epoch: 1 Training Loss: 0.459231 Training Accuracy: 0.126752
Epoch: 1 Training Loss: 0.467581 Training Accuracy: 0.130255
Epoch: 1 Training Loss: 0.474981 Training Accuracy: 0.133758
Epoch: 1 Training Loss: 0.482578 Training Accuracy: 0.136943
Epoch: 1 Training Loss: 0.489895 Training Accuracy: 0.141401
Epoch: 1 Training Loss: 0.497042 Training Accuracy: 0.144904
Epoch: 1 Training Loss: 0.504554 Training Accuracy: 0.148726
Epoch: 1 Training Loss: 0.512032 Training Accuracy: 0.151911
Epoch: 1 Training Loss: 0.520945 Training Accuracy: 0.153822
Epoch: 1 Training Loss: 0.529178 Training Accuracy: 0.157643
Epoch: 1 Training Loss: 0.536837 Training Accuracy: 0.160828
Epoch: 1 Training Loss: 0.544087 Training Accuracy: 0.164968
Epoch: 1 Training Loss: 0.550836 Training Accuracy: 0.169108
Epoch: 1 Training Loss: 0.557790 Training Accuracy: 0.172930
Epoch: 1 Training Loss: 0.564609 Training Accuracy: 0.177070
Epoch: 1 Training Loss: 0.571703 Training Accuracy: 0.180892
Epoch: 1 Training Loss: 0.578219 Training Accuracy: 0.185350
Epoch: 1 Training Loss: 0.584710 Training Accuracy: 0.190127
Epoch: 1 Training Loss: 0.591994 Training Accuracy: 0.194904
Epoch: 1 Training Loss: 0.600234 Training Accuracy: 0.197771
Epoch: 1 Training Loss: 0.607892 Training Accuracy: 0.201592
Epoch: 1 Training Loss: 0.615494 Training Accuracy: 0.206051
Epoch: 1 Training Loss: 0.622146 Training Accuracy: 0.210828
Epoch: 1 Training Loss: 0.629297 Training Accuracy: 0.214968
Epoch: 1 Training Loss: 0.637209 Training Accuracy: 0.218153
Epoch: 1 Training Loss: 0.644274 Training Accuracy: 0.221656
Epoch: 1 Training Loss: 0.651379 Training Accuracy: 0.225159
Epoch: 1 Training Loss: 0.657993 Training Accuracy: 0.229299
Epoch: 1 Training Loss: 0.664845 Training Accuracy: 0.232803
Epoch: 1 Training Loss: 0.672333 Training Accuracy: 0.236306
Epoch: 1 Training Loss: 0.679475 Training Accuracy: 0.240127
Epoch: 1 Training Loss: 0.687031 Training Accuracy: 0.243631
Epoch: 1 Training Loss: 0.694576 Training Accuracy: 0.247134
Epoch: 1 Training Loss: 0.701603 Training Accuracy: 0.250318
Epoch: 1 Training Loss: 0.708420 Training Accuracy: 0.254459
Epoch: 1 Training Loss: 0.715586 Training Accuracy: 0.258917
Epoch: 1 Training Loss: 0.721541 Training Accuracy: 0.263694
Epoch: 1 Training Loss: 0.727872 Training Accuracy: 0.268471
Epoch: 1 Training Loss: 0.734587 Training Accuracy: 0.273885
Epoch: 1 Training Loss: 0.742660 Training Accuracy: 0.277389
Epoch: 1 Training Loss: 0.749388 Training Accuracy: 0.281210
Epoch: 1 Training Loss: 0.756357 Training Accuracy: 0.285350
Epoch: 1 Training Loss: 0.762663 Training Accuracy: 0.289490
Epoch: 1 Training Loss: 0.768440 Training Accuracy: 0.293949
Epoch: 1 Training Loss: 0.776159 Training Accuracy: 0.296815
Epoch: 1 Training Loss: 0.781986 Training Accuracy: 0.300955
Epoch: 1 Training Loss: 0.787261 Training Accuracy: 0.306051
Epoch: 1 Training Loss: 0.793149 Training Accuracy: 0.310828
Epoch: 1 Training Loss: 0.799872 Training Accuracy: 0.314968
Epoch: 1 Training Loss: 0.806124 Training Accuracy: 0.318790
Epoch: 1 Training Loss: 0.814310 Training Accuracy: 0.321019
Epoch: 1 Training Loss: 0.820382 Training Accuracy: 0.324522
Epoch: 1 Training Loss: 0.826406 Training Accuracy: 0.329618
Epoch: 1 Training Loss: 0.835272 Training Accuracy: 0.332166
Epoch: 1 Training Loss: 0.841254 Training Accuracy: 0.336624
Epoch: 1 Training Loss: 0.847040 Training Accuracy: 0.341083
Epoch: 1 Training Loss: 0.853783 Training Accuracy: 0.345223
Epoch: 1 Training Loss: 0.860686 Training Accuracy: 0.349363
Epoch: 1 Training Loss: 0.867563 Training Accuracy: 0.352866
Epoch: 1 Training Loss: 0.873818 Training Accuracy: 0.357962
Epoch: 1 Training Loss: 0.880760 Training Accuracy: 0.361465
Epoch: 1 Training Loss: 0.886095 Training Accuracy: 0.366879
Epoch: 1 Training Loss: 0.892193 Training Accuracy: 0.371019
Epoch: 1 Training Loss: 0.899081 Training Accuracy: 0.374522
Epoch: 1 Training Loss: 0.905655 Training Accuracy: 0.378662
Epoch: 1 Training Loss: 0.913836 Training Accuracy: 0.381529
Epoch: 1 Training Loss: 0.920741 Training Accuracy: 0.385032
Epoch: 1 Training Loss: 0.926572 Training Accuracy: 0.389490
Epoch: 1 Training Loss: 0.931799 Training Accuracy: 0.394586
Epoch: 1 Training Loss: 0.938359 Training Accuracy: 0.399045
Epoch: 1 Training Loss: 0.945978 Training Accuracy: 0.402229
Epoch: 1 Training Loss: 0.951723 Training Accuracy: 0.407325
Epoch: 1 Training Loss: 0.958174 Training Accuracy: 0.411147
Epoch: 1 Training Loss: 0.965175 Training Accuracy: 0.415287
Epoch: 1 Training Loss: 0.970511 Training Accuracy: 0.420701
Epoch: 1 Training Loss: 0.976864 Training Accuracy: 0.425159
Epoch: 1 Training Loss: 0.982340 Training Accuracy: 0.429936
Epoch: 1 Training Loss: 0.987659 Training Accuracy: 0.434713
Epoch: 1 Training Loss: 0.994179 Training Accuracy: 0.438535
Epoch: 1 Training Loss: 1.000576 Training Accuracy: 0.442038
Epoch: 1 Training Loss: 1.008230 Training Accuracy: 0.445223
###Markdown
--- TestingBelow you see the test accuracy for each flower class.
###Code
# track test loss
# over 5 flower classes
test_loss = 0.0
class_correct = list(0. for i in range(5))
class_total = list(0. for i in range(5))
vgg16.eval() # eval mode
# iterate over test data
for data, target in test_loader:
# move tensors to GPU if CUDA is available
if train_on_gpu:
data, target = data.cuda(), target.cuda()
# forward pass: compute predicted outputs by passing inputs to the model
output = vgg16(data)
# calculate the batch loss
loss = criterion(output, target)
# update test loss
test_loss += loss.item()*data.size(0)
# convert output probabilities to predicted class
_, pred = torch.max(output, 1)
# compare predictions to true label
correct_tensor = pred.eq(target.data.view_as(pred))
correct = np.squeeze(correct_tensor.numpy()) if not train_on_gpu else np.squeeze(correct_tensor.cpu().numpy())
# calculate test accuracy for each object class
for i in range(batch_size):
label = target.data[i]
class_correct[label] += correct[i].item()
class_total[label] += 1
# calculate avg test loss
test_loss = test_loss/len(test_loader.dataset)
print('Test Loss: {:.6f}\n'.format(test_loss))
for i in range(5):
if class_total[i] > 0:
print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
classes[i], 100 * class_correct[i] / class_total[i],
np.sum(class_correct[i]), np.sum(class_total[i])))
else:
print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))
print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
100. * np.sum(class_correct) / np.sum(class_total),
np.sum(class_correct), np.sum(class_total)))
###Output
_____no_output_____
###Markdown
Visualize Sample Test Results
###Code
# obtain one batch of test images
dataiter = iter(test_loader)
images, labels = dataiter.next()
images.numpy()
# move model inputs to cuda, if GPU available
if train_on_gpu:
images = images.cuda()
# get sample outputs
output = vgg16(images)
# convert output probabilities to predicted class
_, preds_tensor = torch.max(output, 1)
preds = np.squeeze(preds_tensor.numpy()) if not train_on_gpu else np.squeeze(preds_tensor.cpu().numpy())
# plot the images in the batch, along with predicted and true labels
fig = plt.figure(figsize=(25, 4))
for idx in np.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
plt.imshow(np.transpose(images[idx], (1, 2, 0)))
ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
color=("green" if preds[idx]==labels[idx].item() else "red"))
###Output
_____no_output_____ |
ChannelFlows/DiskActuator/SensitivityAnalysis.ipynb | ###Markdown
###Code
import numpy as np
import matplotlib.pyplot as mpl
import pathlib
if not pathlib.Path("mpl_utils.py").exists():
!curl -O https://raw.githubusercontent.com/joaochenriques/MCTE_2022/main/libs/mpl_utils.py &> /dev/null
import mpl_utils as mut
mut.config_plots()
%config InlineBackend.figure_formats = ['svg']
mpl.rcParams["figure.figsize"] = (14, 3.5) # (12.5,3)
###Output
_____no_output_____
###Markdown
###Code
fig, (ax1, ax2, ax3) = mpl.subplots(1,3 )
fig.subplots_adjust( wspace = .22 )
C_T_lst = []
C_P_lst = []
Fr2t_lst = []
Fr4b_lst = []
Fr4t_lst = []
Fr4b_zeta4b_lst = []
Fr4t_zeta4t_lst = []
B = 0.1
Fr1 = 0.1
for Fr4b in np.linspace( Fr1*1.001, Fr1*2.6, 1000 ):
ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1
C1 = Fr1 - Fr4b*ζ4
C2 = B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \
+ B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2
Fr4t = ( C1 + np.sqrt(C2) ) / B
ζ4t = ( Fr4b*ζ4 - Fr1 ) / ( Fr4b - Fr4t )
ζ4b = ζ4 - ζ4t
Fr2t = Fr4t*ζ4t/B
C_T = (Fr4b**2 - Fr4t**2)/Fr1**2
C_P = C_T*Fr2t/Fr1
if C_P <= 0.0: break
C_P_lst.append( C_P )
C_T_lst.append( C_T )
Fr2t_lst.append( Fr2t )
Fr4t_lst.append( Fr4t )
Fr4b_lst.append( Fr4b )
Fr4b_zeta4b_lst.append( Fr4b*ζ4b / Fr1 )
Fr4t_zeta4t_lst.append( Fr4t*ζ4t / Fr1 )
ax1.set_title( "B=%.2f" % B )
ax1.plot( C_T_lst, C_P_lst, label="$\mathrm{Fr}_1=%.2f$" % Fr1 )
ax1.set_ylabel( "$C_P$" )
ax1.grid()
ax1.legend(loc="lower center");
#ax1.set_xticklabels( [] )
ax1.set_xlabel( "$C_T$" )
ax2.set_title( "B=%.2f" % B )
ax2.plot( C_T_lst, Fr4b_zeta4b_lst, label="$Q_{4b}/Q_1$" )
ax2.plot( C_T_lst, Fr4t_zeta4t_lst, 'r', dashes=(6,2), label="$Q_{4t}/Q_1$" )
ax2.set_xlabel( "$C_T$" )
ax2.set_ylabel( "$Q_i/Q_1$" )
ax2.grid()
ax2.legend(loc="center right")
ax3.set_title( "B=%.2f" % B )
ax3.plot( C_T_lst, Fr2t_lst, dashes=(6,2,2,2), label="$\mathrm{Fr}_{2t}$" )
ax3.plot( C_T_lst, Fr4t_lst, label="$\mathrm{Fr}_{4t}$" )
ax3.plot( C_T_lst, Fr4b_lst, dashes=(6,2), label="$\mathrm{Fr}_{4b}$" )
ax3.set_xlabel( "$C_T$" )
ax3.set_ylabel( "$\mathrm{Fr}$" )
ax3.grid()
ax3.legend(loc="center right")
mpl.savefig('ChannelFlowLimits_Ex.pdf', bbox_inches='tight', pad_inches=0.02);
fig, (ax1, ax2, ax3) = mpl.subplots(1,3)
fig.subplots_adjust( wspace = 0.22 )
B = 0.05
for Fr1 in ( 0.05, 0.10, 0.20, 0.3, 0.4 ):
C_T_lst = []
C_P_lst = []
Fr2t_lst = []
Fr4b_lst = []
Fr4t_lst = []
Fr4b_zeta4b_lst = []
Fr4t_zeta4t_lst = []
for Fr4b in np.linspace( Fr1*1.001, Fr1*2, 200 ):
ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1
C1 = Fr1 - Fr4b*ζ4
C2 = B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \
+ B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2
Fr4t = ( C1 + np.sqrt(C2) ) / B
ζ4t = ( Fr4b*ζ4 - Fr1 ) / ( Fr4b - Fr4t )
ζ4b = ζ4 - ζ4t
Fr2t = Fr4t*ζ4t/B
C_T = (Fr4b**2 - Fr4t**2)/Fr1**2
C_P = C_T*Fr2t/Fr1
if C_P <= 0.0: break
C_P_lst.append( C_P )
C_T_lst.append( C_T )
Fr2t_lst.append( Fr2t )
Fr4t_lst.append( Fr4t )
Fr4b_lst.append( Fr4b )
Fr4b_zeta4b_lst.append( Fr4b*ζ4b / Fr1 )
Fr4t_zeta4t_lst.append( Fr4t*ζ4t / Fr1 )
ax1.plot( C_T_lst, C_P_lst, label="$\mathrm{Fr}_1=%.2f$" % Fr1 )
ax2.plot( C_T_lst, Fr4b_zeta4b_lst, label="$Q_{4b}/Q_1, \mathrm{Fr}_1=%.2f$" % Fr1 )
ax2.plot( C_T_lst, Fr4t_zeta4t_lst, dashes=(5,1), label="$Q_{4t}/Q_1, \mathrm{Fr}_1=%.2f$" % Fr1 )
ax3.plot( C_T_lst, Fr2t_lst, dashes=(2,1), label="$\mathrm{Fr}_{2t}, \mathrm{Fr}_1=%.2f$" % Fr1 )
ax3.plot( C_T_lst, Fr4t_lst, label="$\mathrm{Fr}_{4t}, \mathrm{Fr}_1=%.2f$" % Fr1 )
ax3.plot( C_T_lst, Fr4b_lst, dashes=(5,1), label="$\mathrm{Fr}_{4b}, \mathrm{Fr}_1=%.2f$" % Fr1 )
ax1.set_ylabel( "$C_P$" )
ax1.grid()
ax1.set_title( "B=%.2f" % B );
ax1.set_xlabel( "$C_T$" )
ax1.legend(loc="center left",fontsize=8 )
ax2.set_title( "B=%.2f" % B );
ax2.set_xlabel( "$C_T$" )
ax2.set_ylabel( "$Q_i/Q_1$" )
ax2.grid()
ax2.legend(loc="center right",fontsize=8 )
ax3.set_title( "B=%.2f" % B )
ax3.set_xlabel( "$C_T$" )
ax3.set_ylabel( "$\mathrm{Fr}$" )
ax3.grid()
ax3.legend( bbox_to_anchor=(1.05, 1), loc=2,fontsize=8, borderaxespad=0.0,handlelength=2,numpoints=1,labelspacing=0.15 )
mpl.savefig('Sensitivity_B%4.2f.pdf' % B, bbox_inches='tight', pad_inches=0.02);
fig, (ax1, ax2, ax3) = mpl.subplots(1,3)
fig.subplots_adjust( wspace = 0.22 )
Fr1 = 0.15
for B in ( 0.05, 0.10, 0.15 ):
C_T_lst = []
C_P_lst = []
Fr2t_lst = []
Fr4b_lst = []
Fr4t_lst = []
Fr4b_zeta4b_lst = []
Fr4t_zeta4t_lst = []
for Fr4b in np.linspace( Fr1*1.001, Fr1*2, 200 ):
ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1
C1 = Fr1 - Fr4b*ζ4
C2 = B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \
+ B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2
Fr4t = ( C1 + np.sqrt(C2) ) / B
ζ4t = ( Fr4b*ζ4 - Fr1 ) / ( Fr4b - Fr4t )
ζ4b = ζ4 - ζ4t
Fr2t = Fr4t*ζ4t/B
C_T = (Fr4b**2 - Fr4t**2)/Fr1**2
C_P = C_T*Fr2t/Fr1
if C_P <= 0.0: break
C_P_lst.append( C_P )
C_T_lst.append( C_T )
Fr2t_lst.append( Fr2t )
Fr4t_lst.append( Fr4t )
Fr4b_lst.append( Fr4b )
Fr4b_zeta4b_lst.append( Fr4b*ζ4b / Fr1 )
Fr4t_zeta4t_lst.append( Fr4t*ζ4t / Fr1 )
ax1.plot( C_T_lst, C_P_lst, label="B=%.2f" % B )
ax2.plot( C_T_lst, Fr4b_zeta4b_lst, label="$Q_{4b}/Q_1, B=%.2f$" % B )
ax2.plot( C_T_lst, Fr4t_zeta4t_lst, dashes=(5,1), label="$Q_{4t}/Q_1, B=%.2f$" % B )
ax3.plot( C_T_lst, Fr2t_lst, dashes=(2,1), label="$\mathrm{Fr}_{2t}, B=%.2f$" % B )
ax3.plot( C_T_lst, Fr4t_lst, label="$\mathrm{Fr}_{4t}, B=%.2f$" % B )
ax3.plot( C_T_lst, Fr4b_lst, dashes=(5,1), label="$\mathrm{Fr}_{4b}, B=%.2f$" % B )
ax1.set_title( r"$\mathrm{Fr}_1=%.2f$" % Fr1 )
ax1.set_ylabel( "$C_P$" )
ax1.set_xlabel( "$C_T$" )
ax1.grid()
ax1.legend(loc="upper right",fontsize=8 )
ax2.set_title( r"$\mathrm{Fr}_1=%.2f$" % Fr1 )
ax2.set_xlabel( "$C_T$" )
ax2.set_ylabel( "$Q_i/Q_1$" )
ax2.grid()
ax2.legend(loc="center right",fontsize=8 )
ax3.set_title( r"$\mathrm{Fr}_1=%.2f$" % Fr1 )
ax3.set_xlabel( "$C_T$" )
ax3.set_ylabel( "$\mathrm{Fr}$" )
ax3.grid()
ax3.legend( bbox_to_anchor=(1.05, 1), loc=2,fontsize=8, borderaxespad=0.0,handlelength=2,numpoints=1,labelspacing=0.15 )
mpl.savefig('Sensitivity_Fr%4.2f.pdf' % Fr1, bbox_inches='tight', pad_inches=0.02);
def CardanoRoots( aa, bb ):
# Cardano algorithm to solve our polynomial, see:
# https://www.shsu.edu/kws006/professional/Concepts_files/SolvingCubics.pdf
P = -2.0*aa
Q = -2.0*bb
Δ = (P/3.0)**3 + (Q/2)**2
if Δ < 0.0: Δ = Δ + 0J
β = ( -Q/2.0 - np.sqrt(Δ) )**(1.0/3.0)
α = P/(3.0*β)
ω = ( -1.0 + np.sqrt(3.0)*1J) / 2.0
x1 = α - β
x2 = (α*ω - β)*ω
x3 = (α - β*ω)*ω
if np.imag(x1) < 1E-15: x1 = np.real( x1 )
if np.imag(x2) < 1E-15: x2 = np.real( x2 )
if np.imag(x3) < 1E-15: x3 = np.real( x3 )
# applies only for this solution
assert( np.imag( x1 ) == 0 )
assert( np.imag( x2 ) == 0 )
assert( np.imag( x3 ) == 0 )
assert( x1 <= 0.0 )
assert( x2 <= x3 )
return (x2, x3)
Fr1 = 0.3
Fr4b = Fr1*1.5
B = 0.6
ζ4 = (1/2.)*Fr1**2 - 1/2.*Fr4b**2 + 1
C1 = Fr1 - Fr4b*ζ4
C2 = B**2*Fr4b**2 - 2*B*Fr1**2 + 2*B*Fr1*Fr4b \
+ B*ζ4**2 - B + Fr1**2 - 2*Fr1*Fr4b*ζ4 + Fr4b**2*ζ4**2
Fr4t = ( C1 + np.sqrt(C2) ) / B
ζ4t = ( Fr4b*ζ4 - Fr1 ) / ( Fr4b - Fr4t )
ζ4b = ζ4 - ζ4t
Fr2t = Fr4t*ζ4t/B
C_T = (Fr4b**2 - Fr4t**2)/Fr1**2
C_P = C_T*Fr2t/Fr1
mb = Fr4b*ζ4b + Fr4t*ζ4t
bb = mb**2
aa = (Fr4b**2*ζ4b + Fr4t**2*ζ4t + 1/2*ζ4**2)
ζs = CardanoRoots( aa, bb )
ζ5 = ζs[1]
Fr5 = mb / ζ5
ζ4, ζ5, Fr5
###Output
_____no_output_____ |
external_nbs/poker_hand_induction_fastai_x_tabnet_final.ipynb | ###Markdown
Poker Hand Induction Datasethttps://www.kaggle.com/c/poker-rule-induction/dataFastai v2 best result: 99,48% (I suspect it could improve, because the validation error kept slowly decreasing, but I did not have more time to keep training)TabNet best result: 99,10% (I also suspect it could get better)The hardest part for fastai was setting the hyperparameters (number of layers, number of neurons in each layer, and dropouts); for TabNet, the same thing (n_d, n_a, n_steps).Crucial: treat the ranks also as categories, not only as continuous featuresBased on the work of:https://github.com/muellerzr/fastai2-Tabular-Baselinesandhttps://github.com/mgrankin/fast_tabnet
###Code
from fastai2.basics import *
from fastai2.tabular.all import *
from fast_tabnet.core import *
df = pd.read_csv('/media/hdd3tb/data/kaggle_pokerhand/train.csv')
df.head()
for i in range(1,6):
df['cC'+str(i)]=df['C'+str(i)]
cat_vars = ['S1', 'S2', 'S3', 'S4', 'S5','cC1', 'cC2', 'cC3', 'cC4', 'cC5']
cont_vars = ['C1', 'C2', 'C3', 'C4', 'C5']
dep_var = ['hand']
procs = [Categorify, Normalize]
splits = RandomSplitter()(range_of(df))
splits
to = TabularPandas(df, procs, cat_names=cat_vars, cont_names=cont_vars, y_names=dep_var, splits=splits)
to
dls = to.dataloaders(bs=512)
dls.show_batch()
class LabelSmoothingCrossEntropyFlat(BaseLoss):
"Same as `nn.CrossEntropyLoss`, but flattens input and target."
y_int = True
def __init__(self, *args, axis=-1, **kwargs): super().__init__(LabelSmoothingCrossEntropy, *args, axis=axis, **kwargs)
def decodes(self, x): return x.argmax(dim=self.axis)
def activation(self, x): return F.softmax(x, dim=self.axis)
learn = tabular_learner(dls, layers=[100, 50, 50], ps=[0.01, 0.01, 0.02], metrics=accuracy, opt_func=ranger, loss_func=CrossEntropyLossFlat(),n_out=10)
learn.model
learn.lr_find()
learn.fit_flat_cos(10, 5e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9922')
###Output
_____no_output_____
###Markdown
99,22%
###Code
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9932')
###Output
_____no_output_____
###Markdown
99,32%
###Code
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9934')
###Output
_____no_output_____
###Markdown
99,35%
###Code
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9940')
###Output
_____no_output_____
###Markdown
99,40%
###Code
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9944')
###Output
_____no_output_____
###Markdown
99,44%
###Code
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('round2_9948')
###Output
_____no_output_____
###Markdown
99,48% TabNet
###Code
emb_szs = get_emb_sz(to); print(emb_szs)
model = TabNetModel(emb_szs, len(to.cont_names), 10, n_d=16, n_a=8, n_steps=1);
opt_func = partial(Adam, wd=0.01, eps=1e-5)
learn = Learner(dls, model, loss_func=CrossEntropyLossFlat(), opt_func=ranger, lr=3e-2, metrics=[accuracy])
learn.lr_find()
learn.fit_flat_cos(10, 5e-1)
learn.fit_flat_cos(10, 3e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.save('best_97')
learn.load('best_97')
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-2)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 1e-1)
learn.fit_flat_cos(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
learn.fit_one_cycle(10, 5e-2)
###Output
_____no_output_____ |
Lab1/Lab_01_Pandas_Matplotlib.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount("/content/drive")
# import modules
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# reading data
main_data = pd.read_csv('/content/drive/MyDrive/ML_Labs/Lab1/Data_for_Transformation.csv')
# printing head and scatter plot
print(main_data.head())
plt.scatter(main_data["Age"], main_data["Salary"])
plt.show()
# 2nd plot
plt.hist(main_data["Salary"], bins = 10, color = "yellow")
plt.show()
# 3rd plot
fig_size = plt.figure(figsize=(7, 5))
plt.bar(main_data["Country"], main_data["Salary"], color="green")
plt.xlabel("Salaries")
plt.ylabel("Countries")
plt.title("Bar chart of country vs salary")
plt.show()
###Output
_____no_output_____ |
05_root_finding_optimization.ipynb | ###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Root Finding and Optimization**GOAL:** Find where $f(x) = 0$. Example: Future Time AnnuityWhen can I retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$$A$ total value after $n$ years$P$ is payment amount per compounding period$m$ number of compounding periods per year$r$ annual interest rate$n$ number of years to retirement If I want to retire in 20 years what does the annual interest rate $r$ need to be?Set $P = \frac{\$18,000}{12} = \$1500, ~~~~ m=12, ~~~~ n=20$. Code demo...
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, n))
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("A (total value)")
axes.set_title("When can I retire?")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Code demo...
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("$g(r)$")
axes.set_title("When can I retire?")
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800$, $g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850$, $g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875$, $g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:```pythonr = 0.09for steps in xrange(10): print "r = ", r print "Difference = ", g(P, m, r, n, goal) - r r = g(P, m, r, n, goal) print``` Code demo...
###Code
r = 0.09
for steps in xrange(10):
print "r = ", r
print "Difference = ", g(P, m, r, n, goal) - r
r = g(P, m, r, n, goal)
print
###Output
_____no_output_____
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$ Code demo...
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
x = 0.4
for steps in xrange(3):
print "x = ", x
print "Residual = ", numpy.exp(-x) - x
x = numpy.exp(-x)
print
# axes.plot(x, numpy.exp(-x),'kx')
axes.text(x, numpy.exp(-x), steps+1, fontsize="15")
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$. Code demo...
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim([0.0, 1.5])
x = 0.5
for steps in xrange(3):
print "x = ", x
print "Residual = ", numpy.log(x) + x
x = -numpy.log(x)
print
axes.plot(x, -numpy.log(x),'o',)
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point Iteration*Theorem*: Existence and uniqueness of fixed point problemsAssume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b]~~~ \forall~~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$. Code Demo...
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
Additionally, suppose $g'(x)$ is defined for $x \in [a,b]$ and $\exists K < 1$ s.t. $|g'(x)| \leq K < 1 ~~~ \forall ~~~ x \in (a,b)$, then $g$ has a unique fixed point $P \in [a,b]$ Code demo...
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim((0.0, 1.1))
plt.show()
###Output
_____no_output_____
###Markdown
*Theorem 2*: Asymptotic convergence behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that $\exists ~ x^*$ s.t. $x^* = g(x^*)$, in other words we converge to the solution.$$x_k = x^* + e_k ~~~~~~~~~~~~~~ x_{k+1} = x^* + e_{k+1}$$$$x^* + e_{k+1} = g(x^* + e_k)$$ Using a Taylor expansion we know$$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$$$x^* + e_{k+1} = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence (although somewhat arbitrarily fast). Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited$g(x) = e^{-x}$ with $x^* \approx 0.56$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ $g(x) = - \ln x$ with $x^* \approx 0.56$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ $g(r) = \frac{m P}{A} ((1 + \frac{r}{m})^{mn} - 1)$ with $r^* \approx 0.09$$$|g'(r^*)| = \frac{P m n}{A} \left(1 + \frac{r}{m} \right)^{m n - 1} \approx 2.15$$ Small code demo...
###Code
import sympy
m, P, A, r, n = sympy.symbols('m, P, A, r, n')
(m * P / A * ((1 + r / m)**(m * n) - 1)).diff(r)
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ s.t. it contains exactly one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $\text{sign}(f(a)) \neq \text{sign}(f(b))$. In the case of minima or maxima we need $f'(a)$ and $f'(b)$ to be opposite. **Theorem**: If $f(x) \in C[a,b]$ and $\text{sign}(f(a)) \neq \text{sign}(f(b))$ then there exists a number $c \in (a,b)$ s.t. $f(c) = 0$. (proof uses intermediate value theorem) Code demo...
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Once we are given a bracket what ways could we "shrink" the bracket so that the end points were closer and closer to the true solution? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went Code demo...
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
# Algorithm parameters
TOLERANCE = 1e-4
MAX_STEPS = 2
# Initialize loop
f_a = f(a)
f_b = f(b)
delta_x = b - a
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(MAX_STEPS):
c = a + delta_x / 2.0
f_c = f(c)
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
# Plot iteration
axes.text(c, f(c), str(step + 1), fontsize="15")
# Check tolerance - Could also check the size of delta_x
if numpy.abs(f_c) < TOLERANCE:
break
if step == MAX_STEPS - 1:
    print("Reached maximum number of steps!")
else:
    print("Success!")
    print("  x* = %s" % c)
    print("  f(x*) = %s" % f(c))
    print("  number of steps = %s" % step)
###Output
_____no_output_____
###Markdown
Convergence of Bisection$$|e_{k+1}| = C |e_k|^n$$$$e_k \approx \Delta x_k$$$$e_{k+1} \approx \frac{1}{2} \Delta x_k$$$$|e_{k+1}| = \frac{1}{2} |e_k| ~~~~ \Rightarrow \text{Linear convergence}$$ Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point **Basic Idea**: Given $f(x)$ and $f'(x)$ use a linear approximation to $f(x)$ "locally" and use x-intercept of the resulting line to predict where $x^*$ might be. Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ Code demo...
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What does the algorithm look like for Newton-Raphson? Algorithm1. Initialize $x_k$1. Begin loop 1. Compute $f(x_k)$ and $f'(x_k)$ 1. Use these to compute new $x_{k+1}$ 1. Check stopping criteria
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 2
TOLERANCE = 1e-4
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.text(x_k, f(x_k), str(n), fontsize="15")
x_k = x_k - f(x_k) / f_prime(x_k)
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
    print("Reached maximum number of steps!")
else:
    print("Success!")
    print("  x* = %s" % x_k)
    print("  f(x*) = %s" % f(x_k))
    print("  number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Example:$$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ Asymptotic Convergence of Newton's MethodFor a simple root (multiplicity one) - Let $g(x) = x - \frac{f(x)}{f'(x)}$, then$$x_{k+1} = g(x_k)$$ Definitions of errors and iteration:$$x_{k+1} = x^* + e_{k+1} ~~~~~ x_k = x^* + e_k$$General Taylor expansion:$$x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition. Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$which leads to $$ |e_{k+1}| = \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent, where the constant is controlled by the second derivative. For a multiple root (e.g. $f(x) = (x-1)^2$) the situation is, unfortunately, not as rosy. Why might this be? Example:$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
plt.show()
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
x_kp = lambda x: x - 1.0 / (2.0 * numpy.pi) * numpy.tan(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence and not quite quadratic as the exponent on the convergence is $\approx 1.7$. Alternative interpretation, fit a line through two points and see where they intersect the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1})$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axies ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Code demo...
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, given bracket $[a,b]$, a `TOLERANCE`, and a `MAX_STEPS` (note we need two points to start).1. Initialize $x_1 = a$, $x_2 = b$, $f_1 = f(x_1)$, and $f_2 = f(x_2)$2. Loop until either `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $x_{k+1}$ by update formula 2. Check for convergence and break if reached 3. Update parameters $x_1$, $x_2$, $f_1 = f(x_1)$ and $f_2(x_2)$ Code demo...
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 5
TOLERANCE = 1e-4
# Initial guess
x_k = 0.07
x_km = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
# axes.plot(x_k, f(x_k), 'o')
axes.text(x_k, f(x_k), n, fontsize="15")
x_kp = x_k - f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km))
x_km = x_k
x_k = x_kp
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
    print("Reached maximum number of steps!")
else:
    print("Success!")
    print("  x* = %s" % x_k)
    print("  f(x*) = %s" % f(x_k))
    print("  number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Do not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Bracketing Algorithms: Golden-Section Search (linear) - Interpolation Algorithms: Repeated parabolic interpolation - Hybrid Algorithms Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit more tricky in this case. In particular choosing your initial bracket is important! Basic IdeaWe start with three points, say $x_0$, $x_1$, and $x_3$. We assume that $[x_0,x_3]$ brackets a minimum and that $x_1$ is somewhere inside of this bracket. Now we want to pick another point $x_2$ that lives between $x_1$ and $x_3$.If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$.If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
###Code
f = lambda x: x**2
search_points = [-1.0, -0.75, 0.5, 1.0]
# search_points = [-1.0, -0.75, -0.2, 0.1]
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for point in search_points:
axes.plot(point, f(point),'ok')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
plt.show()
###Output
_____no_output_____
###Markdown
Golden Section Search - Picking IntervalsDefine a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separate $[x_0,x_3]$ into two new overlapping brackets.Define $x_1-x_0 = a$, $x_3 - x_1 = b$, $x_2 - x_1 = c$, then for **Golden Section Search** we require: - $a + c = b$. - Distances between subsequent triples are proportional. The first rule implies:$$\begin{aligned} a + c &= b \\ x_1 - x_0 + x_2 - x_1 &= x_3 - x_1 \\ x_2 - x_0 &= x_3 - x_1.\end{aligned}$$Assume that this allows us to pick $x_2$ (we have already figured out how to choose $x_1$). We then know$$ x_2 = x_0 + x_3 - x_1.$$ Subsequent proportionality means that we are attempting to always shrink the bracket we are looking at. This implies we must consider how we choose the new triplet based on whether $f(x_1) < f(x_2)$ or $f(x_1) > f(x_2)$.If $f(x_1) < f(x_2)$ then we choose $(x_0, x_1, x_2)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{a}$$If $f(x_1) > f(x_2)$ then we choose $(x_1, x_2, x_3)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{b-c}$$Ok, that's weird. So what's golden about this? Take$$ \frac{a}{b} = \frac{c}{a} ~~~~ \text{and} ~~~~ \frac{a}{b} = \frac{c}{b-c}$$and eliminate $c$ to find$$\begin{aligned} c = \frac{a^2}{b} \Rightarrow \frac{a}{b} &= \frac{a^2 / b}{b-\frac{a^2}{b}} = \frac{a^2}{b^2-a^2} \\ a \left(b^2-a^2\right) &= a^2 b \\ \left(\frac{b}{a}\right)^2 - 1 &= \frac{b}{a} \\ 1 &= \left(\frac{b}{a}\right)^2 - \frac{b}{a}\end{aligned}$$This implies $\frac{b}{a} = \varphi$, i.e. the golden ratio.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'b')
axes.plot([x[0], x[2]], [0.0, 0.0], 'g')
axes.plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes.plot([x[1], x[1]], [-0.2, f(x[2])], 'r--')
axes.plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
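###Markdown
A quick numerical check (an added sketch, reusing the list `x` constructed in the cell above): the spacings of the four points should satisfy the two golden-section rules just derived.
###Code
# Added illustration: verify the golden-section proportions of the points above
a = x[1] - x[0]   # left spacing
b = x[3] - x[1]   # right spacing
c = x[2] - x[1]   # middle spacing
print("a + c = %s, b = %s" % (a + c, b))                                  # rule 1: a + c = b
print("b / a = %s, golden ratio = %s" % (b / a, (numpy.sqrt(5.0) + 1.0) / 2.0))  # rule 2
###Output
_____no_output_____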
###Markdown
Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \varphi \cdot (x_3 - x_0)$ and $x_2 = x_0 + \varphi \cdot (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE` Code demo...
###Code
# New Test Function!
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
plt.show()
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
TOLERANCE = 1e-4
MAX_STEPS = 100
x = [0.2, None, None, 0.5]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[0], f(x[0]),'ko')
axes.plot(x[3], f(x[3]),'ko')
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
x[3] = x[2]
x[2] = x[1]
x[1] = x[3] - phi * (x[3] - x[0])
else:
x[0] = x[1]
x[1] = x[2]
x[2] = x[0] + phi * (x[3] - x[0])
if numpy.abs(x[3] - x[0]) < TOLERANCE:
success = True
break
if success:
    print("Success!")
    print("  t* = %s" % str((x[3] + x[0]) / 2.0))
    print("  f(t*) = %s" % f((x[3] + x[0]) / 2.0))
    print("  number of steps = %s" % n)
else:
    print("Reached maximum number of steps!")
plt.show()
###Output
_____no_output_____
###Markdown
Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ 1. Use a polynomial fit to the function: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 1. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 ~~~~ \Rightarrow ~~~~ x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance Code demo...
###Code
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.7) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
    print("Success!")
    print("  t* = %s" % x[2])
    print("  f(t*) = %s" % f(x[2]))
    print("  number of steps = %s" % n)
else:
    print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains a lot of ways for optimization!
###Code
import scipy.optimize as optimize
optimize.golden(f, brack=(0.2, 0.25, 0.5))
###Output
_____no_output_____
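###Markdown
Another option (an added sketch, assuming a reasonably recent `scipy`): `minimize_scalar` wraps Brent-style minimization behind a single generic interface. It is applied here to the same `f` and bracket used above.
###Code
import scipy.optimize as optimize

# Added sketch: Brent's method for minimization via the generic interface
result = optimize.minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='brent')
print("t*    = %s" % result.x)
print("f(t*) = %s" % result.fun)
###Output
_____no_output_____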
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
###Output
_____no_output_____
###Markdown
Root Finding and OptimizationOur goal in this section is to develop techniques to approximate the roots of a given function $f(x)$. That is, find solutions $x$ such that $f(x)=0$. At first glance this may not seem like a meaningful exercise, however, this problem arises in a wide variety of circumstances. For example, suppose that you are trying to find a solution to the equation$$ x^2 + x = \sin{x}.$$Simply rearranging, the expression can be rewritten in the form$$ f(x) = x^2 + x -\sin{x} = 0.$$Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all. The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable. Example: Future Time AnnuityCan I ever retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$* $A$ total value after $n$ years* $P$ is payment amount per compounding period* $m$ number of compounding periods per year* $r$ annual interest rate* $n$ number of years to retirement Question:For a fixed monthly payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M? Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years')
axes.plot(r, total_value(P, m, r, 15),label='15 years')
axes.plot(r, total_value(P, m, r, n),label='20 years')
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Plot these$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend()
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800, \quad g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850, \quad g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875, \quad g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:
###Code
r_values = numpy.linspace(0.08, 0.1, 11)
g_values = g(P,m,r_values,n,goal)
residual = numpy.abs(g_values - r_values)
print(' r\t\t g(r)\t\tresidual')
print('------------------------------------------------')
for i,r in enumerate(r_values):
print('{:8.3f}\t{:10.8f}\t{:10.8f}\t'.format(r,g_values[i],residual[i]))
###Output
_____no_output_____
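###Markdown
We can also simply try iterating $r_{k+1} = g(r_k)$ directly (an added aside, reusing the `g` and the parameters `P`, `m`, `n`, `goal` defined above); keep an eye on whether the residual actually shrinks.
###Code
# Added sketch: naive fixed-point iteration r_{k+1} = g(r_k) for the retirement problem
r_k = 0.09
print(' k\t r\t\t residual')
print('--------------------------------------')
for k in range(8):
    r_k = g(P, m, r_k, n, goal)
    print('{:2d}\t{:10.6f}\t{:10.6f}'.format(k + 1, r_k, numpy.abs(g(P, m, r_k, n, goal) - r_k)))
###Output
_____no_output_____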
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Consider the iterative scheme: set $x_0$, then compute$$ x_i = g(x_{i-1})\quad \mathrm{for}\quad i=1,2,3\ldots$$ or in code
```python
x = x0
for i in range(N):
    x = g(x)
```
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$f(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best')
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] \quad \forall \quad x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
r = numpy.linspace(0.06, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r")
axes.set_ylabel("$g(r)$")
axes.set_xlim([0.06, 0.1])
axes.set_ylim([g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot([0.08, 0.08], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot([0.095, 0.095], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.08, n, goal), '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.095, n, goal), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$g\,'(x)$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that a fixed point $x^\ast$ exists, such that $$x^\ast = g(x^\ast)$$ Then define $$ x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k$$ substituting$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Evaluate $$ g(x^\ast + e_k)$$ Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$ $$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| \leq K |e_k|$$which shows (at least) linear convergence. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited* Example 1:$$g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56$$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ * Example 2: $$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ * Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g_sym.diff(r)
r_star = 0.08985602484084668
print("g(r) = ", g_sym)
print("g'(r) = ", g_prime)
print()
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
###Output
_____no_output_____
###Markdown
* Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
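###Markdown
As a quick sanity check on these definitions (an added sketch, not part of the original derivation): for Example 1, $x_{k+1} = e^{-x_k}$, we can estimate $n$ and $C$ by fitting a line to $\log|e_{k+1}|$ versus $\log|e_k|$. The slope should come out near 1 and the constant near $|g'(x^*)| \approx 0.56$, confirming linear convergence.
###Code
# Added sketch: estimate the order n and constant C in |e_{k+1}| = C |e_k|**n
# for the fixed point iteration x = exp(-x)
g_fp = lambda x: numpy.exp(-x)
x_star = 0.5671432904097838          # fixed point of x = exp(-x)
x_k = 0.4
errors = []
for k in range(15):
    x_k = g_fp(x_k)
    errors.append(numpy.abs(x_k - x_star))
errors = numpy.array(errors)
n_est, log_C = numpy.polyfit(numpy.log(errors[:-1]), numpy.log(errors[1:]), 1)
print("estimated n = {:.3f}, C = {:.3f}".format(n_est, numpy.exp(log_C)))
###Output
_____no_output_____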
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains exactly one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem) **Example**: The retirement problem again. For fixed $A, P, m, n$$$ f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went basic code
```python
def bisection(f, a, b, tol):
    MAX_STEPS = 1000
    delta_x = b - a
    c = a + delta_x / 2.0
    f_a = f(a)
    f_b = f(b)
    f_c = f(c)
    for step in range(1, MAX_STEPS + 1):
        if numpy.abs(f_c) < tol:
            break
        if numpy.sign(f_a) != numpy.sign(f_c):
            b = c
            f_b = f_c
        else:
            a = c
            f_a = f_c
        delta_x = b - a
        c = a + delta_x / 2.0
        f_c = f(c)
    return c
```
###Code
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
    Note: the signs of f(a) and f(b) must be different to ensure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
delta_x = b - a
c = a + delta_x / 2.0
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
c = a + delta_x / 2.0
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why?
###Code
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
# this might be useful
print(numpy.diff(r_array))
###Output
_____no_output_____
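###Markdown
A related back-of-the-envelope check (an added sketch) before looking at the convergence rate more formally: since the bracket is halved every step, shrinking an initial bracket of width $b-a$ down to a width `tol` takes roughly $\log_2\left(\frac{b-a}{\text{tol}}\right)$ steps.
###Code
# Added sketch: predicted number of bisection steps for a given bracket width
a, b = 0.07, 0.10
for tol in [1e-4, 1e-8, 1e-12]:
    steps = int(numpy.ceil(numpy.log2((b - a) / tol)))
    print("tol = {:.0e}  ->  about {} steps".format(tol, steps))
###Output
_____no_output_____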
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$.Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship then between $x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so therefore the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point - Can we do "better"? Newton's method, *when well behaved*, can achieve quadratic convergence. **Basic Ideas**: There are multiple interpretations we can use to derive Newton's method* Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$ * A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be.* As a special case of a fixed-point iteration Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$ 0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$ x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at$$x_{k+1} = x_k + \delta_k$$and $$ f(x_{k+1}) = 0 $$ expanding in a Taylor series around point $x_k$ $$ f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)$$ substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives$$ f(x_k) + f'(x_k) \delta_k =0$$ or solving for the correction$$ \delta_k = -f(x_k)/f'(x_k)$$ which leads to the update for the next iteration$$ x_{k+1} = x_k + \delta_k $$or$$ x_{k+1} = x_k -f(x_k)/f'(x_k)$$rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced) Algorithm1. Initialize $x = x_0$1. While ( $|f(x)| > \text{tol}$ ) - solve $\delta = -f(x)/f'(x)$ - update $x \leftarrow x + \delta$ Geometric interpretationBy truncating the Taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $f(x_k)$. If the function were linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some code
###Code
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why? Example: $$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ setup in sympy
###Code
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
###Output
_____no_output_____
###Markdown
and solve
###Code
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
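###Markdown
A small added check (reusing the `f` and `x_array` from the cell above): for a quadratically convergent method the residual is roughly squared each step, so the ratios $|f(x_{k+1})| / |f(x_k)|^2$ should settle near a constant.
###Code
# Added sketch: quadratic convergence check -- residual is roughly squared each step
res = numpy.abs(f(x_array))
print(res)
print(res[1:] / res[:-1]**2)
###Output
_____no_output_____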
###Markdown
Asymptotic Convergence of Newton's MethodNewton's method can also be considered a fixed point iteration$$x_{k+1} = g(x_k)$$with $g(x) = x - \frac{f(x)}{f'(x)}$ Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$ Taylor Expansion around $x^*$$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$ which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. Example: Convergence for a non-simple rootConsider our first problem$$ f(x) = x^2 + x - \sin(x)$$the case is, unfortunately, not as rosy. Why might this be? Setup the problem
###Code
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 +x - sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
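###Markdown
A quick added check on this run (reusing the `x_array` just computed): the root $x^* = 0$ is non-simple here, and the successive error ratios $|x_{k+1}| / |x_k|$ settle near a constant rather than shrinking quadratically.
###Code
# Added sketch: error ratios for the non-simple root at x* = 0 (linear convergence)
errors = numpy.abs(x_array)          # x* = 0, so the error is just |x_k|
print(errors[1:] / errors[:-1])
###Output
_____no_output_____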
###Markdown
Convergence appears linear, can you show this?$$f(x) = x^2 + x -\sin (x)$$ Example: behavior of Newton with many roots$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of AttractionGiven a point $x_0$ can we determine whether Newton-Raphson converges and, if so, to **which root** it converges?A *basin of attraction* $X$ for Newton's method is defined as a set such that $\forall x \in X$ the Newton iteration converges to the same root. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are complicated or even fractal.
###Code
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
###Output
_____no_output_____
###Markdown
Fractal Basins of AttractionIf $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractalPlotted below are two fairly simple equations which demonstrate the issue:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence and not quite quadratic as the exponent on the convergence is $\approx 1.6$. Alternative interpretation: fit a line through two points and see where they intersect the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axis ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Secant Method$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, a `TOLERANCE`, and a `MAX_STEPS` 1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$2. Loop for $k = 2, 3, \ldots$ until `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$ 2. Check for convergence and break if reached 3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$ Some Code
###Code
import warnings

def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package
###Code
from scipy.optimize import brentq
a = 0.07
b = 0.1
x, res = brentq(f, a, b, full_output=True)
print('x = {}, f(x) = {}'.format(x, f(x)))
print(res)
#brentq?
###Output
_____no_output_____
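###Markdown
The same `scipy.optimize` module also provides `newton`, which runs Newton's method when a derivative is supplied and falls back to a secant iteration when it is not. A minimal sketch reusing the `f` defined above (the starting guess 0.07 is just a reasonable choice, not taken from the notes):
###Code
import scipy.optimize

# No fprime supplied, so scipy.optimize.newton uses a secant iteration internally
x_root = scipy.optimize.newton(f, 0.07)
print('x = {}, f(x) = {}'.format(x_root, f(x_root)))
###Output
_____no_output_____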
###Markdown
Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ at the three points 1. Find the quadratic polynomial that interpolates those points: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 3. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance
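The core step is short with `numpy.polyfit`: fit a quadratic through three points and read off its vertex $-p_1 / (2 p_0)$. A one-step sketch (the three sample points and the test function are just an illustrative choice):
###Code
# One parabolic-interpolation step on a simple convex function (illustrative sketch)
g_demo = lambda t: (t - 0.3)**2 + 1.0
x_three = numpy.array([0.0, 0.5, 1.0])
p = numpy.polyfit(x_three, g_demo(x_three), 2)
print("fitted coefficients p =", p)
print("vertex -p1 / (2 p0) =", -p[1] / (2.0 * p[0]))   # recovers the true minimum t = 0.3
###Output
_____no_output_____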
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.7) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some Code
###Code
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
###Output
_____no_output_____
###Markdown
set up problem
###Code
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extrema exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
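These rules are straightforward to encode directly; the helper below is a small sketch (the name `pick_bracket` is just illustrative) that returns the new bracket given the four points and their function values.
###Code
def pick_bracket(x, fx):
    """Apply the bracket-picking rules to x = [x0, x1, x2, x3] (illustrative sketch).

    Returns [x0, x2] if f(x1) < f(x2), otherwise [x1, x3].
    """
    if fx[1] < fx[2]:
        return [x[0], x[2]]
    else:
        return [x[1], x[3]]

# Quick check on f(x) = x**2 with the first set of search points plotted below
f_demo = lambda x: x**2
test_points = [-1.0, -0.5, 0.75, 1.0]
print(pick_bracket(test_points, [f_demo(p) for p in test_points]))
###Output
_____no_output_____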
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets. Define: the length of the line segments in the interval\begin{aligned} a &= x_1 - x_0, \\ b &= x_2 - x_1,\\ c &= x_3 - x_2 \\\end{aligned}and the total bracket length\begin{aligned} d &= x_3 - x_0. \\\end{aligned}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length, i.e. $x_2 - x_0 = x_3 - x_1$, or $$ a + b = b + c $$ or simply $a = c$
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
- The ratio of segment lengths is the same for every level of recursion so the problem is self-similar i.e. $$ \frac{b}{a} = \frac{c}{a + b} $$ These two requirements will allow maximum reuse of previous points and require adding only one new point $x^*$ at each iteration.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
As the first rule implies that $a = c$, we can substitute into the second rule to yield$$ \frac{b}{a} = \frac{a}{a + b}$$ or inverting and rearranging $$ \frac{a}{b} = 1 + \frac{b}{a}$$ if we let the ratio $b/a = x$, then $$ x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0$$ $$ x^2 + x - 1 = 0$$has a single positive root for $$ x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949$$where $\varphi$ is related to the "golden ratio" (which in most definitions is given by $1+\varphi$, but either works as $ 1+\varphi = 1/\varphi $ ) Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$ Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that$$ d = a + b + c = (2 + \varphi)a $$or $$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$by the rather special properties of $\varphi$. We could use this result immediately to find \begin{align} x_1 &= x_0 + a \\ x_2 &= x_3 - a \\\end{align} Equivalently, you can show that $$a + b = (1 + \varphi)a = \varphi d$$so\begin{align} x_1 &= x_3 - \varphi d \\ x_2 &= x_0 + \varphi d \\\end{align}
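A quick numerical check of these point placements on an arbitrary bracket $[0, 1]$ (a sketch; it verifies $a = c$ and $b / a = \varphi$):
###Code
# Verify the golden-section proportions numerically (illustrative sketch on the bracket [0, 1])
varphi = (numpy.sqrt(5.0) - 1.0) / 2.0
x0, x3 = 0.0, 1.0
d = x3 - x0
x1 = x3 - varphi * d
x2 = x0 + varphi * d
a, b, c = x1 - x0, x2 - x1, x3 - x2
print("a = {:.6f}, b = {:.6f}, c = {:.6f}".format(a, b, c))
print("a == c?", numpy.isclose(a, c), "  b / a = {:.6f}, varphi = {:.6f}".format(b / a, varphi))
###Output
_____no_output_____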
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE` 1. calculate the appropriate new point $x^*$ ($x_1$ on left, $x_2$ on right)
###Code
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
    this routine shrinks the bracket by the golden ratio each iteration and needs only one new function evaluation per step
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tolerance: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains many routines for optimization!
###Code
import scipy.optimize as optimize
print(optimize.golden(f, brack=(0.2, 0.25, 0.5)))
###Output
_____no_output_____
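###Markdown
`scipy.optimize.minimize_scalar` wraps the same ideas behind a single interface; a minimal sketch reusing the bracket from the call above:
###Code
import scipy.optimize as optimize

result = optimize.minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='golden')
print(result.x, result.fun)
###Output
_____no_output_____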
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Root Finding and Optimization**GOAL:** Find where $f(x) = 0$. Example: Future Time AnnuityWhen can I retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$$A$ total value after $n$ years$P$ is payment amount per compounding period$m$ number of compounding periods per year$r$ annual interest rate$n$ number of years to retirement If I want to retire in 20 years what does the annual interest rate $r$ need to be?Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, n))
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("A (total value)")
axes.set_title("When can I retire?")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("$g(r)$")
axes.set_title("When can I retire?")
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800, \quad g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850, \quad g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875, \quad g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:```pythonr_values = numpy.linspace(0.08, 0.09, 10)for r in r_values: print("r = ", r, "g(r) =", g(P, m, r, n, goal)) print("Difference = ", numpy.abs(g(P, m, r, n, goal) - r)) r = g(P, m, r, n, goal)```
###Code
r_values = numpy.linspace(0.08, 0.09, 11)
for r in r_values:
print("r = ", r, "g(r) =", g(P, m, r, n, goal))
print("Difference = ", numpy.abs(g(P, m, r, n, goal) - r))
r = g(P, m, r, n, goal)
###Output
_____no_output_____
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
x = 0.4
for steps in range(3):
print("x = ", x, "Residual = ", numpy.abs(numpy.exp(-x) - x))
x = numpy.exp(-x)
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x, numpy.exp(-x), steps+1, fontsize="15")
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim([0, 1.5])
x = 0.5
for steps in range(3):
print("x = ", x, "Residual = ", numpy.abs(numpy.log(x) + x))
x = -numpy.log(x)
axes.plot(x, -numpy.log(x),'kx')
axes.text(x, -numpy.log(x), steps+1, fontsize="15")
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] \quad \forall \quad x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
plt.show()
r = numpy.linspace(0.06, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r")
axes.set_ylabel("$g(r)$")
axes.set_xlim([0.06, 0.1])
axes.set_ylim([g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot([0.08, 0.08], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot([0.09, 0.09], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.08, n, goal), '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.09, n, goal), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim((0.0, 1.1))
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that $\exists ~ x^\ast$ s.t. $x^\ast = g(x^\ast)$ (i.e. $x^\ast$ is the fixed point), then define$$ x_k = x^\ast + e_k \quad \quad x_{k+1} = x^\ast + e_{k+1}$$and$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Taylor expand the function $g$ about $x^\ast$:$$ g(x) = g(x^\ast) + g'(x^\ast) (x - x^\ast) + \frac{g''(x^\ast)}{2!} (x - x^\ast)^2 + \mathcal{O}((x - x^\ast)^3)$$Evaluate this series at $x_k = x^\ast + e_k$ to find$$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + \mathcal{O}(e_k^3)$$therefore from our definition from before that $x^\ast + e_{k+1} = g(x^\ast + e_k)$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + \mathcal{O}(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited$g(x) = e^{-x}$ with $x^* \approx 0.56$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ $g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ $$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
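The linear rate is easy to see numerically; the short sketch below iterates $x_{k+1} = e^{-x_k}$ and prints the error ratio, which settles near $|g'(x^\ast)| \approx 0.567$ (using the known fixed point $x^\ast = W(1) \approx 0.5671433$).
###Code
# Numerical check of the linear convergence rate for g(x) = exp(-x) (illustrative sketch)
g_fp = lambda x: numpy.exp(-x)
x_star = 0.5671432904097838   # fixed point of exp(-x), the Lambert W value W(1)
x_fp = 0.4
e_old = abs(x_fp - x_star)
for k in range(8):
    x_fp = g_fp(x_fp)
    e_new = abs(x_fp - x_star)
    print("k = {}, error = {:.3e}, ratio = {:.4f}".format(k + 1, e_new, e_new / e_old))
    e_old = e_new
###Output
_____no_output_____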
###Code
import sympy
r, P, m, A, n = sympy.symbols('r P m A n')
g = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g.diff(r)
r_star = 0.08985602484084668
print("g'(r) = ", g_prime)
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
r = numpy.linspace(-0.01, 0.1, 100)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$")
axes.set_ylabel("$g'(r)$")
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains exactly one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem)
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
# Algorithm parameters
TOLERANCE = 1e-4
MAX_STEPS = 5
# Initialize loop
delta_x = b - a
c = a + delta_x / 2.0
f_a = f(a)
f_b = f(b)
f_c = f(c)
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Plot iteration
axes.plot(c, f_c,'kx')
axes.text(c, f_c, str(step + 1), fontsize="15")
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < TOLERANCE:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
c = a + delta_x / 2.0
f_c = f(c)
if step == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % c)
print(" f(x*) = %s" % f(c))
print(" number of steps = %s" % step)
###Output
_____no_output_____
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$.Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship then between $x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so therefore the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point **Basic Idea**: Given $f(x)$ and $f'(x)$ use a linear approximation to $f(x)$ "locally" and use the x-intercept of the resulting line to predict where $x^*$ might be. Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$ 0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$ x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ An alternative method of derivation for Newton-Raphson (and more in line with our methods) uses Taylor series. Expand the function $f(x)$ in a Taylor series about the current Newton-Raphson iteration $x_k$:$$ f(x) = f(x_k) + f'(x_k) (x - x_k) + \frac{f''(x_k)}{2!} (x - x_k)^2 + \mathcal{O}((x-x_k)^3)$$Let $\Delta x_k$ be the update to the $x_{k+1}$ iteration such that$$ x_{k+1} = x_k + \Delta x_k$$and evaluate our expression for $f(x)$ at $x_{k+1}$:$$ f(x_{k+1}) = f(x_k) + f'(x_k) \Delta x_k + \frac{f''(x_k)}{2!} \Delta x_k^2 + \mathcal{O}(\Delta x_k^3)$$ Now assume that $x_{k+1} = x^\ast$, if this is the case the above simplifies to$$ 0 = f(x_k) + f'(x_k) \Delta x_k + \frac{f''(x_k)}{2!} \Delta x_k^2 + \mathcal{O}(\Delta x_k^3)$$and dropping the higher order terms leads to$$ \Delta x_k = - \frac{f(x_k)}{f'(x_k)}$$assuming that $f \in \mathbb R$ leading to the update$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$
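Before applying this to the finance problem below, a minimal sketch on the toy problem $f(x) = x^2 - 2$ (whose root $\sqrt{2}$ is known) shows how quickly the update converges.
###Code
# Newton's method on f(x) = x**2 - 2 (illustrative sketch; the root is sqrt(2))
f_demo = lambda x: x**2 - 2.0
f_demo_prime = lambda x: 2.0 * x
x_newton = 1.0
for k in range(5):
    x_newton = x_newton - f_demo(x_newton) / f_demo_prime(x_newton)
    print("k = {}, x = {:.16f}, error = {:.2e}".format(k + 1, x_newton, abs(x_newton - numpy.sqrt(2.0))))
###Output
_____no_output_____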
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What does the alogrithm look like for Newton-Raphson? Algorithm1. Initialize $x_k$1. Begin loop 1. Compute $f(x_k)$ and $f'(x_k)$ 1. Use these to compute new $x_{k+1}$ 1. Check stopping criteria
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 200
TOLERANCE = 1e-4
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.plot(x_k, f(x_k),'kx')
axes.text(x_k, f(x_k), str(n), fontsize="15")
x_k = x_k - f(x_k) / f_prime(x_k)
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % x_k)
print(" f(x*) = %s" % f(x_k))
print(" number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Example:$$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ Asymptotic Convergence of Newton's MethodFor a simple root (i.e. not a repeated root) - Let $g(x) = x - \frac{f(x)}{f'(x)}$, then$$x_{k+1} = g(x_k)$$ Definitions of errors and iteration:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$General Taylor expansion:$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \mathcal{O}(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$leads to $$ |e_{k+1}| = \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. For a multiple root (e.g. $f(x) = (x-1)^2$) the case is not particularly rosy unfortunately. Why might this be? Example:$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
plt.show()
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
x_kp = lambda x: x - 1.0 / (2.0 * numpy.pi) * numpy.tan(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and the Newton update map")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
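###Markdown
The convergence rates claimed above are easy to check numerically; the sketch below runs Newton's method on $f(x) = x - e^{-x}$ (a simple root, where the residual collapses roughly quadratically) and on the multiple-root example $f(x) = (x - 1)^2$, where the error is only halved each step.
###Code
# Compare Newton convergence for a simple root and a double root (illustrative sketch)
f_simple = lambda x: x - numpy.exp(-x)
f_simple_prime = lambda x: 1.0 + numpy.exp(-x)
f_double = lambda x: (x - 1.0)**2
f_double_prime = lambda x: 2.0 * (x - 1.0)

x_s, x_d = 0.0, 2.0
print("step   |f_simple(x)|   |x_d - 1| (double root error)")
for k in range(6):
    x_s = x_s - f_simple(x_s) / f_simple_prime(x_s)
    x_d = x_d - f_double(x_d) / f_double_prime(x_d)
    print("{:4d}   {:13.3e}   {:.3e}".format(k + 1, abs(f_simple(x_s)), abs(x_d - 1.0)))
###Output
_____no_output_____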
###Markdown
Basins of AttractionGiven a point $x_0$ can we determine if Newton-Raphson converges?A *basin of attraction* $X$ for Newton's methods is defined as the set such that $\forall x \in X$ Newton iterations converges. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are fractal. Plotted below are two fairly simple equations which demonstrate the problem:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
# R is complex after the Newton iterations; contour its real part
axes.contour(X, Y, numpy.real(R))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
# R is complex after the Newton iterations; contour its real part
axes.contour(X, Y, numpy.real(R))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? How would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence and not quite quadratic as the exponent on the convergence is $\approx 1.7$. Alternative interpretation: fit a line through the two points and see where it intersects the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axis ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, a bracket $[a,b]$, a `TOLERANCE`, and a `MAX_STEPS` (note we need two points to start).1. Initialize $x_1 = a$, $x_2 = b$, $f_1 = f(x_1)$, and $f_2 = f(x_2)$2. Loop until either `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $x_{k+1}$ by the update formula 2. Check for convergence and break if reached 3. Update parameters $x_1$, $x_2$, $f_1 = f(x_1)$ and $f_2 = f(x_2)$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 50
TOLERANCE = 1e-4
# Initial bracket
x_k = 0.07
x_km = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.plot(x_k, f(x_k), 'o')
axes.text(x_k + 0.0025, f(x_k), n, fontsize="15")
x_kp = x_k - f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km))
x_km = x_k
x_k = x_kp
print("Residual = ", numpy.abs(f(x_k)))
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % x_k)
print(" f(x*) = %s" % f(x_k))
print(" number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Bracketing Algorithms: Golden-Section Search (linear) - Interpolation Algorithms: Repeated parabolic interpolation - Hybrid Algorithms Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extremum exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets.Define $$\begin{aligned} a &= x_1 - x_0, \\ b &= x_3 - x_1,\\ c &= x_2 - x_1 \quad \text{and} \\ d &= x_3 - x_2.\end{aligned}$$For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length. If we pick the left bracket $[x_0, x_2]$ then $$ a+c = b $$ and the right bracket $[x_1, x_3]$ $$ d + c = b. $$ - The distances between subsequent triplets is proportional.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (1.0 + numpy.sqrt(5.0)) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - 1.0 / phi * (x[3] - x[0])
x[2] = x[0] + 1.0 / phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
    axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = 1.0 / phi * (x[1] - x[0]) + x[0]
axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1] + 0.05, f(x_new[1]) + 0.1, "*", fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) / phi + x[2]
axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[1])], 'c--')
axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2] + 0.05, f(x_new[2]) + 0.1, "*", fontsize='15')
plt.show()
###Output
_____no_output_____
###Markdown
The first rule implies:$$\begin{aligned} a + c &= b \\ x_1 - x_0 + x_2 - x_1 &= x_3 - x_1 \\ x_2 - x_0 &= x_3 - x_1.\end{aligned}$$Assume that this allows us to pick $x_2$ (we need to figure out how to choose $x_1$). We then know$$ x_2 = x_3 - x_1 + x_0.$$ Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. Since we have two choices for our new interval we write down many proportionality constraints however let us focus on the two defined by the distances $a$, $b$, and $c$.If $f(x_1) < f(x_2)$ then we choose $(x_0, x_1, x_2)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{a}$$If $f(x_1) > f(x_2)$ then we choose $(x_1, x_2, x_3)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{b-c}$$ Using these relations we can solve for the ratio $b / a$ via the following. Take$$ \frac{a}{b} = \frac{c}{a} \quad \text{and} \quad \frac{a}{b} = \frac{c}{b-c}$$and eliminate $c$ to find$$\begin{aligned} c &= \frac{a^2}{b} \Rightarrow \\ \frac{a}{b} &= \frac{a^2}{b^2-a^2} \\ ab^2 - a^3 &= a^2 b \\ \frac{b^2}{a^2} - \frac{b}{a} - 1 &= 0\end{aligned}$$whose solution is$$ \frac{b}{a} = \frac{1 \pm \sqrt{5}}{2} = \varphi$$where $\varphi$ is the well known "golden ratio" (note that there are two values here, the most common definition of $\varphi$ uses the $+$ branch but in fact you can use either depending on the application). Back to the problem at hand, we now need to pick our new set of points. Note that we only need one new point as the other three are left-overs from the previous iteration. Let us concentrate on the case where the extrema is between $[x_0, x_2]$. Denote the new bracket values with $\hat{\quad}$ and identify$$ \hat{x_0} = x_0, \quad \hat{x_2} = x_1, \quad \text{and} \quad \hat{x_3} = x_2.$$In this case we need to find $\hat{x_1}$, in that case use the subsequent intervals $a$ and $\hat{a_~}$ and equate$$ \varphi \hat{a~} = a \Rightarrow \varphi (\hat{x_1} - \hat{x_0}) = x_1 - x_0$$which in terms of the previous values can be solved for $\hat{x_1}$ to lead to$$ \hat{x_1} = \frac{x_1 - x_0}{\varphi} + x_0$$ In the alternative case we have the bracket $[x_1, x_3]$ and$$ \hat{x_0} = x_1, \quad \hat{x_1} = x_2, \quad \text{and} \quad \hat{x_3} = x_3$$where we now need to find $\hat{x_2}$. Instead of using $\hat{a~}$ we can use $\hat{b~}$ and the relationship$$ \varphi \hat{c~} = c \Rightarrow \varphi (\hat{x_2} - \hat{x_1}) = x_2 - x_1$$which again can be manipulated to lead to the value of $\hat{x_2}$ as$$ \hat{x_2} = \frac{x_2 - x_1}{\varphi} + x_2.$$ Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \frac{1}{\varphi} \cdot (x_3 - x_0)$ and $x_2 = x_0 + \frac{1}{\varphi} \cdot (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE`
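A quick numerical check of these update formulas on an arbitrary bracket $[0, 1]$ (a sketch; each new bracket keeps the same golden proportions):
###Code
# Check the golden-section update formulas on the bracket [0, 1] (illustrative sketch)
phi_check = (1.0 + numpy.sqrt(5.0)) / 2.0
x0, x3 = 0.0, 1.0
x1 = x3 - 1.0 / phi_check * (x3 - x0)
x2 = x0 + 1.0 / phi_check * (x3 - x0)

# Left bracket keeps [x0, x1, x2] and adds x1_hat; right bracket keeps [x1, x2, x3] and adds x2_hat
x1_hat = (x1 - x0) / phi_check + x0
x2_hat = (x2 - x1) / phi_check + x2
left = [x0, x1_hat, x1, x2]
right = [x1, x2, x2_hat, x3]

for label, pts in [("left ", left), ("right", right)]:
    ratio = (pts[3] - pts[1]) / (pts[3] - pts[0])
    print(label, "bracket:", ["{:.4f}".format(p) for p in pts], " (x3 - x1)/(x3 - x0) =", round(ratio, 4))
###Output
_____no_output_____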
###Code
# New Test Function!
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.set_xlim((0.0, 2.0))
plt.show()
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
phi = (1.0 + numpy.sqrt(5.0)) / 2.0
TOLERANCE = 1e-4
MAX_STEPS = 100
x = [0.2, None, None, 0.5]
x[1] = x[3] - 1.0 / phi * (x[3] - x[0])
x[2] = x[0] + 1.0 / phi * (x[3] - x[0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[0], f(x[0]),'ko')
axes.plot(x[3], f(x[3]),'ko')
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
x_new[1] = 1.0 / phi * (x[1] - x[0]) + x[0]
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) / phi + x[2]
x = x_new
if numpy.abs(x[3] - x[0]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % str((x[3] + x[0]) / 2.0))
print(" f(t*) = %s" % f((x[3] + x[0]) / 2.0))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
plt.show()
###Output
_____no_output_____
###Markdown
Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit polynomial to function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ 1. Use a polynomial fit to the function: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 1. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.7) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationSciPy provides a wide range of optimization routines!
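Besides `optimize.golden`, used in the next cell, `scipy.optimize.minimize_scalar` wraps several of the approaches discussed above behind one interface. A quick sketch (the test function is redefined here only so the snippet is self-contained, and the bracket reuses the triple from the cell below):

```python
import numpy
from scipy import optimize

def f(t):
    """Same SPAM-poisoning demo function as above"""
    return (-3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2)
            + numpy.exp(-(t - 0.6)**2 / (0.2)**2)
            + numpy.exp(-(t - 1.0)**2 / (0.2)**2)
            + numpy.sin(t) - 2.0)

# Brent's method (the default) starting from a bracketing triple
result = optimize.minimize_scalar(f, bracket=(0.2, 0.25, 0.5))
print(result.x, result.fun)

# Bounded minimization restricted to an interval
result = optimize.minimize_scalar(f, bounds=(0.0, 0.5), method='bounded')
print(result.x, result.fun)
```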
###Code
import scipy.optimize as optimize
print(optimize.golden(f, brack=(0.2, 0.25, 0.5)))
###Output
_____no_output_____
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Root Finding and Optimization**GOAL:** Find where $f(x) = 0$. Example: Future Time AnnuityWhen can I retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$$P$ is payment amount per compounding period$m$ number of compounding periods per year$r$ annual interest rate$n$ number of years to retirement$A$ total value after $n$ years If I want to retire in 20 years what does $r$ need to be?Set $P = \frac{\$18,000}{12} = \$1500, ~~~~ m=12, ~~~~ n=20$.
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, n))
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("A (total value)")
axes.set_title("When can I retire?")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("$g(r)$")
axes.set_title("When can I retire?")
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800$, $g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850$, $g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875$, $g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:
###Code
r = 0.09
for steps in range(10):
    print("r = ", r)
    print("Residual = ", g(P, m, r, n, goal) - r)
    r = g(P, m, r, n, goal)
    print()
###Output
_____no_output_____
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$Note that this problem is equivalent to $x = -\ln x$.
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
x = 0.4
for steps in range(7):
    print("x = ", x)
    print("Residual = ", numpy.exp(-x) - x)
    x = numpy.exp(-x)
    print()
axes.plot(x, numpy.exp(-x),'o',)
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim([0.0, 1.5])
x = 0.5
for steps in range(3):
    print("x = ", x)
    print("Residual = ", numpy.log(x) + x)
    x = -numpy.log(x)
    print()
axes.plot(x, -numpy.log(x),'o',)
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point Iteration*Theorem*: Existence and uniqueness of fixed point problemsAssume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b]~~~ \forall~~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
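A quick numerical check of this range condition on the interval $[0.4, 0.8]$ used in the plots below (just an illustration; the interval choice mirrors those plots):

```python
import numpy

x = numpy.linspace(0.4, 0.8, 100)

# Does y = g(x) stay inside [0.4, 0.8] for every x in [0.4, 0.8]?
for name, g in [("exp(-x)", lambda x: numpy.exp(-x)),
                ("-ln(x)", lambda x: -numpy.log(x))]:
    y = g(x)
    print(name, "range = [%.3f, %.3f]" % (y.min(), y.max()),
          "maps the interval into itself:", bool(y.min() >= 0.4 and y.max() <= 0.8))
```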
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
Additionally, suppose $g'(x)$ is defined for $x \in [a,b]$ and $\exists K < 1$ s.t. $|g'(x)| \leq K < 1 ~~~ \forall ~~~ x \in (a,b)$, then $g$ has a unique fixed point $P \in [a,b]$
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.exp(-x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
plt.show()
###Output
_____no_output_____
###Markdown
*Theorem 2*: Asymptotic convergence behavior of fixed point iterations$$x_{k+1} = g(x_k)$$Assume that $\exists ~ x^*$ s.t. $x^* = g(x^*)$$$x_k = x^* + e_k ~~~~~~~~~~~~~~ x_{k+1} = x^* + e_{k+1}$$$$x^* + e_{k+1} = g(x^* + e_k)$$Using a Taylor expansion we know$$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$$$x^* + e_{k+1} = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| \leq K |e_k|$$which shows convergence (linearly, at a rate set by $K$). Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme exhibits **quadratic convergence** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited$g(x) = e^{-x}$ with $x^* \approx 0.56$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ $g(x) = - \ln x$ with $x^* \approx 0.56$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ $g(r) = \frac{m P}{A} ((1 + \frac{r}{m})^{mn} - 1)$ with $r^* \approx 0.09$$$|g'(r^*)| = \frac{P m n}{A} \left(1 + \frac{r}{m} \right)^{m n - 1} \approx 2.15$$
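The linear rate for $g(x) = e^{-x}$ can be checked directly; in this sketch the reference value of $x^\ast$ is simply obtained by iterating many times, and the printed error ratios approach $|g'(x^\ast)| \approx 0.567$.

```python
import numpy

g = lambda x: numpy.exp(-x)

# Reference fixed point from many iterations
x_star = 0.5
for i in range(200):
    x_star = g(x_star)

# Error ratio |e_{k+1}| / |e_k| should settle near |g'(x*)| = exp(-x*) ~ 0.567
x = 0.4
error = numpy.abs(x - x_star)
for k in range(1, 9):
    x = g(x)
    new_error = numpy.abs(x - x_star)
    print("k = %d, |e_k| = %.3e, ratio = %.4f" % (k, new_error, new_error / error))
    error = new_error
```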
###Code
import sympy
m, P, A, r, n = sympy.symbols('m, P, A, r, n')
(m * P / A * ((1 + r / m)**(m * n) - 1)).diff(r)
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA *bracket* is an interval $[a,b]$ s.t. it contains the zero or minima/maxima of interest. In the case of a zeros the bracket should satisfy $\text{sign}(f(a)) \neq \text{sign}(f(b))$. In the case of minima or maxima we need $f'(a)$ and $f'(b)$ to be opposite.**Theorem**: If $f(x) \in C[a,b]$ and $\text{sign}(f(a)) \neq \text{sign}(f(b))$ then there exists a number $c \in (a,b)$ s.t. $f(c) = 0$. (proof uses intermediate value theorem)
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went
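A minimal sketch of this loop packaged as a reusable function (the function name, the tolerance on $|f(c)|$, and the cubic test problem are illustrative choices, not part of the cell below):

```python
import numpy

def bisection(f, a, b, tolerance=1e-10, max_steps=100):
    """Find a zero of f inside the bracket [a, b] by repeated halving"""
    f_a, f_b = f(a), f(b)
    if numpy.sign(f_a) == numpy.sign(f_b):
        raise ValueError("[a, b] does not bracket a zero of f")
    for step in range(1, max_steps + 1):
        c = a + (b - a) / 2.0
        f_c = f(c)
        if numpy.abs(f_c) < tolerance:
            return c, step
        if numpy.sign(f_a) != numpy.sign(f_c):
            b, f_b = c, f_c   # the zero lies in [a, c]
        else:
            a, f_a = c, f_c   # the zero lies in [c, b]
    raise ValueError("Reached maximum number of steps!")

# Quick check on a cubic with a root at x = 2
print(bisection(lambda x: x**3 - 8.0, 0.0, 5.0))
```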
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
# Algorithm parameters
TOLERANCE = 1e-4
MAX_STEPS = 100
# Initialize loop
f_a = f(a)
f_b = f(b)
delta_x = b - a
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
c = a + delta_x / 2.0
f_c = f(c)
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
# Plot iteration
axes.text(c, f(c), str(step))
# Check tolerance - Could also check the size of delta_x
if numpy.abs(f_c) < TOLERANCE:
break
if step == MAX_STEPS:
print "Reached maximum number of steps!"
else:
print "Success!"
print " x* = %s" % c
print " f(x*) = %s" % f(c)
print " number of steps = %s" % step
###Output
_____no_output_____
###Markdown
Convergence of Bisection$$|e_{k+1}| = C |e_k|^n$$$$e_k \approx \Delta x_k$$$$e_{k+1} \approx \frac{1}{2} \Delta x_k$$$$|e_{k+1}| = \frac{1}{2} |e_k|$$$\Rightarrow$ Linear convergence Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point **Basic Idea**: Given $f(x)$ and $f'(x)$ use a linear approximation to $f(x)$ "locally" and use x-intercept of the resulting line to predict where $x^*$ might be. Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 100
TOLERANCE = 1e-4
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_k$", fontsize=16)
axes.text(0.089, f(x_k) - 2e4, "$f(x_k)$", fontsize=16)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Algorithm1. Initialize $x_k$1. Begin loop and calculate the next iterate $x_{k+1}$1. Check stopping criteria
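A minimal sketch of this loop as a reusable function, tried out on the $f(x) = x - e^{-x}$ example worked through below (the function name and the tolerance are illustrative choices):

```python
import numpy

def newton(f, f_prime, x_k, tolerance=1e-12, max_steps=100):
    """Newton-Raphson iteration starting from the initial guess x_k"""
    for n in range(1, max_steps + 1):
        x_k = x_k - f(x_k) / f_prime(x_k)
        if numpy.abs(f(x_k)) < tolerance:
            return x_k, n
    raise ValueError("Reached maximum number of steps!")

f = lambda x: x - numpy.exp(-x)
f_prime = lambda x: 1.0 + numpy.exp(-x)
print(newton(f, f_prime, 0.0))
```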
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 100
TOLERANCE = 1e-4
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.text(x_k, f(x_k), str(n))
x_k = x_k - f(x_k) / f_prime(x_k)
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print "Reached maximum number of steps!"
else:
print "Success!"
print " x* = %s" % x_k
print " f(x*) = %s" % f(x_k)
print " number of steps = %s" % n
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Example:$$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ Asymptotic Convergence of Newton's MethodFor a simple root (non-multiplicative) - Let $g(x) = x - \frac{f(x)}{f'(x)}$, then$$x_{k+1} = g(x_k)$$ Definitions of errors and iteration:$$x_{k+1} = x^* + e_{k+1} ~~~~~ x_k = x^* + e_k$$General Taylor expansion:$$x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$What about $g'(x^*)$ though:$$g'(x) = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x)f''(x)}{f'(x)^2}$$which simplifies when evaluated at $x = x^*$ to$$g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$ The expansion then simplifies to $$e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$leading to the conclusion that $$|e_{k+1}| = \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. For a multiple root (e.g. $f(x) = (x-1)^2$) the case is not particularly rosey unfortunately. Example:$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
plt.show()
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
x_kp = lambda x: x - 1.0 / (2.0 * numpy.pi) * numpy.tan(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? Maybe something that calculates the derivative rather than expects it? Given $x_k$ and $x_{k-1}$ represent the derivative as$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the basic approach of Newton leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence (the exponent on the convergence is $\approx 1.7$) Alternative interpretation, fit a line through two points and see where they intersect the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1})$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axies ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
AlgorithmGiven $f(x)$, given bracket $[a,b]$, a `TOLERANCE`, and a `MAX_STEPS`1. Initialize $x_1 = a$, $x_2 = b$, $f_1 = f(x_1)$, and $f_2 = f(x_2)$2. Loop until either `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $x_{k+1}$ 2. Check for convergence and break if reached 3. Update parameters $x_1$, $x_2$, $f_1 = f(x_1)$ and $f_2(x_2)$3. Celebrate
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 100
TOLERANCE = 1e-4
# Initial guess
x_k = 0.07
x_km = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.plot(x_k, f(x_k), 'o')
x_kp = x_k - f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km))
x_km = x_k
x_k = x_kp
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print "Reached maximum number of steps!"
else:
print "Success!"
print " x* = %s" % x_k
print " f(x*) = %s" % f(x_k)
print " number of steps = %s" % n
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Bracketing Algorithms: Golden-Section Search (linear) - Interpolation Algorithms: Repeated parabolic interpolation - Hybrid Algorithms Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[a,b]$ that is convex over an interval $x \in [a,b]$ reduce the interval size until it brackets the minimum.Note that we no longer have the sign change at $f(x) = 0$ to help us, so bracketing and doing bisection is a bit more tricky in this case. In particular choosing your initial bracket is important! Golden Section Search - Picking IntervalsWe also want to choose the search points $c$ and $d$ carefully, in particular the distance between $a$ and $d$, say $\Delta_{ad}$, and between $b$ and $c$, say $\Delta_{bc}$. For Golden Section Search we require that these are equal. This tells us where to put $d$ but not $c$. The Golden Section Search also requires that $c$ should be chosen so that the spacing between the points has the same proportion as $(a, c, d)$ and $(c, d, b)$. Ok, that's weird. Also, why are we calling this thing "Golden"? Mathematically:If $f(d) > f(c)$ then $$\frac{\Delta_{cd}}{\Delta_{ca}} = \frac{\Delta_{ca}}{\Delta_{bc}}$$If $f(d) < f(c)$ then$$\frac{\Delta_{cd}}{\Delta_{bc} - \Delta_{cd}} = \frac{\Delta_{ca}}{\Delta_{bc}}$$ Eliminating $\Delta_{cd}$ leads to the equation$$\left( \frac{\Delta_{cb}}{\Delta_{ca}} \right )^2 = \frac{\Delta_{cb}}{\Delta_{ca}} + 1$$Solving this leads to$$ \frac{\Delta_{cb}}{\Delta_{ca}} = \varphi$$where $\varphi$ is the golden ratio!$$\varphi = \frac{1 \pm \sqrt{5}}{2}$$ Algorithm1. Initialize bracket $[a,b]$ and compute $f_a = f(a)$ and $f_b = f(b)$, $\Delta x = b-a$1. Initialize points $c = b - \varphi * (b - a)$ and $d = a + \varphi * (b -a)$1. Loop 1. Evaluate $f_c$ and $f_d$ 1. If $f_c < f_d$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $\Delta_{cd} <$ `TOLERANCE`
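A quick numerical check of the defining relation for $\varphi$, and of how it connects to the `phi` value used in the implementation below (just an illustration):

```python
import numpy

phi = (1.0 + numpy.sqrt(5.0)) / 2.0
print(phi, phi**2 - (phi + 1.0))   # phi satisfies phi**2 = phi + 1 (up to round-off)

# The code below works with the reciprocal 1 / phi = (sqrt(5) - 1) / 2 ~ 0.618
print(1.0 / phi, (numpy.sqrt(5.0) - 1.0) / 2.0)
```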
###Code
# New Test Function!
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
plt.show()
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
TOLERANCE = 1e-4
MAX_STEPS = 100
a = 0.2
b = 0.5
c = b - phi * (b - a)
d = a + phi * (b - a)
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(a, f(a),'ko')
axes.plot(b, f(b),'ko')
fc = f(c)
fd = f(d)
if fc < fd:
b = d
d = c
c = b - phi * (b - a)
else:
a = c
c = d
d = a + phi * (b - a)
if numpy.abs(b - a) < TOLERANCE:
success = True
break
if success:
print "Success!"
print " t* = %s" % str((b + a) / 2.0)
print " f(t*) = %s" % f((b + a) / 2.0)
print " number of steps = %s" % n
else:
print "Reached maximum number of steps!"
plt.show()
###Output
_____no_output_____
###Markdown
Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit polynomial to function using three points, find its minimum, and guess new points based on that minimum AlgorithmGiven $f(x)$ and $[a,b]$1. Initialize $x = [a, b, (a+b)/2]$1. Loop 1. Evaluate function $f(x)$ 1. Use a polynomial fit to the function: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 1. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 ~~~~ \Rightarrow ~~~~ x = -p_1 / (2 p_0)$$ 1. Calculate new interval 1. Check tolerance
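One interpolation step can be sanity-checked on a function whose minimum is known exactly; this small illustration (the quadratic test function is an assumption, not the demo function below) fits three samples and recovers the minimum via $-p_1 / (2 p_0)$:

```python
import numpy

f = lambda t: (t - 1.5)**2          # minimum at t = 1.5
x = numpy.array([0.0, 1.0, 2.0])    # three sample points

p = numpy.polyfit(x, f(x), 2)       # p[0] t^2 + p[1] t + p[2]
print(-p[1] / (2.0 * p[0]))         # minimum of the fitted parabola
```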
###Code
MAX_STEPS = 100
TOLERANCE = 1e-4
a = 0.5
b = 0.2
x = numpy.array([a, b, (a + b) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print "Success!"
print " t* = %s" % x[2]
print " f(t*) = %s" % f(x[2])
print " number of steps = %s" % n
else:
print "Reached maximum number of steps!"
axes.set_ylim((-5, 0.0))
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationSciPy provides a wide range of optimization routines!
###Code
import scipy.optimize as optimize
optimize.golden(f, brack=(0.2, 0.25, 0.5))
###Output
_____no_output_____
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Root Finding and OptimizationOur goal in this section is to develop techniques to approximate the roots of a given function. At first glance the task of finding the roots of a function does not seem like a meaningful exercise. It is used in a wide variety of circumstances, though. When faced with the task of approximating the solution of a system of equations, a common approach is to first transform the system into a root finding problem.As an example, suppose that you are trying to find a solution to the equation$$ x^2 + x = 3x - 5.$$By subtracting $3x-5$, the expression can be rewritten in the form$$ x^2 - 2x + 5 = 0.$$Determining the roots of the function $g(x)=x^2-2x+5$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. First there may not be one single solution, and in this case there is no real-valued solution.The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on techniques to approximate the roots of a function.We want to approximate the value of $x$ that satisfies $f(x) = 0$. Example: Future Time AnnuityWhen can I retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$$A$ total value after $n$ years$P$ is payment amount per compounding period$m$ number of compounding periods per year$r$ annual interest rate$n$ number of years to retirement If I want to retire in 20 years what does the annual interest rate $r$ need to be?Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, n))
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("A (total value)")
axes.set_title("When can I retire?")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.05, 0.1))
axes.set_ylim((total_value(P, m, 0.05, n), total_value(P, m, 0.1, n)))
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r (interest rate)")
axes.set_ylabel("$g(r)$")
axes.set_title("When can I retire?")
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800, \quad g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850, \quad g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875, \quad g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:
```python
r_values = numpy.linspace(0.08, 0.09, 10)
for r in r_values:
    print("r = ", r, "g(r) =", g(P, m, r, n, goal))
    print("Difference = ", numpy.abs(g(P, m, r, n, goal) - r))
    r = g(P, m, r, n, goal)
```
###Code
r_values = numpy.linspace(0.08, 0.09, 11)
for r in r_values:
print("r = ", r, "g(r) =", g(P, m, r, n, goal))
print("Difference = ", numpy.abs(g(P, m, r, n, goal) - r))
r = g(P, m, r, n, goal)
###Output
_____no_output_____
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
x = 0.4
for steps in range(3):
print("x = ", x, "Residual = ", numpy.abs(numpy.exp(-x) - x))
x = numpy.exp(-x)
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps+1, fontsize="15")
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim([0, 1.5])
x = 0.5
for steps in range(3):
print("x = ", x, "Residual = ", numpy.abs(numpy.log(x) + x))
x = -numpy.log(x)
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps+1, fontsize="15")
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] \quad \forall \quad x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r')
axes.plot(x, x, 'b')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
plt.show()
r = numpy.linspace(0.06, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r")
axes.set_ylabel("$g(r)$")
axes.set_xlim([0.06, 0.1])
axes.set_ylim([g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot([0.08, 0.08], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot([0.09, 0.09], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.08, n, goal), '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.09, n, goal), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("x")
axes.set_ylabel("f(x)")
axes.set_ylim((0.0, 1.1))
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that $\exists ~ x^\ast$ s.t. $x^\ast = g(x^\ast)$ (i.e. $x^\ast$ is the fixed point), then define$$ x_k = x^\ast + e_k \quad \quad x_{k+1} = x^\ast + e_{k+1}$$and$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Taylor expand the function $g$ about $x^\ast$:$$ g(x) = g(x^\ast) + g'(x^\ast) (x - x^\ast) + \frac{g''(x^\ast)}{2!} (x - x^\ast)^2 + \mathcal{O}((x - x^\ast)^3)$$Evaluate this series at $x_k = x^\ast + e_k$ to find$$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + \mathcal{O}(e_k^3)$$therefore from our definition from before that $x^\ast + e_{k+1} = g(x^\ast + e_k)$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + \mathcal{O}(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited$g(x) = e^{-x}$ with $x^* \approx 0.56$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ $g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ $$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
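A small numerical illustration of these rates (the starting guess $0.55$ is arbitrary): iterating the convergent form settles near $x^\ast \approx 0.567$, while the divergent form wanders out of the interval entirely.

```python
import numpy

# g(x) = exp(-x) has |g'(x*)| ~ 0.57 < 1, g(x) = -ln(x) has |g'(x*)| ~ 1.79 > 1
for name, g in [("x = exp(-x)", lambda x: numpy.exp(-x)),
                ("x = -ln(x) ", lambda x: -numpy.log(x))]:
    x = 0.55
    for k in range(8):
        x = g(x)
    print(name, "after 8 iterations:", x)
```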
###Code
import sympy
r, P, m, A, n = sympy.symbols('r P m A n')
g = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g.diff(r)
r_star = 0.08985602484084668
print("g'(r) = ", g_prime)
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
r = numpy.linspace(-0.01, 0.1, 100)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$")
axes.set_ylabel("$g'(r)$")
plt.show()
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains exactly one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem)
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
# Algorithm parameters
TOLERANCE = 1e-4
MAX_STEPS = 1000
# Initialize loop
delta_x = b - a
c = a + delta_x / 2.0
f_a = f(a)
f_b = f(b)
f_c = f(c)
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Plot iteration
axes.plot(c, f_c,'kx')
axes.text(c, f_c, str(step + 1), fontsize="15")
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < TOLERANCE:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
c = a + delta_x / 2.0
f_c = f(c)
if step == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % c)
print(" f(x*) = %s" % f(c))
print(" number of steps = %s" % step)
###Output
_____no_output_____
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$.Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship between $\Delta x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point **Basic Idea**: Given $f(x)$ and $f'(x)$ use a linear approximation to $f(x)$ "locally" and use the x-intercept of the resulting line to predict where $x^*$ might be. Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$ 0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$ x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ An alternative method of derivation for Newton-Raphson (and more in line with our methods) uses Taylor series. Expand the function $f(x)$ in a Taylor series about the current Newton-Raphson iteration $x_k$:$$ f(x) = f(x_k) + f'(x_k) (x - x_k) + \frac{f''(x_k)}{2!} (x - x_k)^2 + \mathcal{O}((x-x_k)^3)$$Let $\Delta x_k$ be the update that produces the next iterate such that$$ x_{k+1} = x_k + \Delta x_k$$and evaluate our expression for $f(x)$ at $x_{k+1}$:$$ f(x_{k+1}) = f(x_k) + f'(x_k) \Delta x_k + \frac{f''(x_k)}{2!} \Delta x_k^2 + \mathcal{O}(\Delta x_k^3)$$ Now assume that $x_{k+1} = x^\ast$, if this is the case the above simplifies to$$ 0 = f(x_k) + f'(x_k) \Delta x_k + \frac{f''(x_k)}{2!} \Delta x_k^2 + \mathcal{O}(\Delta x_k^3)$$and dropping the higher order terms leads to$$ \Delta x_k = - \frac{f(x_k)}{f'(x_k)}$$assuming that $f \in \mathbb R$, leading to the update$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}.$$
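A few iterations of this update on a simple test problem (a sketch, unrelated to the annuity example in the next cell) show the characteristic behavior analyzed later in this section: the number of correct digits roughly doubles at every step.

```python
import numpy

# Newton updates for f(x) = x**2 - 2, whose positive root is sqrt(2)
f = lambda x: x**2 - 2.0
f_prime = lambda x: 2.0 * x

x_k = 1.0
for k in range(1, 6):
    x_k = x_k - f(x_k) / f_prime(x_k)
    print("k = %d, x_k = %.16f, |error| = %.2e"
          % (k, x_k, numpy.abs(x_k - numpy.sqrt(2.0))))
```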
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What does the algorithm look like for Newton-Raphson? Algorithm1. Initialize $x_k$1. Begin loop 1. Compute $f(x_k)$ and $f'(x_k)$ 1. Use these to compute the new $x_{k+1}$ 1. Check stopping criteria
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 200
TOLERANCE = 1e-4
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.plot(x_k, f(x_k),'kx')
axes.text(x_k, f(x_k), str(n), fontsize="15")
x_k = x_k - f(x_k) / f_prime(x_k)
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % x_k)
print(" f(x*) = %s" % f(x_k))
print(" number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Newton-Raphson Steps")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Example:$$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ Asymptotic Convergence of Newton's MethodFor a simple root (non-multiplicative) - Let $g(x) = x - \frac{f(x)}{f'(x)}$, then$$x_{k+1} = g(x_k)$$ Definitions of errors and iteration:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$General Taylor expansion:$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \mathcal{O}(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. For a multiple root (e.g. $f(x) = (x-1)^2$) the case is not particularly rosy unfortunately. Why might this be? Example:$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
plt.show()
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
x_kp = lambda x: x - 1.0 / (2.0 * numpy.pi) * numpy.tan(2.0 * numpy.pi * x)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of AttractionGiven a point $x_0$ can we determine if Newton-Raphson converges?A *basin of attraction* $X$ for Newton's methods is defined as the set such that $\forall x \in X$ Newton iterations converges. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are fractal. Plotted below are two fairly simple equations which demonstrate the problem:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contour(X, Y, R)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contour(X, Y, R)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence and not quite quadratic as the exponent on the convergence is $\approx 1.7$. Alternative interpretation, fit a line through two points and see where they intersect the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1})$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axies ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, given bracket $[a,b]$, a `TOLERANCE`, and a `MAX_STEPS` (note we need two points to start).1. Initialize $x_1 = a$, $x_2 = b$, $f_1 = f(x_1)$, and $f_2 = f(x_2)$2. Loop until either `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $x_{k+1}$ by update formula 2. Check for convergence and break if reached 3. Update parameters $x_1$, $x_2$, $f_1 = f(x_1)$ and $f_2(x_2)$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Algorithm parameters
MAX_STEPS = 50
TOLERANCE = 1e-4
# Initial bracket
x_k = 0.07
x_km = 0.06
# Setup figure to plot convergence
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n in range(1, MAX_STEPS + 1):
axes.plot(x_k, f(x_k), 'o')
axes.text(x_k + 0.0025, f(x_k), n, fontsize="15")
x_kp = x_k - f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km))
x_km = x_k
x_k = x_kp
print("Residual = ", numpy.abs(f(x_k)))
if numpy.abs(f(x_k)) < TOLERANCE:
break
if n == MAX_STEPS:
print("Reached maximum number of steps!")
else:
print("Success!")
print(" x* = %s" % x_k)
print(" f(x*) = %s" % f(x_k))
print(" number of steps = %s" % n)
axes.set_xlabel("r (%)")
axes.set_ylabel("f(r)")
axes.set_title("Secant Method")
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain the bracket1. Efficiency: Use superlinearly convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ 1. Use a polynomial fit to the function: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 1. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance
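As a rough sketch of the loop above (the function name `parabolic_minimize` and its defaults are invented for this illustration; the cell below carries out the same iteration with plotting):
```python
import numpy

def parabolic_minimize(f, x0, x1, tolerance=1e-4, max_steps=100):
    """Sketch of repeated parabolic interpolation for a minimum of f."""
    x = numpy.array([x0, x1, (x0 + x1) / 2.0])
    for step in range(1, max_steps + 1):
        # Fit p(x) = p_0 x^2 + p_1 x + p_2 through the three current points
        p = numpy.polyfit(x, f(x), 2)
        x_star = -p[1] / (2.0 * p[0])            # vertex of the fitted parabola
        x = numpy.array([x[1], x[2], x_star])    # shift in the new estimate
        if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tolerance:
            return x[2], step
    raise RuntimeError("Parabolic interpolation did not converge")
```
Applied to the SPAM population function defined in the next cell with the same starting points (0.5 and 0.2), this should land on the same minimum near $t \approx 0.3$.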
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.7) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extremum exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets.Define $$\begin{aligned} a &= x_1 - x_0, \\ b &= x_3 - x_1,\\ c &= x_2 - x_1 \quad \text{and} \\ d &= x_3 - x_2.\end{aligned}$$For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length. If we pick the left bracket $[x_0, x_2]$ then $$ a+c = b $$ and the right bracket $[x_1, x_3]$ $$ d + c = b. $$ - The distances between subsequent triplets is proportional.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (1.0 + numpy.sqrt(5.0)) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - 1.0 / phi * (x[3] - x[0])
x[2] = x[0] + 1.0 / phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[2])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = 1.0 / phi * (x[1] - x[0]) + x[0]
axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1] + 0.05, f(x_new[1]) + 0.1, "*", fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) / phi + x[2]
axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2] + 0.05, f(x_new[2]) + 0.1, "*", fontsize='15')
plt.show()
###Output
_____no_output_____
###Markdown
The first rule implies:$$\begin{aligned} a + c &= b \\ x_1 - x_0 + x_2 - x_1 &= x_3 - x_1 \\ x_2 - x_0 &= x_3 - x_1.\end{aligned}$$Assume that this allows us to pick $x_2$ (we need to figure out how to choose $x_1$). We then know$$ x_2 = x_3 - x_1 + x_0.$$ Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. Since we have two choices for our new interval we could write down many proportionality constraints; however, let us focus on the two defined by the distances $a$, $b$, and $c$.If $f(x_1) < f(x_2)$ then we choose $(x_0, x_1, x_2)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{a}$$If $f(x_1) > f(x_2)$ then we choose $(x_1, x_2, x_3)$ as our new triplet meaning$$ \frac{a}{b} = \frac{c}{b-c}$$ Using these relations we can solve for the ratio $b / a$ via the following. Take$$ \frac{a}{b} = \frac{c}{a} \quad \text{and} \quad \frac{a}{b} = \frac{c}{b-c}$$and eliminate $c$ to find$$\begin{aligned} c &= \frac{a^2}{b} \Rightarrow \\ \frac{a}{b} &= \frac{a^2}{b^2-a^2} \\ ab^2 - a^3 &= a^2 b \\ \frac{b^2}{a^2} - \frac{b}{a} - 1 &= 0\end{aligned}$$whose solution is$$ \frac{b}{a} = \frac{1 \pm \sqrt{5}}{2} = \varphi$$where $\varphi$ is the well known "golden ratio" (note that there are two values here, the most common definition of $\varphi$ uses the $+$ branch but in fact you can use either depending on the application). Back to the problem at hand, we now need to pick our new set of points. Note that we only need one new point as the other three are left-overs from the previous iteration. Let us concentrate on the case where the extremum is between $[x_0, x_2]$. Denote the new bracket values with $\hat{\quad}$ and identify$$ \hat{x_0} = x_0, \quad \hat{x_2} = x_1, \quad \text{and} \quad \hat{x_3} = x_2.$$In this case we need to find $\hat{x_1}$; to do so use the subsequent intervals $a$ and $\hat{a}$ and equate$$ \varphi \hat{a} = a \Rightarrow \varphi (\hat{x_1} - \hat{x_0}) = x_1 - x_0$$which in terms of the previous values can be solved for $\hat{x_1}$ to lead to$$ \hat{x_1} = \frac{x_1 - x_0}{\varphi} + x_0$$ In the alternative case we have the bracket $[x_1, x_3]$ and$$ \hat{x_0} = x_1, \quad \hat{x_1} = x_2, \quad \text{and} \quad \hat{x_3} = x_3$$where we now need to find $\hat{x_2}$. Instead of using $\hat{a}$ we can use $\hat{c}$ and the relationship$$ \varphi \hat{c} = c \Rightarrow \varphi (\hat{x_2} - \hat{x_1}) = x_2 - x_1$$which again can be manipulated to lead to the value of $\hat{x_2}$ as$$ \hat{x_2} = \frac{x_2 - x_1}{\varphi} + x_2.$$ Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \frac{1}{\varphi} \cdot (x_3 - x_0)$ and $x_2 = x_0 + \frac{1}{\varphi} \cdot (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE`
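A compact sketch of this algorithm (the name `golden_section` and its defaults are invented here; the cell below carries out the same iteration with plotting):
```python
import numpy

def golden_section(f, x_0, x_3, tolerance=1e-4, max_steps=100):
    """Sketch of golden section search for a minimum of f on [x_0, x_3]."""
    phi = (1.0 + numpy.sqrt(5.0)) / 2.0
    x_1 = x_3 - (x_3 - x_0) / phi
    x_2 = x_0 + (x_3 - x_0) / phi
    for step in range(1, max_steps + 1):
        if f(x_1) < f(x_2):
            # Keep the left bracket [x_0, x_2]; only the new x_1 must be computed
            x_0, x_1, x_2, x_3 = x_0, (x_1 - x_0) / phi + x_0, x_1, x_2
        else:
            # Keep the right bracket [x_1, x_3]; only the new x_2 must be computed
            x_0, x_1, x_2, x_3 = x_1, x_2, (x_2 - x_1) / phi + x_2, x_3
        if numpy.abs(x_3 - x_0) < tolerance:
            return (x_0 + x_3) / 2.0, step
    raise RuntimeError("Golden section search did not converge")

# Example usage: a parabola with its minimum at x = 0.25
print(golden_section(lambda x: (x - 0.25)**2, 0.0, 1.0))
```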
###Code
# New Test Function!
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.set_xlim((0.0, 2.0))
plt.show()
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
phi = (1.0 + numpy.sqrt(5.0)) / 2.0
# Algorithm parameters
TOLERANCE = 1e-4
MAX_STEPS = 100
# Initialize
x = [0.2, None, None, 0.5]
x[1] = x[3] - 1.0 / phi * (x[3] - x[0])
x[2] = x[0] + 1.0 / phi * (x[3] - x[0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[0], f(x[0]),'ko')
axes.plot(x[3], f(x[3]),'ko')
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
x_new[1] = 1.0 / phi * (x[1] - x[0]) + x[0]
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) / phi + x[2]
x = x_new
if numpy.abs(x[3] - x[0]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % str((x[3] + x[0]) / 2.0))
print(" f(t*) = %s" % f((x[3] + x[0]) / 2.0))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains a lot of ways for optimization!
###Code
import scipy.optimize as optimize
print(optimize.golden(f, brack=(0.2, 0.25, 0.5)))
###Output
_____no_output_____
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
###Output
_____no_output_____
###Markdown
Root Finding and OptimizationOur goal in this section is to develop techniques to approximate the roots of a given function $f(x)$. That is, find solutions $x$ such that $f(x)=0$. At first glance this may not seem like a meaningful exercise, however, this problem arises in a wide variety of circumstances. For example, suppose that you are trying to find a solution to the equation$$ x^2 + x = \sin{x}.$$Simply rearranging, the expression can be rewritten in the form$$ f(x) = x^2 + x -\sin{x} = 0.$$Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all. The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable. Example: Future Time AnnuityCan I ever retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$* $A$ total value after $n$ years* $P$ is payment amount per compounding period* $m$ number of compounding periods per year* $r$ annual interest rate* $n$ number of years to retirement Question:For a fixed monthly payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M? Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
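To get a feel for the numbers before doing anything clever, we can simply plug a guessed rate into the formula (the choice $r = 9\%$ here is arbitrary, just for illustration):
```python
P, m, r, n = 1500.0, 12, 0.09, 20.0
A = P / (r / m) * ((1.0 + r / m)**(m * n) - 1.0)
print(A)   # roughly 1.0e6, so a rate near 9% is in the right ballpark
```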
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years')
axes.plot(r, total_value(P, m, r, 15),label='15 years')
axes.plot(r, total_value(P, m, r, n),label='20 years')
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Plot these$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend()
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800, \quad g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850, \quad g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875, \quad g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:
###Code
r_values = numpy.linspace(0.08, 0.1, 11)
g_values = g(P,m,r_values,n,goal)
residual = numpy.abs(g_values - r_values)
print(' r\t\t g(r)\t\tresidual')
print('------------------------------------------------')
for i,r in enumerate(r_values):
print('{:8.3f}\t{:10.8f}\t{:10.8f}\t'.format(r,g_values[i],residual[i]))
###Output
r g(r) residual
------------------------------------------------
0.080 0.07068245 0.00931755
0.082 0.07427690 0.00772310
0.084 0.07801640 0.00598360
0.086 0.08190680 0.00409320
0.088 0.08595414 0.00204586
0.090 0.09016473 0.00016473
0.092 0.09454513 0.00254513
0.094 0.09910215 0.00510215
0.096 0.10384290 0.00784290
0.098 0.10877473 0.01077473
0.100 0.11390533 0.01390533
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Consider the iterative scheme: set $x_0$ then compute$$ x_i = g(x_{i-1})\quad \mathrm{for}\quad i=1,2,3\ldots$$ or in code
```python
x = x0
for i in range(N):
    x = g(x)
```
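A slightly fuller sketch with a stopping test (the name `fixed_point_iteration` and the iterate-difference stopping rule are choices made for this illustration; the next cell traces the same iteration graphically):
```python
import numpy

def fixed_point_iteration(g, x0, tolerance=1e-8, max_steps=100):
    """Sketch: iterate x = g(x) until successive iterates stop changing."""
    x = x0
    for step in range(1, max_steps + 1):
        x_new = g(x)
        if numpy.abs(x_new - x) < tolerance:
            return x_new, step
        x = x_new
    raise RuntimeError("Fixed point iteration did not converge")

# Example usage for x = exp(-x)
print(fixed_point_iteration(lambda x: numpy.exp(-x), 0.4))
```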
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
###Output
x exp(-x) residual
0.4000000 0.6703200 0.2703200
0.6703200 0.5115448 0.1587752
0.5115448 0.5995686 0.0880238
0.5995686 0.5490484 0.0505202
0.5490484 0.5774991 0.0284507
0.5774991 0.5613004 0.0161987
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$f(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best')
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
###Output
x -log(x) residual
0.5500000 0.5978370 0.0478370
0.5978370 0.5144371 0.0833999
0.5144371 0.6646819 0.1502448
0.6646819 0.4084467 0.2562352
0.4084467 0.8953939 0.4869472
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] \quad \forall \quad x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
r = numpy.linspace(0.06, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r")
axes.set_ylabel("$g(r)$")
axes.set_xlim([0.06, 0.1])
axes.set_ylim([g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot([0.08, 0.08], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot([0.095, 0.095], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.08, n, goal), '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.095, n, goal), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
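The derivative bound is easy to check numerically on a candidate interval (a sketch; sampling the derivative is a quick heuristic, not a proof):
```python
import numpy

g_prime = lambda x: -numpy.exp(-x)          # g(x) = exp(-x)
x = numpy.linspace(0.4, 0.8, 1000)
K = numpy.abs(g_prime(x)).max()
print(K, K < 1)   # K = exp(-0.4) ~ 0.67 < 1 on [0.4, 0.8]
```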
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$g\,'(x)$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that a fixed point $x^\ast$ exists, such that $$x^\ast = g(x^\ast)$$ Then define $$ x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k$$ substituting$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Evaluate $$ g(x^\ast + e_k)$$ Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$ $$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence. Also note that $K$ is related to $|g'(x^*)|$. **Note**: if $g(x)$ is Lipschitz, then the error $K$ is bounded by the same Lipschitz constant. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited* Example 1:$$g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56$$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ * Example 2: $$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ * Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
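Before working through Example 3 symbolically in the next cell, here is a quick empirical check of the linear rate claimed for Example 1 (a sketch; the final iterate is used as a stand-in for $x^*$):
```python
import numpy

g_ex = lambda x: numpy.exp(-x)
x, iterates = 0.4, [0.4]
for k in range(20):
    x = g_ex(x)
    iterates.append(x)
iterates = numpy.array(iterates)

e = numpy.abs(iterates[:-1] - iterates[-1])   # errors relative to the last iterate
print(e[1:6] / e[:5])                         # ratios hover around |g'(x*)| ~ 0.56
```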
###Code
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g_sym.diff(r)
r_star = 0.08985602484084668
print("g(r) = ", g_sym)
print("g'(r) = ", g_prime)
print()
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
###Output
g(r) = P*m*((1 + r/m)**(m*n) - 1)/A
g'(r) = P*m*n*(1 + r/m)**(m*n)/(A*(1 + r/m))
g'(r*) = 2.14108802539073
g(r*) - r* = 7.00606239689705E-12
###Markdown
* Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x) = g(x) - x$, i.e. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains exactly one zero or minimum/maximum of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem) **Example**: The retirement problem again. For fixed $A, P, m, n$$$ f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
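The bracket condition is just a sign comparison; a one-line helper (the name `is_bracket` is invented for this illustration) makes it explicit:
```python
import numpy

def is_bracket(f, a, b):
    """True if [a, b] brackets a sign change of f."""
    return numpy.sign(f(a)) != numpy.sign(f(b))
```
For the retirement function $f(r)$ with the bracket $[0.075, 0.095]$ used in the next cell, this check should return `True`.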
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went basic code
```python
def bisection(f, a, b, tol):
    delta_x = b - a
    c = a + delta_x / 2.0
    f_a = f(a)
    f_b = f(b)
    f_c = f(c)
    for step in range(1, MAX_STEPS + 1):
        if numpy.abs(f_c) < tol:
            break
        if numpy.sign(f_a) != numpy.sign(f_c):
            b = c
            f_b = f_c
        else:
            a = c
            f_a = f_c
        delta_x = b - a
        c = a + delta_x / 2.0
        f_c = f(c)
    return c
```
###Code
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
Note: the signs of f(a) and f(b) must be different to insure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
delta_x = b - a
c = a + delta_x / 2.0
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
c = a + delta_x / 2.0
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why?
###Code
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
# this might be useful
print(numpy.diff(r_array))
###Output
[ 7.50000000e-03 -3.75000000e-03 1.87500000e-03 -9.37500000e-04
4.68750000e-04 -2.34375000e-04 -1.17187500e-04 5.85937500e-05
-2.92968750e-05 1.46484375e-05 7.32421875e-06 3.66210937e-06
-1.83105469e-06 -9.15527344e-07 -4.57763672e-07 -2.28881836e-07
-1.14440918e-07 -5.72204590e-08 2.86102295e-08 -1.43051147e-08
-7.15255737e-09 3.57627869e-09 -1.78813934e-09 8.94069666e-10
4.47034840e-10 2.23517413e-10 1.11758713e-10 -5.58793567e-11
2.79396783e-11 -1.39698392e-11 6.98492653e-12 3.49245632e-12
-1.74622816e-12 -8.73121020e-13 -4.36553571e-13 2.18283724e-13
1.09134923e-13 5.45674617e-14 2.72837308e-14]
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$.Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship between $\Delta x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point - Can we do "better"? Newton's method, *when well behaved*, can achieve quadratic convergence. **Basic Ideas**: There are multiple interpretations we can use to derive Newton's method* Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$ * A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be.* As a special case of a fixed-point iteration Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$ 0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$ x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely that $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at$$x_{k+1} = x_k + \delta_k$$we have $$ f(x_{k+1}) = 0 $$ Expanding in a Taylor series around the point $x_k$ $$ f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)$$ substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives$$ f(x_k) + f'(x_k) \delta_k =0$$ or solving for the correction$$ \delta_k = -f(x_k)/f'(x_k)$$ which leads to the update for the next iteration$$ x_{k+1} = x_k + \delta_k $$or$$ x_{k+1} = x_k -f(x_k)/f'(x_k)$$rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced) Algorithm1. Initialize $x = x_0$1. While ( $|f(x)| > tol$ ) - solve $\delta = -f(x)/f'(x)$ - update $x \leftarrow x + \delta$ Geometric interpretationBy truncating the Taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $(x_k, f(x_k))$. If the function were linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
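The algorithm above is only a few lines of code. As a bare-bones sketch (a more complete `newton` routine that also records the iteration history appears a few cells below):
```python
import numpy

def newton_sketch(f, f_prime, x0, tol=1e-8, max_steps=100):
    """Bare-bones Newton iteration: repeatedly apply x <- x - f(x)/f'(x)."""
    x = x0
    for step in range(max_steps):
        if numpy.abs(f(x)) < tol:
            return x
        x = x - f(x) / f_prime(x)
    raise RuntimeError("Newton iteration did not converge")
```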
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some code
###Code
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why? Example: $$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ setup in sympy
###Code
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
###Output
_____no_output_____
###Markdown
and solve
###Code
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
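###Markdown
Before deriving the convergence rate, a quick empirical check (a sketch reusing `x_array` from the run above and treating the final iterate as $x^*$): for quadratic convergence the ratios $|e_{k+1}| / |e_k|^2$ should settle near a constant.
```python
e = numpy.abs(x_array[:-1] - x_array[-1])   # errors, using the last iterate as x*
print(e[1:] / e[:-1]**2)                    # roughly constant for quadratic convergence
```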
###Markdown
Asymptotic Convergence of Newton's MethodNewton's method can also be considered a fixed point iteration$$x_{k+1} = g(x_k)$$with $g(x) = x - \frac{f(x)}{f'(x)}$ Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$ Taylor Expansion around $x^*$$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$ which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. Example: Convergence for a non-simple rootConsider our first problem$$ f(x) = x^2 + x - \sin(x)$$Here the situation is, unfortunately, not as rosy. Why might this be? Setup the problem
###Code
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 +x - sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Convergence appears linear, can you show this?$$f(x) = x^2 + x -\sin (x)$$ Example: behavior of Newton with multiple roots$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x_k)}{2 \pi \cos (2 \pi x_k)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x_k)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of AttractionGiven a point $x_0$, can we determine if Newton-Raphson converges and, if so, **which root** it converges to?A *basin of attraction* $X$ for Newton's method is defined as a set such that $\forall x \in X$ the Newton iteration converges to the same root. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are complicated or even fractal.
###Code
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
###Output
_____no_output_____
###Markdown
Fractal Basins of AttractionIf $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractalPlotted below are two fairly simple equations which demonstrate the issue:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
###Output
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in sin
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in multiply
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in cos
after removing the cwd from sys.path.
/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in multiply
after removing the cwd from sys.path.
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? How would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear convergence, not quite quadratic, as the exponent on the convergence is $\approx 1.6$ (in fact the golden ratio $\varphi \approx 1.618$). Alternative interpretation: fit a line through two points and see where it intersects the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$, which is where the line intersects the x-axis ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Secant Method$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, a `TOLERANCE`, and a `MAX_STEPS` 1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$2. Loop for $k = 2, 3, \ldots$ until either `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate the new iterate $$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$ 2. Check for convergence and break if reached 3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$ Some Code
###Code
def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package
###Code
from scipy.optimize import brentq
a = 0.07
b = 0.1
x, res = brentq(f, a, b, full_output=True)
print('x = {}, f(x) = {}'.format(x, f(x)))
print(res)
#brentq?
###Output
x = 0.08985602483470466, f(x) = 2.1886080503463745e-08
converged: True
flag: 'converged'
function_calls: 8
iterations: 7
root: 0.08985602483470466
###Markdown
Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit polynomial to function using three points, find its minima, and guess new points based on that minima 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ at the three points 1. Find the quadratic polynomial that interpolates those points: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 3. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.5 + 0.2) / 2.0])   # [x_0, x_1, (x_0 + x_1)/2]
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
Success!
t* = 0.29588830731129795
f(t*) = -4.604285452397018
number of steps = 6
###Markdown
Some Code
###Code
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
this routine uses the numpy function polyfit to construct the interpolating quadratic
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
###Output
_____no_output_____
###Markdown
set up problem
###Code
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extrema exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
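As a concrete sketch of this rule (an illustrative helper only, not part of the routines used later):
```python
def update_bracket(f, x0, x1, x2, x3):
    """Return the new outer bracket implied by the two rules above."""
    if f(x1) < f(x2):
        return x0, x2   # the minimum must lie in [x0, x2]
    else:
        return x1, x3   # the minimum must lie in [x1, x3]
```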
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets. Define: the length of the line segments in the interval\begin{aligned} a &= x_1 - x_0, \\ b &= x_2 - x_1,\\ c &= x_3 - x_2 \\\end{aligned}and the total bracket length\begin{aligned} d &= x_3 - x_0. \\\end{aligned}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length. i.e $[x_0, x_2] = [x_1, x_3]$ or $$ a + b = b + c $$ or simply $a = c$
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
- The ratio of segment lengths is the same for every level of recursion so the problem is self-similar i.e. $$ \frac{b}{a} = \frac{c}{a + b} $$ These two requirements will allow maximum reuse of previous points and require adding only one new point $x^*$ at each iteration.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
As the first rule implies that $a = c$, we can substitute into the second rule to yield$$ \frac{b}{a} = \frac{a}{a + b}$$ or inverting and rearranging $$ \frac{a}{b} = 1 + \frac{b}{a}$$ if we let the ratio $b/a = x$, then $$ x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0$$ which has a single positive root $$ x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949$$where $\varphi$ is related to the "golden ratio" (which in most definitions is given by $1+\varphi$, but either works as $ 1+\varphi = 1/\varphi $ ) Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$ Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that$$ d = a + b + c = (2 + \varphi)a $$or $$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$by the rather special properties of $\varphi$. We could use this result immediately to find \begin{align} x_1 &= x_0 + a \\ x_2 &= x_3 - a \\\end{align} Equivalently, you can show that $$a + b = (1 + \varphi)a = \varphi d$$so\begin{align} x_1 &= x_3 - \varphi d \\ x_2 &= x_0 + \varphi d \\\end{align}
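A quick numerical check of these relations (a minimal sketch; the bracket $[-1, 1]$ is an arbitrary choice):
```python
import numpy

phi = (numpy.sqrt(5.0) - 1.0) / 2.0
x0, x3 = -1.0, 1.0
d = x3 - x0
x1 = x3 - phi * d
x2 = x0 + phi * d
a, b, c = x1 - x0, x2 - x1, x3 - x2
print(a, c, a + b + c)        # a == c and a + b + c == d
print(b / a, a / (a + b))     # both ratios equal phi
print(1.0 + phi, 1.0 / phi)   # 1 + phi == 1 / phi
```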
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE` 1. calculate the appropriate new point $x^*$ ($x_1$ on left, $x_2$ on right)
###Code
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
this routine shrinks the bracket by golden-ratio subdivision; no derivatives or interpolation are required
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tolerance: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains a lot of ways for optimization!
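For example, `scipy.optimize.minimize_scalar` wraps several of these algorithms behind a single interface. A minimal sketch, reusing the SPAM function `f` and the same bracket as the cell below (Brent's method is just one possible choice):
```python
from scipy.optimize import minimize_scalar

result = minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='brent')
print(result.x, result.fun)
```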
###Code
import scipy.optimize as optimize
print(optimize.golden(f, brack=(0.2, 0.25, 0.5)))
###Output
0.29588830308853215
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
###Output
_____no_output_____
###Markdown
Root Finding and OptimizationOur goal in this section is to develop techniques to approximate the roots of a given function $f(x)$. That is, find solutions $x$ such that $f(x)=0$. At first glance this may not seem like a meaningful exercise, however, this problem arises in a wide variety of circumstances. For example, suppose that you are trying to find a solution to the equation$$ x^2 + x = \alpha\sin{x}$$where $\alpha$ is a real parameter. Simply rearranging, the expression can be rewritten in the form$$ f(x) = x^2 + x -\alpha\sin{x} = 0.$$Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all. The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable. Objectives* Understand the general rootfinding problem as $f(x)=0$* Understand the equivalent formulation as a fixed point problem $x = g(x)$* Understand fixed point iteration and its stability analysis* Understand definitions of convergence and order of convergence* Understand practical rootfinding algorithms and their convergence * Bisection * Newton's method * Secant method * Hybrid methods and scipy.optimize routines (root_scalar) * Understand basic Optimization routines * Parabolic Interpolation * Golden Section Search * scipy.optimize routines (minimize_scalar and minimize) Example: Future Time AnnuityCan I ever retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$* $A$ total value after $n$ years* $P$ is payment amount per compounding period* $m$ number of compounding periods per year* $r$ annual interest rate* $n$ number of years to retirement Question:For a fixed monthly payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M? Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years',linewidth=2)
axes.plot(r, total_value(P, m, r, 15),label='15 years',linewidth=2)
axes.plot(r, total_value(P, m, r, n),label='20 years',linewidth=2)
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Plot these$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend(fontsize=14)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, g(P, m, r, n, goal)-r,label='$r - g(r)$')
axes.plot(r, numpy.zeros(r.shape), 'r--',label='$0$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Question:A single root $r>0$ clearly exists around $r=0.088$. But how to find it?One option might be to take a guess say $r_0 = 0.088$ and form the iterative scheme$$\begin{align} r_1 &= g(r_0)\\ r_2 &= g(r_1)\\ &\vdots \\ r_{k} &= g(r_{k-1})\\\end{align}$$ and hope this converges as $k\rightarrow\infty$ (or faster) Easy enough to code
###Code
r = 0.088
K = 20
for k in range(K):
print(r)
r = g(P,m,r,n,goal)
###Output
_____no_output_____
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)=exp(-x)$')
axes.plot(x, x, label='$x$')
axes.set_xlabel("$x$",fontsize=16)
axes.legend()
plt.grid()
f = lambda x : x - numpy.exp(-x)
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),label='$f(x) = x - g(x)$')
axes.plot(x, numpy.zeros(x.shape), 'r--',label='$0$')
axes.set_xlabel("$x$",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
plt.show()
###Output
_____no_output_____
###Markdown
Again, consider the iterative scheme: set $x_0$ then compute$$ x_k = g(x_{k-1})\quad \mathrm{for}\quad k=1,2,3\ldots$$ or again in code
```python
x = x0
for i in range(N):
    x = g(x)
```
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(fontsize=14)
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best',fontsize=14)
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] ~~ \forall ~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
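A quick numerical check of this condition for $g(x) = e^{-x}$ on the interval $[0.4, 0.8]$ drawn in the next figure (a minimal sketch):
```python
import numpy

a, b = 0.4, 0.8
t = numpy.linspace(a, b, 1000)
g_range = numpy.exp(-t)
# the range of g stays inside [a, b], so a fixed point exists in [0.4, 0.8]
print(g_range.min() >= a, g_range.max() <= b)
```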
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
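A numerical version of the same check, which also shows why the $-\ln(x)$ reformulation fails it (a minimal sketch):
```python
import numpy

t = numpy.linspace(0.4, 0.8, 1000)
print(numpy.abs(-numpy.exp(-t)).max())   # ~0.67 < 1: g(x) = e^{-x} is a contraction here
print(numpy.abs(-1.0 / t).max())         # ~2.5  > 1: g(x) = -ln(x) is not, which is why it diverged
```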
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$g\,'(x)$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that a fixed point $x^\ast$ exists, such that $$x^\ast = g(x^\ast)$$ Then define $$ x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k$$ substituting$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Evaluate $$ g(x^\ast + e_k)$$ Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$ $$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited* Example 1:$$g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56$$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ * Example 2: $$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ * Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
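Before turning to the retirement problem below, a quick numerical confirmation of the Example 1 rate (a minimal sketch; the reference value for $x^*$ is just a heavily iterated approximation):
```python
import numpy

g1 = lambda x: numpy.exp(-x)     # the Example 1 map (named g1 to avoid clobbering g above)

x_star = 0.5
for _ in range(100):             # iterate enough to get a converged reference value
    x_star = g1(x_star)

x = 0.4
e_old = abs(x - x_star)
for k in range(8):
    x = g1(x)
    e_new = abs(x - x_star)
    print(e_new / e_old)         # ratios approach |g'(x*)| = e^{-x*} ~ 0.567
    e_old = e_new
```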
###Code
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g_sym.diff(r)
r_star = 0.08985602484084668
print("g(r) = ", g_sym)
print("g'(r) = ", g_prime)
print()
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
###Output
_____no_output_____
###Markdown
* Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains at least one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem) **Example**: The retirement problem again. For fixed $A, P, m, n$$$ f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went basic code```pythondef bisection(f,a,b,tol): c = (a + b)/2. f_a = f(a) f_b = f(b) f_c = f(c) for step in range(1, MAX_STEPS + 1): if numpy.abs(f_c) < tol: break if numpy.sign(f_a) != numpy.sign(f_c): b = c f_b = f_c else: a = c f_a = f_c c = (a + b)/ 2.0 f_c = f(c) return c``` Some real code
###Code
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
Note: the signs of f(a) and f(b) must be different to ensure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
c = (a + b)/2.
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
c = (a + b)/2.
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why?
###Code
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
# this might be useful
print(numpy.diff(r_array))
###Output
_____no_output_____
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$. Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship between $\Delta x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point - Can we do "better"? Newton's method, *when well behaved*, can achieve quadratic convergence. **Basic Ideas**: There are multiple interpretations we can use to derive Newton's method* Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$ * A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be.* As a special case of a fixed-point iteration Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at$$x_{k+1} = x_k + \delta_k$$and $$ f(x_{k+1}) = 0 $$ expanding in a Taylor series around point $x_k$ $$ f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)$$ substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives$$ f(x_k) + f'(x_k) \delta_k =0$$ or solving for the correction$$ \delta_k = -f(x_k)/f'(x_k)$$ which leads to the update for the next iteration$$ x_{k+1} = x_k + \delta_k $$or$$ x_{k+1} = x_k -f(x_k)/f'(x_k)$$rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced) Algorithm1. Initialize $x = x_0$1. While ( $|f(x)| >$ `tol` ) - solve $\delta = -f(x)/f'(x)$ - update $x \leftarrow x + \delta$ Geometric interpretationBy truncating the Taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $f(x_k)$. If the function were linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
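A bare-bones sketch of the algorithm above, in the same spirit as the basic bisection code earlier (assuming `f`, `f_prime`, an initial guess `x0`, and a tolerance `tol` are given; the geometric picture is plotted next):
```python
x = x0
while numpy.abs(f(x)) > tol:
    delta = -f(x) / f_prime(x)
    x = x + delta
```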
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
If we simply approximate the derivative $f'(x_k)$ with its finite difference approximation$$ f'(x_k) \approx \frac{0 - f(x_k)}{x_{k+1} - x_k}$$we can rearrange to find $x_{k+1}$ as$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$which is the classic Newton-Raphson iteration Some code
###Code
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why? Example: $$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ setup in sympy
###Code
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
###Output
_____no_output_____
###Markdown
and solve
###Code
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
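# A small added check of the quadratic rate: with e_k = |x_k - x*| the errors should
# satisfy e_{k+1} ~ C e_k^2 with roughly constant C (the final iterate stands in for x*).
errors = numpy.abs(x_array[:-1] - x_array[-1])
for e_k, e_kp1 in zip(errors[:-1], errors[1:]):
    print(e_kp1 / e_k**2)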
###Output
_____no_output_____
###Markdown
Asymptotic Convergence of Newton's MethodNewton's method can also be considered a fixed point iteration$$x_{k+1} = g(x_k)$$with $g(x) = x - \frac{f(x)}{f'(x)}$ Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$ Taylor Expansion around $x^*$$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$ which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. Example: Convergence for a non-simple rootConsider our first problem$$ f(x) = x^2 + x - \sin(x)$$the case is, unfortunately, not as rosy. Why might this be? Setup the problem
###Code
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 + x - \sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
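# Why only linear? A small added check: expand f about the root x = 0 using sympy
# (imported at the top of the notebook). The expansion begins x**2 + x**3/6 + ...,
# so x = 0 is a double root, f'(0) = 0, and Newton's method degrades to linear convergence.
x_sym = sympy.symbols('x')
print(sympy.series(x_sym**2 + x_sym - sympy.sin(x_sym), x_sym, 0, 4))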
###Output
_____no_output_____
###Markdown
Convergence appears linear, can you show this?:$$f(x) = x^2 + x -\sin (x)$$ Example: behavior of Newton with multiple roots$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x_k)}{2 \pi \cos (2 \pi x_k)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x_k)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of AttractionGiven a point $x_0$ can we determine if Newton-Raphson converges and to **which root** it converges to?A *basin of attraction* $X$ for Newton's methods is defined as the set such that $\forall x \in X$ Newton iterations converges to the same root. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are complicated or even fractal.
###Code
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
###Output
_____no_output_____
###Markdown
Fractal Basins of AttractionIf $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractalPlotted below are two fairly simple equations which demonstrate the issue:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
###Output
_____no_output_____
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$, represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear but not quite quadratic convergence: the exponent on the convergence is the golden ratio, $\approx 1.618$ (a numerical check is sketched at the end of this cell). Alternative interpretation: fit a line through the two points and see where it intersects the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$, which is where the line intersects the x-axis ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Secant Method$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
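A self-contained numerical estimate of that exponent for the simple test problem $f(x) = x - e^{-x}$ (a minimal sketch; the reference root is just a heavily converged Newton iterate, and the estimate is only meaningful while the errors stay well above machine precision):
```python
import numpy

f = lambda x: x - numpy.exp(-x)
f_prime = lambda x: 1.0 + numpy.exp(-x)

x_star = 0.5
for _ in range(10):                      # heavily converged reference root
    x_star = x_star - f(x_star) / f_prime(x_star)

x0, x1 = 0.0, 1.0
errors = [abs(x0 - x_star), abs(x1 - x_star)]
for _ in range(5):
    x0, x1 = x1, x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
    errors.append(abs(x1 - x_star))

# order estimate from e_{k+1} ~ C e_k^p: the values settle toward (1 + sqrt(5))/2 ~ 1.618
for k in range(2, len(errors) - 1):
    print(numpy.log(errors[k + 1] / errors[k]) / numpy.log(errors[k] / errors[k - 1]))
```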
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, a `TOLERANCE`, and a `MAX_STEPS` 1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$ 2. Loop for $k = 2, \ldots$ until `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate the new update $$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$ 2. Check for convergence and break if reached 3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$ Some Code
###Code
def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package **new** root_scalar
###Code
from scipy.optimize import root_scalar
#root_scalar?
###Output
_____no_output_____
###Markdown
Set the problem up (again)
###Code
def f(r,A,m,P,n):
return A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
def f_prime(r,A,m,P,n):
return (-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) +
P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2)
A = 1.e6
m = 12
P = 1500.
n = 20.
###Output
_____no_output_____
###Markdown
Try Brent's method
###Code
a = 0.07
b = 0.1
sol = root_scalar(f,args=(A,m,P,n), bracket=(a, b), method='brentq')
print(sol)
###Output
_____no_output_____
###Markdown
Try Newton's method
###Code
sol = root_scalar(f,args=(A,m,P,n), x0=.07, fprime=f_prime, method='newton')
print(sol)
# Try something else
###Output
_____no_output_____
###Markdown
Optimization (finding extrema): I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$. A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Interpolation Approach: Successive parabolic interpolation, similar to the secant method. Basic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$? 2. How do we construct the polynomial $p_2(x)$? 3. Once we have constructed $p_2(x)$ how would we find the minimum? Algorithm: Given $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum. 1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$ 2. Loop 1. Evaluate the function $f(x)$ at the three points 2. Find the quadratic polynomial that interpolates those points: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 3. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 4. Form the new set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 5. Check the tolerance Demo
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x0, x1 = 0.5, 0.2
x = numpy.array([x0, x1, (x0 + x1)/2.])
p = numpy.polyfit(x, f(x), 2)
parabola = lambda t: p[0]*t**2 + p[1]*t + p[2]
t_min = -p[1]/2./p[0]
MAX_STEPS = 100
TOLERANCE = 1e-4
t = numpy.linspace(0., 2., 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t), label='$f(t)$')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
axes.plot(x[2], f(x[2]), 'ko')
axes.plot(t, parabola(t), 'r--', label='parabola')
axes.plot(t_min, parabola(t_min), 'ro' )
axes.plot(t_min, f(t_min), 'k+')
axes.legend(loc='best')
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Rinse and repeat
###Code
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([x0, x1, (x0 + x1) / 2.0])
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some Code
###Code
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
###Output
_____no_output_____
###Markdown
set up problem
###Code
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search): Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$, reduce the interval size until it brackets the minimum (maximum). Note that we no longer have the $f(x) = 0$ condition to help us as before, so bracketing and doing bisection is a bit trickier in this case. In particular, choosing your initial bracket is important! Bracket Picking: Say we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extremum exists in it. We can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
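The four panels plotted below illustrate these two rules. As a minimal sketch (the helper name `choose_bracket` is illustrative, not from a library), the bracket update itself is just a comparison:
```python
def choose_bracket(f, x0, x1, x2, x3):
    """Return the new bracket implied by the rules above:
    keep [x0, x2] if f(x1) < f(x2), otherwise keep [x1, x3]."""
    if f(x1) < f(x2):
        return x0, x2   # minimum is trapped between x0 and x2
    else:
        return x1, x3   # minimum is trapped between x1 and x3

# Example with f(x) = x**2 and the first set of search points plotted below
print(choose_bracket(lambda x: x**2, -1.0, -0.5, 0.75, 1.0))   # -> (-1.0, 0.75)
```
The interesting question, taken up next, is where to place $x_1$ and $x_2$ so that one of them can be reused at the following iteration.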
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets. Define: the length of the line segments in the interval\begin{aligned} a &= x_1 - x_0, \\ b &= x_2 - x_1,\\ c &= x_3 - x_2 \\\end{aligned}and the total bracket length\begin{aligned} d &= x_3 - x_0. \\\end{aligned}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length. i.e $[x_0, x_2] = [x_1, x_3]$ or $$ a + b = b + c $$ or simply $a = c$
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
- The ratio of segment lengths is the same for every level of recursion so the problem is self-similar i.e. $$ \frac{b}{a} = \frac{c}{a + b} $$ These two requirements will allow maximum reuse of previous points and require adding only one new point $x^*$ at each iteration.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
As the first rule implies that $a = c$, we can substitute into the second rule to yield$$ \frac{b}{a} = \frac{a}{a + b}$$ or inverting and rearranging $$ \frac{a}{b} = 1 + \frac{b}{a}$$ if we let the ratio $b/a = x$, then $$ x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0$$ $$ x^2 + x - 1 = 0$$has a single positive root for $$ x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949$$where $\varphi$ is related to the "golden ratio" (which in most definitions is given by $1+\varphi$, but either work as $ 1+\varphi = 1/\varphi $ ) Subsequent proportionality implies that the distances between the 4 points at one iteration is proportional to the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$ Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that$$ d = a + b + c = (2 + \phi)a $$or $$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$by the rather special properties of $\varphi$. We could use this result immediately to find \begin{align} x_1 &= x_0 + a \\ x_2 &= x_3 - a \\\end{align} Equivalently, you can show that $$a + b = (1 + \varphi)a = \varphi d$$so\begin{align} x_1 &= x_3 - \varphi d \\ x_2 &= x_0 + \varphi d \\\end{align}
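A quick numerical check (a sketch) of these relations on the bracket $[x_0, x_3] = [-1, 1]$ used in the figures:
```python
import numpy

phi = (numpy.sqrt(5.0) - 1.0) / 2.0
x0, x3 = -1.0, 1.0
d = x3 - x0

x1 = x3 - phi * d
x2 = x0 + phi * d
a, b, c = x1 - x0, x2 - x1, x3 - x2

print("x1 = {:.6f}, x2 = {:.6f}".format(x1, x2))
print("a = c ?", numpy.isclose(a, c))                      # equal outer segments
print("b/a = {:.6f} (phi = {:.6f})".format(b / a, phi))    # self-similarity ratio
print("a + b = {:.6f}, phi*d = {:.6f}".format(a + b, phi * d))
```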
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
Algorithm: 1. Initialize the bracket $[x_0,x_3]$ 2. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$ 3. Loop 1. Evaluate $f_1$ and $f_2$ 2. If $f_1 < f_2$ then pick the left interval for the next iteration, otherwise pick the right interval 3. Check the size of the bracket for convergence, $x_3 - x_0 <$ `TOLERANCE` 4. Calculate the appropriate new point $x^*$ ($x_1$ on the left, $x_2$ on the right)
###Code
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tolerance: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
###Output
_____no_output_____
###Markdown
Scipy Optimization: Scipy contains a lot of routines for optimization. A convenient interface for minimization of functions of a single variable is `scipy.optimize.minimize_scalar`. For unconstrained or constrained optimization of functions of more than one variable, see `scipy.optimize.minimize`
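As a small illustration of the multivariable interface (a sketch; the Rosenbrock test function and the Nelder-Mead choice are not part of this notebook's examples, just a common smoke test):
```python
import numpy
from scipy.optimize import minimize

# Rosenbrock function: smooth, non-convex, with its minimum of 0 at (1, 1)
def rosenbrock(z):
    x, y = z
    return (1.0 - x)**2 + 100.0 * (y - x**2)**2

result = minimize(rosenbrock, x0=numpy.array([-1.2, 1.0]), method='Nelder-Mead')
print(result.x, result.fun)
```
The single-variable `minimize_scalar` interface used in the next cells follows the same pattern: pass the function, an optional bracket or bounds, and a method name.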
###Code
from scipy.optimize import minimize_scalar
#minimize_scalar?
###Output
_____no_output_____
###Markdown
Try some different methods
###Code
sol = minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='golden')
print(sol)
sol = minimize_scalar(f, method='brent')
print(sol)
sol = minimize_scalar(f, bounds=(0.,0.5), method='bounded')
print(sol)
###Output
_____no_output_____
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
###Output
_____no_output_____
###Markdown
Root Finding and Optimization: Our goal in this section is to develop techniques to approximate the roots of a given function $f(x)$. That is, find solutions $x$ such that $f(x)=0$. At first glance this may not seem like a meaningful exercise, however, this problem arises in a wide variety of circumstances. For example, suppose that you are trying to find a solution to the equation$$ x^2 + x = \alpha\sin{x},$$where $\alpha$ is a real parameter. Simply rearranging, the expression can be rewritten in the form$$ f(x) = x^2 + x -\alpha\sin{x} = 0.$$Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all. The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable. Objectives: * Understand the general rootfinding problem as $f(x)=0$ * Understand the equivalent formulation as a fixed point problem $x = g(x)$ * Understand fixed point iteration and its stability analysis * Understand definitions of convergence and order of convergence * Understand practical rootfinding algorithms and their convergence * Bisection * Newton's method * Secant method * Hybrid methods and scipy.optimize routines (root_scalar) * Understand basic Optimization routines * Parabolic Interpolation * Golden Section Search * scipy.optimize routines (minimize_scalar and minimize) Example: Future Time Annuity. Can I ever retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$ * $A$ total value after $n$ years * $P$ is payment amount per compounding period * $m$ number of compounding periods per year * $r$ annual interest rate * $n$ number of years to retirement Question: For a fixed monthly payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M? Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years',linewidth=2)
axes.plot(r, total_value(P, m, r, 15),label='15 years',linewidth=2)
axes.plot(r, total_value(P, m, r, n),label='20 years',linewidth=2)
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point Iteration: How do we go about solving this? Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Plot these:$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend(fontsize=14)
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, g(P, m, r, n, goal)-r,label='$r - g(r)$')
axes.plot(r, numpy.zeros(r.shape), 'r--',label='$0$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Question: A single root $r>0$ clearly exists around $r=0.088$. But how do we find it? One option might be to take a guess, say $r_0 = 0.088$, and form the iterative scheme$$\begin{align} r_1 &= g(r_0)\\ r_2 &= g(r_1)\\ &\vdots \\ r_{k} &= g(r_{k-1})\\\end{align}$$ and hope this converges as $k\rightarrow\infty$ (or faster). Easy enough to code
###Code
r = 0.088
K = 20
for k in range(K):
print(r)
r = g(P,m,r,n,goal)
###Output
_____no_output_____
###Markdown
Example 2: Let $f(x) = x - e^{-x}$ and solve $f(x) = 0$. Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)=exp(-x)$')
axes.plot(x, x, label='$x$')
axes.set_xlabel("$x$",fontsize=16)
axes.legend()
plt.grid()
f = lambda x : x - numpy.exp(-x)
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),label='$f(x) = x - g(x)$')
axes.plot(x, numpy.zeros(x.shape), 'r--',label='$0$')
axes.set_xlabel("$x$",fontsize=16)
axes.set_ylabel("residual",fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.legend(fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Again, consider the iterative scheme: set $x_0$ then compute$$ x_k = g(x_{k-1})\quad \mathrm{for}\quad k=1,2,3\ldots$$ or again in code
```python
x = x0
for i in range(N):
    x = g(x)
```
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(fontsize=14)
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Example 3: Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$. Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best',fontsize=14)
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point Iteration: Existence and uniqueness of fixed point problems. *Existence:* Assume $g \in C[a, b]$; if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] ~~ \forall ~~ x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:* Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$
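For $g(x) = e^{-x}$ on $[0.4, 0.8]$ (the interval used in the figures above and in the plot of $|g'(x)|$ below), here is a quick numerical check (a sketch) of both the mapping condition and the derivative bound:
```python
import numpy

g = lambda x: numpy.exp(-x)
g_prime = lambda x: -numpy.exp(-x)

a, b = 0.4, 0.8
x = numpy.linspace(a, b, 1000)

# Existence: does g map [a, b] into [a, b]?
print("range of g on [a, b]: [{:.4f}, {:.4f}]".format(g(x).min(), g(x).max()))
print("maps the interval into itself:", (g(x).min() >= a) and (g(x).max() <= b))

# Uniqueness: is |g'(x)| bounded by some K < 1 on (a, b)?
K = numpy.abs(g_prime(x)).max()
print("K = max|g'(x)| = {:.4f} < 1".format(K))
```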
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$|g\,'(x)|$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that a fixed point $x^\ast$ exists, such that $$x^\ast = g(x^\ast)$$ Then define $$ x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k$$ substituting$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Evaluate $$ g(x^\ast + e_k)$$ Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$ $$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ Note that because $x^* = g(x^*)$ these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited* Example 1:$$g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56$$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ * Example 2: $$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ * Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
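Before working through the retirement example symbolically in the next cell, here is a quick numerical check (a sketch) that the fixed-point iteration of Example 1, $g(x) = e^{-x}$, is linearly convergent with a rate close to $|g'(x^*)| \approx 0.56$, as the analysis above predicts:
```python
import numpy

g = lambda x: numpy.exp(-x)

# Run the iteration long enough to get a good reference value for x*
x_star = 0.5
for _ in range(200):
    x_star = g(x_star)

# The ratio of successive errors should settle near |g'(x*)| = e^{-x*} ~ 0.567
x = 0.4
e_old = abs(x - x_star)
for k in range(1, 8):
    x = g(x)
    e_new = abs(x - x_star)
    print("step {}: |e_k+1| / |e_k| = {:.4f}".format(k, e_new / e_old))
    e_old = e_new
```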
###Code
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_sym
g_prime = g_sym.diff(r)
g_prime
r_star = 0.08985602484084668
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
###Output
_____no_output_____
###Markdown
* Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimization: If $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x) = g(x) - x$, i.e. $f(x^*) = 0$. For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear convergence) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and Bisection: A **bracket** is an interval $[a,b]$ that contains at least one zero or minimum/maximum of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(The proof uses the intermediate value theorem.) **Example**: The retirement problem again. For fixed $A, P, m, n$$$ f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
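A quick numerical check (a sketch) that the points $a = 0.075$ and $b = 0.095$ marked in the figure below really do bracket a root of the retirement function:
```python
import numpy

P, m, n, A = 1500.0, 12, 20.0, 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)

a, b = 0.075, 0.095
print("f(a) = {:.3e}, f(b) = {:.3e}".format(f(a), f(b)))
print("valid bracket:", numpy.sign(f(a)) != numpy.sign(f(b)))
```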
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extremum remains within the bracket. What ways could we "shrink" the bracket so that the end points converge to the root/extremum? Bisection Algorithm: Given a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with the bracket 2. Iterate 1. Cut the bracket in half and check to see where the zero is 2. Set the bracket to the new bracket based on which direction we went. Basic code:
```python
def bisection(f, a, b, tol):
    c = (a + b) / 2.
    f_a = f(a)
    f_b = f(b)
    f_c = f(c)
    for step in range(1, MAX_STEPS + 1):
        if numpy.abs(f_c) < tol:
            break
        if numpy.sign(f_a) != numpy.sign(f_c):
            b = c
            f_b = f_c
        else:
            a = c
            f_a = f_c
        c = (a + b) / 2.0
        f_c = f(c)
    return c
```
Some real code
###Code
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
Note: the signs of f(a) and f(b) must be different to insure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
c = (a + b)/2.
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
c = (a + b)/2.
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why?
###Code
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
# this might be useful
print(numpy.diff(r_array))
###Output
_____no_output_____
###Markdown
Convergence of Bisection: Generally we have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$. Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship then between $x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so therefore the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point - Can we do "better"? Newton's method, *when well behaved* can achieve quadratic convergence. **Basic Ideas**: There are multiple interpretations we can use to derive Newton's method: * Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$ * A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be. * As a special case of a fixed-point iteration Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely that $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at$$x_{k+1} = x_k + \delta_k$$we have $$ f(x_{k+1}) = 0 $$ Expanding in a Taylor series around the point $x_k$ gives $$ f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)$$ Substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives$$ f(x_k) + f'(x_k) \delta_k =0$$ or solving for the correction$$ \delta_k = -f(x_k)/f'(x_k)$$ which leads to the update for the next iteration$$ x_{k+1} = x_k + \delta_k $$or$$ x_{k+1} = x_k -f(x_k)/f'(x_k)$$Rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced). Algorithm: 1. Initialize $x = x_0$ 2. While ( $|f(x)| >$ `tol` ) - solve $\delta = -f(x)/f'(x)$ - update $x \leftarrow x + \delta$ Geometric interpretation: By truncating the Taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $f(x_k)$. If the function were linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
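A quick aside before the Newton-Raphson picture plotted below: the halving argument above also predicts how many bisection steps a bracket-width stopping rule would need (a sketch, using a width-based tolerance rather than the residual tolerance used in the routine above):
```python
import numpy

a, b, tol = 0.07, 0.10, 1e-8
# Each bisection step halves the bracket, so the width after k steps is (b - a) / 2**k.
# Number of halvings needed to drive the *width* below tol:
k = int(numpy.ceil(numpy.log2((b - a) / tol)))
print("predicted number of bisection steps:", k)
print("final bracket width:", (b - a) / 2**k)
```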
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
If we simply approximate the derivative $f'(x_k)$ with its finite difference approximation$$ f'(x_k) \approx \frac{0 - f(x_k)}{x_{k+1} - x_k}$$we can rearrange to find $x_{k+1}$ as$$ x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)}$$which is the classic Newton-Raphson iteration Some code
###Code
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why? Example: $$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ setup in sympy
###Code
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
###Output
_____no_output_____
###Markdown
and solve
###Code
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Asymptotic Convergence of Newton's Method: Newton's method can also be considered a fixed point iteration$$x_{k+1} = g(x_k)$$with $g(x) = x - \frac{f(x)}{f'(x)}$ Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$ Taylor expansion around $x^*$: $$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$ which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent, where the constant is controlled by the second derivative. Example: Convergence for a non-simple root. Consider our first problem$$ f(x) = x^2 + x - \sin(x)$$Here the case is, unfortunately, not as rosy. Why might this be? We set the problem up below.
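First, as a point of comparison, here is a quick numerical illustration (a sketch) of quadratic convergence at the *simple* root of $f(x) = x - e^{-x}$: the residual is roughly squared every step, so the number of correct digits roughly doubles.
```python
import numpy

f = lambda x: x - numpy.exp(-x)
f_prime = lambda x: 1.0 + numpy.exp(-x)

x = 0.0
for k in range(5):
    x = x - f(x) / f_prime(x)
    print("step {}: |f(x)| = {:.3e}".format(k + 1, abs(f(x))))
```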
###Code
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 +x - sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Convergence appears linear. Can you show this for$$f(x) = x^2 + x -\sin (x)$$? Example: behavior of Newton with multiple roots. For $f(x) = \sin (2 \pi x)$ the iteration is$$x_{k+1} = x_k - \frac{\sin (2 \pi x_k)}{2 \pi \cos (2 \pi x_k)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x_k)$$
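Before the multiple-root example plotted below, here is a numerical hint for the exercise above (a sketch, not the analytic argument being asked for): tracking the ratio of successive errors for $f(x) = x^2 + x - \sin x$, whose root $x^* = 0$ is not simple, shows the ratio settling near a constant, which is the signature of linear convergence.
```python
import numpy

f = lambda x: x * x + x - numpy.sin(x)
f_prime = lambda x: 2.0 * x + 1.0 - numpy.cos(x)

x = 0.9          # same initial guess used above; the root is x* = 0
e_old = abs(x)
for k in range(1, 11):
    x = x - f(x) / f_prime(x)
    print("step {:2d}: |e_k+1| / |e_k| = {:.4f}".format(k, abs(x) / e_old))
    e_old = abs(x)
```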
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of Attraction: Given a point $x_0$, can we determine whether Newton-Raphson converges and, if so, to **which root** it converges? A *basin of attraction* $X$ for Newton's method is defined as a set such that $\forall x \in X$ the Newton iteration converges to the same root. Unfortunately this is far from a trivial thing to determine, and even for simple functions it can lead to regions that are complicated or even fractal.
###Code
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
###Output
_____no_output_____
###Markdown
Fractal Basins of Attraction: If $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractal. Plotted below are two fairly simple equations which demonstrate the issue: 1. $f(x) = x^3 - 1$ 2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
###Output
_____no_output_____
###Markdown
Other Issues: Need to supply both $f(x)$ and $f'(x)$, which could be expensive. Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$. Can use symbolic differentiation (`sympy`) Secant Methods: Is there a method with the convergence of Newton's method but without the extra derivatives? How would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$, represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear (but not quite quadratic) convergence, as the exponent on the convergence is the golden ratio $\frac{1 + \sqrt{5}}{2} \approx 1.6$. Alternative interpretation: fit a line through the two points$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$and see where it intersects the x-axis:$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$, which is where the line intersects the x-axis ($y=0$):$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Secant Method:$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
What would the algorithm look like for such a method? Algorithm: Given $f(x)$, a `TOLERANCE`, and a `MAX_STEPS` 1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$ 2. Loop for k = 2 until `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate the new update $$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$ 2. Check for convergence and break if reached 3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$ Some Code
###Code
def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package **new** root_scalar
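The NewtSafe idea above is straightforward to sketch by hand: keep a bracket with a sign change, take the Newton step when it lands inside the bracket, and fall back to bisection when it does not. The version below is only an illustration of that logic (not the Numerical Recipes routine itself); for the retirement problem it could be called as, e.g., `newt_safe(lambda r: f(r, A, m, P, n), lambda r: f_prime(r, A, m, P, n), 0.07, 0.1)` using the `f` and `f_prime` defined in the next cell.

```python
import numpy

def newt_safe(f, f_prime, a, b, tol=1.e-8, max_steps=100):
    """Illustrative Newton/bisection hybrid: maintain a bracket [a, b] with a
    sign change and only accept Newton steps that stay inside it."""
    f_a, f_b = f(a), f(b)
    if numpy.sign(f_a) == numpy.sign(f_b):
        raise ValueError("f(a) and f(b) must have different signs")
    x = 0.5 * (a + b)
    for k in range(max_steps):
        f_x = f(x)
        if numpy.abs(f_x) < tol:
            break
        # shrink the bracket using the sign of f at the current iterate
        if numpy.sign(f_a) != numpy.sign(f_x):
            b, f_b = x, f_x
        else:
            a, f_a = x, f_x
        # try a Newton step; reject it if it leaves the bracket
        x_newton = x - f_x / f_prime(x)
        if a < x_newton < b:
            x = x_newton
        else:
            x = 0.5 * (a + b)   # bisection fallback
    return x
```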
###Code
from scipy.optimize import root_scalar
#root_scalar?
###Output
_____no_output_____
###Markdown
Set the problem up (again)
###Code
def f(r,A,m,P,n):
return A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
def f_prime(r,A,m,P,n):
return (-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) +
P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2)
A = 1.e6
m = 12
P = 1500.
n = 20.
###Output
_____no_output_____
###Markdown
Try Brent's method
###Code
a = 0.07
b = 0.1
sol = root_scalar(f,args=(A,m,P,n), bracket=(a, b), method='brentq')
print(sol)
###Output
_____no_output_____
###Markdown
Try Newton's method
###Code
sol = root_scalar(f,args=(A,m,P,n), x0=.07, fprime=f_prime, method='newton')
print(sol)
# Try something else
###Output
_____no_output_____
###Markdown
Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Interpolation ApproachSuccessive parabolic interpolation - similar to secant methodBasic idea: Fit a polynomial to the function using three points, find its minimum, and guess new points based on that minimum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ at the three points 1. Find the quadratic polynomial that interpolates those points: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 3. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance Demo
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x0, x1 = 0.5, 0.2
x = numpy.array([x0, x1, (x0 + x1)/2.])
p = numpy.polyfit(x, f(x), 2)
parabola = lambda t: p[0]*t**2 + p[1]*t + p[2]
t_min = -p[1]/2./p[0]
MAX_STEPS = 100
TOLERANCE = 1e-4
t = numpy.linspace(0., 2., 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t), label='$f(t)$')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
axes.plot(x[2], f(x[2]), 'ko')
axes.plot(t, parabola(t), 'r--', label='parabola')
axes.plot(t_min, parabola(t_min), 'ro' )
axes.plot(t_min, f(t_min), 'k+')
axes.legend(loc='best')
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Rinse and repeat
###Code
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([x0, x1, (x0 + x1) / 2.0])
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some Code
###Code
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
    this routine uses the numpy function polyfit to fit the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
###Output
_____no_output_____
###Markdown
set up problem
###Code
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extrema exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separate $[x_0,x_3]$ into two new overlapping brackets. Define the lengths of the line segments in the interval\begin{aligned} a &= x_1 - x_0, \\ b &= x_2 - x_1,\\ c &= x_3 - x_2 \\\end{aligned}and the total bracket length\begin{aligned} d &= x_3 - x_0. \\\end{aligned}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length, i.e. the lengths of $[x_0, x_2]$ and $[x_1, x_3]$ are the same, or $$ a + b = b + c $$ or simply $a = c$
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
- The ratio of segment lengths is the same for every level of recursion so the problem is self-similar i.e. $$ \frac{b}{a} = \frac{c}{a + b} $$ These two requirements will allow maximum reuse of previous points and require adding only one new point $x^*$ at each iteration.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
As the first rule implies that $a = c$, we can substitute into the second rule to yield$$ \frac{b}{a} = \frac{a}{a + b}$$ or inverting and rearranging $$ \frac{a}{b} = 1 + \frac{b}{a}$$ if we let the ratio $b/a = x$, then $$ x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0$$ $$ x^2 + x - 1 = 0$$has a single positive root for $$ x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949$$where $\varphi$ is related to the "golden ratio" (which in most definitions is given by $1+\varphi$, but either works as $ 1+\varphi = 1/\varphi $ ) Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$ Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that$$ d = a + b + c = (2 + \varphi)a $$or $$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$by the rather special properties of $\varphi$. We could use this result immediately to find \begin{align} x_1 &= x_0 + a \\ x_2 &= x_3 - a \\\end{align} Equivalently, you can show that $$a + b = (1 + \varphi)a = \varphi d$$so\begin{align} x_1 &= x_3 - \varphi d \\ x_2 &= x_0 + \varphi d \\\end{align}
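A quick numerical sanity check of these relations (purely illustrative):

```python
import numpy

phi = (numpy.sqrt(5.0) - 1.0) / 2.0
print(phi**2 + phi - 1.0)      # ~0: phi solves x^2 + x - 1 = 0
print(1.0 + phi - 1.0 / phi)   # ~0: 1 + phi = 1 / phi

# the two interior points of any bracket keep the b/a = phi proportion
x0, x3 = -1.0, 1.0
d = x3 - x0
x1, x2 = x3 - phi * d, x0 + phi * d
a, b, c = x1 - x0, x2 - x1, x3 - x2
print(b / a, phi)              # both ~0.618
print(a, c)                    # equal segment lengths on the ends
```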
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE` 1. calculate the appropriate new point $x^*$ ($x_1$ on left, $x_2$ on right)
###Code
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
    this routine only requires evaluations of f(x); no derivatives or polynomial fits are needed
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tolerance: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains a number of routines for optimization. A convenient interface for minimization of functions of a single variable is `scipy.optimize.minimize_scalar`For optimization or constrained optimization of functions of more than one variable, see `scipy.optimize.minimize`
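As a pointer for the multivariate case mentioned above, a minimal sketch with `scipy.optimize.minimize` might look like the following; the quadratic test function and starting point are just placeholders.

```python
import numpy
from scipy.optimize import minimize

def g2(z):
    """Simple convex function of two variables with minimum at (1, 2)."""
    x, y = z
    return (x - 1.0)**2 + 10.0 * (y - 2.0)**2

result = minimize(g2, x0=numpy.array([0.0, 0.0]), method='Nelder-Mead')
print(result.x, result.fun)
```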
###Code
from scipy.optimize import minimize_scalar
#minimize_scalar?
###Output
_____no_output_____
###Markdown
Try some different methods
###Code
sol = minimize_scalar(f, bracket=(0.2, 0.25, 0.5), method='golden')
print(sol)
sol = minimize_scalar(f, method='brent')
print(sol)
sol = minimize_scalar(f, bounds=(0.,0.5), method='bounded')
print(sol)
###Output
_____no_output_____
###Markdown
Text provided under a Creative Commons Attribution license, CC-BY. All code is made available under the FSF-approved MIT license. (c) Kyle T. Mandli
###Code
from __future__ import print_function
%matplotlib inline
import numpy
import matplotlib.pyplot as plt
import warnings
import sympy
sympy.init_printing()
###Output
_____no_output_____
###Markdown
Root Finding and OptimizationOur goal in this section is to develop techniques to approximate the roots of a given function $f(x)$. That is find solutions $x$ such that $f(x)=0$. At first glance this may not seem like a meaningful exercise, however, this problem arises in a wide variety of circumstances. For example, suppose that you are trying to find a solution to the equation$$ x^2 + x = \sin{x}.$$Simply rearranging, the expression can be rewritten in the form$$ f(x) = x^2 + x -\sin{x} = 0.$$Determining the roots of the function $f(x)$ is now equivalent to determining the solution to the original expression. Unfortunately, a number of other issues arise. In particular, with non-linear equations, there may be multiple solutions, or no real solutions at all.
###Code
x = numpy.linspace(-100, 100, 201)
f = x**2 + x - numpy.sin(x)
plt.plot(x, f, 'bo')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
The task of approximating the roots of a function can be a deceptively difficult thing to do. For much of the treatment here we will ignore many details such as existence and uniqueness, but you should keep in mind that they are important considerations. **GOAL:** For this section we will focus on multiple techniques for efficiently and accurately solving the fundamental problem $f(x)=0$ for functions of a single variable. Example: Future Time AnnuityCan I ever retire?$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$* $A$ total value after $n$ years* $P$ is payment amount per compounding period* $m$ number of compounding periods per year* $r$ annual interest rate* $n$ number of years to retirement Question:For a fix monthly Payment $P$, what does the minimum interest rate $r$ need to be so I can retire in 20 years with \$1M. Set $P = \frac{\$18,000}{12} = \$1500, \quad m=12, \quad n=20$.$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] $$
###Code
def total_value(P, m, r, n):
"""Total value of portfolio given parameters
Based on following formula:
A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n}
- 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
:Returns:
(float) - total value of portfolio
"""
return P / (r / float(m)) * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.05, 0.15, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, total_value(P, m, r, 10),label='10 years')
axes.plot(r, total_value(P, m, r, 15),label='15 years')
axes.plot(r, total_value(P, m, r, n),label='20 years')
axes.plot(r, numpy.ones(r.shape) * goal, 'r--')
axes.set_xlabel("r (interest rate)", fontsize=16)
axes.set_ylabel("A (total value)", fontsize=16)
axes.set_title("When can I retire?",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((r.min(), r.max()))
axes.set_ylim((total_value(P, m, r.min(), 10), total_value(P, m, r.max(), n)))
axes.legend(loc='best')
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Fixed Point IterationHow do we go about solving this?Could try to solve at least partially for $r$:$$ A = \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ] ~~~~ \Rightarrow ~~~~~$$$$ r = g(r)$$or $$ g(r) - r = 0$$ Plot these$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
def g(P, m, r, n, A):
"""Reformulated minimization problem
Based on following formula:
g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]
:Input:
- *P* (float) - Payment amount per compounding period
- *m* (int) - number of compounding periods per year
- *r* (float) - annual interest rate
- *n* (float) - number of years to retirement
- *A* (float) - total value after $n$ years
:Returns:
(float) - value of g(r)
"""
return P * m / A * ( (1.0 + r / float(m))**(float(m) * n)
- 1.0)
P = 1500.0
m = 12
n = 20.0
r = numpy.linspace(0.00, 0.1, 100)
goal = 1e6
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=16)
axes.set_ylabel("$g(r)$",fontsize=16)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=18)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim((g(P, m, 0.00, n, goal), g(P, m, 0.1, n, goal)))
axes.legend()
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Guess at $r_0$ and check to see what direction we need to go...1. $r_0 = 0.0800, \quad g(r_0) - r_0 = -0.009317550125425428$1. $r_1 = 0.0850, \quad g(r_1) - r_1 = -0.00505763375972$1. $r_2 = 0.0875, \quad g(r_2) - r_2 = -0.00257275331014$ A bit tedious, we can also make this algorithmic:
###Code
r_values = numpy.linspace(0.08, 0.1, 11)
g_values = g(P,m,r_values,n,goal)
residual = numpy.abs(g_values - r_values)
print(' r\t\t g(r)\t\tresidual')
print('------------------------------------------------')
for i,r in enumerate(r_values):
print('{:8.3f}\t{:10.8f}\t{:10.8f}\t'.format(r,g_values[i],residual[i]))
###Output
r g(r) residual
------------------------------------------------
0.080 0.07068245 0.00931755
0.082 0.07427690 0.00772310
0.084 0.07801640 0.00598360
0.086 0.08190680 0.00409320
0.088 0.08595414 0.00204586
0.090 0.09016473 0.00016473
0.092 0.09454513 0.00254513
0.094 0.09910215 0.00510215
0.096 0.10384290 0.00784290
0.098 0.10877473 0.01077473
0.100 0.11390533 0.01390533
###Markdown
Example 2:Let $f(x) = x - e^{-x}$, solve $f(x) = 0$Equivalent to $x = e^{-x}$ or $x = g(x)$ where $g(x) = e^{-x}$
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Consider the iterative schemeset $x_0$ then compute$$ x_i = g(x_{i-1})\quad \mathrm{for}\quad i=1,2,3\ldots$$ or in code```pythonx = x0for i in range(N): x = g(x)```
###Code
x = numpy.linspace(0.2, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$f(x)=exp(-x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend()
x = 0.4
print('\tx\t exp(-x)\t residual')
for steps in range(6):
residual = numpy.abs(numpy.exp(-x) - x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, numpy.exp(-x), residual))
axes.plot(x, numpy.exp(-x),'kx')
axes.text(x+0.01, numpy.exp(-x)+0.01, steps, fontsize="15")
x = numpy.exp(-x)
plt.grid()
plt.show()
###Output
x exp(-x) residual
0.4000000 0.6703200 0.2703200
0.6703200 0.5115448 0.1587752
0.5115448 0.5995686 0.0880238
0.5995686 0.5490484 0.0505202
0.5490484 0.5774991 0.0284507
0.5774991 0.5613004 0.0161987
###Markdown
Example 3:Let $f(x) = \ln x + x$ and solve $f(x) = 0$ or $x = -\ln x$.Note that this problem is equivalent to $x = e^{-x}$.
###Code
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$f(x)=-\log(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_ylabel("f(x)",fontsize=16)
axes.set_ylim([0, 1.5])
axes.legend(loc='best')
x = 0.55
print('\tx\t -log(x)\t residual')
for steps in range(5):
residual = numpy.abs(numpy.log(x) + x)
print("{:12.7f}\t{:12.7f}\t{:12.7f}".format(x, -numpy.log(x), residual))
axes.plot(x, -numpy.log(x),'kx')
axes.text(x + 0.01, -numpy.log(x) + 0.01, steps, fontsize="15")
x = -numpy.log(x)
plt.grid()
plt.show()
###Output
x -log(x) residual
0.5500000 0.5978370 0.0478370
0.5978370 0.5144371 0.0833999
0.5144371 0.6646819 0.1502448
0.6646819 0.4084467 0.2562352
0.4084467 0.8953939 0.4869472
###Markdown
These are equivalent problems! Something is awry... Analysis of Fixed Point IterationExistence and uniqueness of fixed point problems*Existence:*Assume $g \in C[a, b]$, if the range of the mapping $y = g(x)$ satisfies $y \in [a, b] \quad \forall \quad x \in [a, b]$ then $g$ has a fixed point in $[a, b]$.
###Code
x = numpy.linspace(0.0, 1.0, 100)
# Plot function and intercept
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.exp(-x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = e^{-x}$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * numpy.exp(-0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
axes.set_xlim((0.0, 1.0))
axes.set_ylim((0.0, 1.0))
plt.show()
x = numpy.linspace(0.1, 1.0, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, -numpy.log(x), 'r',label='$g(x)$')
axes.plot(x, x, 'b',label='$x$')
axes.set_xlabel("x",fontsize=16)
axes.set_xlim([0.1, 1.0])
axes.set_ylim([0.1, 1.0])
axes.legend(loc='best',fontsize=14)
axes.set_title('$g(x) = -\ln(x)$',fontsize=24)
# Plot domain and range
axes.plot(numpy.ones(x.shape) * 0.4, x, '--k')
axes.plot(numpy.ones(x.shape) * 0.8, x, '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.4), '--k')
axes.plot(x, numpy.ones(x.shape) * -numpy.log(0.8), '--k')
axes.plot(x, numpy.ones(x.shape) * 0.4, '--',color='gray',linewidth=.5)
axes.plot(x, numpy.ones(x.shape) * 0.8, '--',color='gray',linewidth=.5)
plt.show()
r = numpy.linspace(0.06, 0.1, 100)
goal = 1e6
fig = plt.figure()
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, g(P, m, r, n, goal))
axes.plot(r, r, 'r--')
axes.set_xlabel("r")
axes.set_ylabel("$g(r)$")
axes.set_xlim([0.06, 0.1])
axes.set_ylim([g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot([0.08, 0.08], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot([0.095, 0.095], [g(P, m, 0.06, n, goal), g(P, m, 0.1, n, goal)], '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.08, n, goal), '--k')
axes.plot(r, numpy.ones(r.shape) * g(P, m, 0.095, n, goal), '--k')
plt.show()
###Output
_____no_output_____
###Markdown
*Uniqueness:*Additionally, suppose $g'(x)$ is defined on $x \in [a, b]$ and $\exists K < 1$ such that$$ |g'(x)| \leq K < 1 \quad \forall \quad x \in (a,b)$$then $g$ has a unique fixed point $P \in [a,b]$Note: Lipschitz continuity
###Code
x = numpy.linspace(0.4, 0.8, 100)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(x, numpy.abs(-numpy.exp(-x)), 'r')
axes.plot(x, numpy.ones(x.shape), 'k--')
axes.set_xlabel("$x$",fontsize=18)
axes.set_ylabel("$g\,'(x)$",fontsize=18)
axes.set_ylim((0.0, 1.1))
axes.set_title("$g(x) = e^{-x}$",fontsize=20)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
*Asymptotic convergence*: Behavior of fixed point iterations$$x_{k+1} = g(x_k)$$ Assume that a fixed point $x^\ast$ exists, such that $$x^\ast = g(x^\ast)$$ Then define $$ x_{k+1} = x^\ast + e_{k+1} \quad \quad x_k = x^\ast + e_k$$Where $e_k$ is the correction (error) for $x_k$ substituting$$ x^\ast + e_{k+1} = g(x^\ast + e_k)$$ Evaluate $$ g(x^\ast + e_k)$$ Taylor expand $g(x)$ about $x^\ast$ and substitute $$x = x_k = x^\ast + e_k$$ $$ g(x^\ast + e_k) = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$Note: assumption is that $g'(x)$ at fixed point $\neq$ 0. from our definition $$x^\ast + e_{k+1} = g(x^\ast + e_k)$$ we have$$ x^\ast + e_{k+1} = g(x^\ast) + g'(x^\ast) e_k + \frac{g''(x^\ast) e_k^2}{2} + O(e_k^3)$$ Note that because $x^* = g(x^*)$ (fixed point) these terms cancel leaving$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2}$$So if $|g'(x^*)| \leq K < 1$ we can conclude that$$|e_{k+1}| = K |e_k|$$which shows convergence and fixed point iteration is stable. Also note that $K$ is related to $|g'(x^*)|$. Convergence of iterative schemesGiven any iterative scheme where$$|e_{k+1}| = C |e_k|^n$$If $C < 1$ and: - $n=1$ then the scheme is **linearly convergent** - $n=2$ then the scheme is **quadratically convergent** - $n > 1$ the scheme can also be called **superlinearly convergent**If $C > 1$ then the scheme is **divergent** Examples Revisited* Example 1:$$g(x) = e^{-x}\quad\mathrm{with}\quad x^* \approx 0.56$$ $$|g'(x^*)| = |-e^{-x^*}| \approx 0.56$$ * Example 2: $$g(x) = - \ln x \quad \text{with} \quad x^* \approx 0.56$$ $$|g'(x^*)| = \frac{1}{|x^*|} \approx 1.79$$ * Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
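(Before turning to the retirement problem's $g'(r)$ in the next cell, the linear rate quoted for Example 1 can be checked directly by looking at successive error ratios $|e_{k+1}|/|e_k|$; a small sketch, where the fixed-point value is just a previously converged estimate:)

```python
import numpy

# Example 1: g(x) = exp(-x), with |g'(x*)| = exp(-x*) ~ 0.567
x_star = 0.5671432904097838      # converged fixed point of x = exp(-x)
x = 0.4
errors = []
for k in range(10):
    x = numpy.exp(-x)
    errors.append(abs(x - x_star))
errors = numpy.array(errors)
print(errors[1:] / errors[:-1])  # ratios settle near |g'(x*)| ~ 0.567
```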
###Code
r, P, m, A, n = sympy.symbols('r P m A n')
g_sym = P * m / A * ((1 + r /m)**(m * n) - 1)
g_prime = g_sym.diff(r)
r_star = 0.08985602484084668
print("g(r) = ", g_sym)
print("g'(r) = ", g_prime)
print()
print("g'(r*) = ", g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}))
print("g(r*) - r* = {}".format(g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6, r: r_star}) - r_star))
###Output
g(r) = P*m*((1 + r/m)**(m*n) - 1)/A
g'(r) = P*m*n*(1 + r/m)**(m*n)/(A*(1 + r/m))
g'(r*) = 2.14108802539073
g(r*) - r* = 7.00606239689705E-12
###Markdown
* Example 3: The retirement problem$$ r = g(r) = \frac{P \cdot m}{A} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
f = sympy.lambdify(r, g_prime.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
g = sympy.lambdify(r, g_sym.subs({P: 1500.0, m: 12, n:20, A: 1e6}))
r = numpy.linspace(-0.01, 0.1, 100)
fig = plt.figure(figsize=(7,5))
fig.set_figwidth(2. * fig.get_figwidth())
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, g(r),label='$g(r)$')
axes.plot(r, r, 'r--',label='$r$')
axes.set_xlabel("r (interest rate)",fontsize=14)
axes.set_ylabel("$g(r)$",fontsize=14)
axes.set_title("Minimum rate for a 20 year retirement?",fontsize=14)
axes.set_ylim([0, 0.12])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.set_xlim((0.00, 0.1))
axes.set_ylim(g(0.00), g(0.1))
axes.legend()
axes.grid()
axes = fig.add_subplot(1, 2, 2)
axes.plot(r, f(r))
axes.plot(r, numpy.ones(r.shape), 'k--')
axes.plot(r_star, f(r_star), 'ro')
axes.plot(0.0, f(0.0), 'ro')
axes.set_xlim((-0.01, 0.1))
axes.set_xlabel("$r$",fontsize=14)
axes.set_ylabel("$g'(r)$",fontsize=14)
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Better ways for root-finding/optimizationIf $x^*$ is a fixed point of $g(x)$ then $x^*$ is also a *root* of $f(x^*) = g(x^*) - x^*$ s.t. $f(x^*) = 0$.For instance:$$f(r) = r - \frac{m P}{A} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$or$$f(r) = A - \frac{m P}{r} \left [ \left (1 + \frac{r}{m} \right)^{m n} - 1 \right ] =0 $$ Classical Methods - Bisection (linear convergence) - Newton's Method (quadratic convergence) - Secant Method (super-linear) Combined Methods - RootSafe (Newton + Bisection) - Brent's Method (Secant + Bisection) Bracketing and BisectionA **bracket** is an interval $[a,b]$ that contains exactly one zero or minima/maxima of interest. In the case of a zero the bracket should satisfy $$ \text{sign}(f(a)) \neq \text{sign}(f(b)).$$In the case of minima or maxima we need $$ \text{sign}(f'(a)) \neq \text{sign}(f'(b))$$ **Theorem**: Let$$ f(x) \in C[a,b] \quad \text{and} \quad \text{sign}(f(a)) \neq \text{sign}(f(b))$$then there exists a number $$ c \in (a,b) \quad \text{s.t.} \quad f(c) = 0.$$(proof uses intermediate value theorem) **Example**: The retirement problem again. For fixed $A, P, m, n$$$ f(r) = A - \frac{P}{(r / m)} \left[ \left(1 + \frac{r}{m} \right)^{m \cdot n} - 1 \right ]$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.1, 100)
f = lambda r, A, m, P, n: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r, A, m, P, n), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
a = 0.075
b = 0.095
axes.plot(a, f(a, A, m, P, n), 'ko')
axes.plot([a, a], [0.0, f(a, A, m, P, n)], 'k--')
axes.plot(b, f(b, A, m, P, n), 'ko')
axes.plot([b, b], [f(b, A, m, P, n), 0.0], 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basic bracketing algorithms shrink the bracket while ensuring that the root/extrema remains within the bracket.What ways could we "shrink" the bracket so that the end points converge to the root/extrema? Bisection AlgorithmGiven a bracket $[a,b]$ and a function $f(x)$ - 1. Initialize with bracket2. Iterate 1. Cut bracket in half and check to see where the zero is 2. Set bracket to new bracket based on what direction we went basic code```pythondef bisection(f,a,b,tol): delta_x = b - a c = a + delta_x / 2.0 f_a = f(a) f_b = f(b) f_c = f(c) for step in range(1, MAX_STEPS + 1): if numpy.abs(f_c) < tol: break if numpy.sign(f_a) != numpy.sign(f_c): b = c f_b = f_c else: a = c f_a = f_c delta_x = b - a c = a + delta_x / 2.0 f_c = f(c) return c```
###Code
# real code with standard bells and whistles
def bisection(f,a,b,tol = 1.e-6):
""" uses bisection to isolate a root x of a function of a single variable f such that f(x) = 0.
the root must exist within an initial bracket a < x < b
returns when f(x) at the midpoint of the bracket < tol
Parameters:
-----------
f: function of a single variable f(x) of type float
a: float
left bracket a < x
b: float
right bracket x < b
Note: the signs of f(a) and f(b) must be different to insure a bracket
tol: float
tolerance. Returns when |f((a+b)/2)| < tol
Returns:
--------
x: float
midpoint of final bracket
x_array: numpy array
history of bracket centers (for plotting later)
Raises:
-------
ValueError:
if initial bracket is invalid
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 1000
# initialize
delta_x = b - a
c = a + delta_x / 2.0
c_array = [ c ]
f_a = f(a)
f_b = f(b)
f_c = f(c)
# check bracket
if numpy.sign(f_a) == numpy.sign(f_b):
raise ValueError("no bracket: f(a) and f(b) must have different signs")
# Loop until we reach the TOLERANCE or we take MAX_STEPS
for step in range(1, MAX_STEPS + 1):
# Check tolerance - Could also check the size of delta_x
# We check this first as we have already initialized the values
# in c and f_c
if numpy.abs(f_c) < tol:
break
if numpy.sign(f_a) != numpy.sign(f_c):
b = c
f_b = f_c
else:
a = c
f_a = f_c
delta_x = b - a
c = a + delta_x / 2.0
f_c = f(c)
c_array.append(c)
if step == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return c, numpy.array(c_array)
# set up function as an inline lambda function
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r: A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initialize bracket
a = 0.07
b = 0.10
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8)
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
r = numpy.linspace(0.05, 0.11, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
# axes.set_xlim([0.085, 0.091])
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.plot(a, f(a), 'ko')
axes.plot([a, a], [0.0, f(a)], 'k--')
axes.text(a, f(a), str(0), fontsize="15")
axes.plot(b, f(b), 'ko')
axes.plot([b, b], [f(b), 0.0], 'k--')
axes.text(b, f(b), str(1), fontsize="15")
axes.grid()
# plot out the first N steps
N = 5
for k,r in enumerate(r_array[:N]):
# Plot iteration
axes.plot(r, f(r),'kx')
axes.text(r, f(r), str(k + 2), fontsize="15")
axes.plot(r_star, f(r_star), 'go', markersize=10)
axes.set_title('Bisection method: first {} steps'.format(N), fontsize=20)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why?
###Code
# find root
r_star, r_array = bisection(f, a, b, tol=1e-8 )
print('root at r = {}, f(r*) = {}, {} steps'.format(r_star,f(r_star),len(r_array)))
# this might be useful
print(numpy.diff(r_array))
###Output
[ 7.50000000e-03 -3.75000000e-03 1.87500000e-03 -9.37500000e-04
4.68750000e-04 -2.34375000e-04 -1.17187500e-04 5.85937500e-05
-2.92968750e-05 1.46484375e-05 7.32421875e-06 3.66210937e-06
-1.83105469e-06 -9.15527344e-07 -4.57763672e-07 -2.28881836e-07
-1.14440918e-07 -5.72204590e-08 2.86102295e-08 -1.43051147e-08
-7.15255737e-09 3.57627869e-09 -1.78813934e-09 8.94069666e-10
4.47034840e-10 2.23517413e-10 1.11758713e-10 -5.58793567e-11
2.79396783e-11 -1.39698392e-11 6.98492653e-12 3.49245632e-12
-1.74622816e-12 -8.73121020e-13 -4.36553571e-13 2.18283724e-13
1.09134923e-13 5.45674617e-14 2.72837308e-14]
###Markdown
Convergence of BisectionGenerally have$$ |e_{k+1}| = C |e_k|^n$$where we need $C < 1$ and $n > 0$.Letting $\Delta x_k$ be the width of the $k$th bracket we can then estimate the error with$$ e_k \approx \Delta x_k$$and therefore$$ e_{k+1} \approx \frac{1}{2} \Delta x_k.$$Due to the relationship between $x_k$ and $e_k$ we then know$$ |e_{k+1}| = \frac{1}{2} |e_k|$$so the method is linearly convergent. Newton's Method (Newton-Raphson) - Given a bracket, bisection is guaranteed to converge linearly to a root - However bisection uses almost no information about $f(x)$ beyond its sign at a point - Can we do "better"? Newton's method, *when well behaved*, can achieve quadratic convergence. **Basic Ideas**: There are multiple interpretations we can use to derive Newton's method* Use Taylor's theorem to estimate a correction to minimize the residual $f(x)=0$ * A geometric interpretation that approximates $f(x)$ locally as a straight line to predict where $x^*$ might be.* As a special case of a fixed-point iteration Given current location $x_k$, we have $f(x_k)$ and $f'(x_k)$ and form a line through the point $(x_k, f(x_k))$:Form equation for the line:$$y = f'(x_k) x + b$$ Solve for the y-intercept value $b$$$f(x_k) = f'(x_k) x_k + b$$$$b = f(x_k) - f'(x_k) x_k$$and simplify.$$y = f'(x_k) x + f(x_k) - f'(x_k) x_k$$$$y = f'(x_k) (x - x_k) + f(x_k)$$ Now find the intersection of our line and the x-axis (i.e. when $y = 0$) and use the resulting value of $x$ to set $x_{k+1}$ $$ 0 = f'(x_k) (x_{k+1}-x_k) + f(x_k)$$$$ x_{k+1} = x_k-\frac{f(x_k)}{f'(x_k)}$$ Perhaps the simplest derivation uses Taylor series. Consider an initial guess at point $x_k$. For arbitrary $x_k$, it's unlikely $f(x_k)=0$. However we can hope there is a correction $\delta_k$ such that at$$x_{k+1} = x_k + \delta_k$$we have $$ f(x_{k+1}) = 0 $$ expanding in a Taylor series around point $x_k$ $$ f(x_k + \delta_k) \approx f(x_k) + f'(x_k) \delta_k + O(\delta_k^2)$$ substituting into $f(x_{k+1})=0$ and dropping the higher order terms gives$$ f(x_k) + f'(x_k) \delta_k =0$$ or solving for the correction$$ \delta_k = -f(x_k)/f'(x_k)$$ which leads to the update for the next iteration$$ x_{k+1} = x_k + \delta_k $$or$$ x_{k+1} = x_k -f(x_k)/f'(x_k)$$rinse and repeat, as it's still unlikely that $f(x_{k+1})=0$ (but we hope the error will be reduced) Algorithm1. Initialize $x = x_0$1. While ( $|f(x)| > tol$ ) - solve $\delta = -f(x)/f'(x)$ - update $x \leftarrow x + \delta$ Geometric interpretationBy truncating the Taylor series at first order, we are locally approximating $f(x)$ as a straight line tangent to the point $f(x_k)$. If the function was linear at that point, we could find its intercept such that $f(x_k+\delta_k)=0$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
# Initial guess
x_k = 0.06
# Setup figure to plot convergence
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
# Plot x_k point
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, -5e4, "$x_k$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(x_k, f(x_k) + 2e4, "$f(x_k)$", fontsize=16)
axes.plot(r, f_prime(x_k) * (r - x_k) + f(x_k), 'k')
# Plot x_{k+1} point
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_k, f(x_k), 'ko')
axes.text(x_k, 1e4, "$x_{k+1}$", fontsize=16)
axes.plot(x_k, 0.0, 'xk')
axes.text(0.0873, f(x_k) - 2e4, "$f(x_{k+1})$", fontsize=16)
axes.set_xlabel("r",fontsize=16)
axes.set_ylabel("f(r)",fontsize=16)
axes.set_title("Newton-Raphson Steps",fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Some code
###Code
def newton(f,f_prime,x0,tol = 1.e-6):
""" uses newton's method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
f_prime: function f'(x)
returns type: float
x0: float
initial guess
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
x = x0
x_array = [ x0 ]
for k in range(1, MAX_STEPS + 1):
x = x - f(x) / f_prime(x)
x_array.append(x)
if numpy.abs(f(x)) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
f_prime = lambda r, A=A, m=m, P=P, n=n: \
-P*m*n*(1.0 + r/m)**(m*n)/(r*(1.0 + r/m)) \
+ P*m*((1.0 + r/m)**(m*n) - 1.0)/r**2
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x, x_array = newton(f, f_prime, x0, tol=1.e-8)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
print(f_prime(x)*numpy.finfo('float').eps)
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Newton-Raphson Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
What is the smallest tolerance that can be achieved with this routine? Why? Example: $$f(x) = x - e^{-x}$$$$f'(x) = 1 + e^{-x}$$$$x_{k+1} = x_k - \frac{f(x_k)}{f'(x_k)} = x_k - \frac{x_k - e^{-x_k}}{1 + e^{-x_k}}$$ setup in sympy
###Code
x = sympy.symbols('x')
f = x - sympy.exp(-x)
f_prime = f.diff(x)
f, f_prime
###Output
_____no_output_____
###Markdown
and solve
###Code
f = sympy.lambdify(x,f)
f_prime = sympy.lambdify(x,f_prime)
x0 = 0.
x, x_array = newton(f, f_prime, x0, tol = 1.e-9)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-1,1,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx',markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x - e^{-x}$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Asymptotic Convergence of Newton's MethodNewton's method can also be considered a fixed point iteration$$x_{k+1} = g(x_k)$$with $g(x) = x - \frac{f(x)}{f'(x)}$ Again if $x^*$ is the fixed point and $e_k$ the error at iteration $k$:$$x_{k+1} = x^* + e_{k+1} \quad \quad x_k = x^* + e_k$$ Taylor Expansion around $x^*$$$ x^* + e_{k+1} = g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + O(e_k^3)$$ Note that as before $x^*$ and $g(x^*)$ cancel:$$e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$ What about $g'(x^*)$ though? $$\begin{aligned} g(x) &= x - \frac{f(x)}{f'(x)} \\ g'(x) & = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x) f''(x)}{(f'(x))^2} = \frac{f(x) f''(x)}{(f'(x))^2}\end{aligned}$$ which evaluated at $x = x^*$ becomes$$ g'(x^*) = \frac{f(x^*)f''(x^*)}{f'(x^*)^2} = 0$$since $f(x^\ast) = 0$ by definition (assuming $f''(x^\ast)$ and $f'(x^\ast)$ are appropriately behaved). Back to our expansion we have again$$ e_{k+1} = g'(x^*) e_k + \frac{g''(x^*) e_k^2}{2!} + \ldots$$which simplifies to $$ e_{k+1} = \frac{g''(x^*) e_k^2}{2!} + \ldots$$ which leads to $$ |e_{k+1}| < \left | \frac{g''(x^*)}{2!} \right | |e_k|^2$$Newton's method is therefore quadratically convergent where the constant is controlled by the second derivative. Example: Convergence for a non-simple rootConsider our first problem$$ f(x) = x^2 + x - \sin(x)$$the case is, unfortunately, not as rosy. Why might this be? Setup the problem
###Code
f = lambda x: x*x + x - numpy.sin(x)
f_prime = lambda x: 2*x + 1. - numpy.cos(x)
x0 = .9
x, x_array = newton(f, f_prime, x0, tol= 1.e-16)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
xa = numpy.linspace(-2,2,100)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1,2,1)
axes.plot(xa,f(xa),'b')
axes.plot(xa,numpy.zeros(xa.shape),'r--')
axes.plot(x,f(x),'go', markersize=10)
axes.plot(x0,f(x0),'kx', markersize=10)
axes.grid()
axes.set_xlabel('x', fontsize=16)
axes.set_ylabel('f(x)', fontsize=16)
axes.set_title('$f(x) = x^2 +x - sin(x)$', fontsize=18)
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Convergence appears linear; can you show this?$$f(x) = x^2 + x -\sin (x)$$ Example: behavior of Newton with multiple roots$f(x) = \sin (2 \pi x)$$$x_{k+1} = x_k - \frac{\sin (2 \pi x)}{2 \pi \cos (2 \pi x)}= x_k - \frac{1}{2 \pi} \tan (2 \pi x)$$
###Code
x = numpy.linspace(0, 2, 1000)
f = lambda x: numpy.sin(2.0 * numpy.pi * x)
f_prime = lambda x: 2.0 * numpy.pi * numpy.cos(2.0 * numpy.pi * x)
x_kp = lambda x: x - f(x)/f_prime(x)
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(x, f(x),'b')
axes.plot(x, f_prime(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $f'(x)$")
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
x_k = 0.3
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x, f_prime(x_k) * (x - x_k) + f(x_k), 'k')
x_k = x_k - f(x_k) / f_prime(x_k)
axes.plot([x_k, x_k], [0.0, f(x_k)], 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes = fig.add_subplot(1, 2, 2)
axes.plot(x, f(x),'b')
axes.plot(x, x_kp(x), 'r')
axes.set_xlabel("x")
axes.set_ylabel("y")
axes.set_title("Comparison of $f(x)$ and $x_{k+1}(x)$",fontsize=18)
axes.set_ylim((-2,2))
axes.set_xlim((0,2))
axes.plot(x, numpy.zeros(x.shape), 'k--')
plt.show()
###Output
_____no_output_____
###Markdown
Basins of AttractionGiven a point $x_0$ can we determine if Newton-Raphson converges and **which root** it converges to?A *basin of attraction* $X$ for Newton's method is defined as a set such that $\forall x \in X$ the Newton iteration converges to the same root. Unfortunately this is far from a trivial thing to determine and even for simple functions can lead to regions that are complicated or even fractal.
###Code
# calculate the basin of attraction for f(x) = sin(2\pi x)
x_root = numpy.zeros(x.shape)
N_steps = numpy.zeros(x.shape)
for i,xk in enumerate(x):
x_root[i], x_root_array = newton(f, f_prime, xk)
N_steps[i] = len(x_root_array)
y = numpy.linspace(-2,2)
X,Y = numpy.meshgrid(x,y)
X_root = numpy.outer(numpy.ones(y.shape),x_root)
plt.figure(figsize=(8, 6))
plt.pcolor(X, Y, X_root,vmin=-5, vmax=5,cmap='seismic')
cbar = plt.colorbar()
cbar.set_label('$x_{root}$', fontsize=18)
plt.plot(x, f(x), 'k-')
plt.plot(x, numpy.zeros(x.shape),'k--', linewidth=0.5)
plt.xlabel('x', fontsize=16)
plt.title('Basins of Attraction: $f(x) = \sin{2\pi x}$', fontsize=18)
#plt.xlim(0.25-.1,0.25+.1)
plt.show()
###Output
_____no_output_____
###Markdown
Fractal Basins of AttractionIf $f(x)$ is complex (for $x$ complex), then the basins of attraction can be beautiful and fractalPlotted below are two fairly simple equations which demonstrate the issue:1. $f(x) = x^3 - 1$2. Kepler's equation $\theta - e \sin \theta = M$
###Code
f = lambda x: x**3 - 1
f_prime = lambda x: 3 * x**2
N = 1001
x = numpy.linspace(-2, 2, N)
X, Y = numpy.meshgrid(x, x)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
roots = numpy.roots([1., 0., 0., -1])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
#axes.contourf(X, Y, numpy.sign(numpy.imag(R))*numpy.abs(R),vmin = -10, vmax = 10)
axes.contourf(X, Y, R, vmin = -8, vmax= 8.)
axes.scatter(numpy.real(roots), numpy.imag(roots))
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x^3 - 1$")
axes.grid()
plt.show()
def f(theta, e=0.083, M=1):
return theta - e * numpy.sin(theta) - M
def f_prime(theta, e=0.083):
return 1 - e * numpy.cos(theta)
N = 1001
x = numpy.linspace(-30.5, -29.5, N)
y = numpy.linspace(-17.5, -16.5, N)
X, Y = numpy.meshgrid(x, y)
R = X + 1j * Y
for i in range(30):
R = R - f(R) / f_prime(R)
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
axes = fig.add_subplot(1, 1, 1, aspect='equal')
axes.contourf(X, Y, R, vmin = 0, vmax = 10)
axes.set_xlabel("Real")
axes.set_ylabel("Imaginary")
axes.set_title("Basin of Attraction for $f(x) = x - e \sin x - M$")
plt.show()
###Output
/Users/janiceyang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: overflow encountered in sin
/Users/janiceyang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:2: RuntimeWarning: invalid value encountered in multiply
/Users/janiceyang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: overflow encountered in cos
after removing the cwd from sys.path.
/Users/janiceyang/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:4: RuntimeWarning: invalid value encountered in multiply
after removing the cwd from sys.path.
###Markdown
Other IssuesNeed to supply both $f(x)$ and $f'(x)$, could be expensive Example: FTV equation $f(r) = A - \frac{m P}{r} \left[ \left(1 + \frac{r}{m} \right )^{m n} - 1\right]$Can use symbolic differentiation (`sympy`) Secant MethodsIs there a method with the convergence of Newton's method but without the extra derivatives? What way would you modify Newton's method so that you would not need $f'(x)$? Given $x_k$ and $x_{k-1}$ represent the derivative as the approximation$$f'(x) \approx \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}$$Combining this with the Newton approach leads to$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1}) }{f(x_k) - f(x_{k-1})}$$This leads to superlinear but not quite quadratic convergence: the order of convergence is the golden ratio $(1 + \sqrt{5})/2 \approx 1.618$. Alternative interpretation: fit a line through the two points and see where it intersects the x-axis.$$(x_k, f(x_k)) ~~~~~ (x_{k-1}, f(x_{k-1}))$$$$y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + b$$ $$b = f(x_{k-1}) - \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k-1} - x_k)$$$$ y = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x - x_k) + f(x_k)$$ Now solve for $x_{k+1}$ which is where the line intersects the x-axis ($y=0$)$$0 = \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}} (x_{k+1} - x_k) + f(x_k)$$$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$ Secant Method$$x_{k+1} = x_k - \frac{f(x_k) (x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$$
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
# Initial guess
x_k = 0.07
x_km = 0.06
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
axes.plot(x_k, 0.0, 'ko')
axes.plot(x_k, f(x_k), 'ko')
axes.plot([x_k, x_k], [0.0, f(x_k)], 'k--')
axes.plot(x_km, 0.0, 'ko')
axes.plot(x_km, f(x_km), 'ko')
axes.plot([x_km, x_km], [0.0, f(x_km)], 'k--')
axes.plot(r, (f(x_k) - f(x_km)) / (x_k - x_km) * (r - x_k) + f(x_k), 'k')
x_kp = x_k - (f(x_k) * (x_k - x_km) / (f(x_k) - f(x_km)))
axes.plot(x_kp, 0.0, 'ro')
axes.plot([x_kp, x_kp], [0.0, f(x_kp)], 'r--')
axes.plot(x_kp, f(x_kp), 'ro')
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=14)
axes.set_title("Secant Method", fontsize=18)
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes.grid()
plt.show()
###Output
_____no_output_____
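###Markdown
As an aside to the point above about needing $f'(x)$: when the derivative is tedious to write by hand (e.g. the annuity function $f(r)$), symbolic differentiation with `sympy` can supply it. A minimal sketch, assuming `sympy` is installed; the parameter values mirror the ones used above.
###Code
import sympy

r_sym = sympy.symbols('r', positive=True)
A_, P_, m_, n_ = 1e6, 1500.0, 12, 20.0
f_sym = A_ - m_ * P_ / r_sym * ((1 + r_sym / m_)**(m_ * n_) - 1)
df_sym = sympy.diff(f_sym, r_sym)            # symbolic derivative of f(r)
f_prime_num = sympy.lambdify(r_sym, df_sym)  # turn it into a numerical function
print(df_sym)
print(f_prime_num(0.08))
###Output
_____no_output_____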
###Markdown
What would the algorithm look like for such a method? AlgorithmGiven $f(x)$, a `TOLERANCE`, and a `MAX_STEPS` 1. Initialize two points $x_0$, $x_1$, $f_0 = f(x_0)$, and $f_1 = f(x_1)$2. Loop for $k = 2, 3, \ldots$ until `MAX_STEPS` is reached or `TOLERANCE` is achieved 1. Calculate new update $$x_{2} = x_1 - \frac{f(x_1) (x_1 - x_{0})}{f(x_1) - f(x_{0})}$$ 2. Check for convergence and break if reached 3. Update parameters $x_0 = x_1$, $x_1 = x_{2}$, $f_0 = f_1$ and $f_1 = f(x_1)$ Some Code
###Code
def secant(f, x0, x1, tol = 1.e-6):
""" uses a linear secant method to find a root x of a function of a single variable f
Parameters:
-----------
f: function f(x)
returns type: float
x0: float
first point to initialize the algorithm
x1: float
second point to initialize the algorithm x1 != x0
tolerance: float
Returns when |f(x)| < tol
Returns:
--------
x: float
final iterate
x_array: numpy array
history of iteration points
Raises:
-------
ValueError:
if x1 is too close to x0
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 200
if numpy.isclose(x0, x1):
raise ValueError('Initial points are too close (preferably should be a bracket)')
x_array = [ x0, x1 ]
for k in range(1, MAX_STEPS + 1):
x2 = x1 - f(x1) * (x1 - x0) / (f(x1) - f(x0))
x_array.append(x2)
if numpy.abs(f(x2)) < tol:
break
x0 = x1
x1 = x2
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x2, numpy.array(x_array)
###Output
_____no_output_____
###Markdown
Set the problem up
###Code
P = 1500.0
m = 12
n = 20.0
A = 1e6
r = numpy.linspace(0.05, 0.11, 100)
f = lambda r, A=A, m=m, P=P, n=n: \
A - m * P / r * ((1.0 + r / m)**(m * n) - 1.0)
###Output
_____no_output_____
###Markdown
and solve
###Code
x0 = 0.06
x1 = 0.07
x, x_array = secant(f, x0, x1, tol= 1.e-7)
print('x = {}, f(x) = {}, Nsteps = {}'.format(x, f(x), len(x_array)))
r = numpy.linspace(0.05, 0.10, 100)
# Setup figure to plot convergence
fig = plt.figure(figsize=(16,6))
axes = fig.add_subplot(1, 2, 1)
axes.plot(r, f(r), 'b')
axes.plot(r, numpy.zeros(r.shape),'r--')
for n, x in enumerate(x_array):
axes.plot(x, f(x),'kx')
axes.text(x, f(x), str(n), fontsize="15")
axes.set_xlabel("r", fontsize=16)
axes.set_ylabel("f(r)", fontsize=16)
axes.set_title("Secant Method Steps", fontsize=18)
axes.grid()
axes.ticklabel_format(axis='y', style='sci', scilimits=(-1,1))
axes = fig.add_subplot(1, 2, 2)
axes.semilogy(numpy.arange(len(x_array)), numpy.abs(f(x_array)), 'bo-')
axes.grid()
axes.set_xlabel('Iterations', fontsize=16)
axes.set_ylabel('Residual $|f(r)|$', fontsize=16)
axes.set_title('Convergence', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
Comments - Secant method as shown is equivalent to linear interpolation - Can use higher order interpolation for higher order secant methods - Convergence is not quite quadratic - Not guaranteed to converge - Does not preserve brackets - Almost as good as Newton's method if your initial guess is good. Hybrid MethodsCombine attributes of methods with others to make one great algorithm to rule them all (not really) Goals1. Robustness: Given a bracket $[a,b]$, maintain bracket1. Efficiency: Use superlinear convergent methods when possible Options - Methods requiring $f'(x)$ - NewtSafe (RootSafe, Numerical Recipes) - Newton's Method within a bracket, Bisection otherwise - Methods not requiring $f'(x)$ - Brent's Algorithm (zbrent, Numerical Recipes) - Combination of bisection, secant and inverse quadratic interpolation - `scipy.optimize` package
###Code
from scipy.optimize import brentq
a = 0.07
b = 0.1
x, res = brentq(f, a, b, full_output=True)
print('x = {}, f(x) = {}'.format(x, f(x)))
print(res)
#brentq?
###Output
x = 0.08985602483470466, f(x) = 2.1886080503463745e-08
converged: True
flag: 'converged'
function_calls: 8
iterations: 7
root: 0.08985602483470466
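###Markdown
The hybrid idea mentioned above can be sketched in a few lines: take the Newton step whenever it stays inside the current bracket and fall back to bisection otherwise. This is a minimal illustration (not the Numerical Recipes implementation), demonstrated on a simple test function $g(x) = x^3 - 2$ with known root $2^{1/3}$, so that the derivative is easy to supply.
###Code
def newtsafe(g, g_prime, a, b, tol=1.0e-10, max_steps=100):
    """Safeguarded Newton sketch: Newton step if it lands inside the bracket, bisection otherwise."""
    ga, gb = g(a), g(b)
    if ga * gb > 0:
        raise ValueError("g(a) and g(b) must have opposite signs")
    x = 0.5 * (a + b)
    for k in range(max_steps):
        gx = g(x)
        if numpy.abs(gx) < tol:
            break
        # keep the bracket valid
        if ga * gx < 0:
            b, gb = x, gx
        else:
            a, ga = x, gx
        x_newton = x - gx / g_prime(x)
        # accept the Newton step only if it stays strictly inside the bracket
        x = x_newton if a < x_newton < b else 0.5 * (a + b)
    return x, k

g = lambda x: x**3 - 2.0
g_prime = lambda x: 3.0 * x**2
x_root, steps = newtsafe(g, g_prime, 1.0, 2.0)
print(x_root, 2.0**(1.0 / 3.0), steps)
###Output
_____no_output_____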
###Markdown
Optimization (finding extrema)I want to find the extrema of a function $f(x)$ on a given interval $[a,b]$.A few approaches: - Interpolation Algorithms: Repeated parabolic interpolation - Bracketing Algorithms: Golden-Section Search (linear) - Hybrid Algorithms Note: for continuous functions, root finding is much simpler. Interpolation ApproachSuccessive parabolic interpolation - *similar to secant method*Basic idea: Fit polynomial to function using three points, find its minima, and guess new points based on that minimaGood *if* you know 3 points that are close to the extremum 1. What do we need to fit a polynomial $p_n(x)$ of degree $n \geq 2$?2. How do we construct the polynomial $p_2(x)$?3. Once we have constructed $p_2(x)$ how would we find the minimum? AlgorithmGiven $f(x)$ and $[x_0,x_1]$ - Note that unlike a bracket these will be a sequence of better approximations to the minimum.1. Initialize $x = [x_0, x_1, (x_0+x_1)/2]$1. Loop 1. Evaluate function $f(x)$ at the three points 1. Find the quadratic polynomial that interpolates those points: $$p(x) = p_0 x^2 + p_1 x + p_2$$ 3. Calculate the minimum: $$p'(x) = 2 p_0 x + p_1 = 0 \quad \Rightarrow \quad x^\ast = -p_1 / (2 p_0)$$ 1. New set of points $x = [x_1, (x_0+x_1)/2, x^\ast]$ 1. Check tolerance Not guaranteed to converge, or find global minimum/maximum
###Code
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
MAX_STEPS = 100
TOLERANCE = 1e-4
x = numpy.array([0.5, 0.2, (0.7) / 2.0])
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x[0], f(x[0]), 'ko')
axes.plot(x[1], f(x[1]), 'ko')
success = False
for n in range(1, MAX_STEPS + 1):
axes.plot(x[2], f(x[2]), 'ko')
poly = numpy.polyfit(x, f(x), 2)
axes.plot(t, poly[0] * t**2 + poly[1] * t + poly[2], 'r--')
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < TOLERANCE:
success = True
break
if success:
print("Success!")
print(" t* = %s" % x[2])
print(" f(t*) = %s" % f(x[2]))
print(" number of steps = %s" % n)
else:
print("Reached maximum number of steps!")
axes.set_ylim((-5, 0.0))
axes.grid()
plt.show()
###Output
Success!
t* = 0.29588830731129795
f(t*) = -4.604285452397018
number of steps = 6
###Markdown
Some CodeUsing numpy.polyfit(x, y, degree)
###Code
def parabolic_interpolation(f, bracket, tol = 1.e-6):
""" uses repeated parabolic interpolation to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x1] containing an initial bracket that contains a minimum
tolerance: float
Returns when relative error of last two iterates < tol
Returns:
--------
x: float
final estimate of the minima
x_array: numpy array
history of iteration points
Raises:
-------
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
x = numpy.zeros(3)
x[:2] = bracket
x[2] = (x[0] + x[1])/2.
x_array = [ x[2] ]
for k in range(1, MAX_STEPS + 1):
poly = numpy.polyfit(x, f(x), 2)
x[0] = x[1]
x[1] = x[2]
x[2] = -poly[1] / (2.0 * poly[0])
x_array.append(x[2])
if numpy.abs(x[2] - x[1]) / numpy.abs(x[2]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x[2], numpy.array(x_array)
###Output
_____no_output_____
###Markdown
set up problem
###Code
bracket = numpy.array([0.5, 0.2])
x, x_array = parabolic_interpolation(f, bracket, tol = 1.e-6)
print("Extremum f(x) = {}, at x = {}, N steps = {}".format(f(x), x, len(x_array)))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8,6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.plot(x_array, f(x_array),'ro')
axes.plot(x, f(x), 'go')
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Bracketing Algorithm (Golden Section Search)Given $f(x) \in C[x_0,x_3]$ that is convex (concave) over an interval $x \in [x_0,x_3]$ reduce the interval size until it brackets the minimum (maximum).Note that we no longer have the $x=0$ help we had before so bracketing and doing bisection is a bit trickier in this case. In particular choosing your initial bracket is important! Bracket PickingSay we start with a bracket $[x_0, x_3]$ and pick two new points $x_1 < x_2 \in [x_0, x_3]$. We want to pick a new bracket that guarantees that the extrema exists in it. We then can pick this new bracket with the following rules: - If $f(x_1) < f(x_2)$ then we know the minimum is between $x_0$ and $x_2$. - If $f(x_1) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$. *Bracket point restriction*: f(x) for 1st and 3rd point must be higher (for minimum) or lower (maximum) than middle point (2nd point).
###Code
f = lambda x: x**2
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
fig.set_figheight(fig.get_figheight() * 2)
search_points = [-1.0, -0.5, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 1)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, 0.5, 1.0]
axes = fig.add_subplot(2, 2, 2)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
search_points = [-1.0, 0.25, 0.75, 1.0]
axes = fig.add_subplot(2, 2, 3)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) < f(x_2) \Rightarrow [x_0, x_2]$")
search_points = [-1.0, -0.75, -0.25, 1.0]
axes = fig.add_subplot(2, 2, 4)
x = numpy.linspace(search_points[0] - 0.1, search_points[-1] + 0.1, 100)
axes.plot(x, f(x), 'b')
for (i, point) in enumerate(search_points):
axes.plot(point, f(point),'or')
axes.text(point + 0.05, f(point), str(i))
axes.plot(0, 0, 'sk')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_title("$f(x_1) > f(x_2) \Rightarrow [x_1, x_3]$")
plt.show()
###Output
_____no_output_____
###Markdown
Picking Brackets and PointsAgain say we have a bracket $[x_0,x_3]$ and suppose we have two new search points $x_1$ and $x_2$ that separates $[x_0,x_3]$ into two new overlapping brackets. Define: the length of the line segments in the interval\begin{aligned} a &= x_1 - x_0, \\ b &= x_2 - x_1,\\ c &= x_3 - x_2 \\\end{aligned}and the total bracket length\begin{aligned} d &= x_3 - x_0. \\\end{aligned}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
For **Golden Section Search** we require two conditions: - The two new possible brackets are of equal length. i.e $[x_0, x_2] = [x_1, x_3]$ or $$ a + b = b + c $$ or simply $a = c$
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
###Markdown
- The ratio of segment lengths is the same for every level of recursion so the problem is *self-similar* i.e. $$ \frac{b}{a} = \frac{c}{a + b} $$ These two requirements will allow *maximum reuse* of previous points and require adding only one new point $x^*$ at each iteration.
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure()
fig.set_figwidth(fig.get_figwidth() * 2)
axes = []
axes.append(fig.add_subplot(1, 2, 1))
axes.append(fig.add_subplot(1, 2, 2))
t = numpy.linspace(-2.0, 2.0, 100)
for i in range(2):
axes[i].plot(t, f(t), 'k')
# First set of intervals
axes[i].plot([x[0], x[2]], [0.0, 0.0], 'g')
axes[i].plot([x[1], x[3]], [-0.2, -0.2], 'r')
axes[i].plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes[i].plot([x[2], x[2]], [0.0, f(x[2])], 'g--')
axes[i].plot([x[1], x[1]], [-0.2, f(x[1])], 'r--')
axes[i].plot([x[3], x[3]], [-0.2, f(x[3])], 'r--')
for (n, point) in enumerate(x):
axes[i].plot(point, f(point), 'ok')
axes[i].text(point, f(point)+0.1, n, fontsize='15')
axes[i].set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes[i].set_ylim((-1.0, 3.0))
# Left new interval
x_new = [x[0], None, x[1], x[2]]
x_new[1] = phi * (x[1] - x[0]) + x[0]
#axes[0].plot([x_new[0], x_new[2]], [1.5, 1.5], 'b')
#axes[0].plot([x_new[1], x_new[3]], [1.75, 1.75], 'c')
#axes[0].plot([x_new[0], x_new[0]], [1.5, f(x_new[0])], 'b--')
#axes[0].plot([x_new[2], x_new[2]], [1.5, f(x_new[2])], 'b--')
#axes[0].plot([x_new[1], x_new[1]], [1.75, f(x_new[1])], 'c--')
#axes[0].plot([x_new[3], x_new[3]], [1.75, f(x_new[3])], 'c--')
axes[0].plot(x_new[1], f(x_new[1]), 'ko')
axes[0].text(x_new[1], f(x_new[1]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[0].text(x_new[i], -0.5, i, color='g',fontsize='15')
# Right new interval
x_new = [x[1], x[2], None, x[3]]
x_new[2] = (x[2] - x[1]) * phi + x[2]
#axes[1].plot([x_new[0], x_new[2]], [1.25, 1.25], 'b')
#axes[1].plot([x_new[1], x_new[3]], [1.5, 1.5], 'c')
#axes[1].plot([x_new[0], x_new[0]], [1.25, f(x_new[0])], 'b--')
#axes[1].plot([x_new[2], x_new[2]], [1.25, f(x_new[2])], 'b--')
#axes[1].plot([x_new[1], x_new[1]], [1.5, f(x_new[2])], 'c--')
#axes[1].plot([x_new[3], x_new[3]], [1.5, f(x_new[3])], 'c--')
axes[1].plot(x_new[2], f(x_new[2]), 'ko')
axes[1].text(x_new[2], f(x_new[2]) + 0.1, "*", fontsize='15')
for i in range(4):
axes[1].text(x_new[i], -0.5, i, color='r',fontsize='15')
axes[0].set_title('Choose left bracket', fontsize=18)
axes[1].set_title('Choose right bracket', fontsize=18)
plt.show()
###Output
_____no_output_____
###Markdown
As the first rule implies that $a = c$, we can substitute into the second rule to yield$$ \frac{b}{a} = \frac{a}{a + b}$$ or inverting and rearranging $$ \frac{a}{b} = 1 + \frac{b}{a}$$ if we let the ratio $b/a = x$, then $$ x + 1 = \frac{1}{x} \quad \text{or} \quad x^2 + x - 1 = 0$$ $$ x^2 + x - 1 = 0$$has a single positive root for $$ x = \frac{\sqrt{5} - 1}{2} = \varphi = 0.6180339887498949$$where $\varphi$ is related to the **"golden ratio"** (which in most definitions is given by $1+\varphi$, but either works since $ 1+\varphi = 1/\varphi $ )*So* $x = \frac{b}{a} = \varphi = 0.6180339887498949$ Subsequent proportionality implies that the distances between the 4 points at one iteration are proportional to those at the next. We can now use all of our information to find the points $x_1$ and $x_2$ given any overall bracket $[x_0, x_3]$ Given $b/a = \varphi$, $a = c$, and the known width of the bracket $d$ it follows that$$ d = a + b + c = (2 + \varphi)a $$or $$ a = \frac{d}{2 + \varphi} = \frac{\varphi}{1 + \varphi} d$$by the rather special properties of $\varphi$. We could use this result immediately to find \begin{align} x_1 &= x_0 + a \\ x_2 &= x_3 - a \\\end{align} Equivalently, you can show that $$a + b = (1 + \varphi)a = \varphi d$$so, to get middle 2 points $x_1$ and $x_2$:\begin{align} x_1 &= x_3 - \varphi d \\ x_2 &= x_0 + \varphi d \\\end{align}
###Code
f = lambda x: (x - 0.25)**2 + 0.5
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [-1.0, None, None, 1.0]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
t = numpy.linspace(-2.0, 2.0, 100)
axes.plot(t, f(t), 'k')
# First set of intervals
axes.plot([x[0], x[1]], [0.0, 0.0], 'g',label='a')
axes.plot([x[1], x[2]], [0.0, 0.0], 'r', label='b')
axes.plot([x[2], x[3]], [0.0, 0.0], 'b', label='c')
axes.plot([x[0], x[3]], [2.5, 2.5], 'c', label='d')
axes.plot([x[0], x[0]], [0.0, f(x[0])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'g--')
axes.plot([x[1], x[1]], [0.0, f(x[1])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'r--')
axes.plot([x[2], x[2]], [0.0, f(x[2])], 'b--')
axes.plot([x[3], x[3]], [0.0, f(x[3])], 'b--')
axes.plot([x[0], x[0]], [2.5, f(x[0])], 'c--')
axes.plot([x[3], x[3]], [2.5, f(x[3])], 'c--')
points = [ (x[0] + x[1])/2., (x[1] + x[2])/2., (x[2] + x[3])/2., (x[0] + x[3])/2. ]
y = [ 0., 0., 0., 2.5]
labels = [ 'a', 'b', 'c', 'd']
for (n, point) in enumerate(points):
axes.text(point, y[n] + 0.1, labels[n], fontsize=15)
for (n, point) in enumerate(x):
axes.plot(point, f(point), 'ok')
axes.text(point, f(point)+0.1, n, fontsize='15')
axes.set_xlim((search_points[0] - 0.1, search_points[-1] + 0.1))
axes.set_ylim((-1.0, 3.0))
plt.show()
###Output
_____no_output_____
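###Markdown
A quick numerical sanity check of the relations derived above (a sketch): placing $x_1$ and $x_2$ with the factor $\varphi$ should give $a = c$ and $b / a = \varphi$, and $\varphi$ satisfies $1 + \varphi = 1 / \varphi$.
###Code
phi = (numpy.sqrt(5.0) - 1.0) / 2.0
print(1.0 + phi, 1.0 / phi)            # the two should agree

x0, x3 = -1.0, 1.0
d = x3 - x0
x1 = x3 - phi * d
x2 = x0 + phi * d
a, b, c = x1 - x0, x2 - x1, x3 - x2
print(a, c)                            # equal outer segment lengths
print(b / a, phi)                      # ratio equals phi
###Output
_____no_output_____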
###Markdown
Algorithm1. Initialize bracket $[x_0,x_3]$1. Initialize points $x_1 = x_3 - \varphi (x_3 - x_0)$ and $x_2 = x_0 + \varphi (x_3 - x_0)$1. Loop 1. Evaluate $f_1$ and $f_2$ 1. If $f_1 < f_2$ then we pick the left interval for the next iteration 1. and otherwise pick the right interval 1. Check size of bracket for convergence $x_3 - x_0 <$ `TOLERANCE` 1. calculate the appropriate new point $x^*$ ($x_1$ on left, $x_2$ on right) Golden section search for extrema: **linear convergence**
###Code
def golden_section(f, bracket, tol = 1.e-6):
""" uses golden section search to refine a local minimum of a function f(x)
this routine uses numpy functions polyfit and polyval to fit and evaluate the quadratics
Parameters:
-----------
f: function f(x)
returns type: float
bracket: array
array [x0, x3] containing an initial bracket that contains a minimum
tolerance: float
Returns when | x3 - x0 | < tol
Returns:
--------
x: float
final estimate of the midpoint of the bracket
x_array: numpy array
history of midpoint of each bracket
Raises:
-------
ValueError:
If initial bracket is < tol or doesn't appear to have any interior points
that are less than the outer points
Warning:
if number of iterations exceed MAX_STEPS
"""
MAX_STEPS = 100
phi = (numpy.sqrt(5.0) - 1.) / 2.0
x = [ bracket[0], None, None, bracket[1] ]
delta_x = x[3] - x[0]
x[1] = x[3] - phi * delta_x
x[2] = x[0] + phi * delta_x
# check for initial bracket
fx = f(numpy.array(x))
bracket_min = min(fx[0], fx[3])
if fx[1] > bracket_min and fx[2] > bracket_min:
raise ValueError("interval does not appear to include a minimum")
elif delta_x < tol:
raise ValueError("interval is already smaller than tol")
x_mid = (x[3] + x[0])/2.
x_array = [ x_mid ]
for k in range(1, MAX_STEPS + 1):
f_1 = f(x[1])
f_2 = f(x[2])
if f_1 < f_2:
# Pick the left bracket
x_new = [x[0], None, x[1], x[2]]
delta_x = x_new[3] - x_new[0]
x_new[1] = x_new[3] - phi * delta_x
else:
# Pick the right bracket
x_new = [x[1], x[2], None, x[3]]
delta_x = x_new[3] - x_new[0]
x_new[2] = x_new[0] + phi * delta_x
x = x_new
x_array.append((x[3] + x[0])/ 2.)
if numpy.abs(x[3] - x[0]) < tol:
break
if k == MAX_STEPS:
warnings.warn('Maximum number of steps exceeded')
return x_array[-1], numpy.array(x_array)
def f(t):
"""Simple function for minimization demos"""
return -3.0 * numpy.exp(-(t - 0.3)**2 / (0.1)**2) \
+ numpy.exp(-(t - 0.6)**2 / (0.2)**2) \
+ numpy.exp(-(t - 1.0)**2 / (0.2)**2) \
+ numpy.sin(t) \
- 2.0
x, x_array = golden_section(f,[0.2, 0.5], 1.e-4)
print('t* = {}, f(t*) = {}, N steps = {}'.format(x, f(x), len(x_array)-1))
t = numpy.linspace(0, 2, 200)
fig = plt.figure(figsize=(8, 6))
axes = fig.add_subplot(1, 1, 1)
axes.plot(t, f(t))
axes.grid()
axes.set_xlabel("t (days)")
axes.set_ylabel("People (N)")
axes.set_title("Decrease in Population due to SPAM Poisoning")
axes.plot(x_array, f(x_array),'ko')
axes.plot(x_array[0],f(x_array[0]),'ro')
axes.plot(x_array[-1],f(x_array[-1]),'go')
plt.show()
###Output
_____no_output_____
###Markdown
Scipy OptimizationScipy contains a lot of ways for optimization!
###Code
import scipy.optimize as optimize
print(optimize.golden(f, brack=(0.2, 0.25, 0.5)))
optimize?
###Output
_____no_output_____ |
Copy_of_SVM_Assignment_1.ipynb | ###Markdown
Equation of the orange line:y=-4x+1Equation of the blue line:y=-5x+1You have to find out which line will act as a better classifier according to the SVM algorithm and why?
###Code
data = pd.DataFrame(data=X,columns=['x1','x2'])
data['y'] = y
data.head(10)
class1,class2 = list(data[data.y==1][['x1','x2']].values),list(data[data.y==0][['x1','x2']].values)
# Function to find d for both the lines
def d_calc(data=data,line=1):
res = 0
# Lines : y = -5x + 1 ; y = -4x + 1
# For the first line(blue)
if line == 1:
c1,c2 = [],[]
# Support vector for class1(y=1)
for i in range(len(class1)):
dist = abs((class1[i][1] + (5*class1[i][0]) - 1))/np.sqrt((5)**2 + 1**2)
c1.append(dist)
# Support vector for class2(y=0)
for i in range(len(class2)):
dist = abs((class2[i][1] + (5*class2[i][0]) - 1))/np.sqrt((5)**2 + 1**2)
c2.append(dist)
res = min(c1) + min(c2)
if line == 2:
c1,c2 = [],[]
# Support vector for class1(y=1)
for i in range(len(class1)):
dist = abs((class1[i][1] + (4*class1[i][0]) - 1))/np.sqrt((4)**2 + 1**2)
c1.append(dist)
# Support vector for class2(y=0)
for i in range(len(class2)):
dist = abs((class2[i][1] + (4*class2[i][0]) - 1))/np.sqrt((4)**2 + 1**2)
c2.append(dist)
res = min(c1) + min(c2)
return res
# Function to decide the svm decision boundary
def svm_bound(data=data):
d1 = d_calc(data,line=1)
d2 = d_calc(data,line=2)
if d1 > d2:
return "blue"
else:
return "orange"
line_color = svm_bound(data=data)
print("The {} line is a better classifier".format(line_color.upper()))
###Output
The ORANGE line is a better classifier
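###Markdown
An equivalent vectorized check (a sketch, assuming the data frame `data` built above): for a line $y = -kx + 1$, i.e. $kx + y - 1 = 0$, the margin proxy used here is the minimum point-to-line distance of each class, summed.
###Code
pts = data[['x1', 'x2']].values
cls = data['y'].values
for k, name in [(5, 'blue'), (4, 'orange')]:
    dist = np.abs(k * pts[:, 0] + pts[:, 1] - 1) / np.sqrt(k**2 + 1)
    margin = dist[cls == 1].min() + dist[cls == 0].min()   # support-vector distances of both classes
    print(name, margin)
###Output
_____no_output_____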
|
wp6/analyse/backup_old/ok_enhancer_analyses.ipynb | ###Markdown
Prediction of TF co-occurrences for different cell lines (e.g. A375_enhancers.bed) and their enhancer regions. e.g. Data example:chr1 100503870 100506200 RP4-714D9.5,HIAT1
###Code
from tfcomb import CombObj
C = CombObj()
###Output
_____no_output_____
###Markdown
Automated Pipeline for market basket analyses for enhancer cell lines
###Code
from tfcomb import CombObj
genome_path="../testdaten/hg19_masked.fa"
motif_path="../testdaten/HOCOMOCOv11_HUMAN_motifs.txt"
result_path="./results/"
def do_market_basket_analyses_for_cell_line(cell_line_name: str, rel_path: str ):
'''
Does the market basket analysis for the cell line `cell_line_name`, using the region data at `rel_path`.
Saves the data to name.pkl file.
e.g.:
rel_path: "../testdaten/enhancers/A375_enhancers.bed"
'''
print(f'Starting with tfbs-detection and market basket analyses for cell_line: {cell_line_name}, data path:{rel_path}.')
comb = CombObj()
comb.TFBS_from_motifs(regions= rel_path,
motifs=motif_path,
genome=genome_path,
threads=4)
print(f'TFBS detection is done for cell_line: {cell_line_name}')
print(f'Start market basket analyses for cell line: {cell_line_name}')
comb.market_basket(threads=10)
print(f'Finished market basket analyses for cell line: {cell_line_name}')
print(f'Found rules: {len(comb.rules)}')
comb.to_pickle(f'{result_path}{cell_line_name}_complete.pkl')
print(f'Saved complete rules to {result_path}{cell_line_name}_complete.pkl')
print(f'Find significat rules for cell line: {cell_line_name}')
selected = comb.select_significant_rules(plot=False)
print(f'Finished selection')
print(f'count of selected rules: {len(selected.rules)}')
selected.to_pickle(f'{result_path}{cell_line_name}_significant.pkl')
print(f'Saved complete rules to {result_path}{cell_line_name}_significant.pkl')
###Output
_____no_output_____
###Markdown
This reads in our enhancer region files for the different cell lines and then saves the TF co-occurrences for each into a .pkl file
###Code
from os import listdir
from os.path import isfile, join
enhancer_path="../testdaten/enhancers/"
def read_in_file_names_of_folder(rel_path:str):
return [f for f in listdir(rel_path) if isfile(join(rel_path, f))]
cell_line_names = read_in_file_names_of_folder(rel_path=enhancer_path)
for cell_line in cell_line_names:
cell_line_name = cell_line.split('.')[0]
print(cell_line)
print(cell_line_name)
do_market_basket_analyses_for_cell_line(cell_line_name=cell_line_name,
rel_path=f"{enhancer_path}{cell_line}")
###Output
_____no_output_____
###Markdown
Analysis: comparison between leukocytes: CD4+ T helper cells and CD8+ T suppressor cells, and Caco-2_enhancers_significant (colon cancer cells)
###Code
A = CombObj().from_pickle(f"{result_path}CD4+_enhancers_significant.pkl")
A.prefix = "CD4+"
B = CombObj().from_pickle(f"{result_path}CD8+_enhancers_significant.pkl")
B.prefix = "CD8+"
A_c = CombObj().from_pickle(f"{result_path}CD4+_enhancers_complete.pkl")
A_c.prefix = "CD4+"
B_c = CombObj().from_pickle(f"{result_path}CD8+_enhancers_complete.pkl")
B_c.prefix = "CD8+"
C = CombObj().from_pickle(f"{result_path}Caco-2_enhancers_significant.pkl")
C.prefix = "Caco-2"
A.TFBS
A.rules[:10]
print(f"CD4+: {A}")
print(f"CD8+: {B}")
print(f"Caco-2: {C}")
print(f"CD4+_complete: {A_c}")
print(f"CD8+_complete: {B_c}")
compare_obj = A_c.compare(B_c)
compare_obj.rules
compare_obj.plot_heatmap()
selection = compare_obj.select_rules()
selection.rules
selection.plot_network()
selection.rules[-10:]
test = A.get_pair_locations("SP1","SP2")
test
A = CombObj().from_pickle(f"{result_path}CD4+_enhancers_significant.pkl")
A.prefix = "CD4+"
B = CombObj().from_pickle(f"{result_path}CD8+_enhancers_significant.pkl")
B.prefix = "CD8+"
compare_obj = A.compare(B)
compare_obj.select_rules()
###Output
INFO: Selecting rules for contrast: ('CD4+', 'CD8+')
INFO: measure_threshold is None; trying to calculate optimal threshold
INFO: Creating subset of rules using thresholds
###Markdown
Find binding sites for TFs in the regions with the binding motifs of the HOCOMOCO file. Todo: things to iterate: Use another motif file? Use another genome file?
###Code
C.TFBS_from_motifs(regions="../testdaten/enhancers/A375_enhancers.bed",
motifs="../testdaten/HOCOMOCOv11_HUMAN_motifs.txt",
genome="../testdaten/hg19_masked.fa",
threads=4)
C.TFBS[:10]
###Output
_____no_output_____
###Markdown
Market Basket analysis for co-occurrences:
###Code
C.market_basket(threads=10)
C.rules
_ = C.plot_heatmap()
_ = C.plot_bubble()
selected = C.select_significant_rules()
selected
selected.rules
top_rules_100 = C.select_top_rules(n=100)
top_rules_100.rules
top_rules_100.to_pickle("./results/A375_enhancers_selected_100.pkl")
top_rules_100.plot_network()
top_rules_100.plot_network(color_node_by="TF1", color_edge_by="zscore")
top_rules_100.plot_network(engine="fdp")
selected.plot_network()
selected.rules
selected.plot_network()
###Output
_____no_output_____ |
old_and_other_codes/Tobaco3482_BERT.ipynb | ###Markdown
###Code
import numpy as np
import pandas as pd
import os
import pathlib
import tensorflow_hub as hub
!pip install tensorflow_text
import tensorflow_text as text
data_root = pathlib.Path('/content/drive/MyDrive/tobaco_OCR/')
print(data_root)
for item in data_root.iterdir():
print(item)
def get_file_paths_and_labels(data_root):
text_paths = [str(path) for path in data_root.glob('*/*.txt')]
labels = [p.split("/")[-2] for p in text_paths]
return text_paths, labels
text_paths, labels = get_file_paths_and_labels(data_root)
print(text_paths)
print(labels)
print(len(text_paths))
print(len(labels))
df = pd.DataFrame(list(zip(text_paths, labels)),
columns =['text_path', 'data_label'])
df.head()
# import re
# def preprocess_text(text_string):
# preprocessed_string = re.sub(r'[^\w\s]','',text_string)
# preprocessed_string = preprocessed_string.replace('\n',' ')
# preprocessed_string = re.sub(' +', ' ', preprocessed_string)
# return preprocessed_string
def get_text_from_path(path):
with open(path) as f:
lines = f.readlines()
lines = ' '.join(lines)
f.close()
return lines
out_text = get_text_from_path('/content/drive/MyDrive/tobaco_OCR/ADVE/0000435350.txt')
# out_text = preprocess_text(out_text)
print(out_text)
## Tokenize, Lemmatize, stopwords removal
# import spacy
# import nltk
# nlp = spacy.load("en", disable=['parser', 'tagger', 'ner'])
# from nltk.corpus import stopwords
# nltk.download('stopwords')
# stops = stopwords.words("english")
# def normalize(comment, lowercase, remove_stopwords):
# if lowercase:
# comment = comment.lower()
# comment = nlp(comment)
# lemmatized = list()
# for word in comment:
# lemma = word.lemma_.strip()
# if lemma:
# if not remove_stopwords or (remove_stopwords and lemma not in stops):
# lemmatized.append(lemma)
# return " ".join(lemmatized)
# normalize("counting playing the Home", lowercase=True, remove_stopwords=True)
# preprocess_url = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
# encoder_url = "https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/4"
preprocess_url = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
encoder_url = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-12_H-128_A-2/2"
bert_preprocess_model = hub.KerasLayer(preprocess_url)
bert_model = hub.KerasLayer(encoder_url)
# def getvector(s):
# text_preprocessed = bert_preprocess_model([s])
# bert_results = bert_model(text_preprocessed)
# out = bert_results['pooled_output']
# return np.array(out).flatten()
# vec_df = pd.DataFrame({}, columns = [str(i) for i in range(128)])
# for idx, this_path in enumerate(df['text_path']):
# out_text = get_text_from_path(this_path) # get text from the filepath
# # out_text = preprocess_text(out_text) # preprocess the text such as removing \n,.
# # out_text = normalize(out_text, lowercase=True, remove_stopwords=True) # apply spacy
# vec_df.loc[len(vec_df.index)] = getvector(out_text) # get vectorize using bert
# print(idx)
texts = [get_text_from_path(this_path) for this_path in text_paths]
text_preprocessed = bert_preprocess_model(texts)
bert_results = bert_model(text_preprocessed)
out = bert_results['pooled_output']
out = np.array(out)
vec_df = pd.DataFrame(out)
vec_df['data_label'] = df['data_label']
from sklearn import preprocessing
le = preprocessing.LabelEncoder()
vec_df['data_label']= le.fit_transform(vec_df['data_label'])
vec_df['data_label'].value_counts()
vec_df.head()
X =vec_df.iloc[:,:-1]
y =vec_df['data_label']
print(X.shape)
print(y.shape)
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state =42)
from sklearn.ensemble import RandomForestClassifier
model=RandomForestClassifier(n_estimators=100)
model.fit(X_train,y_train)
y_pred=model.predict(X_test)
from sklearn import metrics
print("Accuracy:",metrics.accuracy_score(y_test, y_pred))
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
from sklearn.metrics import classification_report
print(classification_report(y_test,y_pred))
###Output
_____no_output_____ |
A3/.ipynb_checkpoints/dirty values and outliers-checkpoint.ipynb | ###Markdown
FIT5196 Assessment 3 Group Number: 102 Student Name: Haoheng Zhu Student ID: 30376467Date: 03/10/2019Version: 1Environment: Python 3.6.5 and Jupyter notebookLibraries used: * pandas (for dataframes, included in Anaconda Python 3.6.5) * re (for regular expression, included in Anaconda Python 3.6.5) * networkx (NetworkX is a Python package for creating and analysing networks)* sklearn.linear_model (Ordinary least squares Linear Regression)* statsmodels.api (for the estimation of many different statistical models)* contextlib (for common tasks involving the with statement)* tqdm (Progress bar)* numpy (for scientific computing with Python) IntroductionIn this assignment, we are supposed to:Perform graphical and/or non-graphical EDA methods to understand the data first and then find the data problems. * Detect and fix errors in G102_dirty_data.csv* Detect and remove outlier rows in G102_outlier_data.csv (outliers are to be found w.r.t. delivery_fee attribute)* Impute the missing values in G102_missing_data.csvThis assignment mainly tests the capability to distinguish and handle three types of __missing data__ (Missing at random, Missing completely at random, Missing not at random), and the capability to detect and fix outliers using different approaches.
###Code
import pandas as pd #pandas tables
import numpy as np #numpy for linear algrebra solution
import matplotlib.pyplot as plt #EDA
%matplotlib inline
pd.set_option('display.max_rows', 500) #display of pandas
pd.set_option('display.max_columns', 500) #display of pandas
pd.set_option('display.width', 1000) #display of pandas
import math #math operations
from datetime import datetime #datetime manipulation
import tqdm #progress bar
from tqdm import tqdm_notebook as tqdm #progress bar
from tqdm.autonotebook import tqdm #progress bar
#progress bar
from sklearn import linear_model #linear regression
import statsmodels.api as sm #linear regression
import re #regex
from contextlib import suppress #suppress error
import networkx as nx #networkx to find shortest distance.
###Output
D:\Anacoda\lib\site-packages\tqdm\autonotebook\__init__.py:14: TqdmExperimentalWarning: Using `tqdm.autonotebook.tqdm` in notebook mode. Use `tqdm.tqdm` instead to force console mode (e.g. in jupyter console)
" (e.g. in jupyter console)", TqdmExperimentalWarning)
###Markdown
A. Dirty data This task is to **detect and fix errors** in Group102_dirty_data.csv According to the assignment specifications: * There is __at least one__ anomaly in the dataset from each category of the data anomalies (i.e.,syntactic, semantic, and coverage).* In the file _dirty_data.csv, any row can carry no more than one anomaly. (i.e.there can only be one anomaly in a single row and all anomalies are fixable)* There are no data anomalies in the file _outlier_data.csv, only outliers.Similarly, there are no data anomalies other than missing value problems in the file_missing_data.csv
###Code
dirty = pd.read_csv('Group102_dirty_data.csv') #reading data
ori_col = dirty.columns #keeping original columns
dirty['error'] = '' #this column of error will be used to mark the error found in each row
def day(x):
"""
To find if the day is weekday or weekend.
"""
if x == 5 or x == 6:
return 1 #'Saturday' or Sunday
else:
return 0 # Weekday
return
def timeofday(x):
"""
Create a column that encode the meal from Breakfast, Lunch, Dinner into 0,1,2
"""
if x == 'Breakfast':
return 0
elif x == 'Lunch':
return 1
elif x == 'Dinner':
return 2
###Output
_____no_output_____
###Markdown
1. date We will have a look at the column **date**.
###Code
dirty['date'].head()
###Output
_____no_output_____
###Markdown
The __year__ info is the first four digits. We will check how many different years there are.
###Code
dirty['date_aux'] = dirty['date'].apply(lambda x:x[:4])
dirty['date_aux'].unique()
###Output
_____no_output_____
###Markdown
From inspecting the year, we can see that there are _**misformatted**_ values in this column. We can then fix the values to the same format YYYY-MM-DD.
###Code
PATTERNS = [
# 0) 1-12-1963 => 1963-12-01
(re.compile(r'(\d{1,2})-(\d{1,2})-(\d{4})$'), '{2}-{1:0>2}-{0:0>2}'),
# 1) 1789-7-14 => 1789-07-14
(re.compile(r'(\d{4})-(\d{1,2})-(\d{1,2})$'), '{0}-{1:0>2}-{2:0>2}'),
]
def correct(date):
"""
Correct the date into order of YYYY-MM-DD
"""
with suppress(ValueError):
return str(int(date))
for pattern, formater in PATTERNS:
match = pattern.match(date)
if match is not None:
return formater.format(*match.groups())
return date
def fix_month(date):
"""
Correct the date from YYYY-DD-MM into YYYY-MM-DD
"""
date = date.split('-') #split the YYYY-MM-DD, into [YYYY, MM, DD]
fixed = date[0]
if int(date[1]) > 12: #if the numeric value larger than 12
fixed = fixed + str('-') + date[2] + str('-') + date[1]
else:
fixed = fixed + str('-') + date[1] + str('-') + date[2]
return fixed
###Output
_____no_output_____
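###Markdown
A quick illustration of the two helpers on hypothetical date strings (these examples are not taken from the dataset):
###Code
for example in ['1-12-2018', '2018-2-3', '2018-25-03']:
    # correct() normalises the ordering/padding, fix_month() swaps day and month if needed
    print(example, '->', fix_month(correct(example)))
###Output
_____no_output_____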
###Markdown
We will create a column __'date_fix'__ to hold the correct value. After getting the date column in the correct format, we will compare the date_fix and date columns. The differences will be spotted, marked in the error column, and corrected.
###Code
dirty['date_fix'] = dirty['date'].apply(lambda x : correct(x)) # fix the format to YYYY MM DD
dirty['date_fix'] = dirty['date_fix'].apply(lambda x: fix_month(x)) # incase the format is DD MM then fix to MM DD
print('There are {} errors in column date'.format(len(dirty[dirty['date_fix'] != dirty['date']][['date_fix','date']])))
###Output
There are 37 errors in column date
###Markdown
We then correct the date column according to the difference found, and mark the row's error as 'date'
###Code
#we only loop through the row that problems spotted
for index, loc in dirty[dirty['date_fix'] != dirty['date']][['date_fix','date']].iterrows():
if dirty.at[index,'error'] == '': #if error is not spotted for the row
dirty.at[index,'date'] = dirty.at[index,'date_fix'] #fix the date
dirty.at[index,'error'] += 'date' #mark in the error column the problem found
###Output
_____no_output_____
###Markdown
We fixed and marked the rows; from now on we can continue and fix other columns. 2. order_type In order to check if the __order_type__ column is correct, we will look at the time of the order.
###Code
dirty['compared_time'] = dirty['time'].apply(lambda x: datetime.strptime(x,'%H:%M:%S')) #strip time
###Output
_____no_output_____
###Markdown
Create a function to find the meal based on given time.
###Code
def find_meal(time): #find meal based on given time.
"""
look at the time the order is made.
"""
if time.hour < 12: #Breakfast
meal = 'Breakfast'
elif time.hour < 16: #Lunch
meal = 'Lunch'
else: #Dinner
meal = 'Dinner'
return meal
###Output
_____no_output_____
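###Markdown
A quick illustration of find_meal on a few hypothetical times (not taken from the dataset):
###Code
for t in ['08:30:00', '13:15:00', '19:45:00']:
    print(t, find_meal(datetime.strptime(t, '%H:%M:%S')))  # expect Breakfast, Lunch, Dinner
###Output
_____no_output_____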
###Markdown
We will find the __order_type__ based on the given time, then save the value to a new column and check for differences with the order_type column.
###Code
dirty['order_type_timechecked'] = dirty['compared_time'].apply(lambda x:find_meal(x)) # Find meal based on given time
print('There are {} errors in column order_type'.format(sum(dirty['order_type_timechecked'] != dirty['order_type']))) #from here we can find 37 differences
###Output
There are 37 errors in column order_type
###Markdown
We can fix the order_type column based on the data found in order_type_timechecked now.
###Code
#fixing problem of order_type based on time
for index, loc in dirty[dirty['order_type_timechecked'] != dirty['order_type']].iterrows():
if dirty.at[index,'error'] == '':
dirty.at[index,'order_type'] = dirty.at[index,'order_type_timechecked'] #update the correct value
dirty.at[index,'error'] += 'order_type' #mark the error of the row as order_type
###Output
_____no_output_____
###Markdown
3. order_items In order to check if the **order_items** were correct, we would need to know the price of each items, and the list of items sold for each meal. For lists of meals, we will get the items from data file outlier, because outliers only contains outlier data, and does not have errors in categorical columns.
###Code
outlier_meal = pd.read_csv('Group102_outlier_data.csv') #read data
Breakfast = [] #initiate Breakfast list
for index, each in outlier_meal[outlier_meal['order_type'] == 'Breakfast'].iterrows(): #look at all rows of Breakfast
for x in re.findall(r'[a-zA-Z&]+', outlier_meal.at[index, 'order_items']): #loop through the list
if x not in Breakfast:
Breakfast.append(x)
Lunch = [] #initiate lunch list
for index, each in outlier_meal[outlier_meal['order_type'] == 'Lunch'].iterrows(): #look at all rows of Lunch
for x in re.findall(r'[a-zA-Z&]+', outlier_meal.at[index, 'order_items']): #loop through the list
if x not in Lunch:
Lunch.append(x)
Dinner = [] #initiate Dinner list
for index, each in outlier_meal[outlier_meal['order_type'] == 'Dinner'].iterrows(): #look at all rows of Dinner
for x in re.findall(r'[a-zA-Z&]+', outlier_meal.at[index, 'order_items']): #loop through the list
if x not in Dinner:
Dinner.append(x)
###Output
_____no_output_____
###Markdown
In order to find the price of each item, we will use the linear algebra API of numpy (`numpy.linalg.solve`).
###Code
#Pancake, coffee, Cereal, Eggs
#Calculate the price of item in breakfast
#these are parameters of all the breakfast orders of Pancake, Coffee, Cereal, Eggs in outlier data first five rows
a = np.array([[8,9,1,10],
[0,0,3,6],
[3,0,4,4],
[8,6,8,0],
])
b = np.array([502.50,195.00,244.75,407.00]) #these are corresponding price of the breakfast order above
breakfast_price = np.linalg.solve(a, b) #solve using numpy.linalg.solve
bprice = {} #initiate pricelist of breakfast
i= 0 #initiate index
for item in ['Pancake', 'Coffee', 'Cereal', 'Eggs']:
bprice[item] = breakfast_price[i] #assignment corresponding price
i+=1
#Steak, Salad, Chicken, Burger, Fries
lprice = {} #initiate pricelist of lunch
#these are parameters of all the lunch orders of Pancake, Coffee, Cereal, Eggs in outlier data first five rows
a = np.array([[7,3,3,2,10],
[1,3,9,0,3],
[0,1,0,0,6],
[10,10,8,2,8],
[4,0,10,10,4],
])
b = np.array([644.6,420.6,89.2,1036.0,858.0]) #these are corresponding price of the lunch order above
lunch_price = np.linalg.solve(a, b)#solve using numpy.linalg.solve
i=0
for item in ['Steak', 'Salad', 'Chicken', 'Burger','Fries']:
lprice[item] = lunch_price[i]
i+=1
#['Salmon', 'Fish&Chips', 'Shrimp', 'Pasta']
dprice = {}
#these are parameters of all the dinner orders of Pancake, Coffee, Cereal, Eggs in outlier data first five rows
a = np.array([[3,1,6,0],
[4,0,3,5],
[0,0,1,5],
[0,5,0,4],
])
b = np.array([482,463.5,191.5,285])#these are corresponding price of the dinner order above
dinner_price = np.linalg.solve(a, b)#solve using numpy.linalg.solve
i=0
for item in ['Salmon', 'Fish&Chips', 'Shrimp', 'Pasta']:
dprice[item] = dinner_price[i]
i+=1
###Output
_____no_output_____
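###Markdown
As a quick sanity check (a sketch), the solved prices should reproduce the order totals of the last system that was set up (the dinner orders, since `a` and `b` still hold that system here):
###Code
print(np.allclose(a @ dinner_price, b))  # True if the recovered dinner prices reproduce the totals
print(bprice, lprice, dprice)            # the three recovered price lists
###Output
_____no_output_____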
###Markdown
After collecting the item prices and the menu items for Breakfast, Lunch and Dinner, we will check for errors in the order_items column.
###Code
#fix the wrong item in order_item
def order_item_fix(meal,items,total_price):
"""
At first, look at the meal (Breakfast, Lunch, Dinner) to find the price list.
Then check the items, if any of the which does not belong to the price list.
The item that does not belong to the price list is marked as out_list
The price for this item will be found by deducting price of other items from total price,
and then divided by the quantity of the item.
After the price of item found, look up in the price list to find the name of the item and replace the wrong
name of the item with the correct name found from list.
"""
menu_qtt = re.findall(r'[0-9]+', items) #quantities of each item in order
menu_item = re.findall(r'[a-zA-Z&]+', items) #name of each item in order
count = 0 #counting variable of item not in list
# let's loop through meal with take_price list, and meal_list
# look at meal variable to find the meal_list and assign the take_price list accordingly
if meal == 'Breakfast':
take_price = bprice
meal_list = Breakfast
if meal == 'Lunch':
take_price = lprice
meal_list = Lunch
if meal == 'Dinner':
take_price = dprice
meal_list = Dinner
#Find the item that do not belong to the meal_list
i = 0
for item in menu_item: #loop through item in menu
if item in meal_list: #if the item in meal list
total_price = total_price - int(menu_qtt[i])*take_price[item] #deducting price*quantity from total price
i+=1
else: #if the item is not in meal list, error found
out_list = item #mark the item
count = int(menu_qtt[i]) #count how many of that item
i+=1
price_o_outlist = total_price/count #price of the item not in the list
for item, price in take_price.items(): #lopp through the take_price item, and compare the price
if abs(price_o_outlist - price) < 0.1: #if the price found, then the name of the item found
fix_out_list = item
items = items.replace(out_list,fix_out_list) #replace the name of the item by correct item name
return items #return the correct item list
###Output
_____no_output_____
###Markdown
Before fixing with order_item_fix, we will create a function that checks whether order_items contains only valid items, to save time and to record the indices of the rows that have errors in order_items.
###Code
def order_item_check(meal,items):
"""
This is to mark the error for each row that has an issue with order_items
"""
error = 0
menu_qtt = re.findall(r'[a-zA-Z&]+', items) #enlist all the items
if meal == 'Breakfast': #grasping meal_list
meal_list = Breakfast
if meal == 'Lunch':
meal_list = Lunch
if meal == 'Dinner':
meal_list = Dinner
for item in menu_qtt: #check if all items in the correct list
if item not in meal_list:
error = 1 #if not then return error
break
return error
###Output
_____no_output_____
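###Markdown
A quick illustration on hypothetical order strings (not from the dataset; it assumes Coffee is on the breakfast menu and Pasta on the dinner menu, as derived above):
###Code
print(order_item_check('Breakfast', "[('Coffee', 2), ('Pasta', 1)]"))  # expect 1: Pasta is not a breakfast item
print(order_item_check('Dinner', "[('Pasta', 1)]"))                    # expect 0: Pasta is a dinner item
###Output
_____no_output_____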
###Markdown
We will create a column 'order_items_menucheck' to see which rows have a wrong set of order_items
###Code
dirty['order_items_menucheck'] = dirty[['order_type','order_items']].apply(lambda x: order_item_check(*x), axis=1)
print('There are {} errors in column order_items'.format(len(dirty[dirty['order_items_menucheck'] == 1][['order_items']])))
###Output
There are 37 errors in column order_items
###Markdown
So we have 37 rows that have errors in **items**. We can fix these errors by using the function order_item_fix
###Code
# loop through rows that have difference
for index, loc in dirty[dirty['order_items_menucheck'] == 1][['order_type','order_items','order_price']].iterrows():
dirty.at[index,'order_items'] = order_item_fix(dirty.at[index,'order_type'],dirty.at[index,'order_items'],dirty.at[index,'order_price']) #update
dirty.at[index,'order_items_menucheck'] = 0 # mark that the column has been fixed
dirty.at[index,'error'] = 'order_items' # mark the errors in column
###Output
_____no_output_____
###Markdown
4. customer_lat, customer_lon __Customer latitude__ normally has a value less than zero (e.g. -37) due to the geolocation of Melbourne, and longitude is around 144. However, by looking at the rows we can see some rows having a customer_lat value of about 144. We will inspect this.
###Code
# some value of customer_lat should be swap with customer_lon
dirty[dirty['customer_lat'] > dirty['customer_lon']][['customer_lat','customer_lon']]
###Output
_____no_output_____
###Markdown
So there are __four rows__ that have swapped values of longitude and latitude; we can fix them.
###Code
for index, loc in dirty[dirty['customer_lat'] > dirty['customer_lon']].iterrows():
if dirty.at[index,'error'] == '': #check if any error found before
#update
dirty.at[index,'customer_lat'],dirty.at[index,'customer_lon'] = dirty.at[index,'customer_lon'],dirty.at[index,'customer_lat']
#mark the error in column
dirty.at[index,'error'] += 'long_lat_swap'
###Output
_____no_output_____
###Markdown
We also noticed before that some rows do not have a negative sign for customer_lat.
###Code
#some customer_lat does not have negative sign
len(dirty[-1*dirty['customer_lat'] < 0])
###Output
_____no_output_____
###Markdown
We can fix these values by adding the negative sign.
###Code
for index, loc in dirty[-1*dirty['customer_lat'] < 0].iterrows():
if dirty.at[index,'error'] == '':
dirty.at[index,'customer_lat'] = -1*dirty.at[index,'customer_lat']
dirty.at[index, 'error'] = 'customer_lat'
###Output
_____no_output_____
###Markdown
5. order_price
###Code
def total_price_check(meal,items,total_price):
"""
Check if the total price of the meal is correct given the meal (eg: breakfast, lunch, dinner),
items of the meal, and the total price. Return the calculated price to compare with the total_price
later on.
"""
menu_qtt = re.findall(r'[0-9]+', items)
menu_item = re.findall(r'[a-zA-Z&]+', items)
count = 0
if meal == 'Breakfast':
take_price = bprice
meal_list = Breakfast
if meal == 'Lunch':
take_price = lprice
meal_list = Lunch
if meal == 'Dinner':
take_price = dprice
meal_list = Dinner
calculated = 0
i = 0
for item in menu_item:
if item in meal_list:
calculated = calculated + int(menu_qtt[i])*take_price[item]
i+=1
return calculated
def diff(a,b): # check if difference
if abs(a-b) < 0.01: #difference of 1 cent will be considered as the same
return 0 # return 0 if no difference
else:
return 1 # return 1 if difference
###Output
_____no_output_____
###Markdown
We will create a column __calculated_total_price__, and see if that column is different from the order_price.
###Code
#create a column to store the calculate total price
dirty['calculated_total_price'] = dirty[['order_type','order_items','order_price']].apply(lambda x: total_price_check(*x), axis=1)
#mark any row that has difference
dirty['error_total_price'] = dirty[['calculated_total_price','order_price']].apply(lambda x:diff(*x),axis =1)
print('There are {} errors in order_price'.format(len(dirty[dirty['error_total_price'] == 1])))
###Output
There are 37 errors in order_price
###Markdown
So there are 37 errors in total price, which is not calculated correctly. We can fix it based on the calculated_total_price
###Code
for index, loc in dirty[dirty['error_total_price'] == 1].iterrows():
dirty.at[index,'order_price'] = dirty.at[index,'calculated_total_price'] #update
dirty.at[index,'error'] = 'wrong order_price' #mark error
###Output
_____no_output_____
###Markdown
6. branch_code We will inspect the unique values of this column.
###Code
#check how many unique branch_code
dirty['branch_code'].unique()
###Output
_____no_output_____
###Markdown
Clearly there are some errors in lower case. We can fix these, but we are not sure whether the uppercase codes are all correct. The other way of checking the branch_code is to look at the distance between the customer's geolocation and the branch's geolocation, which will be discussed further.
###Code
#check
dirty['branch_code_check'] = dirty['branch_code'].apply(lambda x:x.upper())
len(dirty[dirty['branch_code_check'] != dirty['branch_code']])
###Output
_____no_output_____
###Markdown
Now we can fix the lowercase
###Code
for index, loc in dirty[dirty['branch_code_check'] != dirty['branch_code']].iterrows():
dirty.at[index, 'branch_code'] = dirty.at[index,'branch_code_check']
dirty.at[index,'error'] = 'branch_code'
###Output
_____no_output_____
###Markdown
7. distance_to_customer_KM In order to work with __geolocation data__, we need branches.csv, nodes.csv, and edges.csv. From these files we can build a network of nodes with edge lengths and use it to compute distances. From the node data, we create a column node.
###Code
restaurants_csv = pd.read_csv('branches.csv') #read data
nodes = pd.read_csv('nodes.csv') #read nodes
edges = pd.read_csv('edges.csv') #read edges
nodes_dict = nodes.T.to_dict() #transpose table
def get_node(lat,lon): # look up the node id that matches a (lat, lon) pair
    for i in range(len(l)):
        if l[i]['lat'] == lat and l[i]['lon'] == lon:
            return int(l[i]['node'])
    return None # no matching node found
l = [] #store the node dictionary entries in this list l
for i in range(len(nodes_dict)):
l.append(nodes_dict[i])
dirty['node'] = dirty[['customer_lat','customer_lon']].apply(lambda x: get_node(*x), axis=1)
#to minimize error in calculation, we will round the value to 3 digits after the decimal point
dirty['distance_to_customer_KM'] = dirty['distance_to_customer_KM'].apply(lambda x: round(x,3))
###Output
_____no_output_____
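###Markdown
As a side note (not part of the original cleaning pipeline): `get_node` scans the whole node list for every row, so a dictionary keyed by the coordinate pair gives the same result in O(1) per lookup. A minimal sketch, assuming the lat, lon and node columns of nodes.csv used above:
###Code
# hypothetical O(1) alternative to get_node, built once from the nodes dataframe
node_lookup = {(row['lat'], row['lon']): int(row['node']) for _, row in nodes.iterrows()}

def get_node_fast(lat, lon):
    # returns the node id, or None if the exact (lat, lon) pair is not in nodes.csv
    return node_lookup.get((lat, lon))
###Output
_____no_output_____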
###Markdown
We will then check whether all nodes were obtained correctly by checking for null values.
###Code
#check if all node correctly found
dirty['node'].isnull().sum()
###Output
_____no_output_____
###Markdown
So far, all nodes have been obtained correctly. We can now build the network G using the networkx package.
###Code
G = nx.Graph() #initiate graph G
nodes['node'] = nodes['node'].apply(lambda x:int(x)) #node as integer value
G.add_nodes_from(list(nodes['node'])) #add node to G
edges = edges[['u','v','distance(m)','street type','speed(km/h)']] #create edges
edges_dis = edges[['u','v','distance(m)']] #edge based on distance
edges_distance_tuple = [tuple(x) for x in edges_dis.values] #u,v, and distance to tuple
for start, end, length in edges_distance_tuple: #add u,v and distance to G
G.add_edge(int(start), int(end), length=length)
# Get node for restaurant
restaurants_csv['node'] = restaurants_csv[['branch_lat','branch_lon']].apply(lambda x: get_node(*x), axis=1)
# Get node of each restaurant and store in NS, TP, BK for further calculation
NS = int(restaurants_csv[restaurants_csv['branch_code'] == 'NS']['node'][0])
TP = int(restaurants_csv[restaurants_csv['branch_code'] == 'TP']['node'][1])
BK = int(restaurants_csv[restaurants_csv['branch_code'] == 'BK']['node'][2])
def distance(u,v):
"""
Calculating distance from two nodes
"""
dist = 0. #initate a float value for dist
dist = nx.shortest_path_length(G, u, v,weight="length")/1000 #distance in km
return round(dist,3)
def get_r_node(res_name):
"""
Get restaurant node given the branch_code
"""
if res_name == 'NS':
return NS
elif res_name == 'BK':
return BK
else:
return TP
###Output
_____no_output_____
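###Markdown
Note that `nx.shortest_path_length` with a `weight` argument uses Dijkstra's algorithm internally, which is consistent with specification 8. An equivalent, more explicit sketch (illustration only, not used by the pipeline):
###Code
def distance_dijkstra(u, v):
    # explicit Dijkstra call; should agree with distance() above (result in km, rounded to 3 decimals)
    return round(nx.dijkstra_path_length(G, u, v, weight="length")/1000, 3)
###Output
_____no_output_____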
###Markdown
We create a column br_node holding the node in network G for each branch_code. Then a column distance_check is created with the computed distance between that node and the customer's node. Finally, distance_check and distance_to_customer_KM are compared to find any errors.
###Code
#get branch node
dirty['br_node'] = dirty['branch_code'].apply(lambda x:get_r_node(x))
#calculate the distance between the branch_node and the customer's node
dirty['distance_check'] = dirty[['br_node','node']].apply(lambda x: distance(*x), axis=1)
#compare the value between distance_check and distance_to_customer_KM
dirty[dirty['distance_check'] != dirty['distance_to_customer_KM']][['br_node','node','branch_code','distance_check','distance_to_customer_KM','error']].head()
###Output
_____no_output_____
###Markdown
From the table above, we can see that some rows have a __different value__ in distance_check and distance_to_customer_KM; we do not yet know which one is correct. There are two possibilities: either distance_to_customer_KM is wrong and distance_check (computed from the branch_code, assuming it is correct) is right, or distance_to_customer_KM is correct and the branch_code should be adjusted accordingly. Furthermore, some of these rows already have a branch_code error recorded in the error column because their branch_code was in lowercase. For those rows, distance_to_customer_KM must be correct, since specification 3 of the assignment states that a dirty row contains no more than one anomaly. For these rows we therefore adjust branch_code again so that it matches distance_to_customer_KM.
###Code
# assign these value to dataframe temp
temp = dirty[dirty['distance_check'] != dirty['distance_to_customer_KM']][['br_node','node','distance_check','branch_code','distance_to_customer_KM','error']]
len(temp[temp['error'] == 'branch_code'])
###Output
_____no_output_____
###Markdown
As discussed above, there are 19 such rows. In order not to violate specification 3, we fix the branch_code of these rows according to the distance. For the rest, we fix the distance according to the **branch_code**.
###Code
def find_branch_given_dist(u,dist):
"""
Given node and distance to node, we can
return the name of the branch_code
"""
distance_to_NS = distance(u,NS)
distance_to_TP = distance(u,TP)
distance_to_BK = distance(u,BK)
if abs(dist - distance_to_NS) < 0.01:
return 'NS'
elif abs(dist - distance_to_TP) < 0.01:
return 'TP'
elif abs(dist - distance_to_BK) < 0.01:
return 'BK'
for index, loc in temp[temp['error'] == 'branch_code'].iterrows():
dirty.at[index, 'branch_code'] = find_branch_given_dist(dirty.at[index, 'node'],dirty.at[index, 'distance_to_customer_KM'])
dirty.at[index, 'error'] = 'branch_code'
###Output
_____no_output_____
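###Markdown
Note that `find_branch_given_dist` returns `None` when the given distance is not within 0.01 km of the distance to any branch; the later `isnull()`/`notnull()` filters rely on this. A hypothetical usage on a single row would look like:
###Code
# illustrative only: the branch implied by one row's node and recorded distance (a string, or None)
first_idx = dirty.index[0]
implied_branch = find_branch_given_dist(dirty.at[first_idx, 'node'], dirty.at[first_idx, 'distance_to_customer_KM'])
###Output
_____no_output_____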
###Markdown
Now we can fix the error of the **wrong distance**.
###Code
# store the rows that has difference in fix_distance
fix_distance = dirty[dirty['distance_check'] != dirty['distance_to_customer_KM']]
# filter only the one that the column error has empty value
fix_distance = fix_distance[fix_distance['error'] == '']
# take a look at the data of distance
fix_distance[['distance_to_customer_KM','distance_check','error']].head()
###Output
_____no_output_____
###Markdown
From the table above we cannot be sure whether distance_to_customer_KM is erroneous or whether the error lies in the branch_code. We will **investigate further** by calculating the distance from the customer's node to each of the three branches.
###Code
for index, loc in fix_distance.iterrows():
fix_distance.at[index,'to_NS'] = distance(fix_distance.at[index,'node'],NS)
fix_distance.at[index,'to_TP'] = distance(fix_distance.at[index,'node'],TP)
fix_distance.at[index,'to_BK'] = distance(fix_distance.at[index,'node'],BK)
fix_distance[['branch_code','distance_to_customer_KM','distance_check','to_NS','to_TP','to_BK']].head()
###Output
_____no_output_____
###Markdown
From the table above, some distance_to_customer_KM values do not match the correct distance from the customer's node to any of the branches. For these rows we will fix the value based on the **branch_code**. So we check here whether each distance_to_customer_KM matches the distance to any branch; for an easy check, we store the matching branch (if any) in a column named branch_check.
###Code
fix_distance['branch_check'] = '' # create column branch_check
for index, loc in fix_distance.iterrows(): #loop through the row
fix_distance.at[index,'branch_check'] = find_branch_given_dist(fix_distance.at[index,'node'],fix_distance.at[index,'distance_to_customer_KM'])
dirty['branch_check'] = ''# create column branch_check for dirty data
for index, loc in fix_distance.iterrows(): #update to dirty data
dirty.at[index,'branch_check'] = find_branch_given_dist(fix_distance.at[index,'node'],fix_distance.at[index,'distance_to_customer_KM'])
#store the filter in to branch_check_df
branch_check_df = fix_distance[fix_distance['branch_check'].notnull()][fix_distance['branch_check'] != '']
fix_distance[['branch_code','distance_to_customer_KM','distance_check','branch_check']].head()
###Output
_____no_output_____
###Markdown
Looking at the table above, there are some rows whose distance did not match the correct distance to any branch. For these rows, we can adjust the value of distance_to_customer_KM based on branch_code.
###Code
branch_check_df[['branch_code','distance_to_customer_KM','distance_check','branch_check']]
###Output
_____no_output_____
###Markdown
For the rows in which __distance_to_customer_KM__ actually matches a correct distance, we can conclude that the row's __branch_code__ is erroneous, because it is very unlikely that __distance_to_customer_KM__ is also wrong. We fix these rows by adjusting the branch_code based on __distance_to_customer_KM__.
###Code
for index, loc in branch_check_df.iterrows(): #loop through row of branch_check
dirty.at[index,'branch_code'] = dirty.at[index,'branch_check'] #update the branch_code
dirty.at[index,'error'] = 'branch_code' #mark error
fix_dist_index = fix_distance[fix_distance['branch_check'].isnull()] # store filter in fix_dist_index
for index, loc in fix_dist_index.iterrows():
dirty.at[index,'distance_to_customer_KM'] = fix_dist_index.at[index, 'distance_check'] #update correct value
dirty.at[index,'error'] = 'distance_to_customer_KM' #mark error
###Output
_____no_output_____
###Markdown
8. delivery_fee and customerHasloyalty? First, we will look at the **delivery_fee column** to check for errors. The delivery_fee is discounted by 50% if the customer has **_loyalty_**. We will create a column __ori_fee__ containing the original fee: the delivery fee multiplied by two if the customer has loyalty, and by one otherwise.
###Code
dirty['ori_fee'] = 0. #initiate a column of original delivery fee
for index, loc in tqdm(dirty.iterrows()):
if loc['customerHasloyalty?'] == 1:
dirty.at[index,'ori_fee'] = 2*dirty.at[index,'delivery_fee']
else:
dirty.at[index,'ori_fee'] = dirty.at[index,'delivery_fee']
dirty['date'] = dirty['date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d')) # change date column to date time
dirty['weekend'] = dirty['date'].apply(lambda x: day(x.weekday())) #one hot encode weekend/weekday column
dirty['timeofday'] = dirty['order_type'].apply(lambda x: timeofday(x)) #one hot encode order_type column
###Output
_____no_output_____
###Markdown
Among the orders of customers who have loyalty, we will look for outliers. An outlier here means the __*discount*__ may not have been applied appropriately.
###Code
ns_notnull = dirty[dirty['branch_code'] == 'NS'][dirty['customerHasloyalty?'] == 1]
ns = ns_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
ns.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
print('There are {} order that the delivery_fee too high among at branch Nickolson for the customers who have loyalty.'.format(len(ns_notnull[ns_notnull['ori_fee'] > 21]['customerHasloyalty?'])))
###Output
There are 13 order that the delivery_fee too high among at branch Nickolson for the customers who have loyalty.
###Markdown
We found 13 outliers here. We can **deduce** that these customers do not actually have loyalty: their original fees (ori_fee, i.e. the doubled delivery_fee) are far too high compared with those of other loyalty customers, which indicates the recorded delivery_fee was never discounted. We therefore conclude that these customers do not have loyalty.
###Code
#fixing ns loyalty
for index, loc in ns_notnull[ns_notnull['ori_fee'] > 21].iterrows():
dirty.at[index,'customerHasloyalty?'] = 0 #because originally ori_fee was multiplied by 2
dirty.at[index,'ori_fee'] = dirty.at[index,'ori_fee']/2 #Original fees of these case is the same
dirty.at[index,'error'] = 'loyalty'
dirty['error'].value_counts()
###Output
_____no_output_____
###Markdown
Now we will look at the orders of customers who have **_loyalty_** at branch TP.
###Code
#tp_notnull = dirty[dirty['error'] == '']
tp_notnull = dirty[dirty['branch_code'] == 'TP'][dirty['customerHasloyalty?'] == 1]
tp = tp_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
tp.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
print('There are {} order that the delivery_fee too high among at branch TP for the customers who have loyalty.'.format(len(tp_notnull[tp_notnull['ori_fee'] > 21]['customerHasloyalty?'])))
###Output
There are 8 order that the delivery_fee too high among at branch TP for the customers who have loyalty.
###Markdown
We will **fix** these accordingly.
###Code
for index, loc in tp_notnull[tp_notnull['ori_fee'] > 20].iterrows():
dirty.at[index,'customerHasloyalty?'] = 0 #because originally ori_fee was multiplied by 2
dirty.at[index,'ori_fee'] = dirty.at[index,'ori_fee']/2 #Original fees of these case is the same
dirty.at[index,'error'] = 'loyalty'
dirty['error'].value_counts()
###Output
_____no_output_____
###Markdown
Now we will look at the orders of customers who have **_loyalty_** at branch BK.
###Code
bk_notnull = dirty[dirty['branch_code'] == 'BK'][dirty['customerHasloyalty?'] == 1]
bk = bk_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
bk.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
print('There are {} order that the delivery_fee too high among at branch BK for the customers who have loyalty.'.format(len(bk_notnull[bk_notnull['ori_fee'] > 19.5])))
###Output
There are 13 order that the delivery_fee too high among at branch BK for the customers who have loyalty.
###Markdown
We can fix these accordingly. This means that these customers likely do **not** have loyalty, even though the data marks them as having received the **discount**.
###Code
for index, loc in bk_notnull[bk_notnull['ori_fee'] > 20].iterrows():
dirty.at[index,'customerHasloyalty?'] = 0 #because originally ori_fee was multiplied by 2
dirty.at[index,'ori_fee'] = dirty.at[index,'ori_fee']/2 #Original fees of these case is the same
dirty.at[index,'error'] = 'loyalty'
###Output
_____no_output_____
###Markdown
Let us look at the **pattern** for customers who have no loyalty.
###Code
ns_notnull = dirty[dirty['branch_code'] == 'NS'][dirty['customerHasloyalty?'] == 0]
ns = ns_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
ns.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
###Output
D:\Anacoda\lib\site-packages\ipykernel_launcher.py:1: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
"""Entry point for launching an IPython kernel.
###Markdown
The data looks **reasonable** (no obvious outliers) at the NS branch.
###Code
tp_notnull = dirty[dirty['branch_code'] == 'TP'][dirty['customerHasloyalty?'] == 0]
tp = tp_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
tp.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
###Output
D:\Anacoda\lib\site-packages\ipykernel_launcher.py:1: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
"""Entry point for launching an IPython kernel.
###Markdown
The data looks **reasonable** (no obvious outliers) at the TP branch.
###Code
bk_notnull = dirty[dirty['branch_code'] == 'BK'][dirty['customerHasloyalty?'] == 0]
bk = bk_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
bk.plot(y = 'ori_fee', x = 'distance_to_customer_KM', kind = 'scatter')
###Output
D:\Anacoda\lib\site-packages\ipykernel_launcher.py:1: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
"""Entry point for launching an IPython kernel.
###Markdown
Here we can see some **outliers**: some fees were actually **_discounted_** by 50% even though the data does not indicate that those customers have loyalty. We can **conclude** that these customers do have loyalty, so we need to set the loyalty flag to 1.
###Code
# look at how many outliers
len(bk_notnull[bk_notnull['ori_fee'] < 8][bk_notnull['distance_to_customer_KM'] < 8]) + len(bk_notnull[bk_notnull['ori_fee'] < 12][bk_notnull['distance_to_customer_KM'] > 9])
###Output
D:\Anacoda\lib\site-packages\ipykernel_launcher.py:2: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
###Markdown
There are three such outliers. We fix them by setting the loyalty flag to 1 and doubling ori_fee, since the recorded delivery_fee already includes the 50% discount.
###Code
for index, loc in bk_notnull[bk_notnull['ori_fee'] < 8][bk_notnull['distance_to_customer_KM'] < 8].iterrows():
    dirty.at[index,'customerHasloyalty?'] = 1 # the fee was actually discounted, so this customer does have loyalty
    dirty.at[index,'ori_fee'] = dirty.at[index,'ori_fee']*2 # the original (undiscounted) fee is double the recorded fee
    dirty.at[index,'error'] = 'loyalty'
for index, loc in bk_notnull[bk_notnull['ori_fee'] < 12][bk_notnull['distance_to_customer_KM'] > 9].iterrows():
    dirty.at[index,'customerHasloyalty?'] = 1 # the fee was actually discounted, so this customer does have loyalty
    dirty.at[index,'ori_fee'] = dirty.at[index,'ori_fee']*2 # the original (undiscounted) fee is double the recorded fee
    dirty.at[index,'error'] = 'loyalty'
dirty_output = dirty[ori_col]
dirty_output.to_csv('Group102_dirty_data_solution.csv')
###Output
_____no_output_____
###Markdown
B. Missing data According to specification 4 of the assignment: * There are __no__ data anomalies in the file G102_outlier_data.csv, only outliers. Similarly, there are no data anomalies other than missing value problems in the file G102_missing_data.csv.
###Code
missing = pd.read_csv('Group102_missing_data.csv') #read data
org_col = missing.columns # saving the original columns for output
missing.isnull().sum()
###Output
_____no_output_____
###Markdown
There are 100 missing values in column branch_code, 50 in distance_to_customer_KM, and 50 in delivery_fee. According to specification 8 of the assignment: * The restaurant uses Dijkstra's algorithm to calculate the shortest distance between customer and restaurant. We will create a column node to **store** the node (from nodes.csv), a weekend column encoded as 1 for weekend and 0 for weekday, and a timeofday column encoded as 0 for morning, 1 for afternoon and 2 for evening.
###Code
missing['node'] = missing[['customer_lat','customer_lon']].apply(lambda x: get_node(*x), axis=1)
missing['distance_to_customer_KM'] = missing['distance_to_customer_KM'].apply(lambda x: round(x,3))
missing['date'] = missing['date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
missing['weekend'] = missing['date'].apply(lambda x: day(x.weekday()))
missing['timeofday'] = missing['order_type'].apply(lambda x: timeofday(x))
###Output
_____no_output_____
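###Markdown
For reference, the `day()` and `timeofday()` helpers used above are defined earlier in the notebook. A minimal sketch of what they are assumed to do, given the encodings described above (1 for weekend, 0 for weekday; morning 0, afternoon 1, evening 2), is:
###Code
# hypothetical sketches of the encoders used above; the real definitions appear earlier in the notebook
def day_sketch(weekday_index):
    # datetime.weekday(): Monday=0 ... Sunday=6, so 5 and 6 are the weekend
    return 1 if weekday_index >= 5 else 0

def timeofday_sketch(order_type):
    # assumed mapping of order_type to time of day
    return {'Breakfast': 0, 'Lunch': 1, 'Dinner': 2}.get(order_type)
###Output
_____no_output_____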
###Markdown
For further analysis, we will calculate the distance from each customer's node to each branch's node.
###Code
for index, loc in tqdm(missing[missing['branch_code'].isnull()].iterrows()): #loop through the missing rows only
#distance to TP
missing.at[index,'dist_to_TP'] = distance(missing.at[index,'node'],TP)
#distance to NS
missing.at[index,'dist_to_NS'] = distance(missing.at[index,'node'],NS)
#distance to BK
missing.at[index,'dist_to_BK'] = distance(missing.at[index,'node'],BK)
###Output
_____no_output_____
###Markdown
1. branch_code and distance_to_customer_KM Now we will **investigate** missing values for **branch_code** column.
###Code
bmissing = missing[missing['branch_code'].isnull()] #save the missing of branch_code in to bmissing
print('There are {} missing in branch_code column'.format(len(bmissing)))
###Output
There are 100 missing in branch_code column
###Markdown
We will check whether, among these 100 rows, any other column also has missing values.
###Code
bmissing.isnull().sum()
###Output
_____no_output_____
###Markdown
Besides the 100 missing values in branch_code, 50 of those rows are also missing distance_to_customer_KM. The branch_code can be recovered by **_calculating_** the distance from the customer's geolocation to each branch and comparing the calculated distances with distance_to_customer_KM to find the matching branch. Clearly, we can only do this for the 50 rows that still have distance_to_customer_KM. We will create a new column 'fill' to mark values that have been filled (1 filled, 0 not filled), and first solve the 50 missing branch codes whose rows have distance_to_customer_KM available.
###Code
#Here we can find the missing branchcode based on the distance_to_customer_KM and node
missing['fill'] = 0
for index, loc in bmissing.iterrows():
missing.at[index,'branch_code'] = find_branch_given_dist(missing.at[index,'node'],missing.at[index,'distance_to_customer_KM'])
missing.at[index,'fill'] = 1
###Output
_____no_output_____
###Markdown
After filling these missing values, let us look at the remaining missing values in the missing dataframe.
###Code
missing.isnull().sum()
###Output
_____no_output_____
###Markdown
The dist_to_XX columns were created only for comparison purposes, so they are **not important** for the output. We can now see that 50 of the missing branch_code values have been solved. We turn to the remaining 50 missing values in this column: for these rows, we can only rely on the delivery fee to infer the distance. According to specification 7 of the assignment: * 7. Delivery fee is calculated using a **different method for each branch**. The fee depends linearly (but in different ways for each branch) on: * weekend or weekday (1 or 0) - as a continuous variable * time of the day (morning 0, afternoon 1, evening 2) - as a continuous variable * distance between branch and customer So our approach is to fit a **linear regression** for each branch on the rows that have no missing values. From these linear models we use the _coefficients_ to estimate the distance from the customer to each branch (see the sketch below). We then compare these three estimated distances with the customer's actual distances to the three branches and pick the branch with the smallest difference. However, we need to account for the discounted fee of customers who have **_loyalty_**. To simplify the matter, we only consider the **_original delivery fee_** before the discount: we create a column ori_fee which is double the delivery fee if the customer has loyalty, and equal to it otherwise.
###Code
for index, loc in tqdm(missing.iterrows()):
if loc['customerHasloyalty?'] == 1:
missing.at[index,'ori_fee'] = 2*missing.at[index,'delivery_fee']
else:
missing.at[index,'ori_fee'] = missing.at[index,'delivery_fee']
###Output
_____no_output_____
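###Markdown
As a sketch of the inversion used below (an illustration, not part of the original code): if a branch's fitted model is ori_fee ≈ a*distance + b*timeofday + c*weekend, the distance implied by an observed fee can be recovered as distance ≈ (ori_fee - b*timeofday - c*weekend) / a. A hypothetical helper expressing this:
###Code
def estimate_distance(ori_fee, timeofday, weekend, a, b, c):
    # invert the fitted linear model ori_fee ~ a*distance + b*timeofday + c*weekend
    return (ori_fee - b*timeofday - c*weekend) / a
###Output
_____no_output_____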
###Markdown
We will check how many values are missing in ori_fee.
###Code
len(missing[missing['ori_fee'].notnull()])
###Output
_____no_output_____
###Markdown
Clearly 50 values are missing, because there are 50 missing delivery_fee values, as found above; we will deal with these later. Now, for each branch, we look at the rows with a **non-null** ori_fee that also have distance_to_customer_KM available. On these rows, a linear regression model is fitted to find the coefficients.
###Code
ns_notnull = missing[missing['ori_fee'].notnull()]
ns_notnull = ns_notnull[ns_notnull['distance_to_customer_KM'].notnull()]
ns_notnull = ns_notnull[ns_notnull['branch_code'] == 'NS']
ns = ns_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = ns[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = ns['ori_fee']
model_ns = sm.OLS(Y, X).fit()
model_ns.summary()
###Output
_____no_output_____
###Markdown
The **coefficients** for the NS branch are 1.5367 (distance), 0.6063 (timeofday) and 2.1365 (weekend). The adjusted R-squared is 0.998, which means this model is quite good and explains 99.8% of the variance. We will use these coefficients to estimate distances for the rows with missing values; the predicted distance is stored in dist_hat_ns.
###Code
ns_isnull = missing[missing['ori_fee'].notnull()]
ns_isnull = ns_isnull[ns_isnull['distance_to_customer_KM'].isnull()]
ns_isnull.head()
_isnull = ns_isnull
for index, loc in ns_isnull.iterrows():
_isnull.at[index, 'dist_hat_ns'] = (ns_isnull.at[index, 'ori_fee'] -0.6063*ns_isnull.at[index, 'timeofday'] - 2.1365*ns_isnull.at[index, 'weekend'])/1.5367
###Output
_____no_output_____
###Markdown
Then we do the same for the other branches, TP and BK.
###Code
tp_notnull = missing[missing['ori_fee'].notnull()]
tp_notnull = tp_notnull[tp_notnull['distance_to_customer_KM'].notnull()]
tp_notnull = tp_notnull[tp_notnull['branch_code'] == 'TP']
tp = tp_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = tp[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = tp['ori_fee']
model_tp = sm.OLS(Y, X).fit()
model_tp.summary()
###Output
_____no_output_____
###Markdown
We have **coefficients** 1.2657, 0.8480, 1.1228 for TP
###Code
tp_isnull = missing[missing['ori_fee'].notnull()]
tp_isnull = tp_isnull[tp_isnull['distance_to_customer_KM'].isnull()]
for index, loc in tp_isnull.iterrows(): # iterate the rows with missing distance (same index set as ns_isnull)
_isnull.at[index, 'dist_hat_tp'] = (tp_isnull.at[index, 'ori_fee'] -0.8487*tp_isnull.at[index, 'timeofday'] - 1.1237*tp_isnull.at[index, 'weekend'])/1.2655
bk_notnull = missing[missing['ori_fee'].notnull()]
bk_notnull = bk_notnull[bk_notnull['distance_to_customer_KM'].notnull()]
bk_notnull = bk_notnull[bk_notnull['branch_code'] == 'BK']
bk = bk_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = bk[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = bk['ori_fee']
model_bk = sm.OLS(Y, X).fit()
model_bk.summary()
###Output
_____no_output_____
###Markdown
We have **coefficients** = 1.5373 , 1.4764, 2.5715 for BK
###Code
bk_isnull = missing[missing['ori_fee'].notnull()]
bk_isnull = bk_isnull[bk_isnull['distance_to_customer_KM'].isnull()]
for index, loc in bk_isnull.iterrows():
_isnull.at[index, 'dist_hat_bk'] = (bk_isnull.at[index, 'ori_fee'] -1.4893*bk_isnull.at[index, 'timeofday'] - 2.5655*bk_isnull.at[index, 'weekend'])/1.5362
###Output
_____no_output_____
###Markdown
After getting the predicted values, we can inspect further.
###Code
_isnull[['dist_hat_tp','dist_hat_bk','dist_hat_ns','dist_to_TP','dist_to_BK','dist_to_NS','node']].head()
###Output
_____no_output_____
###Markdown
For instance, looking at the first **predicted row**, the three models predict that, given the delivery_fee, timeofday and weekend variables, the distance is around 4.8 to 6.3, which is very close to the distance from the customer to branch BK. From here we can simply take the minimum difference to infer the branch. This method also works well when the customer's location is nearly equidistant from two or three branches: at index 53, the distances from the three branches are 9.87 (TP), 9.95 (BK) and 9.93 (NS), and the TP linear model predicts the distance with about 97% accuracy while the BK and NS models only reach about 67% and 81% respectively. We can now use this estimation to impute the missing branch_code values and, at the same time, the missing distance_to_customer_KM values.
###Code
def find_min(dist_hat_tp,dist_hat_ns,dist_hat_bk,dist_to_BK,dist_to_TP,dist_to_NS):
"""
Return the branch_code to customer that has the minimum difference, and the distance
"""
tp = abs(dist_hat_tp-dist_to_TP) #difference from predicted distance to actual distance
bk = abs(dist_hat_bk-dist_to_BK) #difference from predicted distance to actual distance
ns = abs(dist_hat_ns-dist_to_NS) #difference from predicted distance to actual distance
dist = min(tp,bk,ns) # minimum of the three diffrence
dist_2cus = 0. #initiate a float value
if dist == tp:
branch_code = 'TP'
dist_2cus = dist_to_TP
elif dist == bk:
branch_code = 'BK'
dist_2cus = dist_to_BK
elif dist == ns:
branch_code = 'NS'
dist_2cus = dist_to_NS
return branch_code, dist_2cus
for index, loc in _isnull.iterrows():
_isnull.at[index,'branch_code'], _isnull.at[index,'distance_to_customer_KM'] \
= find_min(_isnull.at[index,'dist_hat_tp'],\
_isnull.at[index,'dist_hat_ns'],\
_isnull.at[index,'dist_hat_bk'],\
_isnull.at[index,'dist_to_BK'],\
_isnull.at[index,'dist_to_TP'],\
_isnull.at[index,'dist_to_NS']) #assign predict value to _isnull data frame
_isnull[['dist_hat_tp','dist_hat_bk','dist_hat_ns','dist_to_TP','dist_to_BK','dist_to_NS','branch_code','distance_to_customer_KM','delivery_fee','weekend','timeofday']].head()
###Output
_____no_output_____
###Markdown
After imputing by minimum difference, we can see that the distance_to_customer_KM values make sense given the delivery_fee, timeofday and weekend. Now we can write these values back to the missing dataframe.
###Code
for index, loc in _isnull.iterrows(): #update the pred
missing.at[index,'branch_code'],missing.at[index,'distance_to_customer_KM'] = _isnull.at[index,'branch_code'],_isnull.at[index,'distance_to_customer_KM']
###Output
_____no_output_____
###Markdown
2. Missing in delivery_fee For delivery_fee we can again use **linear regression models** for each branch. According to specification 7, we look at the rows that have no null values, fit a linear regression model per branch, and then use these models to predict delivery_fee for the missing rows. * 7. Delivery fee is calculated using a different method for each branch. The fee depends linearly (but in different ways for each branch) on: * weekend or weekday (1 or 0) - as a continuous variable * time of the day (morning 0, afternoon 1, evening 2) - as a continuous variable * distance between branch and customer Again, we use the ori_fee column instead of delivery_fee to simplify the issue. After obtaining the predicted ori_fee values, we convert back to delivery_fee using the loyalty value.
###Code
ns_notnull = missing[missing['ori_fee'].notnull()]
ns_notnull = ns_notnull[ns_notnull['distance_to_customer_KM'].notnull()]
ns_notnull = ns_notnull[ns_notnull['branch_code'] == 'NS']
ns = ns_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = ns[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = ns['ori_fee']
model_ns = sm.OLS(Y, X).fit()
pred_ori_ns = model_ns.predict(missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'NS'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'NS'][['ori_fee']].iterrows():
missing.at[index,'ori_fee'] = pred_ori_ns[index]
tp_notnull = missing[missing['ori_fee'].notnull()]
tp_notnull = tp_notnull[tp_notnull['distance_to_customer_KM'].notnull()]
tp_notnull = tp_notnull[tp_notnull['branch_code'] == 'TP']
tp = tp_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = tp[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = tp['ori_fee']
model_tp = sm.OLS(Y, X).fit()
pred_ori_tp = model_tp.predict(missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'TP'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'TP'][['ori_fee']].iterrows():
missing.at[index,'ori_fee'] = pred_ori_tp[index]
bk_notnull = missing[missing['ori_fee'].notnull()]
bk_notnull = bk_notnull[bk_notnull['distance_to_customer_KM'].notnull()]
bk_notnull = bk_notnull[bk_notnull['branch_code'] == 'BK']
bk = bk_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = bk[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = bk['ori_fee']
model_bk = sm.OLS(Y, X).fit()
pred_ori_bk = model_bk.predict(missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'BK'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in missing[missing['delivery_fee'].isnull()][missing['branch_code'] == 'BK'][['ori_fee']].iterrows():
missing.at[index,'ori_fee'] = pred_ori_bk[index]
print('The adjusted R squared for TP, BK, and NS are {:.3}, {:.3} and {:.3}'.format(model_tp.rsquared_adj,model_bk.rsquared_adj,model_ns.rsquared_adj))
###Output
The adjusted R squared for TP, BK, and NS are 0.998, 0.995 and 0.998
###Markdown
Looking at the values above, all three models explain over **99%** of the variance, so we can be confident that the predicted delivery_fee for each branch is very close to reality. We can now **update** the delivery_fee values.
###Code
missing['delivery_fee'] = missing['ori_fee']/(1+missing['customerHasloyalty?']) # halve the fee when loyalty == 1, leave it unchanged when loyalty == 0
###Output
_____no_output_____
###Markdown
We can **double check** again with the missing data.
###Code
missing.isnull().sum()
###Output
_____no_output_____
###Markdown
Apart from the extra columns created for the EDA task, the original columns are now free of missing data. We can produce the **output**.
###Code
missing_output = missing[org_col]
missing_output.to_csv('Group102_missing_data_solution.csv') #produce output
###Output
_____no_output_____
###Markdown
C. Outlier In this task we explore the 'outlier_data' csv file and detect outliers. There are no data anomalies in this file; all the data is clean. Thus, we will use pandas boxplots and some other plotting techniques to explore the data. If a quantity is univariate, we believe the inter-quartile range (IQR) approach is sufficient for detecting outliers. If it is multivariate, we will use a combination of the linear-regression-residual method and the IQR method to detect outliers. * create some new columns for analysis purposes * univariate -> Inter-Quartile Range * multivariate -> Linear Regression Residuals & Inter-Quartile Range
###Code
outlier = pd.read_csv("Group102_outlier_data.csv") #read outlier data
ori_col = outlier.columns #keeping original columns
outlier['ori_fee'] = 0. #initiate a column of original delivery fee
# generate original delivery fee by checking customer loyalty status
# if customer loyalty status has value 1, original delivery fee is twice as the column 'delivery_fee'
for index, loc in tqdm(outlier.iterrows()):
if loc['customerHasloyalty?'] == 1:
outlier.at[index,'ori_fee'] = 2*outlier.at[index,'delivery_fee'] # create new column 'ori_fee' and stores its original delivery_fee
else:
outlier.at[index,'ori_fee'] = outlier.at[index,'delivery_fee'] #c reate new column 'ori_fee' and stores its original delivery_fee
def qtt_items(order):
'''
takes an order from dataframe, returns the total item quantities in the order
'''
menu_qtt = re.findall(r'[0-9]+', order) # use re to find all the numerical data
qtt = 0
for each in menu_qtt:
qtt +=int(each) # increment the total quantity by each found matching object
return qtt
# create some columns to assist future statistical analysis
# create column distance to customer
outlier['distance_to_customer_KM'] = outlier['distance_to_customer_KM'].apply(lambda x: round(x,3))
# create column date
outlier['date'] = outlier['date'].apply(lambda x: datetime.strptime(x, '%Y-%m-%d'))
# create column weekend
outlier['weekend'] = outlier['date'].apply(lambda x: day(x.weekday()))
# create column time of day
outlier['timeofday'] = outlier['order_type'].apply(lambda x: timeofday(x))
###Output
_____no_output_____
###Markdown
Define a function that returns outliers by the __IQR__ method. * IQR = Q3 - Q1 * an __*outlier*__ is any data point smaller than $(q1 - 1.5*iqr)$ or greater than $(q3 + 1.5*iqr)$ * returns (index_list, outlier_list, count of outliers)
###Code
def iqr_outlier(df,column):
'''
takes dataframe (df) and column name (column) as input, returns a list of outliers using Inter-Quantile-Range (IQR)
return outlier_list, count of outliers
'''
q3 = df[column].quantile(.75)
q1 = df[column].quantile(.25)
iqr = q3-q1
outlier_list = [] # initiate outlier_list in order to store all outliers
index_list = []
# if the data is smaller than (q1 - 1.5*iqr), it's considered outlier by IQR
# if the data is greater than (q3 + 1.5*iqr), it's considered outlier by IQR
for index, rows in df[column].items():
if rows<(q1-1.5*iqr):
outlier_list.append(rows)
index_list.append(index)
elif rows>(q3+1.5*iqr):
outlier_list.append(rows)
index_list.append(index)
count = len(outlier_list)
return index_list,outlier_list, count
###Output
_____no_output_____
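###Markdown
An equivalent vectorized sketch of the same rule (illustration only; the loop-based iqr_outlier above is what the rest of the notebook uses):
###Code
def iqr_outlier_vectorized(df, column, k=1.5):
    # same IQR rule expressed with a boolean mask instead of a row loop
    q1, q3 = df[column].quantile(0.25), df[column].quantile(0.75)
    iqr = q3 - q1
    mask = (df[column] < q1 - k*iqr) | (df[column] > q3 + k*iqr)
    outliers = df.loc[mask, column]
    return list(outliers.index), list(outliers), int(mask.sum())
###Output
_____no_output_____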
###Markdown
Use iqr_outlier to detect __potential outliers__. Examine the detected outliers to make sure they are really outliers.
###Code
outlier.boxplot(column='delivery_fee')
delivery_outlier = iqr_outlier(outlier, 'delivery_fee')
delivery_outlier[2]
###Output
_____no_output_____
###Markdown
There are 45 __potential outliers__ in column 'delivery_fee', but we do not think this is right, because 'delivery_fee' is the fee after the loyalty discount has been applied. Thus we examine the 'ori_fee' column next.
###Code
ori_outlier = iqr_outlier(outlier, 'ori_fee')
ori_outlier[2]
###Output
_____no_output_____
###Markdown
There are 27 __potential outliers__ in column 'ori_fee'. This tells us that these 27 customers are either very far from or very close to the three branches. But we cannot simply classify these 27 rows as outliers, because the delivery fee also depends on other variables such as distance_to_customer_KM, timeofday and weekend. So this is still not sufficient: we need to fit regression lines per branch_code and then apply IQR to the residuals. __'delivery_fee'__ is determined by four variables, so a simple IQR is not enough to flag outliers; we combine the linear-regression-residual method with the IQR method. The variables that determine delivery_fee are: 1. branch_code 2. weekend or weekday 3. time of the day 4. distance
###Code
# initiate a new column to store the prediction delivery_fee after fitting with some variables
outlier['ori_fee_hat'] =0.
###Output
_____no_output_____
###Markdown
Calculate the __coefficients__ of 'distance_to_customer_KM', 'timeofday' and 'weekend' with respect to 'ori_fee'.
###Code
# prepare the columns we need in order to perform calculation
o_bk_notnull = outlier[outlier['ori_fee'].notnull()]
o_bk_notnull = o_bk_notnull[o_bk_notnull['branch_code'] == 'BK']
o_bk = o_bk_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
# use StatsModel.OLS().fit() to calculate the coefficients
X = o_bk[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = o_bk['ori_fee']
model_o_bk = sm.OLS(Y, X).fit()
model_o_bk.params
plt.boxplot(o_bk['ori_fee'])
plt.boxplot(model_o_bk.resid)
###Output
_____no_output_____
###Markdown
We can see that the residuals have a **_narrower_** IQR than the original data.
###Code
# prepare the columns we need in order to perform calculation
o_ns_notnull = outlier[outlier['ori_fee'].notnull()]
o_ns_notnull = o_ns_notnull[o_ns_notnull['branch_code'] == 'NS']
o_ns = o_ns_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
# use StatsModel.OLS().fit() to calculate the coefficients
X = o_ns[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = o_ns['ori_fee']
model_o_ns = sm.OLS(Y, X).fit()
print(model_o_ns.params)
plt.boxplot(model_o_ns.resid)
o_tp_notnull = outlier[outlier['ori_fee'].notnull()]
o_tp_notnull = o_tp_notnull[o_tp_notnull['branch_code'] == 'TP']
o_tp = o_tp_notnull[['ori_fee','distance_to_customer_KM','timeofday','weekend']]
X = o_tp[['distance_to_customer_KM','timeofday','weekend']] # three predictor variables for the multiple regression
Y = o_tp['ori_fee']
model_o_tp = sm.OLS(Y, X).fit()
print(model_o_tp.params)
plt.boxplot(model_o_tp.resid)
###Output
distance_to_customer_KM 1.234974
timeofday 0.778199
weekend 1.397721
dtype: float64
###Markdown
From the residual plots, we believe residuals are a good identifier of outliers. We have also obtained the coefficients for each branch. Next, we combine residuals and IQR to get a more accurate result. To generate a 'residual' column, we first predict the delivery fee based on 'distance', 'time of day', 'weekend or weekday' and 'branch_code'.
###Code
# use the fitted NS-branch model to predict ori_fee from 'distance_to_customer_KM','timeofday','weekend'
pred_ns = model_o_ns.predict(outlier[outlier['branch_code'] == 'NS'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in outlier[outlier['branch_code'] == 'NS'].iterrows():
# create 'ori_fee_hat' column that stores the prediction of delivery fee after training the model
outlier.at[index,'ori_fee_hat'] = pred_ns[index]
# use the fitted TP-branch model to predict ori_fee from 'distance_to_customer_KM','timeofday','weekend'
pred_tp = model_o_tp.predict(outlier[outlier['branch_code'] == 'TP'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in outlier[outlier['branch_code'] == 'TP'].iterrows():
# create 'ori_fee_hat' column that stores the prediction of delivery fee after training the model
outlier.at[index,'ori_fee_hat'] = pred_tp[index]
# use the fitted BK-branch model to predict ori_fee from 'distance_to_customer_KM','timeofday','weekend'
pred_bk = model_o_bk.predict(outlier[outlier['branch_code'] == 'BK'][['distance_to_customer_KM','timeofday','weekend']])
for index, loc in outlier[outlier['branch_code'] == 'BK'].iterrows():
# create 'ori_fee_hat' column that stores the prediction of delivery fee after training the model
outlier.at[index,'ori_fee_hat'] = pred_bk[index]
# generate 'residual' column by subtracting 'ori_fee_hat' from 'ori_fee'
outlier['residual'] = outlier['ori_fee_hat']-outlier['ori_fee']
# boxplot residual by 'NS' branch
outlier[outlier['branch_code']=='NS'].boxplot(column='residual')
# apply iqr_outlier function on residuals by 'NS' branch
iqr_outlier(outlier[outlier['branch_code']=='NS'], 'residual')
###Output
_____no_output_____
###Markdown
Let us have a look at these rows
###Code
outlier.iloc[iqr_outlier(outlier[outlier['branch_code']=='NS'], 'residual')[0], :]
###Output
_____no_output_____
###Markdown
These 10 rows were identified from the residuals and the inter-quartile range: even after the ori_fee values are adjusted for 'branch_code', 'distance_to_customer_KM', 'timeofday' and 'weekend or weekday', they remain far from the rest of the dataset, so we believe these 10 rows are outliers. We want to remove them because they are rare and unusual delivery fees, and keeping them would bias any prediction. A possible reason for these outliers is that the 'NS' branch is the closest restaurant to these customers yet is still far away, or that these customers live very close to the 'NS' branch.
###Code
# boxplot residual by 'TP' branch
outlier[outlier['branch_code']=='TP'].boxplot(column='residual')
# apply iqr_outlier function on residuals by 'TP' branch
iqr_outlier(outlier[outlier['branch_code']=='TP'], 'residual')
###Output
_____no_output_____
###Markdown
These 17 rows were identified from the residuals and the inter-quartile range: after adjusting for 'branch_code', 'distance_to_customer_KM', 'timeofday' and 'weekend or weekday', they remain far from the rest of the TP data, so we believe these 17 rows are outliers. We want to remove them because they are rare and unusual delivery fees, and keeping them would bias any prediction. A possible reason for these outliers is that the 'TP' branch is the closest restaurant to these customers yet is still far away, or that these customers live very close to the 'TP' branch.
###Code
# boxplot residual by 'BK' branch
outlier[outlier['branch_code']=='BK'].boxplot(column='residual')
# apply iqr_outlier function on residuals by 'BK' branch
iqr_outlier(outlier[outlier['branch_code']=='BK'], 'residual')
###Output
_____no_output_____
###Markdown
After adjusting for 'branch_code', 'distance_to_customer_KM', 'timeofday' and 'weekend or weekday', we believe these are the real outliers of the corresponding 'delivery_fee' column. We want to remove these 14 rows because they are rare and unusual delivery fees, and keeping them would bias any prediction. A possible reason for these outliers is that the 'BK' branch is the closest restaurant to these customers yet is still far away, or that these customers live very close to the 'BK' branch. Generate the output file
###Code
# obtain the indices of all the outliers
ns_outlier = iqr_outlier(outlier[outlier['branch_code']=='NS'], 'residual')[0]
tp_outlier = iqr_outlier(outlier[outlier['branch_code']=='TP'], 'residual')[0]
bk_outlier = iqr_outlier(outlier[outlier['branch_code']=='BK'], 'residual')[0]
# generate a list that stores all indices of outliers, then use set() to keep the unique indices
all_outlier = set(ns_outlier + tp_outlier + bk_outlier)
# drop outlier rows
for item in all_outlier:
outlier = outlier.drop(item,axis=0)
# keep only the original columns
outlier_output = outlier[ori_col]
# generate the output file
outlier_output.to_csv('Group102_outlier_data_solution.csv')
###Output
_____no_output_____ |
Projects/German-Traffic-Sign-Recognition/German-Traffic-Sign-Recognition-data-prep.ipynb | ###Markdown
Downloading the data Follow these steps to download and prepare the data: 1. Download the train dataset file from https://www.kaggle.com/meowmeowmeowmeowmeow/gtsrb-german-traffic-sign? 2. Upload it to your Colab environment. After you upload, you should be able to see the file by running the `ls` command. 3. Unzip it (use the code below)
###Code
# Use this code to upload the data as a backup
FILEID='1p7K1Aw-fZUxha-aLczwZ6Z6cHvu1pEN1'
FILENAME='train.zip'
!wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --quiet --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=$FILEID' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=$FILEID" -O $FILENAME && rm -rf /tmp/cookies.txt
ls -lh
###Output
total 250M
drwxr-xr-x 1 root root 4.0K Jul 10 16:29 [0m[01;34msample_data[0m/
-rw-r--r-- 1 root root 250M Jul 22 19:36 train.zip
###Markdown
The dataset size above should be 250M otherwise it wasn't downloaded properly. Try running the download cell again!
###Code
!unzip -q train.zip -d kaggle_original_data
!rm -r kaggle_original_data/__MACOSX/
!mv kaggle_original_data/82373_191501_upload_Train/* kaggle_original_data
!rm -r kaggle_original_data/82373_191501_upload_Train
!ls -l kaggle_original_data | head
from random import shuffle
import os, shutil
# list all labels
label_dirs = [str(label) for label in range(0,43)]
# The path to the directory where the original
# dataset was uncompressed
original_dataset_dir = '/content/kaggle_original_data'
# The directory where we will
# store our smaller dataset
base_dir = '/content/german_traffic_sign'
os.mkdir(base_dir)
# Directories for our training,
# validation and test splits
train_dir = os.path.join(base_dir, 'train')
os.mkdir(train_dir)
validation_dir = os.path.join(base_dir, 'validation')
os.mkdir(validation_dir)
test_dir = os.path.join(base_dir, 'test')
os.mkdir(test_dir)
# Directory with our training/validation/test label pictures
for target_dir in [train_dir, validation_dir, test_dir]:
for label in label_dirs:
dir = os.path.join(target_dir, label)
os.mkdir(dir)
# Copy 70% of each label to train, 15% to valid, and 15% to test directories
for label in label_dirs:
fnames = os.listdir(os.path.join(original_dataset_dir, label))
shuffle(fnames) # shuffling the list
n_img_start_valid = int(len(fnames)*0.7)
n_img_start_test = int(len(fnames)*0.85)
for fname in fnames[:n_img_start_valid]: # train
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(train_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_valid:n_img_start_test]: # valid
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(validation_dir, label, fname)
shutil.copyfile(src, dst)
for fname in fnames[n_img_start_test:]: # test
src = os.path.join(original_dataset_dir, label, fname)
dst = os.path.join(test_dir, label, fname)
shutil.copyfile(src, dst)
###Output
_____no_output_____
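###Markdown
One optional tweak (not in the original notebook): seeding the random module before the shuffle calls in the cell above would make the train/validation/test split reproducible across runs, e.g.:
###Code
import random
random.seed(42)  # fix the shuffle order so repeated runs produce the same split
###Output
_____no_output_____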
###Markdown
As a sanity check, let's count how many pictures we have in each training split (train / validation / test):
###Code
total_train_imgs = 0
total_valid_imgs = 0
for label in label_dirs:
print('total images for label', label, 'in training:', len(os.listdir(os.path.join(train_dir, label))),
'in valid:', len(os.listdir(os.path.join(validation_dir, label))),
'in test:', len(os.listdir(os.path.join(test_dir, label))))
total_train_imgs += len(os.listdir(os.path.join(train_dir, label)))
total_valid_imgs += len(os.listdir(os.path.join(validation_dir, label)))
print('Total number of training images:', total_train_imgs)
print('Total number of validation images:', total_valid_imgs)
###Output
Total number of training images: 27439
Total number of validation images: 5879
|
Notebooks/ORF_CNN_209.ipynb | ###Markdown
ORF recognition by CNN In notebook 105, we used Conv1D layers with filter width=3 and dropout to reduce overfitting; the simulated RNA lengths were 1000. Here we try really short simulated RNA: RNA_LEN=71 and CDS_LEN=63, with the network cut in half (FILTERS=10, NEURONS=10) and 5 epochs.
###Code
import time
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
PC_SEQUENCES=20000 # how many protein-coding sequences
NC_SEQUENCES=20000 # how many non-coding sequences
PC_TESTS=1000
NC_TESTS=1000
RNA_LEN=71 # how long is each sequence
CDS_LEN=63
ALPHABET=4 # how many different letters are possible
INPUT_SHAPE_2D = (RNA_LEN,ALPHABET,1) # Conv2D needs 3D inputs
INPUT_SHAPE = (RNA_LEN,ALPHABET) # Conv1D needs 2D inputs
FILTERS = 10 # how many different patterns the model looks for
NEURONS = 10
DROP_RATE = 0.2
WIDTH = 3 # how wide each pattern is, in bases
STRIDE_2D = (1,1) # For Conv2D how far in each direction
STRIDE = 1 # For Conv1D, how far between pattern matches, in bases
EPOCHS=5 # how many times to train on all the data
SPLITS=5 # SPLITS=3 means train on 2/3 and validate on 1/3
FOLDS=1 # train the model this many times (range 1 to SPLITS)
import sys
try:
from google.colab import drive
IN_COLAB = True
print("On Google CoLab, mount cloud-local file, get our code from GitHub.")
PATH='/content/drive/'
#drive.mount(PATH,force_remount=True) # hardly ever need this
#drive.mount(PATH) # Google will require login credentials
DATAPATH=PATH+'My Drive/data/' # must end in "/"
import requests
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_gen.py')
with open('RNA_gen.py', 'w') as f:
f.write(r.text)
from RNA_gen import *
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_describe.py')
with open('RNA_describe.py', 'w') as f:
f.write(r.text)
from RNA_describe import ORF_counter
r = requests.get('https://raw.githubusercontent.com/ShepherdCode/Soars2021/master/SimTools/RNA_prep.py')
with open('RNA_prep.py', 'w') as f:
f.write(r.text)
from RNA_prep import *
except:
print("CoLab not working. On my PC, use relative paths.")
IN_COLAB = False
DATAPATH='data/' # must end in "/"
sys.path.append("..") # append parent dir in order to use sibling dirs
from SimTools.RNA_gen import *
from SimTools.RNA_describe import ORF_counter
from SimTools.RNA_prep import *
MODELPATH="BestModel" # saved on cloud instance and lost after logout
#MODELPATH=DATAPATH+MODELPATH # saved on Google Drive but requires login
if not assert_imported_RNA_gen():
print("ERROR: Cannot use RNA_gen.")
if not assert_imported_RNA_prep():
print("ERROR: Cannot use RNA_prep.")
from os import listdir
import csv
from zipfile import ZipFile
import numpy as np
import pandas as pd
from scipy import stats # mode
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import KFold
from sklearn.model_selection import cross_val_score
from keras.models import Sequential
from keras.layers import Dense,Embedding,Dropout
from keras.layers import Conv1D,Conv2D
from keras.layers import Flatten,MaxPooling1D,MaxPooling2D
from keras.losses import BinaryCrossentropy
# tf.keras.losses.BinaryCrossentropy
import matplotlib.pyplot as plt
from matplotlib import colors
mycmap = colors.ListedColormap(['red','blue']) # list color for label 0 then 1
np.set_printoptions(precision=2)
# Use code from our SimTools library.
def make_generators(seq_len):
pcgen = Collection_Generator()
pcgen.get_len_oracle().set_mean(seq_len)
tora = Transcript_Oracle()
tora.set_cds_len_mean(CDS_LEN) # CDS=ORF+STOP.
pcgen.set_seq_oracle(tora)
ncgen = Collection_Generator()
ncgen.get_len_oracle().set_mean(seq_len)
return pcgen,ncgen
pc_sim,nc_sim = make_generators(RNA_LEN)
pc_train = pc_sim.get_sequences(PC_SEQUENCES)
nc_train = nc_sim.get_sequences(NC_SEQUENCES)
print("Train on",len(pc_train),"PC seqs")
print("Train on",len(nc_train),"NC seqs")
# Describe the sequences
def describe_sequences(list_of_seq):
oc = ORF_counter()
num_seq = len(list_of_seq)
rna_lens = np.zeros(num_seq)
orf_lens = np.zeros(num_seq)
for i in range(0,num_seq):
rna_len = len(list_of_seq[i])
rna_lens[i] = rna_len
oc.set_sequence(list_of_seq[i])
orf_len = oc.get_max_orf_len()
orf_lens[i] = orf_len
print ("Average RNA length:",rna_lens.mean())
print ("Average ORF length:",orf_lens.mean())
print("PC train")
describe_sequences(pc_train)
print("NC train")
describe_sequences(nc_train)
# Use code from our SimTools library.
X,y = prepare_inputs_len_x_alphabet(pc_train,nc_train,ALPHABET) # shuffles
print("Data ready.")
def make_DNN():
print("make_DNN")
print("input shape:",INPUT_SHAPE)
dnn = Sequential()
#dnn.add(Embedding(input_dim=INPUT_SHAPE,output_dim=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same",
input_shape=INPUT_SHAPE))
dnn.add(Conv1D(filters=FILTERS,kernel_size=WIDTH,strides=STRIDE,padding="same"))
dnn.add(MaxPooling1D())
dnn.add(Flatten())
dnn.add(Dense(NEURONS,activation="sigmoid",dtype=np.float32))
dnn.add(Dropout(DROP_RATE))
dnn.add(Dense(1,activation="sigmoid",dtype=np.float32))
dnn.compile(optimizer='adam',
loss=BinaryCrossentropy(from_logits=False),
metrics=['accuracy']) # add to default metrics=loss
dnn.build(input_shape=INPUT_SHAPE)
#ln_rate = tf.keras.optimizers.Adam(learning_rate = LN_RATE)
#bc=tf.keras.losses.BinaryCrossentropy(from_logits=False)
#model.compile(loss=bc, optimizer=ln_rate, metrics=["accuracy"])
return dnn
model = make_DNN()
print(model.summary())
from keras.callbacks import ModelCheckpoint
def do_cross_validation(X,y):
cv_scores = []
fold=0
mycallbacks = [ModelCheckpoint(
filepath=MODELPATH, save_best_only=True,
monitor='val_accuracy', mode='max')]
splitter = KFold(n_splits=SPLITS) # this does not shuffle
for train_index,valid_index in splitter.split(X):
if fold < FOLDS:
fold += 1
X_train=X[train_index] # inputs for training
y_train=y[train_index] # labels for training
X_valid=X[valid_index] # inputs for validation
y_valid=y[valid_index] # labels for validation
print("MODEL")
# Call constructor on each CV. Else, continually improves the same model.
model = model = make_DNN()
print("FIT") # model.fit() implements learning
start_time=time.time()
history=model.fit(X_train, y_train,
epochs=EPOCHS,
verbose=1, # ascii art while learning
callbacks=mycallbacks, # called at end of each epoch
validation_data=(X_valid,y_valid))
end_time=time.time()
elapsed_time=(end_time-start_time)
print("Fold %d, %d epochs, %d sec"%(fold,EPOCHS,elapsed_time))
# print(history.history.keys()) # all these keys will be shown in figure
pd.DataFrame(history.history).plot(figsize=(8,5))
plt.grid(True)
plt.gca().set_ylim(0,1) # any losses > 1 will be off the scale
plt.show()
do_cross_validation(X,y)
from keras.models import load_model
pc_sim.set_reproducible(True)
nc_sim.set_reproducible(True)
pc_test = pc_sim.get_sequences(PC_TESTS)
nc_test = nc_sim.get_sequences(NC_TESTS)
X,y = prepare_inputs_len_x_alphabet(pc_test,nc_test,ALPHABET)
best_model=load_model(MODELPATH)
scores = best_model.evaluate(X, y, verbose=0)
print("The best model parameters were saved during cross-validation.")
print("Best was defined as maximum validation accuracy at end of any epoch.")
print("Now re-load the best model and test it on previously unseen data.")
print("Test on",len(pc_test),"PC seqs")
print("Test on",len(nc_test),"NC seqs")
print("%s: %.2f%%" % (best_model.metrics_names[1], scores[1]*100))
from sklearn.metrics import roc_curve
from sklearn.metrics import roc_auc_score
ns_probs = [0 for _ in range(len(y))]
bm_probs = best_model.predict(X)
ns_auc = roc_auc_score(y, ns_probs)
bm_auc = roc_auc_score(y, bm_probs)
ns_fpr, ns_tpr, _ = roc_curve(y, ns_probs)
bm_fpr, bm_tpr, _ = roc_curve(y, bm_probs)
plt.plot(ns_fpr, ns_tpr, linestyle='--', label='Guess, auc=%.4f'%ns_auc)
plt.plot(bm_fpr, bm_tpr, marker='.', label='Model, auc=%.4f'%bm_auc)
plt.title('ROC')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.legend()
plt.show()
print("%s: %.2f%%" %('AUC',bm_auc*100.0))
t = time.time()
time.strftime('%Y-%m-%d %H:%M:%S %Z', time.localtime(t))
###Output
_____no_output_____
###Markdown
ConclusionThe CNN is very capable of learning ORF/nonORF from simulated short RNA.
###Code
###Output
_____no_output_____ |
examples/ets_example.ipynb | ###Markdown
Data
###Code
# can also consider transform=False
raw_df = load_iclaims(transform=True)
raw_df.dtypes
df = raw_df.copy()
df.head()
test_size=52
train_df=df[:-test_size]
test_df=df[-test_size:]
ets = ETS(response_col='claims',
date_col='week',
seasonality=52,
seed=2020,
estimator='stan-mcmc',
)
ets.fit(train_df)
predicted_df = ets.predict(df=df, decompose=True)
predicted_df
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col='week',
actual_col='claims',
test_actual_df=test_df)
_ = plot_predicted_components(predicted_df=predicted_df, date_col='week')
###Output
_____no_output_____
###Markdown
Data
###Code
raw_df = load_iclaims()
raw_df.dtypes
df = raw_df.copy()
df.head()
test_size=52
train_df=df[:-test_size]
test_df=df[-test_size:]
ets = ETSMAP(response_col='claims',
date_col='week',
seasonality=52,
seed=2020)
ets.fit(train_df)
predicted_df = ets.predict(df=df, decompose=True)
predicted_df
_ = plot_predicted_data(training_actual_df=train_df,
predicted_df=predicted_df,
date_col='week',
actual_col='claims',
test_actual_df=test_df)
_ = plot_predicted_components(predicted_df=predicted_df, date_col='week')
###Output
/Users/zhishiw/Desktop/uTS-py/orbit/orbit/diagnostics/plot.py:218: UserWarning: This figure was using constrained_layout, but that is incompatible with subplots_adjust and/or tight_layout; disabling constrained_layout.
fig.tight_layout()
###Markdown
Data
###Code
raw_df = load_iclaims()
raw_df.dtypes
df=raw_df.copy()
test_size=52
train_df=df[:-test_size]
test_df=df[-test_size:]
ets = ETSMAP(
response_col='claims',
date_col='week',
seasonality=52,
seed=2020,
)
ets.fit(train_df)
x = np.zeros((3, 5))
x[0].shape
predicted_df = ets.predict(df=df, decompose=True)
predicted_df
_ = plot_predicted_data(training_actual_df=train_df, predicted_df=predicted_df,
date_col='week', actual_col='claims',
test_actual_df=test_df)
_ = plot_predicted_components(predicted_df=predicted_df, date_col='week')
###Output
_____no_output_____ |
src/notebooks/42-custom-linear-regression-fit-seaborn.ipynb | ###Markdown
You can customize the appearance of the **regression fit** in a scatterplot built with [seaborn](http://python-graph-gallery.com/seaborn/).In this example **color**, **transparency** and **width** are controlled through the `line_kws={}` option with the following elements:* `color` : color of the line* `alpha` : opacity value of the line* `lw` : line width
###Code
# library & dataset
import seaborn as sns
import matplotlib.pyplot as plt
df = sns.load_dataset('iris')
# plot
sns.regplot(x=df["sepal_length"], y=df["sepal_width"], line_kws={"color":"r","alpha":0.7,"lw":5})
plt.show()
###Output
_____no_output_____
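###Markdown
The scatter markers can be customized in the same way through the `scatter_kws={}` option (an added variant of the example above; the keys are standard matplotlib scatter arguments).
###Code
# variant: style the points as well as the regression line
sns.regplot(x=df["sepal_length"], y=df["sepal_width"],
            line_kws={"color":"r","alpha":0.7,"lw":5},
            scatter_kws={"color":"grey","alpha":0.4,"s":30})
plt.show()
###Output
_____no_output_____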
###Markdown
Welcome in the introductory template of the python graph gallery. Here is how to proceed to add a new `.ipynb` file that will be converted to a blogpost in the gallery! Notebook Metadata It is very important to add the following fields to your notebook. It helps building the page later on:- **slug**: the URL of the blogPost. It should be exactly the same as the file title. Example: `70-basic-density-plot-with-seaborn`- **chartType**: the chart type like density or heatmap. For a complete list see [here](https://github.com/holtzy/The-Python-Graph-Gallery/blob/master/src/util/sectionDescriptions.js), it must be one of the `id` options.- **title**: what will be written in big on top of the blogpost! use html syntax there.- **description**: what will be written just below the title, centered text.- **keyword**: list of keywords related with the blogpost- **seoDescription**: a description for the bloppost meta. Should be a bit shorter than the description and must not contain any html syntax. Add a chart description A chart example always come with some explanation. It must:contain keywordslink to related pages like the parent page (graph section)give explanations. In depth for complicated charts. High level for beginner level charts Add a chart
###Code
import seaborn as sns, numpy as np
np.random.seed(0)
x = np.random.randn(100)
ax = sns.distplot(x)
###Output
_____no_output_____ |
ExamPrep/SciCompComplete/Assessment 2/SciComp_A2_AllQ/Q2_A2.ipynb | ###Markdown
2a
###Code
#2a
import numpy as np
from scipy.interpolate import interp1d, barycentric_interpolate
import pylab
#Raw data
x = [0.0, 0.1, 0.2, 0.3, 0.4]
fx = [0.000000, 0.078348, 0.138910, 0.192916,0.244981]
#generate the points where we want to evaluate the interpolating functions
x0 = np.linspace(0, 0.4, 100)
#polynomial interpolation - this gives vector y where the polynomial is already evaluated
y0 = barycentric_interpolate(x, fx, x0)
#This gives a polynomial whose plotted points go through the original data points.
#Plotting this polynomial gives me an indication as to the type of spline I will use to construct a function
#for centred finite differencing (CFD)
pylab.plot(x, fx, 'x', label='data points')
pylab.plot(x0, y0, label='Polynomial fit')
pylab.legend()
pylab.show()
#I think a cubic spline would provide the closest function to the interpolated fit.
f_cubic = interp1d(x, fx, kind='cubic')
#plotting the cubic alongside the polynomial
pylab.plot(x0, y0, label='Polynomial fit')
pylab.plot(x0, f_cubic(x0), label='Cubic')
pylab.legend()
pylab.show()
#I can see that the cubic function constructed lies closely with the polynomial so I will use it in my central finite differencing.
#I know central finite differencing includes errors to the order of h squared.
#So by making h suitably small I can reduce the error.
#By creating a function for CFD, I can plot a line onto a graph and see if it follows my expectation.
h=0.0001
def df(x):
return (f_cubic(x+h)-f_cubic(x-h))/(2*h)
#plot all results and the original data
pylab.plot(x, fx, 'x', label='data points')
pylab.plot(x0, y0, label='Polynomial fit')
pylab.plot(x0, f_cubic(x0), label='Cubic')
pylab.plot(x0[1:99], df(x0[1:99]), label='First Differential')
pylab.legend()
pylab.show()
###Output
_____no_output_____
###Markdown
2b
###Code
#Note to self, the function and its derivatives are to be evaluated at x=0
#Defining the function
def f(x):
return np.sin(x)+np.exp(-x)
#Now the exact second derivative is (see markdown below)
def d2f(x):
return -np.sin(x)+np.exp(-x)
# I know that at x=0 the exact derivative, f"(x) = 1.
#Defining the numerical solution to the second order using central finite differencing.
def d2fNum(x,h):
return (f(x+h)-2*f(x)+f(x-h))/(h**2)
print('The estimate for the second derivative at x=0 when h=0.1 is: ',d2fNum(0,0.1))
print('The estimate for the second derivative at x=0 when h=0.5 is: ',d2fNum(0,0.5))
print ('\nThe error on both approximated values (AVs) is of the order h squared.\nThe error can be calculated by subtracting the AVs from the exact value which is 1.\nThe AV error for h=0.1 is:', 1-d2fNum(0,0.1),'\nThe AV error for h=0.5 is:', 1-d2fNum(0,0.5))
print ('Both are negative values because they are overestimates, but they are as expected.\nA bigger h value causes a greater error.')
###Output
The estimate for the second derivative at x=0 when h=0.1 is:  1.00083361116
The estimate for the second derivative at x=0 when h=0.5 is:  1.02100772165
The error on both approximated values (AVs) is of the order h squared.
The error can be calculated by subtracting the AVs from the exact value which is 1.
The AV error for h=0.1 is: -0.000833611160723
The AV error for h=0.5 is: -0.021007721651
Both are negative values because they are overestimates, but they are as expected.
A bigger h value causes a greater error.
###Markdown
The exact second derivative of the function is: $f''(x) = -\sin(x) + e^{-x}$
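For reference, the central finite-difference formula implemented in `d2fNum` is $$f''(x) \approx \frac{f(x+h)-2f(x)+f(x-h)}{h^{2}},$$ which has a truncation error of order $h^{2}$; this is why the error roughly quarters when $h$ is halved, and why the Richardson extrapolation below uses $p=2$.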
###Code
#Now to make a definition of the Richardson extrap.
def G(x,h1,h2,p):
return (((h1/h2)**p)*d2fNum(x,h2)-d2fNum(x,h1))/((h1/h2)**p - 1)
#Know p=2 as it is the order of the errors on the two aproximations.
print('The Richardson Extrapolation for the function is:', G(0,0.5,0.1,2))
print('The error on the Richardson estimate is:', "{:.2e}".format(1-G(0,0.5,0.1,2)),'\nThis is considerably less than the error on the other AVs.')
###Output
The Richardson Extrapolation for the function is: 0.999993023224
The error on the Richardson estimate is: 6.98e-06
This is considerably less than the error on the other AVs.
|
docs/examples/xsarsea.ipynb | ###Markdown
Xsarsea exampleThe Normalized Radar Cross Section (sigma0) as computed from Level-1 SAR data can be detrended in the case of ocean scenes. The goal is to remove the averaged trend (decreasing) of the NRCS with (increasing) incidence angle observed for acquisitions over ocean. The detrend maximizes the contrasts in the image due to geophysical phenomena and improves the visualization experience of ocean scenes. **Sigma0_detrend** is also termed **image roughness** or **nice display**.
###Code
import xsar
import xsarsea
import os
import datetime
# use holoviews for plots
import bokeh.io
bokeh.io.output_notebook()
import holoviews as hv
hv.extension('bokeh')
from holoviews.operation.datashader import datashade,rasterize
###Output
_____no_output_____
###Markdown
read the dataset with xsar
###Code
# get test file. You can replace with an path to other SAFE
filename = xsar.get_test_file('S1A_IW_GRDH_1SDV_20170907T103020_20170907T103045_018268_01EB76_Z010.SAFE')
#filename = xsar.get_test_file('S1B_IW_GRDH_1SDV_20181013T062322_20181013T062347_013130_018428_Z010.SAFE')
filename
# open the dataset with xsar
sar_ds = xsar.open_dataset(filename, resolution={'atrack':10,'xtrack':10})
sar_ds[['sigma0','incidence']]
###Output
_____no_output_____
###Markdown
Sigma0 detrendingSigma0 detrending is done by [xsarsea.sigma0_detrend](../basic_api.rstxsarsea.sigma0_detrend) functionAs the resulting xarray dataset has the same coordinates as the original sigma0, we can add a `sigma0_detrend` variable to the dataset.
###Code
sar_ds['sigma0'] = sar_ds.sigma0.persist()
sar_ds['sigma0_detrend'] = xsarsea.sigma0_detrend(sar_ds.sigma0, sar_ds.incidence).persist()
rasterize(hv.Image(sar_ds.sigma0.sel(pol='VV')).opts(cmap='gray',colorbar=True,tools=['hover'],title="original sigma0")) \
+ rasterize(hv.Image(sar_ds.sigma0_detrend.sel(pol='VV')).opts(cmap='gray',colorbar=True,tools=['hover'],title="detrended sigma0"))
###Output
_____no_output_____
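###Markdown
A small added check: because `sigma0_detrend` shares the coordinates of `sigma0`, ordinary xarray aggregations work on both in the same way.
###Code
# compare scene-average levels of the raw and detrended signals (forces the dask computation)
print(float(sar_ds['sigma0'].sel(pol='VV').mean()),
      float(sar_ds['sigma0_detrend'].sel(pol='VV').mean()))
###Output
_____no_output_____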
###Markdown
Xsarsea exampleThe Normalized Radar Cross Section (sigma0) as computed from Level-1 SAR data can be detrended in the case of ocean scenes. The goal is to remove the averaged trend (decreasing) of the NRCS with (increasing) incidence angle observed for acquisitions over ocean. The detrend maximizes the contrasts in the image due to geophysical phenomena and improves the visualization experience of ocean scenes. **Sigma0_detrend** is also termed **image roughness** or **nice display**.
###Code
import xsar
import xsarsea
import os
import datetime
# use holoviews for plots
import bokeh.io
bokeh.io.output_notebook()
import holoviews as hv
hv.extension('bokeh')
from holoviews.operation.datashader import datashade,rasterize
# optional debug message
import logging
logging.basicConfig()
logging.getLogger('xsarsea').setLevel(logging.INFO) # .setLevel(logging.DEBUG) for more messages
###Output
_____no_output_____
###Markdown
read the dataset with xsar
###Code
# get test file. You can replace with an path to other SAFE
filename = xsar.get_test_file('S1A_IW_GRDH_1SDV_20170907T103020_20170907T103045_018268_01EB76_Z010.SAFE')
#filename = xsar.get_test_file('S1B_IW_GRDH_1SDV_20181013T062322_20181013T062347_013130_018428_Z010.SAFE')
filename
# open the dataset with xsar
sar_ds = xsar.open_dataset(filename, resolution='1000m')
sar_ds[['sigma0','incidence']]
###Output
_____no_output_____
###Markdown
Sigma0 detrendingSigma0 detrending is done by [xsarsea.sigma0_detrend](../basic_api.rstxsarsea.sigma0_detrend) functionAs the resulting xarray dataset has the same coordinates as the original sigma0, we can add a `sigma0_detrend` variable to the dataset.
###Code
sar_ds['sigma0'] = sar_ds.sigma0
sar_ds['sigma0_detrend'] = xsarsea.sigma0_detrend(sar_ds.sigma0, sar_ds.incidence)
(
hv.Image(sar_ds.sigma0.sel(pol='VV')).opts(title="original sigma0") \
+ hv.Image(sar_ds.sigma0_detrend.isel(pol=0)).opts(title="detrended sigma0")
).opts(hv.opts.Image(cmap='gray', clim=(0,0.4)))
###Output
_____no_output_____ |
master/Intro_to_AVO.ipynb | ###Markdown
Introduction to AVOTrying to reproduce some figures in Blangy, JP, 1994, AVO in tranversely isotropic media—An overview. *Geophysics* **59** (5), 775–781. This is a good starting point, because Blangy defined some rocks very fully and clearly, and provides an anisotropic AVO approximation. So, even if we don't look at his equation in particular, it's a good starting point.Related blog post: [The Blangy equation](http://www.agilegeoscience.com/blog/2014/8/7/the-blangy-equation.html?rq=blangy)The usual prelims:
###Code
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
We'll also load Blangy's test data, from his Table 1:
###Code
type1 = {'shale': {'vp':3300., 'vs':1700., 'rho':2350., 'd':0.15, 'e':0.30},
'sand_gas': {'vp':4200., 'vs':2700., 'rho':2350., 'd':0.00, 'e':0.00},
'sand_water': {'vp':4200., 'vs':2100., 'rho':2450., 'd':0.00, 'e':0.00},
}
type2 = {'shale': {'vp':2896., 'vs':1402., 'rho':2250., 'd':0.15, 'e':0.30},
'sand_gas': {'vp':3322., 'vs':2215., 'rho':2000., 'd':0.00, 'e':0.00},
'sand_water': {'vp':3322., 'vs':1402., 'rho':2250., 'd':0.00, 'e':0.00},
}
type3 = {'shale': {'vp':2307., 'vs':1108., 'rho':2150., 'd':0.15, 'e':0.30},
'sand_gas': {'vp':1951., 'vs':1301., 'rho':1950., 'd':0.00, 'e':0.00},
'sand_water': {'vp':1951., 'vs': 930., 'rho':2200., 'd':0.00, 'e':0.00},
}
###Output
_____no_output_____
###Markdown
We'll start with a wet sand case, Type 2. This just defines the rock types we'll need.
###Code
# Upper layer
vp1 = type2['shale']['vp']
vs1 = type2['shale']['vs']
rho1 = type2['shale']['rho']
# Lower layer
vp0 = type2['sand_water']['vp']
vs0 = type2['sand_water']['vs']
rho0 = type2['sand_water']['rho']
# Angle range
theta1 = np.arange(0, 45, 1)
###Output
_____no_output_____
###Markdown
Linear Shuey equationLet's start with a bit of maths — [the 2-term Shuey approximation](http://subsurfwiki.org/wiki/Shuey_equation). I'm using the formulation given by Avesth, P, T Mukerji and G Mavko (2005). *Quantitative seismic interpretation.* Cambridge University Press, Cambridge, UK. $$R(\theta) \approx R(0) + G \sin^2 \theta$$where$$R(0) = \frac{1}{2} \left( \frac{\Delta V_\mathrm{P}}{V_\mathrm{P}} + \frac{\Delta \rho}{\rho} \right)$$and$$G = \frac{1}{2} \frac{\Delta V_\mathrm{P}}{V_\mathrm{P}} - 2 \frac{V^2_\mathrm{S}}{V^2_\mathrm{P}} \left( \frac{\Delta \rho}{\rho} + 2 \frac{\Delta V_\mathrm{S}}{V_\mathrm{S}} \right)$$In these equations, $\Delta V_\mathrm{P}$ means the difference in the velocity of the two layers, and $V_\mathrm{P}$ means the mean of the two layers. Let's make a function to help with this 'difference over mean':
###Code
def dom(upper, lower):
return np.subtract(lower, upper) / np.mean((lower, upper))
###Output
_____no_output_____
###Markdown
First term:
###Code
R0 = 0.5 * (dom(vp1, vp0) + dom(rho1, rho0))
###Output
_____no_output_____
###Markdown
Second term, in two parts:
###Code
G_1 = 0.5 * dom(vp1, vp0)
G_2 = 2 * (np.mean((vs0, vs1))**2 / np.mean((vp0, vp1))**2) * (dom(rho1, rho0) + 2 * dom(vs1, vs0))
G = G_1 - G_2
###Output
_____no_output_____
###Markdown
Put it all together with the angle term, remembering radians:
###Code
shuey = R0 + G * np.sin(np.radians(theta1))**2
plt.plot(shuey, 'g', lw=2)
plt.show()
###Output
_____no_output_____
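###Markdown
As an optional consolidation (a sketch, not part of the original walkthrough), the two-term calculation above can be wrapped in a single function that reuses the `dom()` helper and the variables already defined in this notebook.
###Code
def shuey_two_term(vp1, vs1, rho1, vp0, vs0, rho0, theta):
    """Two-term Shuey approximation: R(theta) = R(0) + G sin^2(theta)."""
    R0 = 0.5 * (dom(vp1, vp0) + dom(rho1, rho0))
    G = 0.5 * dom(vp1, vp0) - 2 * (np.mean((vs0, vs1))**2 / np.mean((vp0, vp1))**2) * (dom(rho1, rho0) + 2 * dom(vs1, vs0))
    return R0 + G * np.sin(np.radians(theta))**2

# quick check against the step-by-step result computed above
assert np.allclose(shuey_two_term(vp1, vs1, rho1, vp0, vs0, rho0, theta1), shuey)
###Output
_____no_output_____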
###Markdown
Compare to ZoeppritzLet's avoid a lot of algebra and use [our library Bruges](https://github.com/agile-geoscience/bruges) for our equations, so we don't have to define everything from scratch.
###Code
import bruges
zoe = bruges.reflection.zoeppritz_rpp(vp1, vs1, rho1,
vp0, vs0, rho0,
theta1)
plt.plot(zoe, 'g', lw=2)
plt.plot(shuey)
plt.ylim(-0.05, 0.2)
plt.axhline(0, color='k')
plt.show()
###Output
_____no_output_____
###Markdown
Q. Try some of the other rocks in Blangy's dataset. What happens if you make a gas reservoir?
###Code
# Upper layer
vp1 = type2['shale']['vp']
vs1 = type2['shale']['vs']
rho1 = type2['shale']['rho']
# Lower layer
vp0 = type2['sand_gas']['vp']
vs0 = type2['sand_gas']['vs']
rho0 = type2['sand_gas']['rho']
zoe_gas = bruges.reflection.zoeppritz_rpp(vp1, vs1, rho1,
vp0, vs0, rho0,
theta1)
plt.plot(zoe, 'g', lw=2)
plt.plot(zoe_gas, 'r')
plt.ylim(-0.2, 0.2)
plt.axhline(0, color='k')
plt.show()
###Output
_____no_output_____
###Markdown
Q. Try some other approximations in the `bruges.reflection` module.
###Code
ar_gas = bruges.reflection.akirichards(vp1, vs1, rho1,
vp0, vs0, rho0,
theta1)
s2_gas = bruges.reflection.shuey2(vp1, vs1, rho1,
vp0, vs0, rho0,
theta1)
s3_gas = bruges.reflection.shuey3(vp1, vs1, rho1,
vp0, vs0, rho0,
theta1)
plt.plot(zoe_gas, label='Zoeppritz')
plt.plot(ar_gas, label='Aki-Richards')
plt.plot(s2_gas, label='Shuey 2-term')
plt.plot(s3_gas, label='Shuey 3-term')
plt.legend()
plt.ylim(-0.25, 0.05)
plt.axhline(0, color='k')
plt.grid()
plt.show()
###Output
_____no_output_____
###Markdown
Q. Can you put the reflection coefficient on a time series and use the convolutional model to make a gather?
###Code
rc = np.zeros((100, theta1.size))
rc[50,:] = zoe_gas
plt.imshow(rc, aspect=0.4)
plt.colorbar(shrink=0.8)
plt.show()
w = bruges.filters.ricker(duration=0.05, dt=0.001, f=50)
plt.plot(w); plt.show()
syn = np.apply_along_axis(lambda t: np.convolve(t, w, mode='same'), arr=rc, axis=0)
plt.imshow(syn, cmap='Greys', aspect=0.4)
plt.colorbar(shrink=0.8)
plt.show()
###Output
_____no_output_____
###Markdown
Notice how, if we formed a stacked volume from, say, 25 degrees of offset, the response would be completely different from the zero-offset response:I'll plot them vertically to be consistent with the other figure:
###Code
y = np.arange(100)
plt.figure(figsize=(4, 8))
plt.plot(syn[:, 0], y, 'r', lw=2) # Zero offset
plt.plot(np.mean(syn[:, 0:25], axis=1), y)
plt.xlim(-0.015, 0.015)
plt.show()
###Output
_____no_output_____ |
D-0 Estadistica.ipynb | ###Markdown
Statistics**Note:**The methods are applied to a population Calculating the mean and the median
###Code
# Calculation of the arithmetic mean
from random import randint
def calculo_media(elementos):
s = sum(elementos)
n = len(elementos)
return s/n
def calculo_mediana(elementos):
n = len(elementos)
elementos.sort()
# even number of elements
if n%2==0:
ind1 = n/2
ind2 = (n/2) +1
# Convert to integer indices
# subtract 1 for zero-based indexing
ind1 = int(ind1) - 1
ind2 = int(ind2) - 1
mediana = (elementos[ind1] + elementos[ind2]) / 2
# odd number of elements
else:
ind = (n + 1 )/2
ind = int(ind)-1
mediana = elementos[ind]
return mediana
elementos = [randint(1,100) for _ in range(18)]
print(f"Elementos en la lista:\n{elementos}")
print("\n")
print("La media es: {0:.2f}".format(calculo_media(elementos)))
print("La mediana es: {0:.2f}".format(calculo_mediana(elementos)))
###Output
Elementos en la lista:
[36, 5, 96, 3, 26, 83, 88, 45, 2, 97, 20, 53, 32, 89, 12, 12, 68, 95]
La media es: 47.89
La mediana es: 40.50
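###Markdown
As a cross-check (added note), the standard library `statistics` module provides `mean` and `median`, which should agree with the hand-written functions above.
###Code
# cross-check with the standard library
import statistics
print("statistics.mean:", round(statistics.mean(elementos), 2))
print("statistics.median:", statistics.median(elementos))
###Output
_____no_output_____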
###Markdown
Calculating the mode
###Code
# Using the Counter class
from random import randint
from collections import Counter
elementos = [randint(1,10) for _ in range(20)]
c = Counter(elementos)
print("Elementos ordenados por el numero de veces que estos aparecen")
print(c.most_common())
print("\nEl mas comun", c.most_common(1))
print(f"Numero: {c.most_common(1)[0][0]}", end="\t" )
print(f"Apariciones: {c.most_common(1)[0][1]}")
###Output
Elementos ordenados por el numero de veces que estos aparecen
[(4, 3), (1, 3), (2, 3), (5, 2), (8, 2), (6, 2), (10, 2), (9, 1), (3, 1), (7, 1)]
El mas comun [(4, 3)]
Numero: 4 Apariciones: 3
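###Markdown
A quick alternative (added note): since Python 3.8 the `statistics` module also offers `multimode`, which returns every most-frequent value and can be used to cross-check the `Counter`-based approach.
###Code
# cross-check with statistics.multimode (requires Python 3.8+)
import statistics
print("statistics.multimode:", statistics.multimode(elementos))
###Output
_____no_output_____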
###Markdown
Calculating the mode and building a frequency table
###Code
# Program that finds the mode(s)
from collections import Counter
from random import randint
def obterner_moda(elementos):
items = Counter(elementos)
frecuencia_elementos = items.most_common()
maxima_frec = frecuencia_elementos[0][1]
modas = []
for i in frecuencia_elementos:
if i [1] == maxima_frec:
modas.append(i[0])
return modas
def imprimir_moda(lista):
for i in lista:
print(i)
def tabla_frecuencias(elementos):
"""imprime una tabla mostrando los elementos y la frecuencia de aparicion"""
tabla = Counter(elementos)
print("Elementos\tFrecuencia")
for elemento, frecuencia in tabla.most_common():
print(f"{elemento}\t\t{frecuencia}")
def tabla_frec_ordenada(elementos):
"""imprime una tabla mostrando los elementos y la frecuencia de aparicion
pero de ordenada por elemento"""
tabla = Counter(elementos)
numeros_ordenados = tabla.most_common()
numeros_ordenados.sort()
print("Elementos\tFrecuencia")
for elemento, frecuencia in numeros_ordenados:
print(f"{elemento}\t\t{frecuencia}")
elementos = [randint(1,10) for _ in range(30)]
modas = obterner_moda(elementos)
print("Moda(s) encontrada(s))")
imprimir_moda(modas)
print("\n")
print("Tabla ordenada por frecuencia")
tabla_frecuencias(elementos)
print("\n\n")
print("Tabla ordenada por elementos")
tabla_frec_ordenada(elementos)
print("\n\n")
###Output
Moda(s) encontrada(s))
5
Tabla ordenada por frecuencia
Elementos Frecuencia
5 6
7 4
8 4
2 4
4 3
9 3
6 2
3 2
10 1
1 1
Tabla ordenada por elementos
Elementos Frecuencia
1 1
2 4
3 2
4 3
5 6
6 2
7 4
8 4
9 3
10 1
###Markdown
Measuring dispersion**The measures of dispersion are:*** Range* Variance* Standard deviation Range of a set of numbers
###Code
# Range of the data
from random import randint
def rango(elementos):
minimo = min(elementos)
maximo = max(elementos)
# Computing the range
rango = maximo - minimo
return minimo, maximo, rango
elementos = [randint(100,1500) for _ in range(50)]
minimo, maximo, rango = rango(elementos)
print("Elementos de la lista:\n\n")
print(f"Minimo:{minimo}\nMaximo:{maximo}\nRango:{rango}")
###Output
Elementos de la lista:
Minimo:111
Maximo:1499
Rango:1388
###Markdown
Variance and standard deviationThe _variance_ is defined by:$$ s^2=\frac{\sum{(x_i-x_{mean})^2}}{n}$$while the _standard deviation_ is just the square root of the variance:$$ s=\sqrt{\frac{\sum{(x_i-x_{mean})^2}}{n}}$$
###Code
# Finding the mean and the standard deviation
from random import randint
def calculo_media(elementos):
suma = sum(elementos)
n_e = len(elementos)
return suma/n_e
def calculo_diferencia(elementos):
"""Diferencia (resta) de dos elementos"""
media = calculo_media(elementos)
# Difference with respect to the mean
diferencia = [i - media for i in elementos]
return diferencia
def calculo_varianza(elementos):
diferencias = calculo_diferencia(elementos)
diff_sqr = [i**2 for i in diferencias]
# Computing the variance
sum_diff_sqr = sum(diff_sqr)
varianza = (sum_diff_sqr) / len(elementos)
return varianza
# Test
numeros = [randint(100,1500) for _ in range(50)]
varianza = calculo_varianza(numeros)
print("La varianza de la lista de elementos es:",end="\t")
print("{:,.2f}".format(varianza))
print("\n\nLa desviacion estandar de los elementos es:",end="\t")
print("{:,.2f}".format(varianza**0.5))
###Output
La varianza de la lista de elementos es: 138,079.41
La desviacion estandar de los elementos es: 371.59
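###Markdown
As a cross-check (added note), numpy computes the same population variance and standard deviation directly; its default `ddof=0` matches the division by `n` used above.
###Code
# cross-check with numpy
import numpy as np
print("np.var: {:,.2f}".format(np.var(numeros)))
print("np.std: {:,.2f}".format(np.std(numeros)))
###Output
_____no_output_____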
###Markdown
Calculating the correlation coefficientThe _correlation coefficient_ is defined by:$$ correlation = \frac{n\sum{xy}-\sum{x}\sum{y}}{\sqrt{(n\sum{x^2}-(\sum{x})^2)(n\sum{y^2}-(\sum{y})^2)}} $$
###Code
from random import randint
import matplotlib.pyplot as plt
def correlacion_xy(x, y):
n_etos = len(x)
# Define the lists
productos = [xi * yi for xi,yi in zip(x,y) ]
sqr_x = [xi**2 for xi in x]
sqr_y = [yi**2 for yi in y]
# Sum the lists
suma_producto = sum(productos)
sumax = sum(x)
sumay = sum(y)
suma_sqrx = sum(sqr_x)
suma_sqry = sum(sqr_y)
# Apply the formula
numerador = (n_etos*suma_producto) - (sumax*sumay)
denominador = ((n_etos*suma_sqrx-(sumax)**2)*(n_etos*suma_sqry-(sumay)**2))**0.5
correlacion = numerador/denominador
return correlacion
listx = [randint(90,100) for _ in range(10)]
listy = [randint(80,100) for _ in range(10)]
print(f"El coeficiente de correlacion es: {correlacion_xy(listx, listy)}")
plt.scatter(listx, listy)
plt.show()
###Output
El coeficiente de correlacion es: -0.6505447983202833
|
11-nested-cross-validation/1_nested-cv_compact.ipynb | ###Markdown
L11: Model Evaluation 4 -- Algorithm Comparison (Nested Cross-Validation) -- Compact version This notebook illustrates how to implement nested cross-validation in scikit-learn. This notebook is a more compact version of the other notebooks [./2_nested-cv_verbose1.ipynb](./2_nested-cv_verbose1.ipynb) and [./3_nested-cv_verbose2.ipynb](./3nested-cv_verbose2.ipynb).Note that due to using `cross_val_score`, we cannot see the best settings for all the outer training folds here.
###Code
%load_ext watermark
%watermark -a 'Sebastian Raschka' -d -p sklearn,mlxtend -v
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.model_selection import train_test_split
from sklearn.model_selection import StratifiedKFold
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from mlxtend.data import mnist_data
from sklearn.metrics import accuracy_score
# Loading and splitting the dataset
# Note that this is a small (stratified) subset
# of MNIST; it consists of 5000 samples only, that is,
# 10% of the original MNIST dataset
# http://yann.lecun.com/exdb/mnist/
X, y = mnist_data()
X = X.astype(np.float32)
X_train, X_test, y_train, y_test = train_test_split(X, y,
test_size=0.2,
random_state=1,
stratify=y)
# Initializing Classifiers
#clf1 = LogisticRegression(random_state=1)
clf2 = KNeighborsClassifier()
clf3 = DecisionTreeClassifier(random_state=1)
#clf4 = SVC(random_state=1)
clf5 = RandomForestClassifier(random_state=1)
# Building the pipelines
#pipe1 = Pipeline([('std', StandardScaler()),
# ('clf1', clf1)])
pipe2 = Pipeline([('std', StandardScaler()),
('clf2', clf2)])
#pipe4 = Pipeline([('std', StandardScaler()),
# ('clf4', clf4)])
# Setting up the parameter grids
#param_grid1 = [{'clf1__penalty': ['l2'],
# 'clf1__C': np.power(10., np.arange(-4, 4))}]
param_grid2 = [{'clf2__n_neighbors': list(range(1, 10)),
'clf2__p': [1, 2]}]
param_grid3 = [{'max_depth': list(range(1, 10)) + [None],
'criterion': ['gini', 'entropy']}]
#param_grid4 = [{'clf4__kernel': ['rbf'],
# 'clf4__C': np.power(10., np.arange(-4, 4)),
# 'clf4__gamma': np.power(10., np.arange(-5, 0))},
# {'clf4__kernel': ['linear'],
# 'clf4__C': np.power(10., np.arange(-4, 4))}]
param_grid5 = [{'n_estimators': [10, 100, 500, 1000, 10000]}]
# Setting up multiple GridSearchCV objects, 1 for each algorithm
gridcvs = {}
inner_cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=1)
for pgrid, est, name in zip((param_grid2,
param_grid3, param_grid5),
(pipe2, clf3, clf5),
('KNN', 'DTree', 'RForest')):
gcv = GridSearchCV(estimator=est,
param_grid=pgrid,
scoring='accuracy',
n_jobs=1, # be careful to only set one n_jobs to -1
cv=inner_cv,
verbose=0,
refit=True)
gridcvs[name] = gcv
outer_cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
for name, gs_est in sorted(gridcvs.items()):
nested_score = cross_val_score(gs_est,
X=X_train,
y=y_train,
cv=outer_cv,
n_jobs=1) # be careful to only set one n_jobs to -1
print(f'{name:<7} | outer ACC {100*nested_score.mean():.2f}% +/- {100*nested_score.std():.2f}')
###Output
DTree | outer ACC 76.75% +/- 1.32
KNN | outer ACC 91.10% +/- 0.58
RForest | outer ACC 93.98% +/- 0.98
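###Markdown
Side note (added sketch): the compact `cross_val_score` call above hides the hyperparameters selected in each outer fold. If you do want to inspect them, `cross_validate` with `return_estimator=True` keeps the fitted `GridSearchCV` objects; the snippet below assumes the `gridcvs` and `outer_cv` objects defined earlier.
###Code
# Recover the per-fold best settings for one algorithm (here the random forest search).
from sklearn.model_selection import cross_validate
res = cross_validate(gridcvs['RForest'], X=X_train, y=y_train,
                     cv=outer_cv, return_estimator=True, n_jobs=1)
for fold_idx, fitted_gs in enumerate(res['estimator']):
    print('Fold %d: best params %s, outer ACC %.2f%%' % (
        fold_idx, fitted_gs.best_params_, 100 * res['test_score'][fold_idx]))
###Output
_____no_output_____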
###Markdown
------ - Determine the best algorithm from the experiment above; e.g., we find that Random Forest is performing best- Now, select hyperparameters for the model based on regular k-fold on the whole training set
###Code
gcv_model_select = GridSearchCV(estimator=clf5,
param_grid=param_grid5,
scoring='accuracy',
n_jobs=-1,
cv=inner_cv,
verbose=1,
refit=True)
gcv_model_select.fit(X_train, y_train)
best_model = gcv_model_select.best_estimator_
## We can skip the next step because we set refit=True
## so scikit-learn has already fit the model to the
## whole training set
# best_model.fit(X_train, y_train)
train_acc = accuracy_score(y_true=y_train, y_pred=best_model.predict(X_train))
test_acc = accuracy_score(y_true=y_test, y_pred=best_model.predict(X_test))
print(f'Nested CV Accuracy {100 * gcv_model_select.best_score_:.2f}% (average over k-fold CV test folds)')
print(f'Best Parameters: {gcv_model_select.best_params_}')
print(f'Training Accuracy {100 * train_acc:.2f}%')
print(f'Test Accuracy {100 * test_acc:.2f}%')
###Output
Nested CV Accuracy 93.30% (average over k-fold CV test folds)
Best Parameters: {'n_estimators': 10000}
Training Accuracy 100.00%
Test Accuracy 94.00%
|
Tutorial 0 - Overview.ipynb | ###Markdown
 Welcome! Welcome to the pyData Global 2021 Tutorial: Data and tools to model PV SystemsModeling tools for all aspects of photovoltaic systems are rapidly growing, and there are solutions for many of the things you might want to simulate. Python is becoming one of the scientific languages of choice, and many open-source tools are available for PV modeling. This tutorial will focus on teaching attendees PV modeling in python through the use of PVlib. In this interactive tutorial we will go from getting acquainted with some common data used or measured in pv systems (i.e. weather), to modeling the AC energy output of a single-axis tracker system. This includes learning and simulating sun position, plane of array irradiances, temperature models, single-diode models and more. We will review common vocabulary around python and ``data aggregation`` by hour, week, month, and visualization. The tutorial will present hands-on examples in python, enabled via jupyter notebooks and a Jupyterhub (remote hosted server for jupyter notebooks and python language) so you, the attendee, don’t have to install anything, and can follow along while we go over the theory and code! In case it's not obvious, a computer is required. The tutorial will wrap up with an overview of other available open-source tools for other aspects of modeling PV systems. More on your teachers:The three of us have ample experience in data, coding, and PV field performance modeling, so we look forward to all of your questions.| | || --- | :--- ||  | Silvana Ayala PelaezI am a research scientist at NREL, focusing mostly on bifacial PV system's performance, and circular economy. Python is my daily bread and butter for data analysis and building tools. Silvana has made substantial contributions to the NREL [bifacialvf pvmismatch](https://github.com/NREL/bifacialvf) and [bifacial radiance](https://bifacial-radiance.readthedocs.io/en/latest/) software packages. ||  | Kevin AndersonI am a research scientist at NREL doing cool stuff! I have contributed to work on slope aware backtracking, clipping loss errors in hourly yield estimates, and am a maintainer for [pvlib python](https://pvlib-python.readthedocs.io/en/latest/) and a frequent contributor to [RdTools](https://rdtools.readthedocs.io/en/latest/). ||  | Mark MikofskiI am a principal solar engineer at DNV and product manager for SolarFarmer. I research, analyze, and predict PV system performance, degradation, and reliability. I have contributed to a few Python projects like [pvlib python](https://pvlib-python.readthedocs.io/en/latest/), [PVMismatch](https://sunpower.github.io/PVMismatch/), and [SciPy](https://scipy.org/) ||  | Abhishek Parikh I am Abhi, a solar analyst at DNV. Like many, I love python and try not to miss any opportunity to deploy it’s terrific power in handling the data challenges renewables analytics offer. I want to promote and be a part of collaborations between the amazing worlds of data science and renewables. | Learning Objectives0. Why Model PV? 1. Access weather data (TMY3), understand irradiance data, and visualize it monthly.2. Calculate sun position, plane of array irradiance, and aggregate irradiance data into average daily insolation by month and year.3. Calculate module temperature from ambient data. 4. Use POA and module temperature to forecast a module's performance. 5. 
Other Tools OverviewThe sketch below from [the Sandia PV Performance Modeling Collaborative (PVPMC)](https://pvpmc.sandia.gov/) outlines the topics we will cover in this tutorial: Why learn this? PV-lib is a library of algorithms and routines that you might encounter the need to use if you're doing anything PV-modeling related. It is managed by members of the PV research community, who make sure the formulas and code are not only sleek but accurate. * You want to know the sun position? No need to code from zero the SPA (Solar Position algorithm), it's in PVlib. * You want to reproduce the Sandia-King model to calculate module performance? It's there, also. * You can find the most well-known [models](https://pvpmc.sandia.gov/), as well as recently accepted values and approaches in published PV literature.* We hope adding this tool to your skillset will empower you to do better, faster research with an already solid foundation. Don't reinvent the wheel! How to use this tutorial?This tutorial is a [Jupyter](https://jupyter.org) notebook. Jupyter is a browser based interactive tool that combines text, images, equations, and code that can be shared with others. Please see the setup section in the [README](./README.md) to learn more about how to get started. Useful links1. References * [PVlib Documentation](https://pvlib-python.readthedocs.io/en/stable/) * [Github Code Repository](https://github.com/pvlib/pvlib-python)2. Ask for help: * [Use the pvlib-python tag on StackOverflow](https://stackoverflow.com/questions/tagged/pvlib-python) * [Google Group - Discussions and more!](https://groups.google.com/g/pvlib-python) * [Open an Issue on the Github Repository](https://github.com/pyvlib/pvlib-python/issues) Tutorial StructureThis tutorial is made up of multiple Jupyter Notebooks. These notebooks mixcode, text, visualization, and exercises.If you haven't used JupyterLab before, it's similar to the Jupyter Notebook. Ifyou haven't used the Notebook, the quick intro is1. There are two modes: ``command`` and ``edit``1. From ``command`` mode, press `Enter` to edit a cell (like this markdown cell)1. From ``edit`` mode, press `Esc` to change to command mode1. Press `shift+enter` to execute a cell and move to the next cell.1. The toolbar has commands for executing, converting, and creating cells.The layout of the tutorial will be as follows: Exercise: Print Hello, world!Each notebook will have exercises for you to solve. You'll be given a blank orpartially completed cell, followed by a hidden cell with a solution. Forexample.Print the text "Hello, world!".
###Code
# Your code here
print("Hello, world!")
###Output
Hello, world!
###Markdown
Exercise 1: Modify to print something else:
###Code
my_string = # Add your text here. Remember to put it inside of single quotes or double quotes ( " " or '' )
print(my_string)
###Output
_____no_output_____
###Markdown
Let's go over some Python Concepts(A lot of these examples were shamelessly taken from https://jckantor.github.io/CBE30338/01.01-Getting-Started-with-Python-and-Jupyter-Notebooks.html :$) Basic Arithmetic OperationsBasic arithmetic operations are built into the Python language. Here are some examples. In particular, note that exponentiation is done with the ** operator.
###Code
a = 2
b = 3
print(a + b)
print(a ** b)
print(a / b)
###Output
5
8
0.6666666666666666
###Markdown
Python LibrariesThe Python language has only very basic operations. Most math functions are in various math libraries. The numpy library is a convenient library. This next cell shows how to import numpy with the prefix np, then use it to call some common mathematical functions.
###Code
import numpy as np
# mathematical constants
print(np.pi)
print(np.e)
# trigonometric functions
angle = np.pi/4
print(np.sin(angle))
print(np.cos(angle))
print(np.tan(angle))
###Output
3.141592653589793
2.718281828459045
0.7071067811865476
0.7071067811865476
0.9999999999999999
###Markdown
Lists are a versatile way of organizing your data in Python. Here are some examples, more can be found on this Khan Academy video.
###Code
xList = [1, 2, 3, 4]
xList
###Output
_____no_output_____
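###Markdown
Individual elements are accessed by zero-based index, and sub-lists are taken with slices (added example).
###Code
# indexing and slicing a list
print(xList[0])    # first element
print(xList[-1])   # last element
print(xList[1:3])  # elements at positions 1 and 2
###Output
_____no_output_____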
###Markdown
ConcatenationConcatenation is the operation of joining one list to another.
###Code
x = [1, 2, 3, 4];
y = [5, 6, 7, 8];
x + y
np.sum(x)
###Output
_____no_output_____
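###Markdown
Note (added): the `+` operator joins the two lists end to end; for element-wise addition, convert them to numpy arrays first.
###Code
# element-wise addition with numpy arrays
np.array(x) + np.array(y)
###Output
_____no_output_____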
###Markdown
Loops
###Code
for x in xList:
print("sin({0}) = {1:8.5f}".format(x,np.sin(x)))
###Output
sin(1) = 0.84147
sin(2) = 0.90930
sin(3) = 0.14112
sin(4) = -0.75680
###Markdown
Working with DictionariesDictionaries are useful for storing and retrieving data as key-value pairs. For example, here is a short dictionary of 2020 solar installations by US state. The keys are state names, and the values are the corresponding installed capacities in GW.
###Code
States_SolarInstallations2020 = {'Arizona': 16.04, 'California': 30.02, 'Texas':18.00, 'Colorado': 44.01} # GW
States_SolarInstallations2020
###Output
_____no_output_____
###Markdown
We can add a value to an existing dictionary.
###Code
States_SolarInstallations2020['New Mexico'] = 22.4
###Output
_____no_output_____
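###Markdown
Values are retrieved with the same key syntax, and `.items()` iterates over key-value pairs (added example).
###Code
# look up one entry and iterate over the dictionary
print(States_SolarInstallations2020['California'])
for state, gw in States_SolarInstallations2020.items():
    print(state, gw)
###Output
_____no_output_____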
###Markdown
PlottingImporting the matplotlib.pyplot library gives IPython notebooks plotting functionality very similar to Matlab's. Here are some examples using functions from the numpy and matplotlib.pyplot libraries.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,10)
y = np.sin(x)
z = np.cos(x)
plt.plot(x,y,'b',x,z,'r')
plt.xlabel('Radians');
plt.ylabel('Value');
plt.title('Plotting Demonstration')
plt.legend(['Sin','Cos'])
plt.grid()
###Output
_____no_output_____
###Markdown
 Welcome! Welcome to the PV Software 101: from Sun position to AC Output! tutorialModeling tools for all aspects of photovoltaic systems are rapidly growing, and there are solutions for many of the things you might want to simulate. Python is becoming one of the scientific languages of choice, and many open-source tools are available for PV modeling. This tutorial will focus on teaching attendees PV modeling in python through the use of PVlib. In this interactive tutorial we will go from getting acquainted with some common data used or measured in pv systems (i.e. weather), to modeling the AC energy output of a single-axis tracker system. This includes learning and simulating sun position, plane of array irradiances, temperature models, single-diode models and more. We will review common vocabulary around python and ``data aggregation`` by hour, week, month, and visualization. The tutorial will present hands-on examples in python, enabled via jupyter notebooks and a Jupyterhub (remote hosted server for jupyter notebooks and python language) so you, the attendee, don’t have to install anything, and can follow along while we go over the theory and code! In case it's not obvious, a computer is required. The tutorial will wrap up with an overview of other available open-source tools for other aspects of modeling PV systems. More on your teachers:The three of us have ample experience in data, coding, and PV field performance modeling, so we look forward to all of your questions.| | || --- | :--- ||  | Mark MikofskiI am a principal solar engineer at DNV and product manager for SolarFarmer. I research, analyze, and predict PV system performance, degradation, and reliability. I have contributed to a few Python projects like [pvlib python](https://pvlib-python.readthedocs.io/en/latest/), [PVMismatch](https://sunpower.github.io/PVMismatch/), and [SciPy](https://scipy.org/) ||  | Silvana Ayala PelaezI am a research scientist at NREL, focusing mostly on bifacial PV system's performance, and circular economy. Python is my daily bread and butter for data analysis and building tools. Silvana has made substantial contributions to the NREL [bifacialvf pvmismatch](https://github.com/NREL/bifacialvf) and [bifacial radiance](https://bifacial-radiance.readthedocs.io/en/latest/) software packages. ||  | Kevin AndersonI am a research scientist at NREL doing cool stuff! I have contributed to work on slope aware backtracking, clipping loss errors in hourly yield estimates, and am a maintainer for [pvlib python](https://pvlib-python.readthedocs.io/en/latest/) and a frequent contributor to [RdTools](https://rdtools.readthedocs.io/en/latest/). | Learning Objectives1. Access weather data (TMY3), understand irradiance data, and visualize it monthly.2. Calculate sun position, plane of array irradiance, and aggregate irradiance data into average daily insolation by month and year.3. Calculate module temperature from ambient data. 4. Use POA and module temperature to forecast a module's performance. [Overview](images\tutorial_overview.PNG) Why learn this? PV-lib is a library of algorithms and routines that you might encounter the need to use if you're donig anything PV-modeling related. It is managed by members of the PV research community, who make sure the formulas and code are not only sleek but accurate. You want to know the sun position? No need to code from zero the SPA (Solar Position algorithm), it's in PVlib. You want to reproduce the Sandia-King model to calculate module performance? It's there, also. 
You can find the most well-known [models](https://pvpmc.sandia.gov/), as well as recently accepted values and approaches in published PV literature.We hope adding this tool to your skillset will empower you to do better, faster research with an already solid foundation. Don't reinvent the wheel! How to use this tutorial?This tutorial is a [Jupyter](https://jupyter.org) notebook. Jupyter is a browser based interactive tool that combines text, images, equations, and code that can be shared with others. Please see the setup section in the [README](./README.md) to learn more about how to get started. Useful links1. References * [PVlib Documentation](https://pvlib-python.readthedocs.io/en/stable/) * [Github Code Repository](https://github.com/pvlib/pvlib-python)2. Ask for help: * [Use the pvlib-python tag on StackOverflow](https://stackoverflow.com/questions/tagged/pvlib-python) * [Google Group - Discussions and more!](https://groups.google.com/g/pvlib-python) * [Open an Issue on the Github Repository](https://github.com/pyvlib/pvlib-python/issues) Tutorial StructureThis tutorial is made up of multiple Jupyter Notebooks. These notebooks mixcode, text, visualization, and exercises.If you haven't used JupyterLab before, it's similar to the Jupyter Notebook. Ifyou haven't used the Notebook, the quick intro is1. There are two modes: ``command`` and ``edit``1. From ``command`` mode, press `Enter` to edit a cell (like this markdown cell)1. From ``edit`` mode, press `Esc` to change to command mode1. Press `shift+enter` to execute a cell and move to the next cell.1. The toolbar has commands for executing, converting, and creating cells.The layout of the tutorial will be as follows: Exercise: Print Hello, world!Each notebook will have exercises for you to solve. You'll be given a blank orpartially completed cell, followed by a hidden cell with a solution. Forexample.Print the text "Hello, world!".
###Code
# Your code here
print("Hello, world!")
a = 2
b = 3
a + b
###Output
_____no_output_____
###Markdown
 Welcome! Welcome to the PV Software 101: from Sun position to AC Output! tutorialModeling tools for all aspects of photovoltaic systems are rapidly growing, and there are solutions for many of the things you might want to simulate. Python is becoming one of the scientific languages of choice, and many open-source tools are available for PV modeling. This tutorial will focus on teaching attendees PV modeling in python through the use of PVlib. In this interactive tutorial we will go from getting acquainted with some common data used or measured in pv systems (i.e. weather), to modeling the AC energy output of a single-axis tracker system. This includes learning and simulating sun position, plane of array irradiances, temperature models, single-diode models and more. We will review common vocabulary around python and ``data aggregation`` by hour, week, month, and visualization. The tutorial will present hands-on examples in python, enabled via jupyter notebooks and a Jupyterhub (remote hosted server for jupyter notebooks and python language) so you, the attendee, don’t have to install anything, and can follow along while we go over the theory and code! In case it's not obvious, a computer is required. The tutorial will wrap up with an overview of other available open-source tools for other aspects of modeling PV systems. More on your teachers:The three of us have ample experience in data, coding, and PV field performance modeling, so we look forward to all of your questions.| | || --- | :--- ||  | Mark MikofskiI am a principal solar engineer at DNV and product manager for SolarFarmer. I research, analyze, and predict PV system performance, degradation, and reliability. I have contributed to a few Python projects like [pvlib python](https://pvlib-python.readthedocs.io/en/latest/), [PVMismatch](https://sunpower.github.io/PVMismatch/), and [SciPy](https://scipy.org/) ||  | Silvana Ayala PelaezI am a research scientist at NREL, focusing mostly on bifacial PV system's performance, and circular economy. Python is my daily bread and butter for data analysis and building tools. Silvana has made substantial contributions to the NREL [bifacialvf pvmismatch](https://github.com/NREL/bifacialvf) and [bifacial radiance](https://bifacial-radiance.readthedocs.io/en/latest/) software packages. ||  | Kevin AndersonI am a research scientist at NREL doing cool stuff! I have contributed to work on slope aware backtracking, clipping loss errors in hourly yield estimates, and am a maintainer for [pvlib python](https://pvlib-python.readthedocs.io/en/latest/) and a frequent contributor to [RdTools](https://rdtools.readthedocs.io/en/latest/). | Learning Objectives1. Access weather data (TMY3), understand irradiance data, and visualize it monthly.2. Calculate sun position, plane of array irradiance, and aggregate irradiance data into average daily insolation by month and year.3. Calculate module temperature from ambient data. 4. Use POA and module temperature to forecast a module's performance. OverviewThe sketch below from [the Sandia PV Performance Modeling Collaborative (PVPMC)](https://pvpmc.sandia.gov/) outlines the topics we will cover in this tutorial: Why learn this? PV-lib is a library of algorithms and routines that you might encounter the need to use if you're doing anything PV-modeling related. It is managed by members of the PV research community, who make sure the formulas and code are not only sleek but accurate. * You want to know the sun position? 
No need to code from zero the SPA (Solar Position algorithm), it's in PVlib. * You want to reproduce the Sandia-King model to calculate module performance? It's there, also. * You can find the most well-known [models](https://pvpmc.sandia.gov/), as well as recently accepted values and approaches in published PV literature.* We hope adding this tool to your skillset will empower you to do better, faster research with an already solid foundation. Don't reinvent the wheel! How to use this tutorial?This tutorial is a [Jupyter](https://jupyter.org) notebook. Jupyter is a browser based interactive tool that combines text, images, equations, and code that can be shared with others. Please see the setup section in the [README](./README.md) to learn more about how to get started. Useful links1. References * [PVlib Documentation](https://pvlib-python.readthedocs.io/en/stable/) * [Github Code Repository](https://github.com/pvlib/pvlib-python)2. Ask for help: * [Use the pvlib-python tag on StackOverflow](https://stackoverflow.com/questions/tagged/pvlib-python) * [Google Group - Discussions and more!](https://groups.google.com/g/pvlib-python) * [Open an Issue on the Github Repository](https://github.com/pyvlib/pvlib-python/issues) Tutorial StructureThis tutorial is made up of multiple Jupyter Notebooks. These notebooks mixcode, text, visualization, and exercises.If you haven't used JupyterLab before, it's similar to the Jupyter Notebook. Ifyou haven't used the Notebook, the quick intro is1. There are two modes: ``command`` and ``edit``1. From ``command`` mode, press `Enter` to edit a cell (like this markdown cell)1. From ``edit`` mode, press `Esc` to change to command mode1. Press `shift+enter` to execute a cell and move to the next cell.1. The toolbar has commands for executing, converting, and creating cells.The layout of the tutorial will be as follows: Exercise: Print Hello, world!Each notebook will have exercises for you to solve. You'll be given a blank orpartially completed cell, followed by a hidden cell with a solution. Forexample.Print the text "Hello, world!".
###Code
# Your code here
print("Hello, world!")
###Output
Hello, world!
###Markdown
Exercise 1: Modify to print something else:
###Code
my_string = # Add your text here. Remember to put it inside of single quotes or double quotes ( " " or '' )
print(my_string)
###Output
_____no_output_____
###Markdown
Let's go over some Python Concepts(A lot of these examples were shamelessly taken from https://jckantor.github.io/CBE30338/01.01-Getting-Started-with-Python-and-Jupyter-Notebooks.html :$) Basic Arithmetic OperationsBasic arithmetic operations are built into the Python language. Here are some examples. In particular, note that exponentiation is done with the ** operator.
###Code
a = 2
b = 3
print(a + b)
print(a ** b)
print(a / b)
###Output
5
8
0.6666666666666666
###Markdown
Python LibrariesThe Python language has only very basic operations. Most math functions are in various math libraries. The numpy library is a convenient library. This next cell shows how to import numpy with the prefix np, then use it to call some common mathematical functions.
###Code
import numpy as np
# mathematical constants
print(np.pi)
print(np.e)
# trigonometric functions
angle = np.pi/4
print(np.sin(angle))
print(np.cos(angle))
print(np.tan(angle))
###Output
3.141592653589793
2.718281828459045
0.7071067811865476
0.7071067811865476
0.9999999999999999
###Markdown
Lists are a versatile way of organizing your data in Python. Here are some examples, more can be found on this Khan Academy video.
###Code
xList = [1, 2, 3, 4]
xList
###Output
_____no_output_____
###Markdown
ConcatenationConcatenation is the operation of joining one list to another.
###Code
x = [1, 2, 3, 4];
y = [5, 6, 7, 8];
x + y
np.sum(x)
###Output
_____no_output_____
###Markdown
Loops
###Code
for x in xList:
print("sin({0}) = {1:8.5f}".format(x,np.sin(x)))
###Output
sin(1) = 0.84147
sin(2) = 0.90930
sin(3) = 0.14112
sin(4) = -0.75680
###Markdown
Working with DictionariesDictionaries are useful for storing and retrieving data as key-value pairs. For example, here is a short dictionary of 2020 solar installations by US state. The keys are state names, and the values are the corresponding installed capacities in GW.
###Code
States_SolarInstallations2020 = {'Arizona': 16.04, 'California': 30.02, 'Texas':18.00, 'Colorado': 44.01} # GW
States_SolarInstallations2020
###Output
_____no_output_____
###Markdown
We can add a value to an existing dictionary.
###Code
States_SolarInstallations2020['New Mexico'] = 22.4
###Output
_____no_output_____
###Markdown
PlottingImporting the matplotlib.pyplot library gives IPython notebooks plotting functionality very similar to Matlab's. Here are some examples using functions from the numpy and matplotlib.pyplot libraries.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
x = np.linspace(0,10)
y = np.sin(x)
z = np.cos(x)
plt.plot(x,y,'b',x,z,'r')
plt.xlabel('Radians');
plt.ylabel('Value');
plt.title('Plotting Demonstration')
plt.legend(['Sin','Cos'])
plt.grid()
###Output
_____no_output_____ |
milestones/3_spectral_graph_theory.ipynb | ###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: `8`* Students: `Matyas Lustig, Aurélien Pomini, David Salathé, Justine Weber`* Dataset: `Flight Routes` Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
import scipy.sparse.csgraph
import scipy.linalg
import matplotlib.pyplot as plt
plt.style.use('seaborn')
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
# We import both the weighted and unweighted symmetric adjacency matrices, and set the diagonal
# to zero (remember that in Milestone 2 we found out that we had one self loop)
# the unweighted adjacency matrix
adjacency_uw = np.load('data/adjacency_sym_mtx_uw.npy')
# the weighted adjacency matrix
adjacency = np.load('data/adjacency_sym_mtx.npy')
# the number of nodes in the network
n_nodes = adjacency_uw.shape[0]
# set diagonal elements to 0 (as explained in Milestone 2)
adjacency_uw[np.diag_indices_from(adjacency_uw)] = 0
adjacency[np.diag_indices_from(adjacency)] = 0
# the number of edges in the network
n_edges = adjacency_uw.sum() / 2
## We remove the nodes whose degree is smaller than the threshold
degrees = np.sum(adjacency_uw, axis=0)  # node degrees in the unweighted graph
threshold = 20
node_map = np.where(degrees >= threshold)[0]
adjacency_th = np.delete(adjacency_uw, np.where(degrees < threshold), 0)
adjacency_th = np.delete(adjacency_th, np.where(degrees < threshold), 1)
degrees_th = np.sum(adjacency_th, axis = 0)
n_nodes_th = adjacency_th.shape[0]
adjacency_csr = sparse.csr_matrix(adjacency_uw);
degree_matrix_csc = sparse.diags(degrees,format = "csc")
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
# we use the thresholded unweighted adjacency matrix (named adjacency_th) computed above
laplacian_combinatorial = sparse.csgraph.laplacian(adjacency_th, normed=False).astype('float64')
laplacian_normalized = sparse.csgraph.laplacian(adjacency_th, normed=True)
###Output
_____no_output_____
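###Markdown
The milestone also asks for CSR storage; here is a minimal sketch converting the dense Laplacians computed above (the `_csr` names are only illustrative).
###Code
# Store the Laplacians in compressed sparse row (CSR) format for efficient storage and computation
laplacian_combinatorial_csr = sparse.csr_matrix(laplacian_combinatorial)
laplacian_normalized_csr = sparse.csr_matrix(laplacian_normalized)
print(laplacian_normalized_csr.shape, laplacian_normalized_csr.nnz)
###Output
_____no_output_____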
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
# Variable used in the rest of the milestone, to change easily between normalized and combinatorial
laplacian = laplacian_normalized
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U^\top \Lambda U$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
def is_sorted(a):
for i in range(a.size-1):
if a[i+1] < a[i] :
return False
return True
eigenvalues, eigenvectors = scipy.linalg.eigh(laplacian)
print("Check sorted :", is_sorted(eigenvalues))
print(eigenvectors.shape)
assert eigenvectors.shape == (n_nodes_th, n_nodes_th)  # the Laplacian was built on the thresholded graph
# We have used this code in order to check if the computation of the eigenvalues and eigenvectors
# was correct and satisfied the property of eigenvalues/vectors
idx = 1
u = eigenvectors[:, idx]
c = laplacian.dot(u)
for i in range(eigenvalues.size) :  # loop over all eigenvalues instead of a hard-coded count
a = eigenvalues[i] * u
if (np.allclose(a,c, 1e-20)) :
print('TRUE almost equal :', i)
if (np.array_equal(a,c)) :
print('TRUE equal :', i)
###Output
TRUE almost equal : 1
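###Markdown
Equivalently, both checks can be written without explicit loops; a short sketch on the arrays computed above:
###Code
# Vectorized versions of the two checks above
print('eigenvalues sorted:', np.all(np.diff(eigenvalues) >= 0))
# L U = U Lambda, verified for all eigenpairs at once (column k is scaled by eigenvalue k)
print('L U = U Lambda    :', np.allclose(laplacian @ eigenvectors, eigenvectors * eigenvalues))
###Output
_____no_output_____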
###Markdown
Justify your choice of eigensolver. Since we know our matrix is symmetric, we use the scipy.linalg.eigh function which is designed for this situation, and implements a faster algorithm than scipy.linalg.eig (which works for any kind of matrix). Moreover scipy.linalg.eigh returns the eigenvalues in ascending order. We could have taken numpy.linalg.eigh instead, since it doesn't change anything. The advantage of scipy.linalg.eigh is that it has more functionnalities. For example, it can take a second matrix as an argument, but we don't use it here, so it doesn't make any difference. sparse.linalg.eigs provides a fast way to get the first k << N eigenvalues of a sparse matrix, using a partial decomposition, with Lanczos algorithm. However it is not made for computing all the eigenvalues : it approximates the values of the eigenvalues, and we get a RunTime Warning when trying to do so. Hence it is not good for computing all the eigenvalues, so we don't use it for this question. Question 3We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? Matrix $S$ is the **incidence matrix** whose elements are equal to $0$ or $\pm 1$. The rows are for nodes and columns for edges. $S(i,j)$ is equal to $+1$ if there is an edge $e_j = (v_i, v_k)$ and is equal to $-1$ if there is an edge $e_j = (v_k, v_i)$ for some node $k$. It is equal to $0$ otherwise.If there is a signal $x \in \mathbb{R}^N$ then $S^\top x$ computes **the gradient of $x$**. It is a generalization of the fact that $(S^\top x)[j] = x[i] - x[k]$ is a derivative of $x$ along edge $j$. Question 4Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). $\| S^\top u_k \|_2^2 = u_k^\top S S^\top u_k = u_k^\top L u_k$ and while $u_k = D^{-1/2}f_k$ where $D$ is a diagonal degree matrix we can write$=(D^{-1/2}f_k)^\top L D^{-1/2}f_k$$=f_k^\top (D^{-1/2})^\top L D^{-1/2}f_k$ as $D$ is diagonal we know that $D=D^\top$ and $L_{norm} = D^{-1/2} L D^{-1/2}$ we can deduce$=f^\top L_{norm} f_k$$=\lambda_k$ the desired eigenvalue What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? It is a quadratic Dirichlet form, a measure of how smooth a signal $x$ is. Question 5What is the value of $u_0$, both for the combinatorial and normalized Laplacians?
###Code
eigenvalues_comb, eigenvectors_comb = scipy.linalg.eigh(laplacian_combinatorial)
eigenvalues_norm, eigenvectors_norm = scipy.linalg.eigh(laplacian_normalized)
u0_comb = eigenvectors_comb[:,0]
u0_norm = eigenvectors_norm[:,0]
print("Combinatorial u0 : \n", u0_comb)
print("min (absolute) value : ", np.min(np.absolute(u0_comb)))
print("max (absolute) value : ", np.max(np.absolute(u0_comb)))
print("\nNormalized u0 : \n", u0_norm)
print("min (absolute) value : ", np.min(np.absolute(u0_norm)))
print("max (absolute) value : ", np.max(np.absolute(u0_norm)))
print("Min eigenvalue (combinatorial):", eigenvalues_comb[0])
print("Min eigenvalue (normalized):", eigenvalues_norm[0])
###Output
Min eigenvalue (combinatorial): -7.511050221243483e-15
Min eigenvalue (normalized): -4.817978322293023e-16
###Markdown
The value of $u_0$ for the combinatorial and normalized laplacians, is a vector composed of positive and negative numbers, whose absolute values are in the ranges showed above.We cannot see any particular properties about these vectors, because they are computed on the whole adjacency matrix of the graph, ie. composed of several connected components. Also it is very difficult so say something about them, because we don't know if the difference between all the values is due to the approximation done by the function scipy.linalg.eigh, or if the values are actually differents. For example, the smallest eigenvalue for the combinatorial laplacian is given by $-7.511050221243483e-15$. This value is extremely small, so we could approximate it by zero, but we might also loose some information by doing that (such as the sign of the minimal eigenvalue)! Question 6Look at the spectrum of the Laplacian by plotting the eigenvalues.Comment on what you observe.
###Code
plt.plot(eigenvalues)
###Output
_____no_output_____
###Markdown
**Your answer here.** We can see that the eigenvalues rise roughly exponentially: the increase is slow at first, then becomes abrupt over the last fifth of the spectrum. The null space (the eigenvalues equal to zero) is barely visible at this scale. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
min_value = eigenvalues[0]
n_components = np.count_nonzero(eigenvalues == min_value)
n_components
###Output
_____no_output_____
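###Markdown
A tolerance-based variant, cross-checked against a direct graph traversal; this is a sketch using the variables above, and the tolerance value is an arbitrary choice:
###Code
# Count eigenvalues that are numerically zero, then compare with a traversal-based count
tol = 1e-8
print('eigenvalues below tolerance:', int(np.sum(eigenvalues < tol)))
n_cc, _ = sparse.csgraph.connected_components(sparse.csr_matrix(adjacency_th), directed=False)
print('connected_components() on the thresholded graph:', n_cc)
###Output
_____no_output_____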
###Markdown
**Comments :** Since we don't have values exactly equal to zero among our eigenvalues, we have considered the minimal value as being an approximation of zero (cf. comment of question 5 also). The number of connected components is given by the number of eigenvalues equal to zero (here equal to the minimal value). The result we got is consistent with what we got in Milestone 2, so we think it is a good approximation. Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians.
###Code
print("Max eigenvalue for combinatorial:",max(eigenvalues_comb))
print("Max eigenvalue for normalized:",max(eigenvalues_norm))
print("Norm of order 2 of combinatorial Laplacian:", np.linalg.norm(laplacian_combinatorial,2))
###Output
Max eigenvalue for combinatorial: 247.21587610156323
Max eigenvalue for normalized: 2.000000000000001
Norm of order 2 of combinatorial Laplacian: 207.12022651531862
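###Markdown
Before commenting, here is a quick sanity check of the theoretical bounds on a tiny synthetic graph (a sketch, independent of our dataset): a 4-cycle, which is bipartite with maximum degree 2.
###Code
# Combinatorial eigenvalues are bounded by 2 * d_max; normalized eigenvalues are bounded by 2 (equality iff bipartite)
A_toy = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], dtype=float)
L_toy_comb = sparse.csgraph.laplacian(A_toy, normed=False)
L_toy_norm = sparse.csgraph.laplacian(A_toy, normed=True)
d_max = A_toy.sum(axis=0).max()
print('combinatorial: max eigenvalue =', scipy.linalg.eigh(L_toy_comb)[0].max(), ', 2 * d_max =', 2 * d_max)
print('normalized:    max eigenvalue =', scipy.linalg.eigh(L_toy_norm)[0].max())
###Output
_____no_output_____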
###Markdown
**Your answer here.**An upper bound exists for the normalized Laplacian eigenvalues and is equal to 2 if and only if we are dealing with a bipartite graph which we can observe in our case.Our graph should not be bipartite, it would be really strange as an interpretation for flight routes and airports. What we could say is that if the normalized Laplacian has an eigenvalue equal to 2, there exists a *component* of the graph which is bipartite. We will probably investigate deeper on this for the project. For the combinatorial Laplacian, the upper bound on the eigenvalues is given by the norm of order 2 of the Laplacian. 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. We will now use only the largest component.Notice that for our graph, we are very close to the original one (ie. the largest component was already almost the whole graph)
###Code
import networkx as nx
G = nx.from_numpy_matrix(adjacency)
Gc = max(nx.connected_component_subgraphs(G), key=len)
adjacency_c = np.array(nx.to_numpy_matrix(Gc))
n_nodes_c = nx.number_of_nodes(Gc)
laplacian_c_comb = sparse.csgraph.laplacian(adjacency_c, normed=False).astype('float64')
laplacian_c_norm = sparse.csgraph.laplacian(adjacency_c, normed=True)
laplacian_c = laplacian_c_norm
###Output
_____no_output_____
###Markdown
Question 7What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) Graph embeddings map networks, graphs into a vector space preserving relevant network properties. Laplacian eigenmaps produce coordinate maps that are smooth functions over the original graph. That allows us to reduce the dimension of each graph data point based on their similarity, which is useful for making computations less demanding and for clearer visualization of the problem. On some problems, our data points live in a lower-dimensional manifold than the ambient space. Laplacian eigenmaps is a "non-linear dimensionality reduction", which means that it can reduce an "S-shape" or a "Swiss roll" living in a 3 (or greater) dimensional space back to a lower (e.g. two) dimensional space while preserving connectiveness (which would not be possible with a linear dimensionality reduction algorithm). The purpose of this technique is to reduce our problem to a lower dimension, which can improve the efficiency of some computations (e.g. in machine learning). Question 8Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
laplacian_c = sparse.csgraph.laplacian(adjacency_th, normed=True)
adjacency_c = adjacency_th
k_eig_val, k_eig_vect = scipy.sparse.linalg.eigsh(laplacian_c, k=3, which='SM')
eigen_map = k_eig_vect[:,[1,2]]
print(np.max(eigen_map))
###Output
0.13751005256345533
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
plt.plot(eigen_map[:,0],eigen_map[:,1], 'r.')
###Output
_____no_output_____
###Markdown
**Answer/Comments:**We have tried doing this part and the following with the combinatorial and the normalized Laplacian. We think that the plot is visually more relevant with the normalized Laplacian, so we have kept this one. We embed our graph, using the second and third eigenvectors. We don't take the first one since it is constant, so it does not carry information about the graph. The second and third carry information about the connectiveness of the graph. Since we only have two dimensions we chose these two ones. From the plot, we clearly see two groups of points, aligned along each direction. We can even distinguish three groups, since a lot of points are located in the corner. Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? In this case $Z$ is the data matrix, $N$ is the number of data points and $d$ the dimension of each of the data points that we wanted to reduce. The embedding Z preserves the number of nodes, but it also preserves the *connectiveness* of the nodes, i.e. how components are connected. 2 Spectral clustering*Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10Choose $k$ and $d$. How did you get to those numbers? We choose d = 2 because we want a visualization of our graph and we will use d as the dimension to plot the nodes."k" is the number of clusters we should observe. It should be linked with the number of labels, but in our case, we do not really know what kind of labels we should face. From the plot we got in question 8, we think we should pick k as 2, 3 or 4. We have tried all of these, and 3 seems to be the most relevant value. Question 111. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
clusters = 3
k_eig_val, k_eig_vect = scipy.sparse.linalg.eigsh(laplacian_c, k=clusters, which='SM')
# Normalizing by the degree
diag = np.diag(laplacian_c)
for i in range(clusters):
k_eig_vect[:, i] /= diag
inter = KMeans(n_clusters=clusters, random_state=0).fit_predict(k_eig_vect)
Z = np.array(inter)
###Output
_____no_output_____
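###Markdown
For reference, the $k=2$ route suggested in the template would partition by the sign of the Fiedler vector. A sketch (purely illustrative; the rest of the notebook keeps the $k$-means assignment Z computed above):
###Code
# k = 2 alternative: split by the sign of the Fiedler vector (eigenvector of the second-smallest eigenvalue)
fiedler = scipy.sparse.linalg.eigsh(laplacian_c, k=2, which='SM')[1][:, 1]
Z_fiedler = (fiedler > 0).astype(int)
print(np.bincount(Z_fiedler))  # sizes of the two clusters
###Output
_____no_output_____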
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
ordered_adj = np.zeros((n_nodes_c, n_nodes_c))
last_idx = 0
for i in range(clusters):
ordered_adji = adjacency_c[Z==i]
size = ordered_adji.shape[0]
ordered_adj[last_idx:last_idx+size] = ordered_adji
last_idx += size
print(ordered_adj)
plt.spy(ordered_adj)
biggest_cluster_size = (Z==0).sum()
ordered_adj_matrix_without_biggest_cluster = np.zeros((n_nodes_c-biggest_cluster_size, n_nodes_c-biggest_cluster_size))
ordered_adj_matrix = np.zeros((n_nodes_c, n_nodes_c))
size = 0
for i in range(clusters):
current_matrix = adjacency_c[Z==i]
print(current_matrix.shape)
current_matrix = current_matrix[:, Z==i]
if i != 0:
ordered_adj_matrix_without_biggest_cluster[\
size - biggest_cluster_size : size-biggest_cluster_size + current_matrix.shape[0],\
size - biggest_cluster_size : size-biggest_cluster_size + current_matrix.shape[0]] = current_matrix
ordered_adj_matrix[size:size+current_matrix.shape[0], size:size+current_matrix.shape[0]] = current_matrix
size += current_matrix.shape[0]
plt.title("adjacency of cluster " + str(i) + " of shape " + str(current_matrix.shape))
plt.spy(current_matrix)
plt.show()
plt.title("reordered adjacency matrix, without the largest cluster")
plt.spy(ordered_adj_matrix_without_biggest_cluster)
plt.show()
plt.title("reordered adjacency matrix, with all the clusters,")
plt.spy(ordered_adj_matrix, markersize=1)
###Output
(253, 451)
###Markdown
We expected to see significant changes by reordering of the matrix. However, the new adjacency matrix is very close to the original one: 97% of the matrix remains unchanged since most of the airports seem to show similar properties (they are widely connected to the other airports) and are assigned to the main cluster. We also note that increasing k, even up to 10, does not avoid this phenomenon. We can distinguish two small clusters, which are both well connected together but poorly connected to the rest of the graph. Geographical or political reasons might explain those two outliers. Further research would be necessary to verify that; maybe in the next milestone! Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
We do not have the ground truth: the flight routes dataset is not labeled.It is difficult to assess the quality of our clustering. If the purpose is to label our airports, we can state that this clustering is not suitable. Indeed, almost all the points receive the same label.However, if the goal is to identify small groups of similar airports, this clustering might be adapted. Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
# Your code here.
clust_1 = np.where(Z==0)
clust_2 = np.where(Z==1)
clust_3 = np.where(Z==2)
#clust_4 = np.where(Z==3)
a1 = np.squeeze(np.take(eigen_map, clust_1, 0))
a2 = np.squeeze(np.take(eigen_map, clust_2, 0))
a3 = np.squeeze(np.take(eigen_map, clust_3, 0))
#a4 = np.squeeze(np.take(eigen_map, clust_4, 0))
plt.figure(figsize = (10,6))
plt.scatter(a1[:,0],a1[:,1], color='blue')
plt.scatter(a2[:,0],a2[:,1], color='orange')
plt.scatter(a3[:,0],a3[:,1], color='green')
#plt.scatter(a4[:,0],a4[:,1], color='red')
#plt.ylim((-0.03, 0.03))
#plt.xlim((-0.05, 0.05))
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: `Team 07`* Students: `Mathieu Lamiot, Julien Heitmann, Louis Landelle, Mathias Goncalves`* Dataset: `US Senators` Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = np.load("../projet/data/adjacency.npy")
n_nodes = adjacency.shape[0]
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
degree_vect = np.sum(adjacency,axis=1)
degree_matrix = np.diag(degree_vect)
laplacian_combinatorial = degree_matrix-adjacency
# np.power with `where` alone leaves the masked entries uninitialized; give it an explicit zero-filled output
d_inv_sqrt = np.diag(np.power(degree_vect, -0.5, out=np.zeros_like(degree_vect, dtype=float), where=degree_vect>0))
laplacian_normalized = d_inv_sqrt.dot(laplacian_combinatorial).dot(d_inv_sqrt)
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = laplacian_combinatorial
laplacian_csr = scipy.sparse.csr_matrix(laplacian)
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U \Lambda U^\top$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
# Using non-sparse laplacian
eigenvalues, eigenvectors = scipy.linalg.eigh(laplacian)
eigenvalues_normalized, eigenvectors_normalized = scipy.linalg.eigh(laplacian_normalized)
# Using sparse laplacian - Missing the last eigenvalue
eigenvalues2, eigenvectors2 = sparse.linalg.eigsh(laplacian_csr, k= n_nodes-1,which = 'SM')
assert eigenvectors.shape == (n_nodes, n_nodes)
###Output
_____no_output_____
###Markdown
Justify your choice of eigensolver. We first tried to take advantage of the sparsity of the Laplacian matrix, but unfortunately the scipy methods for solving eigenproblems with a CSR-formatted matrix can't return all the eigenvalues and eigenvectors (k has to be < `n_node`). This limitation prevented us from passing the assertion. So we ended up going for `scipy.linalg.eigh()`, which works on a dense matrix. Our Laplacian is a real symmetric matrix, so we can use `scipy.linalg.eigh()` instead of `scipy.linalg.eig()`. Since our network is relatively small, computation is fast enough, even without taking advantage of sparsity. Note that both methods return similar results (except for the last eigenvalue, not returned by the sparse method). Question 3We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? It is the incidence matrix. As our graph is weighted and undirected, we took a definition adapted from the one presented in class. It is defined as: $S(i,j) = \sqrt{W(i,k)} \ \text{if} \ e_j=\{v_i,v_k\} \ \text{and} \ i<k \ \text{for some } k$, $S(i,j) = -\sqrt{W(i,k)} \ \text{if} \ e_j=\{v_i,v_k\} \ \text{and} \ i>k \ \text{for some } k$, $S(i,j) = 0$ otherwise. Basically, we make our graph directed (from low-index vertices to high-index vertices) in order to be able to apply the definition seen in class. Doing so, the property $L = S S^\top$ holds, and the "smoothness of a signal" notion is preserved, as discussed in the following. Question 4Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **Your answer here.** $\| S^\top u_k \|_2^2 = u_k^\top S S^\top u_k = u_k^\top L u_k$. Since $u_k$ is the $k$-th (normalized) eigenvector of $L$, $u_k^\top L u_k = \lambda_k u_k^\top u_k = \lambda_k$. Hence $\| S^\top u_k \|_2^2 = \lambda_k$. What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **Your answer here.** Consider the k-th element of x as a value assigned to the k-th node of the network. Hence x can be seen as a signal over the network. Let $y = S^\top x$. Then, the j-th element of y is the difference of the signal x between the two nodes linked by the j-th edge of the network (scaled by the square root of its weight). Hence, $\| S^\top x \|_2^2$ is the sum of the squares of those differences. It is a measure of the variations of the signal x on the network. High values mean a non-smooth signal over the network, whereas small values reflect a signal with low variations across the network. Question 5What is the value of $u_0$, both for the combinatorial and normalized Laplacians? **Your answer here.** $u_0$ is an indicator function of a connected component of the graph. In our graph, we have one isolated node and the 99 other nodes form a connected component. Hence, we have the eigenvalue 0 twice. $u_0$ and $u_1$ are the indicator functions of those 2 connected components, normalized. So they have positive values over the related connected components, and have zero value elsewhere.
###Code
#Indicator function of the first connected component: An isolated node.
eigenvectors[:,0]
#Indicator function of the second connected component: 99 nodes.
eigenvectors[:,1]
###Output
_____no_output_____
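###Markdown
As a small self-contained illustration of Questions 3 and 4, here is a sketch on a toy 3-node path graph (unweighted for simplicity, so the $\pm 1$ definition applies directly; it is independent of our senators network):
###Code
# Toy check of L = S S^T and lambda_k = ||S^T u_k||^2 on a 3-node path graph
A_toy = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)
edges = [(0, 1), (1, 2)]
S = np.zeros((3, len(edges)))
for j, (i, k) in enumerate(edges):
    S[i, j], S[k, j] = 1, -1          # +1 at one endpoint of edge j, -1 at the other
L_toy = np.diag(A_toy.sum(axis=1)) - A_toy
print('L = S S^T               :', np.allclose(L_toy, S @ S.T))
lam, U = np.linalg.eigh(L_toy)
print('lambda_k = ||S^T u_k||^2:', np.allclose(lam, np.linalg.norm(S.T @ U, axis=0) ** 2))
###Output
_____no_output_____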
###Markdown
Question 6Look at the spectrum of the Laplacian by plotting the eigenvalues.Comment on what you observe.
###Code
# Your code here.
plt.plot(eigenvalues)
plt.xlabel('Eigenvalue index')
plt.ylabel('Eigenvalue')
###Output
_____no_output_____
###Markdown
**Your answer here.**The eigenvalues quickly increase: there are just a few low eigenvalues. This tends to show that, apart from the disconnected node, the network is well connected. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
# Your code here.
tol = 10**(-5);
nCC = np.sum(eigenvalues<tol)
print("Number of connected components: " + str(nCC))
###Output
Number of connected components: 1
###Markdown
Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **Your answer here.**Eigenvalues of the combinatorial laplacian are upper-bounded by $2*degree_{max}$.Eigenvalues of the normalized laplacian are upper-bounded by 2.This comes from the Gershgorin circle theorem applied to the specific structure of the Laplacian. 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$.From now on, if your graph has more than one connected component, work with the giant component only.
###Code
degree_vect = np.sum(adjacency,axis=1)
degree_matrix = np.diag(degree_vect)
laplacian = degree_matrix-adjacency
#Reconstruct the needed variables
laplacian_csr = scipy.sparse.csr_matrix(laplacian)
###Output
_____no_output_____
###Markdown
Question 7What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **Your answer here.**To reduce the dimension of the graph in order to visulaize the graph in a lower dimension while keeping some network properties. Question 8Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|$), while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
k= 2
eig_val, eig_vect = sparse.linalg.eigsh(laplacian_csr, k=k, which = 'SM')
Y = eig_vect
# norm_eig_vect = Y / degree_vect[:,None]
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
plt.scatter(Y[:,0], Y[:,1])
plt.title("Scatter plot of 2-Dimensional reduction")
plt.xlabel("Dimension 1")
plt.ylabel("Dimension 2")
plt.show()
###Output
_____no_output_____
###Markdown
**Your answer here.**There are clearly two clusters differentiated thanks to the second dimension: They are probably republicans and democrats. On the first dimension, all vertices have the same location: It is the dimension linked to the eigenvalue 0, hence it is just an indicator of the connected component. Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? **Your answer here.**It preserves the local geometry structure of the graph, hence preserves the communities. 2 Spectral clustering*Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10Choose $k$ and $d$. How did you get to those numbers? **Your answer here.**We selected k = 2 as we want to have the separation between the republican and democrats ; and we select d = 2 according to the second point in the previous question. It really seems that 2 dimensions are enough to distinguish the two clusters. Question 111. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
d = 2
k = 2
eig_val, eig_vect = sparse.linalg.eigsh(laplacian_csr, k=d, which = 'SM')
Fiedler_vector = eig_vect[:,1]
clusters = 1*(Fiedler_vector>0)
#kmean = KMeans(n_clusters = k).fit(eig_vect)
#clusters = kmean.labels_
clusters
###Output
_____no_output_____
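###Markdown
For completeness, the $k > 2$ route with $k$-means (kept commented out above) would look like the sketch below; $k = 3$ is purely illustrative.
###Code
# k-means on the 2-D spectral embedding, as an alternative to the Fiedler partition for k > 2
kmeans_labels = KMeans(n_clusters=3, random_state=0).fit_predict(eig_vect)
print(np.bincount(kmeans_labels))  # cluster sizes
###Output
_____no_output_____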
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
new_order = np.argsort(clusters)
A1 = adjacency[:, new_order][new_order]
plt.spy(A1)
plt.show()
###Output
_____no_output_____
###Markdown
**Your answer here.**We expect a matrix close to a block-diagonal matrix: The strongly connected communities are gathered together and there are almost no connections toward other communities. In fact, this is what we observe: our clusters are well connected inside but poorly connected to each other. Hence, the diagonal blocks are non-zero values and off-diagonal have almost all zero values. Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
# Your code here.
# Load party affiliation list
party = np.load("../projet/data/party.npy")
# Combine categorization value and ground truth (obtained from party)
id_and_party = np.dstack((clusters,party)).squeeze()
# Compute statistics
np.unique(id_and_party, return_counts=True, axis=0)
###Output
_____no_output_____
###Markdown
Note that there is neither a \['1', 'R'\] tuple nor a \['0', 'D'\] tuple. From this we deduce that category 0 corresponds to a Republican, and 1 to a Democrat affiliation. Moreover there are two independent senators that were (falsely?) categorized as Democrats. We made the choice to rather have 2 clusters than 3, since independent senators are in the minority and don't have an common political platform (one of the independent senators, Bernie Sanders, has no party affialiation, but is very much considered a democrat).Hence there are 3 false categorization, out of a 100 senators: Two independent senators, and one outlier (lonely senator).Conclusion: 97% of the nodes have been correctly categorized. Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
# Your code here.
color_map = np.repeat('r', clusters.size)
color_map[clusters == 1] = 'b'
plt.scatter(Y[:,0], Y[:,1], c=color_map)
plt.title("Scatter plot of 2-Dimensional reduction, with colors")
plt.xlabel("Dimension 1")
plt.ylabel("Dimension 2")
plt.show()
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: `32`* Students: `George Adaimi, Okan Altingovde, Isinsu Katircioglu, Sena Kiciroglu`* Dataset: `FMA` Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = np.load('adjacency_noDisconnectedNodes_mfccStdMeanSkewKurtosisMedian.npy')# the adjacency matrix
n_nodes = adjacency.shape[0]# the number of nodes in the network
n_edges = np.count_nonzero(adjacency)//2# the number of edges in the network
print('Number of nodes: ', n_nodes)
print('Number of edges: ', n_edges)
###Output
Number of nodes: 2000
Number of edges: 662928
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
D = np.diag(adjacency.sum(axis=1))
L = D - adjacency
laplacian_combinatorial = sparse.csr_matrix(L)
sqrt_D = np.power(np.linalg.inv(D),0.5)
laplacian_normalized = sparse.csr_matrix(sqrt_D@L@sqrt_D)
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = laplacian_normalized
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U^\top \Lambda U$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
eigenvalues, eigenvectors = np.linalg.eig(laplacian.todense())
assert eigenvectors.shape == (n_nodes, n_nodes)
## Sorting Eigenvalues and EigenVectors
sorted_indexes = eigenvalues.argsort()
eigenvalues = eigenvalues[sorted_indexes]
eigenvectors = eigenvectors[:, sorted_indexes]  # eigenvectors are the columns, so reorder columns
###Output
_____no_output_____
###Markdown
Justify your choice of eigensolver. **Our answer:*** We choose numpy's eigensolver since we want to compute eigenvalues for all eigenvectors which is not possible with scipy sparse eigensolver. Question 3We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? **Our answer:** If L is the unnormalized laplacian, then S is the incidence matrix with its elements:$$S(i,j) = \left\{\begin{matrix} +1& \text{if } e_j=(v_i,v_k) \text{ for some } k \\ -1& \text{if } e_j=(v_k,v_i) \text{ for some } k\\ 0& otherwise\end{matrix}\right.$$In $S^T$ each column corresponds to a node and each row corresponds to an edge. It is used to show which node the edge comes out of and which node the edges goes to. Meaning that, in $S^T$ there is always a -1 and a 1 in each row and the rest of the elements are 0. Therefore when we do $S^Tx$ we do finite-differences between the two nodes that are associated with an edge. The result is a (edge_num x 1) column vector. Question 4Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **Our answer:**The eigenvalue equation for the Laplacian is the following:$$Lu_k = u_k\lambda_k$$Since $L = SS^T$$$SS^Tu_k = u_k\lambda_k\\u_k^{T}SS^Tu_k = u_k^{T}u_k\lambda_k \\$$We can say $u_k^{T}u_k = I$ for eigenvectors, since they are normalized. Therefore$$u_k^{T}SS^Tu_k = \lambda_k\\\| S^T u_k \|_2^2 = \lambda_k$$ What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **Our answer:**This quantity corresponds to $x^TSS^Tx = x^TLx$. This is a quality that measures how x changes. For example, the result is big if the there is a difference between x[i] and x[j] with a large weight W[i,j] on the edge between them. Question 5What is the value of $u_0$, both for the combinatorial and normalized Laplacians?
###Code
eigenvalues_c, eigenvectors_c = np.linalg.eig(laplacian_combinatorial.todense())
sorted_indexes_c = eigenvalues_c.argsort()
eigenvalues_c = eigenvalues_c[sorted_indexes_c]
eigenvectors_c = eigenvectors_c[:, sorted_indexes_c]  # eigenvectors are the columns, so reorder columns
u0_n = np.squeeze(np.asarray(eigenvectors[:, 0]))   # eigenvector of the smallest eigenvalue (normalized Laplacian)
u0_c = np.squeeze(np.asarray(eigenvectors_c[:, 0])) # eigenvector of the smallest eigenvalue (combinatorial Laplacian)
def plot_eigenvector(values, norm= True):
if norm:
title = 'Value of u_0 for normalized Laplacian'
else:
title = 'Value of u_0 for combinatorial Laplacian'
values = values[:None]
fig = plt.figure()
index = np.arange(len(values))
ax = fig.add_subplot(1,1,1)
plt.plot(index, values, label='eigenvalues', color='c', marker='o')
ax.set_ylabel('Value')
ax.grid(True)
plt.title(title, pad=25)
plt.show()
plt.close(fig)
plot_eigenvector(u0_n, norm = True)
plot_eigenvector(u0_c, norm = False)
print("u_0 norm:", u0_n)
print("u_0 comb:", u0_c)
###Output
_____no_output_____
###Markdown
**Our answer:**We have plotted the values within $u_0$ above. For the normalized Laplacian, there are very few non-zero values in $u_0$. Question 6Look at the spectrum of the Laplacian by plotting the eigenvalues.Comment on what you observe.
###Code
# Your code here.
def plot_spectrum(eigenvalues,with_label,scatter = False,range = None):
eigenvalues = eigenvalues[:None]
fig = plt.figure()
index = np.arange(len(eigenvalues))
ax = fig.add_subplot(1,1,1)
ax.set_facecolor((1.0, 1.0, 1.0))
ax.set_ylabel('Value')
plt.ylabel('Value')
plt.xlabel('k')
ax.grid(True)
if with_label:
for x, eigenvalue in zip(index,eigenvalues):
ax.annotate('%.2f'%(eigenvalue), (x,eigenvalue))
if not scatter:
plt.plot(index, eigenvalues, label='eingenvalues', color='c', marker='o')
else:
plt.scatter(index, eigenvalues, label='eingenvalues', color='c', marker='o')
plt.title('Value of lambda(k)', pad=25)
plt.show()
plt.close(fig)
plot_spectrum(eigenvalues,scatter=True, with_label=False)
###Output
_____no_output_____
###Markdown
**Our answer:*** The smallest eigenvalue is 0 and the next smallest eigenvalue is greater than 0. This shows that the multiplicity of 0 is 1, and we have only 1 connected component in our graph.* The largest eigenvalue is less than 2. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
# Your code here.
#we count the number of eigenvalues with value 0
def get_numbConnectedComp(eigenvalues):
return np.sum(np.round(eigenvalues,decimals=2) == 0)
print("The number of connected components:", get_numbConnectedComp(eigenvalues))
###Output
The number of connected components: 1
###Markdown
Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **Our answer:*** The upper bound for the normalized Laplacian's largest eigenvalue is 2. It can be equal to 2 if and only if the graph is bipartite. * For the combinatorial Laplacian there is no graph-independent constant bound: its largest eigenvalue is bounded above by twice the maximum degree of the graph. 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. Question 7What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **Our answer:**We use Laplacian eigenmaps to reduce the dimensionality of the graph. This allows us to process the graph more easily by reducing the dimension while not affecting how the nodes are clustered within the graph. Question 8Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
def get_LaplacianEigenMaps(laplacian, new_dim =2,use_normalized=True):
eigenvalues, eigenvectors = sparse.linalg.eigsh(laplacian, which='SM', k=new_dim+1)
embeddings = eigenvectors[:,1:new_dim+1]
    if use_normalized:
        # re-normalize the eigenvectors by the degrees (the D^{-1/2} scaling defined earlier)
        return (embeddings.T*np.diag(sqrt_D)).T
return embeddings
embeddings_c = get_LaplacianEigenMaps(laplacian_combinatorial,use_normalized=False)
embeddings_n = get_LaplacianEigenMaps(laplacian_normalized)
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
# Your code here.
fig = plt.figure()
plt.scatter(embeddings_c[:,0],embeddings_c[:,1], label='', color='c', marker='o')
plt.title('Combinatorial Laplacian Eigenmaps d=2')
plt.show()
plt.close(fig)
fig = plt.figure()
plt.scatter(embeddings_n[:,0],embeddings_n[:,1], label='', color='y', marker='o')
plt.title('Normalized Laplacian Eigenmaps d=2')
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
**Our answer:** We would have liked to clearly see 2 separate clusters but we cannot. Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? **Our answer:*** It preserves the relationship between nodes, that is, it keeps information about how nodes are clustered together. It does not ruin the locality of the nodes. 2 Spectral clustering*Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10Choose $k$ and $d$. How did you get to those numbers? **Our answer*** k=2 because we are using the small version of the FMA dataset which has 2 labels. Our d=1 because we are using the Fiedler vector to do clustering. Question 111. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
# Your code here.
# Fiedler vector is the 1 dimensional laplacian eigenmap.
fiedler_vector_c = get_LaplacianEigenMaps(laplacian_combinatorial, new_dim = 1, use_normalized=False).squeeze()
fiedler_vector_n = get_LaplacianEigenMaps(laplacian_normalized, new_dim = 1).squeeze()
# We do clustering according to the fiedler vector
clusters_n = np.zeros([n_nodes,])
clusters_n[fiedler_vector_n>0] = 1
clusters_n[fiedler_vector_n<=0] = -1
print("When we use normalized Laplacian:")
print("Number of nodes with label 1:", (clusters_n[clusters_n ==1]).shape[0])
print("Number of nodes with label -1:", (clusters_n[clusters_n ==-1]).shape[0])
clusters_c = np.zeros([n_nodes,])
clusters_c[fiedler_vector_c>0] = 1
clusters_c[fiedler_vector_c<=0] = -1
print("\nWhen we use combinatorial Laplacian:")
print("Number of nodes with label 1:", (clusters_c[clusters_c ==1]).shape[0])
print("Number of nodes with label -1:", (clusters_c[clusters_c ==-1]).shape[0])
print("\nSince we have less uneven clustering using the normalized Laplacian, we prefer to keep using this one.")
###Output
When we use normalized Laplacian:
Number of nodes with label 1: 1407
Number of nodes with label -1: 593
When we use combinatorial Laplacian:
Number of nodes with label 1: 1985
Number of nodes with label -1: 15
Since we have less uneven clustering using the normalized Laplacian, we prefer to keep using this one.
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
# Your code here.
#reorder adjacency
adjacency_reordered = np.zeros([n_nodes, n_nodes])
n_nodes_1 = clusters_n[clusters_n == 1].shape[0]
ind_nodes_1 = np.where(clusters_n == 1)[0]
ind_nodes_minus_1 = np.where(clusters_n == -1)[0]
indices = np.concatenate([ind_nodes_1, ind_nodes_minus_1])
adjacency_reordered = adjacency[:, indices][indices, :]
fig = plt.figure()
plt.subplot(1,2,1)
plt.imshow(adjacency)
plt.title('Old adjacency')
plt.subplot(1,2,2)
plt.imshow(adjacency_reordered)
plt.title('New adjacency')
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
**Our answer:*** We expected to be able to see a clear divide between the two clusters. The top left part of the adjacency should be mostly non-zero, and the bottom right should be mostly non-zero. The rest should be very sparsely filled. This is exactly what we observe. We can see that we have two clusters which are loosely connected together. Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
import pandas as pd
import ast
def load(filename):
if 'features' in filename:
return pd.read_csv(filename, index_col=0, header=[0, 1, 2])
if 'echonest' in filename:
return pd.read_csv(filename, index_col=0, header=[0, 1, 2])
if 'genres' in filename:
return pd.read_csv(filename, index_col=0)
if 'tracks' in filename:
tracks = pd.read_csv(filename, index_col=0, header=[0, 1])
COLUMNS = [('track', 'tags'), ('album', 'tags'), ('artist', 'tags'),
('track', 'genres'), ('track', 'genres_all')]
for column in COLUMNS:
tracks[column] = tracks[column].map(ast.literal_eval)
COLUMNS = [('track', 'date_created'), ('track', 'date_recorded'),
('album', 'date_created'), ('album', 'date_released'),
('artist', 'date_created'), ('artist', 'active_year_begin'),
('artist', 'active_year_end')]
for column in COLUMNS:
tracks[column] = pd.to_datetime(tracks[column])
SUBSETS = ('small', 'medium', 'large')
tracks['set', 'subset'] = tracks['set', 'subset'].astype(
pd.api.types.CategoricalDtype(SUBSETS, ordered=True))
COLUMNS = [('track', 'genre_top'), ('track', 'license'),
('album', 'type'), ('album', 'information'),
('artist', 'bio')]
for column in COLUMNS:
tracks[column] = tracks[column].astype(pd.api.types.CategoricalDtype())
return tracks
tracks = load('dataset/tracks.csv') #Read tracks.csv
features = load('dataset/features.csv') # Read features.csv
#Create genre groundtruth array
small = tracks[tracks['set', 'subset'] == 'small']
# Filters out only the tracks that are Hip-Hop and Rock from small subset --> 2000 tracks
subset = small[(small['track', 'genre_top'] == 'Hip-Hop') | (small['track', 'genre_top'] == 'Rock')]
# Takes a subset of features based on the tracks found in the variable subset
small_features = features[features.index.isin(subset.index)]
genres_gt = (subset['track', 'genre_top'] == 'Hip-Hop').to_frame().values
# Your code here.
gt = np.array([1 if i else -1 for i in genres_gt])
accuracy = (len((((clusters_n == gt).nonzero())[0]))/clusters_n.shape[0])*100
print('Accuracy: ', accuracy)
np.save("labels.npy", gt)
###Output
Accuracy: 67.75
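###Markdown
Since the sign of the Fiedler-based labels is arbitrary, a small sketch that also checks the flipped assignment (using the variables defined above):
###Code
# Cluster labels are arbitrary: with two clusters, flipping the labels gives the complementary accuracy
acc = np.mean(clusters_n == gt)
print('Accuracy with labels as-is:   {:.2f}%'.format(100 * acc))
print('Accuracy with labels flipped: {:.2f}%'.format(100 * (1 - acc)))
###Output
_____no_output_____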
###Markdown
Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
# Your code here.
fig = plt.figure()
plt.scatter(embeddings_c[clusters_c==1,0],embeddings_c[clusters_c==1,1], label='', color='r', marker='x')
plt.scatter(embeddings_c[clusters_c==-1,0],embeddings_c[clusters_c==-1,1], label='', color='y', marker='o')
plt.title('Clusters (combinatorial)')
plt.show()
plt.close(fig)
fig = plt.figure()
plt.scatter(embeddings_n[clusters_n==1,0],embeddings_n[clusters_n==1,1], label='', color='r', marker='x')
plt.scatter(embeddings_n[clusters_n==-1,0],embeddings_n[clusters_n==-1,1], label='', color='y', marker='o')
plt.title('Clusters (normalized)')
plt.show()
plt.close(fig)
def remove_outliers(embeddings, clusters):
mean_vec = np.mean(embeddings, axis=0)
std_vec = np.std(embeddings, axis=0)
ind_temp = (abs(embeddings - mean_vec)< 2*std_vec)
ind = np.logical_and(ind_temp[:,0], ind_temp[:,1])
embeddings_new = embeddings[ind]
clusters_new = clusters[ind]
return embeddings_new, clusters_new
embeddings_new_c, clusters_new_c = remove_outliers(embeddings_c, clusters_c)
embeddings_new_n, clusters_new_n = remove_outliers(embeddings_n, clusters_n)
fig = plt.figure()
plt.scatter(embeddings_new_c[clusters_new_c==1,0],embeddings_new_c[clusters_new_c==1,1], label='', color='r', marker='x')
plt.scatter(embeddings_new_c[clusters_new_c==-1,0],embeddings_new_c[clusters_new_c==-1,1], label='', color='y', marker='o')
plt.title('Clusters (combinatorial) - removed outliers')
plt.show()
plt.close(fig)
fig = plt.figure()
plt.scatter(embeddings_new_n[clusters_new_n==1,0],embeddings_new_n[clusters_new_n==1,1], label='', color='r', marker='x')
plt.scatter(embeddings_new_n[clusters_new_n==-1,0],embeddings_new_n[clusters_new_n==-1,1], label='', color='y', marker='o')
plt.title('Clusters (normalized) - removed outliers')
plt.show()
plt.close(fig)
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: 31* Students: Dilara Günay, Derin Sinan Bursa, Othman Benchekroun, Sinan Gökçe* Dataset: IMDb Films and Crew Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
import seaborn as sns
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
# getting adjacency matrix
import pandas as pd
adjacency = pd.read_csv('data/adjacency.csv')
n_nodes = len(adjacency)
#Dropping useless column from adjacency dataframe
adjacency.drop('Unnamed: 0', axis = 1, inplace = True)
adjacency = adjacency.values
np.set_printoptions(suppress = True)
adjacency
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
# Degree matrix: D[i, i] is the (weighted) degree of node i
D = np.diag(adjacency.sum(axis=1))
laplacian_combinatorial = D - adjacency #Some elements of spectral graph theory page 5
I = np.eye(n_nodes,n_nodes)
D_sqrt = scipy.linalg.fractional_matrix_power(D,-0.5)
laplacian_normalized = I-np.matmul(np.matmul(D_sqrt,adjacency),D_sqrt)
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = laplacian_normalized
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U^\top \Lambda U$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
import scipy.linalg  # imported explicitly: `import scipy.sparse.linalg` alone does not guarantee access to scipy.linalg
eigenvalues, eigenvectors = scipy.linalg.eigh(laplacian)
assert eigenvectors.shape == (n_nodes, n_nodes)
eigenvalues
###Output
_____no_output_____
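###Markdown
As a quick sanity check (a sketch, not required by the milestone, assuming the Laplacian is symmetric), we can verify that `scipy.linalg.eigh` returns the eigenvalues in ascending order and that each column of $U$ is indeed an eigenvector of $L$.
###Code
# Hedged sanity check on the eigendecomposition computed above
assert np.all(np.diff(eigenvalues) >= -1e-10), "eigh should return eigenvalues in ascending order"
assert np.allclose(laplacian @ eigenvectors, eigenvectors * eigenvalues, atol=1e-8)  # L u_k = lambda_k u_k
###Output
_____no_output_____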
###Markdown
Justify your choice of eigensolver. **Answer:** Our normalized graph Laplacian (_laplacian_normalized_) is a real symmetric matrix, so we selected _scipy.linalg.eigh_, the solver dedicated to symmetric/Hermitian (and generalized) eigenvalue problems; it also returns the eigenvalues in ascending order. Question 3 We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? **Answer:** The matrix $S$ is the (weighted) incidence matrix. The product $S^\top x$ computes the graph gradient of $x$: its entry for an edge $(i,j)$ is $\sqrt{W(i,j)}\,(x[i]-x[j])$. Indeed, $x^\top L x = x^\top S S^\top x = \|S^\top x\|_2^2$, and we also have $x^\top L x = \frac{1}{2}\sum_{i,j} W(i,j)(x[i]-x[j])^2$, so $\|S^\top x\|_2^2 = \frac{1}{2}\sum_{i,j} W(i,j)(x[i]-x[j])^2$: the squared entries of $S^\top x$ are the weighted squared differences of $x$ along the edges. Question 4 Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **Answer:** We have $L u_k = \lambda_k u_k$. As the eigenvectors are normalized, this gives $u_k^\top L u_k = \lambda_k$. It follows that $\lambda_k = u_k^\top L u_k = u_k^\top S S^\top u_k = \|S^\top u_k \|_2^2$. What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **Answer:** $S^\top$ acts like a gradient. As such, $\| S^\top x \|_2^2$ measures the smoothness of $x$ on the graph: it is small when $x$ varies little across edges. Question 5 What is the value of $u_0$, both for the combinatorial and normalized Laplacians? **Answer:** Using $x^\top L x = \frac{1}{2}\sum_{i,j} W(i,j)(x[i]-x[j])^2$ with $x = u_0$ gives $u_0^\top L u_0 = \frac{1}{2}\sum_{i,j} W(i,j)(u_0[i]-u_0[j])^2 = 0$. As such, for each pair of vertices $(i,j)$ connected by an edge, we have $u_0[i] = u_0[j]$. Thus, for the combinatorial Laplacian, $u_0$ is constant (on a connected graph). For the normalized Laplacian, the same argument applied to $D^{-1/2} u_0$ shows that $u_0$ is proportional to $D^{1/2} \mathbf{1}$, i.e., $u_0[i] \propto \sqrt{d_i}$. Question 6 Look at the spectrum of the Laplacian by plotting the eigenvalues. Comment on what you observe.
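As an aside, before commenting on the spectrum, here is a small self-contained sketch (an illustration, not part of the graded answers) that makes Questions 3 and 4 concrete: it builds a weighted incidence matrix $S$ of the undirected graph (assuming non-negative edge weights and a graph of moderate size) and checks numerically that $L = S S^\top$ and $\lambda_k = \| S^\top u_k \|_2^2$ for the combinatorial Laplacian. The variables introduced here (`S`, `vals_c`, `grad`, ...) are new names used only for this check.
###Code
# Hedged sketch: weighted incidence matrix of the undirected graph (arbitrary edge orientation)
rows, cols = np.nonzero(np.triu(adjacency, k=1))   # each undirected edge once (upper triangle)
weights = adjacency[rows, cols]
n_edges = len(weights)
data = np.concatenate([np.sqrt(weights), -np.sqrt(weights)])
row_idx = np.concatenate([rows, cols])
col_idx = np.concatenate([np.arange(n_edges), np.arange(n_edges)])
S = sparse.csr_matrix((data, (row_idx, col_idx)), shape=(n_nodes, n_edges))
assert np.allclose((S @ S.T).toarray(), laplacian_combinatorial, atol=1e-6)   # L = S S^T
vals_c, vecs_c = np.linalg.eigh(laplacian_combinatorial)                      # combinatorial eigenpairs
grad = S.T @ vecs_c[:, :5]                                                    # graph gradient of the first five eigenvectors
assert np.allclose(vals_c[:5], np.sum(np.square(grad), axis=0), atol=1e-6)    # lambda_k = ||S^T u_k||_2^2
###Output
_____no_output_____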
###Code
plt.plot(eigenvalues, 'ro', markersize = 1)
plt.title('Eigenspectrum')
plt.ylabel('Eigenvalues')
plt.show()
###Output
_____no_output_____
###Markdown
**Answer:** There is an increasing trend in our eigenspectrum, as expected: the first eigenvalue is always equal to 0, and since our graph is connected the following eigenvalues are strictly positive and keep increasing. However, this increase loses its initial acceleration very rapidly. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
#The multiplicity of the eigenvalue 0 gives the number of connected components of the graph
epsilon = 10**(-5)
print("There are {} connected components.".format(np.count_nonzero(eigenvalues<=epsilon)))
###Output
_____no_output_____
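###Markdown
As an independent cross-check (a sketch that does not use the eigenvalues, so it is not a substitute for the answer above), the same count can be obtained combinatorially with `scipy.sparse.csgraph`.
###Code
# Hedged cross-check of the number of connected components without using the spectrum
from scipy.sparse.csgraph import connected_components
n_components, _ = connected_components(sparse.csr_matrix(adjacency), directed=False)
print("scipy.sparse.csgraph finds {} connected component(s).".format(n_components))
###Output
_____no_output_____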
###Markdown
Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **Answer:** For normalized Laplacians, the upper bound on the eigenvalues is equal to 2, with equality if and only if the graph is bipartite. Additionally, for combinatorial Laplacians, the Gershgorin circle theorem bounds the eigenvalues by the largest absolute row (or column) sum of the Laplacian matrix, i.e., by twice the maximum degree, since all the eigenvalues lie in the union of the Gershgorin circles. 2 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. Question 7 What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **Answer:** Laplacian eigenmaps, and graph embeddings in general, are used to reduce the dimensionality of data while remaining truthful to the original data. Usually, embeddings are used for visualization (reduction to 2 or 3 dimensions) but also for computation (reduction to a single dimension).This is done by mapping the graph to a vector space while preserving the properties of the network. Question 8 Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations. **Explanation:** We only need $k=3$ eigenvectors as we work in $d=2$. Thus, we use *scipy.sparse.linalg.eigs* to compute the first 3 eigenvectors. Following our research in the original [Laplacian Eigenmaps Paper](http://web.cse.ohio-state.edu/~belkin.8/papers/LEM_NC_03.pdf) (cf bottom of p.6), we need to drop the eigenvector corresponding to the first eigenvalue ($\lambda_1$) and pick the 2 next ones as our basis.
###Code
from scipy.sparse.linalg import eigs as sparse_eigs
lap_sparse_eigenvals, lap_sparse_eigenvecs = sparse_eigs(laplacian, k=3, which='SM')
lap_norm_sparse_eigenvecs = np.matmul(D_sqrt, lap_sparse_eigenvecs)
lap_sparse_eigenvecs
#Only keep the 2 dimensions we need by leaving out the eigenvector corresponding to the "zero" eigenvalue
lap_sparse_eigenvecs = lap_sparse_eigenvecs[:,1:]
lap_norm_sparse_eigenvecs = lap_norm_sparse_eigenvecs[:,1:]
lap_sparse_eigenvecs
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
plt.plot(lap_sparse_eigenvecs[:,0].real, lap_sparse_eigenvecs[:,1].real, 'ro', markersize=1)
plt.show()
plt.plot(lap_norm_sparse_eigenvecs[:,0].real, lap_norm_sparse_eigenvecs[:,1].real, 'ro', markersize=1)
plt.show()
###Output
_____no_output_____
###Markdown
**Answer:** We prefer keeping the normalized values. Indeed, we can see the differences in the data more clearly (upper right corner and line starting in the bottom right corner) as some clusters form. This is despite the fact that the values are packed closer together (for instance, _y_ now ranges from $-0.025$ to $0.05$ instead of ranging up to $0.175$). Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? **Answer:** The embedding $Z$ preserves similarity. As explained in the slides, "we want similar points to be embedded close to each other". Given that we work with graphs, nodes that are strongly connected should end up at a small distance from each other in the projected space. 3 Spectral clustering *Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10 Choose $k$ and $d$. How did you get to those numbers? **Answer:** In the eigenspectrum plotted in Question $6$, we see a clear gap between the first 3 eigenvalues and all of the others. Thus, we choose $k=3$ following the instructions on the slides ("If data has k clear clusters, there will be a gap in the Laplacian spectrum after the k-th eigenvalue. Use to choose k."). On the other hand, we choose $d=2$ in order to have a better visualization. Note that we also tried $d=3$, but it does not give much additional information (especially in the case of the normalized eigenvectors, given that all the points are on the same plane). Question 11 1. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer. 1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
sparse_eigenvals, sparse_eigenvecs = sparse_eigs(laplacian, k=3, which='SM')
kmeans = KMeans(n_clusters=3).fit_predict(sparse_eigenvecs.real)
plt.scatter(sparse_eigenvecs[:,1].real, sparse_eigenvecs[:,2].real, c=kmeans, s=20, cmap='cool')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(sparse_eigenvecs[:,1].real, sparse_eigenvecs[:,2].real, sparse_eigenvecs[:,0].real, c=kmeans, cmap='cool')
norm_sparse_eigenvecs = np.matmul(D_sqrt, sparse_eigenvecs)
norm_kmeans = KMeans(n_clusters=3).fit_predict(norm_sparse_eigenvecs.real)
plt.scatter(norm_sparse_eigenvecs[:,1].real, norm_sparse_eigenvecs[:,2].real, c=norm_kmeans, s=20, cmap='cool')
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(norm_sparse_eigenvecs[:,1].real, norm_sparse_eigenvecs[:,2].real, norm_sparse_eigenvecs[:,0].real, c=norm_kmeans, cmap='cool')
###Output
_____no_output_____
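###Markdown
As a small complement to the scatter plots above (a sketch), the balance of the two assignments can be quantified by counting the cluster sizes before commenting on them.
###Code
# Hedged sketch: cluster sizes for the raw and the degree-re-normalized embeddings
print("raw embedding:", np.bincount(kmeans))
print("re-normalized:", np.bincount(norm_kmeans))
###Output
_____no_output_____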
###Markdown
**Explanation:** As stated in the lecture, "normalization seeks to impose balanced clusters." This can be seen in the second 3-D plot. We choose to use the eigenvectors that have not been re-normalized, as the resulting clusters are more straightforward to interpret and also farther apart from each other. Moreover, the k-means algorithm applied to the re-normalized eigenvectors is sensitive to initialization (the assignment of points differs from run to run). Question 12 Use the computed cluster assignment to reorder the adjacency matrix $A$. What do you expect? What do you observe?
###Code
x_idxs2, xi = zip(*sorted(zip(norm_kmeans,range(len(norm_kmeans)))))
y_idxs2, yi = zip(*sorted(zip(norm_kmeans,range(len(norm_kmeans)))))
ordered_adj = adjacency[xi,:][:,yi]
ordered_adj
###Output
_____no_output_____
###Markdown
**Answer:** Given the nature of clusters, we expect to see dense, clique-like blocks appearing along the diagonal of the reordered matrix. However, they will not be perfectly separated, as our network consists of a single connected component, so links between clusters remain.
###Code
sns.heatmap(ordered_adj != 0, xticklabels=False)
###Output
_____no_output_____
###Markdown
**Answer:** This is indeed what we observe in the heatmap of the ordered adjacency matrix. We can clearly see the big rectangle representing the largest cluster. However, the rest of the matrix is not as tidy: as expected, there are some links between the largest cluster and the others, and the smaller clusters are hard to make out at this scale, perhaps because they are too small to stand out.
###Code
sns.heatmap(ordered_adj[:500,:500] != 0, xticklabels=False)
###Output
_____no_output_____
###Markdown
When zooming in, we see that the only reason the matrix did not seem to show any ordering is the size of the cluster (only around $350$ nodes). We also notice that the $2^{nd}$ cluster is very small and very compact, as it contains only around 15 nodes. Question 13 If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md). **Answer:** We do not have ground truth assignments, so we assess the quality of the clusters using the rearrangement of the adjacency matrix from Question 12. We can clearly see the 3 different clusters, which correspond to weak communities of our network. In the 2 following plots, we see that the left side (respectively the right side) is more densely populated, meaning each cluster has more links within itself than to the rest. However, note that this trend is not very distinguishable in the first plot.
###Code
sns.heatmap(ordered_adj[:354] != 0)
sns.heatmap(ordered_adj[363:] != 0)
###Output
_____no_output_____
###Markdown
However, we can question the existence of the $2^{nd}$ community, which is only constituted by around $15$ actors. When looking more closely, we see that this community is very strongly connected, while it is only very loosely connected to other nodes (which are far away).
###Code
sns.heatmap(ordered_adj[354:363, 354:363] != 0)
sns.heatmap(ordered_adj[354:363] != 0)
###Output
_____no_output_____
###Markdown
Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
plt.scatter(lap_norm_sparse_eigenvecs[:,0].real, lap_norm_sparse_eigenvecs[:,1].real, c=norm_kmeans, cmap='cool', s=10)
plt.show()
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: ``* Students: ``* Dataset: `` Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = # Your code here.
n_nodes = # Your code here.
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
laplacian_combinatorial = # Your code here.
laplacian_normalized = # Your code here.
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = # Either laplacian_combinatorial or laplacian_normalized.
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U \Lambda U^\top$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
eigenvectors = # Your code here.
eigenvalues = # Your code here.
assert eigenvectors.shape == (n_nodes, n_nodes)
###Output
_____no_output_____
###Markdown
Justify your choice of eigensolver. **Your answer here.** Question 3We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? **Your answer here.** Question 4Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **Your answer here.** What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **Your answer here.** Question 5What is the value of $u_0$, both for the combinatorial and normalized Laplacians? **Your annswer here.** Question 6Look at the spectrum of the Laplacian by plotting the eigenvalues.Comment on what you observe.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
**Your answer here.** How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **Your answer here.** 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$.From now on, if your graph has more than one connected component, work with the giant component only.
###Code
# Your code here if needed.
###Output
_____no_output_____
###Markdown
Question 7What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **Your answer here.** Question 8Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|$), while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
**Your answer here.** Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? **Your answer here.** 2 Spectral clustering*Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10Choose $k$ and $d$. How did you get to those numbers? **Your answer here.** Question 111. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
**Your answer here.** Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
# Your code here.
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: 19* Students: Zahra Farsijani, Joëlle Hanna, Dorsan Lepour, Amin Mekacher* Dataset: Terrorist Relations Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
import networkx as nx
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = np.load('adjacency_matrix.npy')# the adjacency matrix
n_nodes = len(adjacency) # the number of nodes in the network
degrees = np.sum(adjacency, axis =0)
degree_matrix = np.diag(degrees) # matrix D used in Laplacian calculations
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
laplacian_combinatorial = sparse.csr_matrix(degree_matrix - adjacency)
# D^(-1/2): note that isolated nodes (degree 0) yield infinities here, which produce NaNs
# in the normalized Laplacian and trigger the RuntimeWarning shown below.
inv_sqrt_degree_matrix = np.diag(1 / np.diag(degree_matrix**(0.5)))
laplacian_normalized = inv_sqrt_degree_matrix @ laplacian_combinatorial @ inv_sqrt_degree_matrix
precision = 1e-15
plt.spy(laplacian_normalized, precision=precision);
###Output
/Users/aminmekacher/miniconda3/envs/ntds_project/lib/python3.6/site-packages/matplotlib/axes/_axes.py:7693: RuntimeWarning: invalid value encountered in greater
mask = np.abs(Z) > precision
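###Markdown
The `RuntimeWarning` above most likely comes from nodes of degree zero: $1/\sqrt{0}$ is infinite, so the corresponding rows and columns of the normalized Laplacian contain NaNs. A common workaround (a sketch, under the assumption that we simply want those rows and columns to be zero; `laplacian_normalized_safe` is a new, illustrative name) is to zero out the inverse square root for isolated nodes.
###Code
# Hedged sketch: a NaN-free normalized Laplacian that simply ignores isolated (degree-0) nodes
deg = np.diag(degree_matrix)
inv_sqrt = np.where(deg > 0, 1 / np.sqrt(np.where(deg > 0, deg, 1)), 0)
D_inv_sqrt = sparse.diags(inv_sqrt)
laplacian_normalized_safe = D_inv_sqrt @ laplacian_combinatorial @ D_inv_sqrt
print("NaNs left:", np.isnan(laplacian_normalized_safe.toarray()).any())
###Output
_____no_output_____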
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = laplacian_combinatorial
print(laplacian.shape, type(laplacian.toarray()))
np.save("laplacian", laplacian.toarray())
###Output
(851, 851) <class 'numpy.ndarray'>
###Markdown
Question 2 Compute the eigendecomposition of the Laplacian $L = U \Lambda U^\top$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
import scipy.linalg  # explicit import, since only scipy.sparse.linalg was imported above
eigenvalues, eigenvectors = scipy.linalg.eigh(laplacian.asfptype().toarray())
assert eigenvectors.shape == (n_nodes, n_nodes)
###Output
_____no_output_____
###Markdown
Justify your choice of eigensolver. NumPy eigensolvers do not allow the user to choose the number of eigenpairs and hence are not very efficient. Among the various SciPy eigensolvers (e.g. `scipy.linalg.eig`, `scipy.linalg.eigh`, `sparse.linalg.eigs`, `sparse.linalg.eigsh`, etc.), we prefer `scipy.linalg.eigh`, since our Laplacian matrix is real and symmetric and this solver is dedicated to symmetric matrices. It is faster than the generic solvers, returns the eigenvalues in ascending order together with the eigenvectors, and does exactly the job we need, which boosts performance. Question 3 We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? Matrix $S$ is the **incidence matrix** and $S^\top x$ computes the gradient of $x$ on the graph. Question 4 Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **We know that $L$ is symmetric and hence diagonalizable, so it admits an eigenvalue decomposition of the form $L = U \Lambda U^\top$ with $U$ orthogonal ($U^\top U = U U^\top = I$). Since $L = S S^\top$, we can write $S S^\top = U \Lambda U^\top$, hence $U^\top S S^\top U = \Lambda$. The $k$-th diagonal element of $U^\top S S^\top U$ is $u_k^\top S S^\top u_k = \| S^\top u_k \|_2^2$, so $\| S^\top u_k \|_2^2 = \lambda_k$.** What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **This quadratic (Dirichlet) form is a measure of how smooth the signal $x$ is on the graph.** Question 5 What is the value of $u_0$, both for the combinatorial and normalized Laplacians? **The smallest eigenvalue of $L$ is 0. For the combinatorial Laplacian, the corresponding eigenvector is the constant one vector (per connected component); for the normalized Laplacian, it is $D^{1/2}$ times that constant vector, i.e., proportional to the square roots of the degrees.** Question 6 Look at the spectrum of the Laplacian by plotting the eigenvalues. Comment on what you observe.
###Code
plt.plot(np.real(eigenvalues), '.-', markersize=2);
###Output
_____no_output_____
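###Markdown
The claim below about the multiplicity of the zero eigenvalue can be checked directly (a sketch; the tolerance is an assumption meant to absorb numerical round-off).
###Code
# Hedged sketch: count the numerically-zero eigenvalues
tol = 1e-10
print("number of (near-)zero eigenvalues:", int(np.sum(eigenvalues < tol)))
###Output
_____no_output_____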
###Markdown
**As seen in the plot above, the eigenvalues are all non-negative. The number of connected components in our graph is equal to the multiplicity of the eigenvalue 0, which is, in our case, 13.** How many connected components are there in your graph? Answer using the eigenvalues only. If the graph G is connected, then only the first eigenvalue is zero and the second one is strictly positive. More generally, if $\lambda_k = 0$ and $\lambda_{k+1} \neq 0$, then G has exactly $k$ connected components; in other words, the multiplicity $k$ of the eigenvalue 0 of $L$ (the Laplacian) equals the number of connected components. Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **As we saw in the lecture on spectral graph theory, there is no graph-independent upper bound for the eigenvalues of the combinatorial Laplacian; the largest eigenvalue is only bounded in terms of the graph itself (for instance by twice the maximum degree). However, in the case of the normalized Laplacian, we know from the theory that all of the eigenvalues are at most 2 (the eigenvalue 2 is attained only if the graph is bipartite, which does not apply to ours).** 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. Question 7 What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **We use Laplacian eigenmaps and spectral methods to perform graph embedding, a dimensionality reduction from a network to a vector space. This nonlinear mapping to a smaller-dimensional space has the benefit of preserving some relevant network properties.** **Also, vector spaces are easier to work with than graphs: with their edges and nodes, graphs only allow a limited set of operations. Vector spaces offer more possibilities from mathematics, statistics and machine learning (for example, we can define different distance metrics). Finally, a visualization can easily be done for d = 1, 2, 3 Euclidean spaces, while a network lying in a high-dimensional space is difficult to represent.** Question 8 Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations. **As we have k = 13 << N = 851, we can use a partial eigendecomposition in order to reduce the computational cost of the algorithm.**
###Code
# Your code here.
G = nx.from_numpy_matrix(adjacency)
graphs = nx.connected_component_subgraphs(G)
graphs = list(graphs)
print(len(graphs))
print(len(graphs[0]))
new_adjacency = nx.adjacency_matrix(graphs[0]).toarray()
new_degrees = np.sum(new_adjacency, axis=0)
new_deg_matrix = np.diag(new_degrees)
new_laplacian = scipy.sparse.csr_matrix(new_deg_matrix - new_adjacency)
# The Laplacian is real symmetric, so use eigh (consistent with Question 2); eigenvectors are the *columns*
eigenvalues, eigenvectors = scipy.linalg.eigh(new_laplacian.toarray())
inds = eigenvalues.argsort()
eigenvectors = eigenvectors[:, inds]   # reorder columns, not rows
eigenvalues = eigenvalues[inds]
# the first eigenvector (constant) is the trivial one; take the next two columns for the 2D embedding
x = eigenvectors[:, 1]
y = eigenvectors[:, 2]
###Output
13
665
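###Markdown
The milestone asks for a partial eigendecomposition of the sparse Laplacian, whereas the cell above performs a full dense decomposition. As a sketch (assuming the giant-component Laplacian `new_laplacian` built above; `part_vals` and `part_vecs` are new, illustrative names), the three smallest eigenpairs could instead be obtained with ARPACK.
###Code
# Hedged sketch: partial eigendecomposition (three smallest eigenpairs) of the sparse Laplacian
part_vals, part_vecs = sparse.linalg.eigsh(new_laplacian.asfptype(), k=3, which='SM')
print(part_vals)   # should match the three smallest eigenvalues of the dense decomposition above
###Output
_____no_output_____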
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
eigenvectors[:, 1].reshape(665, 1).shape
plt.scatter(np.real(x), np.real(y), c = 'red')
###Output
_____no_output_____
###Markdown
**From the 13 connected components of our network (851 nodes), we extract the biggest one containing 665 nodes. We then directly notice inside this giant component the presence of a cluster with strongly connected nodes.** Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? **Local distances between neighboring points.** 2 Spectral clustering*Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10Choose $k$ and $d$. How did you get to those numbers? **The goal of the clustering is to separate dissimilar points in different clusters, in order for the edges within a group to have a high weight. As the labels in our dataset provide us a binary information, i.e. if the two people in the nodes are "colleagues" or "non-colleagues", we decided to set k = 2, to see if there is a clear distinction between these two clusters. We did not however see any gap after the 2nd eigenvalue, as mentioned in the lectures.We also need to consider enough feature vectors in order to be able to discriminate the different classes, but not too much, to avoid overfitting the data we have (i.e the curse of dimensionality): therefore, we decided to set d = 2**
###Code
# Laplacian spectrum of our giant component
plt.plot(np.arange(1,11), np.real(eigenvalues[0:10]), '.-', markersize=2);
plt.title("First 10 eigenvalues of the giant component of our network")
###Output
_____no_output_____
###Markdown
Question 111. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one your prefer.1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
feature_vectors = np.vstack((np.real(eigenvectors[:, 1]), np.real(eigenvectors[:, 2])))
# We assign each node to a cluster depending on the sign of the Fiedler vector (the eigenvector of the first non-zero eigenvalue)
Z = 1 * (np.real(eigenvectors[:, 1]) > 0)
###Output
_____no_output_____
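###Markdown
For completeness (a sketch, not the partition used below), the same 2D embedding could be clustered with the imported `KMeans` class if more than two clusters were wanted; `k = 3` here is an arbitrary illustrative choice.
###Code
# Hedged sketch: k-means on the 2D spectral embedding of the giant component
Z_embed = feature_vectors.T   # shape: (number of nodes in the giant component, 2)
labels_kmeans = KMeans(n_clusters=3, random_state=0).fit_predict(Z_embed)
print(np.bincount(labels_kmeans))
###Output
_____no_output_____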
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
inds_first = np.asarray(np.where(Z == 0))
inds_second = np.asarray(np.where(Z == 1))
inds_ordered = np.concatenate((inds_first, inds_second), axis=1)[0]
if (type(new_adjacency) == scipy.sparse.csr.csr_matrix):
new_adjacency = new_adjacency.toarray()
adjacency_ordered = [[new_adjacency[i][j] for j in inds_ordered for i in inds_ordered]]
adjacency_ordered = np.reshape(adjacency_ordered, (665, 665))
# Plot the original and the ordered adjacency matrix
plt.subplot(1, 2, 1)
plt.spy(new_adjacency)
plt.title("Before")
plt.subplot(1, 2, 2)
plt.spy(adjacency_ordered)
plt.title("After")
###Output
_____no_output_____
###Markdown
**After reordering the adjacency matrix by grouping the nodes according to their cluster assignment, we expected to see distinct regions where links are concentrated, meaning that nodes are mostly connected to nodes adjacent to them in the matrix. However, in our case, the adjacency matrix does not display a clear cut between the two clusters. This is probably because the bipartition with $k = 2$ (based on the sign of the Fiedler vector) cuts through the biggest community of this component, leading to a non-optimal clustering.** Question 13 If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
# We only want the indices of the connected components for the comparison
node_connected_component = list(graphs[0])
# Firstly, we need to import the labels to get the ground truth clusters. As we used the colleague feature to define
# our adjacency matrix during the first milestone, we shall use this one now as well
import pandas as pd
colleague = pd.read_csv("TerroristRel_Colleague.nodes", header = None, sep="\t", encoding="utf-8")
truth_label = colleague[1225].values
correct_clustering = 0
index_matrix = 0
for i in node_connected_component:
if (truth_label[i] == "colleague" and Z[index_matrix] == 0) or (truth_label[i] == "non-colleague" and Z[index_matrix] == 1):
correct_clustering = correct_clustering + 1
index_matrix = index_matrix + 1
accuracy = correct_clustering / index_matrix
accuracy
###Output
_____no_output_____
###Markdown
**During the first milestone, the label we used to define our adjacency matrix was the colleague one: therefore, we use it as our ground truth clustering to assess the accuracy of our cluster assignment, which comes out to 49.6%. We were expecting an unsatisfying result, as the reordered adjacency matrix displayed in Question 12 shows that a binary clustering is not very effective on our dataset.** Question 14 Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
plt.scatter(np.real(eigenvectors[:, 1]), np.real(eigenvectors[:, 2]), c=np.sign(np.real(eigenvectors[:, 1])), cmap='rainbow')
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: 50* Students: Görkem Çamli, Raphael Laporte, Ilija Gjorgjiev, Murat Genc* Dataset: Spammers on Social Network Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
import scipy as sp
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from scipy import linalg
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = np.load("undirected_adjacency.npy")  # load the saved NumPy array with np.load
adjacency = sparse.csr_matrix(adjacency,adjacency.shape,dtype=np.float16)
n_nodes = adjacency.shape[0]
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices.Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix.For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrixCompressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
def compute_laplacian_normalized(L, degrees):
newD = sparse.spdiags(1/np.sqrt(degrees),[0],degrees.size,degrees.size)
return ((newD @ L) @ newD)
degrees = adjacency.sum(0)
degree_matrix = sparse.spdiags(degrees,[0],n_nodes,n_nodes)
degree_matrix = sparse.csr_matrix(degree_matrix,degree_matrix.shape,dtype=np.float16)
#Laplacian Combinatorial
laplacian_combinatorial = degree_matrix - adjacency
scipy.sparse.save_npz("laplacian_combinatorial.npz",laplacian_combinatorial)
#Laplacian Normalized
laplacian_normalized = compute_laplacian_normalized(laplacian_combinatorial, degrees)
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference! Question 2Compute the eigendecomposition of the Laplacian $L = U \Lambda U^\top$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
eigenvalues_c, eigenvectors_c = sparse.linalg.eigsh(laplacian_combinatorial,k=1000,which='SM',tol=0.001)
np.save("eigenvectors_combinatorial",eigenvectors_c)
np.save("eigenvalues_combinatorial",eigenvalues_c)
eigenvalues_n, eigenvectors_n = sparse.linalg.eigsh(laplacian_normalized,k=1000,which='SM',tol=0.001)
np.save("eigenvectors_normalized",eigenvectors_n)
np.save("eigenvalues_normalized" ,eigenvalues_n)
np.save("lapacian_combinatorial",laplacian_combinatorial)
eigenvectors_c = np.load("eigenvectors_combinatorial.npy")
eigenvalues_c= np.load("eigenvalues_combinatorial.npy")
eigenvectors_n = np.load("eigenvectors_normalized.npy")
eigenvalues_n = np.load("eigenvalues_normalized.npy")
###Output
_____no_output_____
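###Markdown
A possible speed-up (a sketch, not what was run above): ARPACK converges slowly for the smallest eigenvalues with `which='SM'`, and the usual remedy is shift-invert mode, which turns the eigenvalues nearest to $\sigma$ into the dominant ones of the transformed operator. The actual gain depends on the size and sparsity of the matrix, so this is only a suggestion.
###Code
# Hedged sketch: shift-invert around sigma < 0, so that (L - sigma*I) stays non-singular
vals_si = sparse.linalg.eigsh(laplacian_combinatorial.astype(np.float64), k=10, sigma=-1e-3,
                              which='LM', return_eigenvectors=False)
print(np.sort(vals_si))
###Output
_____no_output_____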
###Markdown
As mentioned in the comment above, we decided to compute only 1000 eigenvalues and eigenvectors, as our matrix is too big. We used a tolerance of 0.001 because it was faster to compute. We would originally have used scipy.linalg.eigh(laplacian.toarray()), mentioned in the comments above, as it would have returned all the eigenvalues in ascending order, and our matrix is real symmetric (Hermitian). Instead we decided to go with eigsh, as it allows us to specify how many eigenvalues and eigenvectors we want and in what order (in our case we use which="SM"). eigsh is applicable because our matrix is Hermitian and sparse. Question 3 We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? Matrix $S$ is the node-by-edge incidence matrix. It has shape $(|V|, |E|)$, where $|E|$ is the number of edges $(i,j)$, with $i$ the start node and $j$ the end node, and $|V|$ is the number of vertices in the graph. $S$ consists of the values $0$, $1$ and $-1$ (for an unweighted graph). The values are filled based on the following conditions for row $i$ of $S$: 1. if edge $e_j$ leaves node $i$, then $S[i][j] = 1$; 2. if edge $e_j$ enters node $i$, then $S[i][j] = -1$; 3. if edge $e_j$ is not incident to node $i$, then $S[i][j] = 0$. Basically, the $i$-th row shows, for each edge, whether it is outgoing ($1$) or incoming ($-1$) at node $i$; note that $L = S S^\top$ does not depend on the chosen edge orientations. $S^\top x$ returns a vector with one entry per edge: for an edge $(i,j)$, the corresponding row of $S^\top$ has $1$ at position $i$ and $-1$ at position $j$, so multiplying it by $x$ gives $x[i] - x[j]$ when the graph is unweighted. If the edges have weights, the incidence entries become $\pm\sqrt{w_{ij}}$ and the entry of $S^\top x$ for edge $(i,j)$ is $\sqrt{w_{ij}}\,(x[i] - x[j])$. In basic words, $S^\top x$ acts as a gradient of $x$ over the edges of the graph. Question 4 Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). We have the following chain of equalities: $L u_k = \lambda_k u_k$, hence $S S^\top u_k = \lambda_k u_k$, hence $u_k^\top S S^\top u_k = \lambda_k u_k^\top u_k$. By the property $(AB)^\top = B^\top A^\top$, the left-hand side is $(S^\top u_k)^\top S^\top u_k = \|S^\top u_k\|_2^2$ (a dot product of a vector with itself is its squared $L^2$ norm). Since the eigenvectors are normalized, $u_k^\top u_k = 1$, and therefore $\lambda_k = \| S^\top u_k \|_2^2$. What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? As mentioned above, $S^\top x$ works as a gradient, so its squared $L^2$ norm gives us the smoothness of $x$ with respect to the graph. Question 5 What is the value of $u_0$, both for the combinatorial and normalized Laplacians? 
We have $L u_0 = \lambda_0 u_0$, hence $u_0^\top L u_0 = \lambda_0 u_0^\top u_0 = 0$ (since $\lambda_0 = 0$). By the property $x^\top L x = \frac{1}{2} \sum_{i,j} W(i,j) (x[i] - x[j])^2$, this means $\frac{1}{2} \sum_{i,j} W(i,j) (u_0[i] - u_0[j])^2 = 0$, which holds if and only if $u_0$ is constant across every edge, i.e., constant on each connected component. So, for the combinatorial Laplacian, the elements of $u_0$ are all equal to one another (on a connected graph); for the normalized Laplacian, the corresponding eigenvector is $D^{1/2}$ times this constant vector. Question 6 Look at the spectrum of the Laplacian by plotting the eigenvalues. Comment on what you observe.
###Code
eigenvalues = eigenvalues_n
# plot eigenvalues
def plotEigenvalues(eigenvalues):
plt.scatter(x=range(len(eigenvalues)), y=eigenvalues )
plt.title("Eigenvalue Plot")
plt.xlabel("Eigenvalue Index")
plt.ylabel("Eigenvalue")
# Combinatorial Laplacian Eigenvalues Plot
plotEigenvalues(eigenvalues_c)
# Normalized Laplacian Eigenvalues Plot
plotEigenvalues(eigenvalues_n)
###Output
_____no_output_____
###Markdown
SciPy returns the eigenvalues, each repeated according to its multiplicity. Therefore, from the plots above we can observe the spectrum of the Laplacian (the repeated eigenvalues are plotted according to their algebraic multiplicities). We only plotted the 1000 smallest eigenvalues due to computation time and memory constraints, since our graph is too big. **Combinatorial eigenvalue plot:** Our first 1000 smallest eigenvalues range from 0 to 2. Overall, we see an increasing trend: after reaching roughly 0.30 the eigenvalues jump to about 1 and stay stable for a while (the values are very close to each other), then increase sharply up to roughly 2 around indices 550-700 and stabilize again. **Normalized eigenvalue plot:** For the normalized Laplacian, we see an increasing trend up to around index 250, and then the values stay at almost 1 afterwards. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
# print('Count of zeros: ', len(np.where(eigenvalues_c == 0)[0]))
# print('Count of zeros: ', len(np.where(eigenvalues_n == 0)[0]))
print('Count of zeros for Combinatorial: ', len(np.where(eigenvalues_c < 0.00001)[0]))
print('Count of zeros for Normalized: ', len(np.where(eigenvalues_n < 0.00001)[0]))
###Output
Count of zeros for Combinatorial: 1
Count of zeros for Normalized: 1
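###Markdown
To complement the discussion of the upper bound below, the largest eigenvalue of each Laplacian can be estimated directly (a sketch; `which='LA'` asks ARPACK for the largest algebraic eigenvalues, and the variable names are new and only illustrative).
###Code
# Hedged sketch: largest eigenvalue of each Laplacian vs. its theoretical bound
lam_max_c = sparse.linalg.eigsh(laplacian_combinatorial.astype(np.float64), k=1, which='LA', return_eigenvectors=False)[0]
lam_max_n = sparse.linalg.eigsh(laplacian_normalized.astype(np.float64), k=1, which='LA', return_eigenvectors=False)[0]
print("combinatorial: lambda_max = {:.3f}, bound 2 * d_max = {:.3f}".format(lam_max_c, 2 * degrees.max()))
print("normalized:    lambda_max = {:.3f}, bound = 2".format(lam_max_n))
###Output
_____no_output_____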
###Markdown
**Our answer:** When the graph is undirected and has non-negative weights, the multiplicity $k$ of the eigenvalue 0 of $L$ gives the number of connected components of the graph. Our graph is also undirected and has non-negative weights, therefore we can count the connected components using this rule: the number of zero eigenvalues is the number of connected components, which is 1 here, so our graph is connected. Note that, since our graph is very big and computing the eigenvalues takes a lot of time, we passed a tolerance argument to sparse.linalg.eigsh (machine precision would have taken very long). That is why, in the result above, we counted the values close to zero under a small threshold instead of exact zeros. Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. Yes, there is an upper bound on the eigenvalues. __Combinatorial Laplacian:__ by the Gershgorin circle theorem, the largest possible eigenvalue is bounded by the largest absolute row (or column) sum of the Laplacian matrix, which equals twice the maximum degree. __Normalized Laplacian:__ the upper bound for the largest eigenvalue is 2, and the largest eigenvalue equals 2 if and only if the graph is bipartite. 3 Laplacian eigenmaps*Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space.That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$.The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$.From now on, if your graph has more than one connected component, work with the giant component only.
###Code
# Since we have 1 component we skip this part
###Output
_____no_output_____
###Markdown
Question 7 What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) We use Laplacian eigenmaps for non-linear dimensionality reduction and for data visualization/representation purposes. Specifically, the assumption behind Laplacian eigenmaps is that the data points lie on a lower-dimensional manifold within a high-dimensional space. Since the Laplacian eigenmap algorithm is rather insensitive to outliers and noise, it provides a natural, implicit way of clustering, which is why it is also used for embeddings and clustering. Laplacian eigenmaps are therefore useful for visualizing a graph, for spectral clustering and partitioning, and also for graph coloring; computer vision and machine learning are some of the areas where they are used. More generally, graph embeddings are used to find latent vector representations of graphs that capture their topology, and they are commonly used in machine learning. Their main purpose is to preserve some proximity measure defined on the graph through the mapping. Question 8 Embed your graph in $d=2$ dimensions with Laplacian eigenmaps.Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer.**Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices.When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions.A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
d = 2
# We recompute the eigenvectors since it is asked - althought we could have used the prvios ones
# eigenval_c, eigenvec_c = sparse.linalg.eigsh(laplacian_combinatorial,k=d+1,which='SM')
# eigenval_n, eigenvec_n = sparse.linalg.eigsh(laplacian_normalized,k=d+1,which='SM')
# take the first d dimensions excluding
laplacian_embedded = eigenvectors_c[:,1:d+1]
laplacian_embedded_normalize = eigenvectors_n[:,1:d+1]
laplacian_embedded.shape
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
# plotting embedded graph in 2D for Combinatorial
fig = plt.figure(figsize=(10, 10))
ax = plt.scatter(laplacian_embedded[:, 0], laplacian_embedded[:, 1],c=['red'],cmap=plt.cm.Spectral)
# plotting embedded graph in 2D for Normalized
fig = plt.figure(figsize=(10, 10))
ax = plt.scatter(laplacian_embedded_normalize[:, 0], laplacian_embedded_normalize[:, 1],c=['red'],cmap=plt.cm.Spectral)
###Output
_____no_output_____
###Markdown
In the 2D plot for the combinatorial Laplacian, we see one main location where most points are gathered, plus two branches: one with higher intensity whose points are close to each other, and another that is more dispersed, together with a few outlier points. In the 2D embedding for the normalized Laplacian, the points are more normalized and closer to each other, dispersed mostly along the x direction, and the projected points look as if they were spread along a line. Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? The idea of the Laplacian eigenmap embedding is to keep connected nodes close to each other in the vector space. The embedding $Z$ preserves the local neighborhood information of the $N$ data points in a lower-dimensional space, here of dimension $d$. In the original Laplacian eigenmaps algorithm, this neighborhood information is obtained via nearest-neighbour computations. 2 Spectral clustering *Spectral clustering* is a method to partition a graph into distinct clusters.The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10 Choose $k$ and $d$. How did you get to those numbers? *In theory, we determine the value of k by analyzing the Laplacian spectrum: if the spectrum clearly shows n different clusters, then k = n. For our spectrum, we choose k = 2, as can be read off the Laplacian spectrum. "d" refers to the embedding dimension, and we choose d = 3 for visualisation.* Question 11 1. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer. 1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
eigenvectors = eigenvectors_n
#eigenvector normalization
from sklearn import preprocessing
k=2
d=k+1
r = eigenvectors[:,1:d]
thresh = 0
y_kmeans = np.copy(eigenvectors[:,1])
y_kmeans[y_kmeans> thresh] = 1
y_kmeans[y_kmeans<=thresh] = 0
#color the graph
plt.scatter(r[:, 0], r[:, 1], c=y_kmeans, s=1, cmap='viridis')
# We prefer keeping the re-normalized eigenvectors as the graph has k=2 clear clusters.
#visualize laplacian spectrum with normalized eigenvector v_normalized
r2 = preprocessing.normalize(eigenvectors, norm='l2')[:,1:d]
y_kmeans = np.copy(r2[:,0])
y_kmeans[y_kmeans> thresh] = 1
y_kmeans[y_kmeans<=thresh] = 0
#color the graph
plt.scatter(r2[:,0], r2[:,1], c=y_kmeans, cmap='viridis')
###Output
_____no_output_____
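###Markdown
As a quick check of the partition produced above (a sketch), the cluster sizes show how balanced the sign-based split of the re-normalized Fiedler vector is.
###Code
# Hedged sketch: size of each of the two clusters defined above
print(np.bincount(y_kmeans.astype(int)))
###Output
_____no_output_____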
###Markdown
*We prefer keeping the re-normalized eigenvectors, as the resulting plot has fewer outliers.* Question 12 Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe? Since our adjacency matrix is undirected and the nodes are grouped by cluster, we expect to observe dense, clique-like diagonal blocks in the reordered matrix.
###Code
x_idx, x = zip(*sorted(zip(y_kmeans,range(len(y_kmeans)))))
y_idx, y = zip(*sorted(zip(y_kmeans,range(len(y_kmeans)))))
adjacency_ordered = adjacency[x,:][:,y]
###Output
_____no_output_____
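###Markdown
The reordered matrix itself is never displayed above; a quick visualization (a sketch) makes the expected block structure, or the lack of it, visible before answering the next questions.
###Code
# Hedged sketch: sparsity pattern of the adjacency matrix after reordering by cluster assignment
plt.spy(adjacency_ordered, markersize=0.1)
plt.title("Adjacency reordered by spectral cluster")
###Output
_____no_output_____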
###Markdown
Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
import pandas as pd
users = pd.read_csv('data/filtered_users.csv')
ground_truth = np.array(users[["Spammer Label"]])
n_nodes = len(ground_truth)
# We assign the spammer / non-spammer labels to the two clusters in whichever way gives the best success rate
error_rate = np.average((y_kmeans - ground_truth.reshape((n_nodes,)))**2)
success_rate = error_rate if error_rate > 0.5 else 1-error_rate
print("Our success ratio is : ",success_rate)
###Output
Our success ratio is : 0.7024110144274847
###Markdown
Question 14Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
plt.scatter(r2[:,0], r2[:,1], c=ground_truth.reshape((n_nodes,)), s=400, cmap='viridis', alpha = 0.6)
###Output
_____no_output_____
###Markdown
[NTDS'18] milestone 3: spectral graph theory[ntds'18]: https://github.com/mdeff/ntds_2018[Michaël Defferrard](http://deff.ch), [EPFL LTS2](https://lts2.epfl.ch) Students* Team: ``* Students: ``* Dataset: `` Rules* Milestones have to be completed by teams. No collaboration between teams is allowed.* Textual answers shall be short. Typically one to two sentences.* Code has to be clean.* You cannot import any other library than we imported.* When submitting, the notebook is executed and the results are stored. I.e., if you open the notebook again it should show numerical results and plots. We won't be able to execute your notebooks.* The notebook is re-executed from a blank state before submission. That is to be sure it is reproducible. You can click "Kernel" then "Restart & Run All" in Jupyter. ObjectiveThe goal of this milestone is to get familiar with the graph Laplacian and its spectral decomposition. 0 Load your network
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
If you get a `No module named 'sklearn'` error when running the below cell, install [scikit-learn](https://scikit-learn.org) with `conda install scikit-learn` (after activating the `ntds_2018` environment).
###Code
import numpy as np
from scipy import sparse
import scipy.sparse.linalg
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
###Output
_____no_output_____
###Markdown
Let's denote your graph as $\mathcal{G} = (\mathcal{V}, \mathcal{E}, A)$, where $\mathcal{V}$ is the set of nodes, $\mathcal{E}$ is the set of edges, $A \in \mathbb{R}^{N \times N}$ is the (weighted) adjacency matrix, and $N = |\mathcal{V}|$ is the number of nodes.Import the adjacency matrix $A$ that you constructed in the first milestone.(You're allowed to update it between milestones if you want to.)
###Code
adjacency = np.load('Adjacency1.npy') # Your code here.
# Suppression of the nodes with 0 degree (otherwise problem when computing the normalized Laplacian)
degree = adjacency.sum(axis=0)
list_to_delete = []; # This array is going to contain all the index of the nodes with degree 0
for i in range(0,len(adjacency)):
degree_node_i = degree.item(i)
if degree_node_i == 0:
list_to_delete.append(i)
list_to_delete = np.array(list_to_delete)
for j in range(0,len(list_to_delete)):
index = list_to_delete[j]
adjacency = np.delete(adjacency, index, axis=0)
adjacency = np.delete(adjacency, index, axis=1)
list_to_delete = list_to_delete - 1
degree = adjacency.sum(axis=0)
n_nodes = len(adjacency) # Your code here.
###Output
_____no_output_____
###Markdown
1 Graph Laplacian Question 1 From the (weighted) adjacency matrix $A$, compute both the combinatorial (also called unnormalized) and the normalized graph Laplacian matrices. Note: if your graph is weighted, use the weighted adjacency matrix. If not, use the binary adjacency matrix. For efficient storage and computation, store these sparse matrices in a [compressed sparse row (CSR) format](https://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_row_.28CSR.2C_CRS_or_Yale_format.29).
###Code
adjacency = sparse.csr_matrix(adjacency) # Transformation of the adjacency matrix to a CSR matrix
D = sparse.diags(degree) # Degree matrix
D2 = sparse.diags(1/np.sqrt(degree)) # Represent D^(-1/2)
laplacian_combinatorial = D - adjacency # Your code here.
laplacian_normalized = D2.dot(laplacian_combinatorial.dot(D2)) # Your code here.
###Output
_____no_output_____
###Markdown
Use one of them as the graph Laplacian $L$ for the rest of the milestone.We however encourage you to run the code with both to get a sense of the difference!
###Code
laplacian = laplacian_combinatorial # Either laplacian_combinatorial or laplacian_normalized.
###Output
_____no_output_____
###Markdown
Question 2Compute the eigendecomposition of the Laplacian $L = U^\top \Lambda U$, where the columns $u_k \in \mathbb{R}^N$ of $U = [u_1, \dots, u_N] \in \mathbb{R}^{N \times N}$ are the eigenvectors and the diagonal elements $\lambda_k = \Lambda_{kk}$ are the corresponding eigenvalues.Make sure that the eigenvalues are ordered, i.e., $0 = \lambda_1 \leq \lambda_2 \leq \dots \leq \lambda_N$.
###Code
# calculate eigendecomposition
values, vectors = np.linalg.eigh(laplacian.toarray())
eigenvectors = vectors # Your code here.
eigenvalues = np.round(np.absolute(values),2) # Your code here.
assert eigenvectors.shape == (n_nodes, n_nodes)
###Output
_____no_output_____
###Markdown
Justify your choice of eigensolver. **Your answer here.** As our Laplacian matrix is sparse, it would seem natural to use `sparse.linalg.eigs()` for the eigendecomposition. Unfortunately, that function can only return at most N-1 eigenvalues. Thus, in order to obtain all the eigenvalues of the Laplacian, we converted the matrix to a `np.ndarray` and used `np.linalg.eigh()`, which gives the complete decomposition of a Hermitian matrix. Question 3 We can write $L = S S^\top$. What is the matrix $S$? What does $S^\top x$, with $x \in \mathbb{R}^N$, compute? **Your answer here.** $S$ is the incidence matrix, indexed by a node ($n_i$) and an edge ($e_j$). For a directed graph:
* if the edge $e_j$ goes from $n_i$ to $n_k$, then $S[i,j] = 1$;
* if the edge $e_j$ goes from $n_k$ to $n_i$, then $S[i,j] = -1$;
* if the edge $e_j$ is not incident to $n_i$, then $S[i,j] = 0$.
For an undirected graph, one fixes an arbitrary orientation for each edge and uses the same $\pm 1$ convention, so that $L = S S^\top$ still holds. Each row of $S^\top$ then has only two nonzero values: a $+1$ at the node corresponding to the head of the edge and a $-1$ at the node corresponding to the tail. The vector $x$ assigns a value to each node, so the product $S^\top x$ computes, for every edge, the difference between the values of its two endpoints. (With the unsigned convention, where both endpoints get $+1$, the product would instead give the sum of the two values, but then $S S^\top$ would equal $D + A$ rather than $L$.) Question 4 Show that $\lambda_k = \| S^\top u_k \|_2^2$, where $\| \cdot \|_2^2$ denotes the squared Euclidean norm (a.k.a. squared $L^2$ norm). **Your answer here.** $\lambda_k = \lambda_k u_k^\top u_k = u_k^\top \lambda_k u_k = u_k^\top L u_k = u_k^\top S S^\top u_k = (S^\top u_k)^\top S^\top u_k = \| S^\top u_k \|_2^2$ What does the quantity $\| S^\top x \|_2^2$ tell us about $x$? **Your answer here.** The quantity $\| S^\top x \|_2^2$ measures the total squared variation, i.e. a discrete derivative, of the vector $x$ along all the edges of the graph. Question 5 What is the value of $u_0$, both for the combinatorial and normalized Laplacians? **Your answer here.** As our graph has many disconnected components, we will not find one single constant eigenvector, as the theory would predict for a connected graph. Instead, a constant vector can be obtained as a linear combination of all the eigenvectors associated with eigenvalue 0. Question 6 Look at the spectrum of the Laplacian by plotting the eigenvalues. Comment on what you observe.
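Before plotting the spectrum, here is a small illustrative check (our addition, not part of the graded answers) of Questions 3 and 4: we build the signed incidence matrix $S$ for an arbitrary edge orientation and verify numerically that $L = S S^\top$ and $\lambda_k = \| S^\top u_k \|_2^2$. The names `A_dense`, `S_demo` and `idx` are ours, and we assume a binary adjacency matrix; for a weighted graph the nonzero entries would be $\pm\sqrt{w_{ij}}$.
###Code
# Illustrative sketch only (assumes an unweighted graph and enough memory for a dense S).
A_dense = adjacency.toarray()
heads, tails = np.triu(A_dense, k=1).nonzero()   # one (head, tail) pair per undirected edge
S_demo = np.zeros((n_nodes, len(heads)))
S_demo[heads, np.arange(len(heads))] = 1         # +1 at the head of each (arbitrarily oriented) edge
S_demo[tails, np.arange(len(heads))] = -1        # -1 at the tail
print(np.allclose(S_demo @ S_demo.T, laplacian_combinatorial.toarray()))
idx = 5
print(np.isclose(np.sum((S_demo.T @ eigenvectors[:, idx])**2), values[idx]))
###Output
_____no_output_____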
###Code
plt.figure(figsize=(25, 5))
for i in range(0, len(eigenvalues)):
plt.bar(i,eigenvalues[i], width=0.5)
plt.show()
###Output
_____no_output_____
###Markdown
**Your answer here.** An interesting part of the spectrum, visible for both Laplacians, is the set of eigenvalues equal to 0. For both the combinatorial and the normalized Laplacian we obtain 139 eigenvalues equal to 0. From the theory, we know that the multiplicity of the eigenvalue 0 is equal to the number of connected components. How many connected components are there in your graph? Answer using the eigenvalues only.
###Code
# Your code here.
number_connected_components = len(eigenvalues) - np.count_nonzero(eigenvalues)
print('Number of connected components:', number_connected_components)
###Output
_____no_output_____
###Markdown
Is there an upper bound on the eigenvalues, i.e., what is the largest possible eigenvalue? Answer for both the combinatorial and normalized Laplacians. **Your answer here.** For the combinatorial Laplacian, there is no universal upper bound on the eigenvalues: the largest eigenvalue grows with the (weighted) node degrees. For the normalized Laplacian, however, we saw in the theory that the eigenvalues are bounded above by 2. 3 Laplacian eigenmaps *Laplacian eigenmaps* is a method to embed a graph $\mathcal{G}$ in a $d$-dimensional Euclidean space. That is, it associates a vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$. The graph $\mathcal{G}$ is thus embedded as $Z \in \mathbb{R}^{N \times d}$. Question 7 What do we use Laplacian eigenmaps for? (Or more generally, graph embeddings.) **Your answer here.** Laplacian eigenmaps are used for nonlinear dimensionality reduction. Traditional techniques like PCA do not consider the intrinsic geometry of the data. Laplacian eigenmaps instead build a graph from neighborhood information of the data set: each data point serves as a node of the graph, and connectivity between nodes is governed by the proximity of neighboring points (using e.g. the k-nearest-neighbor algorithm). Question 8 Embed your graph in $d=2$ dimensions with Laplacian eigenmaps. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer. **Recompute** the eigenvectors you need with a partial eigendecomposition method for sparse matrices. When $k \ll N$ eigenvectors are needed, partial eigendecompositions are much more efficient than complete eigendecompositions. A partial eigendecomposition scales as $\Omega(k |\mathcal{E}|)$, while a complete eigendecomposition costs $\mathcal{O}(N^3)$ operations.
###Code
import pandas as pd
from scipy.spatial.distance import pdist, squareform
import networkx as nx
# Prepare the 106 features
features = pd.read_csv('TerrorAttack/terrorist_attack.nodes', delim_whitespace=True, header=None, engine='python')
gt = features[[107]]
features = features.drop( columns=0)
features = features.drop(columns=107)
X = features
np.save('features_matrix', X )
# Creation of the adjacency matrix for the Laplacian Eigenmaps corresponding to the distance
distances = pdist(X.values, metric='euclidean')
kernel_width = distances.mean()
weights = np.exp(-distances**2 / kernel_width**2)
adjacency_eigen = squareform(weights)
adjacency_eigen[adjacency_eigen < np.mean(weights)] = 0
adjacency_eigen[adjacency_eigen >= np.mean(weights)] = 1
degree_eigen = adjacency_eigen.sum(axis=0)
adjacency_eigen = sparse.csr_matrix(adjacency_eigen)
sparse.save_npz('adjacency_sparse', adjacency_eigen)
D_eigen = sparse.diags(degree_eigen) # Degree matrix
D2_eigen = sparse.diags(1/np.sqrt(degree_eigen)) # Represent D^(-1/2)
laplacian_combinatorial_eigen = D_eigen - adjacency_eigen # Your code here.
laplacian_normalized_eigen = D2_eigen.dot(laplacian_combinatorial_eigen.dot(D2_eigen)) # Your code here.
###Output
_____no_output_____
###Markdown
Plot the nodes embedded in 2D. Comment on what you see.
###Code
laplacian_eigen = laplacian_normalized_eigen
sparse.save_npz('laplacian_normalized_sparse',laplacian_eigen)
val, vect = sparse.linalg.eigsh(laplacian_eigen, k = 11, which = 'SM')
v1=vect[:,1]
v2=vect[:,2]
v3=vect[:,3]
v4=vect[:,4]
v5=vect[:,5]
v6=vect[:,6]
v7=vect[:,7]
v8=vect[:,8]
v9=vect[:,9]
v10=vect[:,10]
new_x1=adjacency_eigen*v1
new_x2=adjacency_eigen*v2
new_x3=adjacency_eigen*v3
new_x4=adjacency_eigen*v4
new_x5=adjacency_eigen*v5
new_x6=adjacency_eigen*v6
new_x7=adjacency_eigen*v7
new_x8=adjacency_eigen*v8
new_x9=adjacency_eigen*v9
new_x10=adjacency_eigen*v10
plt.scatter(v1,v2,s = 3);
###Output
_____no_output_____
###Markdown
**Your answer here.** In 2D, we cannot really observe any significant clusters. A projection into a higher-dimensional space would probably separate them better. Question 9 What does the embedding $Z \in \mathbb{R}^{N \times d}$ preserve? The similarities (local information), with the smallest possible error. If two data points are close to each other in the original $\mathbb{R}^{N \times L}$ domain, they should also be close to each other in the $\mathbb{R}^{N \times d}$ domain. 2 Spectral clustering *Spectral clustering* is a method to partition a graph into distinct clusters. The method associates a feature vector $z_i \in \mathbb{R}^d$ to every node $v_i \in \mathcal{V}$, then runs [$k$-means](https://en.wikipedia.org/wiki/K-means_clustering) in the embedding space $\mathbb{R}^d$ to assign each node $v_i \in \mathcal{V}$ to a cluster $c_j \in \mathcal{C}$, where $k = |\mathcal{C}|$ is the number of desired clusters. Question 10 Choose $k$ and $d$. How did you get to those numbers? We choose k=6 because there are 6 distinct labels in the feature file; the goal is to check whether we can cluster the features into those 6 categories. d is the number of eigenvectors that we feed to k-means. We tried values of d between 2 and 20 eigenvectors and found that the minimum error rate was obtained with d = 10. Question 11 1. Embed your graph in $\mathbb{R}^d$ as $Z \in \mathbb{R}^{N \times d}$. Try with and without re-normalizing the eigenvectors by the degrees, then keep the one you prefer. 1. If you want $k=2$ clusters, partition with the Fiedler vector. For $k > 2$ clusters, run $k$-means on $Z$. Don't implement $k$-means, use the `KMeans` class imported from scikit-learn.
###Code
# Your code here.
kmeans = KMeans(n_clusters=6, random_state=1)
# Use the d = 10 eigenvectors computed above as the embedding Z (v10 rather than the smoothed new_x10).
data = np.transpose(np.vstack((v1, v2, v3, v4, v5, v6, v7, v8, v9, v10)))
kmeans.fit_predict(data);
###Output
_____no_output_____
###Markdown
Question 12Use the computed cluster assignment to reorder the adjacency matrix $A$.What do you expect? What do you observe?
###Code
# Your code here.
adj_ord=adjacency_eigen
labels=kmeans.labels_
indx = np.argsort(labels)
adj_ord=adj_ord[indx,:][:,indx]
plt.spy(adj_ord,markersize=0.004);
###Output
_____no_output_____
###Markdown
**Your answer here.**We expect to see the different clusters that we made with the kmeans. And as expected, we can clearly see 6 patterns in the adjacency matrix. We can also see that 3 different clusters are on top of each other on the top left of the adjacency matrix. This may be the reason why, in the following task, the differentation between the 6 clusters is going to hard to get. Question 13If you have ground truth clusters for your dataset, compare the cluster assignment from spectral clustering to the ground truth.A simple quantitative measure is to compute the percentage of nodes that have been correctly categorized.If you don't have a ground truth, qualitatively assess the quality of the clustering.Ground truth clusters are the "real clusters".For example, the genre of musical tracks in FMA, the category of Wikipedia articles, the spammer status of individuals, etc.Look for the `labels` in the [dataset descriptions](https://github.com/mdeff/ntds_2018/tree/master/projects/README.md).
###Code
# Your code here.
# Map the ground-truth label strings to integers. Note that str.lstrip strips a *set of
# characters* (all those appearing in the URL prefix), not a literal prefix, which is why one
# category ends up as the single letter 'k'. The category-to-cluster mapping below was chosen
# to minimize the error against the k-means labels.
gt[107] = gt[107].map(lambda x: x.lstrip('http://counterterror.mindswap.org2005/ict_events.owl#'))
dico = {'Arson':0, 'Bombing':3, 'Kidnapping':4, 'NBCR_Attack':2, 'k':5, 'Weapon_Attack':1}
gt_dico = gt.replace(dico)
gt_labels = np.squeeze(gt_dico)
kmeans_labels = kmeans.labels_
diff = kmeans_labels - gt_labels
nbr_diff = np.count_nonzero(diff, axis=0)
print('The percentage of correctly clustered nodes is', np.round((1293 - nbr_diff)/1293*100, 2), '%.')
###Output
_____no_output_____
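###Markdown
The mapping in `dico` is hand-tuned. As an aside (our addition, not required by the milestone), here is a sketch of a mapping-free score: build the confusion matrix between the k-means labels and the ground truth and let the Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) pick the best cluster-to-category matching. Everything in this cell is illustrative.
###Code
# Illustrative sketch: permutation-invariant clustering accuracy via the Hungarian algorithm.
from scipy.optimize import linear_sum_assignment
conf = np.zeros((6, 6))
for pred, true in zip(kmeans_labels, np.asarray(gt_labels)):
    conf[int(pred), int(true)] += 1
row_ind, col_ind = linear_sum_assignment(-conf)  # maximise the matched counts
print('Best-matching accuracy:', conf[row_ind, col_ind].sum() / conf.sum())
###Output
_____no_output_____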
###Markdown
**Remark** The percentage of correctly clustered nodes is quite low. The ground-truth labelling might be based on additional information that is not accessible in our dataset. Question 14 Plot the cluster assignment (one color per cluster) on the 2D embedding you computed above with Laplacian eigenmaps.
###Code
# Projection of our data on two eigenvectors.
# As we have d = 10, we plotted all the possibilities of 2 eigenvectors among the 10 we have.
# The plot asked for is in position (0,1)
n_vec = 10  # avoid shadowing the built-in max()
plt.figure(figsize=(20, 20))
for a in range(0, n_vec):
    for b in range(0, n_vec):
        plt.subplot(n_vec, n_vec, a*n_vec + (b+1))
        plt.scatter(data[:,a], data[:,b], c=kmeans_labels, cmap='rainbow', s=0.2)
        plt.title("Vec_1: %d, Vec_2: %d" % (a+1, b+1))
plt.show()
plt.scatter(data[:,0], data[:,1], c=kmeans_labels, cmap='rainbow', s=5)
plt.scatter(data[:,0], data[:,1], c=gt_labels, cmap='rainbow', s=5)  # Ground truth on top, for comparison: we clearly see the difference
###Output
_____no_output_____ |
incubating/examples/nodejs_tensorflow/nodejs_tensorflow.ipynb | ###Markdown
Nodejs Tensorflow Example * Wrap a nodejs tensorflow model for use as a prediction microservice in seldon-core * Run locally on Docker to test Dependencies * ```pip install seldon-core``` * [Helm](https://github.com/kubernetes/helm) * [Minikube](https://github.com/kubernetes/minikube) * [S2I](https://github.com/openshift/source-to-image) * node (version>=8.11.0) * npm Train locally using npm commands This model example takes an input of 10 different features and predicts an output for each input row. For training it uses a random, normally distributed input set of 100 rows, i.e. a data set of shape [100,10], and fits it against another random, normally distributed data set of shape [100,1]. For every prediction the model expects a dataset of dimension [r,10], where r is the number of input rows to be predicted.
###Code
!make train && make clean_build
###Output
npm install
> @tensorflow/[email protected] install /home/clive/work/seldon-core/fork-seldon-core/examples/models/nodejs_tensorflow/node_modules/@tensorflow/tfjs-node
> node scripts/install.js
* Downloading libtensorflow
[==============================] 14319896/bps 100% 0.0s
* Building TensorFlow Node.js bindings
> [email protected] postinstall /home/clive/work/seldon-core/fork-seldon-core/examples/models/nodejs_tensorflow/node_modules/protobufjs
> node scripts/postinstall
npm notice created a lockfile as package-lock.json. You should commit this file.
npm WARN [email protected] No repository field.
npm WARN [email protected] No license field.
added 48 packages from 56 contributors and audited 61 packages in 9.829s
found 0 vulnerabilities
npm start
> [email protected] start /home/clive/work/seldon-core/fork-seldon-core/examples/models/nodejs_tensorflow
> node train.js
2019-05-10 06:56:33.691232: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX AVX2 FMA
Epoch 0: loss = 1.1140578985214233
Epoch 1: loss = 1.0404443740844727
Epoch 2: loss = 1.0114623308181763
Epoch 3: loss = 0.994644284248352
Epoch 4: loss = 0.9810447692871094
Epoch 5: loss = 0.9564876556396484
Epoch 6: loss = 0.947548508644104
Epoch 7: loss = 0.9377892017364502
Epoch 8: loss = 0.9292038679122925
Epoch 9: loss = 0.9103612899780273
Epoch 10: loss = 0.9044468402862549
Epoch 11: loss = 0.8943670392036438
Epoch 12: loss = 0.8909915685653687
Epoch 13: loss = 0.8821757435798645
Epoch 14: loss = 0.8772059679031372
Epoch 15: loss = 0.8722608685493469
Epoch 16: loss = 0.870168149471283
Epoch 17: loss = 0.8628248572349548
Epoch 18: loss = 0.856920599937439
Epoch 19: loss = 0.8508269786834717
Epoch 20: loss = 0.8445506691932678
Epoch 21: loss = 0.8388644456863403
Epoch 22: loss = 0.8324810862541199
Epoch 23: loss = 0.8312572836875916
Epoch 24: loss = 0.8251888155937195
Epoch 25: loss = 0.8173127770423889
Epoch 26: loss = 0.8206360936164856
Epoch 27: loss = 0.825434684753418
Epoch 28: loss = 0.8106041550636292
Epoch 29: loss = 0.8014734387397766
Epoch 30: loss = 0.7964511513710022
Epoch 31: loss = 0.7898756265640259
Epoch 32: loss = 0.7860068082809448
Epoch 33: loss = 0.7900837659835815
Epoch 34: loss = 0.7788155674934387
Epoch 35: loss = 0.778168261051178
Epoch 36: loss = 0.774094820022583
Epoch 37: loss = 0.7649340033531189
Epoch 38: loss = 0.759834349155426
Epoch 39: loss = 0.7585961818695068
Epoch 40: loss = 0.7511364817619324
Epoch 41: loss = 0.7497982382774353
Epoch 42: loss = 0.7454034090042114
Epoch 43: loss = 0.7422577738761902
Epoch 44: loss = 0.7390987873077393
Epoch 45: loss = 0.7328671813011169
Epoch 46: loss = 0.7296737432479858
Epoch 47: loss = 0.7255033850669861
Epoch 48: loss = 0.7259540557861328
Epoch 49: loss = 0.7198896408081055
Epoch 50: loss = 0.7157299518585205
Epoch 51: loss = 0.7137295603752136
Epoch 52: loss = 0.7115896344184875
Epoch 53: loss = 0.7110546827316284
Epoch 54: loss = 0.7083038687705994
Epoch 55: loss = 0.7007032036781311
Epoch 56: loss = 0.6936700344085693
Epoch 57: loss = 0.693160891532898
Epoch 58: loss = 0.6876615881919861
Epoch 59: loss = 0.6804297566413879
Epoch 60: loss = 0.6776358485221863
Epoch 61: loss = 0.6728461980819702
Epoch 62: loss = 0.6687815189361572
Epoch 63: loss = 0.6673902869224548
Epoch 64: loss = 0.6670713424682617
Epoch 65: loss = 0.6624063849449158
Epoch 66: loss = 0.65739905834198
Epoch 67: loss = 0.6553966999053955
Epoch 68: loss = 0.6506110429763794
Epoch 69: loss = 0.6493582129478455
Epoch 70: loss = 0.6465271711349487
Epoch 71: loss = 0.6439094543457031
Epoch 72: loss = 0.6397424340248108
Epoch 73: loss = 0.6372050046920776
Epoch 74: loss = 0.6370261907577515
Epoch 75: loss = 0.6327844858169556
Epoch 76: loss = 0.6300538182258606
Epoch 77: loss = 0.6324681639671326
Epoch 78: loss = 0.6271001100540161
Epoch 79: loss = 0.6215335130691528
Epoch 80: loss = 0.6228755116462708
Epoch 81: loss = 0.6215202808380127
Epoch 82: loss = 0.6156829595565796
Epoch 83: loss = 0.6130117774009705
Epoch 84: loss = 0.6068021655082703
Epoch 85: loss = 0.6044408082962036
Epoch 86: loss = 0.6065412759780884
Epoch 87: loss = 0.6013280749320984
Epoch 88: loss = 0.5983843803405762
Epoch 89: loss = 0.5943615436553955
Epoch 90: loss = 0.5942131280899048
Epoch 91: loss = 0.5931912660598755
Epoch 92: loss = 0.5889885425567627
Epoch 93: loss = 0.5844202041625977
Epoch 94: loss = 0.5816053748130798
###Markdown
Training creates a model.json file and a weights.bin file, which are used for prediction. Prediction using REST API on the docker container
###Code
!s2i build . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-model-image:0.1
!docker run --name "nodejs_tensorflow_predictor" -d --rm -p 5000:5000 node-s2i-model-image:0.1
###Output
6cc8e4bca5aff59b1a4f0613f4e61ac212bd513954f4c61c964c0cb237a35f34
###Markdown
Send some random features that conform to the contract
###Code
!seldon-core-tester contract.json 0.0.0.0 5000 -p -t
!docker rm nodejs_tensorflow_predictor --force
###Output
nodejs_tensorflow_predictor
###Markdown
Prediction using GRPC API on the docker container
###Code
!s2i build -E ./.s2i/environment_grpc . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-model-image:0.2
!docker run --name "nodejs_tensorflow_predictor" -d --rm -p 5000:5000 node-s2i-model-image:0.2
###Output
3e824d60a31f688ed8c5e7cc95cb6e15ff72669faec8e219d6fee3a900794007
###Markdown
Send some random features that conform to the contract
###Code
!seldon-core-tester contract.json 0.0.0.0 5000 -p -t --grpc
!docker rm nodejs_tensorflow_predictor --force
###Output
nodejs_tensorflow_predictor
###Markdown
Test using Minikube**Due to a [minikube/s2i issue](https://github.com/SeldonIO/seldon-core/issues/253) you will need [s2i >= 1.1.13](https://github.com/openshift/source-to-image/releases/tag/v1.1.13)**
###Code
!minikube start --memory 4096
###Output
_____no_output_____
###Markdown
Setup Seldon Core Use the setup notebook to [Setup Cluster](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Setup-Cluster) with [Ambassador Ingress](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Ambassador) and [Install Seldon Core](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html#Install-Seldon-Core). Instructions [also online](https://docs.seldon.io/projects/seldon-core/en/latest/examples/seldon_core_setup.html). Build image and test
###Code
!eval $(minikube docker-env) && s2i build . seldonio/seldon-core-s2i-nodejs:0.2-SNAPSHOT node-s2i-model-image:0.1
!kubectl create -f nodejs_tensorflow_deployment.json
!kubectl rollout status deploy/seldon-cea8a97ce503f62508ad289c86fe0e27
!seldon-core-api-tester contract.json `minikube ip` `kubectl get svc ambassador -o jsonpath='{.spec.ports[0].nodePort}'` \
seldon-deployment-example --namespace default -p
!minikube delete
!make clean
###Output
rm -rf node_modules
rm -f package-lock.json
rm -f model.json
rm -f weights.bin
|
ipynb/15-Đối-chứng-Tổng-hợp-QA.ipynb | ###Markdown
A Powerful Math Trick for the Seemingly Impossible

When we applied difference-in-differences, we used data on a number of customers from two different cities: Porto Alegre and Florianópolis. The data spanned two periods: before and after a marketing campaign run in Porto Alegre to increase customer deposits. To estimate the treatment effect, we ran a regression that gave us the diff-in-diff estimate and its standard error.

In that case we had plenty of samples, because the data was quite granular. But what happens if all we have is data aggregated at the city level? For example, suppose all we have is the average deposits in both cities, before and after the intervention.

| City | Before the Intervention | After the Intervention |
|--|--|--|
| FL | 171.64 | 206.16 |
| POA | 46.01 | 87.06 |

We can still compute the diff-in-diff estimator

$(E[Y(1)|D=1] - E[Y(1)|D=0]) - (E[Y(0)|D=1] - E[Y(0)|D=0]) = (87.06 - 206.16) - (46.01 - 171.64) = 6.53$

However, notice that the sample size here is 4, which is also the number of parameters in the diff-in-diff model. In this case, the standard error is not well defined, so what should we do? Another problem is that Florianópolis may not be similar to Porto Alegre in the ways we would like. For example, Florianópolis is known for its beautiful beaches and friendly locals, while Porto Alegre is famous for its barbecue and its gaucho culture. The point is that you can never be sure you are using an appropriate control group.

To address this, we will use what has been called [**"the most important innovation in the policy-evaluation literature in recent years"**](https://www.aeaweb.org/articles?id=10.1257/jep.31.2.3): the Synthetic Control. It rests on a simple but powerful idea. We don't need to find any single control group that closely resembles the treated group. Instead, we can build one ourselves, by combining multiple control units into an effective synthetic control. Synthetic control is so effective and intuitive that it even made it into a general-audience newspaper, not just a scientific journal: the [Washington Post](https://www.washingtonpost.com/news/wonk/wp/2015/10/30/how-to-measure-things-in-a-world-of-competing-claims/).
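Just to make the arithmetic above concrete, here is a tiny illustrative computation of the same 2x2 diff-in-diff estimate (the numbers are taken from the table; nothing else is assumed):
###Code
# Minimal illustration of the 2x2 diff-in-diff from the table above.
fl_before, fl_after = 171.64, 206.16    # Florianópolis (control)
poa_before, poa_after = 46.01, 87.06    # Porto Alegre (treated)
diff_in_diff = (poa_after - fl_after) - (poa_before - fl_before)
print(round(diff_in_diff, 2))           # 6.53
###Output
_____no_output_____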
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from matplotlib import style
from matplotlib import pyplot as plt
import seaborn as sns
import statsmodels.formula.api as smf
%matplotlib inline
pd.set_option("display.max_columns", 6)
style.use("fivethirtyeight")
###Output
_____no_output_____
###Markdown
To see how this works, we are going to estimate the effect of cigarette taxation on cigarette consumption. This question has been debated in economics for a long time. One side argues that taxes increase the price of cigarettes, which lowers the demand for them. The other side argues that, since cigarettes are addictive, a change in price will not change demand by much. In economic terms, we would say that the demand for cigarettes is price-inelastic, and that raising taxes is merely a way to increase government revenue at the expense of smokers. To sort this out, we will look at some relevant data from the US.

In 1988, California passed the famous Tobacco Tax and Health Protection Act, known as [Proposition 99](https://en.wikipedia.org/wiki/1988_California_Proposition_99). "Its primary effect was to impose a 25-cent excise tax on each pack of cigarettes sold in California, with roughly equivalent taxes on other commercial tobacco products, such as cigars and chewing tobacco. Additional restrictions were placed on cigarette retailing, including a ban on cigarette vending machines in public places and a ban on the individual sale of single cigarettes. Revenue raised by the act was earmarked for environmental and health-care programs, as well as anti-tobacco advertising."

To evaluate its effect, we can collect cigarette-sales data from several states over a number of years. In our case, we took data from 1970 to 2000 for 39 states. Other states that had similar tobacco-control programs were excluded from the analysis. Here is our data.
###Code
cigar = (pd.read_csv("data/smoking.csv")
.drop(columns=["lnincome","beer", "age15to24"]))
cigar.query("california").head()
###Output
_____no_output_____
###Markdown
We have `state` as the state index, with California being state number 3. The covariates are `retprice`, the retail price of cigarettes, and `cigsale`, the per-capita sales of cigarettes (in packs). The outcome variable of interest is `cigsale`. Finally, we have binary variables flagging the state of California and the post-intervention period. If we plot cigarette sales over time for California and for the other states, this is what we get.
###Code
ax = plt.subplot(1, 1, 1)
(cigar
.assign(california = np.where(cigar["california"], "California", "Other States"))
.groupby(["year", "california"])
["cigsale"]
.mean()
.reset_index()
.pivot("year", "california", "cigsale")
.plot(ax=ax, figsize=(10,5)))
plt.vlines(x=1988, ymin=40, ymax=140, linestyle=":", lw=2, label="Proposition 99")
plt.ylabel("Cigarette Sales Trend")
plt.title("Gap in per-capita cigarette sales (in packs)")
plt.legend();
###Output
_____no_output_____
###Markdown
During the period shown, people in California appear to buy fewer cigarettes than the national average. Cigarette consumption also seems to decline after the 80's. It looks like the downward trend becomes steeper in California after Proposition 99, compared with the other states, but we cannot be sure of that. It is only a guess we make from eyeballing the plot.

To answer the question of whether Proposition 99 affected cigarette consumption, we will use the pre-intervention period to build a synthetic control. We will combine the other states to **build a fake state whose trend closely resembles California's**. Then we will see how this synthetic control behaves after the intervention. We Have Time To put the problem a bit more formally, suppose we have J+1 units. Without loss of generality, assume that unit 1 is the one affected by the intervention. Units \\(j=2,...,J+1\\) form a collection of untreated units, which we will refer to as the "donor pool". Also assume that our data spans T time periods, with \\(T_0\\) periods before the intervention. For each unit j and each period t, we observe the outcome \\(Y_{jt}\\). For each unit j and period t, let \\(Y^N_{jt}\\) be the potential outcome without intervention and \\(Y^I_{jt}\\) the potential outcome with intervention. The effect for the treated unit \\(j=1\\) at time t, for \\(t>T_0\\), is then defined as

$\tau_{1t} = Y^I_{jt} - Y^N_{jt}$

Because unit \\(j = 1\\) is treated, \\(Y^I_{jt}\\) is the observed outcome while \\(Y^N_{jt}\\) is not. The challenge is therefore to estimate \\(Y^N_{jt}\\). Notice how the treatment effect is defined for each period, which means it can change over time; it does not have to be instantaneous. It can accumulate or dissipate. Put simply, the problem of estimating the treatment effect is the problem of estimating what would have happened to the outcome of unit \\(j=1\\) had it not been treated.

To estimate \\(Y^N_{jt}\\), remember that a combination of units in the donor pool may resemble the characteristics of the treated unit better than any single control unit alone. The synthetic control is therefore defined as a weighted average of the units in the donor pool. Given weights \\(\pmb{W}=(w_2, ..., w_{J+1})\\), the synthetic-control estimate of \\(Y^N_{jt}\\) is

$\hat{Y}^N_{jt} = \sum^{J+1}_{j=2} w_j Y_{jt}$

If this math makes your head hurt, you are not alone. Don't worry, we have plenty of examples to make it more intuitive. For once, think of synthetic control as a regression turned on its side. As we know, linear regression is also a way to make predictions from a weighted average of variables. Think of regressions like the one in the difference-in-differences example, where each variable is a dummy for a time period. In that case, the regression can be represented by the following matrix product.

In the synthetic-control case, we do not have many units, but we do have many time periods. So what we do is flip the input data matrix. The units then become the "variables", and we represent the outcome as a weighted average of the units, just like in the following matrix product.

If we have more than one characteristic per time period, we can stack them up.
The key point is that the regression "predicts" treated unit 1 using the other units. This way, we can choose the weights in some optimal way to achieve the resemblance we want. We can even scale the characteristics differently to reflect their relative importance.

So, if we can view synthetic control as a linear regression, does that mean we can estimate its weights with OLS? Yes! In fact, let's do that right now. Synthetic Control with Linear Regression To estimate the treatment effect with a synthetic control, we will try to build a "fake unit" that resembles the treated unit before the intervention. Then we will watch how this "fake unit" behaves after the intervention. The difference between the synthetic control and the unit it reproduces is the treatment effect.

To do this with linear regression, we find the weights with OLS: we minimize the squared difference between the weighted average of the units in the donor pool and the treated unit, over the pre-intervention period.

To do that, we first pivot the units (in our case, states) to the columns and time to the rows. Since we have 2 characteristics, `cigsale` and `retprice`, we stack them on top of each other, as done in the figure above. We will build a synthetic control that looks like California in the pre-intervention period and see how it behaves in the post-intervention period. For this reason, it is important to select the pre-intervention period only. Here the characteristics appear to be on similar scales, so we do not need to do anything to them. If the two characteristics were on very different scales, one in the thousands and one in the decimals, the larger-scale characteristic would dominate when minimizing the difference. To avoid this, it is important to scale them first.
###Code
features = ["cigsale", "retprice"]
inverted = (cigar.query("~after_treatment") # filter pre-intervention period
.pivot(index='state', columns="year")[features] # make one column per year and one row per state
.T) # flip the table to have one column per state
inverted.head()
###Output
_____no_output_____
###Markdown
Now we can define the Y variable as the state of California and X as the other states.
###Code
y = inverted[3].values # state of california
X = inverted.drop(columns=3).values # other states
###Output
_____no_output_____
###Markdown
Now we run the regression. Having an intercept would be like adding another state whose value is 1 in every row. You could do it, but I think it just adds complexity, so I leave it out. The regression will return the set of weights that minimizes the squared difference between the treated unit and the units in the donor pool.
###Code
from sklearn.linear_model import LinearRegression
weights_lr = LinearRegression(fit_intercept=False).fit(X, y).coef_
weights_lr.round(3)
###Output
_____no_output_____
###Markdown
These weights show us how to build the synthetic control. We multiply the outcome of state 1 by -0.436, state 2 by -1.038, state 3 by 0.679, and so on. We can do this with a dot product between the matrix of the donor-pool states and the weights.
###Code
calif_synth_lr = (cigar.query("~california")
.pivot(index='year', columns="state")["cigsale"]
.values.dot(weights_lr))
###Output
_____no_output_____
###Markdown
Once we have the synthetic control, we can plot it along with the outcome variable of the state of California.
###Code
plt.figure(figsize=(10,6))
plt.plot(cigar.query("california")["year"], cigar.query("california")["cigsale"], label="California")
plt.plot(cigar.query("california")["year"], calif_synth_lr, label="Synthetic Control")
plt.vlines(x=1988, ymin=40, ymax=140, linestyle=":", lw=2, label="Proposition 99")
plt.ylabel("Doanh số thuốc lá trên đầu người (bao)")
plt.legend();
###Output
_____no_output_____
###Markdown
Wait a moment... Something is off here. What stands out in this plot? First, after the intervention, the synthetic control has higher cigarette sales than California. This is a sign that the intervention succeeded in lowering the demand for cigarettes. Second, notice how perfectly the pre-intervention period is fitted. The synthetic control matches the state of California almost exactly. This is a sign that the synthetic control model is probably overfitting the data. Another sign is the huge variance of the synthetic control outcome after the intervention. Notice how the line does not follow a smooth pattern; instead, it goes up and down, and up and down again.To think about why this happens, remember that we have 38 states in our donor pool. So linear regression has 38 parameters with which to make the pre-intervention donor pool match the treated unit as closely as it can. This is a case where, even if T is large, N is also large, which gives the linear regression model far too much flexibility. If you are familiar with regularized models, you could use Ridge or Lasso regression to fix this (a short, hypothetical sketch follows right after this cell). Here, we will look at a more classic way of avoiding overfitting. Don't Extrapolate Suppose we have the data below and are asked to build a synthetic control that reproduces the treated unit using any linear combination of the control units.|unit|sales|price||--|--|--||control 1|8|8||control 2|8|4||control 3|4|5||treated |2|10|Because there are 3 units and only 2 attributes to match, there are multiple exact solutions to this problem, but a nice one is to multiply the first control by 2.25, the second by -2 and sum them. Notice how the second multiplication creates a fake unit with sales of -16 and a price of -8. This multiplication is extrapolating control unit 2 to a region of the data that makes little sense, since negative prices and sales are close to impossible. The first multiplication is also an extrapolation, since it takes the first unit to sales and a price of 18. These numbers are far above anything we have in the data, hence the extrapolation.This is what regression does when we ask it to build a synthetic control. Extrapolation is not technically wrong, but it is dangerous in practice: we are assuming that data we have never seen behaves like the data we do have.A safer approach is to restrict the synthetic control so that it only interpolates. To do so, we constrain the weights to be positive and to sum to one. The synthetic control is then a convex combination of the units in the donor pool. When interpolating, we project the treated unit onto the convex hull defined by the control units, much like in the picture below.There are two things to notice. First, interpolation will not be able to create a perfect match for the treated unit in this case. That is because the treated unit is the one with the lowest sales and the highest price. A convex combination can only exactly replicate features that lie in between the control units. Another thing to notice is that interpolation is sparse. We project the treated unit onto a face of the convex hull, and that face is defined by only a few units, so interpolation assigns weight zero to many of the units.This is the general idea; now let's work it out in a bit more detail.
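As a brief aside on the Ridge/Lasso remark above, here is a hedged sketch of what a regularized version of the regression weights might look like (the penalty `alpha=1.0` is an arbitrary assumption, and this is not the approach the text follows; it moves on to the interpolation constraints instead):

```python
from sklearn.linear_model import Ridge

# Same X (donor states) and y (California) as before; the penalty shrinks the weights.
weights_ridge = Ridge(alpha=1.0, fit_intercept=False).fit(X, y).coef_

# Build the corresponding synthetic control exactly as with the OLS weights.
calif_synth_ridge = (cigar.query("~california")
                     .pivot(index='year', columns="state")["cigsale"]
                     .values.dot(weights_ridge))
```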
The synthetic control is still defined as before,$\hat{Y}^N_{jt} = \sum^{J+1}_{j=2} w_j Y_{jt}$but now we will use the weights \\(\pmb{W}=(w_2, ..., w_{J+1})\\) that minimize$||\pmb{X}_1 - \pmb{X}_0 \pmb{W}|| = \bigg(\sum^k_{h=1}v_h \bigg(X_{h1} - \sum^{J+1}_{j=2} w_j X_{hj} \bigg)^2 \bigg)^{\frac{1}{2}}$subject to the restriction that \\(w_2, ..., w_{J+1}\\) are positive and sum to one. Note that \\(v_h\\) reflects the importance of each variable when minimizing the difference between the treated unit and the synthetic control. Different \\(v\\)s yield different optimal weights. One way to choose \\(V\\) is to make every variable have mean zero and unit variance. A more sophisticated way is to choose \\(V\\) so that the variables that better predict \\(Y\\) get higher importance. To keep the code simple, every variable will be given the same importance.To implement this, first let's define the loss function given above.
###Code
from typing import List
from operator import add
from toolz import reduce, partial
def loss_w(W: np.array, treated: np.array, controls: List[np.array], V:np.array) -> float:
diff = treated - reduce(add, [i * w for i, w in zip(controls, W)])
return np.sqrt(np.mean(diff**2)) # I'm using the mean instead of the sum, but it doesn't matter much
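# Note: the simpler loss below (equal feature importance, so no V argument) shadows
# the definition above and is the one actually used by get_w further down.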
def loss_w(W, X, y) -> float:
return np.sqrt(np.mean((y - X.dot(W))**2))
###Output
_____no_output_____
###Markdown
Since we are giving the features equal importance, we don't need to worry about v.Now, to get the optimal weights, we will use the quadratic programming optimizer from scipy. We will constrain the weights to sum to 1 with```python lambda x: np.sum(x) - 1```Also, we set the optimization bounds to be between 0 and 1.
###Code
from scipy.optimize import fmin_slsqp
def get_w(X, y):
w_start = [1/X.shape[1]]*X.shape[1]
weights = fmin_slsqp(partial(loss_w, X=X, y=y),
np.array(w_start),
f_eqcons=lambda x: np.sum(x) - 1,
bounds=[(0.0, 1.0)]*len(w_start),
disp=False)
return weights
###Output
_____no_output_____
###Markdown
With this in place, let's get the weights that define the synthetic control.
###Code
calif_weights = get_w(X, y)
print("Sum:", calif_weights.sum())
np.round(calif_weights, 4)
###Output
Sum: 1.0000000000007458
###Markdown
So, with these weights, we are multiplying states 1, 2 and 3 by 0, state 4 by 0.0852, and so on. Notice that the weights are sparse, exactly as we predicted. Also, all the weights sum to one and lie between 0 and 1, satisfying the convex combination constraint.Now, to get the synthetic control, we can multiply those weights by the states exactly as we did before with the regression weights.
###Code
calif_synth = cigar.query("~california").pivot(index='year', columns="state")["cigsale"].values.dot(calif_weights)
###Output
_____no_output_____
###Markdown
If we now plot the outcome of this synthetic control, we get a much smoother line. Also notice that the synthetic control does not reproduce the treated unit exactly in the pre-intervention period. This is a good sign, as it indicates that we are not overfitting.
###Code
plt.figure(figsize=(10,6))
plt.plot(cigar.query("california")["year"], cigar.query("california")["cigsale"], label="California")
plt.plot(cigar.query("california")["year"], calif_synth, label="Synthetic Control")
plt.vlines(x=1988, ymin=40, ymax=140, linestyle=":", lw=2, label="Proposition 99")
plt.ylabel("Doanh số thuốc lá trên đầu người (bao)")
plt.legend();
###Output
_____no_output_____
###Markdown
With the synthetic control in hand, we can estimate the treatment effect as the gap between the treated outcome and the synthetic control outcome.$\tau_{1t} = Y^I_{jt} - Y^N_{jt}$In this case, the effect gets larger and larger as time passes.
###Code
plt.figure(figsize=(10,6))
plt.plot(cigar.query("california")["year"], cigar.query("california")["cigsale"] - calif_synth,
label="California Effect")
plt.vlines(x=1988, ymin=-30, ymax=7, linestyle=":", lw=2, label="Proposition 99")
plt.hlines(y=0, xmin=1970, xmax=2000, lw=2)
plt.title("State - Synthetic Across Time")
plt.ylabel("Doanh số thuốc lá trên đầu người (bao)")
plt.legend();
###Output
_____no_output_____
###Markdown
Up to the year 2000, it looks like Proposition 99 has reduced cigarette sales by 25%. That is great, but you may still ask yourself: how do we know whether this is statistically significant? Inference Since we have a very small sample size (39), we will have to be smarter when figuring out whether this result is statistically significant or just due to random luck. Here, we will use the idea of Fisher's Exact Test. Its core idea is very simple: we exhaustively permute the treated and control units. Since we only have one treated unit, this means that, for each unit, we pretend it is the treated one while all the others are controls.|iteration|1|2|...|39||----|-|-|-|-||1|treated|0|0|0||2|0|treated|0|0||...|0|0|...|0||39|0|0|0|treated|In the end, we have one synthetic control and one effect estimate for each state. So what this does is pretend that the intervention happened in a state other than California, and see what effect is estimated for an intervention that did not actually occur. Then we check whether the treatment effect in California is large enough compared with these placebo effects.To do this, I have built a function that takes a state as input and estimates the synthetic control for that state. It returns a dataframe with one column for the state, one for the year, one for the `cigsale` outcome and one for the synthetic outcome of that state.
###Code
def synthetic_control(state: int, pool: List[int], data: pd.DataFrame) -> np.array:
features = ["cigsale", "retprice"]
inverted = (data.query("~after_treatment")
.pivot(index='state', columns="year")[features]
.T)
y = inverted[state].values # treated
X = inverted.drop(columns=state).values # donor pool
weights = get_w(X, y)
synthetic = (data.query(f"~(state=={state})")
.pivot(index='year', columns="state")["cigsale"]
.values.dot(weights))
return (data
.query(f"state=={state}")[["state", "year", "cigsale", "after_treatment"]]
.assign(synthetic=synthetic))
###Output
_____no_output_____
###Markdown
Here is what we get when we apply it to the first state.
###Code
control_pool = cigar["state"].unique()
synthetic_control(1, control_pool, cigar).head()
###Output
_____no_output_____
###Markdown
To get the result for all the states, we run the computation in parallel across 8 processors. If your machine has more or fewer cores, you can use a different number (see the short note after the next cell). This code returns a list of dataframes like the one above.
###Code
from joblib import Parallel, delayed
parallel_fn = delayed(partial(synthetic_control, pool=control_pool, data=cigar))
sinthetic_states = Parallel(n_jobs=8)(parallel_fn(state) for state in control_pool)
sinthetic_states[0].head()
###Output
_____no_output_____
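A small variation on the cell above: if you would rather not hard-code the number of cores, joblib accepts `n_jobs=-1` to use every available core (same result, only the degree of parallelism changes):

```python
# Use all available cores instead of a fixed number.
sinthetic_states = Parallel(n_jobs=-1)(parallel_fn(state) for state in control_pool)
```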
###Markdown
With a synthetic control for every state, we can estimate the gap between the synthetic and the true outcome for all of them. For California, this is the treatment effect. For the other states, it works like a placebo effect: we estimate the synthetic control treatment effect where the treatment did not actually happen. If we plot all the placebo effects together with the California treatment effect, we get the following.
###Code
plt.figure(figsize=(12,7))
for state in sinthetic_states:
plt.plot(state["year"], state["cigsale"] - state["synthetic"], color="C5",alpha=0.4)
plt.plot(cigar.query("california")["year"], cigar.query("california")["cigsale"] - calif_synth,
label="California");
plt.vlines(x=1988, ymin=-50, ymax=120, linestyle=":", lw=2, label="Proposition 99")
plt.hlines(y=0, xmin=1970, xmax=2000, lw=3)
plt.ylabel("Gap in per-capita cigarette sales (in packs)")
plt.title("State - Synthetic Across Time")
plt.legend();
###Output
_____no_output_____
###Markdown
Two pieces of information stand out here. First, we can see that the post-intervention variance is higher than the pre-intervention variance. This is expected, since the synthetic control is designed to minimize the differences in the pre-intervention period. Another interesting aspect is that some units fit the model quite poorly even in the pre-intervention period. This is not unexpected: for example, if some states have very high cigarette consumption, no convex combination of the other states will ever match them.Since those units fit so poorly, it is a good idea to remove them from the analysis. One way to do this objectively is to set a threshold for the pre-intervention error$MSE = \frac{1}{N}\sum\bigg(Y_t - \hat{Y}^{Synth}_t\bigg)^2$and remove the units with high error. If we do that and plot the same graph again, this is what we get.
###Code
def pre_treatment_error(state):
pre_treat_error = (state.query("~after_treatment")["cigsale"]
- state.query("~after_treatment")["synthetic"]) ** 2
return pre_treat_error.mean()
plt.figure(figsize=(12,7))
for state in sinthetic_states:
# remove units with mean error above 80.
if pre_treatment_error(state) < 80:
plt.plot(state["year"], state["cigsale"] - state["synthetic"], color="C5",alpha=0.4)
plt.plot(cigar.query("california")["year"], cigar.query("california")["cigsale"] - calif_synth,
label="California");
plt.vlines(x=1988, ymin=-50, ymax=120, linestyle=":", lw=2, label="Proposition 99")
plt.hlines(y=0, xmin=1970, xmax=2000, lw=3)
plt.ylabel("Gap in per-capita cigarette sales (in packs)")
plt.title("Distribution of Effects")
plt.title("State - Synthetic Across Time (Large Pre-Treatment Errors Removed)")
plt.legend();
###Output
_____no_output_____
###Markdown
With the noise removed, we can see just how extreme the effect in California is. This figure tells us that if we pretended the treatment had happened in any other state, we would almost never get an effect as large as the one we got for California.This figure alone is a form of inference, but we can also derive a p-value from these results. All we have to do is see how many of the effects we obtained are below the effect in California.
###Code
calif_number = 3
effects = [state.query("year==2000").iloc[0]["cigsale"] - state.query("year==2000").iloc[0]["synthetic"]
for state in sinthetic_states
if pre_treatment_error(state) < 80] # filter out noise
calif_effect = cigar.query("california & year==2000").iloc[0]["cigsale"] - calif_synth[-1]
print("California Treatment Effect for the Year 2000", calif_effect)
np.array(effects)
###Output
California Treatment Effect for the Year 2000 -24.83015975492409
###Markdown
If we want to test the one-sided hypothesis that the effect in California is below zero, we can estimate the p-value as the proportion of times that the effect in California is greater than the estimated placebo effects.$PV=\frac{1}{N}\sum \mathcal{1}\{\hat{\tau}_{Calif} > \hat{\tau}_j\}$As it turns out, the treatment effect in California in the year 2000 is -24.8, meaning the intervention reduced cigarette consumption by almost 25 packs. Of the 34 other placebo effects we estimated, only one is more extreme than (that is, below) the effect we found for California. So the p-value would be 1/35.
###Code
np.mean(np.array(effects) < calif_effect)
###Output
_____no_output_____
###Markdown
Finally, we can plot the distribution of the effects to see just how extreme the effect in California really is.
###Code
_, bins, _ = plt.hist(effects, bins=20, color="C5", alpha=0.5);
plt.hist([calif_effect], bins=bins, color="C0", label="California")
plt.ylabel("Frquency")
plt.title("Distribution of Effects")
plt.legend();
###Output
_____no_output_____ |
Laboratory Activity 6/LinAlg_58051_Hizon_Matrices.ipynb | ###Markdown
**TASK 1** Create a function named mat_desc() that thoroughly describes a matrix; it should: 1. Display the shape, size, and rank of the matrix. 2. Display whether the matrix is square or non-square. 3. Display whether the matrix is an empty matrix. 4. Display whether the matrix is an identity, ones, or zeros matrix. Use 5 sample matrices whose shapes are not lower than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section showing the description of each matrix you have declared.
###Code
import numpy as np
matrix_1 = np.array([
[1, 4, 7],
[2, 5, 8],
[3, 6, 9]
])
matrix_2 = np.array([
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
])
matrix_3 = np.array([
[1, 5, 9],
[3, 5, 7],
[7, 4, 3]
])
matrix_4 = np.array([
[5, 3, 4],
[9, 4, 7],
[7, 1, 6]
])
matrix_5 = np.array([
[6, 1, 8],
[9, 4, 1],
[1, 7, 6]
])
def mat_desc(matrix):
print(matrix)
if matrix.size > 0:
print("Shape: ", matrix.shape)
print("Size: ", matrix.size)
print("Rank: ", matrix.ndim)
square = True if matrix.shape[0] == matrix.shape[1] else False
print("Is the matrix square? ", square)
print("Is the matrix empty? NO")
zeroes = not np.any(matrix)
print("Is the matrix zero? ", zeroes)
ones = np.all((matrix == 1))
print("Is the matrix ones? ", ones)
rows = len(matrix);
columns = len(matrix[0]);
identity = True
for iteration in range(0, rows):
for number in range(0, columns):
if(iteration == number and matrix[iteration][number] !=1):
identity = False
if(iteration != number and matrix[iteration][number] !=0):
identity = False
if identity:
print("Is the matrix identity? True")
else:
print("Is the matrix identity? False")
else:
print("Shape: N/A")
print("Size: N/A")
print("Rank: N/A")
print("Is the matrix square? N/A")
print("Is the matrix empty? YES")
print("Is the matrix zero? N/A")
print("Is the matrix ones? N/A")
print("Is the matrix identity? N/A")
mat_desc(matrix_1)
mat_desc(matrix_2)
mat_desc(matrix_3)
mat_desc(matrix_4)
mat_desc(matrix_5)
###Output
[[6 1 8]
[9 4 1]
[1 7 6]]
Shape: (3, 3)
Size: 9
Rank: 2
Is the matrix square? True
Is the matrix empty? NO
Is the matrix zero? False
Is the matrix ones? False
Is the matrix identity? False
###Markdown
**TASK 2** Create a function named mat_operations() that takes in two matrices or scalars as input parameters; it should: 1. Display the description of each matrix; if the parameter is a scalar, it says that it is a scalar rather than describing it as a matrix. 2. Determine if the matrices are viable for operation and return your own error message if they are not viable. 3. Return the sum of the matrices. 4. Return the difference of the matrices. 5. Return the element-wise multiplication of the matrices. 6. Return the element-wise division of the matrices. Use 5 sample matrices whose shapes are not lower than (3,3). In your methodology, create a flowchart and discuss the functions and methods you have used. Present your results in the results section showing the description of each matrix you have declared.
###Code
import numpy as np
matrix_6 = np.array([
[7, 4, 2],
[9, 4, 5],
[1, 6, 7]
])
matrix_7 = np.array([
[7, 6, 9],
[4, 1, 3],
[8, 5, 1]
])
matrix_8 = np.array([
[3, 5, 4],
[2, 8, 6],
[5, 4, 8]
])
matrix_9 = np.array([
[4, 5, 6],
[9, 7, 2],
[6, 3, 1]
])
matrix_10 = np.array([
[1, 2, 6],
[2, 3, 7],
[3, 9, 5]
])
def mat_operations(A, B):
print("First Matrix: \n", A)
scalar_One = isinstance(A, (int, float, bytes, complex))
print("Is the matrix scalar? \n", scalar_One)
print("Second Matrix: \n", B)
scalar_Two = isinstance(B, (int, float, bytes, complex))
print("Is the matrix scalar? \n", scalar_Two)
matrix_One = isinstance(A, np.ndarray)
matrix_Two = isinstance(B, np.ndarray)
if scalar_One == True or scalar_Two == True or matrix_One == True and matrix_Two == True and A.shape == B.shape:
print("Addition: \n", A+B)
print("Subtraction: \n", A-B)
print("Multiplication: \n", A*B)
print("Division: \n", A/B)
    elif matrix_One == True and matrix_Two == True and A.shape != B.shape:
print("Operation Cannot be Performed. Shapes must be equal")
mat_operations(matrix_6, matrix_7)
mat_operations(matrix_8, matrix_9)
mat_operations(matrix_10, matrix_6)
###Output
_____no_output_____ |
Data Scientist Career Path/2. Getting Started with Data Science/2. The Data Science Process/script.ipynb | ###Markdown
Come up with a Question Do younger users tend to live in bigger cities and do older users live in smaller cities? Determine the Necessary Data
###Code
# Import modules:
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
# Import the data:
user_data = pd.read_csv("user_data.csv")
# Create age variable and find population mean:
population_mean = np.mean(user_data["age"])
# Select increasingly larger samples:
extra_small_sample = user_data["age"][:10]
small_sample = user_data["age"][:50]
medium_sample = user_data["age"][:100]
large_sample = user_data["age"][:200]
# Calculate the mean of those samples:
extra_small_sample_mean = np.mean(extra_small_sample)
small_sample_mean = np.mean(small_sample)
medium_sample_mean = np.mean(medium_sample)
large_sample_mean = np.mean(large_sample)
# Print them all out!
print ("Extra Small Sample Mean: " + str(extra_small_sample_mean))
print ("Small Sample Mean: " + str(small_sample_mean))
print ("Medium Sample Mean: " + str(medium_sample_mean))
print ("Large Sample Mean: " + str(large_sample_mean))
print ("\nPopulation Mean: "+ str(population_mean))
###Output
Extra Small Sample Mean: 29.0
Small Sample Mean: 29.24
Medium Sample Mean: 29.04
Large Sample Mean: 29.35
Population Mean: 29.427860696517413
###Markdown
Get the Data You can find the dataset at https://simplemaps.com/data/us-cities. Clean the Data
###Code
# Import the data:
pop_data = pd.read_csv("pop_data.csv")
# Look at the current pop_data DataFrame:
pop_data.head()
# Look at the current user_data DataFrame:
user_data.head()
# Merge the two datasets on the city column:
new_df = pd.merge(user_data, pop_data)
new_df.head(10)
# Write a logic statement that determines if a location is "rural" or "urban":
new_df.loc[new_df.population_proper < 100000, "location"] = "rural"
new_df.loc[new_df.population_proper >= 100000, "location"] = "urban"
# look at the new DataFrame:
new_df.head(20)
###Output
_____no_output_____
###Markdown
Explore the Data
###Code
# Plot a histogram that shows the distribution of ages in the dataset:
age = new_df["age"]
sns.distplot(age)
plt.show()
# Find the mean age of urban and rural users:
location_mean_age = new_df.groupby('location').age.mean() # turns it into a series
location_mean_age.head()
# Graph the age difference between rural and urban using a barplot:
sns.barplot(
data=new_df,
x= "location",
y= "age"
)
plt.show()
# Plot a violinplot, which shows the distribution of age in different locations:
sns.violinplot(data=new_df, x="location", y="age")
plt.show()
###Output
_____no_output_____
###Markdown
Model the Data
###Code
# Graph the population to age as a scatterplot:
x = new_df['population_proper']
y = new_df['age']
plt.scatter(x, y, alpha=0.5)
plt.show()
# Use Seaborn to visualize a linear regression:
sns.regplot(data=new_df, x="population_proper", y="age")
plt.show()
###Output
_____no_output_____
###Markdown
Communication
###Code
# Now we need to make our visualizations snazzy!
# Linear regression plot:
sns.regplot(data=new_df, x="population_proper", y="age")
# Change the axes, so they're easier to understand:
ax = plt.subplot(1, 1, 1)
ax.set_xticks([100000, 1000000, 2000000, 4000000, 8000000])
ax.set_xticklabels(['100k', '1m', '2m','4m', '8m'])
# Change the figure style and palette:
sns.set_style("white")
sns.set_palette("pastel")
sns.despine()
# Title the axes and plot:
ax.set_xlabel("City Population")
ax.set_ylabel("User Age")
plt.title("Age against Population")
plt.show()
###Output
_____no_output_____ |
review2vec/Review2Vec_v1_demo.ipynb | ###Markdown
Review2Vec v1: "Botanist" Review2Vec (R2V) is the second type of model we designed for Groa. We trained Gensim's Doc2Vec word embedding model on documents containing all the reviews a user has written on IMDb. Restricting our training data to only those users who had written at least 6 reviews, we represented almost 60k IMDb users in this model. R2V can then be used to infer a vector for a new user's movie reviews, and find "r2v-similar" users. We query those similar users to find cult movies and hidden gems that the Groa user might enjoy. Hidden gems are movies that are well regarded but relatively undiscovered. To find them, we query the database for movies enjoyed by r2v-similar users, having between 1k and 10k votes. Cult movies are movies that some users enjoy much more than the average. Our goal is to provide the user with movies that they might watch and consider underrated. We query the database for r2v-similar users, and select movies they rate at least three stars above the average. A lengthier query can also be found in the SQL directory, which finds movies rated at least 2 standard deviations above the average, but this query was too slow to be used in the app. A future team could solve this problem by storing the standard deviation of each movie's ratings in the movies table. Connect to Database
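As a hedged sketch of the future-work idea mentioned above (precomputing each movie's rating spread so the slower query becomes unnecessary): the `rating_stddev` column is hypothetical, and the table and column names are only inferred from the queries used later in this notebook.

```python
# Hypothetical one-off precomputation (PostgreSQL); schema details are assumptions.
add_stddev_sql = """
    ALTER TABLE movies ADD COLUMN IF NOT EXISTS rating_stddev real;
    UPDATE movies m
    SET rating_stddev = sub.sd
    FROM (SELECT movie_id, STDDEV(user_rating) AS sd
          FROM reviews
          GROUP BY movie_id) sub
    WHERE m.movie_id = sub.movie_id;
"""
# c.execute(add_stddev_sql); connection.commit()  # would be run once against the database
```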
###Code
! pip3 install psycopg2-binary --user
import pandas as pd
import psycopg2
import numpy as np
from getpass import getpass
# connect to database
connection = psycopg2.connect(
database = "postgres",
user = "postgres",
password = getpass(),
host = "movie-rec-scrape.cvslmiksgnix.us-east-1.rds.amazonaws.com",
port = '5432'
# database = "postgres",
# user = "postgres",
# password = getpass(),
# host = "groalives.cvslmiksgnix.us-east-1.rds.amazonaws.com",
# port = '5432'
)
# Enter database password below and press Enter.
# create cursor that is used throughout
try:
c = connection.cursor()
print("Connected!")
except:
print("Connection problem chief!")
###Output
Requirement already satisfied: psycopg2-binary in /home/ec2-user/.local/lib/python3.6/site-packages (2.8.4)
[33mYou are using pip version 19.0.2, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
###Markdown
Prepare data and train.Due to the high volume of data needed, this model was trained on an EC2 instance using the R2V_trainer.py script. Documentation on its usage can be found in `Groa/review2vec`.The general training plan is as follows:1. Get the list of reviewers who have written at least 6 reviews.2. For each user, download all reviews, concatenate, and tokenize. Pickle the result.3. Unpickle the reviews, format them as gensim.TaggedDocument, and train Doc2Vec on the data.4. Save the model. Install gensim
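Since the training itself happens in `R2V_trainer.py` rather than in this notebook, the sketch below gives a rough idea of steps 2-4 of the plan above (the hyperparameters and the `user_reviews` mapping are assumptions for illustration, not the script's actual code; gensim itself is installed in the next cell):

```python
# Rough sketch of the Doc2Vec training step (illustrative only).
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# `user_reviews` is assumed to map an IMDb username to the tokens of all their reviews.
docs = [TaggedDocument(words=tokens, tags=[username])
        for username, tokens in user_reviews.items()]

model = Doc2Vec(vector_size=100, window=5, min_count=2, workers=4, epochs=10)
model.build_vocab(docs)
model.train(docs, total_examples=model.corpus_count, epochs=model.epochs)
model.save("r2v_sketch.model")
```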
###Code
! python -m pip install tqdm # Unimportant; used for progress bars in terminal
! python -m pip install gensim
! python -m pip install numpy==1.18.1 # required to load the model.
###Output
Requirement already satisfied: tqdm in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (4.41.1)
[33mYou are using pip version 10.0.1, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
Requirement already satisfied: gensim in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (3.8.1)
Requirement already satisfied: smart-open>=1.8.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gensim) (1.9.0)
Requirement already satisfied: numpy>=1.11.3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gensim) (1.18.1)
Requirement already satisfied: six>=1.5.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gensim) (1.11.0)
Requirement already satisfied: scipy>=0.18.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from gensim) (1.1.0)
Requirement already satisfied: boto3 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from smart-open>=1.8.1->gensim) (1.11.5)
Requirement already satisfied: requests in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from smart-open>=1.8.1->gensim) (2.20.0)
Requirement already satisfied: boto>=2.32 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from smart-open>=1.8.1->gensim) (2.48.0)
Requirement already satisfied: botocore<1.15.0,>=1.14.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3->smart-open>=1.8.1->gensim) (1.14.5)
Requirement already satisfied: s3transfer<0.4.0,>=0.3.0 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3->smart-open>=1.8.1->gensim) (0.3.1)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from boto3->smart-open>=1.8.1->gensim) (0.9.4)
Requirement already satisfied: urllib3<1.25,>=1.21.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from requests->smart-open>=1.8.1->gensim) (1.23)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from requests->smart-open>=1.8.1->gensim) (3.0.4)
Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from requests->smart-open>=1.8.1->gensim) (2019.9.11)
Requirement already satisfied: idna<2.8,>=2.5 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from requests->smart-open>=1.8.1->gensim) (2.6)
Requirement already satisfied: docutils<0.16,>=0.10 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from botocore<1.15.0,>=1.14.5->boto3->smart-open>=1.8.1->gensim) (0.14)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (from botocore<1.15.0,>=1.14.5->boto3->smart-open>=1.8.1->gensim) (2.7.3)
[33mYou are using pip version 10.0.1, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
Requirement already satisfied: numpy==1.18.1 in /home/ec2-user/anaconda3/envs/python3/lib/python3.6/site-packages (1.18.1)
[33mYou are using pip version 10.0.1, however version 20.0.2 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.[0m
###Markdown
Test the model Model downloaded from EC2 instance using this command: `scp -i r2vkey.pem [email protected]:/home/ec2-user/Groa/review2vec/trained_models.zip ~/Downloads` Define inferencing functions
###Code
import gensim
from getpass import getpass
import numpy as np
import pandas as pd
import psycopg2
import re
import os
import warnings;
warnings.filterwarnings('ignore')
def prep_reviews(df):
    """Converts Letterboxd reviews dataframe to list of concatenated reviews."""
    reviews = df['Review'].tolist()
    # lowercase each review; rebinding the loop variable would not modify the list items
    reviews = [r.lower() for r in reviews]
    return reviews
class r2v_Recommender():
def __init__(self, model_path):
"""Initialize model with name of .model file"""
self.model_path = model_path
self.model = None
self.cursor_dog = None
def connect_db(self):
"""connect to database, create cursor"""
# connect to database
connection = psycopg2.connect(
database = "postgres",
user = "postgres",
password = getpass(),
host = "movie-rec-scrape.cvslmiksgnix.us-east-1.rds.amazonaws.com",
port = '5432'
)
# create cursor that is used throughout
try:
self.cursor_dog = connection.cursor()
print("Connected!")
except:
print("Connection problem chief!")
# Enter database password and press Enter.
def _get_model(self):
"""Get the model object for this instance, loading it if it's not already loaded."""
if self.model == None:
model_path = self.model_path
d2v_model = gensim.models.Doc2Vec.load(model_path)
# Keep only the normalized vectors.
# This saves memory but makes the model untrainable (read-only).
d2v_model.init_sims(replace=True)
self.model = d2v_model
return self.model
def predict(self, reviews, hist_list=[], n=100, max_votes=1000):
"""Returns a list of recommendations and useful metadata, given a pretrained
word2vec model and a list of movies.
Parameters
----------
reviews: string
string of concatenated user reviews.
hist_list : iterable
list of movies the user has seen.
n : int
number of recommendations to return.
max_votes : int
maximum number of votes for a movie to be considered a hidden gem.
Returns
-------
Two lists of tuples: hidden_gems and cult_movies
(Title, Year, URL, # Votes, Avg. Rating, User Rating, Reviewer, Review, Movie ID)
"""
clf = self._get_model()
def _remove_dupes(recs, good_movies, bad_movies):
"""remove any recommended IDs that were in the good_movies list"""
all_rated = good_movies + bad_movies
if hist_list:
all_rated = list(set(all_rated+hist_list))
dupes = [x for x in recs if x[0] in all_rated]
return [x for x in recs if x[0] not in all_rated]
def similar_users(reviews, n_sims=30):
"""Get similar users based on reviews."""
vec = clf.infer_vector(reviews)
sims = clf.docvecs.most_similar([vec], topn=n_sims)
return [x[0] for x in sims]
def hidden_gems(sims, max_votes=10000, n=10):
"""Finds hidden gems (highly rated but unpopular).
Parameters
----------
sims : list
list of similar users.
max_votes : int
max number of votes (ratings) for movies to be included.
n : int
max number of results.
Returns
-------
List of recommendations as tuples:
(Title, Year, URL, # Votes, Avg. Rating, User Rating, Reviewer, Review)
"""
simset = tuple(sims)
hidden_query = f"""
SELECT m.primary_title, m.start_year, m.movie_id mid,
ra.num_votes num, ra.average_rating avgr,
r.user_rating taste, r.username,
r.review_text txt
FROM reviews r
JOIN movies m ON r.movie_id = m.movie_id
JOIN ratings ra ON r.movie_id = ra.movie_id
WHERE username IN {simset}
AND user_rating BETWEEN 8 AND 10
AND ra.average_rating BETWEEN 7 AND 10
AND ra.num_votes BETWEEN 1000 AND {max_votes}
ORDER BY ra.average_rating DESC
LIMIT {n}
"""
self.cursor_dog.execute(hidden_query)
try:
hidden_recs = self.cursor_dog.fetchall()
hidden_recs = [list(x) for x in hidden_recs]
for i in hidden_recs:
i.append(i[2]) # add ID to the end
i[2] = f"https://www.imdb.com/title/tt{i[2]}/" # add URL
hidden_recs = [tuple(x) for x in hidden_recs]
except Exception as e:
print(e)
hidden_recs = [("No hidden gems found! Better luck next time.",
None, None, None, None, None, None, None, None)]
return hidden_recs
def cult_movies(sims, n=10):
"""Takes a list of similar users to get cult movies (considered
underrated by similar users).
Parameters
----------
sims : list
list of similar users.
n : int
max number of results.
Returns
-------
List of recommendations as tuples:
(Title, Year, URL, # Votes, Avg. Rating, User Rating, Reviewer, Review)
"""
simset = tuple(sims)
cult_query = f"""
SELECT m.primary_title, m.start_year, m.movie_id mid,
ra.num_votes num, ra.average_rating avgr,
r.user_rating taste, r.username,
r.review_text txt
FROM reviews r
JOIN movies m ON r.movie_id = m.movie_id
JOIN ratings ra ON r.movie_id = ra.movie_id
WHERE username IN {simset}
AND user_rating BETWEEN 7 AND 10
AND user_rating BETWEEN 6 AND 10
AND user_rating >= (ra.average_rating + 3)
ORDER BY user_rating DESC
LIMIT {n}
"""
self.cursor_dog.execute(cult_query)
try:
cult_recs = self.cursor_dog.fetchall()
cult_recs = [list(x) for x in cult_recs]
for i in cult_recs:
i.append(i[2]) # add ID to the end
i[2] = f"https://www.imdb.com/title/tt{i[2]}/" # add URL
cult_recs = [tuple(x) for x in cult_recs]
except Exception as e:
print(e)
cult_recs = [("No cult movies found! Better luck next time.",
None, None, None, None, None, None, None, None)]
return cult_recs
sims = similar_users(reviews, n_sims=100)
cult_recs = cult_movies(sims, n=n/2)
hidden_gems = hidden_gems(sims, n=n/2)
return [cult_recs, hidden_gems]
# import user Letterboxd data (IMDb does not export user reviews)
reviews_df = pd.read_csv('reviews.csv')
# prep user data
reviews = prep_reviews(reviews_df)
print(len(reviews))
import numpy
numpy.version.version
r = r2v_Recommender('trained_models/r2v_Botanist_v1.1000.5.model')
r.connect_db()
predictions = r.predict(reviews)
predictions
###Output
_____no_output_____ |
Tuto-GUDHI-ConfRegions-PersDiag-datapoints.ipynb | ###Markdown
TDA with Python using the Gudhi Library Confidence regions for persistence diagrams : data points
###Code
import persistence_statistics as ps
import pandas as pd
import numpy as np
import pickle as pickle
import gudhi as gd
import seaborn as sbs
from scipy.spatial import distance_matrix
from pylab import *
###Output
_____no_output_____
###Markdown
Introduction In this tutorial, we introduce confidence regions for persistence diagrams built on a set of data points. We present the subsampling approach of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729). An alternative method is the bottleneck bootstrap method introduced in [Chazal et al. 2018](http://www.jmlr.org/papers/v18/15-484.html) and presented in this [notebook](Tuto-GUDHI-ConfRegions-PersDiag-BottleneckBootstrap.ipynb). See [this notebook](Tuto-GUDHI-persistence-diagrams.ipynb) for an introduction to persistence diagrams with Gudhi. For many applications of persistent homology, we observe topological features close to the diagonal. Since they correspond to topological structures that die very soon after they appear in the filtration, these points are generally considered as "topological noise". Confidence regions for persistence diagrams provide a rigorous framework for this idea, that is, for selecting the significant topological features in a persistence diagram. We use the bottleneck distance $d_b$ to define confidence regions. We see point clouds as random variables. Under this approach, persistence diagrams are also seen as random quantities. Confidence regions for persistence diagrams for point cloud data in $\mathbb R^d$ We introduce the method for a simulated dataset.
###Code
U1 = np.random.uniform(0,2 * pi,size= 1000)
V1 = np.array([[0.35 * cos(u) +0.02*np.random.uniform(-1,1) ,
0.35 *sin(u)+0.02*np.random.uniform(-1,1)] for u in U1])
U2 = np.random.uniform(0,2 * pi,size= 2000)
V2 = np.array([[0.7* cos(u) +0.02*np.random.uniform(-1,1) ,
0.7*sin(u)+0.02*np.random.uniform(-1,1)] for u in U2])
W = np.concatenate((V1,V2), axis=0)
plt.scatter(W[:,0],W[:,1],s=0.1);
###Output
_____no_output_____
###Markdown
Subsampling approach Let $\mathbb X$ and $\mathbb Y$ be two compact sets. For the filtrations given below, persistent homology is stable with respect to Hausdorff perturbations:$$d_b\left( Dgm \left(Filt(\mathbb X) \right) , Dgm \left( Filt(\mathbb Y) \right)\right)\leq C_{Filt} Haus \left(\mathbb X, \mathbb Y \right)$$ The previous inequality is valid for the following Gudhi filtrations: - for the Rips complex filtration with $C_{Rips} = 2$, - for the $\sqrt{alpha}$-complexes filtration (see further) with $C_{Alpha}= 1$. Following [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729) we derive confidence sets for persistence diagrams (for $d_b$) from confidence sets for compact sets (for $Haus$). Let $\mathbb X_n$ be a sample from a distribution $P$ with compact support $\mathbb X$. The aim is to find a parameter $c_\alpha$ such that$$ P ( Haus(\mathbb X_n, \mathbb X) \leq c_\alpha) \geq 1-\alpha .$$The confidence set $\mathcal C$ we consider is the subset of all persistence diagrams whose bottleneck distance to $Dgm \left(Filt(\mathbb X_n) \right) $ is less than $d_\alpha$:$$ \left\{ Diag \: | \: d_b \left( Diag , Dgm \left(Filt(\mathbb X_n) \right) \right) \leq d_\alpha \right\}, $$with $$ d_\alpha = C_{Filt} c_\alpha .$$The `hausd_interval` function from the `persistence_statistics` module implements the subsampling method of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729); it outputs an estimate $\hat c_\alpha$ of $c_\alpha$. By default a multiprocessing computation is applied.
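For intuition, here is a rough sketch of the subsampling idea behind `hausd_interval` (this is not the module's actual implementation; the number of repetitions and the use of a KD-tree are assumptions made for illustration):

```python
from scipy.spatial import cKDTree

def hausd_subsampling_sketch(data, level=0.90, m=2500, n_rep=100):
    """Rough sketch only: estimate c_alpha as a quantile of Hausdorff distances
    between random subsamples of size m and the full point cloud."""
    dists = []
    for _ in range(n_rep):
        idx = np.random.choice(len(data), size=m, replace=False)
        sub = data[idx]
        # sub is contained in data, so Haus(sub, data) is the largest distance
        # from a data point to its nearest subsample point.
        d, _ = cKDTree(sub).query(data)
        dists.append(d.max())
    return np.quantile(dists, level)
```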
###Code
hatc = ps.hausd_interval(data=W,level = 0.90, m = 2500)
print(hatc)
###Output
0.05642562098567528
###Markdown
Stability and confidence region for the $\sqrt{alpha}$-filtration When computing confidence regions for alpha complexes, we need to be careful with the scale of values of the filtration because the filtration value of each simplex is computed as the square of the circumradius of the simplex (if the circumsphere is empty).
###Code
Alpha_complex_W = gd.AlphaComplex(points = W)
Alpha_simplex_tree_W = Alpha_complex_W.create_simplex_tree()
###Output
_____no_output_____
###Markdown
We change the filtration value of each simplex by taking the square root of the filtration values:
###Code
Alpha_simplex_tree_W_list = Alpha_simplex_tree_W.get_filtration()
for splx in Alpha_simplex_tree_W_list:
Alpha_simplex_tree_W.assign_filtration(splx[0],filtration= np.sqrt(splx[1]))
###Output
_____no_output_____
###Markdown
Now we can compute persistence for the rescaled $\sqrt{alpha}$ complex filtration.
###Code
pers_alpha_W= Alpha_simplex_tree_W.persistence()
gd.plot_persistence_diagram(pers_alpha_W);
###Output
_____no_output_____
###Markdown
We now define the confidence region for this persistence diagram. We have to take a band of width $\hat d_\alpha = C_{Filt} \hat c_\alpha$ to compute and plot the confidence band. The `band` parameter is the vertical height of the confidence region; it is thus twice the value of $\hat c_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm).
###Code
gd.plot_persistence_diagram(pers_alpha_W, band=2 * hatc);
###Output
_____no_output_____
###Markdown
Only the topological features above the red band are considered significant. This is how we select the main topological features here. Generally speaking, the procedure is very conservative: the band is very large and only very few topological features are seen as significant. An alternative approach is the bottleneck bootstrap method, see this [notebook](Tuto-GUDHI-ConfRegions-PersDiag-BottleneckBootstrap.ipynb). Confidence regions for persistence diagrams of filtrations based on pairwise distances The subsampling approach can also be applied when the data comes as a matrix of pairwise distances. We illustrate the procedure with the `trefoil_dist` dataset, which contains the distances between 1000 points sampled in the neighborhood of a trefoil curve.
###Code
trefoil_dist = pickle.load( open( "./datasets/trefoil_dist", "rb" ) )
###Output
_____no_output_____
###Markdown
We use again the `hausd_interval` function to infer the Hausdorff distance between the data and the support of the underlying distribution of the data.
###Code
hatc = ps.hausd_interval(trefoil_dist,pairwise_dist=True,level = 0.90, m = 900)
print(hatc)
###Output
0.396104059680682
###Markdown
Now, we define the Rips complex filtration from the matrix of pairwise distances:
###Code
skeleton_trefoil = gd.RipsComplex(distance_matrix = trefoil_dist,max_edge_length=2)
Rips_simplex_tree_trefoil = skeleton_trefoil.create_simplex_tree(max_dimension=2)
###Output
_____no_output_____
###Markdown
and we compute persistence on this filtration:
###Code
BarCodes_trefoil = Rips_simplex_tree_trefoil.persistence()
###Output
_____no_output_____
###Markdown
To define a confidence band for the persistence diagram, we have to take a band of width $\hat d_\alpha = 2 \hat c_\alpha$. The `band` parameter being the vertical height of the confidence region, it is twice the value of $\hat d_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm). So finally we take this band parameter equal to four times $\hat c_\alpha$.
###Code
gd.plot_persistence_diagram(BarCodes_trefoil,band = 4*hatc);
###Output
_____no_output_____
###Markdown
TDA with Python using the Gudhi Library Confidence regions for persistence diagrams : data points
###Code
import persistence_statistics as ps
import pandas as pd
import numpy as np
import pickle as pickle
import gudhi as gd
import seaborn as sbs
from scipy.spatial import distance_matrix
from pylab import *
###Output
_____no_output_____
###Markdown
Introduction In this tutorial, we introduce confidence regions for persistence diagrams built on a set of data points. We present the subsampling approach of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729). An alternative method is the bottleneck bootstrap method introduced in [Chazal et al. 2018](http://www.jmlr.org/papers/v18/15-484.html) and presented in this [notebook](Tuto-GUDHI-ConfRegions-PersDiag-BottleneckBootstrap.ipynb). See [this notebook](Tuto-GUDHI-persistence-diagrams.ipynb) for an introduction to persistence diagrams with Gudhi. For many applications of persistent homology, we observe topological features close to the diagonal. Since they correspond to topological structures that die very soon after they appear in the filtration, these points are generally considered as "topological noise". Confidence regions for persistence diagrams provide a rigorous framework for this idea, that is, for selecting the significant topological features in a persistence diagram. We use the bottleneck distance $d_b$ to define confidence regions. We see point clouds as random variables. Under this approach, persistence diagrams are also seen as random quantities. Confidence regions for persistence diagrams for point cloud data in $\mathbb R^d$ We introduce the method for a simulated dataset.
###Code
U1 = np.random.uniform(0,2 * pi,size= 1000)
V1 = np.array([[0.35 * cos(u) +0.02*np.random.uniform(-1,1) ,
0.35 *sin(u)+0.02*np.random.uniform(-1,1)] for u in U1])
U2 = np.random.uniform(0,2 * pi,size= 2000)
V2 = np.array([[0.7* cos(u) +0.02*np.random.uniform(-1,1) ,
0.7*sin(u)+0.02*np.random.uniform(-1,1)] for u in U2])
W = np.concatenate((V1,V2), axis=0)
plt.scatter(W[:,0],W[:,1],s=0.1);
###Output
_____no_output_____
###Markdown
Subsampling approach Let $\mathbb X$ and $\mathbb Y$ be two compact sets. For the filtrations given below, persistent homology is stable with respect to Hausdorff perturbations:$$d_b\left( Dgm \left(Filt(\mathbb X) \right) , Dgm \left( Filt(\mathbb Y) \right)\right)\leq C_{Filt} Haus \left(\mathbb X, \mathbb Y \right)$$ The previous inequality is valid for the following Gudhi filtrations: - for the Rips complex filtration with $C_{Rips} = 2$, - for the $\sqrt{alpha}$-complexes filtration (see further) with $C_{Alpha}= 1$. Following [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729) we derive confidence sets for persistence diagrams (for $d_b$) from confidence sets for compact sets (for $Haus$). Let $\mathbb X_n$ be a sample from a distribution $P$ with compact support $\mathbb X$. The aim is to find a parameter $c_\alpha$ such that$$ P ( Haus(\mathbb X_n, \mathbb X) \leq c_\alpha) \geq 1-\alpha .$$The confidence set $\mathcal C$ we consider is the subset of all persistence diagrams whose bottleneck distance to $Dgm \left(Filt(\mathbb X_n) \right) $ is less than $d_\alpha$:$$ \left\{ Diag \: | \: d_b \left( Diag , Dgm \left(Filt(\mathbb X_n) \right) \right) \leq d_\alpha \right\}, $$with $$ d_\alpha = C_{Filt} c_\alpha .$$The `hausd_interval` function from the `persistence_statistics` module implements the subsampling method of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729); it outputs an estimate $\hat c_\alpha$ of $c_\alpha$. By default a multiprocessing computation is applied.
###Code
hatc = ps.hausd_interval(data=W,level = 0.90, m = 2500)
print(hatc)
###Output
0.053896359713091466
###Markdown
Stability and confidence region for the $\sqrt{alpha}$-filtration When computing confidence regions for alpha complexes, we need to be careful with the scale of values of the filtration because the filtration value of each simplex is computed as the square of the circumradius of the simplex (if the circumsphere is empty).
###Code
Alpha_complex_W = gd.AlphaComplex(points = W)
Alpha_simplex_tree_W = Alpha_complex_W.create_simplex_tree()
###Output
_____no_output_____
###Markdown
We change the filtration value of each simplex by taking the square root of the filtration values:
###Code
Alpha_simplex_tree_W_list = Alpha_simplex_tree_W.get_filtration()
for splx in Alpha_simplex_tree_W_list:
Alpha_simplex_tree_W.assign_filtration(splx[0],filtration= np.sqrt(splx[1]))
###Output
_____no_output_____
###Markdown
Now we can compute persistence for the rescaled $\sqrt{alpha}$ complex filtration.
###Code
pers_alpha_W= Alpha_simplex_tree_W.persistence()
gd.plot_persistence_diagram(pers_alpha_W);
###Output
_____no_output_____
###Markdown
We now define the confidence region for this persistence diagram. We have to take a band of width $\hat d_\alpha = C_{Filt} \hat c_\alpha$ to compute and plot the confidence band. The `band` parameter is the vertical height of the confidence region; it is thus twice the value of $\hat c_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm).
###Code
gd.plot_persistence_diagram(pers_alpha_W, band=2 * hatc);
###Output
_____no_output_____
###Markdown
Only the topological features above the red band are considered as significant. Here we select the main topological features by this way.Generally speaking, the procedure is very conservative: the band is very large and only very few topological features are seen as significant. An alternative approach is the bottleneck bootstrap method, see this [notebook](Tuto-GUDHI-ConfRegions-PersDiag-BottleneckBootstrap.ipynb). Confidence regions for persistence diagrams of filtrations based on pairwise distances The subsampling approach can be also applied when data come has a matrix of pairwise distances. We illustrate the procedure with the `trefoil_dist` dataset which contains the distances between 1000 points sampled in the neighborhood of a trefoil curve.
###Code
trefoil_dist = pickle.load( open( "./datasets/trefoil_dist", "rb" ) )
###Output
_____no_output_____
###Markdown
We use again the `hausd_interval` function to infer the Hausdorff distance between the data and the support of the underlying distribution of the data.
###Code
hatc = ps.hausd_interval(trefoil_dist,pairwise_dist=True,level = 0.90, m = 900)
print(hatc)
###Output
0.396104059680682
###Markdown
Now, we define the Rips complex filtration from the matrix of pairwise distances:
###Code
skeleton_trefoil = gd.RipsComplex(distance_matrix = trefoil_dist,max_edge_length=2)
Rips_simplex_tree_trefoil = skeleton_trefoil.create_simplex_tree(max_dimension=2)
###Output
_____no_output_____
###Markdown
and we compute persistence on this filtration:
###Code
BarCodes_trefoil = Rips_simplex_tree_trefoil.persistence()
###Output
_____no_output_____
###Markdown
To define a confidence band for the persistence diagram, we have to take a band of width $\hat d_\alpha = 2 \hat c_\alpha$. The `band` parameter being the vertical height of the confidence region, it is twice the value of $\hat d_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm). So finally we take this band parameter equal to four times $\hat c_\alpha$.
###Code
gd.plot_persistence_diagram(BarCodes_trefoil,band = 4*hatc);
###Output
_____no_output_____
###Markdown
TDA with Python using the Gudhi Library Confidence regions for persistence diagrams : data points
###Code
import persistence_statistics as ps
import pandas as pd
import numpy as np
import pickle as pickle
import gudhi as gd
import seaborn as sbs
from scipy.spatial import distance_matrix
from pylab import *
###Output
_____no_output_____
###Markdown
Introduction In this tutorial, we introduce confidence regions for persistence diagrams built on a set of data points. We present the subsampling approach of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729). See [this notebook](https://github.com/GUDHI/TDA-tutorial/blob/master/Tuto-GUDHI-persistence-diagrams.ipynb) for an introduction to persistence diagrams with Gudhi. For many applications of persistent homology, we observe topological features close to the diagonal. Since they correspond to topological structures that die very soon after they appear in the filtration, these points are generally considered as "topological noise". Confidence regions for persistence diagrams provide a rigorous framework for this idea, that is, for selecting the significant topological features in a persistence diagram. We use the bottleneck distance $d_b$ to define confidence regions. We see point clouds as random variables. Under this approach, persistence diagrams are also seen as random quantities. Confidence regions for persistence diagrams for point cloud data in $\mathbb R^d$ We introduce the method for a simulated dataset.
###Code
U1 = np.random.uniform(0,2 * pi,size= 1000)
V1 = np.array([[0.35 * cos(u) +0.02*np.random.uniform(-1,1) ,
0.35 *sin(u)+0.02*np.random.uniform(-1,1)] for u in U1])
U2 = np.random.uniform(0,2 * pi,size= 2000)
V2 = np.array([[0.7* cos(u) +0.02*np.random.uniform(-1,1) ,
0.7*sin(u)+0.02*np.random.uniform(-1,1)] for u in U2])
W = np.concatenate((V1,V2), axis=0)
plt.scatter(W[:,0],W[:,1],s=0.1);
###Output
_____no_output_____
###Markdown
Subsampling approach Let $\mathbb X$ and $\mathbb Y$ be two compact sets. For the filtrations given below, persistent homology is stable with respect to Hausdorff perturbations:$$d_b\left( Dgm \left(Filt(\mathbb X) \right) , Dgm \left( Filt(\mathbb Y) \right)\right)\leq C_{Filt} Haus \left(\mathbb X, \mathbb Y \right)$$ The previous inequality is valid for the following Gudhi filtrations: - for the Rips complex filtration with $C_{Rips} = 2$, - for the $\sqrt{alpha}$-complexes filtration (see further) with $C_{Alpha}= 1$. Following [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729) we derive confidence sets for persistence diagrams (for $d_b$) from confidence sets for compact sets (for $Haus$). Let $\mathbb X_n$ be a sample from a distribution $P$ with compact support $\mathbb X$. The aim is to find a parameter $c_\alpha$ such that$$ P ( Haus(\mathbb X_n, \mathbb X) \leq c_\alpha) \geq 1-\alpha .$$The confidence set $\mathcal C$ we consider is the subset of all persistence diagrams whose bottleneck distance to $Dgm \left(Filt(\mathbb X_n) \right) $ is less than $d_\alpha$:$$ \left\{ Diag \: | \: d_b \left( Diag , Dgm \left(Filt(\mathbb X_n) \right) \right) \leq d_\alpha \right\}, $$with $$ d_\alpha = C_{Filt} c_\alpha .$$The `hausd_interval` function from the `persistence_statistics` module implements the subsampling method of [Fasy et al. 2014 AoS](https://projecteuclid.org/download/pdfview_1/euclid.aos/1413810729); it outputs an estimate $\hat c_\alpha$ of $c_\alpha$. By default a multiprocessing computation is applied.
###Code
hatc = ps.hausd_interval(data=W,level = 0.90, m = 2500)
print(hatc)
###Output
0.053896359713091466
###Markdown
Stability and confidence region for the $\sqrt{alpha}$-filtration When computing confidence regions for alpha complexes, we need to be careful with the scale of values of the filtration because the filtration value of each simplex is computed as the square of the circumradius of the simplex (if the circumsphere is empty).
###Code
Alpha_complex_W = gd.AlphaComplex(points = W)
Alpha_simplex_tree_W = Alpha_complex_W.create_simplex_tree()
###Output
_____no_output_____
###Markdown
We change the filtration value of each simplex by taking the square root of the filtration values:
###Code
Alpha_simplex_tree_W_list = Alpha_simplex_tree_W.get_filtration()
for splx in Alpha_simplex_tree_W_list:
Alpha_simplex_tree_W.assign_filtration(splx[0],filtration= np.sqrt(splx[1]))
###Output
_____no_output_____
###Markdown
Now we can compute persistence for the rescaled $\sqrt{\alpha}$-complex filtration.
###Code
pers_alpha_W= Alpha_simplex_tree_W.persistence()
gd.plot_persistence_diagram(pers_alpha_W);
###Output
_____no_output_____
###Markdown
We now define the confidence region for this persistence diagram. We have to take a band of width $\hat d_\alpha = C_{Filt} \, \hat c_\alpha$ to compute and plot the confidence band; for the $\sqrt{\alpha}$-filtration, $C_{Alpha} = 1$, so $\hat d_\alpha = \hat c_\alpha$. The `band` parameter is the vertical height of the confidence region, so it is twice the value of $\hat c_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm).
###Code
gd.plot_persistence_diagram(pers_alpha_W, band=2 * hatc);
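# Optional sketch (added here, not part of the original tutorial): count the features whose
# persistence exceeds the band width 2*hatc, i.e. the points lying above the red band
# that the text describes as significant.
significant = [(dim, (b, d)) for (dim, (b, d)) in pers_alpha_W if d - b > 2 * hatc]
print(len(significant), "features above the confidence band")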
###Output
_____no_output_____
###Markdown
Only the topological features above the red band are considered significant; in this way we select the main topological features. Generally speaking, the procedure is very conservative: the band is very large and only very few topological features are seen as significant. Confidence regions for persistence diagrams of filtrations based on pairwise distances The subsampling approach can also be applied when the data comes as a matrix of pairwise distances. We illustrate the procedure with the `trefoil_dist` dataset, which contains the distances between 1000 points sampled in the neighborhood of a trefoil curve.
###Code
trefoil_dist = pickle.load( open( "./datasets/trefoil_dist", "rb" ) )
###Output
_____no_output_____
###Markdown
We use again the `hausd_interval` function to infer the Hausdorff distance between the data and the support of the underlying distribution of the data.
###Code
hatc = ps.hausd_interval(trefoil_dist,pairwise_dist=True,level = 0.90, m = 900)
print(hatc)
###Output
0.396104059680682
###Markdown
Now, we define the Rips complex filtration from the matrix of pairwise distances:
###Code
skeleton_trefoil = gd.RipsComplex(distance_matrix = trefoil_dist,max_edge_length=2)
Rips_simplex_tree_trefoil = skeleton_trefoil.create_simplex_tree(max_dimension=2)
###Output
_____no_output_____
###Markdown
and we compute persistence on this filtration:
###Code
BarCodes_trefoil = Rips_simplex_tree_trefoil.persistence()
###Output
_____no_output_____
###Markdown
To define a confidence band for the persistence diagram, we have to take a band of width $\hat d_\alpha = C_{Rips} \, \hat c_\alpha = 2 \hat c_\alpha$. Since the `band` parameter is the vertical height of the confidence region, it is twice the value of $\hat d_\alpha$ (because the bottleneck distance is based on the $\ell_\infty$ norm). So finally we set the band parameter equal to four times $\hat c_\alpha$.
###Code
gd.plot_persistence_diagram(BarCodes_trefoil,band = 4*hatc);
###Output
_____no_output_____ |
09 Pandas Teil 2_alt/Daten klassifizieren.ipynb | ###Markdown
Classifying data **Contents:** Loading and classifying messy data **Required skills:** First steps with Pandas **Learning goals:** - Check data for integrity - Simple cleaning of the most obvious errors - A few string functions - Classifying a: getting to know df.apply - Classifying b: getting to know df.merge - Plotting level 2: multiple series The example The P3 database of the Swiss National Science Foundation (SNF). It contains all research projects that have received SNF funding since 1975. Source and documentation: http://p3.snf.ch/Pages/DataAndDocumentation.aspx Data file: http://p3.snf.ch/P3Export/P3_GrantExport.csv Save the file in a suitable location, e.g. in the subfolder `dataprojects/SNF/` Preparation This time we load not only the Pandas module but also NumPy. *NumPy is the fundamental package for scientific computing with Python: http://www.numpy.org/*
###Code
import pandas as pd
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Loading the data file As usual ...
###Code
path = 'dataprojects/SNF/P3_GrantExport.csv'
df = pd.read_csv(path, error_bad_lines=False)
###Output
_____no_output_____
###Markdown
**Oops:** What happened here?
###Code
df.head(3)
###Output
_____no_output_____
###Markdown
Apparently the individual fields here are separated not by a comma but by a semicolon. We need to adjust our command:
###Code
df = pd.read_csv(path, delimiter=';')
df.head(2)
###Output
_____no_output_____
###Markdown
Better! Let's take a closer look.
###Code
df.shape
df.dtypes
df.describe()
###Output
_____no_output_____
###Markdown
Apparently there are some columns that are not yet formatted with the correct data type, e.g. "Approved Amount". The problem is: as long as we don't have the correct data types, some analyses won't work.
###Code
# For example, this one:
df['Approved Amount'].mean()
###Output
_____no_output_____
###Markdown
Actually this would be very interesting information: how much money did the projects receive on average, at most, at least, etc. Removing invalid values We therefore need to clean this column somehow so that Pandas can do the calculations for us. To find out what the problem might be, `value_counts()` is a fairly simple option.
###Code
df['Approved Amount'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
The problem is in the last row: 12,070 entries contain "`data not included in P3`". We can solve this in several ways: Option 1: Replace values with NaN We now use the `replace()` function to selectively replace all instances of "`data not included in P3`" - namely with NaN:
###Code
df['Approved Amount'] = df['Approved Amount'].replace('data not included in P3', np.nan)
###Output
_____no_output_____
###Markdown
The entries have now been converted to NaN (and are therefore no longer displayed by default).
###Code
df['Approved Amount'].value_counts().sort_index()
###Output
_____no_output_____
###Markdown
However, we have a problem: the data type of "Approved Amount" is still "object"...
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We are forced to perform one more data conversion: with `astype()`
###Code
df['Approved Amount'] = df['Approved Amount'].astype(float)
###Output
_____no_output_____
###Markdown
Finally the data type is correct:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
And we can run our analysis:
###Code
# Answer
df['Approved Amount'].mean()
###Output
_____no_output_____
###Markdown
Option 2: Read the file again with a special option To save ourselves a few steps, we simply read the file in again. The option is called `na_values=` (na = Not Available, gets replaced by NaN = Not a Number, roughly).
###Code
df = pd.read_csv(path, delimiter=';', na_values='data not included in P3')
###Output
_____no_output_____
###Markdown
Tadaaa!
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
**By the way:** To check what the invalid entries are actually about, we can use `.isnull()`:
###Code
df[df['Approved Amount'].isnull()]
###Output
_____no_output_____
###Markdown
So this seems to be a special funding instrument ("Fellowships"). **Quiz:** What was the maximum amount a project received? The minimum? The median?
###Code
# Answer: maximum
# Answer: minimum
# Answer: median
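# A possible solution sketch (added; not the official course answer):
print(df['Approved Amount'].max())
print(df['Approved Amount'].min())
print(df['Approved Amount'].median())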
###Output
_____no_output_____
###Markdown
**Quiz:** Find the fifty projects that received the most money. Which universities appear most often among them?
###Code
# Answer
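# A possible solution sketch (added): take the 50 largest grants, then count the universities
top50 = df.sort_values('Approved Amount', ascending=False).head(50)
top50['University'].value_counts()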
###Output
_____no_output_____
###Markdown
**Quiz:** Through which funding instrument ("Funding Instrument Hierarchy") was the most money awarded in total?
###Code
# Answer
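# A possible solution sketch (added): total amount per funding instrument, largest first
df.groupby('Funding Instrument Hierarchy')['Approved Amount'].sum().sort_values(ascending=False).head(5)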
###Output
_____no_output_____
###Markdown
**Quiz:** Display the distribution of all approved amounts in a histogram!
###Code
# Answer
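# A possible solution sketch (added): histogram of all approved amounts
df['Approved Amount'].plot(kind='hist', bins=50)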
###Output
_____no_output_____
###Markdown
**Quiz:** In which countries were the awarded amounts largest on average? Show the top ten.
###Code
# Answer
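# A possible solution sketch (added): average amount per country, top ten
df.groupby('Institution Country')['Approved Amount'].mean().sort_values(ascending=False).head(10)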
# Time for a break ...
###Output
_____no_output_____
###Markdown
Categorizing values Let's say we are interested in the institutions in Switzerland that have received money from the SNF. First we create a dataframe that contains only these institutions:
###Code
df_swiss = df[df['Institution Country'] == 'Switzerland']
###Output
_____no_output_____
###Markdown
And then display a list of all universities that appear in this dataframe:
###Code
df_swiss['University'].unique()
###Output
_____no_output_____
###Markdown
It quickly becomes clear: this list contains not only universities but also universities of applied sciences and other institutions. How do we proceed if we want to classify the data by these types? In other words, e.g. compute separate averages for universities, universities of applied sciences, etc.? Method 1: contains, replace The very simplest (and not very advisable) option is to simply check whether the word "University" appears in a given entry. We can use the `str.contains()` function for this - the result is a list of True/False values that we can use further...
###Code
df_swiss['University'].str.contains('University')
###Output
_____no_output_____
###Markdown
For example like this:
###Code
df_swiss['Institution Type'] = df_swiss['University'].str.contains('University')
###Output
_____no_output_____
###Markdown
... or maybe not quite like this :-) The reason for the warning above is: we are working on a slice of a dataframe, which can (but does not have to) cause problems. To be safe: use `.copy()` to create a physical copy of the dataframe in memory
###Code
df_ch = df_swiss.copy()
df_ch['Institution Type'] = df_ch['University'].str.contains('University')
df_ch.head(3)
###Output
_____no_output_____
###Markdown
Now we can replace the True/False values with generic entries. For this there is `replace()`:
###Code
df_ch['Institution Type'] = df_ch['Institution Type'].replace(True, 'University')
df_ch['Institution Type'] = df_ch['Institution Type'].replace(False, 'Other')
df_ch.head(3)
###Output
_____no_output_____
###Markdown
We can now calculate, for example, how much money the universities and the other institutions received in total:
###Code
df_ch.groupby('Institution Type')['Approved Amount'].sum()
###Output
_____no_output_____
###Markdown
But as mentioned, there are better ways. (E.g. we have not accounted for entries like "Université".) Method 2: apply, isin Also not really great, but at least better than before: we write our own function for classifying universities. We can make this function as complicated as we want. Here we deliberately keep it simple.
###Code
def categorize_institution(institution):
    # Is an institution a university? Here is a list of words we search for.
    university_names = ["University", "Universität", "Université"]
    # Go through the list...
    for university_name in university_names:
        # Does the word appear more than zero times in the string we want to classify?
        if str(institution).count(university_name) > 0:
            # Then it is a university
            return "University"
    # otherwise not
    return "Other"
###Output
_____no_output_____
###Markdown
We test the function...
###Code
categorize_institution("University of Zurich")
categorize_institution("Fachhochschule Nordwestschweiz")
###Output
_____no_output_____
###Markdown
... and apply it to the "University" column.
###Code
df_ch['University'].apply(categorize_institution)
###Output
_____no_output_____
###Markdown
The result now goes into the "Institution Type" column
###Code
df_ch['Institution Type'] = df_ch['University'].apply(categorize_institution)
df_ch.head(3)
###Output
_____no_output_____
###Markdown
We have now gone through `df.apply()` quite quickly. No problem, we will come back to it later. By the way, the function can also be applied to entire rows, more on that later. **Quiz:** Based on our new classification: draw a bar chart showing the average award amount for universities and non-universities.
###Code
# Answer
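# A possible solution sketch (added): average amount per institution type as a bar chart
df_ch.groupby('Institution Type')['Approved Amount'].mean().plot(kind='bar')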
###Output
_____no_output_____
###Markdown
Method 3: merge And now for the cleanest way to classify the institutions in this table: by hand. We pull the list of unique values once more, this time directly as a dataframe:
###Code
df_unique = pd.DataFrame(df_ch['University'].unique())
df_unique
###Output
_____no_output_____
###Markdown
Because it is easier, we edit the list in an external program... using the `to_csv()` function
###Code
df_unique.to_csv('dataprojects/SNF/klassifizieren.csv', index=False)
###Output
_____no_output_____
###Markdown
... edit it in Excel or elsewhere, and load it again: (I have already prepared this here)
###Code
df_unique_edited = pd.read_csv('dataprojects/SNF/klassifiziert.csv')
df_unique_edited
###Output
_____no_output_____
###Markdown
We now have two tables: `df_ch` (the large data table) and `df_unique` (the classifications). We can now join these two tables, using the `merge()` function
###Code
df_ch_classified = df_ch.merge(df_unique_edited, how='left', left_on='University', right_on='University')
df_ch_classified
###Output
_____no_output_____
###Markdown
Die Spalte "New Type" wurde nun zur Tabelle "df_ch" hinzugefügt, und zwar genau dort, wo es zum Eintrag in "University" passt!Schauen wir kurz, wie viele Einträge es von welchem Typ hat:
###Code
df_ch_classified['New Type'].value_counts()
###Output
_____no_output_____
###Markdown
Did we really not miss anything?
###Code
df_ch_classified['New Type'].value_counts(dropna=False)
###Output
_____no_output_____
###Markdown
**Oops!** There is one missing entry. What kind of entry is it?
###Code
df_ch_classified[df_ch_classified['New Type'].isnull()]
###Output
_____no_output_____
###Markdown
Looks like a fundamentally valid project. We simply classify this entry as "Other":
###Code
df_ch_classified.loc[24179, "New Type"] = "Other"
df_ch_classified.loc[24179]
###Output
_____no_output_____
###Markdown
**Quiz:** Categorize the entries by the country of origin of the university (for this, create a new field "Country Type" with the entries "Switzerland" or "Other"). How many projects come from Switzerland, how many from other countries?**Attention** Now switch back to the original dataframe, "df"
###Code
# Create a new, empty field 'Country Type'
# Country Type = 'Switzerland' if Switzerland
# Country Type = 'Other' otherwise
# Analysis by Country Type
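# A possible solution sketch (added), assuming 'Institution Country' identifies the country:
df['Country Type'] = 'Other'
df.loc[df['Institution Country'] == 'Switzerland', 'Country Type'] = 'Switzerland'
df['Country Type'].value_counts()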
###Output
_____no_output_____
###Markdown
Plotting level 2 Now we want to show how the projects have developed over time in Switzerland and in the other countries. So the goal is to draw two different series on one chart. To do this, we now use a somewhat lazy trick to generate a new column with the year (there is actually a special data type for this, but we will look at it another time).
###Code
df['Year'] = df['Start Date'].str[6:]
###Output
_____no_output_____
###Markdown
Check whether that worked reasonably well...
###Code
df['Year'].value_counts(dropna=False).sort_index()
###Output
_____no_output_____
###Markdown
Now we plot the total of the awarded funds by year. First for Switzerland ...
###Code
df[df['Country Type'] == "Switzerland"].groupby('Year')['Approved Amount'].sum().plot(figsize=(12,6))
###Output
_____no_output_____
###Markdown
... then for the other countries ...
###Code
df[df['Country Type'] == "Other"].groupby('Year')['Approved Amount'].sum().plot(figsize=(12,6))
###Output
_____no_output_____
###Markdown
... and finally for both country types: Method 1: Draw two different lines The safest way to display several curves on the same chart is `ax=`. We first save one plot as "chart1" and then tell the second plot to join "chart1".
###Code
chart1 = df[df['Country Type'] == "Switzerland"].groupby('Year')['Approved Amount'].sum().plot(figsize=(12,6))
df[df['Country Type'] == "Other"].groupby('Year')['Approved Amount'].sum().plot(ax=chart1)
###Output
_____no_output_____
###Markdown
Method 2: Double groupby, unstack In this case, however, there is an even more elegant option, namely with `groupby()`. This method works not only with one level but also with two. The summation is done over the years ("Year") on the one hand and for the individual country types ("Country Type") on the other:
###Code
df.groupby(['Year', 'Country Type'])['Approved Amount'].sum()
###Output
_____no_output_____
###Markdown
To plot these values, however, we have to provide the table to Pandas in a slightly different form: in wide format (more on this later). We can use the `unstack()` function for this:
###Code
df.groupby(['Year', 'Country Type'])['Approved Amount'].sum().unstack()
###Output
_____no_output_____
###Markdown
Last step: `plot()`
###Code
df.groupby(['Year', 'Country Type'])['Approved Amount'].sum().unstack().plot(figsize=(12,6))
###Output
_____no_output_____
###Markdown
**Quiz:** Plot the average amount that universities, universities of applied sciences, hospitals and other institutions have received over the years - all on the same chart. Use the dataframe "df_ch_classified" again for this - note that you first have to create a year column again.
###Code
# Create the 'Year' column in df_ch_classified
df_ch_classified['Year'] = df_ch_classified['Start Date'].str[6:]
# Table, grouped by year and New Type
df_ch_classified.groupby(['Year', 'New Type'])['Approved Amount'].mean()
# Plot
df_ch_classified.groupby(['Year', 'New Type'])['Approved Amount'].mean().unstack().plot(figsize=(12,6))
###Output
_____no_output_____
###Markdown
**Final question:** Have we already found a story? If so, what could it be? If not, what further analyses could be done based on this data?
###Code
# Answer in text form...
# For example: analyzing the profiles of individual researchers.
df['Responsible Applicant'].value_counts()
df[df['Responsible Applicant'] == 'Güntherodt Hans-Joachim'].groupby('Year')['Approved Amount'].sum().plot()
###Output
_____no_output_____
###Markdown
Exercise We now classify the projects by research discipline and analyze which disciplines received how much money at what time. **Step 1:** We create a list of the unique entries in the "Discipline Name" field and save it as a csv file. (Work with the dataframe "df_ch")
###Code
# Create a dataframe from the unique discipline names
# Save the dataframe as a csv file
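# A possible solution sketch (added); the output file name is an assumption:
df_disciplines = pd.DataFrame(df_ch['Discipline Name'].unique())
df_disciplines.to_csv('dataprojects/SNF/disziplinen.csv', index=False)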
###Output
_____no_output_____
###Markdown
**Step 2:** We edit the csv file externally and classify as we see fit
###Code
# edit externally...
###Output
_____no_output_____
###Markdown
**Step 3:** We add the classification of the disciplines to our data list (work with df_ch)
###Code
# Read in the edited csv file
# Join the dataframe "df_ch" with the classification and store it as a new dataframe df_ch_classified
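# A possible solution sketch (added), assuming the externally edited file has the columns
# 'Discipline Name' and 'Discipline Type'; the file name is an assumption:
df_disciplines_edited = pd.read_csv('dataprojects/SNF/disziplinen_klassifiziert.csv')
df_ch_classified = df_ch.merge(df_disciplines_edited, how='left', on='Discipline Name')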
###Output
_____no_output_____
###Markdown
**Step 4:** Analyses - How many projects of each discipline type were carried out?
###Code
df_ch_classified['Discipline Type'].value_counts()
###Output
_____no_output_____
###Markdown
- Which discipline types received the most money?
###Code
df_ch_classified.groupby('Discipline Type')['Approved Amount'].sum()
###Output
_____no_output_____
###Markdown
- How much do projects of the different discipline types cost on average? In the median?
###Code
df_ch_classified.groupby('Discipline Type')['Approved Amount'].mean()
df_ch_classified.groupby('Discipline Type')['Approved Amount'].median()
###Output
_____no_output_____
###Markdown
**Step 5:** Plot of an analysis How much money did the different discipline types receive in total over the years?
###Code
# We have to apply the year-column trick to df_ch_classified again
# Show the table: sum of the awarded funds, grouped by year and discipline type
# Plot as a line chart
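# A possible solution sketch (added), assuming the classification column is 'Discipline Type':
df_ch_classified['Year'] = df_ch_classified['Start Date'].str[6:]
df_ch_classified.groupby(['Year', 'Discipline Type'])['Approved Amount'].sum()
df_ch_classified.groupby(['Year', 'Discipline Type'])['Approved Amount'].sum().unstack().plot(figsize=(12,6))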
###Output
_____no_output_____ |
BOW_Lemmatized_Unigrams.ipynb | ###Markdown
TODO: Figure out ravel() 1d array problem
###Code
import pandas as pd
import numpy as np
import os
from nltk import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import CountVectorizer
from __future__ import division
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
from sklearn.feature_extraction.text import TfidfVectorizer
labels = pd.read_csv('stocknews//labels.csv', header=None)
###Output
_____no_output_____
###Markdown
http://stackoverflow.com/questions/28382735/python-pandas-does-not-read-the-first-row-of-csv-file`pd.read_csv` was cutting off the first row of labels
###Code
# Confirm size of labels to make sure data loaded correctly
labels.shape
str_tokens_all = pd.read_csv('stocknews//tokens_str_all.csv', dtype=str, keep_default_na=False)
str_tokens_all.head(2)
str_tokens_all.shape
def replace_num(element):
return ' '.join([('numero' if k.isdigit() else k) for k in element.split()])
###Output
_____no_output_____
###Markdown
http://stackoverflow.com/questions/6905636/python-conditional-list-joins Instead of a digit, use `'NUMERO'` as it is resistant to stemming/lemmatizing. It's possible that headlines with numbers might contain some information. This will prevent preprocessing from discarding the information. Tag parts-of-speech and lemmatize the text
###Code
sample = str_tokens_all.iloc[0,0]
from nltk import pos_tag
from nltk.tokenize import word_tokenize
s = "This is a simple sentence"
tokens = word_tokenize(s) # Generate list of tokens
tokens_pos = pos_tag(tokens)
tokens_pos
type(tokens)
# Need to reconcile treebank and Wordnet tags
# http://stackoverflow.com/questions/15586721/wordnet-lemmatization-and-pos-tagging-in-python
from nltk.corpus import wordnet
def get_wordnet_pos(tag):
if tag.startswith('V'):
return wordnet.VERB
elif tag.startswith('N'):
return wordnet.NOUN
elif tag.startswith('J'):
return wordnet.ADJ
elif tag.startswith('R'):
return wordnet.ADV
elif tag.startswith('S'):
return wordnet.ADJ_SAT
else:
return wordnet.NOUN
# needs something there or else key error
p_o_s = {
'N' : wordnet.NOUN,
'V' : wordnet.VERB,
'J' : wordnet.ADJ,
'S' : wordnet.ADJ_SAT,
'R' : wordnet.ADV
}
part = {
'N' : 'n',
'V' : 'v',
'J' : 'a',
'S' : 's',
'R' : 'r'
}
def convert_tag(penn_tag):
'''
    convert_tag accepts the first letter of a Penn part-of-speech tag
    from nltk.pos_tag() and then uses a dict to convert it to the
appropriate WordNet tag.
'''
if penn_tag in part.keys():
return part[penn_tag]
else:
return 'n'
tokens_pos[1][1][0] in part.keys()
from nltk.stem import WordNetLemmatizer
wnl = WordNetLemmatizer()
# Simple function to tag parts of speech (POS) in a sentence and return the lemmatized words.
# However, we cannot do it all in one step; we need to POS-tag before splitting the sentence up.
def tag_and_lem(element):
sent = pos_tag(element.split()) # list of tuples [('token', 'Treebank tag')...]
return ' '.join([wnl.lemmatize(sent[k][0], convert_tag(sent[k][1][0]))for k in range(len(sent))])
#returns str
tag_and_lem(sample)
###Output
_____no_output_____
###Markdown
Tag and lemmatize entire corpus This will take some time...
###Code
# Tag and lem but not merge
lemma_str_tokens_all = str_tokens_all.applymap(tag_and_lem)
lemma_str_tokens_all.head(2)
###Output
_____no_output_____
###Markdown
Merge Cells
###Code
lemma_str_tokens_all['merged'] = lemma_str_tokens_all.iloc[:, 0:].apply(lambda x: ' '.join(x.dropna().values.tolist()), axis=1)
lemma_str_tokens_all['merged'] = lemma_str_tokens_all['merged'].apply(replace_num)
###Output
_____no_output_____
###Markdown
Split data into train and test sets
###Code
train_text = lemma_str_tokens_all.merged[0:1493] # train features
test_text = lemma_str_tokens_all.merged[1493:] # test features
train_labels = labels[0:1493].values # train labels
test_labels = labels[1493:].values; # test labels
vectorizer = TfidfVectorizer( max_features=250000, ngram_range=(1, 1), sublinear_tf=True, stop_words='english')
# Only need text, not labels
train_x = vectorizer.fit_transform( train_text )
test_x = vectorizer.transform( test_text )
train_x.shape
test_x.shape
###Output
_____no_output_____
###Markdown
Passive Aggressive Classifier
###Code
from sklearn.linear_model import PassiveAggressiveClassifier
classifier = PassiveAggressiveClassifier(n_iter=25)
train_labels.shape
train_labels.ravel()
classifier.fit(train_x, train_labels.ravel())
classifier.score(test_x, test_labels.ravel())
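# Optional sketch (added; not part of the original notebook): per-class precision/recall/F1
from sklearn.metrics import classification_report
print(classification_report(test_labels.ravel(), classifier.predict(test_x)))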
###Output
_____no_output_____
###Markdown
SGD Classifier
###Code
from sklearn.linear_model import SGDClassifier
classifier = SGDClassifier(loss='squared_loss', n_iter=8)
classifier.fit(train_x, train_labels.ravel())
classifier.score(test_x, test_labels.ravel())
###Output
_____no_output_____
###Markdown
Ridge Classifier
###Code
from sklearn.linear_model import RidgeClassifier
clf = RidgeClassifier()
clf.fit(train_x, train_labels.ravel())
clf.score(test_x, test_labels.ravel())
###Output
_____no_output_____
###Markdown
Gaussian Naive Bayes
###Code
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(train_x.toarray(), train_labels.ravel())
gnb.score(test_x.toarray(), test_labels.ravel())
###Output
_____no_output_____
###Markdown
Support Vector Classifier
###Code
from sklearn.svm import SVC
supportvc = SVC()
supportvc.fit(train_x, train_labels.ravel())
supportvc.score(test_x, test_labels.ravel())
###Output
_____no_output_____ |
intro-to-tflearn/TFLearn_Digit_Recognition_Solution.ipynb | ###Markdown
Handwritten Number Recognition with TFLearn and MNISTIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the **MNIST** data set, which consists of images of handwritten numbers and their correct labels 0-9.We'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
###Code
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
###Output
_____no_output_____
###Markdown
Retrieving training and test dataThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.Each MNIST data point has:1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image)We'll call the images, which will be the input to our neural network, **X** and their corresponding labels **Y**.We're going to want our labels as *one-hot vectors*, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened dataFor this example, we'll be using *flattened* data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
###Code
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
###Output
Extracting mnist/train-images-idx3-ubyte.gz
Extracting mnist/train-labels-idx1-ubyte.gz
Extracting mnist/t10k-images-idx3-ubyte.gz
Extracting mnist/t10k-labels-idx1-ubyte.gz
###Markdown
Visualize the training dataProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function `display_digit` will display that training image along with its corresponding label in the title.
###Code
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def display_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
display_digit(0)
###Output
_____no_output_____
###Markdown
Building the networkTFLearn lets you build the network by defining the layers in that network. For this example, you'll define:1. The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. 2. Hidden layers, which recognize patterns in data and connect the input to the output layer, and3. The output layer, which defines how the network learns and outputs a label for a given image.Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,```net = tflearn.input_data([None, 100])```would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need **784 input units**. Adding layersTo add new hidden layers, you use ```net = tflearn.fully_connected(net, n_units, activation='ReLU')```This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call, it designates the input to the hidden layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeated calling `tflearn.fully_connected(net, n_units)`. Then, to set how you train the network, use:```net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')```Again, this is passing in the network you've been building. The keywords: * `optimizer` sets the training method, here stochastic gradient descent* `learning_rate` is the learning rate* `loss` determines how the network error is calculated. In this example, with categorical cross-entropy.Finally, you put all this together to create the model with `tflearn.DNN(net)`.
###Code
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer(s)
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
# Output layer and training model
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
###Output
WARNING:tensorflow:From //anaconda3/envs/tensorflow/lib/python3.5/site-packages/tflearn/summaries.py:46 in get_summary.: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
WARNING:tensorflow:From //anaconda3/envs/tensorflow/lib/python3.5/site-packages/tflearn/summaries.py:46 in get_summary.: scalar_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.scalar. Note that tf.summary.scalar uses the node name instead of the tag. This means that TensorFlow will automatically de-duplicate summary names based on the scope they are created in. Also, passing a tensor or list of tags to a scalar summary op is no longer supported.
WARNING:tensorflow:From //anaconda3/envs/tensorflow/lib/python3.5/site-packages/tflearn/helpers/trainer.py:766 in create_summaries.: merge_summary (from tensorflow.python.ops.logging_ops) is deprecated and will be removed after 2016-11-30.
Instructions for updating:
Please switch to tf.summary.merge.
WARNING:tensorflow:VARIABLES collection name is deprecated, please use GLOBAL_VARIABLES instead; VARIABLES will be removed after 2017-03-02.
WARNING:tensorflow:From //anaconda3/envs/tensorflow/lib/python3.5/site-packages/tflearn/helpers/trainer.py:130 in __init__.: initialize_all_variables (from tensorflow.python.ops.variables) is deprecated and will be removed after 2017-03-02.
Instructions for updating:
Use `tf.global_variables_initializer` instead.
###Markdown
Training the networkNow that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1` which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively.Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
###Code
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
###Output
Training Step: 49500 | total loss: [1m[32m0.06119[0m[0m
| SGD | epoch: 100 | loss: 0.06119 - acc: 0.9824 | val_loss: 0.10663 - val_acc: 0.9705 -- iter: 49500/49500
Training Step: 49500 | total loss: [1m[32m0.06119[0m[0m
| SGD | epoch: 100 | loss: 0.06119 - acc: 0.9824 | val_loss: 0.10663 - val_acc: 0.9705 -- iter: 49500/49500
--
###Markdown
TestingAfter you're satisfied with the training output and accuracy, you can then run the network on the **test data set** to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be **higher than 95% accuracy**. Some simple models have been known to get up to 99.7% accuracy!
###Code
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
###Output
Test accuracy: 0.9704
###Markdown
Handwritten Number Recognition with TFLearn and MNISTIn this notebook, we'll be building a neural network that recognizes handwritten numbers 0-9. This kind of neural network is used in a variety of real-world applications including: recognizing phone numbers and sorting postal mail by address. To build the network, we'll be using the **MNIST** data set, which consists of images of handwritten numbers and their correct labels 0-9.We'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow to build the neural network. We'll start off by importing all the modules we'll need, then load the data, and finally build the network.
###Code
# Import Numpy, TensorFlow, TFLearn, and MNIST data
import numpy as np
import tensorflow as tf
import tflearn
import tflearn.datasets.mnist as mnist
###Output
_____no_output_____
###Markdown
Retrieving training and test dataThe MNIST data set already contains both training and test data. There are 55,000 data points of training data, and 10,000 points of test data.Each MNIST data point has:1. an image of a handwritten digit and 2. a corresponding label (a number 0-9 that identifies the image)We'll call the images, which will be the input to our neural network, **X** and their corresponding labels **Y**.We're going to want our labels as *one-hot vectors*, which are vectors that holds mostly 0's and one 1. It's easiest to see this in a example. As a one-hot vector, the number 0 is represented as [1, 0, 0, 0, 0, 0, 0, 0, 0, 0], and 4 is represented as [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]. Flattened dataFor this example, we'll be using *flattened* data or a representation of MNIST images in one dimension rather than two. So, each handwritten number image, which is 28x28 pixels, will be represented as a one dimensional array of 784 pixel values. Flattening the data throws away information about the 2D structure of the image, but it simplifies our data so that all of the training data can be contained in one array whose shape is [55000, 784]; the first dimension is the number of training images and the second dimension is the number of pixels in each image. This is the kind of data that is easy to analyze using a simple neural network.
###Code
# Retrieve the training and test data
trainX, trainY, testX, testY = mnist.load_data(one_hot=True)
###Output
Downloading MNIST...
Succesfully downloaded train-images-idx3-ubyte.gz 9912422 bytes.
Extracting mnist/train-images-idx3-ubyte.gz
Downloading MNIST...
Succesfully downloaded train-labels-idx1-ubyte.gz 28881 bytes.
Extracting mnist/train-labels-idx1-ubyte.gz
Downloading MNIST...
Succesfully downloaded t10k-images-idx3-ubyte.gz 1648877 bytes.
Extracting mnist/t10k-images-idx3-ubyte.gz
Downloading MNIST...
Succesfully downloaded t10k-labels-idx1-ubyte.gz 4542 bytes.
Extracting mnist/t10k-labels-idx1-ubyte.gz
###Markdown
Visualize the training dataProvided below is a function that will help you visualize the MNIST data. By passing in the index of a training example, the function `display_digit` will display that training image along with its corresponding label in the title.
###Code
# Visualizing the data
import matplotlib.pyplot as plt
%matplotlib inline
# Function for displaying a training image by it's index in the MNIST set
def display_digit(index):
label = trainY[index].argmax(axis=0)
# Reshape 784 array into 28x28 image
image = trainX[index].reshape([28,28])
plt.title('Training data, index: %d, Label: %d' % (index, label))
plt.imshow(image, cmap='gray_r')
plt.show()
# Display the first (index 0) training image
display_digit(0)
###Output
_____no_output_____
###Markdown
Building the networkTFLearn lets you build the network by defining the layers in that network. For this example, you'll define:1. The input layer, which tells the network the number of inputs it should expect for each piece of MNIST data. 2. Hidden layers, which recognize patterns in data and connect the input to the output layer, and3. The output layer, which defines how the network learns and outputs a label for a given image.Let's start with the input layer; to define the input layer, you'll define the type of data that the network expects. For example,```net = tflearn.input_data([None, 100])```would create a network with 100 inputs. The number of inputs to your network needs to match the size of your data. For this example, we're using 784 element long vectors to encode our input data, so we need **784 input units**. Adding layersTo add new hidden layers, you use ```net = tflearn.fully_connected(net, n_units, activation='ReLU')```This adds a fully connected layer where every unit (or node) in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call, it designates the input to the hidden layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeated calling `tflearn.fully_connected(net, n_units)`. Then, to set how you train the network, use:```net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')```Again, this is passing in the network you've been building. The keywords: * `optimizer` sets the training method, here stochastic gradient descent* `learning_rate` is the learning rate* `loss` determines how the network error is calculated. In this example, with categorical cross-entropy.Finally, you put all this together to create the model with `tflearn.DNN(net)`.
###Code
# Define the neural network
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
# Inputs
net = tflearn.input_data([None, trainX.shape[1]])
# Hidden layer(s)
net = tflearn.fully_connected(net, 128, activation='ReLU')
net = tflearn.fully_connected(net, 32, activation='ReLU')
# Output layer and training model
net = tflearn.fully_connected(net, 10, activation='softmax')
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.01, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
# Build the model
model = build_model()
###Output
_____no_output_____
###Markdown
Training the networkNow that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1` which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively.Too few epochs don't effectively train your network, and too many take a long time to execute. Choose wisely!
###Code
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=100, n_epoch=100)
###Output
Training Step: 49499 | total loss: [1m[32m0.05277[0m[0m | time: 1.571s
| SGD | epoch: 100 | loss: 0.05277 - acc: 0.9858 -- iter: 49400/49500
Training Step: 49500 | total loss: [1m[32m0.05093[0m[0m | time: 2.587s
| SGD | epoch: 100 | loss: 0.05093 - acc: 0.9852 | val_loss: 0.11016 - val_acc: 0.9678 -- iter: 49500/49500
--
###Markdown
TestingAfter you're satisfied with the training output and accuracy, you can then run the network on the **test data set** to measure its performance! Remember, only do this after you've done the training and are satisfied with the results. A good result will be **higher than 95% accuracy**. Some simple models have been known to get up to 99.7% accuracy!
###Code
# Compare the labels that our model predicts with the actual labels
# Find the indices of the most confident prediction for each item. That tells us the predicted digit for that sample.
predictions = np.array(model.predict(testX)).argmax(axis=1)
# Calculate the accuracy, which is the percentage of times the predicted labels matched the actual labels
actual = testY.argmax(axis=1)
test_accuracy = np.mean(predictions == actual, axis=0)
# Print out the result
print("Test accuracy: ", test_accuracy)
###Output
Test accuracy: 0.9704
|
Model backlog/Inference/17-commonlit-inf-roberta-base-seq-256-cls-bias-ini.ipynb | ###Markdown
Dependencies
###Code
import warnings, math, json, glob
import pandas as pd
import tensorflow.keras.layers as L
import tensorflow.keras.backend as K
from tensorflow.keras import Model
from transformers import TFAutoModelForSequenceClassification, TFAutoModel, AutoTokenizer
from commonlit_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
pd.set_option('display.max_colwidth', 150)
###Output
_____no_output_____
###Markdown
Hardware configuration
###Code
strategy, tpu = get_strategy()
AUTO = tf.data.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
###Output
REPLICAS: 1
###Markdown
Load data
###Code
base_path = '/kaggle/input/'
test_filepath = base_path + 'commonlitreadabilityprize/test.csv'
test = pd.read_csv(test_filepath)
print(f'Test samples: {len(test)}')
display(test.head())
###Output
Test samples: 7
###Markdown
Model parameters
###Code
input_noteboks = [x for x in os.listdir(base_path) if '-commonlit-' in x]
input_base_path = f'{base_path}{input_noteboks[0]}/'
with open(input_base_path + 'config.json') as json_file:
config = json.load(json_file)
config
###Output
_____no_output_____
###Markdown
Auxiliary functions
###Code
# Datasets utility functions
def custom_standardization(text, is_lower=True):
if is_lower:
text = text.lower() # if encoder is uncased
text = text.strip()
return text
def sample_target(features, target):
mean, stddev = target
sampled_target = tf.random.normal([], mean=tf.cast(mean, dtype=tf.float32),
stddev=tf.cast(stddev, dtype=tf.float32), dtype=tf.float32)
return (features, sampled_target)
def get_dataset(pandas_df, tokenizer, labeled=True, ordered=False, repeated=False,
is_sampled=False, batch_size=32, seq_len=128, is_lower=True):
"""
Return a Tensorflow dataset ready for training or inference.
"""
text = [custom_standardization(text, is_lower) for text in pandas_df['excerpt']]
# Tokenize inputs
tokenized_inputs = tokenizer(text, max_length=seq_len, truncation=True,
padding='max_length', return_tensors='tf')
if labeled:
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']},
(pandas_df['target'], pandas_df['standard_error'])))
if is_sampled:
dataset = dataset.map(sample_target, num_parallel_calls=tf.data.AUTOTUNE)
else:
dataset = tf.data.Dataset.from_tensor_slices({'input_ids': tokenized_inputs['input_ids'],
'attention_mask': tokenized_inputs['attention_mask']})
if repeated:
dataset = dataset.repeat()
if not ordered:
dataset = dataset.shuffle(2048)
dataset = dataset.batch(batch_size)
dataset = dataset.cache()
dataset = dataset.prefetch(tf.data.AUTOTUNE)
return dataset
model_path_list = glob.glob(f'{input_base_path}*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
###Output
Models to predict:
/kaggle/input/17-commonlit-roberta-base-seq-256-cls-bias-init/model_0.h5
###Markdown
Model
###Code
def model_fn(encoder, seq_len=256):
input_ids = L.Input(shape=(seq_len,), dtype=tf.int32, name='input_ids')
input_attention_mask = L.Input(shape=(seq_len,), dtype=tf.int32, name='attention_mask')
outputs = encoder({'input_ids': input_ids,
'attention_mask': input_attention_mask})
last_hidden_state = outputs['last_hidden_state']
cls_token = last_hidden_state[:, 0, :]
# x = L.GlobalAveragePooling1D()(last_hidden_state)
output = L.Dense(1, name='output')(cls_token)
model = Model(inputs=[input_ids, input_attention_mask], outputs=output)
return model
with strategy.scope():
encoder = TFAutoModel.from_pretrained(config['BASE_MODEL'])
model = model_fn(encoder, config['SEQ_LEN'])
model.summary()
###Output
Some layers from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-base/ were not used when initializing TFRobertaModel: ['lm_head']
- This IS expected if you are initializing TFRobertaModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing TFRobertaModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
All the layers of TFRobertaModel were initialized from the model checkpoint at /kaggle/input/huggingface-roberta/roberta-base/.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFRobertaModel for predictions without further training.
###Markdown
Test set predictions
###Code
tokenizer = AutoTokenizer.from_pretrained(config['BASE_MODEL'])
test_pred = []
for model_path in model_path_list:
print(model_path)
if tpu: tf.tpu.experimental.initialize_tpu_system(tpu)
K.clear_session()
model.load_weights(model_path)
# Test predictions
test_ds = get_dataset(test, tokenizer, labeled=False, ordered=True, batch_size=config['BATCH_SIZE'], seq_len=config['SEQ_LEN'])
x_test = test_ds.map(lambda sample: sample)
test_pred.append(model.predict(x_test))
###Output
/kaggle/input/17-commonlit-roberta-base-seq-256-cls-bias-init/model_0.h5
###Markdown
Submission
###Code
submission = test[['id']]
submission['target'] = np.mean(test_pred, axis=0)
submission.to_csv('submission.csv', index=False)
display(submission.head(10))
###Output
_____no_output_____ |
week7/NumericalIntegration.ipynb | ###Markdown
SYS 611: Numerical Integration (Continuous Time Simulation), by Paul T. Grogan. This example shows how to perform numerical integration for continuous time simulation. The system to be simulated is a hypothetical basin that is being filled with water. The state variable (q) is the volume of water in the basin. The time derivative (dq/dt=x(t)) is the flow rate of water into the basin, set to x(t)=t for this example. The output variable (y) is omitted in this example. Dependencies This example is compatible with Python 2 environments through use of the `__future__` library. Additionally, this example uses the `numpy` library for numerical functions, `scipy.integrate` for numerical integration, and the `matplotlib.pyplot` library for plotting.
###Code
# import the python3 behavior for importing, division, and printing in python2
from __future__ import absolute_import, division, print_function
# import the numpy package and refer to it as `np`
# see http://docs.scipy.org/doc/numpy/reference/ for documentation
import numpy as np
# import the scipy integrate package and refer to it as `integrate`
import scipy.integrate as integrate
# import the matplotlib pyplot package and refer to it as `plt`
# see http://matplotlib.org/api/pyplot_api.html for documentation
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
State Time Derivative Function Define functions for the state time derivative (dq/dt) and the input flow rate (x).
###Code
# define the time derivative
def dq_dt(q, t):
return x(t)
# define the flow rate
def x(t):
return t
###Output
_____no_output_____
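###Markdown
Since $dq/dt = x(t) = t$ with initial state $q(0) = 5$, the closed-form solution is $q(t) = 5 + \int_0^t \tau \, d\tau = 5 + t^2/2$; the plotting cell at the end uses this expression as the analytic reference curve.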
###Markdown
Numerical Integration Logic Define the times for which to compute state values (using a linear space between lower- and upper-bound times) and perform the numerical integration.
###Code
# define the times to integrate over
t = np.linspace(0.0, 5.0)
# perform the numerical integration with initial state q[0] = 5.0
q = integrate.odeint(dq_dt, 5.0, t)
###Output
_____no_output_____
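###Markdown
As an optional cross-check (not part of the original example), the same trajectory can be approximated with an explicit forward-Euler scheme, $q_{k+1} = q_k + \Delta t \, x(t_k)$, reusing `t`, `x`, and `q` from the cells above; the deviation from `odeint` should stay small for this smooth problem.
###Code
# forward-Euler cross-check of the odeint result (illustrative sketch)
dt = t[1] - t[0]
q_euler = np.zeros(len(t))
q_euler[0] = 5.0  # same initial state as above
for k in range(len(t) - 1):
    q_euler[k + 1] = q_euler[k] + dt * x(t[k])
# compare against the odeint solution (returned as a column vector, hence q[:, 0])
print('max abs difference vs odeint:', np.max(np.abs(q_euler - q[:, 0])))
###Output
_____no_output_____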
###Markdown
Visualize Outcomes Use `matplotlib` to plot the simulated state trajectory together with the analytic solution.
###Code
plt.figure()
# plot the analytical solution solved with calculus (5+t^2/2) with a black line
plt.plot(t, 5+t**2/2, '-k', label='Analytic Solution')
# plot the odeint numerical solution as a red step plot
plt.step(t, q, '-r', where='post', label='scipy.odeint')
plt.xlabel('Time ($t$)')
plt.ylabel('Water Volume ($q$)')
plt.legend(loc='best')
plt.show()
###Output
_____no_output_____ |
Pandas_1.ipynb | ###Markdown
Getting to Know Pandas Pandas is a widely used library/package for data analysis and manipulation. Being quite versatile, it lets you work quickly and efficiently with files of several types, including csv, xlsx, xls, txt, json, etc. Documentation: https://pandas.pydata.org/ Importing the Pandas package
###Code
# Por questões de boas práticas, a comunidade faz o uso de alguns apelidos para alguns pacotes.
# No caso do pandas utilizamos o "pd"
# para associar o apelido ao pacote, pasta utilizar o "as" para realizar a associação
import pandas as pd
###Output
_____no_output_____
###Markdown
Uploading files through Google Colab
###Code
from google.colab import files
# files.upload()
###Output
_____no_output_____
###Markdown
Reading the CSV file CSV stands for "*comma separated values*". In some cases, depending on the data source, files may use a separator other than the comma, such as the semicolon (;). In cases like these, when reading the CSV you must specify the separator through the "sep" argument of the "read_csv" function, as the example below shows. For more information about read_csv, see the documentation at: https://pandas.pydata.org/pandas-docs/version/0.23/generated/pandas.read_csv.html
###Code
db = pd.read_csv('db.csv', sep=';')
db
###Output
_____no_output_____
###Markdown
Viewing the data and its types
###Code
# Para visualizar uma amostra dos dados, podemos utilizar head.
# Se não for passado nenhum valor por parâmetro, por padrão, ele retornará as 5 primeiras linhas do dataframe.
db.head()
# Para visualizar mais valores, basta passar por parâmetro a quantidades de linha que deseja ver
# Abaixo vamos ver as 10 primeiras linhas
db.head(10)
# Vamos descobrir o formato do nosso dataframe
db.shape
# Agora que sabemos que o nosso dataframe possui 258 linhas e 7 colunas
# para visualizar todas as linhas, podemos utilizar o "pd.options"
pd.options.display.max_rows = 258 # O 'display.max_rows' vai exibir a quantidade de linhas que for informado, no caso são 258.
# Também podemos utilizar o "pd.options" para visualizar todas as colunas
pd.options.display.max_columns = 7 # Ao invés de passar 'display.max_rows', vamos passar o 'display.max_columns'
# Agora, sempre que tentar visualizar seu dataframe, você verá a quantidade de linhas e colunas que forma configuradas pelo pd.options
###Output
_____no_output_____
###Markdown
For more information about Options and settings, see: https://pandas.pydata.org/pandas-docs/stable/user_guide/options.html
###Code
# Para melhor visualização, vamos definir os valores como 10 linhas e 7 colunas
pd.options.display.max_columns = 7
pd.options.display.max_rows = 10
db
# Para visualizar os tipos de dados presentes no seu dataframe podemos utilizar o dtypes
db.dtypes
# Também podemos utilizar o info para ver mais ifnromações sobre o dataframe
db.info()
# O describe pode mostrar várias métricas estatisticas para auxiliar na exploração.
# Abaixo ele mostra apenas 3 colunas das 7. Isso acontece porque apenas 3 colunas apresentam valores numéricos, como INT e Float
# Valores do tipo Boleano (bool) e String (objetct) não entram nas métricas.
db.describe()
# podemos utilizar a Built-in Function "type" para descobrir o tipo de dado.
type(db)
###Output
_____no_output_____
###Markdown
Tuples Like lists, tuples store a collection of items. The main difference is that tuples are immutable (a quick immutability check follows the first code cell below). To define a tuple we use () instead of []. lista = ['a', 'b', 'c'] tupla = ('a', 'b', 'c')
###Code
# Uma das forma de definir uma tupla
tp = ('a', 'b', 'c')
tp
type(tp)
# As tuplas, assim como as listas, armazenam tipos variados de dados.
# Vamos criar uma tupla com duas variáveis que possuem tipos diferentes
nome = 'Jose'
idade = 35
(nome, idade)
#Podemos utilizar Built-in Functions do Python para criar uma tupla, como mostra o exemplo abaixo:
nomes_Pessoas = tuple(['Maria', 'Jonas', 'Kleber', 'Ana'])
nomes_Pessoas
###Output
_____no_output_____
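###Markdown
Because tuples are immutable, trying to assign to one of their positions raises a TypeError; the cell below is a small illustrative check using the `tp` tuple created above.
###Code
# Tuples are immutable: item assignment is not allowed
try:
    tp[0] = 'z'
except TypeError as error:
    print(error)
###Output
_____no_output_____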
###Markdown
More information about Built-in Functions: https://docs.python.org/3/library/functions.html Selecting items from tuples
###Code
# Para selecionar os dados dentro da Tupla, basta selecionar o índice como é feito em uma lista.
nomes_Pessoas[0]
# Selecionando último item da Tupla
nomes_Pessoas[-1]
# Fazendo o fatiamento de uma Tupla.
# Vamos selecionar os dois primeiros valores, a partir do índice 0. Vale lembrar que o último valor não será incluído.
nomes_Pessoas[0:2]
nomes_Pessoas = ('Maria', 'Jonas', 'Kleber', 'Ana',('Isabella', 'Mateus', 'Gisele'))
# Agora vamos selecionar os dados que estão dentro da Tupla que está dentro da Tupla.
nomes_Pessoas[4]
# Para acessar uma valor específico da Tupla interna, devemos realizar o seguinte fatiamento:
# Vamos selecionar o nome "Isabella" que está presente no índice 0 da Tupla interna.
nomes_Pessoas[4][0]
# Agora vamos selecionar o nome "Gisele"
nomes_Pessoas[4][2]
###Output
_____no_output_____
###Markdown
Iterating over tuples
###Code
# Vamos reapoveitar os dados anteriores.
nomes_Pessoas = ('Maria', 'Jonas', 'Kleber', 'Ana')
# Vamos fazer varrer a nossa tupla utilizando o for
for i in nomes_Pessoas:
print(i)
###Output
Maria
Jonas
Kleber
Ana
###Markdown
Unpacking a tuple
###Code
# Podemos pegar cada valor presente em uma tupla e associar a uma variável específica.
nome_1, nome_2, nome_3, nome_4 = nomes_Pessoas
nome_1
nome_2
nome_3
nome_4
#Caso você queira transferir os valores de uma tupla para uma lista, é possível utilizar o for e associar cada valor a uma nova lista
nomes_Lista = []
for i in nomes_Pessoas:
nomes_Lista.append(i)
nomes_Lista
# Você também pode criar atribuições de elementos específicos, ignorando aqueles que não são necessários.
nomes_Pessoas
# Vamos selecionar apenas o "jonas" e a "Ana"
_, A, _, B = nomes_Pessoas # Quando utilizei o underline/undersocore eu ignorei os valores correspondentes dentro da Tupla
A
B
# Também podemos passar o valor correspondente ao índice
k = nomes_Pessoas[2]
k
# Para ignorar uma sequência de valores, podemos utilizar o *_
_, A, *_ = nomes_Pessoas
A
###Output
_____no_output_____
###Markdown
zip() In Python 3 the zip function returns an iterator of tuples (which can be materialized into a list with list())
###Code
# Vamos criar duas lisas, uma lista de nomes e uma lista de idade.
nomes = ['Maria', 'Jonas', 'Kleber', 'Ana']
nomes
idades = [25, 28, 30, 19]
idades
# Vamos utilizar o zip para criar um iterador
zip(nomes, idades)
# Agora vamos utilizar a Built-in Function list.
# Teremos como retorno uma lista com os iteradores.
# O que aconteceu? O zip associou o primeiro item da lista nomes com o primeiro item da lista idades, e foi repetindo esse mesmo padrão.
list(zip(nomes, idades))
# Agora podemos iterar o nome e a idade da pessoa ao mesmo tempo
for i in zip(nomes, idades):
print(i)
# O formato acima não seria muito útil, pois não conseguimos explorar os valores de forma individual.
# Veja o exemplo abaixo:
for i in zip(nomes, idades):
if idades > 26:
print(i)
# O mais indicado seria fazer um desempacotamento, assim podemos avaliar os dados individualmente
for nomes, idades in zip(nomes, idades):
if idades > 26:
print(nomes, idades)
###Output
_____no_output_____
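###Markdown
One detail worth keeping in mind (illustrative note): because `zip()` returns a lazy iterator rather than a list, it is exhausted after a single pass, which is why the examples above wrap it in `list()` before printing.
###Code
# zip() produces an iterator that can only be consumed once
pares = zip(['a', 'b'], [1, 2])
print(list(pares))  # [('a', 1), ('b', 2)]
print(list(pares))  # [] -- the iterator is already exhausted
###Output
_____no_output_____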
###Markdown
Dictionaries Lists store a collection of values that can be accessed through their indices. Dictionaries also store a collection of values, but they do not work with the concept of an index; instead they use keys and values.
###Code
# Vamos comparar os acessos de uma lista com o dicionário
nomes = ['Ana', 'Gabi', 'Marcos']
nomes
idade = [19, 23, 25]
idade
nomes.index('Gabi') # o 'index' mostra qual é o índice do ítem selecionado
# Com uma lista pequena fica fácil de descobrir o índice, mas no caso uma lista com muitos valores, ficaria difícil contar os índices.
# Como as listas estão separadas, vamos descobrir a idade da 'Gabi' passando seu índice na lista de idade.
idade[nomes.index('Gabi')]
# Agora vamos criar uma dicionário com os valores acima e vamos chama-lo de 'dicio'.
dicio = {'Ana' : 19, 'Gabi' : 23, 'Marcos' : 25}
dicio
type(dicio)
###Output
_____no_output_____
###Markdown
Creating a dictionary with zip
###Code
# Podemos agrupar utilizando o zip
dicio = dict(zip(nomes, idade))
dicio
###Output
_____no_output_____
###Markdown
Operations with dictionaries
###Code
dicio
###Output
_____no_output_____
###Markdown
[key]
###Code
# No dicionário acessamos valores através das chaves (key).
dicio['Ana'] # Utilizando a chave 'Ana' eu vejo qual o seu valor correspondente, no caso a idade da Ana
###Output
_____no_output_____
###Markdown
in / not in
###Code
# Também podemos descobrir se um valor está presente dentro do dicionário pesquisando pela chave.
# Se estiver presente, ele retorna True
'Ana' in dicio
# Se não estiver presente, ele retorna False
'Jose' in dicio
# Também podemos verificar se o valor não está presente no dicionário.
# Para isso podemos utilizar o not in
'Jose' not in dicio
###Output
_____no_output_____
###Markdown
len
###Code
# O len revela o tamanho do seu dicionário (também se aplica a tuplas e listas)
len(dicio)
###Output
_____no_output_____
###Markdown
del
###Code
# O 'del' apaga um determinado valor do seu dicionário (também se aplica a listas)
dicio
del(dicio['Marcos']) # Devemos informar a chave correspondente ao item que queremos remover
dicio
###Output
_____no_output_____
###Markdown
Dictionary methods update Allows the dictionary to be updated; we can add new values.
###Code
# No último exemplo, nós removemos o 'Marcos' do nosso dicionário. Agora, vamos incluí-lo novamente utilizando o método update
dicio.update({'Marcos':25})
dicio
# Além de adiconar novos itens, também podemos atualizar o que já está dentro do dicionário.
# Vamos mudar a idade do 'Marcos' para 26 anos e incluir uma nova pessoa, a Leticia, no nosso dicionário.
dicio.update({'Marcos':26, 'Leticia':28})
dicio
###Output
_____no_output_____
###Markdown
copy The copy method follows the same principle as for lists: it creates a copy without referencing the original.
###Code
dicio2 = dicio.copy()
dicio2
# Se removermos um valor de 'dicio2' ele não fará alterações no 'dicio' original.
del(dicio2['Leticia'])
print(f'No dicio2 nós temos: {dicio2}')
print(f'No dicio nós temos: {dicio}')
###Output
No dicio2 nós temos: {'Ana': 19, 'Gabi': 23, 'Marcos': 26}
No dicio nós temos: {'Ana': 19, 'Gabi': 23, 'Marcos': 26, 'Leticia': 28}
###Markdown
pop The 'pop' method removes an item from the dictionary. It looks up the key passed as a parameter and returns the value that was removed.
###Code
# No exemplo abaixo vamos remover o 'Marcos'
dicio2.pop('Marcos')
# Se indicarmos uma chave que não existe, o pop vai retornar um erro
dicio2.pop('Maria')
# É possível tratar esse erro passando um segundo argumento. Caso ele não encontre o valor, ele retornará o argumento passado.
dicio2.pop('Maria', 'Item não encontrado')
###Output
_____no_output_____
###Markdown
clear The 'clear' method empties the entire dictionary
###Code
dicio2
dicio2.clear()
dicio2
###Output
_____no_output_____
###Markdown
Iterating over dictionaries keys Returns a view of the dictionary's keys (which can be iterated like a list)
###Code
dicio = {'Ana': 19, 'Gabi': 23, 'Marcos': 25, 'Leticia':27, 'Vitor':28}
dicio
dicio.keys()
# Como retorna uma lista, podemos iterar com o laço 'for'
for i in dicio.keys():
print(i)
# Ou para ver os valores que correspondem à chave
for i in dicio.keys():
print(dicio[i])
###Output
19
23
25
27
28
###Markdown
values Returns a view of the dictionary's values
###Code
dicio.values()
# Como retorna uma lista, também podemos iterar com o laço 'for'
for i in dicio.values():
print(i)
###Output
19
23
25
27
28
###Markdown
items Returns a view containing a (key, value) tuple for each entry
###Code
dicio.items()
# Também podemos iterar com o for.
for i in dicio.items():
print(i)
# Podemos acessar os valores individualmente
for key, item in dicio.items():
print(key, item)
# Também podemos criar filtros.
# Vamos criar um filtro para mostrar apenas pessoas com mais de 24 anos
for key, item in dicio.items():
if item > 24:
print(key, item)
###Output
Marcos 25
Leticia 27
Vitor 28
###Markdown
The Series data structure Creating a Series from a list
###Code
nomes = ['Gabi', 'Ana', 'Marcos']
nomes
# Utilizando o 'pd.Series' você transforma uma lista em uma Series
# Lembrando que o Pandas assume que strings são do tipo object
pd.Series(nomes)
###Output
_____no_output_____
###Markdown
Creating a DataFrame from a list of dictionaries
###Code
info = [{'Nome': 'Ana', 'Idade': 19, 'Sexo':'Feminino', 'Altura': 1.68, 'Profissão':'Dentista', 'Renda_Mensal':'8.000'},
{'Nome': 'Gabi', 'Idade': 22, 'Sexo':'Feminino', 'Altura': 1.73, 'Profissão':'Arquiteto', 'Renda_Mensal':'10.000'},
{'Nome': 'Marcos', 'Idade': 24, 'Sexo':'Masculino', 'Altura': 1.75, 'Profissão':'Engenheiro', 'Renda_Mensal':'7.000'}]
# Como estamos utilizando dicionários, as chaves (keys) são atribuídas automáticamente ao nome das colunas.
info_DF = pd.DataFrame(info)
info_DF
# Para modificar a ordem das colunas de forma simples, basta chamar o dataframe e abrir dois colchetes passando os nomes na ordem que deseja.
info_DF[['Nome', 'Sexo', 'Idade','Altura','Renda_Mensal','Profissão']]
###Output
_____no_output_____
###Markdown
Creating a DataFrame from a dictionary of value lists
###Code
new_Info = {'Nome':['Ana','Gabi','Marcos'],
'Sexo':['Feminino','Feminino','Masculino'],
'Idade':[19,22,24],
'Altura':[1.68,1.73,1.75],
'Renda_Mensal':[8.000,10.000,7.000],
'Profissão':['Dentista','Arquiteto','Engenheiro']}
new_Info
new_Info_DF = pd.DataFrame(new_Info)
new_Info_DF
# Por padrão, quando é criado o DF, ele cria um índice númerico correspondente ao número de linhas, a partir do 0.
# Para definir uma coluna como índice, basta passar o argumento 'index_col' igualando ao índice da coluna.
# No exemplo abaixo vamos colocar a coluna 'Nome' como indice. A coluna 'Nome está na posição 0'
dadosInfo = pd.read_csv('Dados_Pessoas.csv', index_col= 0)
dadosInfo
###Output
_____no_output_____
###Markdown
Selecting columns
###Code
#Selecionando uma coluna ele retorna uma Series com o valor da coluna selecionada.
#Nesse exemplo, a coluna nome é o index
dadosInfo['Idade']
type(dadosInfo['Idade'])
# Para que ele retorne um DF, devemos adicionar um colchetes adicional
dadosInfo[['Idade']]
type(dadosInfo[['Idade']])
###Output
_____no_output_____
###Markdown
Selecting rows
###Code
dadosInfo[0:3]
dadosInfo[2:5]
###Output
_____no_output_____
###Markdown
.loc The 'loc' accessor returns all the information of the row selected by its label
###Code
# Nomrlamente o index é uma sequência numérica, mas como fizemos as alterações anteriormente, nosso index é a coluna 'nome'
# Então, nesse caso, devemos passar o nome, esse será o nosso rótulo.
dadosInfo.loc['Luisa']
type(dadosInfo.loc['Luisa'])
# Lembrando que para que ele retorne um DF devemos passar dois colchetes
dadosInfo.loc[['Luisa']]
type(dadosInfo.loc[['Luisa']])
# Também podemos acessar mais de uma coluna
dadosInfo.loc[['Luisa', 'Gabi']]
# Também podemos filtrar, além das colunas, as linhas que queremos ver.
# Vamos selecionar os nomes 'Gabi' e 'Luisa' e também suas respectivas alturas e profissões.
# Lembrando que devemos passar peimrio as linhas que queremos e depois informar as colunas.
dadosInfo.loc[['Luisa', 'Gabi'],['Altura','Profissão']]
# Caso você queira selecionar todas as linhas, mas esteja interessado em apenas algumas colunas específicas,
# o padrão 'linha' e 'coluna' deve ser respeitado. Veja um exemplo abaixo.
# Utilize ':' sem os colchetes para selecionar todas as linhas.
dadosInfo.loc[:,['Altura', 'Profissão']]
###Output
_____no_output_____
###Markdown
iloc The 'iloc' accessor performs the same kind of slicing as 'loc', but it selects based on the index (position) of the information.
###Code
dadosInfo.iloc[[0]]
dadosInfo.iloc[0:3]
# No fatiamento abaixo nós selecionamos uma sequência de valores
dadosInfo.iloc[0:3, 0:3]
# Para selecionar valores individuais utilizando o 'iloc', devemos passar os indices separados por vírgula dentro de colchetes.
# Vamos selecionar os nomes: 'Ana', 'Matheus' e Leticia.
# Vamos selecionar também as idades e altura
dadosInfo.iloc[[0, 3, 6], [1, 3]]
###Output
_____no_output_____
###Markdown
Queries on a DataFrame
###Code
dadosInfo.head()
# Abaixo foi criada uma variável 'select' que foi utilizada para criar um filtro.
select = dadosInfo.Sexo == 'Feminino'
select # A seleção retorna uma Series Booleana com valores True para o sexo feminino
# Podemos passar a Series booleana para realizar um filtro.
dadosInfo[select]
# Também podemos colocar mais condições.
# Vamos selecionar o público feminino com mais de 22 anos
dadosInfo[(dadosInfo.Sexo == 'Feminino') & (dadosInfo.Idade > 22)]
###Output
_____no_output_____
###Markdown
The query method
###Code
# E também podemos utilizar o método 'query'
# Vamos repetir o experimento anterior com o método query
dadosInfo.query('Sexo == "Feminino" and Idade > 22')
###Output
_____no_output_____
###Markdown
**Pandas is imported and ready to use**
###Code
import pandas
mydataset = {
'cars': ["BMW", "Volvo", "Ford"],
'passings': [3, 7, 2]
}
myvar = pandas.DataFrame(mydataset)
print(myvar)
###Output
_____no_output_____
###Markdown
**Create an alias with the as keyword while importing**
###Code
import pandas as pd
mydataset = {
'cars': ["BMW", "Volvo", "Ford"],
'passings': [3, 7, 2]
}
myvar = pd.DataFrame(mydataset)
print(myvar)
###Output
_____no_output_____
###Markdown
**Checking Pandas Version**
###Code
import pandas as pd
print(pd.__version__)
###Output
_____no_output_____
###Markdown
**A Pandas Series is like a column in a table.**
###Code
import pandas as pd
a = [1, 7, 2]
myvar = pd.Series(a)
print(myvar)
###Output
_____no_output_____
###Markdown
**With the index argument, you can name your own labels**
###Code
import pandas as pd
a = [1, 7, 2]
myvar = pd.Series(a, index = ["x", "y", "z"])
print(myvar)
###Output
_____no_output_____
###Markdown
**Create a simple Pandas Series from a dictionary**
###Code
import pandas as pd
calories = {"day1": 420, "day2": 380, "day3": 390}
myvar = pd.Series(calories)
print(myvar)
###Output
_____no_output_____
###Markdown
**Create a Series using only data from "day1" and "day2"**
###Code
import pandas as pd
calories = {"day1": 420, "day2": 380, "day3": 390}
myvar = pd.Series(calories, index = ["day1", "day2"])
print(myvar)
###Output
_____no_output_____
###Markdown
**Create a DataFrame from two Series**
###Code
import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
myvar = pd.DataFrame(data)
print(myvar)
###Output
_____no_output_____
###Markdown
**Create a simple Pandas DataFrame**
###Code
import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
###Output
_____no_output_____
###Markdown
**With the index argument, you can name your own indexes**
###Code
import pandas as pd
data = {
"calories": [420, 380, 390],
"duration": [50, 40, 45]
}
df = pd.DataFrame(data, index = ["day1", "day2", "day3"])
print(df)
###Output
_____no_output_____ |
docs/mineflayer.ipynb | ###Markdown
Using mineflayer in Python This is a tutorial on how to use mineflayer in Python. This example will connect you to the PrismarineJS test server. You can join it with prismarine-viewer or your Minecraft client at server IP **95.111.249.143:10000**. If you're new to Jupyter Notebooks, you can press the "Play" button at the left of each code block to run it. Make sure that you run the blocks in the correct order. Setup First, make sure you have Python version 3.7 and Node.js version 14 or newer installed
###Code
!python --version
!node --version
###Output
Python 3.7.11
v14.16.0
###Markdown
Now, we can use pip to install the `javascript` Python package to access Node.js libraries from Python.
###Code
!pip install javascript
###Output
_____no_output_____
###Markdown
Usage If all is well, we can import the `javascript` library. We can then import the `require` function, which works similarly to the `require` function in Node.js but does the dependency management for us. You may notice the extra imports: On, Once, off and AsyncTask. These will be discussed later on.
###Code
from javascript import require, On, Once, AsyncTask, once, off
###Output
_____no_output_____
###Markdown
We can now import Mineflayer
###Code
mineflayer = require('mineflayer')
###Output
_____no_output_____
###Markdown
Once we've done that, we can create a new `bot` instance through the `createBot` function. You can see the docs for this function [here](https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#bot). In the line below we specify a hostname and a port for the server, but do not pass any `auth` or `password` options, so it will connect to the server in offline mode. Below that, we also add a call to the `once` function, which pauses the thread until an event has been triggered, then returns the output. Here, we send an "I spawned" chat message after the `login` event has been triggered on `bot`.
###Code
random_number = id([]) % 1000 # Give us a random number upto 1000
BOT_USERNAME = f'colab_{random_number}'
bot = mineflayer.createBot({ 'host': '95.111.249.143', 'port': 10000, 'username': BOT_USERNAME, 'hideErrors': False })
# The spawn event
once(bot, 'login')
bot.chat('I spawned')
###Output
_____no_output_____
###Markdown
If your bot spawned, we can now take a look at the bot's position
###Code
bot.entity.position
###Output
_____no_output_____
###Markdown
Listening to events You can register an event handler with the `@On` or `@Once` decorator. This decorator takes two arguments, first it's the **Event Emitter** (the object that is sending events) and the second is the **event name**, what event you want to listen to. *Do not use the .on or .once methods on bot, use the decorators instead.*A decorator always has a function under it which is being decorated, which can have any name. The first parameter to any event emitter callback is the `this` argument. In the code below, we create an event emitter on `bot` that listens to `playerJoin` events, then print that out.
###Code
@On(bot, 'playerJoin')
def end(this, player):
bot.chat('Someone joined!')
###Output
_____no_output_____
###Markdown
In Python, you cannot leave any arguments for an event handler callback blank like in JavaScript. Instead, you can use the asterisk (`*`) operator in Python to capture all remaining arguments to the right, much like the `...` rest/spread operator in JavaScript. The parameter with the asterisk will be a tuple containing the captured arguments.You can stop listening for events through an event handler by using the imported `off` function. It takes three parameters: the emitter, event name, and a reference to the Python function.
###Code
@On(bot, 'chat')
def onChat(this, user, message, *rest):
print(f'{user} said "{message}"')
# If the message contains stop, remove the event listener and stop logging.
if 'stop' in message:
off(bot, 'chat', onChat)
###Output
_____no_output_____
###Markdown
You need to `off` all the event listeners you listen to with `@On`, else the Python process won't exit until all of the active event emitters have been off'ed. If you only need to listen once, you can use the `@Once` decorator like in the example above. Asynchronous tasks By default, all the operations you do run on the main thread. This means you can only do one thing at a time. To multitask, you can use the `@AsyncTask` decorator to run a function in a new thread, while not obstructing the main thread. Block breaking Take a look at the example below. Here we listen for a "break" trigger in a chat message, then we start digging the block underneath, while simultaneously sending a message that the bot has "started digging".
###Code
@On(bot, 'chat')
def breakListener(this, sender, message, *args):
if sender and (sender != BOT_USERNAME):
if 'break' in message:
pos = bot.entity.position.offset(0, -1, 0)
blockUnder = bot.blockAt(pos)
if bot.canDigBlock(blockUnder):
bot.chat(f"I'm breaking the '{blockUnder.name}' block underneath")
# The start=True parameter means to immediately invoke the function underneath
# If left blank, you can start it with the `start()` function later on.
try:
@AsyncTask(start=True)
def break_block(task):
bot.dig(blockUnder)
bot.chat('I started digging!')
except Exception as e:
bot.chat(f"I had an error {e}")
else:
bot.chat(f"I can't break the '{blockUnder.name}' block underneath")
if 'stop' in message:
off(bot, 'chat', breakListener)
###Output
_____no_output_____
###Markdown
Using mineflayer plugins Pick the plugin you want from the list [here](https://github.com/PrismarineJS/mineflayer#third-party-plugins), then `require()` it and register it to the bot. Some plugins have different ways to register to the bot; look at the plugin's README for usage steps. mineflayer-pathfinder `mineflayer-pathfinder` is an essential plugin that helps your bot move between places through A* pathfinding. Let's import it:
###Code
pathfinder = require('mineflayer-pathfinder')
bot.loadPlugin(pathfinder.pathfinder)
# Create a new minecraft-data instance with the bot's version
mcData = require('minecraft-data')(bot.version)
# Create a new movements class
movements = pathfinder.Movements(bot, mcData)
# How far to be from the goal
RANGE_GOAL = 1
###Output
_____no_output_____
###Markdown
Now let's create a goal for the bot to move to where another player wants, based on a chat message.
###Code
bot.removeAllListeners('chat')
@On(bot, 'chat')
def handleMsg(this, sender, message, *args):
if sender and (sender != BOT_USERNAME):
bot.chat('Hi, you said ' + message)
if 'come' in message:
player = bot.players[sender]
target = player.entity
if not target:
bot.chat("I don't see you !")
return
pos = target.position
bot.pathfinder.setMovements(movements)
bot.pathfinder.setGoal(pathfinder.goals.GoalNear(pos.x, pos.y, pos.z, RANGE_GOAL))
if 'stop' in message:
off(bot, 'chat', handleMsg)
###Output
_____no_output_____
###Markdown
Analyzing the world You can also interact with mineflayer through any other Python package. Let's analyze some block frequencies...
###Code
import matplotlib.pyplot as plt
figure = plt.figure()
axes = figure.add_axes([0,0,1,1])
Vec3 = require('vec3').Vec3
columns = bot.world.getColumns()
block_freqs = {}
for c in range(0, 3): # iterate through some of the loaded chunk columns
cc = columns[c].column
for y in range(1, 40):
for x in range(1, 16):
for z in range(1, 16):
block = cc.getBlock(Vec3(x, y, z))
if block.name in block_freqs:
block_freqs[block.name] += 1
else:
block_freqs[block.name] = 1
print(block_freqs)
axes.bar(block_freqs.keys(), block_freqs.values())
plt.xticks(rotation=45)
plt.show()
###Output
{'bedrock': 1321, 'stone': 19258, 'diorite': 1123, 'lava': 64, 'granite': 1704, 'andesite': 1459, 'redstone_ore': 68, 'iron_ore': 156, 'coal_ore': 282, 'gold_ore': 26, 'lapis_ore': 5, 'dirt': 570, 'emerald_ore': 3, 'diamond_ore': 9, 'gravel': 66, 'air': 211}
###Markdown
Exiting the bot Once you're done, you can call `bot.quit()` or `bot.end()` to disconnect and stop the bot.
###Code
bot.quit()
###Output
_____no_output_____
###Markdown
Using mineflayer in Python This is a tutorial on how to use mineflayer in Python. This example will connect you to the PrismarineJS test server. You can join it with prismarine-viewer or your Minecraft client at server IP **95.111.249.143:10000**. If you're new to Jupyter Notebooks, you can press the "Play" button at the left of each code block to run it. Make sure that you run the blocks in the correct order. Setup First, make sure you have Python version 3.7 and Node.js version 14 or newer installed
###Code
!python --version
!node --version
###Output
Python 3.7.11
v14.16.0
###Markdown
Now, we can use pip to install the `javascript` Python package to access Node.js libraries from Python.
###Code
!pip install javascript
###Output
_____no_output_____
###Markdown
Usage If all is well, we can import the `javascript` library. We can then import the `require` function, which works similarly to the `require` function in Node.js but does the dependency management for us. You may notice the extra imports: On, Once, off and AsyncTask. These will be discussed later on.
###Code
from javascript import require, On, Once, AsyncTask, off
###Output
_____no_output_____
###Markdown
We can now import Mineflayer
###Code
mineflayer = require('mineflayer')
###Output
_____no_output_____
###Markdown
Once we've done that, we can create a new `bot` instance through the `createBot` function. You can see the docs for this function [here](https://github.com/PrismarineJS/mineflayer/blob/master/docs/api.md#bot). In the line below we specify a hostname and a port for the server, but do not pass any `auth` or `password` options, so it will connect to the server in offline mode. Below that, we also add an event handler that gets called on the "spawn" event and sends a chat message.
###Code
random_number = id([]) % 1000 # Give us a random number upto 1000
BOT_USERNAME = f'colab_{random_number}'
bot = mineflayer.createBot({ 'host': '95.111.249.143', 'port': 10000, 'username': BOT_USERNAME, 'hideErrors': False })
# The spawn event
@Once(bot, 'login')
def spawn(*a):
bot.chat('I spawned')
###Output
_____no_output_____
###Markdown
If your bot spawned, we can now take a look at the bot's position
###Code
bot.entity.position
###Output
_____no_output_____
###Markdown
Listening to events You can register an event handler with the `@On` or `@Once` decorator. This decorator takes two arguments, first it's the **Event Emitter** (the object that is sending events) and the second is the **event name**, what event you want to listen to. *Do not use the .on or .once methods on bot, use the decorators instead.*A decorator always has a function under it which is being decorated, which can have any name. The first parameter to any event emitter callback is the `this` argument. In the code below, we create an event emitter on `bot` that listens to `playerJoin` events, then print that out.
###Code
@On(bot, 'playerJoin')
def end(this, player):
bot.chat('Someone joined!')
###Output
_____no_output_____
###Markdown
In Python, you cannot leave any arguments for an event handler callback blank like in JavaScript. Instead, you can use the asterisk (`*`) operator in Python to capture all remaining arguments to the right, much like the `...` rest/spread operator in JavaScript. The parameter with the asterisk will be a tuple containing the captured arguments.You can stop listening for events through an event handler by using the imported `off` function. It takes three parameters: the emitter, event name, and a reference to the Python function.
###Code
@On(bot, 'chat')
def onChat(this, user, message, *rest):
print(f'{user} said "{message}"')
# If the message contains stop, remove the event listener and stop logging.
if 'stop' in message:
off(bot, 'chat', onChat)
###Output
_____no_output_____
###Markdown
You need to `off` all the event listeners you listen to with `@On`, else the Python process won't exit until all of the active event emitters have been off'ed. If you only need to listen once, you can use the `@Once` decorator like in the example above. Asynchronous tasks By default, all the operations you do run on the main thread. This means you can only do one thing at a time. To multitask, you can use the `@AsyncTask` decorator to run a function in a new thread, while not obstructing the main thread. Block breaking Take a look at the example below. Here we listen for a "break" trigger in a chat message, then we start digging the block underneath, while simultaneously sending a message that the bot has "started digging".
###Code
@On(bot, 'chat')
def breakListener(this, sender, message, *args):
if sender and (sender != BOT_USERNAME):
if 'break' in message:
pos = bot.entity.position.offset(0, -1, 0)
blockUnder = bot.blockAt(pos)
if bot.canDigBlock(blockUnder):
bot.chat(f"I'm breaking the '{blockUnder.name}' block underneath {bot.canDigBlock(blockUnder)}")
# The start=True parameter means to immediately invoke the function underneath
# If left blank, you can start it with the `start()` function later on.
try:
@AsyncTask(start=True)
def break_block(task):
bot.dig(blockUnder)
bot.chat('I started digging!')
except Exception as e:
bot.chat(f"I had an error {e}")
else:
bot.chat(f"I can't break the '{blockUnder.name}' block underneath")
if 'stop' in message:
off(bot, 'chat', breakListener)
###Output
_____no_output_____
###Markdown
Using mineflayer plugins Pick the plugin you want from the list [here](https://github.com/PrismarineJS/mineflayer#third-party-plugins), then `require()` it and register it to the bot. Some plugins have different ways to register to the bot; look at the plugin's README for usage steps. mineflayer-pathfinder `mineflayer-pathfinder` is an essential plugin that helps your bot move between places through A* pathfinding. Let's import it:
###Code
pathfinder = require('mineflayer-pathfinder')
bot.loadPlugin(pathfinder.pathfinder)
# Create a new minecraft-data instance with the bot's version
mcData = require('minecraft-data')(bot.version)
# Create a new movements class
movements = pathfinder.Movements(bot, mcData)
# How far to be from the goal
RANGE_GOAL = 1
###Output
_____no_output_____
###Markdown
Now let's create a goal for the bot to move to where another player wants, based on a chat message.
###Code
bot.removeAllListeners('chat')
@On(bot, 'chat')
def handleMsg(this, sender, message, *args):
if sender and (sender != BOT_USERNAME):
bot.chat('Hi, you said ' + message)
if 'come' in message:
player = bot.players[sender]
target = player.entity
if not target:
bot.chat("I don't see you !")
return
pos = target.position
bot.pathfinder.setMovements(movements)
bot.pathfinder.setGoal(pathfinder.goals.GoalNear(pos.x, pos.y, pos.z, RANGE_GOAL))
if 'stop' in message:
off(bot, 'chat', handleMsg)
###Output
_____no_output_____
###Markdown
Analyzing the world You can also interact with mineflayer through any other Python package. Let's analyze some block frequencies...
###Code
import matplotlib.pyplot as plt
figure = plt.figure()
axes = figure.add_axes([0,0,1,1])
Vec3 = require('vec3').Vec3
columns = bot.world.getColumns()
block_freqs = {}
for c in range(0, 4): # iterate through some of the loaded chunk columns
cc = columns[c].column
for y in range(1, 40):
for x in range(1, 16):
for z in range(1, 16):
block = cc.getBlock(Vec3(x, y, z))
if block.name in block_freqs:
block_freqs[block.name] += 1
else:
block_freqs[block.name] = 1
print(block_freqs)
axes.bar(block_freqs.keys(), block_freqs.values())
plt.show()
###Output
{'stone': 32200, 'bedrock': 1807, 'dirt': 665, 'gravel': 340, 'lava': 56, 'coal_ore': 1, 'air': 30, 'torch': 1}
###Markdown
Exiting the bot Once you're done, you can call `bot.quit()` or `bot.end()` to disconnect and stop the bot.
###Code
bot.quit()
###Output
_____no_output_____ |
ejemplos/procesamiento de texto/nube_informe.ipynb | ###Markdown
 Nubes de palabras del 3er Informe de gobierno federalIngeniería de Características 2021-2**Julio Waissman**septiembre, 2021 IntroducciónEn esta libreta vamos a ver como hacer nubes de palabras como un pretexto para ver como usar la biblioteca `spacy` en la limpieza de lenguaje natural (procesamiento sencillo).Como un ejemplo de aplicación actual (al momento de hacer la libreta, claro). Vamos a utilizar el [Discurso del presidente Andrés Manuel López Obrador durante el Tercer Informe de Gobierno](https://lopezobrador.org.mx/2021/09/01/discurso-del-presidente-andres-manuel-lopez-obrador-durante-el-tercer-informe-de-gobierno/).Carguemos primero las bibliotecas que vamos ausar y algunas configuraciones de base.
###Code
!pip install requests
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import spacy
from bs4 import BeautifulSoup
import requests
import wordcloud
nlp = spacy.load('es_core_news_md')
plt.style.use('ggplot')
plt.rcParams['figure.figsize'] = (15, 7)
###Output
_____no_output_____
###Markdown
Downloading the text We will use `requests` to download the full page as raw data, and then use `BeautifulSoup` to extract the paragraph text from it. Each paragraph is stored as one row of a `dataframe`, in case we later decide to run other kinds of analysis on the information.
###Code
url = "https://lopezobrador.org.mx/2021/09/01/discurso-del-presidente-andres-manuel-lopez-obrador-durante-el-tercer-informe-de-gobierno/"
informe_html = requests.get(url)
sopa = BeautifulSoup(informe_html.text)
contenido = sopa.find_all("div", {"class":"entry-content"})
df_informe = pd.DataFrame({
'Parrafo': [parrafo.text for parrafo in contenido[0].find_all("p")]
})
df_informe
###Output
_____no_output_____
###Markdown
And from what we can see, there are at least a few lines that contain no alphanumeric characters; they are used as separators and can be removed.
###Code
df_informe = df_informe[df_informe.Parrafo.str.contains(r"\w", regex=True)]
df_informe
###Output
_____no_output_____
###Markdown
Making a word cloud, *fast and furious* Now let's use the text exactly as we have it to build a word cloud, using only what the [`wordcloud`](https://amueller.github.io/word_cloud) library offers.
###Code
# Primero vamos a ver la funcionalidad básica
# de la clase WordCloud
wordcloud.WordCloud?
texto = '\n'.join(df_informe.Parrafo.values)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(texto)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Pretty, but rather useless. The biggest problem with this word cloud is that it was built from all the words, and most of the most frequent ones carry no information. So let's use the stop-word list that `spacy` provides with the Spanish model we downloaded.
###Code
palabras_paro = nlp.Defaults.stop_words
# Genera la nube de palabras
wc = wordcloud.WordCloud(
stopwords=palabras_paro
).generate(texto)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
Somewhat better, but we can see it still uses words that would be meaningful in another context yet are entirely expected in a government report. For example: México, or words related to percentages and quantities.
###Code
# Actualizamos palabras a mano
palabras_paro.update([
"México", "país", "gobierno",
"año", "años", "mil", "millones",
"pesos", "dolares", "dólares", "ciento"
])
# Genera la nube de palabras
wc = wordcloud.WordCloud(
stopwords=palabras_paro
).generate(texto)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
A bit better, but we can further tune the font sizes and other details
###Code
# Una mascara redonda de radio dado en pixeles
radio = 200
largo = int(1.2 * radio)
x, y = np.ogrid[:2*largo, :2*largo]
mascara_redonda = (x - largo) ** 2 + (y - largo) ** 2 > radio ** 2
mascara_redonda = 255 * mascara_redonda.astype(int)
# Genera la nube de palabras
wc = wordcloud.WordCloud(
stopwords=palabras_paro,
max_words=100,
max_font_size=50,
background_color="black",
    # mask=mascara_redonda
).generate(texto)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
# Si te gusta lo puedes guardad
wc.to_file("nube.png")
###Output
_____no_output_____
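###Markdown
The round mask built above is never actually passed to `WordCloud` in the previous cell (the `mask` argument is commented out); the cell below is an illustrative sketch of how it could be applied.
###Code
# Illustrative only: draw the word cloud inside the round mask defined above
wc_masked = wordcloud.WordCloud(
    stopwords=palabras_paro,
    max_words=100,
    max_font_size=50,
    background_color="black",
    mask=mascara_redonda
).generate(texto)
plt.imshow(wc_masked, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____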
###Markdown
Using `spacy` to extract better features Now let's use spacy and its ability to handle tokens automatically to extract several important features from the report and review how to process text with spacy. For example, let's look at which adjectives the president used in his speech.
###Code
doc = nlp(texto)
palabras = ' '.join(
[
token.norm_ for token in doc
if token.is_alpha and not token.like_num and not token.is_stop and
not token.is_currency and token.pos_ in ['ADJ']
]
)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(palabras)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And which verbs did the president use? Can any conclusion be drawn? Does it change if the verbs are reduced to their infinitive form?
###Code
verbos = ' '.join(
[token.norm_ for token in doc if token.pos_ in ['VERB']]
)
verbos_inf = ' '.join(
[token.lemma_ for token in doc if token.pos_ in ['VERB']]
)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(verbos)
wc2 = wordcloud.WordCloud().generate(verbos_inf)
# Muestra la nube de palabras
plt.figure(figsize=(15, 15))
plt.subplot(2,1,1)
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.subplot(2,1,2)
plt.imshow(wc2, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And if we use only nouns?
###Code
palabras = ' '.join(
[
token.norm_ for token in doc
if token.is_alpha and not token.like_num and not token.is_stop and
not token.is_currency and token.pos_ in ['NOUN']
]
)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(palabras)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And by proper nouns?
###Code
palabras = ' '.join(
[
token.norm_ for token in doc
if token.is_alpha and not token.like_num and not token.is_stop and
not token.is_currency and token.pos_ in ['PROPN'] and
token.norm_ not in ['méxico', 'nacional', 'secretaría', 'programa']
]
)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(palabras)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____
###Markdown
And what if we could see the places he mentioned most?
###Code
palabras = ' '.join(
[
token.norm_ for token in doc
if token.is_alpha and not token.like_num and not token.is_stop and
        not token.is_currency and token.ent_type_ == 'LOC'  # keep tokens tagged as location entities
]
)
# Genera la nube de palabras
wc = wordcloud.WordCloud().generate(palabras)
# Muestra la nube de palabras
plt.imshow(wc, interpolation='bilinear')
plt.axis("off")
plt.show()
###Output
_____no_output_____ |
yoloV3.ipynb | ###Markdown
###Code
!git clone https://github.com/pjreddie/darknet
!ls
cd darknet
!ls
!make
!wget https://pjreddie.com/media/files/yolov3.weights
!pip install opencv-python
!pip list
!./darknet detect cfg/yolov3.cfg yolov3.weights data/dog.jpg
import cv2
import matplotlib.pyplot as plt
import os.path
fig,ax=plt.subplots()
ax.tick_params(labelbottom="off",bottom="off")
ax.tick_params(labelleft="off",left="off")
ax.set_xticklabels([])
ax.axis('off')
file='wa.jpg'
if os.path.exists(file):
img=cv2.imread(file)
show_img=cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
plt.imshow(show_img)
###Output
_____no_output_____ |
speech-recognition/preproc.ipynb | ###Markdown
Preprocessing we create silence files, pad short files and make training and validation sets
###Code
import numpy as np
import librosa
import librosa.display
import os
import csv
%matplotlib inline
import matplotlib.pyplot as plt
import soundfile as sf
!ls /dev/shm/kmrozowski/
data_path = '/dev/shm/kmrozowski/'
train_dir = data_path + './data/train/audio/' #download files from kaggle
classes = ['yes', 'no',
'up', 'down',
'left', 'right',
'on', 'off',
'stop', 'go',
# 'zero', 'one', 'two', 'three', 'four', 'five', 'six', 'seven', 'eight', 'nine',
'silence']
###Output
_____no_output_____
###Markdown
RUN ONLY ONCE in your lifetime! This cell moves the background noises folder out of the audio directory. We will create silence samples from these files after.
###Code
%%bash
mv {data_path}/data/train/audio/_background_noise_ data/train
ls {data_path}/data/train
###Output
LICENSE
README.md
_background_noise_
audio
testing_list.txt
validation_list.txt
###Markdown
Split all the \_background\_noises\_ into 1-sec audio tracks
###Code
def split_arr(arr):
"""
split an array into chunks of length 16000
Returns:
list of arrays
"""
return np.split(arr, np.arange(16000, len(arr), 16000))
def create_silence():
"""
reads wav files in background noises folder,
splits them and saves to silence folder in train_dir
"""
sampleRate = 16000 # hertz
for file in os.listdir('data/train/_background_noise_/'):
if 'wav' in file:
sig, rate = librosa.load('data/train/_background_noise_/' + file, sr = 16000)
sig_arr = split_arr(sig)
if not os.path.exists(train_dir+'silence/'):
os.makedirs(train_dir+'silence/')
for ind, arr in enumerate(sig_arr):
filename = 'frag%d' %ind + '_%s' %file # example: frag0_running_tap.wav
sf.write(train_dir+'silence/'+filename, arr, sampleRate)
create_silence()
###Output
_____no_output_____
###Markdown
It is probably a good idea to make more silence samples yourself, perhaps just by recording walking or driving around without speaking: the silence class is underrepresented (one possible augmentation sketch follows the list-building cell below). Next we build three lists of file names: one for the training set, one for the validation set, and one with all files, plus a dictionary with file counts per class.
###Code
folders = os.listdir(train_dir)
# put folders in same order as in the classes list, used when making sets
all_classes = [x for x in classes]
for ind, cl in enumerate(folders):
if cl not in classes:
all_classes.append(cl)
print(all_classes)
with open(data_path + './data/train/validation_list.txt') as val_list:
validation_list = [row[0] for row in csv.reader(val_list)]
assert len(validation_list) == 6798, 'file not loaded'
print(len(validation_list))
"""
#if you want to add the files in testing_list.txt to the validation list:
with open(data_path + './data/train/testing_list.txt') as test_list:
testing_list = [row[0] for row in csv.reader(test_list)]
assert len(testing_list) == 6835, 'file not loaded'
#combine into validation set
validation_list.extend(testing_list)
"""
#add silence files to validation_list
for i, file in enumerate(os.listdir(train_dir + 'silence/')):
if i%10==0:
validation_list.append('silence/'+file)
training_list = []
all_files_list = []
class_counts = {}
for folder in folders:
files = os.listdir(train_dir + folder)
for i, f in enumerate(files):
all_files_list.append(folder + '/' + f)
path = folder + '/' + f
if path not in validation_list:
training_list.append(folder + '/' + f)
class_counts[folder] = i
#remove filenames from validation_list that don't exist anymore (due to eda)
validation_list = list(set(validation_list).intersection(all_files_list))
print(len(validation_list))
path = '/dev/shm/kmrozowski/'
np.savetxt(path + 'validation_list.csv',validation_list, delimiter =" ", fmt ='% s')
assert len(validation_list)+len(training_list)==len(all_files_list), 'error'
# check random file name
print(training_list[345], 'size training set: ',len(training_list), 'size validation set: ', len(validation_list))
print(class_counts)
# {'tree': 1732, 'sheila': 1733, 'bird': 1730, 'no': 2374, 'four': 2371, 'zero': 2375, 'up': 2374, 'five': 2356, 'cat': 1732, 'yes': 2376, 'eight': 2351, 'off': 2356, 'seven': 2376, 'house': 1749, 'happy': 1741, 'three': 2355, 'left': 2352, 'two': 2372, 'bed': 1712, 'nine': 2363, 'dog': 1745, 'down': 2358, 'wow': 1744, 'right': 2366, 'on': 2366, 'one': 2369, 'go': 2371, 'marvin': 1745, 'stop': 2379, 'six': 2368, 'silence': 401}
###Output
{'silence': 401, 'zero': 2375, 'yes': 2376, 'wow': 1744, 'up': 2374, 'two': 2372, 'tree': 1732, 'three': 2355, 'stop': 2379, 'six': 2368, 'sheila': 1733, 'seven': 2376, 'right': 2366, 'one': 2369, 'on': 2366, 'off': 2356, 'no': 2374, 'nine': 2363, 'marvin': 1745, 'left': 2352, 'house': 1749, 'happy': 1741, 'go': 2371, 'four': 2371, 'five': 2356, 'eight': 2351, 'down': 2358, 'dog': 1745, 'cat': 1732, 'bird': 1730, 'bed': 1712}
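###Markdown
One possible way to address the underrepresented silence class (as mentioned above) is to synthesize extra 1-second clips from random crops of the background-noise recordings at random gains. The cell below is an illustrative sketch that follows the folder layout used earlier; it is not part of the original pipeline, so the call is left commented out.
###Code
# Illustrative sketch: generate additional silence clips from random noise crops
def augment_silence(n_clips=500, out_dir=train_dir + 'silence/'):
    noise_dir = 'data/train/_background_noise_/'
    noise_files = [f for f in os.listdir(noise_dir) if f.endswith('.wav')]
    signals = [librosa.load(noise_dir + f, sr=16000)[0] for f in noise_files]
    for i in range(n_clips):
        sig = signals[np.random.randint(len(signals))]       # pick a random noise recording
        start = np.random.randint(0, len(sig) - 16000)        # random 1-second crop
        clip = sig[start:start + 16000] * np.random.uniform(0.2, 1.0)  # random gain
        sf.write(out_dir + 'aug%d.wav' % i, clip, 16000)
# augment_silence()  # run before building the file lists if extra silence samples are wanted
###Output
_____no_output_____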
###Markdown
Plot a wav file
###Code
x, r = librosa.load(train_dir + 'yes/bfdb9801_nohash_0.wav', sr = 16000)
print('min: ',np.min(x),
'\nmax: ', np.max(x),
'\nmean: ', np.mean(x),
'\nmedian: ', np.median(x),
'\nvariance: ', np.var(x),
'\nlength: ', len(x))
plt.plot(x)
###Output
min: -0.1182251
max: 0.10827637
mean: 7.842428e-06
median: 0.0
variance: 0.0003236237
length: 10923
###Markdown
Turning all wav files into spectrograms
###Code
def make_spec(file, file_dir = train_dir, flip = False, ps = False, st = 4):
"""
create a melspectrogram from the amplitude of the sound
Args:
file (str): filename
file_dir (str): directory path
flip (bool): reverse time axis
ps (bool): pitch shift
st (int): half-note steps for pitch shift
Returns:
np.array with shape (122,85) (time, freq)
"""
sig, rate = librosa.load(file_dir + file, sr = 16000)
if len(sig) < 16000: # pad shorter than 1 sec audio with ramp to zero
sig = np.pad(sig, (0,16000-len(sig)), 'linear_ramp')
if ps:
sig = librosa.effects.pitch_shift(sig, rate, st)
D = librosa.amplitude_to_db(librosa.stft(sig[:16000], n_fft = 512,
hop_length = 128,
center = False), ref = np.max)
S = librosa.feature.melspectrogram(S=D, n_mels = 85).T
if flip:
S = np.flipud(S)
return S.astype(np.float32)
librosa.display.specshow(make_spec('yes/bfdb9801_nohash_0.wav'),
x_axis='mel',
fmax=8000,
y_axis='time',
sr = 16000,
hop_length = 128)
make_spec('yes/bfdb9801_nohash_0.wav').shape
def create_sets(file_list = training_list):
X_array = np.zeros([len(file_list),122,85])
Y_array = np.zeros([len(file_list)])
for ind, file in enumerate(file_list):
if ind%2000 == 0:
print(ind, file)
try:
X_array[ind] = make_spec(file)
except ValueError:
print(ind, file, ValueError)
Y_array[ind] = all_classes.index(file.rsplit('/')[0])
return X_array, Y_array
def print_sets(file_list = training_list):
for ind, file in enumerate(file_list):
if ind%2000 == 0:
print(ind, file, file.rsplit('/')[0], all_classes.index(file.rsplit('/')[0]))
print_sets()
X_train, Y_train_all = create_sets() # takes a while
# 0 yes/8a28231e_nohash_3.wav
# 2000 yes/21307344_nohash_0.wav
# 4000 silence/frag55_dude_miaowing.wav
# 6000 nine/283d7a53_nohash_1.wav
# 8000 four/31d31fa0_nohash_0.wav
# 10000 wow/d84829e0_nohash_0.wav
# 12000 five/173e6bbf_nohash_0.wav
# 14000 stop/bbd0bbd0_nohash_4.wav
# 16000 off/29fb33da_nohash_0.wav
# 18000 zero/39543cfd_nohash_1.wav
# 20000 tree/6a203e0e_nohash_3.wav
# 22000 up/01b4757a_nohash_0.wav
# 24000 six/21cbe292_nohash_0.wav
# 26000 one/3b852f6f_nohash_0.wav
# 28000 down/b959cd0c_nohash_4.wav
# 30000 three/23abe1c9_nohash_1.wav
# 32000 three/f953e1af_nohash_1.wav
# 34000 right/75915c90_nohash_0.wav
# 36000 eight/85b877b5_nohash_0.wav
# 38000 no/88053e92_nohash_0.wav
# 40000 two/a1c63f25_nohash_0.wav
# 42000 two/c1d39ce8_nohash_8.wav
# 44000 seven/72ca6a6d_nohash_0.wav
# 46000 on/0137b3f4_nohash_3.wav
# 48000 left/1b4c9b89_nohash_4.wav
# 50000 dog/dfb6450b_nohash_0.wav
# 52000 go/0137b3f4_nohash_2.wav
# 54000 sheila/1ecfb537_nohash_0.wav
# 56000 bed/129c7d8d_nohash_0.wav
# 58000 bird/b9f46737_nohash_0.wav
# all unknown are index 11
Y_train = np.where(Y_train_all < 11, Y_train_all, 11)
print(max(Y_train_all), max(Y_train))
print(len(Y_train) == len(Y_train_all), 12 in Y_train)
X_train.shape
Y_train_all.shape
Y_train.shape
librosa.display.specshow(X_train[6500],
x_axis='mel',
fmax=8000,
y_axis='time',
sr = 16000,
hop_length = 128)
###Output
_____no_output_____
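###Markdown
`make_spec` exposes `flip` and `ps` (pitch-shift) options that the pipeline above never uses. The cell below is an illustrative sketch of how an augmented copy of (part of) the training set could be built with them; it is not part of the original preprocessing.
###Code
# Illustrative sketch: build pitch-shifted variants using the unused make_spec options
def create_augmented_sets(file_list, ps_steps=2):
    X_aug = np.zeros([len(file_list), 122, 85])
    Y_aug = np.zeros([len(file_list)])
    for ind, file in enumerate(file_list):
        X_aug[ind] = make_spec(file, ps=True, st=ps_steps)  # pitch-shifted spectrogram
        Y_aug[ind] = all_classes.index(file.rsplit('/')[0])
    return X_aug, Y_aug
# X_train_aug, Y_train_aug = create_augmented_sets(training_list[:1000])  # optional
###Output
_____no_output_____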
###Markdown
Histogram of trainset values
###Code
print('min: ',np.min(X_train),
'\nmax: ', np.max(X_train),
'\nmean: ', np.mean(X_train),
'\nmedian: ', np.median(X_train),
'\nvariance: ', np.var(X_train))
plt.hist(X_train.flatten(), bins = 50)
###Output
_____no_output_____
###Markdown
Save the training sets: add a channel dimension for Keras and shift the values using the training-set mean.
###Code
np.save('data/X_train.npy', np.expand_dims(X_train, -1) + np.mean(X_train))
np.save('data/Y_train.npy', Y_train.astype(np.int))
np.save('data/Y_train_all.npy', Y_train_all.astype(np.int))
print(len(validation_list))
X_val, Y_val_all = create_sets(file_list = validation_list)
Y_val = np.where(Y_val_all < 11, Y_val_all, 11)
print(Y_val.shape)
###Output
(6839,)
###Markdown
Histogram of validation data values
###Code
plt.hist(X_val.flatten(), bins = 50)
###Output
_____no_output_____
###Markdown
To speed up eventual training of the network I decided to do most of the preprocessing separately and save individual train and validation sets as numpy arrays in .npy files.
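As a quick illustration (not part of the original pipeline), the saved arrays could later be reloaded like this; the file names match the save calls in the surrounding cells, while the validation files additionally live under `data_path`:
```python
import numpy as np

# illustrative reload of the preprocessed arrays
X_train = np.load('data/X_train.npy')  # shape (n_samples, 122, 85, 1)
Y_train = np.load('data/Y_train.npy')  # integer labels, all "unknown" words mapped to 11
```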
###Code
np.save(data_path + 'data/X_val.npy', np.expand_dims(X_val, -1) - np.mean(X_train))
np.save(data_path + 'data/Y_val.npy', Y_val.astype(np.int64))
np.save(data_path + 'data/Y_val_all.npy', Y_val_all.astype(np.int64))
###Output
_____no_output_____ |
notebooks/03-graph-analysis/01-knowledge-bases-overview.ipynb | ###Markdown
Preamble
###Code
from SPARQLWrapper import SPARQLWrapper, JSON
from datetime import datetime
from json import JSONDecodeError
import json
import bz2
import gzip
import io
PATH_CAUSENET = "../../data/causality-graphs/causenet-full.jsonl.bz2"
PATH_FREEBASE = "../../data/external/knowledge-bases/freebase-rdf-latest.gz"
PATH_CONCEPTNET = "../../data/external/knowledge-bases/conceptnet-assertions-5.6.0.tsv"
PATH_WIKIDATA = "../../data/external/knowledge-bases/wikidata-20181001-all.json.bz2"
###Output
_____no_output_____
###Markdown
Table 1: Overview of causal relations in knowledge bases CauseNet
###Code
def load_jsonl(path):
print("Loading... " + path)
lines = []
document = bz2.open(path, mode='rt')
for line in document:
lines.append(json.loads(line))
return lines
def belongs_to_high_precision_causenet(sample):
if sample['support'] > 1:
return True
for source in sample['sources']:
if source['type'] == 'wikipedia_infobox':
return True
if source['type'] == 'wikipedia_list':
return True
return False
def print_statistics(causality_graph):
nodes = []
for sample in causality_graph:
nodes.append(sample['causal_relation']['cause']['concept'])
nodes.append(sample['causal_relation']['effect']['concept'])
print(f'Relations: {len(causality_graph):,}')
print(f'Concepts: {len(set(nodes)):,}')
causenet = load_jsonl(PATH_CAUSENET)
for relation in causenet:
patterns = []
for source in relation['sources']:
if 'path_pattern' in source['payload']:
patterns.append(source['payload']['path_pattern'])
relation['support'] = len(set(patterns))
causenet_precision = []
for sample in causenet:
if belongs_to_high_precision_causenet(sample):
causenet_precision.append(sample)
print("CauseNet:")
print_statistics(causenet)
print()
print("CauseNet-Precision:")
print_statistics(causenet_precision)
###Output
CauseNet:
Relations: 11,609,890
Concepts: 12,186,310
CauseNet-Precision:
Relations: 197,806
Concepts: 80,223
###Markdown
Freebase
###Code
freebase_causal_properties = [
'medicine.disease.symptoms>',
'medicine.symptom.symptom_of>',
'medicine.disease.risk_factors>',
'medicine.risk_factor.diseases>',
'medicine.disease.causes>',
'medicine.disease_cause.diseases>',
'medicine.drug.physiologic_effect>',
'medicine.drug_physiologic_effect.drugs_with_this_physiologic_effect>',
'base.pethealth.symptom.symptom_of>',
'base.pethealth.pet_disease_or_medical_condition.symptoms>',
'medicine.symptom.side_effect_of>',
'medicine.medical_treatment.side_effects>',
'base.wordnet.synset.causes>',
'base.wordnet.synset.caused_by>',
'base.pethealth.pet_disease_risk_factor.' +
'pet_diseases_with_this_risk_factor>',
'base.pethealth.pet_disease_or_medical_condition.risk_factors>',
'base.pethealth.cause.pet_diseases_or_conditions_caused>',
'base.horsefacts.coat_locus_effect.coat_colors>',
'base.horsefacts.coat_color.causative_locus>',
'base.pethealth.pet_disease_or_medical_condition.causes>',
'base.disaster2.rail_accident.cause>',
'base.disaster2.train_accident_cause.train_accidents_caused_this_way>',
'biology.plant_disease_cause.plant_disease_triangle>',
'biology.plant_disease_triangle.plant_disease_cause>',
'base.disaster2.injury_causing_event.injury>',
'base.disaster2.injury.caused_by_event>',
'base.animalpathology.animal_disease_cause.animal_disease_triangle>',
'base.animalpathology.animal_disease_triangle.animal_disease_cause>',
'base.fires.explosion.cause>',
'base.fires.explosion_cause.explosion>',
'base.horsefacts.coat_locus.effect>',
'base.horsefacts.coat_locus_effect.locus>',
'base.fires.fires.firecause>',
'user.skud.fictional_diseases.fictional_disease.symptoms>',
'base.fires.fire_cause.fires_caused_this_way>',
'user.skud.fictional_diseases.fictional_symptom.symptom_of>',
'user.lindajohnson.default_domain.side_effects.side_effect>',
'base.qualia.disability.disability_causing_medical_condition>',
'user.robert.earthquakes.earthquake_effect.earthquake>',
'people.deceased_person.cause_of_death>',
'people.cause_of_death.people>',
'people.cause_of_death.includes_causes_of_death>',
'base.disaster2.death_causing_event.person_killed>',
'base.fictionaluniverse.deceased_fictional_character.cause_of_death>',
'base.disaster2.type_of_injury_causing_event.injuries_caused_this_way>',
'base.disaster2.shipwreck_event.cause>',
'base.disaster2.shipwreck_cause.ships_wrecked_this_way>',
'media_common.cause_of_loss.works_lost_this_way>',
'base.damsbase.dam_failure.cause_of_failure>',
'user.teeler.default_domain.death_euphemism.related_causes>'
]
prefix = "<http://rdf.freebase.com/ns/"
freebase_causal_properties = [prefix + p for p in freebase_causal_properties]
def load_freebase(causal_properties):
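    # stream the gzipped RDF dump line by line (tab-separated subject, predicate, object, '.')
    # so the multi-gigabyte file never has to be held in memory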
gz = gzip.open(PATH_FREEBASE, 'rb')
causal_relations = {}
for line in io.BufferedReader(gz):
line = line.decode("utf-8").strip()
s, p, o, _ = line.split("\t")
if p in causal_properties:
causal_relations.setdefault(p, []).append(line)
return causal_relations
def get_freebase_statistics(freebase_causality, causal_properties):
causality_graph = []
for causal_property in causal_properties:
for relation in freebase_causality[causal_property]:
relation = relation.split("\t")
relation = (relation[0], causal_property, relation[2])
causality_graph.append(relation)
nodes = []
for relation in causality_graph:
        nodes.append(relation[0])
        nodes.append(relation[2])  # subject and object are the concepts; relation[1] is the predicate
print(f'Relations: {len(set([str(x) for x in causality_graph])):,}')
print(f'Concepts: {len(set(nodes)):,}')
freebase_causality = load_freebase(freebase_causal_properties)
print("Freebase:")
get_freebase_statistics(freebase_causality, freebase_causal_properties)
###Output
Freebase:
Relations: 128,766
Concepts: 52,487
###Markdown
ConceptNet
###Code
def load_conceptnet():
conceptnet = open(PATH_CONCEPTNET).readlines()
conceptnet_triples = []
for row in conceptnet:
elements = row.split("\t")
triple = (elements[2], elements[1], elements[3])
conceptnet_triples.append(triple)
return conceptnet_triples
def count_nodes(relation_list):
nodes = []
for relation in relation_list:
nodes.append(relation[0])
nodes.append(relation[2])
return len(set(nodes))
conceptnet = load_conceptnet()
en_conceptnet = [t for t in conceptnet if '/en/' in t[0] and '/en/' in t[2]]
en_conceptnet = set([str(t) for t in en_conceptnet])
causal_properties = ['/r/CausesDesire', '/r/Causes']
causal_triples = set([t for t in conceptnet if t[1] in causal_properties])
en_causal_triples = set([t for t in causal_triples if str(t) in en_conceptnet])
print("ConceptNet Multilingual:")
print("Relations: " + f'{len(causal_triples):,}')
print("Concepts: " + f'{count_nodes(causal_triples):,}')
print()
print("ConceptNet English:")
print("Relations: " + f'{len(en_causal_triples):,}')
print("Concepts: " + f'{count_nodes(en_causal_triples):,}')
###Output
ConceptNet Multilingual:
Relations: 114,308
Concepts: 57,561
ConceptNet English:
Relations: 21,485
Concepts: 16,432
###Markdown
Wikidata
###Code
wikidata_causal_predicates = [
'P509', # cause of death
'P780', # symptoms
'P828', # has cause
'P1542', # has effect
'P770', # cause of destruction
'P1478', # has immediate cause
'P1479', # has contributing factor
'P1534', # end cause
]
def load_wikidata_causality(wikidata_causal_predicates):
causal_wikidata = []
for line in bz2.open(PATH_WIKIDATA, mode='rt'):
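        # the dump is one large JSON array with one entity per line, each ending in a trailing
        # comma; strip that comma before parsing ('[' and ']' lines fail to parse and are skipped)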
try:
item = json.loads(line.strip()[:-1])
except JSONDecodeError:
continue
for wikidata_property in item['claims'].keys():
if wikidata_property in wikidata_causal_predicates:
for snack in item['claims'][wikidata_property]:
if 'datavalue' not in snack['mainsnak']:
continue
value = snack['mainsnak']['datavalue']['value']
if 'id' not in value:
continue
wikidata_object = value['id']
relation = (item['id'], wikidata_property, wikidata_object)
causal_wikidata.append(relation)
return causal_wikidata
wikidata_causality = load_wikidata_causality(wikidata_causal_predicates)
wikidata_cause_of_death = [relation
for relation in wikidata_causality
if relation[1] == 'P509']
nodes = []
for relation in wikidata_causality:
nodes.append(relation[0])
nodes.append(relation[2])
print("Wikidata:")
print(f'Relations: {len(set(wikidata_causality)):,}')
print(f'Concepts: {len(set(nodes)):,}')
cause_of_death = len(set(wikidata_cause_of_death))
cause_of_death /= len(set(wikidata_causality))
cause_of_death = round(cause_of_death,3)
print("Percentage of cause of death relations:")
print(f"{cause_of_death}")
###Output
Percentage of cause of death relations:
0.847
###Markdown
DBpedia Live
###Code
dbpedia_live = SPARQLWrapper("http://live.dbpedia.org/sparql")
def send_query(endpoint, query):
endpoint.setQuery(query)
endpoint.setReturnFormat(JSON)
results = endpoint.query().convert()
return results
def full_graph_query(endpoint, predicates):
all_relations = []
all_nodes = []
for predicate in predicates:
query = """
SELECT ?s ?o WHERE { ?s <""" + predicate + """> ?o}
"""
results = send_query(endpoint, query)
for result in results['results']['bindings']:
relation_subject = result['s']['value']
relation_object = result['o']['value']
all_relations.append((relation_subject, predicate, relation_object))
all_nodes.append(relation_subject)
all_nodes.append(relation_object)
return all_relations, all_nodes
# defined by systematically searching DBpedia properties
causal_predicates = [
"http://dbpedia.org/property/cause",
"http://dbpedia.org/property/causes",
"http://dbpedia.org/ontology/deathCause",
"http://dbpedia.org/ontology/medicalCause",
"http://dbpedia.org/property/causeOfDeath",
"http://dbpedia.org/property/causalAgents",
"http://dbpedia.org/property/causeDeath",
"http://dbpedia.org/property/causeofdeath",
"http://dbpedia.org/property/effects",
"http://dbpedia.org/ontology/symptom",
]
relations, nodes = full_graph_query(dbpedia_live, causal_predicates)
cause_of_death_predicates = [causal_predicates[i]
for i in [2,4,6,7]]
cause_of_death_relations = [relation
for relation in relations
if relation[1] in cause_of_death_predicates]
print(f"DBpedia Live ({datetime.now()}):")
print("Relations: " + f'{len(set(relations)):,}')
print("Concepts: " + f'{len(set(nodes)):,}')
len(set(cause_of_death_relations))
cause_of_death = len(set(cause_of_death_relations))
cause_of_death /= len(set(relations))
cause_of_death = round(cause_of_death,3)
print("Percentage of cause of death relations:")
print(f"{cause_of_death}")
###Output
Percentage of cause of death relations:
0.524
|
notebooks/python/10. Surrogate Prediction.ipynb | ###Markdown
OPTaaS: Surrogate PredictionThe surrogate model is what the optimizer *thinks* the scoring function looks like. It is part of the mechanism used to choose optimal configurations.You can generate predictions from the surrogate model (effectively asking OPTaaS to guess what the scoring function may be at a certain point) at any set of arbitrary configuration points. Connect to OPTaaS using your API Key
###Code
from mindfoundry.optaas.client.client import OPTaaSClient
client = OPTaaSClient('https://optaas.mindfoundry.ai', '<Your OPTaaS API key>')
###Output
_____no_output_____
###Markdown
Create a simple task
###Code
from mindfoundry.optaas.client.parameter import FloatParameter
from mindfoundry.optaas.client.client import Goal
task = client.create_task(
title='Basic 2D Example',
parameters=[
FloatParameter('x', minimum=-3, maximum=1),
FloatParameter('y', minimum=-6, maximum=21)
],
goal=Goal.min,
)
###Output
_____no_output_____
###Markdown
Define your scoring function
###Code
def scoring_function(x, y):
''' A simple well with min at 0, 0'''
score = x**2 + y**2
return score
###Output
_____no_output_____
###Markdown
Run your task
###Code
best_result = task.run(scoring_function, max_iterations=20)
print("Best Result:", best_result)
###Output
Running task "Basic 2D Example" for 20 iterations
(no score threshold set)
Iteration: 0 Score: 57.25
Configuration: {'x': -1.0, 'y': 7.5}
Iteration: 1 Score: 207.0625
Configuration: {'x': -2.0, 'y': 14.25}
Iteration: 2 Score: 0.5625
Configuration: {'x': 0.0, 'y': 0.75}
Iteration: 3 Score: 19.265625
Configuration: {'x': -1.5, 'y': 4.125}
Iteration: 4 Score: 310.890625
Configuration: {'x': 0.5, 'y': 17.625}
Iteration: 5 Score: 124.515625
Configuration: {'x': -2.5, 'y': 10.875}
Iteration: 6 Score: 7.140625
Configuration: {'x': -0.5, 'y': -2.625}
Iteration: 7 Score: 3.94140625
Configuration: {'x': -1.75, 'y': -0.9375}
Iteration: 8 Score: 157.87890625
Configuration: {'x': 0.25, 'y': 12.5625}
Iteration: 9 Score: 380.53515625
Configuration: {'x': -2.75, 'y': 19.3125}
Iteration: 10 Score: 1.0128486056914057
Configuration: {'x': 0.99999998, 'y': 0.11335186673101204}
Iteration: 11 Score: 0.00927530250495255
Configuration: {'x': 0.03888422866549611, 'y': 0.0881097001813192}
Iteration: 12 Score: 0.0012117009114401805
Configuration: {'x': 0.0050892493390089985, 'y': 0.03443545342529378}
Iteration: 13 Score: 0.0006806662021513174
Configuration: {'x': 0.006729942231381237, 'y': 0.025206627694191637}
Iteration: 14 Score: 0.0006364356966787191
Configuration: {'x': 0.0010303479825333319, 'y': 0.02520662769419203}
Iteration: 15 Score: 0.00022768157398298682
Configuration: {'x': -0.0005074972550283101, 'y': 0.015080584223402142}
Iteration: 16 Score: 0.00024225926376497916
Configuration: {'x': 0.0005289767480935436, 'y': 0.015555688585368233}
Iteration: 17 Score: 0.0002342686143728285
Configuration: {'x': 6.452081614416779e-05, 'y': 0.015305699965604729}
Iteration: 18 Score: 0.00021932618498741224
Configuration: {'x': -0.0006874418906318507, 'y': 0.014793701654231666}
Iteration: 19 Score: 0.00021932619083313206
Configuration: {'x': -0.0006874425416874602, 'y': 0.014793701821552655}
Task Completed
Best Result: { 'configuration': { 'type': 'exploitation',
'values': {'x': -0.0006874418906318507, 'y': 0.014793701654231666}},
'score': 0.00021932618498741224,
'user_defined_data': None}
###Markdown
Evaluating the surrogate Ask the surrogate for a prediction at the known best point (x=0, y=0)The surrogate model should predict a fairly low score with high confidence, since it has been exploring the vicinity of this point.
###Code
interesting_configs = [{'x': 0.0, 'y': 0.0}]
predictions = task.get_surrogate_predictions(interesting_configs)
[(p.mean, p.variance) for p in predictions]
###Output
_____no_output_____
###Markdown
Ask the surrogate about a couple of points far away from the explored area (x=1, y=20) and (x=-1, y=-6)The surrogate model should be significantly less confident, as there were no evaluations near these points.
###Code
far_away_points = [{'x': 1.0, 'y': 20.0}, {'x': -1.0, 'y': -6.0}]
predictions = task.get_surrogate_predictions(far_away_points)
[(p.mean, p.variance) for p in predictions]
###Output
_____no_output_____ |
labs/normalization_initialization.ipynb | ###Markdown
Setup
###Code
!pip install "tensorflow>=2.0" tqdm matplotlib seaborn -Uq  # quote the spec so the shell does not treat '>' as redirection
import numpy as np
from tqdm.notebook import tqdm, trange
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
import tensorflow as tf
print("TensorFlow version:", tf.__version__)
from tensorflow.keras import layers
from tensorflow.keras import initializers
mapping = {
"GlorotNormal": initializers.GlorotNormal,
"GlorotUniform": initializers.GlorotUniform,
"Ones": initializers.Ones,
"Zeros": initializers.Zeros,
"RandomNormal": initializers.RandomNormal,
"RandomUniform": initializers.RandomUniform,
"he_normal": initializers.he_normal,
"he_uniform": initializers.he_uniform
}
def get_init_function(init_name):
return mapping[init_name]
def get_init_name(init_function):
name = str(init_function.__name__)
return name
def load_data(batch_size=64):
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
if norm_scheme == "0.0~1.0":
x_train, x_test = x_train/255.0, x_test/255.0
elif norm_scheme == "-1.0~1.0":
x_train, x_test = (x_train/127.5)-1, (x_test/127.5)-1
elif norm_scheme == "none":
pass
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_train, y_train = x_train.astype("float32"), y_train.astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices((x_train, y_train))
train_ds = train_ds.batch(batch_size)
train_steps = x_train.shape[0] // batch_size
return train_ds, train_steps
def build_train_model(train_ds, train_steps, epochs=1):
# Create an instance of the model
l_input = layers.Input(shape=(28, 28, 1))
l_flat = layers.Flatten()(l_input)
l_dense = layers.Dense(10, kernel_initializer=get_init_function(k_init), bias_initializer=get_init_function(b_init))(l_flat)
l_softmax = layers.Activation("softmax")(l_dense)
model = tf.keras.models.Model(inputs=l_input, outputs=l_softmax)
model_int = tf.keras.models.Model(inputs=l_input, outputs=l_dense)
opt = tf.keras.optimizers.SGD()
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
@tf.function
def forward(data):
return model(data)
gradient_list, loss_list = [], []
def train_step(images, labels):
with tf.GradientTape() as tape:
predictions = forward(images)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
loss_list.append(loss)
gradient_list.append(gradients)
opt.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
for epoch in range(epochs):
train_loss.reset_states()
train_accuracy.reset_states()
for images, labels in tqdm(train_ds, total=train_steps):
train_step(images, labels)
template = 'Epoch {}, Loss: {}, Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100))
return gradient_list, loss_list, model, model_int
def plot_gradients(gradient_list, loss_list):
mean_list, p25_list, p75_list = [], [], []
for grad in gradient_list:
dense_wgrad = grad[0].numpy().astype("float64")
mean_list.append(np.mean(dense_wgrad))
p25_list.append(np.percentile(dense_wgrad, 25))
p75_list.append(np.percentile(dense_wgrad, 75))
steps = np.arange(len(gradient_list))
mean_list, p25_list, p75_list = np.asarray(mean_list), np.asarray(p25_list), np.asarray(p75_list)
fig = plt.figure(figsize=(12,6))
plt.plot(steps, mean_list, label="mean")
plt.scatter(steps, p25_list, s=2, c="r", alpha=0.7, label="25%")
plt.scatter(steps, p75_list, s=2, c="r", alpha=0.7, label="75%")
plt.title(" / ".join(["F_MNIST", "norm:"+norm_scheme, "kernel:"+k_init, "bias:"+b_init]))
plt.xlabel("Steps")
plt.legend()
plt.show()
grad_flat = []
for grad in gradient_list:
grad_flat += grad[0].numpy().tolist()[0]
grad_flat = np.asarray(grad_flat)
x_low, x_high = np.percentile(grad_flat,0.1), np.percentile(grad_flat,99.9)
fig = plt.figure(figsize=(12,4))
sns.distplot(grad_flat, bins=100, kde=False)
plt.xlim((x_low, x_high))
plt.title("Distribution of Gradients")
plt.show()
###Output
_____no_output_____
###Markdown
Investigating NN Training In this notebook, we investigate the impact of **data normalization** and **weight initialization** on the training of a simple, one-layer neural network to classify the Fashion-MNIST images. We'll look at the resulting **gradients** and **activations** of the hidden layer given the different conditions.**Data Normalization*** `-1.0~1.0` performs the best.* `0.0~1.0` trains, gradients are extremely small.* `none` performs very poorly. Gradients and activations explode.**Weight Initialization*** `GlorotNormal` gets around 78%, good distribution of gradients and activations. `GlorotUniform` performs slightly better.* `Ones` creates extremely large activations. Deep models will not train.* `Zeros` gets around 79%, good distribution of gradients. Activations distribution slightly poorer compared to `GlorotNormal`/`GlorotUniform`. Significant for deeper models.* `RandomNormal`/`RandomUniform` also get around 78%, with good distribution of gradients. Activations however are more widely distributed compared to `GlorotNormal`/`GlorotUniform`. Significant for deeper models.
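As a rough illustration (not part of the original lab) of the scales these initializers use, the sketch below computes the Glorot limits for the single 784→10 dense layer defined in this notebook; the formulas are the standard Glorot ones used by Keras, and the layer sizes are taken from the model above.
```python
import numpy as np

fan_in, fan_out = 784, 10  # Flatten(28x28) feeding Dense(10), as in this lab's model

# GlorotUniform samples from U(-limit, limit) with limit = sqrt(6 / (fan_in + fan_out))
print("GlorotUniform limit:", np.sqrt(6 / (fan_in + fan_out)))  # ~0.087

# GlorotNormal uses a truncated normal with stddev = sqrt(2 / (fan_in + fan_out))
print("GlorotNormal stddev:", np.sqrt(2 / (fan_in + fan_out)))  # ~0.050

# Keras' RandomNormal/RandomUniform defaults use a fixed 0.05 scale regardless of layer size
```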
###Code
#@title Options { run: "auto" }
#@markdown Data Normalization Scheme
norm_scheme = "-1.0~1.0" #@param ["0.0~1.0", "-1.0~1.0", "none"]
#@markdown Model Initialization Scheme ([see here](https://www.tensorflow.org/api_docs/python/tf/keras/initializers))
k_init = "GlorotUniform" #@param ["GlorotNormal", "GlorotUniform", "Ones", "Zeros", "RandomNormal", "RandomUniform"]
b_init = "Zeros" #@param ["GlorotNormal", "GlorotUniform", "Ones", "Zeros", "RandomNormal", "RandomUniform"]
train_ds, train_steps = load_data(batch_size=32)
gradient_list, loss_list, model, model_int = build_train_model(train_ds, train_steps, epochs=1)
plot_gradients(gradient_list, loss_list)
activations = []
for images, labels in train_ds:
output = model_int(images).numpy().flatten().tolist()
activations += output
activations = np.asarray(activations)
a_low, a_high = np.percentile(activations,1), np.percentile(activations,99)
fig = plt.figure(figsize=(12,4))
sns.distplot(activations, bins=100, kde=False)
plt.title("Distribution of Activations")
plt.axvline(-1.0, c="k", linestyle="dotted")
plt.axvline(1.0, c="k", linestyle="dotted")
plt.xlim((a_low, a_high))
plt.show()
###Output
_____no_output_____ |
Colab RDP.ipynb | ###Markdown
**Colab RDP** : Remote Desktop to Colab Instance> **Warning : Not for Cryptocurrency Mining** >**Why are hardware resources such as T4 GPUs not available to me?** The best available hardware is prioritized for users who use Colaboratory interactively rather than for long-running computations. Users who use Colaboratory for long-running computations may be temporarily restricted in the type of hardware made available to them, and/or the duration that the hardware can be used for. We encourage users with high computational needs to use Colaboratory’s UI with a local runtime. Please note that using Colaboratory for cryptocurrency mining is disallowed entirely, and may result in being banned from using Colab altogether.Google Colab can give you an Instance with 12GB of RAM and GPU for 12 hours (Max.) for Free users. Anyone can use it to perform Heavy Tasks.To use other similar Notebooks use my Repository **[Colab RDP](https://github.com/smnahidemon/Colab-RDP)**
###Code
#@title **Create User**
#@markdown Enter Username and Password
import os
username = "user" #@param {type:"string"}
password = "root" #@param {type:"string"}
print("Creating User and Setting it up")
# Creation of user
os.system(f"useradd -m {username}")
# Add user to sudo group
os.system(f"adduser {username} sudo")
# Set password of user to 'root'
os.system(f"echo '{username}:{password}' | sudo chpasswd")
# Change default shell from sh to bash
os.system("sed -i 's/\/bin\/sh/\/bin\/bash/g' /etc/passwd")
print("User Created and Configured")
#@title **RDP**
#@markdown It takes 4-5 minutes for installation
import os
import subprocess
#@markdown Visit http://remotedesktop.google.com/headless and Copy the command after authentication
CRP = "" #@param {type:"string"}
#@markdown Enter a pin more or equal to 6 digits
Pin = 123456 #@param {type: "integer"}
class CRD:
def __init__(self):
os.system("apt update")
self.installCRD()
self.installDesktopEnvironment()
self.installGoogleChorme()
self.finish()
@staticmethod
def installCRD():
print("Installing Chrome Remote Desktop")
subprocess.run(['wget', 'https://dl.google.com/linux/direct/chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['dpkg', '--install', 'chrome-remote-desktop_current_amd64.deb'], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
@staticmethod
def installDesktopEnvironment():
print("Installing Desktop Environment")
os.system("export DEBIAN_FRONTEND=noninteractive")
os.system("apt install --assume-yes xfce4 desktop-base xfce4-terminal")
os.system("bash -c 'echo \"exec /etc/X11/Xsession /usr/bin/xfce4-session\" > /etc/chrome-remote-desktop-session'")
os.system("apt remove --assume-yes gnome-terminal")
os.system("apt install --assume-yes xscreensaver")
os.system("systemctl disable lightdm.service")
@staticmethod
def installGoogleChorme():
print("Installing Google Chrome")
subprocess.run(["wget", "https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(["dpkg", "--install", "google-chrome-stable_current_amd64.deb"], stdout=subprocess.PIPE)
subprocess.run(['apt', 'install', '--assume-yes', '--fix-broken'], stdout=subprocess.PIPE)
@staticmethod
def finish():
print("Finalizing")
os.system(f"adduser {username} chrome-remote-desktop")
command = f"{CRP} --pin={Pin}"
os.system(f"su - {username} -c '{command}'")
os.system("service chrome-remote-desktop start")
print("Finished Succesfully")
try:
if username:
if CRP == "":
print("Please enter authcode from the given link")
elif len(str(Pin)) < 6:
print("Enter a pin more or equal to 6 digits")
else:
CRD()
except NameError as e:
print("username variable not found")
print("Create a User First")
#@title **Google Drive Mount**
#@markdown Google Drive used as Persistance HDD for files.<br>
#@markdown Mounted at `user` Home directory inside drive folder
#@markdown (If `username` variable not defined then use root as default).
def MountGDrive():
from google.colab import drive
! runuser -l $user -c "yes | python3 -m pip install --user google-colab" > /dev/null 2>&1
mount = """from os import environ as env
from google.colab import drive
env['CLOUDSDK_CONFIG'] = '/content/.config'
drive.mount('{}')""".format(mountpoint)
with open('/content/mount.py', 'w') as script:
script.write(mount)
! runuser -l $user -c "python3 /content/mount.py"
try:
if username:
mountpoint = "/home/"+username+"/drive"
user = username
except NameError:
print("username variable not found, mounting at `/content/drive' using `root'")
mountpoint = '/content/drive'
user = 'root'
MountGDrive()
#@title **SSH**
! pip install colab_ssh --upgrade &> /dev/null
Ngrok = False #@param {type:'boolean'}
Agro = False #@param {type:'boolean'}
#@markdown Copy authtoken from https://dashboard.ngrok.com/auth (only for ngrok)
ngrokToken = "" #@param {type:'string'}
def runNGROK():
from colab_ssh import launch_ssh
from IPython.display import clear_output
launch_ssh(ngrokToken, password)
clear_output()
print("ssh", username, end='@')
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'][6:].replace(':', ' -p '))"
def runAgro():
from colab_ssh import launch_ssh_cloudflared
launch_ssh_cloudflared(password=password)
try:
if username:
pass
elif password:
pass
except NameError:
print("No user found using username and password as 'root'")
username='root'
password='root'
if Agro and Ngrok:
print("You can't do that")
print("Select only one of them")
elif Agro:
runAgro()
elif Ngrok:
if ngrokToken == "":
print("No ngrokToken Found, Please enter it")
else:
runNGROK()
else:
print("Select one of them")
#@title Package Installer { vertical-output: true }
run = False #@param {type:"boolean"}
#@markdown *Package management actions (gasp)*
action = "Install" #@param ["Install", "Check Installed", "Remove"] {allow-input: true}
package = "wget" #@param {type:"string"}
system = "apt" #@param ["apt", ""]
def install(package=package, system=system):
if system == "apt":
!apt --fix-broken install > /dev/null 2>&1
!killall apt > /dev/null 2>&1
!rm /var/lib/dpkg/lock-frontend
!dpkg --configure -a > /dev/null 2>&1
!apt-get install -o Dpkg::Options::="--force-confold" --no-install-recommends -y $package
!dpkg --configure -a > /dev/null 2>&1
!apt update > /dev/null 2>&1
!apt install $package > /dev/null 2>&1
def check_installed(package=package, system=system):
if system == "apt":
!apt list --installed | grep $package
def remove(package=package, system=system):
if system == "apt":
!apt remove $package
if run:
if action == "Install":
install()
if action == "Check Installed":
check_installed()
if action == "Remove":
remove()
#@title **Colab Shutdown**
#@markdown To Kill NGROK Tunnel
NGROK = False #@param {type:'boolean'}
#@markdown To Unmount GDrive
GDrive = False #@param {type:'boolean'}
#@markdown To Sleep Colab
Sleep = True #@param {type:'boolean'}
if NGROK:
! killall ngrok
if GDrive:
with open('/content/unmount.py', 'w') as unmount:
unmount.write("""from google.colab import drive
drive.flush_and_unmount()""")
try:
if user:
! runuser $user -c 'python3 /content/unmount.py'
except NameError:
print("Google Drive not Mounted")
if Sleep:
from time import sleep
sleep(43200)
###Output
_____no_output_____ |
Scratch.ipynb | ###Markdown
Refactor : speed up data download> 1. The speed of the existing data download package is as shown below.> 2. Find the parts that affect the speed and refactor them.>> * Take the part of 'data.py' that eats up the time (fetching and parsing the data) and modify it.
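A rough, hypothetical benchmark cell (assuming `Fremont.csv` is present, as in the cells below) that shows why the explicit format string is so much faster than letting pandas infer it:
```python
import pandas as pd

raw = pd.read_csv('Fremont.csv', index_col='Date')

# flexible parsing: pandas has to infer the timestamp format
%timeit -n 1 -r 1 pd.to_datetime(raw.index)

# explicit format: a single strptime pattern applied to every row
%timeit -n 1 -r 1 pd.to_datetime(raw.index, format='%m/%d/%Y %H:%M:%S %p')
```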
###Code
from jupyterworkflow.data import get_fremont_data
import pandas as pd
def test_fremont_data():
data = get_fremont_data()
    #running data.columns shows the index, column names, and dtype, so run it once and copy the column names
    # all(data.columns == ['West', 'East', 'Total']) : True if the column names match
assert all(data.columns == ['West', 'East', 'Total'])
assert isinstance(data.index, pd.DatetimeIndex)
test_fremont_data() # run 'test_fremont_data' function
# modify the section brought over from 'data.py'
# data = pd.read_csv('Fremont.csv', index_col = 'Date', parse_dates = True)
# data.columns = ['West', 'East']
# data['Total'] = data['West'] + data['East']
data = pd.read_csv('Fremont.csv', index_col = 'Date')
# data.head() : check that the data loaded
# data.index.dtype : check the index data type
# pd.to_datetime(data.index) # after checking the date/time display format,
# change it following the format codes at 'http://strftime.org/'
try:
    data.index = pd.to_datetime(data.index, format = '%m/%d/%Y %H:%M:%S %p') # this line parses much faster than plain pd.to_datetime(data.index)
except TypeError:
data.index = pd.to_datetime(data.index)
# update 'data.py' by adding this try ~ except block
data.columns = ['West', 'East']
data['Total'] = data['West'] + data['East']
# runtime improved to 1.81 seconds
###Output
_____no_output_____
###Markdown
Start of baseline evaluation
###Code
from glob import glob
from pathlib import Path
from typing import Tuple
from fractions import Fraction
from bisect import bisect
import pandas as pd
import numpy as np
from harmonic_inference.utils import eval_utils as eu
from harmonic_inference.utils import harmonic_utils as hu
from harmonic_inference.data.data_types import ChordType, PitchType, KeyMode, TRIAD_REDUCTION, ALL_ONE_TYPE_REDUCTION
results = {}
for file in glob("baseline/*.csv"):
file_path = Path(file)
results[file_path.name] = pd.read_csv(file, header=None, names=['on', 'off', 'key', 'degree', 'type', 'inv'])
# Output is in quarter notes, labels are in whole notes
results[file_path.name]["on"] /= 4
results[file_path.name]["off"] /= 4
keys = set()
degrees = set()
types = set()
inversions = set()
for df in results.values():
for k in df['key'].unique():
keys.add(k)
for d in df['degree'].unique():
degrees.add(d)
for t in df['type'].unique():
types.add(t)
for i in df['inv'].unique():
inversions.add(i)
def key_to_tonic_mode(key: str, pitch_type: PitchType = PitchType.TPC) -> Tuple[int, KeyMode]:
key = key.replace('-', 'b')
key = key.replace('+', '#')
tonic = hu.get_pitch_from_string(key, pitch_type)
mode = KeyMode.MAJOR if key[0].isupper() else KeyMode.MINOR
return tonic, mode
def type_to_chord_type(type_str: str) -> ChordType:
return {
'D7': ChordType.MAJ_MIN7,
'M': ChordType.MAJOR,
'd': ChordType.DIMINISHED,
'd7': ChordType.DIM7,
'm': ChordType.MINOR,
'm7': ChordType.MIN_MIN7,
'Gr+6': ChordType.DIM7,
'h7': ChordType.HALF_DIM7,
}[type_str]
def get_root_tonic_and_mode(
degree_str: str, tonic: int, mode: KeyMode, pitch_type: PitchType = PitchType.TPC
) -> Tuple[int, int, KeyMode]:
if isinstance(degree_str, int):
degree_str = str(degree_str)
degree_str = degree_str.replace('-', 'b')
degree_str = degree_str.replace('+', '#')
if '/' in degree_str:
key, degree_str = degree_str.split('/')
relative_transposition = hu.get_interval_from_scale_degree(key, False, mode, pitch_type=pitch_type)
tonic = hu.transpose_pitch(tonic, relative_transposition, pitch_type=pitch_type)
if key in ['5']:
mode = KeyMode.MAJOR
elif key in ['7']:
mode = KeyMode.MINOR
elif key in ['1']:
mode = mode
degree_interval = hu.get_interval_from_scale_degree(degree_str, False, mode, pitch_type=pitch_type)
root = hu.transpose_pitch(tonic, degree_interval, pitch_type=pitch_type)
return root, tonic, mode
def get_all(key: str, degree: str, type_str: str, inv: str) -> Tuple[int, ChordType, int, int, KeyMode]:
inv = int(inv)
chord_type = type_to_chord_type(type_str)
tonic, mode = key_to_tonic_mode(key)
root, tonic, mode = get_root_tonic_and_mode(degree, tonic, mode)
return root, chord_type, inv, tonic, mode
for df in results.values():
roots = []
chord_types = []
invs = []
tonics = []
modes = []
for _, row in df.iterrows():
root, chord_type, inv, tonic, mode = get_all(row['key'], row['degree'], row['type'], row['inv'])
roots.append(root)
chord_types.append(chord_type)
invs.append(inv)
tonics.append(tonic)
modes.append(mode)
df["root_tpc"] = roots
df["chord_type"] = chord_types
df["inversion"] = invs
df["tonic"] = tonics
df["mode"] = modes
def get_label_df(filename: str) -> pd.DataFrame:
filename = filename[:-21] + "results.tsv"
file = glob(f'outputs/**/{filename}', recursive=True)[0]
return pd.read_csv(file, sep='\t', index_col=0, converters={'duration': Fraction}), file
def get_row_at_onset(df, onset):
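    # bisect on the segment offsets: pick the first estimated segment that ends after this
    # onset, clamped to the last row so onsets past the final segment still return a label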
index = min(bisect(list(df['off']), float(onset)), len(df) - 1)
return df.iloc[index]
def evaluate_df(key, df):
label_df, filename = get_label_df(key)
root_accs = []
chord_accs = []
triad_accs = []
seventh_accs = []
key_accs = []
full_accs = []
onset = 0
for _, label_row in label_df.iterrows():
est_row = get_row_at_onset(df, onset)
onset += label_row['duration']
tonic_str = label_row['gt_key'].split(':')[0]
if '/' in tonic_str:
tonic_str = tonic_str.split('/')[0]
gt_tonic = hu.get_pitch_from_string(tonic_str, pitch_type=PitchType.TPC)
gt_mode = KeyMode.MAJOR if label_row['gt_key'][0].isupper() else KeyMode.MINOR
gt_chord = label_row['gt_chord']
gt_inv = int(gt_chord[-1])
root_str = gt_chord.split(':')[0]
if '/' in root_str:
root_str = root_str.split('/')[0]
gt_root = hu.get_pitch_from_string(root_str, pitch_type=PitchType.TPC)
gt_chord_type = hu.get_chord_type_from_string(gt_chord.split(':')[1].split(',')[0])
chord_dist = eu.get_chord_distance(
gt_root,
gt_chord_type,
gt_inv,
est_row['root_tpc'],
est_row['chord_type'],
est_row['inversion'],
)
chord_accs.append(1 - chord_dist)
root_dist = eu.get_chord_distance(
gt_root,
gt_chord_type,
0,
est_row['root_tpc'],
est_row['chord_type'],
0,
reduction=ALL_ONE_TYPE_REDUCTION
)
root_accs.append(1 - root_dist)
triad_dist = eu.get_chord_distance(
gt_root,
gt_chord_type,
0,
est_row['root_tpc'],
est_row['chord_type'],
0,
reduction=TRIAD_REDUCTION
)
triad_accs.append(1 - triad_dist)
seventh_dist = eu.get_chord_distance(
gt_root,
gt_chord_type,
0,
est_row['root_tpc'],
est_row['chord_type'],
0,
)
seventh_accs.append(1 - seventh_dist)
key_dist = eu.get_key_distance(
gt_tonic,
gt_mode,
est_row['tonic'],
est_row['mode'],
)
key_accs.append(1 - key_dist)
full_accs.append(1 if chord_dist + key_dist == 0 else 0)
root_acc = float(np.average(root_accs, weights=label_df['duration']))
chord_acc = float(np.average(chord_accs, weights=label_df['duration']))
key_acc = float(np.average(key_accs, weights=label_df['duration']))
full_acc = float(np.average(full_accs, weights=label_df['duration']))
triad_acc = float(np.average(triad_accs, weights=label_df['duration']))
seventh_acc = float(np.average(seventh_accs, weights=label_df['duration']))
return {
"Root": root_acc,
"Triad": triad_acc,
"Seventh": seventh_acc,
"Chord": chord_acc,
"Key": key_acc,
"Full": full_acc,
}, filename
results_vals = {}
import re
for key, df in results.items():
eval_dict, name = evaluate_df(key, df)
if not "Beethoven" in name:
continue
print(name)
for acc, val in eval_dict.items():
if acc not in results_vals:
results_vals[acc] = []
results_vals[acc].append(val)
print(f" {acc}: {val}")
for acc, val_list in results_vals.items():
print(f"{acc}: {sum(val_list) / len(val_list)}")
from pathlib import Path
from fractions import Fraction
import pandas as pd
from music21.converter import parse
m21_score = parse(Path("../functional-harmony/data/BPS/scores/bps_01_01.mxl"))
m21_score = m21_score.flattenParts()
m21_score = m21_score.stripTies()
for note in m21_score.recurse().notes:
if note.isChord:
chord = note
print("Chord")
for note in chord.notes:
print(note.pitch.name, note.pitch.octave, chord.duration.quarterLength, chord.offset, chord.measureNumber, note.tie, chord.tie)
print("End Chord")
else:
        print(note.offset)
print(note.pitch.name, note.pitch.octave, note.duration.quarterLength, note.offset, note.measureNumber)
for offset, measure in m21_score.measureOffsetMap().items():
print(offset, measure[0].timeSignature)
import importlib
from pathlib import Path
import harmonic_inference.data.piece as piece
importlib.reload(piece)
notes, measures_df = piece.get_score_piece_from_music_xml(Path("../functional-harmony/data/BPS/scores/bps_01_01.mxl"), "")
measures_df[40:50]
list(note for note in notes if note.onset[0] in [48, 49])
###Output
_____no_output_____
###Markdown
Test loading functional-harmony data
###Code
from glob import glob
from tqdm import tqdm
from pathlib import Path
import logging
import harmonic_inference.data.piece as piece
import importlib
importlib.reload(piece)
for file_path in tqdm(glob("../functional-harmony/data/**/*.mxl", recursive=True)[173:]):
music_xml_path = Path(file_path)
label_csv_path = music_xml_path.parent.parent / "chords" / Path(str(music_xml_path.stem) + ".csv")
if not label_csv_path.exists():
logging.error(f"Label file {label_csv_path} does not exist. Skipping.")
continue
print(music_xml_path)
score = piece.get_score_piece_from_music_xml(music_xml_path, label_csv_path)
###Output
_____no_output_____
###Markdown
Results / Confusion Matrix
###Code
from glob import glob
from tqdm import tqdm
from fractions import Fraction
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from harmonic_inference.data.data_types import PitchType, ChordType, KeyMode
from harmonic_inference.utils.harmonic_utils import get_pitch_from_string, get_chord_type_from_string
def get_results_df(path):
dfs = []
for tsv in tqdm(glob(path, recursive=True)):
dfs.append(pd.read_csv(tsv, sep="\t", converters={"duration": Fraction}, index_col=0))
if len(dfs) == 0:
return None
results_df = pd.concat(dfs, ignore_index=True)
for type in ["gt", "est"]:
results_df[f"{type}_key_tonic"] = 0
results_df[f"{type}_key_mode"] = 0
results_df[f"{type}_chord_root"] = 0
results_df[f"{type}_chord_type"] = 0
results_df[f"{type}_chord_inv"] = 0
keys = np.concatenate((results_df["gt_key"].unique(), results_df["est_key"].unique()))
for key in tqdm(keys, desc="Working on keys..."):
key_tonic, key_mode = key.split(":")
for type in ["gt", "est"]:
results_df.loc[results_df[f"{type}_key"] == key, f"{type}_key_tonic"] = get_pitch_from_string(key_tonic, PitchType.MIDI)
results_df.loc[results_df[f"{type}_key"] == key, f"{type}_key_mode"] = KeyMode[key_mode.split(".")[1]]
chords = np.concatenate((results_df["gt_chord"].unique(), results_df["est_chord"].unique()))
for chord in tqdm(chords, desc="Working on chords..."):
inv = int(chord[-1])
chord_str = chord.split(",")[0]
chord_root, chord_type = chord_str.split(":")
for type in ["gt", "est"]:
results_df.loc[results_df[f"{type}_chord"] == chord, f"{type}_chord_root"] = get_pitch_from_string(chord_root, PitchType.TPC)
results_df.loc[results_df[f"{type}_chord"] == chord, f"{type}_chord_type"] = get_chord_type_from_string(chord_type)
results_df.loc[results_df[f"{type}_chord"] == chord, f"{type}_chord_inv"] = inv
return results_df
results_df = get_results_df("outputs/dcml-csm-1/**/*_results.tsv")
results_df
def get_heat_map_matrix(results_df):
heat_map = np.zeros((len(ChordType), len(ChordType) + 2))
for i, chord_type in tqdm(enumerate(ChordType)):
chord_type_df = results_df.loc[results_df["gt_chord_type"] == chord_type]
if len(chord_type_df) == 0:
continue
total_dur = float(chord_type_df["duration"].sum())
correct_root_df = chord_type_df.loc[chord_type_df["gt_chord_root"] == chord_type_df["est_chord_root"]]
heat_map[i, 0] = float(total_dur - correct_root_df['duration'].sum())
for j, est_chord_type in enumerate(ChordType, start=1):
selected_df = correct_root_df.loc[correct_root_df["est_chord_type"] == est_chord_type]
if est_chord_type == chord_type:
correct_type_df = selected_df
selected_dur = float(selected_df["duration"].sum())
heat_map[i, j] = selected_dur
if len(correct_type_df) > 0:
correct_inv_df = correct_type_df.loc[(correct_root_df["gt_chord_inv"] == correct_root_df["est_chord_inv"])]
heat_map[i, -1] = 1 - float(correct_inv_df['duration'].sum() / correct_type_df['duration'].sum())
return heat_map
def normalize_heat_map(heat_map):
for i, row in enumerate(heat_map):
if np.sum(row[:-1]) == 0:
continue
heat_map[i, :-1] /= np.sum(row[:-1])
xticks = [
"M",
"m",
"o",
"+",
"MM7",
"d7",
"mM7",
"mm7",
"o7",
"%7",
"+7",
"+M7",
]
heat_map = get_heat_map_matrix(results_df)
normalize_heat_map(heat_map)
plt.xlabel("Estimated Chord Type", labelpad=-15)
plt.ylabel("Ground Truth Chord Type", rotation=90)
plt.xticks(ticks=np.arange(len(ChordType) + 2), labels=["Incorrect Root"] + xticks + ["Incorrect Inv."], rotation=90)
plt.yticks(ticks=np.arange(len(ChordType)), labels=xticks)
plt.tight_layout(pad=0)
plt.imshow(heat_map, vmin=0, vmax=1)
plt.colorbar()
plt.savefig("figs/heatmap.png", pad_inches=0)
major_heat_map = get_heat_map_matrix(results_df.loc[results_df["gt_key_mode"] == KeyMode.MAJOR])
normalize_heat_map(major_heat_map)
plt.xlabel("Estimated Chord Type", labelpad=-15)
plt.ylabel("Ground Truth Chord Type", rotation=90)
plt.xticks(ticks=np.arange(len(ChordType) + 2), labels=["Incorrect Root"] + xticks + ["Incorrect Inv."], rotation=90)
plt.yticks(ticks=np.arange(len(ChordType)), labels=xticks)
plt.tight_layout(pad=0)
plt.imshow(major_heat_map, vmin=0, vmax=1)
plt.colorbar()
plt.savefig("figs/heatmap_major.png", pad_inches=0)
minor_heat_map = get_heat_map_matrix(results_df.loc[results_df["gt_key_mode"] == KeyMode.MINOR])
normalize_heat_map(minor_heat_map)
plt.xlabel("Estimated Chord Type", labelpad=-15)
plt.ylabel("Ground Truth Chord Type", rotation=90)
plt.xticks(ticks=np.arange(len(ChordType) + 2), labels=["Incorrect Root"] + xticks + ["Incorrect Inv."], rotation=90)
plt.yticks(ticks=np.arange(len(ChordType)), labels=xticks)
plt.imshow(minor_heat_map, vmin=0, vmax=1)
plt.colorbar()
plt.tight_layout(pad=0)
plt.savefig("figs/heatmap_minor.png", pad_inches=0)
def get_acc_given_inversion(results_df, inv):
inv_df = results_df.loc[results_df["gt_chord_inv"] == inv]
correct_df = inv_df.loc[inv_df["gt_chord"] == inv_df["est_chord"]]
return float(correct_df["duration"].sum() / inv_df["duration"].sum())
results_df = get_results_df("outputs/dcml-csm-1/**/*_results.tsv")
for inv in range(4):
print(f"Inv {inv} {get_acc_given_inversion(results_df, inv)}")
def get_acc(results_df):
total_dur = float(results_df["duration"].sum())
correct_dur = float(
results_df.loc[
(
(results_df["gt_key"] == results_df["est_key"]) &
(results_df["gt_chord"] == results_df["est_chord"])
),
"duration",
].sum()
)
return correct_dur / total_dur
results_df = get_results_df("outputs/dcml-csm-1/**/*_results.tsv")
acc = get_acc(results_df)
acc_minor = get_acc(results_df.loc[results_df["gt_key_mode"] == KeyMode.MINOR])
acc_major = get_acc(results_df.loc[results_df["gt_key_mode"] == KeyMode.MAJOR])
print(f"Overall: {acc}")
print(f"Minor: {acc_minor}")
print(f"Major: {acc_major}")
for chord_type in ChordType:
try:
print(f"{chord_type}: {get_acc(results_df.loc[results_df['gt_chord_type'] == chord_type])}")
except:
pass
mode_type_heat_map = np.zeros((len(KeyMode), len(ChordType)))
for i, mode in enumerate(KeyMode):
for j, chord_type in enumerate(ChordType):
try:
acc = get_acc(results_df.loc[(results_df['gt_chord_type'] == chord_type) & (results_df['gt_key_mode'] == mode)])
except:
continue
print(f"{mode}, {chord_type} = {acc}")
mode_type_heat_map[i, j] = acc
plt.xlabel("Chord Type", fontsize=12)
plt.ylabel("Mode", rotation=90, fontsize=12)
plt.xticks(ticks=np.arange(len(ChordType)), labels=xticks, rotation=90, fontsize=12)
plt.yticks(ticks=np.arange(len(KeyMode)), labels=["Major", "Minor"], fontsize=12)
plt.imshow(mode_type_heat_map, vmin=0, vmax=1)
plt.colorbar(orientation="horizontal", shrink=0.5, pad=0.23)
plt.tight_layout(pad=0)
plt.savefig("figs/acc_by_mode_type.png", pad_inches=0)
accs = {}
for dir in glob("outputs/dcml-csm-1/*"):
results_df_comp = get_results_df(dir + "/**/*_results.tsv")
if results_df_comp is None:
continue
accs[dir.split("/")[-1]] = get_acc(results_df_comp)
for key, value in accs.items():
print(f"{key}: {value}")
###Output
_____no_output_____
###Markdown
Converting results TSV to chord-eval comparison for ICMPC
###Code
from tqdm import tqdm
from glob import glob
from pathlib import Path
import pandas as pd
from harmonic_inference.data.data_types import ChordType, PitchType, TRIAD_REDUCTION
from harmonic_inference.utils.harmonic_constants import STRING_TO_CHORD_TYPE
from harmonic_inference.utils.harmonic_utils import get_pitch_from_string, get_pitch_string
in_path = "outputs/icmpc/*/Mozart-Sonatas/*_results.tsv"
for results_tsv in tqdm(glob(in_path)):
results_df = pd.read_csv(results_tsv, sep="\t")
for prefix in ["gt", "est"]:
results_df[f"{prefix}_chord_root"] = 0
results_df[f"{prefix}_chord_type"] = 0
results_df[f"{prefix}_chord_inv"] = 0
results_df["root_correct"] = 0
results_df["triad_correct"] = 0
results_df["7th_correct"] = 0
results_df["inv_correct"] = 0
results_df["full_correct"] = 0
for idx, row in results_df.iterrows():
gt_root_str, gt_other_str, gt_inv_str = row["gt_chord"].split(":")
gt_chord_type_str, _ = gt_other_str.split(",")
gt_root = get_pitch_from_string(gt_root_str, PitchType.MIDI)
gt_chord_type = STRING_TO_CHORD_TYPE[gt_chord_type_str]
est_root_str, est_other_str, est_inv_str = row["est_chord"].split(":")
est_chord_type_str, _ = est_other_str.split(",")
est_root = get_pitch_from_string(est_root_str, PitchType.MIDI)
est_chord_type = STRING_TO_CHORD_TYPE[est_chord_type_str]
results_df.loc[idx, "gt_chord_root"] = gt_root
results_df.loc[idx, "gt_chord_type"] = str(gt_chord_type)
results_df.loc[idx, "gt_chord_inv"] = gt_inv_str
results_df.loc[idx, "est_chord_root"] = est_root
results_df.loc[idx, "est_chord_type"] = str(est_chord_type)
results_df.loc[idx, "est_chord_inv"] = est_inv_str
results_df.loc[idx, "root_correct"] = gt_root == est_root
results_df.loc[idx, "triad_correct"] = TRIAD_REDUCTION[gt_chord_type] == TRIAD_REDUCTION[est_chord_type]
results_df.loc[idx, "7th_correct"] = gt_chord_type == est_chord_type
results_df.loc[idx, "inv_correct"] = gt_inv_str == est_inv_str
results_df.loc[idx, "full_correct"] = gt_inv_str == est_inv_str and gt_root == est_root and gt_chord_type == est_chord_type
tsv_path = Path(results_tsv)
out_path = tsv_path.parent / (tsv_path.name[:-4] + "chord-eval.tsv")
results_df.to_csv(out_path, sep="\t")
###Output
_____no_output_____
###Markdown
###Code
def relu(x):
return max(0, x)
def convolution(image, Cnn_filter):
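    # 'valid' 2D cross-correlation: slide the filter over the image with no padding and
    # apply ReLU to each response; output shape is (H - kh + 1, W - kw + 1)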
x = Cnn_filter.shape[0]
y = Cnn_filter.shape[1]
out = np.zeros(((int(image.shape[0] - x + 1)), (int(image.shape[1] - y + 1))))
for i in range(out.shape[0]):
for j in range(out.shape[1]):
temp = np.sum(image[i:i+x, j:j+y]*Cnn_filter)
out[i][j] = relu(temp)
return out
def pooling(filtered, window):
x = window[0]
y = window[1]
out = np.zeros((int(filtered.shape[0] / x), int(filtered.shape[1] / y)))
for i in range(out.shape[0]):
for j in range(out.shape[1]):
            temp = np.array(filtered[i*x:(i+1)*x, j*y:(j+1)*y])  # use the y window size along the second axis
out[i][j] = np.max(temp)
return out
def convolution_layer(image, number_of_outputs, filter_shape, maxpool_window):
weights = np.random.randint(low = -7, high = 7, size=(number_of_outputs, filter_shape[0], filter_shape[1], 3 ))
p_input = image.shape[0]
p_output = int((image.shape[0] - (filter_shape[0] - 1)) / maxpool_window[0])
output = np.zeros((p_output, p_output, 3, number_of_outputs))
for i in tqdm(range(number_of_outputs)):
filter_1 = weights[i, :, :, 0]
filter_2 = weights[i, :, :, 1]
filter_3 = weights[i, :, :, 2]
b = image[:p_input, :p_input, :1]
c = image[:p_input, :p_input, 1:2]
d = image[:p_input, :p_input, 2:3]
out1 = convolution(b, filter_1)
out2 = convolution(c, filter_2)
out3 = convolution(d, filter_3)
outt1 = pooling(out1, maxpool_window)
outt2 = pooling(out2, maxpool_window)
outt3 = pooling(out3, maxpool_window)
filtered_image = np.zeros((p_output, p_output, 3))
filtered_image[:p_output, :p_output, :1] = outt1.reshape((p_output, p_output, 1))
filtered_image[:p_output, :p_output, 1:2] = outt2.reshape((p_output, p_output, 1))
filtered_image[:p_output, :p_output, 2:3] = outt3.reshape((p_output, p_output, 1))
output[:, :, :, i] = filtered_image
return output
a = train_images[1972] / 255.
b = 10
c = [3, 3]
d = [2, 2]
my_images = convolution_layer(a, b, c, d)
print("\n",my_images.shape)
def initialization(image, layers):
w = []
b = []
# flatten_image = image.flatten()
weight1 = np.ones((image.shape[1], layers[0]))
w.append(weight1)
for i in range (len(layers)-1):
temp_weight = np.ones((layers[i], layers[i+1]))
w.append(temp_weight)
temp_bias = np.ones((layers[i], 1))
b.append(temp_bias)
temp_bias = np.ones((layers[-1], 1))
b.append(temp_bias)
w = np.array(w)
b = np.array(b)
return w, b
def feedforward(image, weights, bias):
z = []
a = []
# flatten_image = image.flatten()
x = np.array(image) # image.reshape(len(flatten_image), 1))
z_temp = np.dot(x, weights[0]).T + bias[0]
z.append(z_temp)
a_temp = sigmoid(z_temp)
a.append(a_temp)
for i in range(len(weights)-1):
z_temp = np.dot(a[i].T, weights[i+1]).T + bias[i+1]
z.append(z_temp)
a_temp = sigmoid(z_temp)
a.append(a_temp)
# print(a[i].T)
# print(weights[i+1].shape)
# print(z_temp.shape)
z = np.array(z)
a = np.array(a)
return z, a
def sigmoid(z):
a = 1 / (1 + np.exp((-1*z)))
return a
def backpropagation(predicted, actual, z, a, w, b, x):
dz = []
dw = []
db = []
m = predicted.shape[0]
dz_temp = a[-1].T - actual
dz_temp = dz_temp.T
dz.append(dz_temp)
for i in reversed(range(len(a)-1)):
# print(dz_temp)
# print(i)
# print(a[-1])
dw_temp = np.dot(dz_temp, a[i].T) / m
dw.append(dw_temp)
db_temp = np.sum(dz_temp, axis = 1, keepdims=True) / m
db.append(db_temp)
dz_temp = np.dot(w[i+1], dz_temp) * sigmoid_derivate(z[i])
dz.append(dz_temp)
dw_temp = np.dot(dz_temp, x) / m
dw.append(dw_temp)
db_temp = np.sum(dz_temp, axis = 1, keepdims=True) / m
db.append(db_temp)
return dw, db
def sigmoid_derivate(z):
    # derivative of the sigmoid evaluated at the pre-activation z: sigmoid(z) * (1 - sigmoid(z))
    s = sigmoid(z)
    return s * (1 - s)
xx_xx = np.array([[3, 4, 0, 1, 5], #flatten image 1 (consist of so many filters)
[0, 0, 0, 0, 0], #flatten image 2 (consist of so many filters)
[0, 6, 4, 2, 3],
[1, 7, 4, 1, 8]])
y = np.array([[0.1, 0.82],
[0.23, 0],
[0.5, 1],
[0.75, 0]])
# w, b = initialization(x, [10, 3, 6, 10, 1])
# z, a = feedforward(x, w, b)
# x = my_images[:, :, :, :]
ww_ww, bb_bb = initialization(xx_xx, [10, 3, 6, 10, 2])
zz_zz, aa_aa = feedforward(xx_xx, ww_ww, bb_bb)
dw, db = backpropagation(aa_aa[-1], y, zz_zz, aa_aa, ww_ww, bb_bb, xx_xx)
# print(x.T)
# print(z[0].T)
print(dw[4].T)
# print(db[4].shape)
print(ww_ww[0])
# print(b[0].shape)
# print(z[1].T)
# print(w[2])
# print(a[2])
# print(aa_aa[-1].T)
# print(z[1])
# print(a[1])
###Output
_____no_output_____
###Markdown
Unit testing jworkflow/data.py Want to create a function that'll examine the output of `get_fremont_data()` and make sure it conforms to what it's expected to output.
###Code
from jworkflow.data import get_fremont_data
import pandas as pd
def test_fremont_data():
data = get_fremont_data()
# we know data.columns should be west east & total:
assert all(data.columns == ['West', 'East', 'Total'])
# also want date.index to be a DateTimeIndex instance
assert isinstance(data.index, pd.DatetimeIndex)
###Output
_____no_output_____
###Markdown
Unit Test Frameworks allow you to run unit tests automatically
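For example, a minimal sketch of invoking pytest from a notebook cell; the exact file location is an assumption (pytest collects files named `test_*.py` and functions named `test_*`):
```python
# assuming test_fremont_data() is saved in e.g. jworkflow/tests/test_data.py
!python -m pytest jworkflow -v
```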
###Code
test_fremont_data()
data = pd.read_csv('Fremont.csv', index_col='Date')
try:
data.index = pd.to_datetime(data.index, format='%m/%d/%Y %H:%M:%S %p')
except TypeError:
data.index = pd.to_datetime(data.index)
data.index
data.columns = ['West', 'East']
data['Total'] = data['West'] + data['East']
###Output
_____no_output_____
###Markdown
CDR ScratchBook
###Code
%matplotlib inline
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import display
column_types = {
}
pd.set_option('display.float_format', lambda x: '%s' % x)
df = pd.read_csv("D2019051500_10138341_1-5UNXGK.3097a.csv.gz")
df["DIALNUM"].astype(str, copy=False).describe()
#df[df["DIALNUM"].startswith("8")]
df["DIALNUM"]
###Output
_____no_output_____
###Markdown
Create unit test
###Code
# Import the data download library
from jupyterworkflow.data import get_fremont_data
import pandas as pd
data = get_fremont_data()
all(data.columns == ['West', 'East', 'Total'])
isinstance(data.index, pd.DatetimeIndex)
test_fremont_data()
# Create a function to do this
# Import the data download library
from jupyterworkflow.data import get_fremont_data
import pandas as pd
# create a function to do all these things
def test_fremont_data():
data = get_fremont_data()
assert all(data.columns == ['West', 'East', 'Total'])
assert isinstance(data.index, pd.DatetimeIndex)
test_fremont_data()
###Output
_____no_output_____ |
notebooks/sesssion_9.ipynb | ###Markdown
topics:- [cv2.morphologyEx](cv2.morphologyEx) - [Morphological Transformations docs](https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_morphological_ops/py_morphological_ops.html) - [huang transform]----slide 4
###Code
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure
img = cv2.imread('session_9/j.png',0)
kernel = np.ones((7,7),np.uint8)
opening = cv2.morphologyEx(img, cv2.MORPH_OPEN, kernel)
closing = cv2.morphologyEx(opening, cv2.MORPH_CLOSE, kernel)
figure(figsize=(6, 6), dpi=80)
plt.subplot(1,3,1),plt.imshow(img,cmap = 'gray')
plt.title(f'Original Image '), plt.xticks([]), plt.yticks([])
plt.subplot(1,3,2),plt.imshow(opening,cmap = 'gray')
plt.title(f'Opening Image '), plt.xticks([]), plt.yticks([])
plt.subplot(1,3,3),plt.imshow(closing,cmap = 'gray')
plt.title('Opening and closing'), plt.xticks([]), plt.yticks([])
plt.show()
import cv2
import numpy as np
cap = cv2.VideoCapture(2)
# initialize a black canvas
screen = np.zeros((600, 1000))
# use this to capture a histogram
while True:
_, frame = cap.read()
frame = cv2.flip(frame, 1)
frame = cv2.resize(frame, (1000, 600))
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(frame, 'Place region of interest inside box & press `A`',(5, 50), font, 0.7, (255, 255, 255), 2, cv2.LINE_AA)
cv2.rectangle(frame, (500, 100), (700, 300), (105, 105, 105), 2)
box = frame[105:175, 505:575]
cv2.imshow("Capture Histogram", frame)
key = cv2.waitKey(10)
if key == ord('a'):
object_color = box
cv2.destroyAllWindows()
break
if key == ord('q'):
cv2.destroyAllWindows()
cap.release()
break
object_color_hsv = cv2.cvtColor(object_color, cv2.COLOR_BGR2HSV)
object_hist = cv2.calcHist([object_color_hsv], [0, 1], None,[180, 256], [0, 180, 0, 256])
cv2.normalize(object_hist, object_hist, 0, 255, cv2.NORM_MINMAX)
# detect histogram
while True:
ret, frame = cap.read()
if not ret:
break
# flip and resize the image.
frame = cv2.flip(frame, 1)
# Use a resolution best suited for your camera.
frame = cv2.resize(frame, (1000, 600))
hsv_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# apply back projection to image using object_hist as
# the model histogram
object_segment = cv2.calcBackProject([hsv_frame], [0, 1], object_hist, [0, 180, 0, 256], 1)
cv2.imshow("", object_segment)
_, segment_thresh = cv2.threshold(object_segment, 20, 255, cv2.THRESH_BINARY)
# apply some image operations to enhance image
kernel = None
disc = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
filtered = cv2.filter2D(segment_thresh, -1, disc)
eroded = cv2.erode(filtered, kernel, iterations=2)
dilated = cv2.dilate(eroded, kernel, iterations=2)
closing = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)
cv2.imshow("closing", closing)
# masking
masked = cv2.bitwise_and(frame, frame, mask=closing)
cv2.imshow("Hand Detector", frame)
k = cv2.waitKey(5)
if k == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
###Output
_____no_output_____ |
docs/lectures/lecture18/notebook/lecture18_ex1.ipynb | ###Markdown
Title**Exercise 1 - Basic Multi-classification** DescriptionThe goal of the exercise is to get comfortable using multiclass classification models. Eventually, you will produce a plot similar to the one given below: Instructions: We are trying to predict the types of Irises in the classic Iris data set based on measured characteristics- Load the Iris data set and convert to a data frame.- Fit multinomial & OvR logistic regressions and a $k$-NN model. - Compute the accuracy of the models.- Plot the classification boundaries against the two predictors used. Hints:sklearn.LogisticRegression() : Generates a Logistic Regression classifiersklearn.fit() : Fits the model to the given datasklearn.predict() : Predict using the estimated model (Logistic or knn classifiers) to perform pure classification predictionssklearn.predict_proba() : Predict using the estimated model (Logistic or knn classifiers) to perform probability predictions of all the classes in the response (they should add up to 1 for each observation)sklearn.LogisticRegression.coef_ and .intercept_ : Pull off the estimated $\beta$ coefficients in a Logistic Regression modelsklearn.score() : Accuracy classification score.matplotlib.pcolormesh() : Create a pseudocolor plot of a 2D array (used here to shade the classification regions)**Note: This exercise is auto-graded and you can try multiple attempts.**
###Code
%matplotlib inline
from sklearn import datasets
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
###Output
_____no_output_____
###Markdown
IrisesRead in the data set and convert to a Pandas data frame:
###Code
raw = datasets.load_iris()
iris = pd.DataFrame(raw['data'],columns=raw['feature_names'])
iris['type'] = raw['target']
iris.head()
###Output
_____no_output_____
###Markdown
Note: this violin plot is 'inverted': the model's response variable is placed on the x-axis. This is fine for exploration.
###Code
sns.violinplot(y=iris['sepal length (cm)'], x=iris['type'], split=True);
# Create a violin plot to compare petal length
# across the types of irises
sns.violinplot(___);
###Output
_____no_output_____
###Markdown
Here we fit our first model (the OvR logistic) and print out the coefficients:
###Code
logit_ovr = LogisticRegression(penalty='none', multi_class='ovr',max_iter = 1000).fit(
iris[['sepal length (cm)','sepal width (cm)']], iris['type'])
print(logit_ovr.intercept_)
print(logit_ovr.coef_)
# we can predict classes or probabilities
print(logit_ovr.predict(iris[['sepal length (cm)','sepal width (cm)']])[0:5])
print(logit_ovr.predict_proba(iris[['sepal length (cm)','sepal width (cm)']])[0:5])
# and calculate accuracy
print(logit_ovr.score(iris[['sepal length (cm)','sepal width (cm)']],iris['type']))
###Output
_____no_output_____
###Markdown
Now it's your turn, but this time with the multinomial logistic regression.
###Code
### edTest(test_multinomial) ###
# Fit the model and print out the coefficients
logit_multi = LogisticRegression(___).fit(___)
intercept = logit_multi.intercept_
coefs = logit_multi.coef_
print(intercept)
print(coefs)
### edTest(test_multinomialaccuracy) ###
multi_accuracy = ___
print(multi_accuracy)
# Plot the decision boundary.
x1_range = iris['sepal length (cm)'].max() - iris['sepal length (cm)'].min()
x2_range = iris['sepal width (cm)'].max() - iris['sepal width (cm)'].min()
x1_min, x1_max = iris['sepal length (cm)'].min()-0.1*x1_range, iris['sepal length (cm)'].max() +0.1*x1_range
x2_min, x2_max = iris['sepal width (cm)'].min()-0.1*x2_range, iris['sepal width (cm)'].max() + 0.1*x2_range
step = .05
x1x, x2x = np.meshgrid(np.arange(x1_min, x1_max, step), np.arange(x2_min, x2_max, step))
y_hat_ovr = logit_ovr.predict(np.c_[x1x.ravel(), x2x.ravel()])
y_hat_multi = ___
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 6))
ax1.pcolormesh(x1x, x2x, y_hat_ovr.reshape(x1x.shape), cmap=plt.cm.Paired,alpha = 0.5)
ax1.scatter(iris['sepal length (cm)'], iris['sepal width (cm)'], c=iris['type'], edgecolors='k', cmap=plt.cm.Paired)
### your job is to create the same plot, but for the multinomial
#####
# your code here
#####
plt.show()
#fit a knn model (k=5) for the same data
knn5 = KNeighborsClassifier(___).fit(___)
### edTest(test_knnaccuracy) ###
#Calculate the accuracy
knn5_accuracy = ___
print(knn5_accuracy)
# and plot the classification boundary
y_hat_knn5 = knn5.predict(np.c_[x1x.ravel(), x2x.ravel()])
fig, ax1 = plt.subplots(1, 1, figsize=(8, 6))
ax1.pcolormesh(x1x, x2x, y_hat_knn5.reshape(x1x.shape), cmap=plt.cm.Paired,alpha = 0.5)
# Plot also the training points
ax1.scatter(iris['sepal length (cm)'], iris['sepal width (cm)'], c=iris['type'], edgecolors='k', cmap=plt.cm.Paired)
plt.show()
###Output
_____no_output_____ |
libro_optimizacion/temas/II.computo_matricial/2.1/Operaciones_y_transformaciones_basicas_del_Algebra_Lineal_Numerica.ipynb | ###Markdown
(OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker:Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker.`docker run --rm -v :/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`password para jupyterlab: `qwerty`Detener el contenedor de docker:`docker stop jupyterlab_optimizacion`Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:2.1.4` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).``` --- Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0). ```{admonition} Al final de esta nota el y la lectora::class: tip* Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices.* Se familizarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica.``` Las operaciones básicas del Álgebra Lineal Numérica podemos dividirlas en vectoriales y matriciales. Vectoriales * **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$ * **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.* **Producto interno estándar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.* **Multiplicación *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.* **División *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-ésimo renglón de $A$. 
Matriciales * **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$.* **Multiplicación por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.* **Multiplicación entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$.* **Multiplicación *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.* **División *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$. **Como ejemplos de transformaciones básicas del Álgebra Lineal Numérica se encuentran:** (TGAUSS)= Transformaciones de Gauss En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R} \forall i,j=1,2,\dots,n$. ```{margin}Como ejemplo de vector canónico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$.``` Considérese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-ésimo vector canónico: vector con un $1$ en la posición $k$ y ceros en las entradas restantes. ```{admonition} DefiniciónUna transformación de Gauss está definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$.$a_{kk}$ se le nombra **pivote** y **debe ser diferente de cero**.``` Las transformaciones de Gauss se utilizan para hacer ceros por debajo del **pivote**. (EG1)= Ejemplo aplicando transformaciones de Gauss a un vector Considérese al vector $a=(-2,3,4)^T$. Definir una transformación de Gauss para hacer ceros por debajo de $a_1$ y otra transformación de Gauss para hacer cero la entrada $a_3$. **Solución:**
###Code
import numpy as np
import math
np.set_printoptions(precision=3, suppress=True)
###Output
_____no_output_____
###Markdown
a) Para hacer ceros por debajo del **pivote** $a_1 = -2$:
###Code
a = np.array([-2,3,4])
pivote = a[0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$```
###Code
l1 = np.array([0,a[1]/pivote, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$.**```
###Code
L1_a = a-l1*(e1.dot(a))
print(L1_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1@a)
###Output
_____no_output_____
###Markdown
b) Para hacer ceros por debajo del **pivote** $a_2 = 3$:
###Code
a = np.array([-2,3,4])
pivote = a[1]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$```
###Code
l2 = np.array([0,0, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda.```
###Code
e2 = np.array([0,1,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$.**```
###Code
L2_a = a-l2*(e2.dot(a))
print(L2_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior: ```{margin}$L_2 = I_3 - \ell_2 e_2^T$.```
###Code
L2 = np.eye(3) - np.outer(l2,e2)
print(L2)
print(L2@a)
###Output
_____no_output_____
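Lo anterior puede encapsularse en una función auxiliar que aplica $L_k$ a un vector sin construir la matriz; un *sketch* (el nombre `aplica_transformacion_gauss` es hipotético y se asume que el pivote `a[k]` es distinto de cero):

```python
import numpy as np

def aplica_transformacion_gauss(a, k):
    # Devuelve L_k a = a - l_k (e_k^T a) sin construir L_k; k usa índice base cero.
    a = np.asarray(a, dtype=float)
    l_k = np.zeros_like(a)
    l_k[k+1:] = a[k+1:]/a[k]          # l_{i,k} = a_i/a_k para i > k
    return a - l_k*a[k]               # e_k^T a = a[k]

print(aplica_transformacion_gauss([-2, 3, 4], 0))   # ceros debajo de la primera entrada
print(aplica_transformacion_gauss([-2, 3, 4], 1))   # ceros debajo de la segunda entrada
```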
###Markdown
(EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su **diagonal** y tener una forma **triangular superior**, realizamos los productos matriciales:$$L_2 L_1 A$$ donde: $L_1, L_2$ son transformaciones de Gauss. Posterior a realizar el producto $L_2 L_1 A$ se obtiene una **matriz triangular superior:**$$L_2L_1A = \left [\begin{array}{ccc}* & * & *\\0 & * & * \\0 & 0 & * \end{array}\right ]$$ **Ejemplo:** a) Utilizando $L_1$
###Code
A = np.array([[-1, 2, 5],
[4, 5, -7],
[3, 0, 8]], dtype=float)
print(A)
###Output
_____no_output_____
###Markdown
Para hacer ceros por debajo del **pivote** $a_{11} = -1$:
###Code
pivote = A[0, 0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$```
###Code
l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote])
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$.**```
###Code
L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0]))
print(L1_A_1)
###Output
_____no_output_____
###Markdown
**Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$:** ```{margin}Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$.```
###Code
L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1]))
print(L1_A_2)
###Output
_____no_output_____
###Markdown
```{margin}Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$.```
###Code
L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2]))
print(L1_A_3)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1 @ A)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipAl aplicar $L_1$ a la primer columna de $A$ **siempre** obtenemos ceros por debajo del pivote que en este caso es $a_{11}$.``` (EG2.1)= **Después de hacer la multiplicación $L_1A$ en cualquiera de los dos casos (construyendo o no explícitamente $L_1$) no se modifica el primer renglón de $A$:**
###Code
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Este es el primer renglón de $A$.```
###Code
print(A[0,:])
###Output
_____no_output_____
###Markdown
```{margin}Tomando el primer renglón del producto $L_1A$.```
###Code
print((L1 @ A)[0,:])
###Output
_____no_output_____
###Markdown
**por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante.** ```{admonition} Observación:class: tipDada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicación por la segunda y tercer columna de $A$ se tiene:$$e_1^T A[1:3,2] = A[0,2]$$$$e_1^T A[1:3,3] = A[0,3]$$respectivamente.``` ```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 1]))
###Output
_____no_output_____
###Markdown
```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 2]))
###Output
_____no_output_____
###Markdown
y puede escribirse de forma compacta: $$e_1^T A[1:3,2:3] = A[0, 2:3]$$
###Code
print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3
print(A[0, 1:]) # we could also have used this statement
###Output
_____no_output_____
###Markdown
Entonces los productos $\ell_1 e_1^T A[:,2]$ y $\ell_1 e_1^T A[:,3]$ quedan respectivamente como: $$\ell_1A[0, 2]$$
###Code
print(l1*A[0,1])
###Output
_____no_output_____
###Markdown
$$\ell_1A[0,3]$$
###Code
print(l1*A[0, 2])
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipEn los dos cálculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que únicamente se modifican dos entradas de la segunda y tercer columna de $A$.``` De forma compacta y aprovechando funciones en *NumPy* como [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) se puede calcular lo anterior como:
###Code
print(np.outer(l1[1:3],A[0,1:3]))
print(np.outer(l1[1:],A[0,1:])) # we could also have used this statement
###Output
_____no_output_____
###Markdown
Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda: ```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sólo operar de la segunda columna y segundo renglón en adelante.**```
###Code
print(A[1:, 1:] - np.outer(l1[1:],A[0,1:]))
###Output
_____no_output_____
###Markdown
Compárese con:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar el primer renglón y primera columna al producto. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) y *row_stack*:
###Code
A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:])
m, n = A.shape
number_of_zeros = m-1
A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros
print(A_aux_2)
A_aux_3 = np.row_stack((A[0, :], A_aux_2))
print(A_aux_3)
###Output
_____no_output_____
###Markdown
que es el resultado de:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
**Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$.** Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$. ```{admonition} Ejercicio:class: tipCalcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz:$$A = \left [\begin{array}{ccc}1 & 4 & -2 \\-3 & 9 & 8 \\5 & 1 & -6\end{array}\right]$$tomando en cuenta que en este caso $L_2$ sólo opera del segundo renglón y segunda columna en adelante:y obtener una matriz triangular superior en cada ejercicio.``` ```{admonition} Comentarios* Las transformaciones de Gauss se utilizan para la fase de eliminación del método de eliminación Gaussiana o también llamada factorización $LU$. Ver [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination).* La factorización $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un método estable numéricamente respecto al redondeo en la práctica pero inestable en la teoría.``` (MATORTMATCOLORTONO)= Matriz ortogonal y matriz con columnas ortonormales Un conjunto de vectores $\{x_1, \dots, x_p\}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza: ```{admonition} Comentarios* Si el conjunto $\{x_1,\dots,x_n\}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases}1 &\text{ si } i=j,\\0 &\text{ si } i\neq j\end{cases}$, ver [Kronecker_delta](https://en.wikipedia.org/wiki/Kronecker_delta) se le nombra conjunto **ortonormal**, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$. * Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto $\{x_1,\dots, x_n\}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notación matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaño $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra **matriz con columnas ortonormales**. * Si cada $x_i$ está en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construímos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra **matriz ortogonal**.* Entre las propiedades más importantes de las matrices ortogonales o con columnas ortonormales es que son isometrías bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numéricamente bajo el redondeo, ver {ref}`Condición de un problema y estabilidad de un algoritmo `.``` (TREF)= Transformaciones de reflexión En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Reflectores de Householder ```{margin}Recuerda que $u^\perp = \{x \in \mathbb{R}^m| u^Tx=0\}$ es un subespacio de $\mathbb{R}^m$ de dimensión $m-1$ y es el complemento ortogonal de $u$.``` ```{admonition} DefiniciónLas reflexiones de Householder son matrices **simétricas, ortogonales** y se construyen a partir de un vector $v \neq 0$ definiendo:$$R = I_m-\beta v v^T$$ con $v \in \mathbb{R}^m - \{0\}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama **vector de Householder**. 
La multiplicación $Rx$ representa la reflexión del vector $x \in \mathbb{R}^m$ a través del hiperplano $v^\perp$.``` ```{admonition} ComentarioAlgunas propiedades de las reflexiones de Householder son: $R^TR = R^2 = I_m$, $R^{-1}=R$, $det(R)=-1$.``` ```{sidebar} Proyector ortogonal elementalEn este dibujo se utiliza el **proyector ortogonal elemental** sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyección ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales **no** son matrices ortogonales, son singulares, son simétricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span\{u\}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$.Recuerda que $span\{u\}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span\{u\} = \left \{\displaystyle \sum_{i=1}^m k_i u_i | k_i \in \mathbb{R} \forall i =1,\dots,m \right \}$.``` Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - \{0\}$ , $||u||_2 = 1$ y $R=I_m-2 u u^T$ es el siguiente : Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector. Ejemplo aplicando reflectores de Householder a un vector Considérese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$.
###Code
x = np.array([1,2,3])
print(x)
###Output
_____no_output_____
###Markdown
Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x-np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
El resultado de $Rx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por:
###Code
print(np.linalg.norm(x))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tip* Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexión de Householder son matrices ortogonales y por tanto isometrías: $||Rv||_2=||v||_2$.* Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}`Ejemplo aplicando transformaciones de Gauss a un vector `.``` A continuación se muestra que el producto $Rx$ si se construye $R$ es equivalente a lo anterior: ```{margin}$R = I_3 - \beta v v^T$.```
###Code
R = np.eye(3)-beta*np.outer(v,np.transpose(v))
print(R)
print(R@x)
###Output
_____no_output_____
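Con un chequeo numérico rápido (un *sketch* autocontenido que reconstruye la $R$ anterior) se pueden verificar las propiedades mencionadas: $R = R^T$, $R^TR=I_3$, $R^{-1}=R$ y $det(R)=-1$:

```python
import numpy as np

x = np.array([1., 2., 3.])
e1 = np.array([1., 0., 0.])
v = x - np.linalg.norm(x)*e1          # mismo vector de Householder que arriba
beta = 2/v.dot(v)
R = np.eye(3) - beta*np.outer(v, v)

print(np.allclose(R, R.T))            # simétrica
print(np.allclose(R.T@R, np.eye(3)))  # ortogonal
print(np.allclose(R@R, np.eye(3)))    # R^2 = I, es decir R^{-1} = R
print(np.linalg.det(R))               # aproximadamente -1
```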
###Markdown
Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$.". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x+np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$.``` ¿Cuál definición del vector de Householder usar? En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicación $Rx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$): El vector $v^+ = - u_0^+ = x-||x||_2e_1$ refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). El vector $v^- = -u_0^- = x+||x||_2e_1$ refleja $x$ respecto al subespacio $H^-$. Para reducir los errores por redondeo y evitar el problema de cancelación en la aritmética de punto flotante (ver [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)) se utiliza:$$v = x+signo(x_1)||x||_2e_1$$donde: $signo(x_1) = \begin{cases}1 &\text{ si } x_1 \geq 0 ,\\-1 &\text{ si } x_1 < 0\end{cases}.$La idea de la definción anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$. ```{admonition} Comentarios* Otra forma de lidiar con el problema de cancelación es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulación algebraica como sigue:$$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$* En la implementación del cálculo del vector de Householder, es útil que $v_1=1$ y así únicamente se almacenará $v[2:m]$. Al vector $v[2:m]$ se le nombra **parte esencial del vector de Householder**.* Las transformaciones de reflexión de Householder se utilizan para la factorización QR. Ver [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition), la cual es una factorización estable numéricamente bajo el redondeo.``` ```{admonition} Ejercicio:class: tipReflejar al vector $(1,1)^T$ utilizando al vector $(\frac{-4}{3}, \frac{2}{3})$ para construir $R$.``` Ejemplo aplicando reflectores de Householder a una matriz Las reflexiones de Householder se utilizan para hacer ceros por debajo de la **diagonal** a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}`Ejemplo aplicando transformaciones de Gauss a una matriz `). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$: $$\begin{array}{l}R_2A^{(1)} = R_2\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & * & * & * \\0 & * & * & * \\0 & * & * & *\end{array}\right]\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & 0 & * & * \\0 & 0 & * & * \\0 & 0 & * & *\end{array}\right]:= A^{(2)}\end{array}$$donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexión de Householder. En este caso $$R_2 = \left [ \begin{array}{cc}1 & 0 \\0 & \hat{R_2}\end{array}\right ]$$ con $\hat{R}_2$ una matriz de reflexión de Householder que hace ceros por debajo de de $a_{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$:* No modifica el primer renglón de $A^{(1)}$.* No destruye los ceros de la primer columna de $A^{(1)}$.* $R_2$ es una matriz de reflexión de Householder. 
```{admonition} Observación:class: tipPara la implementación computacional **no se inserta** $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 3}$:$$A =\left [\begin{array}{ccc}3 & 2 & -1 \\2 & 3 & 2 \\-1 & 2 & 3 \\2 & 1 & 4\end{array}\right ] $$y aplíquense reflexiones de Householder para llevarla a una forma triangular superior.
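Para ejemplos como el que sigue conviene encapsular el cálculo del vector de Householder con la elección de signo $v = x + signo(x_1)||x||_2e_1$ discutida arriba; un *sketch* (el nombre `vector_householder` es hipotético):

```python
import numpy as np

def vector_householder(x):
    # Devuelve (v, beta) tales que (I - beta v v^T)x = -signo(x_1)||x||_2 e_1.
    # La elección de signo evita el problema de cancelación en x_1.
    x = np.asarray(x, dtype=float)
    v = x.copy()
    signo = 1.0 if x[0] >= 0 else -1.0
    v[0] = x[0] + signo*np.linalg.norm(x)
    beta = 2/v.dot(v)
    return v, beta

x_col = np.array([3., 2., -1., 2.])        # primera columna de la matriz A del ejemplo
v, beta = vector_householder(x_col)
print(x_col - beta*v*(v.dot(x_col)))       # (-||x_col||_2, 0, 0, 0)
```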
###Code
A = np.array([[3 ,2, -1],
[2 ,3 ,2],
[-1, 2 ,3],
[2 ,1 ,4]], dtype = float)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$.```
###Code
e1 = np.array([1,0,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$.```
###Code
v = A[:,0] + np.linalg.norm(A[:,0])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
print(beta)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$.**```
###Code
print(A[:,0] - beta*v*v.dot(A[:,0]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(1)} = R_1 A^{(0)}$.```
###Code
A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:]))
print(A1)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que a diferencia de las transformaciones de Gauss la reflexión de Householder $R_1$ sí modifica el primer renglón de $A^{(0)}$, ver {ref}`Después de hacer la multiplicación... `.``` ```{margin}Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:,0]))
print(np.linalg.norm(A[:,0]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$.** ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$.```
###Code
e1 = np.array([1, 0, 0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$.```
###Code
v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$.**```
###Code
print(A1[1:,1] - beta*v*v.dot(A1[1:,1]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(2)}[2:4, 2:3]$.```
###Code
A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:]))
print(A2_aux)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$.```
###Code
print(np.linalg.norm(A1[1:,1]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$.**
###Code
e1 = np.array([1, 0])
v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1
print(v)
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3:4, 3]$.```
###Code
A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1])
print(A3_aux)
print(np.linalg.norm(A2_aux[1:,1]))
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) y *row_stack*:
###Code
m,n = A.shape
number_of_zeros = m-2
A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux))
print(A3_aux_2)
A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2))
print(A3_aux_3)
number_of_zeros = m-1
A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3))
print(A3_aux_4)
###Output
_____no_output_____
###Markdown
La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es:
###Code
A3 = np.row_stack((A1[0, 0:], A3_aux_4))
print(A3)
###Output
_____no_output_____
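El procedimiento anterior puede escribirse como un ciclo sobre las columnas; un *sketch* autocontenido que triangulariza a $A$ con reflexiones de Householder usando la misma elección de signo (se asume que ninguna subcolumna es cero):

```python
import numpy as np

A = np.array([[3., 2., -1.],
              [2., 3., 2.],
              [-1., 2., 3.],
              [2., 1., 4.]])
R = A.copy()
m, n = R.shape
for j in range(n):
    x = R[j:, j]
    v = x.copy()
    signo = 1.0 if x[0] >= 0 else -1.0
    v[0] = x[0] + signo*np.linalg.norm(x)       # v = x + signo(x_1)||x||_2 e_1
    beta = 2/v.dot(v)
    R[j:, j:] = R[j:, j:] - beta*np.outer(v, v.dot(R[j:, j:]))  # reflexión sólo en la submatriz
print(R)   # triangular superior; para esta A coincide con A^(3) salvo redondeo
```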
###Markdown
Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$:
###Code
q,r = np.linalg.qr(A)
print("Q:")
print(q)
print("R:")
print(r)
###Output
Q:
[[-0.707 0. 0.471]
[-0.471 -0.527 0.079]
[ 0.236 -0.843 -0.157]
[-0.471 0.105 -0.864]]
R:
[[-4.243 -2.828 -1.414]
[ 0. -3.162 -3.162]
[ 0. 0. -4.243]]
###Markdown
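La matriz $Q$ devuelta por `np.linalg.qr` tiene columnas ortonormales, como se describió en la sección de matrices ortogonales; un chequeo rápido autocontenido:

```python
import numpy as np

A = np.array([[3., 2., -1.],
              [2., 3., 2.],
              [-1., 2., 3.],
              [2., 1., 4.]])
q, r = np.linalg.qr(A)
print(np.allclose(q.T@q, np.eye(3)))   # Q^T Q = I_3: columnas ortonormales
print(np.allclose(q@r, A))             # QR reproduce a A
```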
```{admonition} Ejercicio:class: tipAplicar reflexiones de Householder a la matriz$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$para obtener una matriz triangular superior.``` (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Si $u, v \in \mathbb{R}^2-\{0\}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$: A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente:$$\begin{array}{l}\left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] \cdot \left[\begin{array}{c}u_1\\u_2\end{array}\right]\end{array}$$ ```{admonition} DefiniciónLa matriz $R_O$:$$R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] $$se nombra matriz de **rotación** o **rotaciones Givens**, es una matriz ortogonal pues $R_O^TR_O=I_2$.La multiplicación $v=R_Ou$ es una rotación en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicación $u=R_O^Tv$ es una rotación en sentido de las manecillas del reloj y el ángulo asociado es $-\theta$.``` Ejemplo aplicando rotaciones Givens a un vector Rotar al vector $v=(1,1)^T$ un ángulo de $45^o$ en **sentido contrario a las manecillas del reloj**.
###Code
v=np.array([1,1])
###Output
_____no_output_____
###Markdown
La matriz $R_O$ es: $$R_O = \left[ \begin{array}{cc}cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\\sen(\frac{\pi}{4}) & cos(\frac{\pi}{4})\end{array}\right ]$$
###Code
theta=math.pi/4
RO=np.array([[math.cos(theta), -math.sin(theta)],
[math.sin(theta), math.cos(theta)]])
print(RO)
print(RO@v)
print(np.linalg.norm(v))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotación Givens son matrices ortogonales y por tanto isometrías: $||R_0v||_2=||v||_2$.``` En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotación se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y **se desea hacer cero la entrada $v_2$ de $v$** se puede utilizar la matriz de rotación:$$R_O = \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]$$ pues:$$\begin{array}{l} \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ] \cdot \left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\0\end{array}\right ]=\left[ \begin{array}{c}||v||_2\\0\end{array}\right ]\end{array}$$ Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene :$$R_O=\left[ \begin{array}{cc}cos\theta & sen\theta\\-sen\theta & cos\theta\end{array}\right]$$que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y:$$R_O=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]$$que es una matriz de rotación para un ángulo que gira **en sentido de las manecillas del reloj**. Para **hacer cero la entrada $v_1$ de $v$** hay que usar:$$\begin{array}{l}R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]\end{array}$$que es una matriz de rotación para un ángulo que gira **en sentido contrario de las manecillas del reloj**. ```{admonition} Ejercicio:class: tipUsar una matriz de rotación Givens para rotar al vector $(-3, 4)^T$ un ángulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj.``` Ejemplo aplicando rotaciones Givens a una matriz Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son **seleccionadas**. Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y $$R_{24}^\theta=\left [ \begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]$$entonces: $$R_{24}^\theta x =\begin{array}{l}\left [\begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]\left [\begin{array}{c}x_1 \\x_2 \\x_3 \\x_4\end{array}\right ]=\left [\begin{array}{c}x_1 \\\sqrt{x_2^2 + x_4^2} \\x_3 \\0\end{array}\right ]\end{array}$$ Y se escribe que se hizo una rotación en el plano $(2,4)$. 
```{admonition} Observación:class: tipObsérvese que sólo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicación:$$\begin{array}{l}\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]\left [ \begin{array}{c}x_2\\x_4\end{array}\right ]\end{array}$$para tales entradas.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 4}$:$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$y aplíquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz **triangular superior**. **Entrada $a_{21}$, plano $(1,2)$:**
###Code
idx_1 = 0
idx_2 = 1
idx_column = 0
A = np.array([[4, 1, -2, 2],
[1, 2, 0, 1],
[-2, 0, 3, -2],
[2, 1, -2, -1]], dtype=float)
print(A)
a_11 = A[idx_1,idx_column]
a_21 = A[idx_2,idx_column]
norm = math.sqrt(a_11**2 + a_21**2)
cos_theta = a_11/norm
sen_theta = a_21/norm
R12 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R12)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A_subset = np.row_stack((A[idx_1,:], A[idx_2,:]))
print(A_subset)
print(R12@A_subset)
A1_aux = R12@A_subset
print(A1_aux)
###Output
_____no_output_____
###Markdown
Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
###Code
A1 = A.copy()
A1[idx_1, :] = A1_aux[0, :]
A1[idx_2, :] = A1_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$.```
###Code
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
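El mismo patrón se repetirá para cada entrada por debajo de la diagonal, así que conviene una función auxiliar que construya la rotación $2 \times 2$ que hace cero la segunda componente; un *sketch* (el nombre `rotacion_givens` es hipotético):

```python
import math
import numpy as np

def rotacion_givens(a, b):
    # Devuelve [[c, s], [-s, c]] tal que aplicada a (a, b)^T produce (sqrt(a^2+b^2), 0)^T.
    norm = math.sqrt(a**2 + b**2)
    c, s = a/norm, b/norm
    return np.array([[c, s],
                     [-s, c]])

print(rotacion_givens(4., 1.) @ np.array([4., 1.]))   # misma rotación que R12 de arriba
```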
###Markdown
**Entrada $a_{31}$, plano $(1,3)$:**
###Code
idx_1 = 0
idx_2 = 2
idx_column = 0
a_11 = A1[idx_1, idx_column]
a_31 = A1[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_31**2)
cos_theta = a_11/norm
sen_theta = a_31/norm
R13 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R13)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:]))
print(A1_subset)
print(R13@A1_subset)
A2_aux = R13@A1_subset
print(A2_aux)
A2 = A1.copy()
A2[idx_1, :] = A2_aux[0, :]
A2[idx_2, :] = A2_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$.```
###Code
print(A2)
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A2[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{41}$, plano $(1,4)$:**
###Code
idx_1 = 0
idx_2 = 3
idx_column = 0
a_11 = A2[idx_1, idx_column]
a_41 = A2[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_41**2)
cos_theta = a_11/norm
sen_theta = a_41/norm
R14 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R14)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:]))
print(A2_subset)
print(R14@A2_subset)
A3_aux = R14@A2_subset
print(A3_aux)
A3 = A2.copy()
A3[idx_1, :] = A3_aux[0, :]
A3[idx_2, :] = A3_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$.```
###Code
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A3[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{32}$, plano $(2,3)$:**
###Code
idx_1 = 1
idx_2 = 2
idx_column = 1
a_22 = A2[idx_1, idx_column]
a_32 = A2[idx_2, idx_column]
norm = math.sqrt(a_22**2 + a_32**2)
cos_theta = a_22/norm
sen_theta = a_32/norm
R23 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R23)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:]))
print(A3_subset)
print(R23@A3_subset)
A4_aux = R23@A3_subset
print(A4_aux)
A4 = A3.copy()
A4[idx_1, :] = A4_aux[0, :]
A4[idx_2, :] = A4_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$.```
###Code
print(A4)
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,2]$.```
###Code
print(np.linalg.norm(A4[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
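Faltan por anular las entradas $a_{42}$ y $a_{43}$; el proceso completo puede escribirse como un doble ciclo sobre columnas y renglones con la misma construcción $2 \times 2$ de `rotacion_givens`. Un *sketch* autocontenido que triangulariza la matriz $A$ completa con rotaciones Givens:

```python
import math
import numpy as np

A = np.array([[4., 1., -2., 2.],
              [1., 2., 0., 1.],
              [-2., 0., 3., -2.],
              [2., 1., -2., -1.]])
R = A.copy()
m, n = R.shape
for j in range(n):                     # columna j
    for i in range(j+1, m):            # renglón i debajo de la diagonal
        if R[i, j] != 0:
            norm = math.sqrt(R[j, j]**2 + R[i, j]**2)
            c, s = R[j, j]/norm, R[i, j]/norm
            G = np.array([[c, s], [-s, c]])
            R[[j, i], :] = G @ R[[j, i], :]   # rota sólo los renglones j e i
print(R)   # triangular superior salvo errores de redondeo
```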
###Markdown
(OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker:Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker y `` por la versión más actualizada que se presenta en la documentación.`docker run --rm -v :/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:`password para jupyterlab: `qwerty`Detener el contenedor de docker:`docker stop jupyterlab_optimizacion`Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).``` --- Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0). ```{admonition} Al final de esta nota el y la lectora::class: tip* Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices.* Se familizarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica.``` Las operaciones básicas del Álgebra Lineal Numérica podemos dividirlas en vectoriales y matriciales. Vectoriales * **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$ * **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.* **Producto interno estándar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.* **Multiplicación *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.* **División *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-ésimo renglón de $A$. 
Matriciales * **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$* **Multiplicación por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.* **Multiplicación entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$.* **Multiplicación *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.* **División *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$. **Como ejemplos de transformaciones básicas del Álgebra Lineal Numérica se encuentran:** (TGAUSS)= Transformaciones de Gauss En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{n \times n} \forall i,j=1,2,\dots,n$. ```{margin}Como ejemplo de vector canónico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$.``` Considérese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-ésimo vector canónico: vector con un $1$ en la posición $k$ y ceros en las entradas restantes. ```{admonition} DefiniciónUna transformación de Gauss está definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$.$a_{kk}$ se le nombra **pivote** y **debe ser diferente de cero**.``` Las transformaciones de Gauss se utilizan para hacer ceros por debajo del **pivote**. (EG1)= Ejemplo aplicando transformaciones de Gauss a un vector Considérese al vector $a=(-2,3,4)^T$. Definir una transformación de Gauss para hacer ceros por debajo de $a_1$ y otra transformación de Gauss para hacer cero la entrada $a_3$ **Solución:**
###Code
import numpy as np
import math
np.set_printoptions(precision=3, suppress=True)
###Output
_____no_output_____
###Markdown
a)Para hacer ceros por debajo del **pivote** $a_1 = -2$:
###Code
a = np.array([-2,3,4])
pivote = a[0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$```
###Code
l1 = np.array([0,a[1]/pivote, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$.**```
###Code
L1_a = a-l1*(e1.dot(a))
print(L1_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1@a)
###Output
_____no_output_____
###Markdown
b) Para hacer ceros por debajo del **pivote** $a_2 = 3$:
###Code
a = np.array([-2,3,4])
pivote = a[1]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$```
###Code
l2 = np.array([0,0, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda.```
###Code
e2 = np.array([0,1,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$.**```
###Code
L2_a = a-l2*(e2.dot(a))
print(L2_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior: ```{margin}$L_2 = I_3 - \ell_2 e_2^T$.```
###Code
L2 = np.eye(3) - np.outer(l2,e2)
print(L2)
print(L2@a)
###Output
_____no_output_____
###Markdown
(EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su **diagonal** y tener una forma **triangular superior**, realizamos los productos matriciales:$$L_2 L_1 A$$ donde: $L_1, L_2$ son transformaciones de Gauss. Posterior a realizar el producto $L_2 L_1 A$ se obtiene una **matriz triangular superior:**$$L_2L_1A = \left [\begin{array}{ccc}* & * & *\\0 & * & * \\0 & 0 & * \end{array}\right ]$$ **Ejemplo:** a) Utilizando $L_1$
###Code
A = np.array([[-1, 2, 5],
[4, 5, -7],
[3, 0, 8]], dtype=float)
print(A)
###Output
_____no_output_____
###Markdown
Para hacer ceros por debajo del **pivote** $a_{11} = -1$:
###Code
pivote = A[0, 0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$```
###Code
l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote])
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$.**```
###Code
L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0]))
print(L1_A_1)
###Output
_____no_output_____
###Markdown
**Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$:** ```{margin}Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$.```
###Code
L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1]))
print(L1_A_2)
###Output
_____no_output_____
###Markdown
```{margin}Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$.```
###Code
L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2]))
print(L1_A_3)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1 @ A)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipAl aplicar $L_1$ a la primer columna de $A$ **siempre** obtenemos ceros por debajo del pivote que en este caso es $a_{11}$.``` (EG2.1)= **Después de hacer la multiplicación $L_1A$ en cualquiera de los dos casos (construyendo o no explícitamente $L_1$) no se modifica el primer renglón de $A$:**
###Code
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Este es el primer renglón de $A$.```
###Code
print(A[0,:])
###Output
_____no_output_____
###Markdown
```{margin}Tomando el primer renglón del producto $L_1A$.```
###Code
print((L1 @ A)[0,:])
###Output
_____no_output_____
###Markdown
**por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante.** ```{admonition} Observación:class: tipDada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicación por la segunda y tercer columna de $A$ se tiene:$$e_1^T A[1:3,2] = A[0,2]$$$$e_1^T A[1:3,3] = A[0,3]$$respectivamente.``` ```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 1]))
###Output
_____no_output_____
###Markdown
```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 2]))
###Output
_____no_output_____
###Markdown
y puede escribirse de forma compacta: $$e_1^T A[1:3,2:3] = A[0, 2:3]$$
###Code
print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3
print(A[0, 1:]) #also we could have use this statement
###Output
_____no_output_____
###Markdown
Entonces los productos $\ell_1 e_1^T A[:,2]$ y $\ell_1 e_1^T A[:,3]$ quedan respectivamente como: $$\ell_1A[0, 2]$$
###Code
print(l1*A[0,1])
###Output
_____no_output_____
###Markdown
$$\ell_1A[0,3]$$
###Code
print(l1*A[0, 2])
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipEn los dos cálculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que únicamente se modifican dos entradas de la segunda y tercer columna de $A$.``` De forma compacta y aprovechando funciones en *NumPy* como [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) se puede calcular lo anterior como:
###Code
print(np.outer(l1[1:3],A[0,1:3]))
print(np.outer(l1[1:],A[0,1:])) #also we could have use this statement
###Output
_____no_output_____
###Markdown
Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda: ```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sólo operar de la segunda columna y segundo renglón en adelante.**```
###Code
print(A[1:, 1:] - np.outer(l1[1:],A[0,1:]))
###Output
_____no_output_____
###Markdown
Compárese con:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar el primer renglón y primera columna al producto. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.vstack.html) y *row_stack*:
###Code
A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:])
m, n = A.shape
number_of_zeros = m-1
A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros
print(A_aux_2)
A_aux_3 = np.row_stack((A[0, :], A_aux_2))
print(A_aux_3)
###Output
_____no_output_____
###Markdown
que es el resultado de:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
**Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$.** Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$. ```{admonition} Ejercicio:class: tipCalcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz:$$A = \left [\begin{array}{ccc}1 & 4 & -2 \\-3 & 9 & 8 \\5 & 1 & -6\end{array}\right]$$tomando en cuenta que en este caso $L_2$ sólo opera del segundo renglón y segunda columna en adelante:y obtener una matriz triangular superior en cada ejercicio.``` ```{admonition} Comentarios* Las transformaciones de Gauss se utilizan para la fase de eliminación del método de eliminación Gaussiana o también llamada factorización $LU$. Ver [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination).* La factorización $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un método estable numéricamente respecto al redondeo en la práctica pero inestable en la teoría.``` (MATORTMATCOLORTONO)= Matriz ortogonal y matriz con columnas ortonormales Un conjunto de vectores $\{x_1, \dots, x_p\}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza: ```{admonition} Comentarios* Si el conjunto $\{x_1,\dots,x_n\}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases}1 &\text{ si } i=j,\\0 &\text{ si } i\neq j\end{cases}$, ver [Kronecker_delta](https://en.wikipedia.org/wiki/Kronecker_delta) se le nombra conjunto **ortonormal**, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$. * Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto $\{x_1,\dots, x_n\}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notación matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaño $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra **matriz con columnas ortonormales**. * Si cada $x_i$ está en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construímos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra **matriz ortogonal**.* Entre las propiedades más importantes de las matrices ortogonales o con columnas ortonormales es que son isometrías bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numéricamente bajo el redondeo, ver {ref}`Condición de un problema y estabilidad de un algoritmo `.``` (TREF)= Transformaciones de reflexión En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Reflectores de Householder ```{margin}Recuerda que $u^\perp = \{x \in \mathbb{R}^m| u^Tx=0\}$ es un subespacio de $\mathbb{R}^m$ de dimensión $m-1$ y es el complemento ortogonal de $u$.``` ```{admonition} DefiniciónLas reflexiones de Householder son matrices **simétricas, ortogonales** y se construyen a partir de un vector $v \neq 0$ definiendo:$$R = I_m-\beta v v^T$$ con $v \in \mathbb{R}^m - \{0\}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama **vector de Householder**. 
La multiplicación $Rx$ representa la reflexión del vector $x \in \mathbb{R}^m$ a través del hiperplano $v^\perp$.``` ```{admonition} ComentarioAlgunas propiedades de las reflexiones de Householder son: $R^TR = R^2 = I_m$, $R^{-1}=R$, $det(R)=-1$.``` ```{sidebar} Proyector ortogonal elementalEn este dibujo se utiliza el **proyector ortogonal elemental** sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyección ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales **no** son matrices ortogonales, son singulares, son simétricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span\{u\}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$.Recuerda que $span\{u\}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span\{u\} = \left \{\displaystyle \sum_{i=1}^m k_i u_i | k_i \in \mathbb{R} \forall i =1,\dots,m \right \}$.``` Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - \{0\}$ , $||u||_2 = 1$ y $R=I_m-2 u u^T$ es el siguiente : Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector. Ejemplo aplicando reflectores de Householder a un vector Considérese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$.
###Code
x = np.array([1,2,3])
print(x)
###Output
_____no_output_____
###Markdown
Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x-np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
El resultado de $Rx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por:
###Code
print(np.linalg.norm(x))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tip* Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexión de Householder son matrices ortogonales y por tanto isometrías: $||Rv||_2=||v||_2$.* Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}`Ejemplo aplicando transformaciones de Gauss a un vector `.``` A continuación se muestra que el producto $Rx$ si se construye $R$ es equivalente a lo anterior: ```{margin}$R = I_3 - \beta v v^T$.```
###Code
R = np.eye(3)-beta*np.outer(v,np.transpose(v))
print(R)
print(R@x)
###Output
_____no_output_____
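###Markdown
A manera de verificación numérica, un esbozo (que asume la variable `R` construida en la celda anterior) para comprobar las propiedades mencionadas: $R$ simétrica, ortogonal, $R^2=I_3$ y $det(R)=-1$:
###Code
import numpy as np
# verificación numérica de las propiedades de la reflexión de Householder R
print(np.allclose(R, R.T))             # R es simétrica
print(np.allclose(R.T@R, np.eye(3)))   # R es ortogonal: R^T R = I_3
print(np.allclose(R@R, np.eye(3)))     # R es su propia inversa: R^2 = I_3
print(np.linalg.det(R))                # det(R) = -1 (salvo errores de redondeo)
###Output
_____no_output_____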
###Markdown
Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$.". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x+np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$.``` ¿Cuál definición del vector de Householder usar? En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicación $Rx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$): El vector $v^+ = - u_0^+ = x-||x||_2e_1$ refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). El vector $v^- = -u_0^- = x+||x||_2e_1$ refleja $x$ respecto al subespacio $H^-$. Para reducir los errores por redondeo y evitar el problema de cancelación en la aritmética de punto flotante (ver [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)) se utiliza:$$v = x+signo(x_1)||x||_2e_1$$donde: $signo(x_1) = \begin{cases}1 &\text{ si } x_1 \geq 0 ,\\-1 &\text{ si } x_1 < 0\end{cases}.$La idea de la definción anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$. ```{admonition} Comentarios* Otra forma de lidiar con el problema de cancelación es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulación algebraica como sigue:$$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$* En la implementación del cálculo del vector de Householder, es útil que $v_1=1$ y así únicamente se almacenará $v[2:m]$. Al vector $v[2:m]$ se le nombra **parte esencial del vector de Householder**.* Las transformaciones de reflexión de Householder se utilizan para la factorización QR. Ver [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition), la cual es una factorización estable numéricamente bajo el redondeo.``` ```{admonition} Ejercicio:class: tipReflejar al vector $\left [\begin{array}{c}1 \\1 \\\end{array}\right ]$ utilizando al vector $\left [\begin{array}{c}\frac{-4}{3}\\\frac{2}{3}\end{array}\right ]$ para construir $R$.``` Ejemplo aplicando reflectores de Householder a una matriz Las reflexiones de Householder se utilizan para hacer ceros por debajo de la **diagonal** a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}`Ejemplo aplicando transformaciones de Gauss a una matriz `). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$: $$\begin{array}{l}R_2A^{(1)} = R_2\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & * & * & * \\0 & * & * & * \\0 & * & * & *\end{array}\right]=\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & 0 & * & * \\0 & 0 & * & * \\0 & 0 & * & *\end{array}\right]:= A^{(2)}\end{array}$$donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexión de Householder. En este caso $$R_2 = \left [ \begin{array}{cc}1 & 0 \\0 & \hat{R_2}\end{array}\right ]$$ con $\hat{R}_2$ una matriz de reflexión de Householder que hace ceros por debajo de de $a_{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$:* No modifica el primer renglón de $A^{(1)}$.* No destruye los ceros de la primer columna de $A^{(1)}$.* $R_2$ es una matriz de reflexión de Householder. 
```{admonition} Observación:class: tipPara la implementación computacional **no se inserta** $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 3}$:$$A =\left [\begin{array}{ccc}3 & 2 & -1 \\2 & 3 & 2 \\-1 & 2 & 3 \\2 & 1 & 4\end{array}\right ] $$y aplíquense reflexiones de Householder para llevarla a una forma triangular superior.
###Code
A = np.array([[3 ,2, -1],
[2 ,3 ,2],
[-1, 2 ,3],
[2 ,1 ,4]], dtype = float)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$.```
###Code
e1 = np.array([1,0,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$.```
###Code
v = A[:,0] + np.linalg.norm(A[:,0])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
print(beta)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$.**```
###Code
print(A[:,0] - beta*v*v.dot(A[:,0]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(1)} = R_1 A^{(0)}$.```
###Code
A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:]))
print(A1)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que a diferencia de las transformaciones de Gauss la reflexión de Householder $R_1$ sí modifica el primer renglón de $A^{(0)}$, ver {ref}`Después de hacer la multiplicación... `.``` ```{margin}Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:,0]))
print(np.linalg.norm(A[:,0]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$.** ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$.```
###Code
e1 = np.array([1, 0, 0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$.```
###Code
v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$.**```
###Code
print(A1[1:,1] - beta*v*v.dot(A1[1:,1]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(2)}[2:4, 2:3]$.```
###Code
A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:]))
print(A2_aux)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$.```
###Code
print(np.linalg.norm(A1[1:,1]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$.**
###Code
e1 = np.array([1, 0])
v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1
print(v)
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3:4, 3]$.```
###Code
A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1])
print(A3_aux)
print(np.linalg.norm(A2_aux[1:,1]))
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html) y *row_stack*:
###Code
m,n = A.shape
number_of_zeros = m-2
A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux))
print(A3_aux_2)
A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2))
print(A3_aux_3)
number_of_zeros = m-1
A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3))
print(A3_aux_4)
###Output
_____no_output_____
###Markdown
La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es:
###Code
A3 = np.row_stack((A1[0, 0:], A3_aux_4))
print(A3)
###Output
_____no_output_____
###Markdown
Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$:
###Code
q,r = np.linalg.qr(A)
print("Q:")
print(q)
print("R:")
print(r)
###Output
Q:
[[-0.707 0. 0.471]
[-0.471 -0.527 0.079]
[ 0.236 -0.843 -0.157]
[-0.471 0.105 -0.864]]
R:
[[-4.243 -2.828 -1.414]
[ 0. -3.162 -3.162]
[ 0. 0. -4.243]]
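###Markdown
Un esbozo de verificación adicional (asume las variables `A`, `A3`, `q` y `r` de las celdas anteriores): comparamos en valor absoluto los primeros tres renglones de $A^{(3)}$ con $R$, pues los signos pueden diferir según la convención usada para los vectores de Householder, y comprobamos que $QR$ reconstruye a $A$:
###Code
import numpy as np
# |A3[0:3, :]| coincide con |r|; los signos pueden diferir por la elección
# de signo en los vectores de Householder
print(np.allclose(np.abs(A3[:3, :]), np.abs(r)))
# el producto QR reconstruye a la matriz original A
print(np.allclose(q@r, A))
# el último renglón de A3 es cero
print(np.allclose(A3[3, :], 0))
###Output
_____no_output_____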
###Markdown
```{admonition} Ejercicio:class: tipAplicar reflexiones de Householder a la matriz$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$para obtener una matriz triangular superior.``` (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Si $u, v \in \mathbb{R}^2-\{0\}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$: A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente:$$\begin{array}{l}\left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] \cdot \left[\begin{array}{c}u_1\\u_2\end{array}\right]\end{array}$$ ```{admonition} DefiniciónLa matriz $R_O$:$$R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] $$se nombra matriz de **rotación** o **rotaciones Givens**, es una matriz ortogonal pues $R_O^TR_O=I_2$.La multiplicación $v=R_Ou$ es una rotación en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicación $u=R_O^Tv$ es una rotación en sentido de las manecillas del reloj y el ángulo asociado es $-\theta$.``` Ejemplo aplicando rotaciones Givens a un vector Rotar al vector $v=(1,1)^T$ un ángulo de $45^o$ en **sentido contrario a las manecillas del reloj**.
###Code
v=np.array([1,1])
###Output
_____no_output_____
###Markdown
La matriz $R_O$ es: $$R_O = \left[ \begin{array}{cc}cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\\sen(\frac{\pi}{4}) & cos(\frac{\pi}{4})\end{array}\right ]$$
###Code
theta=math.pi/4
RO=np.array([[math.cos(theta), -math.sin(theta)],
[math.sin(theta), math.cos(theta)]])
print(RO)
print(RO@v)
print(np.linalg.norm(v))
###Output
_____no_output_____
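###Markdown
Como se señaló en la definición, $R_O^T$ rota en sentido de las manecillas del reloj (ángulo $-\theta$). Un esbozo numérico que asume las variables `RO` y `v` de la celda anterior:
###Code
import numpy as np
# R_O^T deshace la rotación anterior y regresa al vector original v
print(RO.T@(RO@v))
# R_O es ortogonal: R_O^T R_O = I_2
print(np.allclose(RO.T@RO, np.eye(2)))
###Output
_____no_output_____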
###Markdown
```{admonition} Observación:class: tipObserva que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotación Givens son matrices ortogonales y por tanto isometrías: $||R_0v||_2=||v||_2$.``` En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotación se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y **se desea hacer cero la entrada $v_2$ de $v$** se puede utilizar la matriz de rotación:$$R_O = \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]$$ pues:$$\begin{array}{l} \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ] \cdot \left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\0\end{array}\right ]=\left[ \begin{array}{c}||v||_2\\0\end{array}\right ]\end{array}$$ Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene :$$R_O=\left[ \begin{array}{cc}cos\theta & sen\theta\\-sen\theta & cos\theta\end{array}\right]$$que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y:$$R_O=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]$$que es una matriz de rotación para un ángulo que gira **en sentido de las manecillas del reloj**. Para **hacer cero la entrada $v_1$ de $v$** hay que usar:$$\begin{array}{l}R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]\end{array}$$que es una matriz de rotación para un ángulo que gira **en sentido contrario de las manecillas del reloj**. ```{admonition} Ejercicio:class: tipUsar una matriz de rotación Givens para rotar al vector $(-3, 4)^T$ un ángulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj.``` Ejemplo aplicando rotaciones Givens a una matriz Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son **seleccionadas**. Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y $$R_{24}^\theta=\left [ \begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]$$entonces: $$R_{24}^\theta x =\begin{array}{l}\left [\begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]\left [\begin{array}{c}x_1 \\x_2 \\x_3 \\x_4\end{array}\right ]=\left [\begin{array}{c}x_1 \\\sqrt{x_2^2 + x_4^2} \\x_3 \\0\end{array}\right ]\end{array}$$ Y se escribe que se hizo una rotación en el plano $(2,4)$. 
```{admonition} Observación:class: tipObsérvese que sólo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicación:$$\begin{array}{l}\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]\left [ \begin{array}{c}x_2\\x_4\end{array}\right ]\end{array}$$para tales entradas.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 4}$:$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$y aplíquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz **triangular superior**. **Entrada $a_{21}$, plano $(1,2)$:**
###Code
idx_1 = 0
idx_2 = 1
idx_column = 0
A = np.array([[4, 1, -2, 2],
[1, 2, 0, 1],
[-2, 0, 3, -2],
[2, 1, -2, -1]], dtype=float)
print(A)
a_11 = A[idx_1,idx_column]
a_21 = A[idx_2,idx_column]
norm = math.sqrt(a_11**2 + a_21**2)
cos_theta = a_11/norm
sen_theta = a_21/norm
R12 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R12)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A_subset = np.row_stack((A[idx_1,:], A[idx_2,:]))
print(A_subset)
print(R12@A_subset)
A1_aux = R12@A_subset
print(A1_aux)
###Output
_____no_output_____
###Markdown
Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
###Code
A1 = A.copy()
A1[idx_1, :] = A1_aux[0, :]
A1[idx_2, :] = A1_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$.```
###Code
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{31}$, plano $(1,3)$:**
###Code
idx_1 = 0
idx_2 = 2
idx_column = 0
a_11 = A1[idx_1, idx_column]
a_31 = A1[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_31**2)
cos_theta = a_11/norm
sen_theta = a_31/norm
R13 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R13)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:]))
print(A1_subset)
print(R13@A1_subset)
A2_aux = R13@A1_subset
print(A2_aux)
A2 = A1.copy()
A2[idx_1, :] = A2_aux[0, :]
A2[idx_2, :] = A2_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$.```
###Code
print(A2)
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A2[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{41}$, plano $(1,4)$:**
###Code
idx_1 = 0
idx_2 = 3
idx_column = 0
a_11 = A2[idx_1, idx_column]
a_41 = A2[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_41**2)
cos_theta = a_11/norm
sen_theta = a_41/norm
R14 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R14)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:]))
print(A2_subset)
print(R14@A2_subset)
A3_aux = R14@A2_subset
print(A3_aux)
A3 = A2.copy()
A3[idx_1, :] = A3_aux[0, :]
A3[idx_2, :] = A3_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$.```
###Code
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A3[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{32}$, plano $(2,3)$:**
###Code
idx_1 = 1
idx_2 = 2
idx_column = 1
a_22 = A2[idx_1, idx_column]
a_32 = A2[idx_2, idx_column]
norm = math.sqrt(a_22**2 + a_32**2)
cos_theta = a_22/norm
sen_theta = a_32/norm
R23 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R23)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:]))
print(A3_subset)
print(R23@A3_subset)
A4_aux = R23@A3_subset
print(A4_aux)
A4 = A3.copy()
A4[idx_1, :] = A4_aux[0, :]
A4[idx_2, :] = A4_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$.```
###Code
print(A4)
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,2]$.```
###Code
print(np.linalg.norm(A4[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
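###Markdown
Para completar la forma triangular superior faltan por hacerse cero las entradas $a_{42}$ (plano $(2,4)$) y $a_{43}$ (plano $(3,4)$). Un esbozo que repite el mismo patrón con una función auxiliar hipotética `aplica_rotacion_givens` y que asume las variables `A` y `A4` de las celdas anteriores; al final se compara en valor absoluto con la matriz $R$ de la factorización $QR$ de $A$:
###Code
import numpy as np
import math

def aplica_rotacion_givens(M, idx_1, idx_2, idx_column):
    # construye la rotación en el plano (idx_1+1, idx_2+1) y la aplica a los
    # renglones idx_1 e idx_2 de M para hacer cero la entrada M[idx_2, idx_column]
    a = M[idx_1, idx_column]
    b = M[idx_2, idx_column]
    norm = math.sqrt(a**2 + b**2)
    cos_theta = a/norm
    sen_theta = b/norm
    R = np.array([[cos_theta, sen_theta],
                  [-sen_theta, cos_theta]])
    M_new = M.copy()
    M_new[[idx_1, idx_2], :] = R@M[[idx_1, idx_2], :]
    return M_new

A5 = aplica_rotacion_givens(A4, 1, 3, 1)  # entrada a_42, plano (2,4)
A6 = aplica_rotacion_givens(A5, 2, 3, 2)  # entrada a_43, plano (3,4)
print(A6)
# comparación en valor absoluto con R de la factorización QR de A
print(np.allclose(np.abs(A6), np.abs(np.linalg.qr(A)[1])))
###Output
_____no_output_____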
###Markdown
b) Para hacer ceros por debajo del **pivote** $a_2 = 3$:
###Code
a = np.array([-2,3,4])
pivote = a[1]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$```
###Code
l2 = np.array([0,0, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda.```
###Code
e2 = np.array([0,1,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$.**```
###Code
L2_a = a-l2*(e2.dot(a))
print(L2_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior: ```{margin}$L_2 = I_3 - \ell_2 e_2^T$.```
###Code
L2 = np.eye(3) - np.outer(l2,e2)
print(L2)
print(L2@a)
###Output
_____no_output_____
###Markdown
(EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su **diagonal** y tener una forma **triangular superior**, realizamos los productos matriciales:$$L_2 L_1 A$$ donde: $L_1, L_2$ son transformaciones de Gauss. Posterior a realizar el producto $L_2 L_1 A$ se obtiene una **matriz triangular superior:**$$L_2L_1A = \left [\begin{array}{ccc}* & * & *\\0 & * & * \\0 & 0 & * \end{array}\right ]$$ **Ejemplo:** a) Utilizando $L_1$
###Code
A = np.array([[-1, 2, 5],
[4, 5, -7],
[3, 0, 8]], dtype=float)
print(A)
###Output
_____no_output_____
###Markdown
Para hacer ceros por debajo del **pivote** $a_{11} = -1$:
###Code
pivote = A[0, 0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$```
###Code
l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote])
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$.**```
###Code
L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0]))
print(L1_A_1)
###Output
_____no_output_____
###Markdown
**Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$:** ```{margin}Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$.```
###Code
L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1]))
print(L1_A_2)
###Output
_____no_output_____
###Markdown
```{margin}Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$.```
###Code
L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2]))
print(L1_A_3)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1 @ A)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipAl aplicar $L_1$ a la primer columna de $A$ **siempre** obtenemos ceros por debajo del pivote que en este caso es $a_{11}$.``` (EG2.1)= **Después de hacer la multiplicación $L_1A$ en cualquiera de los dos casos (construyendo o no explícitamente $L_1$) no se modifica el primer renglón de $A$:**
###Code
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Este es el primer renglón de $A$.```
###Code
print(A[0,:])
###Output
_____no_output_____
###Markdown
```{margin}Tomando el primer renglón del producto $L_1A$.```
###Code
print((L1 @ A)[0,:])
###Output
_____no_output_____
###Markdown
**por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante.** ```{admonition} Observación:class: tipDada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicación por la segunda y tercer columna de $A$ se tiene:$$e_1^T A[1:3,2] = A[0,2]$$$$e_1^T A[1:3,3] = A[0,3]$$respectivamente.``` ```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 1]))
###Output
_____no_output_____
###Markdown
```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 2]))
###Output
_____no_output_____
###Markdown
y puede escribirse de forma compacta: $$e_1^T A[1:3,2:3] = A[0, 2:3]$$
###Code
print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3
print(A[0, 1:]) #also we could have use this statement
###Output
_____no_output_____
###Markdown
Entonces los productos $\ell_1 e_1^T A[:,2]$ y $\ell_1 e_1^T A[:,3]$ quedan respectivamente como: $$\ell_1A[0, 2]$$
###Code
print(l1*A[0,1])
###Output
_____no_output_____
###Markdown
$$\ell_1A[0,3]$$
###Code
print(l1*A[0, 2])
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipEn los dos cálculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que únicamente se modifican dos entradas de la segunda y tercer columna de $A$.``` De forma compacta y aprovechando funciones en *NumPy* como [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) se puede calcular lo anterior como:
###Code
print(np.outer(l1[1:3],A[0,1:3]))
print(np.outer(l1[1:],A[0,1:])) #also we could have use this statement
###Output
_____no_output_____
###Markdown
Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda: ```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sólo operar de la segunda columna y segundo renglón en adelante.**```
###Code
print(A[1:, 1:] - np.outer(l1[1:],A[0,1:]))
###Output
_____no_output_____
###Markdown
Compárese con:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar el primer renglón y primera columna al producto. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html) y *row_stack*:
###Code
A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:])
m, n = A.shape
number_of_zeros = m-1
A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros
print(A_aux_2)
A_aux_3 = np.row_stack((A[0, :], A_aux_2))
print(A_aux_3)
###Output
_____no_output_____
###Markdown
que es el resultado de:
###Code
print(L1 @ A)
###Output
_____no_output_____
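###Markdown
Un esbozo en código del paso que se detalla en seguida, esto es, la aplicación de $L_2$ para obtener la forma triangular superior $L_2L_1A$ (asume las variables `L1` y `A` de las celdas anteriores; la segunda matriz del ejercicio se deja a la lectora o lector):
###Code
import numpy as np
# esbozo del paso con L_2: hacer cero la entrada (3,2) de A1 = L_1 A
A1 = L1@A
pivote = A1[1, 1]
l2 = np.array([0, 0, A1[2, 1]/pivote])
e2 = np.array([0, 1, 0])
L2 = np.eye(3) - np.outer(l2, e2)
print(L2@A1)  # matriz triangular superior L_2 L_1 A
###Output
_____no_output_____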
###Markdown
**Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$.** Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$. ```{admonition} Ejercicio:class: tipCalcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz:$$A = \left [\begin{array}{ccc}1 & 4 & -2 \\-3 & 9 & 8 \\5 & 1 & -6\end{array}\right]$$tomando en cuenta que en este caso $L_2$ sólo opera del segundo renglón y segunda columna en adelante:y obtener una matriz triangular superior en cada ejercicio.``` ```{admonition} Comentarios* Las transformaciones de Gauss se utilizan para la fase de eliminación del método de eliminación Gaussiana o también llamada factorización $LU$. Ver [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination).* La factorización $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un método estable numéricamente respecto al redondeo en la práctica pero inestable en la teoría.``` (MATORTMATCOLORTONO)= Matriz ortogonal y matriz con columnas ortonormales Un conjunto de vectores $\{x_1, \dots, x_p\}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza: ```{admonition} Comentarios* Si el conjunto $\{x_1,\dots,x_n\}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases}1 &\text{ si } i=j,\\0 &\text{ si } i\neq j\end{cases}$, ver [Kronecker_delta](https://en.wikipedia.org/wiki/Kronecker_delta) se le nombra conjunto **ortonormal**, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$. * Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto $\{x_1,\dots, x_n\}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notación matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaño $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra **matriz con columnas ortonormales**. * Si cada $x_i$ está en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construímos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra **matriz ortogonal**.* Entre las propiedades más importantes de las matrices ortogonales o con columnas ortonormales es que son isometrías bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numéricamente bajo el redondeo, ver {ref}`Condición de un problema y estabilidad de un algoritmo `.``` (TREF)= Transformaciones de reflexión En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Reflectores de Householder ```{margin}Recuerda que $u^\perp = \{x \in \mathbb{R}^m| u^Tx=0\}$ es un subespacio de $\mathbb{R}^m$ de dimensión $m-1$ y es el complemento ortogonal de $u$.``` ```{admonition} DefiniciónLas reflexiones de Householder son matrices **simétricas, ortogonales** y se construyen a partir de un vector $v \neq 0$ definiendo:$$R = I_m-\beta v v^T$$ con $v \in \mathbb{R}^m - \{0\}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama **vector de Householder**. 
La multiplicación $Rx$ representa la reflexión del vector $x \in \mathbb{R}^m$ a través del hiperplano $v^\perp$.``` ```{admonition} ComentarioAlgunas propiedades de las reflexiones de Householder son: $R^TR = R^2 = I_m$, $R^{-1}=R$, $det(R)=-1$.``` ```{sidebar} Proyector ortogonal elementalEn este dibujo se utiliza el **proyector ortogonal elemental** sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyección ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales **no** son matrices ortogonales, son singulares, son simétricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span\{u\}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$.Recuerda que $span\{u\}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span\{u\} = \left \{\displaystyle \sum_{i=1}^m k_i u_i | k_i \in \mathbb{R} \forall i =1,\dots,m \right \}$.``` Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - \{0\}$ , $||u||_2 = 1$ y $R=I_m-2 u u^T$ es el siguiente : Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector. Ejemplo aplicando reflectores de Householder a un vector Considérese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$.
###Code
x = np.array([1,2,3])
print(x)
###Output
_____no_output_____
###Markdown
Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x-np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
El resultado de $Rx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por:
###Code
print(np.linalg.norm(x))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tip* Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexión de Householder son matrices ortogonales y por tanto isometrías: $||Rv||_2=||v||_2$.* Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}`Ejemplo aplicando transformaciones de Gauss a un vector `.``` A continuación se muestra que el producto $Rx$ si se construye $R$ es equivalente a lo anterior: ```{margin}$R = I_3 - \beta v v^T$.```
###Code
R = np.eye(3)-beta*np.outer(v,np.transpose(v))
print(R)
print(R@x)
###Output
_____no_output_____
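###Markdown
Como verificación numérica, un esbozo (que asume la variable `R` de la celda anterior) de las propiedades señaladas: simetría, ortogonalidad, $R^2=I_3$ y $det(R)=-1$:
###Code
import numpy as np
# comprobación numérica de las propiedades de la reflexión de Householder R
print(np.allclose(R, R.T))             # simetría
print(np.allclose(R.T@R, np.eye(3)))   # ortogonalidad: R^T R = I_3
print(np.allclose(R@R, np.eye(3)))     # R^{-1} = R, esto es, R^2 = I_3
print(np.linalg.det(R))                # determinante igual a -1 (salvo redondeo)
###Output
_____no_output_____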
###Markdown
Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$.". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x+np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$.``` ¿Cuál definición del vector de Householder usar? En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicación $Rx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$): El vector $v^+ = - u_0^+ = x-||x||_2e_1$ refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). El vector $v^- = -u_0^- = x+||x||_2e_1$ refleja $x$ respecto al subespacio $H^-$. Para reducir los errores por redondeo y evitar el problema de cancelación en la aritmética de punto flotante (ver [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)) se utiliza:$$v = x+signo(x_1)||x||_2e_1$$donde: $signo(x_1) = \begin{cases}1 &\text{ si } x_1 \geq 0 ,\\-1 &\text{ si } x_1 < 0\end{cases}.$La idea de la definción anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$. ```{admonition} Comentarios* Otra forma de lidiar con el problema de cancelación es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulación algebraica como sigue:$$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$* En la implementación del cálculo del vector de Householder, es útil que $v_1=1$ y así únicamente se almacenará $v[2:m]$. Al vector $v[2:m]$ se le nombra **parte esencial del vector de Householder**.* Las transformaciones de reflexión de Householder se utilizan para la factorización QR. Ver [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition), la cual es una factorización estable numéricamente bajo el redondeo.``` ```{admonition} Ejercicio:class: tipReflejar al vector $(1,1)^T$ utilizando al vector $(\frac{-4}{3}, \frac{2}{3})$ para construir $R$.``` Ejemplo aplicando reflectores de Householder a una matriz Las reflexiones de Householder se utilizan para hacer ceros por debajo de la **diagonal** a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}`Ejemplo aplicando transformaciones de Gauss a una matriz `). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$: $$\begin{array}{l}R_2A^{(1)} = R_2\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & * & * & * \\0 & * & * & * \\0 & * & * & *\end{array}\right]=\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & 0 & * & * \\0 & 0 & * & * \\0 & 0 & * & *\end{array}\right]:= A^{(2)}\end{array}$$donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexión de Householder. En este caso $$R_2 = \left [ \begin{array}{cc}1 & 0 \\0 & \hat{R_2}\end{array}\right ]$$ con $\hat{R}_2$ una matriz de reflexión de Householder que hace ceros por debajo de de $a_{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$:* No modifica el primer renglón de $A^{(1)}$.* No destruye los ceros de la primer columna de $A^{(1)}$.* $R_2$ es una matriz de reflexión de Householder. 
```{admonition} Observación:class: tipPara la implementación computacional **no se inserta** $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 3}$:$$A =\left [\begin{array}{ccc}3 & 2 & -1 \\2 & 3 & 2 \\-1 & 2 & 3 \\2 & 1 & 4\end{array}\right ] $$y aplíquense reflexiones de Householder para llevarla a una forma triangular superior.
###Code
A = np.array([[3 ,2, -1],
[2 ,3 ,2],
[-1, 2 ,3],
[2 ,1 ,4]], dtype = float)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$.```
###Code
e1 = np.array([1,0,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$.```
###Code
v = A[:,0] + np.linalg.norm(A[:,0])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
print(beta)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$.**```
###Code
print(A[:,0] - beta*v*v.dot(A[:,0]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(1)} = R_1 A^{(0)}$.```
###Code
A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:]))
print(A1)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que a diferencia de las transformaciones de Gauss la reflexión de Householder $R_1$ sí modifica el primer renglón de $A^{(0)}$, ver {ref}`Después de hacer la multiplicación... `.``` ```{margin}Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:,0]))
print(np.linalg.norm(A[:,0]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$.** ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$.```
###Code
e1 = np.array([1, 0, 0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$.```
###Code
v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$.**```
###Code
print(A1[1:,1] - beta*v*v.dot(A1[1:,1]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(2)}[2:4, 2:3]$.```
###Code
A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:]))
print(A2_aux)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$.```
###Code
print(np.linalg.norm(A1[1:,1]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$.**
###Code
e1 = np.array([1, 0])
v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1
print(v)
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3:4, 3]$.```
###Code
A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1])
print(A3_aux)
print(np.linalg.norm(A2_aux[1:,1]))
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html) y *row_stack*:
###Code
m,n = A.shape
number_of_zeros = m-2
A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux))
print(A3_aux_2)
A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2))
print(A3_aux_3)
number_of_zeros = m-1
A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3))
print(A3_aux_4)
###Output
_____no_output_____
###Markdown
La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es:
###Code
A3 = np.row_stack((A1[0, 0:], A3_aux_4))
print(A3)
###Output
_____no_output_____
###Markdown
Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$:
###Code
q,r = np.linalg.qr(A)
print("Q:")
print(q)
print("R:")
print(r)
###Output
Q:
[[-0.707 0. 0.471]
[-0.471 -0.527 0.079]
[ 0.236 -0.843 -0.157]
[-0.471 0.105 -0.864]]
R:
[[-4.243 -2.828 -1.414]
[ 0. -3.162 -3.162]
[ 0. 0. -4.243]]
###Markdown
```{admonition} Ejercicio:class: tipAplicar reflexiones de Householder a la matriz$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$para obtener una matriz triangular superior.``` (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Si $u, v \in \mathbb{R}^2-\{0\}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$: A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente:$$\begin{array}{l}\left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] \cdot \left[\begin{array}{c}u_1\\u_2\end{array}\right]\end{array}$$ ```{admonition} DefiniciónLa matriz $R_O$:$$R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] $$se nombra matriz de **rotación** o **rotaciones Givens**, es una matriz ortogonal pues $R_O^TR_O=I_2$.La multiplicación $v=R_Ou$ es una rotación en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicación $u=R_O^Tv$ es una rotación en sentido de las manecillas del reloj y el ángulo asociado es $-\theta$.``` Ejemplo aplicando rotaciones Givens a un vector Rotar al vector $v=(1,1)^T$ un ángulo de $45^o$ en **sentido contrario a las manecillas del reloj**.
###Code
v=np.array([1,1])
###Output
_____no_output_____
###Markdown
La matriz $R_O$ es: $$R_O = \left[ \begin{array}{cc}cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\\sen(\frac{\pi}{4}) & cos(\frac{\pi}{4})\end{array}\right ]$$
###Code
theta=math.pi/4
RO=np.array([[math.cos(theta), -math.sin(theta)],
[math.sin(theta), math.cos(theta)]])
print(RO)
print(RO@v)
###Output
_____no_output_____
###Markdown
- Con esta operación estamos aplicando la rotación al vector $v$.
###Code
print(np.linalg.norm(v))
###Output
_____no_output_____
###Markdown
- Vemos que la norma es la misma después de la rotación. ```{admonition} Observación:class: tipObserva que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotación Givens son matrices ortogonales y por tanto isometrías: $||R_0v||_2=||v||_2$.``` - Un objetivo que buscamos satisfacer es hacer cero alguna entrada. En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotación se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y **se desea hacer cero la entrada $v_2$ de $v$** se puede utilizar la matriz de rotación:$$R_O = \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]$$ pues:$$\begin{array}{l} \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ] \cdot \left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\0\end{array}\right ]=\left[ \begin{array}{c}||v||_2\\0\end{array}\right ]\end{array}$$ Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene :$$R_O=\left[ \begin{array}{cc}cos\theta & sen\theta\\-sen\theta & cos\theta\end{array}\right]$$que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y:$$R_O=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]$$que es una matriz de rotación para un ángulo que gira **en sentido de las manecillas del reloj**. Para **hacer cero la entrada $v_1$ de $v$** hay que usar:$$\begin{array}{l}R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]\end{array}$$que es una matriz de rotación para un ángulo que gira **en sentido contrario de las manecillas del reloj**. - El $R_0$ de este ejercicio es la fórmula general para hacer cero alguna entrada de una matriz de 2x2.- Todas las matriz de rotación son ortogonales porque preservan su norma. Pero no todas las ortogonales son de rotación. ```{admonition} Ejercicio:class: tipUsar una matriz de rotación Givens para rotar al vector $(-3, 4)^T$ un ángulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj.``` Ejemplo aplicando rotaciones Givens a una matriz - Con esto lidiamos con rotación de matrices de más de 2 dimensiones. Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son **seleccionadas**. 
Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y $$R_{24}^\theta=\left [ \begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]$$entonces: $$R_{24}^\theta x =\begin{array}{l}\left [\begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]\left [\begin{array}{c}x_1 \\x_2 \\x_3 \\x_4\end{array}\right ]=\left [\begin{array}{c}x_1 \\\sqrt{x_2^2 + x_4^2} \\x_3 \\0\end{array}\right ]\end{array}$$ Y se escribe que se hizo una rotación en el plano $(2,4)$. - Los ceros que habíamos asignado hacen que el vector de $x$ no se afecte en algunos casos.- Podemos extrapolar esa misma idea a matrices más grandes. - Debes elegir unicamente la estructura correcta de tu matriz de rotación.- Si estuvieras trabajando con una matriz de 2x2 entonces los cosenos de los valores se efectarían. ```{admonition} Observación:class: tipObsérvese que sólo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicación:$$\begin{array}{l}\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]\left [ \begin{array}{c}x_2\\x_4\end{array}\right ]\end{array}$$para tales entradas.``` - En este ejercicio elegimos el plano 1,2. - Modificamos primer columna y segundo renglón. - Lo hacemos con los "senos" y "cosenos" correspondientes. Considérese a la matriz $A \in \mathbb{R}^{4 \times 4}$:$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$y aplíquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz **triangular superior**. **Entrada $a_{21}$, plano $(1,2)$:**
###Code
idx_1 = 0
idx_2 = 1
idx_column = 0
A = np.array([[4, 1, -2, 2],
[1, 2, 0, 1],
[-2, 0, 3, -2],
[2, 1, -2, -1]], dtype=float)
print(A)
###Output
_____no_output_____
###Markdown
- Vamos a usar la fórmula general que ya habíamos visto arriba:
###Code
a_11 = A[idx_1,idx_column]
a_21 = A[idx_2,idx_column]
norm = math.sqrt(a_11**2 + a_21**2)
cos_theta = a_11/norm
sen_theta = a_21/norm
R12 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R12)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A_subset = np.row_stack((A[idx_1,:], A[idx_2,:]))
print(A_subset)
print(R12@A_subset)
A1_aux = R12@A_subset
print(A1_aux)
###Output
_____no_output_____
###Markdown
Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
###Code
A1 = A.copy()
A1[idx_1, :] = A1_aux[0, :]
A1[idx_2, :] = A1_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$.``` - Aquí ya colocamos el subresultado que obtuvimos en la matriz original. - Sólo se modifican el primer y el segundo renglón.
###Code
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{31}$, plano $(1,3)$:**
###Code
idx_1 = 0
idx_2 = 2
idx_column = 0
a_11 = A1[idx_1, idx_column]
a_31 = A1[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_31**2)
cos_theta = a_11/norm
sen_theta = a_31/norm
R13 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R13)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:]))
print(A1_subset)
print(R13@A1_subset)
A2_aux = R13@A1_subset
print(A2_aux)
A2 = A1.copy()
A2[idx_1, :] = A2_aux[0, :]
A2[idx_2, :] = A2_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$.```
###Code
print(A2)
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A2[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{41}$, plano $(1,4)$:**
###Code
idx_1 = 0
idx_2 = 3
idx_column = 0
a_11 = A2[idx_1, idx_column]
a_41 = A2[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_41**2)
cos_theta = a_11/norm
sen_theta = a_41/norm
R14 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R14)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:]))
print(A2_subset)
print(R14@A2_subset)
A3_aux = R14@A2_subset
print(A3_aux)
A3 = A2.copy()
A3[idx_1, :] = A3_aux[0, :]
A3[idx_2, :] = A3_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$.```
###Code
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
- Vemos que el resultado de este paso es el cero en el cuarto renglón de la primera columna, es decir, en la entrada $a_{41}$. ```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A3[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{32}$, plano $(2,3)$:** - En este caso vamos a hacer cero una entrada de la segunda columna. - La rotación afecta por completo a los dos renglones involucrados.
###Code
idx_1 = 1
idx_2 = 2
idx_column = 1
a_22 = A2[idx_1, idx_column]
a_32 = A2[idx_2, idx_column]
norm = math.sqrt(a_22**2 + a_32**2)
cos_theta = a_22/norm
sen_theta = a_32/norm
R23 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R23)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:]))
print(A3_subset)
print(R23@A3_subset)
A4_aux = R23@A3_subset
print(A4_aux)
A4 = A3.copy()
A4[idx_1, :] = A4_aux[0, :]
A4[idx_2, :] = A4_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$.```
###Code
print(A4)
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,2]$.```
###Code
print(np.linalg.norm(A4[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
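###Markdown
Para terminar de llevar a $A$ a una forma triangular superior faltan por hacerse cero las entradas $a_{42}$ (plano $(2,4)$) y $a_{43}$ (plano $(3,4)$). A continuación un bosquejo mínimo que repite el mismo procedimiento con una función auxiliar; el nombre `aplica_rotacion` y la comparación con el factor $R$ de la factorización QR (coinciden salvo signos de los renglones) son supuestos de esta ilustración, no parte de la nota.
###Code
def aplica_rotacion(M, i, j, col):
    # construye la rotación Givens en el plano (i+1, j+1) que hace cero
    # M[j, col] usando M[i, col] (índices de Python) y la aplica a los
    # renglones i y j de una copia de M
    r = math.sqrt(M[i, col]**2 + M[j, col]**2)
    cos_theta = M[i, col]/r
    sen_theta = M[j, col]/r
    R = np.array([[cos_theta, sen_theta],
                  [-sen_theta, cos_theta]])
    M_rot = M.copy()
    M_rot[[i, j], :] = R@M[[i, j], :]
    return M_rot

A5 = aplica_rotacion(A4, 1, 3, 1)  # plano (2,4): hace cero a_{42}
A6 = aplica_rotacion(A5, 2, 3, 2)  # plano (3,4): hace cero a_{43}
print(A6)
# comparación con el factor R de la factorización QR de numpy
print(np.linalg.qr(A)[1])
###Output
_____no_output_____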
###Markdown
(OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker:Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker.`docker run --rm -v :/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`password para jupyterlab: `qwerty`Detener el contenedor de docker:`docker stop jupyterlab_optimizacion`Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:2.1.4` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).``` --- Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0). ```{admonition} Al final de esta nota el y la lectora::class: tip* Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices.* Se familizarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica.``` Las operaciones básicas del Álgebra Lineal Numérica podemos dividirlas en vectoriales y matriciales. Vectoriales * **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$ * **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.* **Producto interno estándar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.* **Multiplicación *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.* **División *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-ésimo renglón de $A$. 
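A manera de ilustración, un bosquejo mínimo en *NumPy* de las operaciones vectoriales anteriores; los vectores `x_ej`, `y_ej` y sus valores son supuestos, sólo para ejemplificar.
###Code
import numpy as np
x_ej = np.array([1., 2., 3.])
y_ej = np.array([4., 5., 6.])
print(x_ej + y_ej)           # suma
print(2*x_ej)                # multiplicación por un escalar
print(x_ej.dot(y_ej))        # producto interno estándar o producto punto
print(x_ej*y_ej)             # multiplicación point wise
print(x_ej/y_ej)             # división point wise
print(np.outer(x_ej, y_ej))  # producto exterior u outer product
###Output
_____no_output_____
###Markdown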
Matriciales * **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$* **Multiplicación por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.* **Multiplicación entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$.* **Multiplicación *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.* **División *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$. **Como ejemplos de transformaciones básicas del Álgebra Lineal Numérica se encuentran:** (TGAUSS)= Transformaciones de Gauss En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{n \times n} \forall i,j=1,2,\dots,n$. ```{margin}Como ejemplo de vector canónico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$.``` Considérese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-ésimo vector canónico: vector con un $1$ en la posición $k$ y ceros en las entradas restantes. ```{admonition} DefiniciónUna transformación de Gauss está definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$.$a_{kk}$ se le nombra **pivote** y **debe ser diferente de cero**.``` Las transformaciones de Gauss se utilizan para hacer ceros por debajo del **pivote**. (EG1)= Ejemplo aplicando transformaciones de Gauss a un vector Considérese al vector $a=(-2,3,4)^T$. Definir una transformación de Gauss para hacer ceros por debajo de $a_1$ y otra transformación de Gauss para hacer cero la entrada $a_3$ **Solución:**
###Code
import numpy as np
import math
np.set_printoptions(precision=3, suppress=True)
###Output
_____no_output_____
###Markdown
a)Para hacer ceros por debajo del **pivote** $a_1 = -2$:
###Code
a = np.array([-2,3,4])
pivote = a[0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$```
###Code
l1 = np.array([0,a[1]/pivote, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$.**```
###Code
L1_a = a-l1*(e1.dot(a))
print(L1_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1@a)
###Output
_____no_output_____
###Markdown
b) Para hacer ceros por debajo del **pivote** $a_2 = 3$:
###Code
a = np.array([-2,3,4])
pivote = a[1]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_2=(0, 0, \frac{a_3}{a_2})^T$```
###Code
l2 = np.array([0,0, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_2$ pues se desea hacer ceros en las entradas debajo de la segunda.```
###Code
e2 = np.array([0,1,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_2$, directamente se tiene $L_2a = a - \ell_2 e_2^Ta$.**```
###Code
L2_a = a-l2*(e2.dot(a))
print(L2_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_2 a$ si se construye $L_2$ es equivalente a lo anterior: ```{margin}$L_2 = I_3 - \ell_2 e_2^T$.```
###Code
L2 = np.eye(3) - np.outer(l2,e2)
print(L2)
print(L2@a)
###Output
_____no_output_____
###Markdown
(EG2)= Ejemplo aplicando transformaciones de Gauss a una matriz Si tenemos una matriz $A \in \mathbb{R}^{3 \times 3}$ y queremos hacer ceros por debajo de su **diagonal** y tener una forma **triangular superior**, realizamos los productos matriciales:$$L_2 L_1 A$$ donde: $L_1, L_2$ son transformaciones de Gauss. Posterior a realizar el producto $L_2 L_1 A$ se obtiene una **matriz triangular superior:**$$L_2L_1A = \left [\begin{array}{ccc}* & * & *\\0 & * & * \\0 & 0 & * \end{array}\right ]$$ **Ejemplo:** a) Utilizando $L_1$
###Code
A = np.array([[-1, 2, 5],
[4, 5, -7],
[3, 0, 8]], dtype=float)
print(A)
###Output
_____no_output_____
###Markdown
Para hacer ceros por debajo del **pivote** $a_{11} = -1$:
###Code
pivote = A[0, 0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_{21}}{a_{11}}, \frac{a_{31}}{a_{11}})^T$```
###Code
l1 = np.array([0,A[1,0]/pivote, A[2,0]/pivote])
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A[1:3,1] = A[1:3,1] - \ell_1 e_1^T A[1:3,1]$.**```
###Code
L1_A_1 = A[:,0]-l1*(e1.dot(A[:,0]))
print(L1_A_1)
###Output
_____no_output_____
###Markdown
**Y se debe aplicar $L_1$ a las columnas número 2 y 3 de $A$ para completar el producto $L_1A$:** ```{margin}Aplicando $L_1$ a la segunda columna de $A$: $A[1:3,2]$.```
###Code
L1_A_2 = A[:,1]-l1*(e1.dot(A[:,1]))
print(L1_A_2)
###Output
_____no_output_____
###Markdown
```{margin}Aplicando $L_1$ a la tercer columna de $A$: $A[1:3,3]$.```
###Code
L1_A_3 = A[:,2]-l1*(e1.dot(A[:,2]))
print(L1_A_3)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 A$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1 @ A)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipAl aplicar $L_1$ a la primer columna de $A$ **siempre** obtenemos ceros por debajo del pivote que en este caso es $a_{11}$.``` (EG2.1)= **Después de hacer la multiplicación $L_1A$ en cualquiera de los dos casos (construyendo o no explícitamente $L_1$) no se modifica el primer renglón de $A$:**
###Code
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Este es el primer renglón de $A$.```
###Code
print(A[0,:])
###Output
_____no_output_____
###Markdown
```{margin}Tomando el primer renglón del producto $L_1A$.```
###Code
print((L1 @ A)[0,:])
###Output
_____no_output_____
###Markdown
**por lo que la multiplicación $L_1A$ entonces modifica del segundo renglón de $A$ en adelante y de la segunda columna de $A$ en adelante.** ```{admonition} Observación:class: tipDada la forma de $L_1 = I_3 - \ell_1e_1^T$, al hacer la multiplicación por la segunda y tercer columna de $A$ se tiene:$$e_1^T A[1:3,2] = A[0,2]$$$$e_1^T A[1:3,3] = A[0,3]$$respectivamente.``` ```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 1]))
###Output
_____no_output_____
###Markdown
```{margin}El resultado de este producto es un escalar.```
###Code
print(e1.dot(A[:, 2]))
###Output
_____no_output_____
###Markdown
y puede escribirse de forma compacta: $$e_1^T A[1:3,2:3] = A[0, 2:3]$$
###Code
print(A[0, 1:3]) #observe that we have to use 2+1=3 as the second number after ":" in 1:3
print(A[0, 1:]) # also we could have used this statement
###Output
_____no_output_____
###Markdown
Entonces los productos $\ell_1 e_1^T A[:,2]$ y $\ell_1 e_1^T A[:,3]$ quedan respectivamente como: $$\ell_1A[0, 2]$$
###Code
print(l1*A[0,1])
###Output
_____no_output_____
###Markdown
$$\ell_1A[0,3]$$
###Code
print(l1*A[0, 2])
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipEn los dos cálculos anteriores, las primeras entradas son iguales a $0$ por lo que es consistente con el hecho que únicamente se modifican dos entradas de la segunda y tercer columna de $A$.``` De forma compacta y aprovechando funciones en *NumPy* como [np.outer](https://numpy.org/doc/stable/reference/generated/numpy.outer.html) se puede calcular lo anterior como:
###Code
print(np.outer(l1[1:3],A[0,1:3]))
print(np.outer(l1[1:],A[0,1:])) # also we could have used this statement
###Output
_____no_output_____
###Markdown
Y finalmente la aplicación de $L_1$ al segundo renglón y segunda columna en adelante de $A$ queda: ```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1 A = A - \ell_1 e_1^T A$ y podemos aprovechar lo anterior para sólo operar de la segunda columna y segundo renglón en adelante.**```
###Code
print(A[1:, 1:] - np.outer(l1[1:],A[0,1:]))
###Output
_____no_output_____
###Markdown
Compárese con:
###Code
print(L1 @ A)
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar el primer renglón y la primera columna al producto. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html) y *row_stack*:
###Code
A_aux = A[1:, 1:] - np.outer(l1[1:],A[0,1:])
m, n = A.shape
number_of_zeros = m-1
A_aux_2 = np.column_stack((np.zeros(number_of_zeros), A_aux)) # stack two zeros
print(A_aux_2)
A_aux_3 = np.row_stack((A[0, :], A_aux_2))
print(A_aux_3)
###Output
_____no_output_____
###Markdown
que es el resultado de:
###Code
print(L1 @ A)
###Output
_____no_output_____
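###Markdown
Como resumen de lo anterior, un bosquejo de una función general que aplica la transformación de Gauss $L_k$ a una matriz sin construir $L_k$ explícitamente; el nombre `aplica_transformacion_gauss` es un supuesto de esta ilustración y se verifica reproduciendo el producto $L_1A$ recién calculado.
###Code
def aplica_transformacion_gauss(A, k):
    # aplica L_k = I - l_k e_k^T haciendo ceros debajo del pivote A[k, k]
    # (k en índices de Python); sólo se modifica del renglón k+1 en adelante
    A_k = A.copy()
    l_k = A_k[k+1:, k]/A_k[k, k]
    A_k[k+1:, :] = A_k[k+1:, :] - np.outer(l_k, A_k[k, :])
    return A_k

print(aplica_transformacion_gauss(A, 0))  # reproduce L_1 A
###Output
_____no_output_____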
###Markdown
**Lo que falta para obtener una matriz triangular superior es hacer la multiplicación $L_2L_1A$.** Para este caso la matriz $L_2=I_3 - \ell_2e_2^T$ utiliza $\ell_2 = \left( 0, 0, \frac{a^{(1)}_{32}}{a^{(1)}_{22}} \right )^T$ donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = L_1A^{(0)}$ y $A^{(0)}=A$. ```{admonition} Ejercicio:class: tipCalcular el producto $L_2 L_1 A$ para la matriz anterior y para la matriz:$$A = \left [\begin{array}{ccc}1 & 4 & -2 \\-3 & 9 & 8 \\5 & 1 & -6\end{array}\right]$$tomando en cuenta que en este caso $L_2$ sólo opera del segundo renglón y segunda columna en adelante:y obtener una matriz triangular superior en cada ejercicio.``` ```{admonition} Comentarios* Las transformaciones de Gauss se utilizan para la fase de eliminación del método de eliminación Gaussiana o también llamada factorización $LU$. Ver [Gaussian elimination](https://en.wikipedia.org/wiki/Gaussian_elimination).* La factorización $P, L, U$ que es la $LU$ con permutaciones por pivoteo parcial es un método estable numéricamente respecto al redondeo en la práctica pero inestable en la teoría.``` (MATORTMATCOLORTONO)= Matriz ortogonal y matriz con columnas ortonormales Un conjunto de vectores $\{x_1, \dots, x_p\}$ en $\mathbb{R}^m$ ($x_i \in \mathbb{R}^m$)es ortogonal si $x_i^Tx_j=0$ $\forall i\neq j$. Por ejemplo, para un conjunto de $2$ vectores $x_1,x_2$ en $\mathbb{R}^3$ esto se visualiza: ```{admonition} Comentarios* Si el conjunto $\{x_1,\dots,x_n\}$ en $\mathbb{R}^m$ satisface $x_i^Tx_j= \delta_{ij}= \begin{cases}1 &\text{ si } i=j,\\0 &\text{ si } i\neq j\end{cases}$, ver [Kronecker_delta](https://en.wikipedia.org/wiki/Kronecker_delta) se le nombra conjunto **ortonormal**, esto es, constituye un conjunto ortogonal y cada elemento del conjunto tiene norma $2$ o Euclidiana igual a $1$: $||x_i||_2 = 1, \forall i=1,\dots,n$. * Si definimos a la matriz $X$ con columnas dadas por cada uno de los vectores del conjunto $\{x_1,\dots, x_n\}$: $X=(x_1, \dots , x_n) \in \mathbb{R}^{m \times n}$ entonces la propiedad de que cada par de columnas satisfaga $x_i^Tx_j=\delta_{ij}$ se puede escribir en notación matricial como $X^TX = I_n$ con $I_n$ la matriz identidad de tamaño $n$ si $n \leq m$ o bien $XX^T=I_m$ si $m \leq n$. A la matriz $X$ se le nombra **matriz con columnas ortonormales**. * Si cada $x_i$ está en $\mathbb{R}^n$ (en lugar de $\mathbb{R}^m$) entonces construímos a la matriz $X$ como el punto anterior con la diferencia que $X \in \mathbb{R}^{n \times n}$. En este caso $X$ se le nombra **matriz ortogonal**.* Entre las propiedades más importantes de las matrices ortogonales o con columnas ortonormales es que son isometrías bajo la norma $2$ o Euclidiana y multiplicar por tales matrices es estable numéricamente bajo el redondeo, ver {ref}`Condición de un problema y estabilidad de un algoritmo `.``` (TREF)= Transformaciones de reflexión En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Reflectores de Householder ```{margin}Recuerda que $u^\perp = \{x \in \mathbb{R}^m| u^Tx=0\}$ es un subespacio de $\mathbb{R}^m$ de dimensión $m-1$ y es el complemento ortogonal de $u$.``` ```{admonition} DefiniciónLas reflexiones de Householder son matrices **simétricas, ortogonales** y se construyen a partir de un vector $v \neq 0$ definiendo:$$R = I_m-\beta v v^T$$ con $v \in \mathbb{R}^m - \{0\}$ y $\beta = \frac{2}{v^Tv}$. El vector $v$ se llama **vector de Householder**. 
La multiplicación $Rx$ representa la reflexión del vector $x \in \mathbb{R}^m$ a través del hiperplano $v^\perp$.``` ```{admonition} ComentarioAlgunas propiedades de las reflexiones de Householder son: $R^TR = R^2 = I_m$, $R^{-1}=R$, $det(R)=-1$.``` ```{sidebar} Proyector ortogonal elementalEn este dibujo se utiliza el **proyector ortogonal elemental** sobre el complemento ortogonal $u^\perp$ definido como: $P=I_m- u u^T$ y $Px$ es la proyección ortogonal de $x$ sobre $u^\perp$ . Los proyectores ortogonales elementales **no** son matrices ortogonales, son singulares, son simétricas y $P^2=P$. El proyector ortogonal elemental de $x$ sobre $u^\perp$ tienen $rank$ igual a $m-1$ y el proyector ortogonal de $x$ sobre $span\{u\}$ definido por $I_m-P=uu^T$ tienen $rank$ igual a $1$.Recuerda que $span\{u\}$ es el conjunto generado por $u$. Se define como el conjunto de combinaciones lineales de $u$: $span\{u\} = \left \{\displaystyle \sum_{i=1}^m k_i u_i | k_i \in \mathbb{R} \forall i =1,\dots,m \right \}$.``` Un dibujo que ayuda a visualizar el reflector elemental alrededor de $u^\perp$ en el que se utiliza $u \in \mathbb{R}^m - \{0\}$ , $||u||_2 = 1$ y $R=I_m-2 u u^T$ es el siguiente : Las reflexiones de Householder pueden utilizarse para hacer ceros por debajo de una entrada de un vector. Ejemplo aplicando reflectores de Householder a un vector Considérese al vector $x=(1,2,3)^T$. Definir un reflector de Householder para hacer ceros por debajo de $x_1$.
###Code
x = np.array([1,2,3])
print(x)
###Output
_____no_output_____
###Markdown
Utilizamos la definición $v=x-||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico para construir al vector de Householder: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x-np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
El resultado de $Rx$ es $(||x||_2,0,0)^T$ con $||x||_2$ dada por:
###Code
print(np.linalg.norm(x))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tip* Observa que se preserva la norma $2$ o Euclidiana del vector, las matrices de reflexión de Householder son matrices ortogonales y por tanto isometrías: $||Rv||_2=||v||_2$.* Observa que a diferencia de las transformaciones de Gauss con las reflexiones de Householder en general se modifica la primera entrada, ver {ref}`Ejemplo aplicando transformaciones de Gauss a un vector `.``` A continuación se muestra que el producto $Rx$ si se construye $R$ es equivalente a lo anterior: ```{margin}$R = I_3 - \beta v v^T$.```
###Code
R = np.eye(3)-beta*np.outer(v,np.transpose(v))
print(R)
print(R@x)
###Output
_____no_output_____
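###Markdown
Como verificación numérica de las propiedades enunciadas para las reflexiones de Householder ($R$ simétrica, ortogonal y con $det(R)=-1$), un bosquejo breve usando la matriz $R$ recién construida:
###Code
print(np.allclose(R, R.T))            # R es simétrica
print(np.allclose(R.T@R, np.eye(3)))  # R es ortogonal: R^T R = I_3
print(np.linalg.det(R))               # det(R) = -1
###Output
_____no_output_____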
###Markdown
Ejemplo aplicando reflectores de Householder a un vector Considérese al mismo vector $x$ del ejemplo anterior y el mismo objetivo "Definir un reflector de Householder para hacer ceros por debajo de $x_1$.". Otra opción para construir al vector de Householder es $v=x+||x||_2e_1$ con $e_1=(1,0,0)^T$ vector canónico: ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
v = x+np.linalg.norm(x)*e1
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R$, directamente se tiene $R x = x - \beta vv^Tx$.**``` Hacemos ceros por debajo de la primera entrada de $x$ haciendo la multiplicación matriz-vector $Rx$:
###Code
print(x-beta*v*(v.dot(x)))
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que difieren en signo las primeras entradas al utilizar $v=x + ||x||_2 e_1$ o $v=x - ||x||_2 e_1$.``` ¿Cuál definición del vector de Householder usar? En cualquiera de las dos definiciones del vector de Householder $v=x \pm ||x||_2 e_1$, la multiplicación $Rx$ refleja $x$ en el primer eje coordenado (pues se usa $e_1$): El vector $v^+ = - u_0^+ = x-||x||_2e_1$ refleja $x$ respecto al subespacio $H^+$ (que en el dibujo es una recta que cruza el origen). El vector $v^- = -u_0^- = x+||x||_2e_1$ refleja $x$ respecto al subespacio $H^-$. Para reducir los errores por redondeo y evitar el problema de cancelación en la aritmética de punto flotante (ver [Sistema de punto flotante](https://itam-ds.github.io/analisis-numerico-computo-cientifico/I.computo_cientifico/1.2/Sistema_de_punto_flotante.html)) se utiliza:$$v = x+signo(x_1)||x||_2e_1$$donde: $signo(x_1) = \begin{cases}1 &\text{ si } x_1 \geq 0 ,\\-1 &\text{ si } x_1 < 0\end{cases}.$La idea de la definción anterior con la función $signo(\cdot)$ es que la reflexión (en el dibujo anterior $-||x||_2e_1$ o $||x||_2e_1$) sea lo más alejada posible de $x$. En el dibujo anterior como $x_1, x_2>0$ entonces se refleja respecto al subespacio $H^-$ quedando su reflexión igual a $-||x||_2e_1$. ```{admonition} Comentarios* Otra forma de lidiar con el problema de cancelación es definiendo a la primera componente del vector de Householder $v_1$ como $v_1=x_1-||x||_2$ y haciendo una manipulación algebraica como sigue:$$v_1=x_1-||x||_2 = \frac{x_1^2-||x||_2^2}{x_1+||x||_2} = -\frac{x_2^2+x_3^2+\dots + x_m^2}{x_1+||x||_2}.$$* En la implementación del cálculo del vector de Householder, es útil que $v_1=1$ y así únicamente se almacenará $v[2:m]$. Al vector $v[2:m]$ se le nombra **parte esencial del vector de Householder**.* Las transformaciones de reflexión de Householder se utilizan para la factorización QR. Ver [QR decomposition](https://en.wikipedia.org/wiki/QR_decomposition), la cual es una factorización estable numéricamente bajo el redondeo.``` ```{admonition} Ejercicio:class: tipReflejar al vector $\left [\begin{array}{c}1 \\1 \\\end{array}\right ]$ utilizando al vector $\left [\begin{array}{c}\frac{-4}{3}\\\frac{2}{3}\end{array}\right ]$ para construir $R$.``` Ejemplo aplicando reflectores de Householder a una matriz Las reflexiones de Householder se utilizan para hacer ceros por debajo de la **diagonal** a una matriz y tener una forma triangular superior (mismo objetivo que las transformaciones de Gauss, ver {ref}`Ejemplo aplicando transformaciones de Gauss a una matriz `). Por ejemplo si se han hecho ceros por debajo del elemento $a_{11}$ y se quieren hacer ceros debajo de $a_{22}^{(1)}$: $$\begin{array}{l}R_2A^{(1)} = R_2\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & * & * & * \\0 & * & * & * \\0 & * & * & *\end{array}\right]=\left[\begin{array}{cccc}* & * & * & *\\0 & * & * & *\\0 & 0 & * & * \\0 & 0 & * & * \\0 & 0 & * & *\end{array}\right]:= A^{(2)}\end{array}$$donde: $a^{(1)}_{ij}$ son las entradas de $A^{(1)} = R_1A^{(0)}$ y $A^{(0)}=A$, $R_1$ es matriz de reflexión de Householder. En este caso $$R_2 = \left [ \begin{array}{cc}1 & 0 \\0 & \hat{R_2}\end{array}\right ]$$ con $\hat{R}_2$ una matriz de reflexión de Householder que hace ceros por debajo de de $a_{22}^{(1)}$. Se tienen las siguientes propiedades de $R_2$:* No modifica el primer renglón de $A^{(1)}$.* No destruye los ceros de la primer columna de $A^{(1)}$.* $R_2$ es una matriz de reflexión de Householder. 
```{admonition} Observación:class: tipPara la implementación computacional **no se inserta** $\hat{R}_2$ en $R_2$, en lugar de esto se aplica $\hat{R}_2$ a la submatriz $A^{(1)}[2:m, 2:m]$.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 3}$:$$A =\left [\begin{array}{ccc}3 & 2 & -1 \\2 & 3 & 2 \\-1 & 2 & 3 \\2 & 1 & 4\end{array}\right ] $$y aplíquense reflexiones de Householder para llevarla a una forma triangular superior.
###Code
A = np.array([[3 ,2, -1],
[2 ,3 ,2],
[-1, 2 ,3],
[2 ,1 ,4]], dtype = float)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera entrada de la primera columna de $A$: $A[1:4,1]$.```
###Code
e1 = np.array([1,0,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[1:4,1] + signo(A[1,1])||A[1:4,1]||_2e_1$.```
###Code
v = A[:,0] + np.linalg.norm(A[:,0])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
print(beta)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_1$, directamente se tiene $R_1 A[1:4,1] = A[1:4,1] - \beta vv^TA[1:4,1]$.**```
###Code
print(A[:,0] - beta*v*v.dot(A[:,0]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(1)} = R_1 A^{(0)}$.```
###Code
A1 = A[:,0:]-beta*np.outer(v,v.dot(A[:,0:]))
print(A1)
###Output
_____no_output_____
###Markdown
```{admonition} Observación:class: tipObserva que a diferencia de las transformaciones de Gauss la reflexión de Householder $R_1$ sí modifica el primer renglón de $A^{(0)}$, ver {ref}`Después de hacer la multiplicación... `.``` ```{margin}Se preserva la norma $2$ o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:,0]))
print(np.linalg.norm(A[:,0]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la segunda entrada de la segunda columna de $A^{(1)}$.** ```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la segunda entrada de la segunda columna de $A^{(1)}$: $A^{(1)}[2:4,2]$.```
###Code
e1 = np.array([1, 0, 0])
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $v= A[2:4,2] + signo(A[2,2])||A[2:4,2]||_2e_1$.```
###Code
v = A1[1:,1] + np.linalg.norm(A1[1:,1])*e1
print(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda la definición de $\beta = \frac{2}{v^Tv}$ para $v$ no unitario.```
###Code
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la reflexión de Householder, **no necesitamos construir a la matriz $R_2$, directamente se tiene $R_2A[2:4,2] = A[2:4,2] - \beta vv^TA[2:4,2]$.**```
###Code
print(A1[1:,1] - beta*v*v.dot(A1[1:,1]))
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(2)} = R_2 A^{(1)}$ pero sólo operamos en $A^{(1)}[2:4, 2:3]$.```
###Code
A2_aux = A1[1:,1:]-beta*np.outer(v,v.dot(A1[1:,1:]))
print(A2_aux)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma $2$ o Euclidiana de $A[2:4,2]$.```
###Code
print(np.linalg.norm(A1[1:,1]))
###Output
_____no_output_____
###Markdown
**A continuación queremos hacer ceros debajo de la tercera entrada de la tercera columna de $A^{(2)}$.**
###Code
e1 = np.array([1, 0])
v = A2_aux[1:,1] + np.linalg.norm(A2_aux[1:,1])*e1
print(v)
beta = 2/v.dot(v)
###Output
_____no_output_____
###Markdown
```{margin}Recuerda $A^{(3)} = R_3 A^{(2)}$ pero sólo operamos en $A^{(2)}[3:4, 3]$.```
###Code
A3_aux = A2_aux[1:,1]-beta*v*v.dot(A2_aux[1:,1])
print(A3_aux)
print(np.linalg.norm(A2_aux[1:,1]))
###Output
_____no_output_____
###Markdown
Entonces sólo falta colocar los renglones y columnas para tener a la matriz $A^{(3)}$. Para esto combinamos columnas y renglones en *numpy* con [column_stack](https://numpy.org/doc/stable/reference/generated/numpy.column_stack.html) y *row_stack*:
###Code
m,n = A.shape
number_of_zeros = m-2
A3_aux_2 = np.column_stack((np.zeros(number_of_zeros), A3_aux))
print(A3_aux_2)
A3_aux_3 = np.row_stack((A2_aux[0, 0:], A3_aux_2))
print(A3_aux_3)
number_of_zeros = m-1
A3_aux_4 = np.column_stack((np.zeros(number_of_zeros), A3_aux_3))
print(A3_aux_4)
###Output
_____no_output_____
###Markdown
La matriz $A^{(3)} = R_3 R_2 R_1 A^{(0)}$ es:
###Code
A3 = np.row_stack((A1[0, 0:], A3_aux_4))
print(A3)
###Output
_____no_output_____
###Markdown
Podemos verificar lo anterior comparando con la matriz $R$ de la factorización $QR$ de $A$:
###Code
q,r = np.linalg.qr(A)
print("Q:")
print(q)
print("R:")
print(r)
###Output
Q:
[[-0.707 0. 0.471]
[-0.471 -0.527 0.079]
[ 0.236 -0.843 -0.157]
[-0.471 0.105 -0.864]]
R:
[[-4.243 -2.828 -1.414]
[ 0. -3.162 -3.162]
[ 0. 0. -4.243]]
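###Markdown
Con base en los pasos anteriores, un bosquejo general que construye el vector de Householder con la elección de signo descrita arriba y triangulariza una matriz columna por columna; los nombres `vector_householder` y `triangulariza_householder` son supuestos de esta ilustración y el resultado coincide con el factor $R$ de la factorización QR salvo signos de los renglones.
###Code
def vector_householder(x):
    # v = x + signo(x_1)||x||_2 e_1 y beta = 2/(v^T v)
    v = x.astype(float)
    signo = 1.0 if x[0] >= 0 else -1.0
    v[0] = x[0] + signo*np.linalg.norm(x)
    beta = 2/v.dot(v)
    return v, beta

def triangulariza_householder(A):
    # aplica sucesivamente las reflexiones R_1, R_2, ... sin construirlas
    R_triangular = A.astype(float)
    m, n = R_triangular.shape
    for j in range(min(m-1, n)):
        v, beta = vector_householder(R_triangular[j:, j])
        R_triangular[j:, j:] = R_triangular[j:, j:] - beta*np.outer(v, v.dot(R_triangular[j:, j:]))
    return R_triangular

print(triangulariza_householder(A))
###Output
_____no_output_____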
###Markdown
```{admonition} Ejercicio:class: tipAplicar reflexiones de Householder a la matriz$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$para obtener una matriz triangular superior.``` (TROT)= Transformaciones de rotación En esta sección suponemos que $A \in \mathbb{R}^{m \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{m \times n} \forall i=1,2,\dots,m, j=1, 2, \dots, n$. Si $u, v \in \mathbb{R}^2-\{0\}$ con $\ell = ||u||_2 = ||v||_2$ y se desea rotar al vector $u$ en sentido contrario a las manecillas del reloj por un ángulo $\theta$ para llevarlo a la dirección de $v$: A partir de las relaciones anteriores como $cos(\phi)=\frac{u_1}{\ell}, sen(\phi)=\frac{u_2}{\ell}$ se tiene: $v_1 = (cos\theta)u_1-(sen\theta)u_2$, $v_2=(sen\theta)u_1+(cos\theta)u_2$ equivalentemente:$$\begin{array}{l}\left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] \cdot \left[\begin{array}{c}u_1\\u_2\end{array}\right]\end{array}$$ ```{admonition} DefiniciónLa matriz $R_O$:$$R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right] $$se nombra matriz de **rotación** o **rotaciones Givens**, es una matriz ortogonal pues $R_O^TR_O=I_2$.La multiplicación $v=R_Ou$ es una rotación en sentido contrario a las manecillas del reloj, de hecho cumple $det(R_O)=1$. La multiplicación $u=R_O^Tv$ es una rotación en sentido de las manecillas del reloj y el ángulo asociado es $-\theta$.``` Ejemplo aplicando rotaciones Givens a un vector Rotar al vector $v=(1,1)^T$ un ángulo de $45^o$ en **sentido contrario a las manecillas del reloj**.
###Code
v=np.array([1,1])
###Output
_____no_output_____
###Markdown
La matriz $R_O$ es: $$R_O = \left[ \begin{array}{cc}cos(\frac{\pi}{4}) & -sen(\frac{\pi}{4})\\sen(\frac{\pi}{4}) & cos(\frac{\pi}{4})\end{array}\right ]$$
###Code
theta=math.pi/4
RO=np.array([[math.cos(theta), -math.sin(theta)],
[math.sin(theta), math.cos(theta)]])
print(RO)
print(RO@v)
print(np.linalg.norm(v))
###Output
_____no_output_____
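###Markdown
Como verificación numérica de lo anterior, un bosquejo breve: $R_O$ es ortogonal, $det(R_O)=1$ y $R_O^T$ deshace la rotación (gira en sentido de las manecillas del reloj):
###Code
print(np.allclose(RO.T@RO, np.eye(2)))  # R_O es ortogonal
print(np.linalg.det(RO))                # det(R_O) = 1
print(RO.T@(RO@v))                      # R_O^T regresa al vector original
###Output
_____no_output_____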
###Markdown
```{admonition} Observación:class: tipObserva que se preserva la norma $2$ o Euclidiana del vector, las matrices de rotación Givens son matrices ortogonales y por tanto isometrías: $||R_0v||_2=||v||_2$.``` En el ejemplo anterior se hizo cero la entrada $v_1$ de $v$. Las matrices de rotación se utilizan para hacer ceros en entradas de un vector. Por ejemplo si $v=(v_1,v_2)^T$ y **se desea hacer cero la entrada $v_2$ de $v$** se puede utilizar la matriz de rotación:$$R_O = \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]$$ pues:$$\begin{array}{l} \left[ \begin{array}{cc}\frac{v_1}{\sqrt{v_1^2+v_2^2}} & \frac{v_2}{\sqrt{v_1^2+v_2^2}}\\-\frac{v_2}{\sqrt{v_1^2+v_2^2}} & \frac{v_1}{\sqrt{v_1^2+v_2^2}}\end{array}\right ] \cdot \left[\begin{array}{c}v_1\\v_2\end{array}\right]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\\frac{-v_1v_2+v_1v_2}{\sqrt{v_1^2+v_2^2}}\end{array}\right ]=\left[ \begin{array}{c}\frac{v_1^2+v_2^2}{\sqrt{v_1^2+v_2^2}}\\0\end{array}\right ]=\left[ \begin{array}{c}||v||_2\\0\end{array}\right ]\end{array}$$ Y definiendo $cos(\theta)=\frac{v_1}{\sqrt{v_1^2+v_2^2}}, sen(\theta)=\frac{v_2}{\sqrt{v_1^2+v_2^2}}$ se tiene :$$R_O=\left[ \begin{array}{cc}cos\theta & sen\theta\\-sen\theta & cos\theta\end{array}\right]$$que en el ejemplo anterior como $v=(1,1)^T$ entonces: $cos(\theta)=\frac{1}{\sqrt{2}}, sen(\theta)=\frac{1}{\sqrt{2}}$ por lo que $\theta=\frac{\pi}{4}$ y:$$R_O=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\\-\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]$$que es una matriz de rotación para un ángulo que gira **en sentido de las manecillas del reloj**. Para **hacer cero la entrada $v_1$ de $v$** hay que usar:$$\begin{array}{l}R_O=\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]=\left[ \begin{array}{cc}\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}}\\\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}\end{array}\right]\end{array}$$que es una matriz de rotación para un ángulo que gira **en sentido contrario de las manecillas del reloj**. ```{admonition} Ejercicio:class: tipUsar una matriz de rotación Givens para rotar al vector $(-3, 4)^T$ un ángulo de $\frac{\pi}{3}$ en sentido de las manecillas del reloj.``` Ejemplo aplicando rotaciones Givens a una matriz Las rotaciones Givens permiten hacer ceros en entradas de una matriz que son **seleccionadas**. Por ejemplo si se desea hacer cero la entrada $x_4$ de $x \in \mathbb{R}^4$, se definen $cos\theta = \frac{x_2}{\sqrt{x_2^2 + x_4^2}}, sen\theta = \frac{x_4}{\sqrt{x_2^2 + x_4^2}}$ y $$R_{24}^\theta=\left [ \begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]$$entonces: $$R_{24}^\theta x =\begin{array}{l}\left [\begin{array}{cccc}1 & 0 & 0 & 0\\0 & cos\theta & 0 & sen\theta \\0 & 0 & 1 & 0 \\0 & -sen\theta & 0 & cos\theta\end{array}\right ]\left [\begin{array}{c}x_1 \\x_2 \\x_3 \\x_4\end{array}\right ]=\left [\begin{array}{c}x_1 \\\sqrt{x_2^2 + x_4^2} \\x_3 \\0\end{array}\right ]\end{array}$$ Y se escribe que se hizo una rotación en el plano $(2,4)$. 
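Una ilustración numérica de la rotación en el plano $(2,4)$ descrita arriba; el vector `x_ej` y sus valores son supuestos, sólo para ejemplificar que se hace cero la cuarta entrada y la segunda queda igual a $\sqrt{x_2^2+x_4^2}$.
###Code
x_ej = np.array([1., 2., 3., 4.])
norm_24 = math.sqrt(x_ej[1]**2 + x_ej[3]**2)
cos_theta = x_ej[1]/norm_24
sen_theta = x_ej[3]/norm_24
R24 = np.array([[1, 0, 0, 0],
                [0, cos_theta, 0, sen_theta],
                [0, 0, 1, 0],
                [0, -sen_theta, 0, cos_theta]])
print(R24@x_ej)   # la cuarta entrada se hace cero
print(norm_24)    # la segunda entrada es sqrt(x_2^2 + x_4^2)
###Output
_____no_output_____
###Markdown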
```{admonition} Observación:class: tipObsérvese que sólo se modificaron dos entradas de $x$: $x_2, x_4$ por lo que el mismo efecto se obtiene al hacer la multiplicación:$$\begin{array}{l}\left[ \begin{array}{cc}cos\theta & -sen\theta\\sen\theta & cos\theta\end{array}\right]\left [ \begin{array}{c}x_2\\x_4\end{array}\right ]\end{array}$$para tales entradas.``` Considérese a la matriz $A \in \mathbb{R}^{4 \times 4}$:$$A =\left [\begin{array}{cccc}4 & 1 & -2 & 2 \\1 & 2 & 0 & 1\\-2 & 0 & 3 & -2 \\2 & 1 & -2 & -1\end{array}\right ] $$y aplíquense rotaciones Givens para hacer ceros en las entradas debajo de la diagonal de $A$ y tener una matriz **triangular superior**. **Entrada $a_{21}$, plano $(1,2)$:**
###Code
idx_1 = 0
idx_2 = 1
idx_column = 0
A = np.array([[4, 1, -2, 2],
[1, 2, 0, 1],
[-2, 0, 3, -2],
[2, 1, -2, -1]], dtype=float)
print(A)
a_11 = A[idx_1,idx_column]
a_21 = A[idx_2,idx_column]
norm = math.sqrt(a_11**2 + a_21**2)
cos_theta = a_11/norm
sen_theta = a_21/norm
R12 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R12)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A_subset = np.row_stack((A[idx_1,:], A[idx_2,:]))
print(A_subset)
print(R12@A_subset)
A1_aux = R12@A_subset
print(A1_aux)
###Output
_____no_output_____
###Markdown
Hacemos copia para un fácil manejo de los índices y matrices modificadas. Podríamos también usar [numpy.view](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.view.html).
###Code
A1 = A.copy()
A1[idx_1, :] = A1_aux[0, :]
A1[idx_2, :] = A1_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(1)} = R_{12}^\theta A^{(0)}$.```
###Code
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A1[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{31}$, plano $(1,3)$:**
###Code
idx_1 = 0
idx_2 = 2
idx_column = 0
a_11 = A1[idx_1, idx_column]
a_31 = A1[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_31**2)
cos_theta = a_11/norm
sen_theta = a_31/norm
R13 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R13)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A1_subset = np.row_stack((A1[idx_1,:], A1[idx_2,:]))
print(A1_subset)
print(R13@A1_subset)
A2_aux = R13@A1_subset
print(A2_aux)
A2 = A1.copy()
A2[idx_1, :] = A2_aux[0, :]
A2[idx_2, :] = A2_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(2)} = R_{13}^\theta A^{(1)}$.```
###Code
print(A2)
print(A1)
print(A)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A2[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{41}$, plano $(1,4)$:**
###Code
idx_1 = 0
idx_2 = 3
idx_column = 0
a_11 = A2[idx_1, idx_column]
a_41 = A2[idx_2, idx_column]
norm = math.sqrt(a_11**2 + a_41**2)
cos_theta = a_11/norm
sen_theta = a_41/norm
R14 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R14)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A2_subset = np.row_stack((A2[idx_1,:], A2[idx_2,:]))
print(A2_subset)
print(R14@A2_subset)
A3_aux = R14@A2_subset
print(A3_aux)
A3 = A2.copy()
A3[idx_1, :] = A3_aux[0, :]
A3[idx_2, :] = A3_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(3)} = R_{14}^\theta A^{(2)}$.```
###Code
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,1]$.```
###Code
print(np.linalg.norm(A3[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
###Markdown
**Entrada $a_{32}$, plano $(2,3)$:**
###Code
idx_1 = 1
idx_2 = 2
idx_column = 1
a_22 = A3[idx_1, idx_column]  # entradas de la matriz actual A3 (sus renglones 2 y 3 no cambiaron en el paso anterior)
a_32 = A3[idx_2, idx_column]
norm = math.sqrt(a_22**2 + a_32**2)
cos_theta = a_22/norm
sen_theta = a_32/norm
R23 = np.array([[cos_theta, sen_theta],
[-sen_theta, cos_theta]])
print(R23)
###Output
_____no_output_____
###Markdown
```{margin}Extraemos sólo los renglones a los que se les aplicará la matriz de rotación.```
###Code
A3_subset = np.row_stack((A3[idx_1,:], A3[idx_2,:]))
print(A3_subset)
print(R23@A3_subset)
A4_aux = R23@A3_subset
print(A4_aux)
A4 = A3.copy()
A4[idx_1, :] = A4_aux[0, :]
A4[idx_2, :] = A4_aux[1, :]
###Output
_____no_output_____
###Markdown
```{margin} $A^{(4)} = R_{23}^\theta A^{(3)}$.```
###Code
print(A4)
print(A3)
print(A2)
###Output
_____no_output_____
###Markdown
```{margin}Se preserva la norma 2 o Euclidiana de $A[1:4,2]$.```
###Code
print(np.linalg.norm(A4[:, idx_column]))
print(np.linalg.norm(A[:, idx_column]))
###Output
_____no_output_____
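###Markdown
Para completar la forma triangular superior faltan las entradas $a_{42}$ (plano $(2,4)$) y $a_{43}$ (plano $(3,4)$). Un bosquejo mínimo con un ciclo que repite el procedimiento anterior; la comparación en valor absoluto con el factor $R$ de la factorización QR es sólo ilustrativa, no parte de la nota.
###Code
A_triangular = A4.copy()
for i, j, col in [(1, 3, 1), (2, 3, 2)]:  # planos (2,4) y (3,4) en índices de Python
    r = math.sqrt(A_triangular[i, col]**2 + A_triangular[j, col]**2)
    cos_theta = A_triangular[i, col]/r
    sen_theta = A_triangular[j, col]/r
    R_ij = np.array([[cos_theta, sen_theta],
                     [-sen_theta, cos_theta]])
    A_triangular[[i, j], :] = R_ij@A_triangular[[i, j], :]
print(A_triangular)
print(np.abs(A_triangular))
print(np.abs(np.linalg.qr(A)[1]))  # coincide en valor absoluto con el factor R de QR
###Output
_____no_output_____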
###Markdown
(OTBALN)= 2.1 Operaciones y transformaciones básicas del Álgebra Lineal Numérica ```{admonition} Notas para contenedor de docker:Comando de docker para ejecución de la nota de forma local:nota: cambiar `` por la ruta de directorio que se desea mapear a `/datos` dentro del contenedor de docker.`docker run --rm -v :/datos --name jupyterlab_optimizacion -p 8888:8888 -d palmoreck/jupyterlab_optimizacion:2.1.4`password para jupyterlab: `qwerty`Detener el contenedor de docker:`docker stop jupyterlab_optimizacion`Documentación de la imagen de docker `palmoreck/jupyterlab_optimizacion:2.1.4` en [liga](https://github.com/palmoreck/dockerfiles/tree/master/jupyterlab/optimizacion).``` --- Nota generada a partir de [liga1](https://www.dropbox.com/s/fyqwiqasqaa3wlt/3.1.1.Multiplicacion_de_matrices_y_estructura_de_datos.pdf?dl=0), [liga2](https://www.dropbox.com/s/jwu8lu4r14pb7ut/3.2.1.Sistemas_de_ecuaciones_lineales_eliminacion_Gaussiana_y_factorizacion_LU.pdf?dl=0) y [liga3](https://www.dropbox.com/s/s4ch0ww1687pl76/3.2.2.Factorizaciones_matriciales_SVD_Cholesky_QR.pdf?dl=0). ```{admonition} Al final de esta nota el y la lectora::class: tip* Entenderá cómo utilizar transformaciones típicas en el álgebra lineal numérica en la que se basan muchos de los algoritmos del análisis numérico. En específico aprenderá cómo aplicar las transformaciones de Gauss, reflexiones de Householder y rotaciones Givens a vectores y matrices.* Se familizarizará con la notación vectorial y matricial de las operaciones básicas del álgebra lineal numérica.``` Las operaciones básicas del Álgebra Lineal Numérica podemos dividirlas en vectoriales y matriciales. Vectoriales * **Transponer:** $\mathbb{R}^{n \times 1} \rightarrow \mathbb{R} ^{1 \times n}$: $y = x^T$ entonces $x = \left[ \begin{array}{c} x_1 \\ x_2 \\ \vdots \\ x_n \end{array} \right ]$ y se tiene: $y = x^T = [x_1, x_2, \dots, x_n].$ * **Suma:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x + y$ entonces $z_i = x_i + y_i$* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $y = \alpha x$ entonces $y_i = \alpha x_i$.* **Producto interno estándar o producto punto:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}$: $c = x^Ty$ entonces $c = \displaystyle \sum_{i=1}^n x_i y_i$.* **Multiplicación *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x.*y$ entonces $z_i = x_i y_i$.* **División *point wise:*** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^n$: $z = x./y$ entonces $z_i = x_i /y_i$ con $y_i \neq 0$.* **Producto exterior o *outer product*:** $\mathbb{R}^n \times \mathbb{R} ^n \rightarrow \mathbb{R}^{n \times n}$: $A = xy^T$ entonces $A[i, :] = x_i y^T$ con $A[i,:]$ el $i$-ésimo renglón de $A$. 
Matriciales * **Transponer:** $\mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{n \times m}$: $C = A^T$ entonces $c_{ij} = a_{ji}$.* **Sumar:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A + B$ entonces $c_{ij} = a_{ij} + b_{ij}$.* **Multiplicación por un escalar:** $\mathbb{R} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = \alpha A$ entonces $c_{ij} = \alpha a_{ij}$* **Multiplicación por un vector:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{n} \rightarrow \mathbb{R}^{m}$: $y = Ax$ entonces $y_i = \displaystyle \sum_{j=1}^n a_{ij}x_j$.* **Multiplicación entre matrices:** $\mathbb{R}^{m \times k} \times \mathbb{R}^{k \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = AB$ entonces $c_{ij} = \displaystyle \sum_{r=1}^k a_{ir}b_{rj}$. - Para esta necesitaríamos hacer tres ciclos "for" para codificar esta operación. * **Multiplicación *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A.*B$ entonces $c_{ij} = a_{ij}b_{ij}$.* **División *point wise*:** $\mathbb{R}^{m \times n} \times \mathbb{R}^{m \times n} \rightarrow \mathbb{R}^{m \times n}$: $C = A./B$ entonces $c_{ij} = a_{ij}/b_{ij}$ con $b_{ij} \neq 0$. **Como ejemplos de transformaciones básicas del Álgebra Lineal Numérica se encuentran:** (TGAUSS)= Transformaciones de Gauss En esta sección suponemos que $A \in \mathbb{R}^{n \times n}$ y $A$ es una matriz con entradas $a_{ij} \in \mathbb{R}^{n \times n} \forall i,j=1,2,\dots,n$. ```{margin}Como ejemplo de vector canónico tenemos: $e_1=(1,0)^T$ en $\mathbb{R}^2$ o $e_3 = (0,0,1,0,0)$ en $\mathbb{R}^5$.``` Considérese al vector $a \in \mathbb{R}^{n}$ y $e_k \in \mathbb{R}^n$ el $k$-ésimo vector canónico: vector con un $1$ en la posición $k$ y ceros en las entradas restantes. ```{admonition} DefiniciónUna transformación de Gauss está definida de forma general como $L_k = I_n - \ell_ke_k^T$ con $\ell_k = (0,0,\dots,\ell_{k+1,k},\dots,\ell_{n,k})^T$ y $\ell_{i,k}=\frac{a_{ik}}{a_{kk}} \forall i=k+1,\dots,n$.$a_{kk}$ se le nombra **pivote** y **debe ser diferente de cero**.``` Las transformaciones de Gauss se utilizan para hacer ceros por debajo del **pivote**. (EG1)= Ejemplo aplicando transformaciones de Gauss a un vector Considérese al vector $a=(-2,3,4)^T$. Definir una transformación de Gauss para hacer ceros por debajo de $a_1$ y otra transformación de Gauss para hacer cero la entrada $a_3$ **Solución:**
###Code
import numpy as np
import math
np.set_printoptions(precision=3, suppress=True)
###Output
_____no_output_____
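###Markdown
Antes de continuar con la solución, un bosquejo mínimo que ilustra el comentario anterior sobre los tres ciclos *for* para codificar la multiplicación de matrices $C=AB$; las matrices `A_ej` y `B_ej` y sus valores son supuestos, sólo para ejemplificar.
###Code
A_ej = np.array([[1., 2.], [3., 4.]])
B_ej = np.array([[5., 6.], [7., 8.]])
m_ej, k_ej = A_ej.shape
n_ej = B_ej.shape[1]
C_ej = np.zeros((m_ej, n_ej))
for i in range(m_ej):
    for j in range(n_ej):
        for r in range(k_ej):
            C_ej[i, j] += A_ej[i, r]*B_ej[r, j]
print(C_ej)
print(A_ej@B_ej)  # comparación con el producto de numpy
###Output
_____no_output_____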
###Markdown
a)Para hacer ceros por debajo del **pivote** $a_1 = -2$:
###Code
a = np.array([-2,3,4])
pivote = a[0]
###Output
_____no_output_____
###Markdown
```{margin} Recuerda la definición de $\ell_1=(0, \frac{a_2}{a_1}, \frac{a_3}{a_1})^T$```
###Code
l1 = np.array([0,a[1]/pivote, a[2]/pivote])
###Output
_____no_output_____
###Markdown
```{margin}Usamos $e_1$ pues se desea hacer ceros en las entradas debajo de la primera.```
###Code
e1 = np.array([1,0,0])
###Output
_____no_output_____
###Markdown
```{margin}Observa que por la definición de la transformación de Gauss, **no necesitamos construir a la matriz $L_1$, directamente se tiene $L_1a = a - \ell_1 e_1^Ta$.**```
###Code
L1_a = a-l1*(e1.dot(a))
print(L1_a)
###Output
_____no_output_____
###Markdown
A continuación se muestra que el producto $L_1 a$ si se construye $L_1$ es equivalente a lo anterior: ```{margin}$L_1 = I_3 - \ell_1 e_1^T$.```
###Code
L1 = np.eye(3) - np.outer(l1,e1)
print(L1)
print(L1@a)
###Output
_____no_output_____ |
Module4/Module4 - Lab2-Copy1.ipynb | ###Markdown
DAT210x - Programming with Python for DS Module4- Lab2
###Code
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Some Boilerplate Code For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations. We've added some notes to the code in case you're interested in knowing what it's doing: A Note on SKLearn's `.transform()` calls: Any time you perform a transformation on your data, you lose the column header names because the output of SciKit-Learn's `.transform()` method is an NDArray and not a dataframe. This actually makes a lot of sense because there are essentially two types of transformations:- Those that adjust the scale of your features, and- Those that alter the number of features, perhaps even changing their values entirely.An example of adjusting the scale of a feature would be changing centimeters to inches. Changing the feature entirely would be like using PCA to reduce 300 columns to 30. In either case, the original column's units have either been altered or no longer exist at all, so it's up to you to assign names to your columns after any transformation, if you'd like to store the resulting NDArray back into a dataframe.
###Code
def scaleFeaturesDF(df):
# Feature scaling is a type of transformation that only changes the
# scale, but not number of features. Because of this, we can still
# use the original dataset's column names... so long as we keep in
# mind that the _units_ have been altered:
scaled = preprocessing.StandardScaler().fit_transform(df)
scaled = pd.DataFrame(scaled, columns=df.columns)
print("New Variances:\n", scaled.var())
print("New Describe:\n", scaled.describe())
return scaled
###Output
_____no_output_____
###Markdown
SKLearn contains many methods for transforming your features by scaling them (a type of pre-processing): - `RobustScaler` - `Normalizer` - `MinMaxScaler` - `MaxAbsScaler` - `StandardScaler` - ...http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing However, in order to be effective at PCA, there are a few requirements that must be met, and which will drive the selection of your scaler. PCA requires that your data be standardized -- in other words, its _mean_ should equal 0, and it should have unit variance. SKLearn's regular `Normalizer()` doesn't zero out the mean of your data, it only clamps it, so it could be inappropriate to use depending on your data. `MinMaxScaler` and `MaxAbsScaler` both fail to set a unit variance, so you won't be using them here either. `RobustScaler` can work, again depending on your data (watch for outliers!). So for this assignment, you're going to use the `StandardScaler`. Get familiar with it by visiting these two websites:- http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler- http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler Lastly, some code to help with visualizations:
###Code
def drawVectors(transformed_features, components_, columns, plt, scaled):
if not scaled:
return plt.axes() # No cheating ;-)
num_columns = len(columns)
# This function will project your *original* feature (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## visualize projections
# Sort each column by its length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75)
return ax
###Output
_____no_output_____
###Markdown
And Now, The Assignment
###Code
# Do * NOT * alter this line, until instructed!
scaleFeatures = True
###Output
_____no_output_____
###Markdown
Load up the dataset specified on the lab instructions page and remove any and all _rows_ that have a NaN in them. You should be a pro at this by now ;-)**QUESTION**: Should the `id` column be included in your dataset as a feature?
###Code
# .. your code here ..
df = pd.read_csv('Datasets/kidney_disease.csv')
labels = ['red' if i=='ckd' else 'green' for i in df.classification]
df = df.drop(labels=['id', 'classification'], axis=1)
df = df.dropna(axis=0)
df = df.reset_index(drop=True)
df.pcv = pd.to_numeric(df.pcv, errors='coerce')
df.wc = pd.to_numeric(df.wc, errors='coerce')
df.rc = pd.to_numeric(df.rc, errors='coerce')
df = pd.get_dummies(df,columns=['rbc', 'pc', 'pcc', 'ba', 'htn', 'dm', 'cad', 'appet', 'pe', 'ane'])
df.dtypes
# df
###Output
_____no_output_____
###Markdown
Let's build some color-coded labels; the actual label feature will be removed prior to executing PCA, since it's unsupervised. You're only labeling by color so you can see the effects of PCA: Use an indexer to select only the following columns: `['bgr','wc','rc']`
###Code
# .. your code here ..
df1 = df[['bgr','wc','rc']]
df1
###Output
_____no_output_____
###Markdown
Either take a look at the dataset's webpage, in the attribute info section of UCI's [Chronic Kidney Disease](https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease) page, or alternatively, you can actually look at the first few rows of your dataframe using `.head()`. What kind of data type should these three columns be? Compare what you see with the results when you print out your dataframe's `dtypes`. If Pandas did not properly detect and convert your columns to the data types you expected, use an appropriate command to coerce these features to the right type.
###Code
# .. your code here ..
df1.dtypes
###Output
_____no_output_____
###Markdown
PCA operates based on variance. The variable with the greatest variance will dominate. Examine your data using a command that will check the variance of every feature in your dataset, and then print out the results. Also print out the results of running `.describe()` on your dataset. _Hint:_ If you do not see all three variables: `'bgr'`, `'wc'`, and `'rc'`, then it's likely you did not complete the previous step properly.
###Code
# .. your code here ..
df = df1
print(df.var())
print(df.describe())
###Output
_____no_output_____
###Markdown
Below, we assume your dataframe's variable is named `df`. If it isn't, make the appropriate changes. But do not alter the code in `scaleFeaturesDF()` just yet!
###Code
# .. your (possible) code adjustment here ..
if scaleFeatures: df = scaleFeaturesDF(df)
###Output
('New Variances:\n', bgr 1.006369
wc 1.006369
rc 1.006369
dtype: float64)
('New Describe:\n', bgr wc rc
count 1.580000e+02 1.580000e+02 1.580000e+02
mean -9.755075e-17 9.345548e-17 1.068063e-16
std 1.003180e+00 1.003180e+00 1.003180e+00
min -9.475974e-01 -1.500159e+00 -2.747446e+00
25% -5.305059e-01 -6.259123e-01 -3.855519e-01
50% -2.447210e-01 -2.168611e-01 5.730335e-02
75% 6.306235e-03 4.167672e-01 6.969831e-01
max 5.540492e+00 5.750474e+00 3.058878e+00)
###Markdown
Run PCA on your dataset, reducing it to 2 principal components. Make sure your PCA model is saved in a variable called `'pca'`, and that the results of your transformation are saved in another variable `'T'`:
###Code
# .. your code here ..
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
pca.fit(df)
T = pca.transform(df)
###Output
_____no_output_____
###Markdown
Now, plot the transformed data as a scatter plot. Recall that transforming the data will result in a NumPy NDArray. You can either use MatPlotLib to graph it directly, or you can convert it back to DataFrame and have Pandas do it for you.Since we've already demonstrated how to plot directly with MatPlotLib in `Module4/assignment1.ipynb`, this time we'll show you how to convert your transformed data back into to a Pandas Dataframe and have Pandas plot it from there.
###Code
# Since we transformed via PCA, we no longer have column names; but we know we
# are in `principal-component` space, so we'll just define the coordinates accordingly:
ax = drawVectors(T, pca.components_, df.columns.values, plt, scaleFeatures)
T = pd.DataFrame(T)
T.columns = ['component1', 'component2']
T.plot.scatter(x='component1', y='component2', marker='o', c=labels, alpha=0.75, ax=ax)
plt.show()
###Output
('Features by importance:\n', [(3.9998071556884813, 'wc'), (3.2588876641210884, 'bgr'), (3.009752752998365, 'rc')])
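###Markdown
As a quick follow-up (a minimal sketch, reusing the fitted `pca` object from above), we can check how much of the total variance the two principal components retain via the `explained_variance_ratio_` attribute:
###Code
# Proportion of variance captured by each principal component, and in total
print(pca.explained_variance_ratio_)
print(pca.explained_variance_ratio_.sum())
###Output
_____no_output_____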
###Markdown
DAT210x - Programming with Python for DS Module4- Lab2
###Code
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Some Boilerplate Code For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations. We've added some notes to the code in case you're interested in knowing what it's doing: A Note on SKLearn's `.transform()` calls: Any time you perform a transformation on your data, you lose the column header names because the output of SciKit-Learn's `.transform()` method is an NDArray and not a dataframe. This actually makes a lot of sense because there are essentially two types of transformations:- Those that adjust the scale of your features, and- Those that alter the number of features, perhaps even changing their values entirely.An example of adjusting the scale of a feature would be changing centimeters to inches. Changing the feature entirely would be like using PCA to reduce 300 columns to 30. In either case, the original column's units have either been altered or no longer exist at all, so it's up to you to assign names to your columns after any transformation, if you'd like to store the resulting NDArray back into a dataframe.
###Code
def scaleFeaturesDF(df):
# Feature scaling is a type of transformation that only changes the
# scale, but not number of features. Because of this, we can still
# use the original dataset's column names... so long as we keep in
# mind that the _units_ have been altered:
scaled = preprocessing.StandardScaler().fit_transform(df)
scaled = pd.DataFrame(scaled, columns=df.columns)
print("New Variances:\n", scaled.var())
print("New Describe:\n", scaled.describe())
return scaled
###Output
_____no_output_____
###Markdown
SKLearn contains many methods for transforming your features by scaling them (a type of pre-processing): - `RobustScaler` - `Normalizer` - `MinMaxScaler` - `MaxAbsScaler` - `StandardScaler` - ...http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing However, in order for PCA to be effective, there are a few requirements that must be met, and they will drive the selection of your scaler. PCA requires that your data be standardized -- in other words, its _mean_ should equal 0, and it should have unit variance. SKLearn's regular `Normalizer()` doesn't zero out the mean of your data, it only clamps it, so it could be inappropriate to use depending on your data. `MinMaxScaler` and `MaxAbsScaler` both fail to set a unit variance, so you won't be using them here either. `RobustScaler` can work, again depending on your data (watch for outliers!). So for this assignment, you're going to use the `StandardScaler`. Get familiar with it by visiting these two websites:- http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler- http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler Lastly, some code to help with visualizations:
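First, though, a quick illustrative check of the scaler claims above. The array below is a small random matrix (not the lab dataset), and the point is simply that `StandardScaler` is the one that gives you zero mean and unit variance per column:
###Code
# Illustrative sketch only -- random demo data, not the kidney-disease dataset
import numpy as np
from sklearn.preprocessing import StandardScaler, MinMaxScaler, Normalizer
demo = np.random.RandomState(0).rand(100, 3) * [1, 10, 100]   # three features on very different scales
for scaler in [StandardScaler(), MinMaxScaler(), Normalizer()]:
    out = scaler.fit_transform(demo)
    print(type(scaler).__name__,
          "-> column means:", out.mean(axis=0).round(2),
          "column variances:", out.var(axis=0).round(2))
###Output
_____no_output_____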
###Code
def drawVectors(transformed_features, components_, columns, plt, scaled):
if not scaled:
return plt.axes() # No cheating ;-)
num_columns = len(columns)
# This function will project your *original* features (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## visualize projections
# Sort each column by its length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75)
return ax
###Output
_____no_output_____
###Markdown
And Now, The Assignment
###Code
# Do * NOT * alter this line, until instructed!
scaleFeatures = True
###Output
_____no_output_____
###Markdown
Load up the dataset specified on the lab instructions page and remove any and all _rows_ that have a NaN in them. You should be a pro at this by now ;-)**QUESTION**: Should the `id` column be included in your dataset as a feature?
###Code
import pandas as pd
df = pd.read_csv('Datasets/kidney_disease.csv')
df = df.dropna()
df.head()
###Output
_____no_output_____
###Markdown
Let's build some color-coded labels; the actual label feature will be removed prior to executing PCA, since it's unsupervised. You're only labeling by color so you can see the effects of PCA:
###Code
labels = ['red' if i=='ckd' else 'green' for i in df.classification]
###Output
_____no_output_____
###Markdown
Use an indexer to select only the following columns: `['bgr','wc','rc']`
###Code
df = df.loc[:,['bgr','wc','rc']]
###Output
_____no_output_____
###Markdown
Either take a look at the attribute info section of UCI's [Chronic Kidney Disease](https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease) page, or alternatively, you can actually look at the first few rows of your dataframe using `.head()`. What kind of data type should these three columns be? Compare what you see with the results when you print out your dataframe's `dtypes`. If Pandas did not properly detect and convert your columns to the data types you expected, use an appropriate command to coerce these features to the right type.
###Code
df['wc'] = pd.to_numeric(df['wc'], errors='ignore')
df['rc'] = pd.to_numeric(df['rc'], errors='ignore')
###Output
_____no_output_____
###Markdown
PCA operates based on variance. The variable with the greatest variance will dominate. Examine your data using a command that will check the variance of every feature in your dataset, and then print out the results. Also print out the results of running `.describe` on your dataset. _Hint:_ If you do not see all three variables: `'bgr'`, `'wc'`, and `'rc'`, then you probably did not complete the previous step properly.
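Before checking the lab data, here is a tiny synthetic illustration of that point (made-up numbers, not the kidney-disease dataset): without scaling, the first principal component lines up almost entirely with the high-variance column, while after standardizing both features contribute.
###Code
# Illustrative sketch only -- synthetic, correlated data with one huge-variance feature
import numpy as np
from sklearn.decomposition import PCA
rng = np.random.RandomState(1)
signal = rng.normal(0, 1, 500)
small = signal + rng.normal(0, 0.5, 500)           # variance around 1
large = 1000 * signal + rng.normal(0, 500, 500)    # same signal, huge variance
toy = np.column_stack([small, large])
print("Unscaled 1st component:", PCA(n_components=1).fit(toy).components_[0].round(3))
standardized = (toy - toy.mean(axis=0)) / toy.std(axis=0)
print("Scaled 1st component:  ", PCA(n_components=1).fit(standardized).components_[0].round(3))
###Output
_____no_output_____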
###Code
df.var()
df.describe()
###Output
_____no_output_____
###Markdown
Below, we assume your dataframe's variable is named `df`. If it isn't, make the appropriate changes. But do not alter the code in `scaleFeaturesDF()` just yet!
###Code
# .. your (possible) code adjustment here ..
if scaleFeatures: df = scaleFeaturesDF(df)
###Output
('New Variances:\n', bgr 1.006369
wc 1.006369
rc 1.006369
dtype: float64)
('New Describe:\n', bgr wc rc
count 1.580000e+02 1.580000e+02 1.580000e+02
mean -9.755075e-17 9.345548e-17 1.068063e-16
std 1.003180e+00 1.003180e+00 1.003180e+00
min -9.475974e-01 -1.500159e+00 -2.747446e+00
25% -5.305059e-01 -6.259123e-01 -3.855519e-01
50% -2.447210e-01 -2.168611e-01 5.730335e-02
75% 6.306235e-03 4.167672e-01 6.969831e-01
max 5.540492e+00 5.750474e+00 3.058878e+00)
###Markdown
Run PCA on your dataset, reducing it to 2 principal components. Make sure your PCA model is saved in a variable called `'pca'`, and that the results of your transformation are saved in another variable `'T'`:
###Code
from sklearn.decomposition import PCA
pca = PCA(svd_solver='full',n_components=2)
pca.fit(df)
T = pca.fit_transform(df)
###Output
_____no_output_____
###Markdown
Now, plot the transformed data as a scatter plot. Recall that transforming the data will result in a NumPy NDArray. You can either use MatPlotLib to graph it directly, or you can convert it back to a DataFrame and have Pandas do it for you. Since we've already demonstrated how to plot directly with MatPlotLib in `Module4/assignment1.ipynb`, this time we'll show you how to convert your transformed data back into a Pandas DataFrame and have Pandas plot it from there.
###Code
# Since we transformed via PCA, we no longer have column names; but we know we
# are in `principal-component` space, so we'll just define the coordinates accordingly:
ax = drawVectors(T, pca.components_, df.columns.values, plt, scaleFeatures)
T = pd.DataFrame(T)
T.columns = ['component1', 'component2']
T.plot.scatter(x='component1', y='component2', marker='o', c=labels, alpha=0.75, ax=ax)
plt.show()
###Output
('Features by importance:\n', [(3.9998071556884867, 'wc'), (3.2588876641210907, 'bgr'), (3.0097527529983648, 'rc')])
###Markdown
DAT210x - Programming with Python for DS Module4- Lab2
###Code
import math
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
from sklearn import preprocessing
# Look pretty...
# matplotlib.style.use('ggplot')
plt.style.use('ggplot')
###Output
_____no_output_____
###Markdown
Some Boilerplate Code For your convenience, we've included some boilerplate code here which will help you out. You aren't expected to know how to write this code on your own at this point, but it'll assist with your visualizations. We've added some notes to the code in case you're interested in knowing what it's doing: A Note on SKLearn's `.transform()` calls: Any time you perform a transformation on your data, you lose the column header names because the output of SciKit-Learn's `.transform()` method is an NDArray and not a dataframe. This actually makes a lot of sense because there are essentially two types of transformations:- Those that adjust the scale of your features, and- Those that alter the number of features, perhaps even changing their values entirely. An example of adjusting the scale of a feature would be changing centimeters to inches. Changing the feature entirely would be like using PCA to reduce 300 columns to 30. In either case, the original column's units have either been altered or no longer exist at all, so it's up to you to assign names to your columns after any transformation, if you'd like to store the resulting NDArray back into a dataframe.
###Code
def scaleFeaturesDF(df):
# Feature scaling is a type of transformation that only changes the
# scale, but not number of features. Because of this, we can still
# use the original dataset's column names... so long as we keep in
# mind that the _units_ have been altered:
scaled = preprocessing.StandardScaler().fit_transform(df)
scaled = pd.DataFrame(scaled, columns=df.columns)
print("New Variances:\n", scaled.var())
print("New Describe:\n", scaled.describe())
return scaled
###Output
_____no_output_____
###Markdown
SKLearn contains many methods for transforming your features by scaling them (a type of pre-processing): - `RobustScaler` - `Normalizer` - `MinMaxScaler` - `MaxAbsScaler` - `StandardScaler` - ...http://scikit-learn.org/stable/modules/classes.html#module-sklearn.preprocessing However, in order for PCA to be effective, there are a few requirements that must be met, and they will drive the selection of your scaler. PCA requires that your data be standardized -- in other words, its _mean_ should equal 0, and it should have unit variance. SKLearn's regular `Normalizer()` doesn't zero out the mean of your data, it only clamps it, so it could be inappropriate to use depending on your data. `MinMaxScaler` and `MaxAbsScaler` both fail to set a unit variance, so you won't be using them here either. `RobustScaler` can work, again depending on your data (watch for outliers!). So for this assignment, you're going to use the `StandardScaler`. Get familiar with it by visiting these two websites:- http://scikit-learn.org/stable/modules/preprocessing.html#preprocessing-scaler- http://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html#sklearn.preprocessing.StandardScaler Lastly, some code to help with visualizations:
###Code
def drawVectors(transformed_features, components_, columns, plt, scaled):
if not scaled:
return plt.axes() # No cheating ;-)
num_columns = len(columns)
# This function will project your *original* features (columns)
# onto your principal component feature-space, so that you can
# visualize how "important" each one was in the
# multi-dimensional scaling
# Scale the principal components by the max value in
# the transformed set belonging to that component
xvector = components_[0] * max(transformed_features[:,0])
yvector = components_[1] * max(transformed_features[:,1])
## visualize projections
# Sort each column by its length. These are your *original*
# columns, not the principal components.
important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
print("Features by importance:\n", important_features)
ax = plt.axes()
for i in range(num_columns):
# Use an arrow to project each original feature as a
# labeled vector on your principal component axes
plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005, head_width=0.02, alpha=0.75)
plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b', alpha=0.75)
return ax
###Output
_____no_output_____
###Markdown
And Now, The Assignment
###Code
# Do * NOT * alter this line, until instructed!
scaleFeatures = True
###Output
_____no_output_____
###Markdown
Load up the dataset specified on the lab instructions page and remove any and all _rows_ that have a NaN in them. You should be a pro at this by now ;-)**QUESTION**: Should the `id` column be included in your dataset as a feature?
###Code
# .. your code here ..
df = pd.read_csv('Datasets/kidney_disease.csv', header = 0, index_col=0)
df = df.dropna()
df
df.describe()
df.dtypes
###Output
_____no_output_____
###Markdown
Let's build some color-coded labels; the actual label feature will be removed prior to executing PCA, since it's unsupervised. You're only labeling by color so you can see the effects of PCA:
###Code
labels = ['red' if i=='ckd' else 'green' for i in df.classification]
###Output
_____no_output_____
###Markdown
Use an indexer to select only the following columns: `['bgr','wc','rc']`
###Code
# .. your code here ..
dfc = df[['bgr','wc','rc']]
dfc
###Output
_____no_output_____
###Markdown
Either take a look at the attribute info section of UCI's [Chronic Kidney Disease](https://archive.ics.uci.edu/ml/datasets/Chronic_Kidney_Disease) page, or alternatively, you can actually look at the first few rows of your dataframe using `.head()`. What kind of data type should these three columns be? Compare what you see with the results when you print out your dataframe's `dtypes`. If Pandas did not properly detect and convert your columns to the data types you expected, use an appropriate command to coerce these features to the right type.
###Code
# .. your code here ..
dfc.loc[:,('wc')] = pd.to_numeric(dfc['wc'], errors='coerce')
dfc.loc[:,('rc')] = pd.to_numeric(dfc['rc'], errors='coerce')
dfc
###Output
C:\ProgramData\Anaconda3\lib\site-packages\pandas\core\indexing.py:477: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
self.obj[item] = s
###Markdown
PCA operates based on variance. The variable with the greatest variance will dominate. Examine your data using a command that will check the variance of every feature in your dataset, and then print out the results. Also print out the results of running `.describe` on your dataset. _Hint:_ If you do not see all three variables: `'bgr'`, `'wc'`, and `'rc'`, then you probably did not complete the previous step properly.
###Code
# .. your code here ..
dfc.describe()
###Output
_____no_output_____
###Markdown
Below, we assume your dataframe's variable is named `df`. If it isn't, make the appropriate changes. But do not alter the code in `scaleFeaturesDF()` just yet!
###Code
# .. your (possible) code adjustment here ..
#if scaleFeatures: df = scaleFeaturesDF(df)
if scaleFeatures: dfc = scaleFeaturesDF(dfc)
dfc.describe()
###Output
_____no_output_____
###Markdown
Run PCA on your dataset, reducing it to 2 principal components. Make sure your PCA model is saved in a variable called `'pca'`, and that the results of your transformation are saved in another variable `'T'`:
###Code
# .. your code here ..
from sklearn.decomposition import PCA
pca = PCA(n_components=2, svd_solver='full')
pca.fit(dfc)
T = pca.transform(dfc)
dT = pd.DataFrame(T)
dT.columns = ['component1', 'component2']
###Output
_____no_output_____
###Markdown
Now, plot the transformed data as a scatter plot. Recall that transforming the data will result in a NumPy NDArray. You can either use MatPlotLib to graph it directly, or you can convert it back to a DataFrame and have Pandas do it for you. Since we've already demonstrated how to plot directly with MatPlotLib in `Module4/assignment1.ipynb`, this time we'll show you how to convert your transformed data back into a Pandas DataFrame and have Pandas plot it from there.
###Code
# Since we transformed via PCA, we no longer have column names; but we know we
# are in `principal-component` space, so we'll just define the coordinates accordingly:
ax = drawVectors(T, pca.components_, dfc.columns.values, plt, scaleFeatures)
T = pd.DataFrame(T)
T.columns = ['component1', 'component2']
T.plot.scatter(x='component1', y='component2', marker='o', c=labels, alpha=0.75, ax=ax)
plt.show()
###Output
Features by importance:
[(3.999807155688483, 'wc'), (3.258887664121087, 'bgr'), (3.009752752998363, 'rc')]
|
berries/notebooks/transfer_learning_tutorial.ipynb | ###Markdown
Transfer Learning for Computer Vision Tutorial==============================================**Author**: Sasank Chilamkurthy. In this tutorial, you will learn how to train a convolutional neural network for image classification using transfer learning. You can read more about transfer learning in the cs231n notes. Quoting these notes: In practice, very few people train an entire Convolutional Network from scratch (with random initialization), because it is relatively rare to have a dataset of sufficient size. Instead, it is common to pretrain a ConvNet on a very large dataset (e.g. ImageNet, which contains 1.2 million images with 1000 categories), and then use the ConvNet either as an initialization or a fixed feature extractor for the task of interest. These two major transfer learning scenarios look as follows:- **Finetuning the convnet**: Instead of random initialization, we initialize the network with a pretrained network, like one trained on the ImageNet 1000-class dataset. The rest of the training looks as usual.- **ConvNet as fixed feature extractor**: Here, we will freeze the weights for all of the network except that of the final fully connected layer. This last fully connected layer is replaced with a new one with random weights and only this layer is trained.
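As a compact, purely illustrative preview of the two scenarios above (both are implemented in full later in this notebook, so nothing here is reused):
###Code
# Sketch only: contrast of the two transfer learning scenarios described above
import torch.nn as nn
from torchvision import models

# Scenario 1: finetuning -- start from pretrained weights, keep every parameter trainable
finetune_net = models.resnet18(pretrained=True)
finetune_net.fc = nn.Linear(finetune_net.fc.in_features, 2)   # new 2-class head

# Scenario 2: fixed feature extractor -- freeze the backbone, train only the new head
frozen_net = models.resnet18(pretrained=True)
for param in frozen_net.parameters():
    param.requires_grad = False
frozen_net.fc = nn.Linear(frozen_net.fc.in_features, 2)       # fresh layer, requires_grad=True by default
###Output
_____no_output_____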
###Code
# License: BSD
# Author: Sasank Chilamkurthy
from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy
plt.ion() # interactive mode
###Output
_____no_output_____
###Markdown
Load Data---------We will use the torchvision and torch.utils.data packages for loading the data. The problem we're going to solve today is to train a model to classify **ants** and **bees**. We have about 120 training images each for ants and bees. There are 75 validation images for each class. Usually, this is a very small dataset to generalize upon, if trained from scratch. Since we are using transfer learning, we should be able to generalize reasonably well. This dataset is a very small subset of ImageNet. Note: Download the data and extract it to the current directory.
###Code
# Data augmentation and normalization for training
# Just normalization for validation
data_transforms = {
'train': transforms.Compose([
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
'val': transforms.Compose([
transforms.Resize(256),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
]),
}
data_dir = '../../input/hymenoptera_data'
image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
data_transforms[x])
for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
shuffle=True, num_workers=4)
for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
###Output
_____no_output_____
###Markdown
Visualize a few images^^^^^^^^^^^^^^^^^^^^^^Let's visualize a few training images so as to understand the data augmentations.
###Code
def imshow(inp, title=None):
"""Imshow for Tensor."""
inp = inp.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
inp = std * inp + mean
inp = np.clip(inp, 0, 1)
plt.imshow(inp)
if title is not None:
plt.title(title)
plt.pause(0.001) # pause a bit so that plots are updated
# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from batch
out = torchvision.utils.make_grid(inputs)
imshow(out, title=[class_names[x] for x in classes])
###Output
_____no_output_____
###Markdown
Training the model------------------Now, let's write a general function to train a model. Here, we will illustrate:- Scheduling the learning rate- Saving the best model. In the following, the parameter ``scheduler`` is an LR scheduler object from ``torch.optim.lr_scheduler``.
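To make the ``scheduler`` parameter concrete, here is a stand-alone sketch of the kind of object that gets passed in; the dummy model and optimizer below are placeholders, and the tutorial constructs its real scheduler the same way a few cells further down.
###Code
# Sketch only: what an LR scheduler object looks like (placeholder model and optimizer)
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

dummy_model = nn.Linear(4, 2)   # placeholder model, illustration only
dummy_optimizer = optim.SGD(dummy_model.parameters(), lr=0.001, momentum=0.9)
# StepLR multiplies the learning rate by gamma every step_size epochs;
# train_model() below calls scheduler.step() once per epoch, in the training phase.
dummy_scheduler = lr_scheduler.StepLR(dummy_optimizer, step_size=7, gamma=0.1)
###Output
_____no_output_____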
###Code
def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
since = time.time()
best_model_wts = copy.deepcopy(model.state_dict())
best_acc = 0.0
for epoch in range(num_epochs):
print('Epoch {}/{}'.format(epoch, num_epochs - 1))
print('-' * 10)
# Each epoch has a training and validation phase
for phase in ['train', 'val']:
if phase == 'train':
model.train() # Set model to training mode
else:
model.eval() # Set model to evaluate mode
running_loss = 0.0
running_corrects = 0
# Iterate over data.
for inputs, labels in dataloaders[phase]:
inputs = inputs.to(device)
labels = labels.to(device)
# zero the parameter gradients
optimizer.zero_grad()
# forward
# track history if only in train
with torch.set_grad_enabled(phase == 'train'):
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)
# backward + optimize only if in training phase
if phase == 'train':
loss.backward()
optimizer.step()
# statistics
running_loss += loss.item() * inputs.size(0)
running_corrects += torch.sum(preds == labels.data)
if phase == 'train':
scheduler.step()
epoch_loss = running_loss / dataset_sizes[phase]
epoch_acc = running_corrects.double() / dataset_sizes[phase]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
phase, epoch_loss, epoch_acc))
# deep copy the model
if phase == 'val' and epoch_acc > best_acc:
best_acc = epoch_acc
best_model_wts = copy.deepcopy(model.state_dict())
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val Acc: {:4f}'.format(best_acc))
# load best model weights
model.load_state_dict(best_model_wts)
return model
###Output
_____no_output_____
###Markdown
Visualizing the model predictions^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^Generic function to display predictions for a few images
###Code
def visualize_model(model, num_images=6):
was_training = model.training
model.eval()
images_so_far = 0
fig = plt.figure()
with torch.no_grad():
for i, (inputs, labels) in enumerate(dataloaders['val']):
inputs = inputs.to(device)
labels = labels.to(device)
outputs = model(inputs)
_, preds = torch.max(outputs, 1)
for j in range(inputs.size()[0]):
images_so_far += 1
ax = plt.subplot(num_images//2, 2, images_so_far)
ax.axis('off')
ax.set_title('predicted: {}'.format(class_names[preds[j]]))
imshow(inputs.cpu().data[j])
if images_so_far == num_images:
model.train(mode=was_training)
return
model.train(mode=was_training)
###Output
_____no_output_____
###Markdown
Finetuning the convnet----------------------Load a pretrained model and reset final fully connected layer.
###Code
model_ft = models.resnet18(pretrained=True)
num_ftrs = model_ft.fc.in_features
# Here the size of each output sample is set to 2.
# Alternatively, it can be generalized to nn.Linear(num_ftrs, len(class_names)).
model_ft.fc = nn.Linear(num_ftrs, 2)
model_ft = model_ft.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
###Output
_____no_output_____
###Markdown
Train and evaluate^^^^^^^^^^^^^^^^^^It should take around 15-25 min on CPU. On GPU though, it takes less than a minute.
###Code
model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler,
num_epochs=25)
visualize_model(model_ft)
###Output
_____no_output_____
###Markdown
ConvNet as fixed feature extractor----------------------------------Here, we need to freeze all of the network except the final layer. We need to set ``requires_grad == False`` to freeze the parameters so that the gradients are not computed in ``backward()``. You can read more about this in the documentation.
###Code
model_conv = torchvision.models.resnet18(pretrained=True)
for param in model_conv.parameters():
param.requires_grad = False
# Parameters of newly constructed modules have requires_grad=True by default
num_ftrs = model_conv.fc.in_features
model_conv.fc = nn.Linear(num_ftrs, 2)
model_conv = model_conv.to(device)
criterion = nn.CrossEntropyLoss()
# Observe that only parameters of final layer are being optimized as
# opposed to before.
optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)
# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
###Output
_____no_output_____
###Markdown
Train and evaluate^^^^^^^^^^^^^^^^^^On CPU this will take about half the time compared to the previous scenario. This is expected as gradients don't need to be computed for most of the network. However, forward does need to be computed.
###Code
model_conv = train_model(model_conv, criterion, optimizer_conv,
exp_lr_scheduler, num_epochs=25)
visualize_model(model_conv)
plt.ioff()
plt.show()
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/save-the-energy-for-the-future-1-detailed-eda-checkpoint.ipynb | ###Markdown
Image source: [technotification](https://www.technotification.com/2018/09/amazing-renewable-energy-ideas.html)This notebook aims to predict a building's energy consumption over 2017 and 2018 using the data from 2016 in 4 different consumption categories (electricity, chilled water, steam, hot water) using ASHRAE data, which is our problem statement as well. This is a *supervised machine learning model*, meaning that based on the columns available in the datasets and data from 2016, we are going to train the model to predict the energy consumption of a building in each category. Since consumption values are labeled as `meter_reading` and they are continuous, we are going to apply *regression techniques* to generate predictions on meter_reading. It is a highly debated and popular competition on Kaggle currently, however my main motivation is to contribute to making buildings energy-efficient by estimating their energy consumption. It seemed like a good start to save our energy for the future!There will be 3 notebooks covering the complete machine learning model-building pipeline. This notebook will focus on parts 1 and 2 and provide information about the datasets with a detailed EDA.**1) Understand, Clean and Format Data****2) Exploratory Data Analysis**3) Feature Engineering & Selection4) Compare Several Machine Learning Models5) Perform Hyperparameter Tuning and Cross Validation6) Evaluate Model with Test Data7) Interpret Model Results8) Submissions & Summary & Conclusions* [Notebook 2](https://www.kaggle.com/cereniyim/save-the-energy-for-the-future-2-fe-lightgbm) will cover 3, 4 and 5, focusing on building the optimal machine learning model.* [Notebook 3](https://www.kaggle.com/cereniyim/save-the-energy-for-the-future-3-predictions) will cover 6, 7 and 8, focusing on generating the predictions with the best model and a summary for the whole project. Machine learning model building is not a linear, one-time process. The steps above enable me to follow a structured way for an end-to-end machine learning project flow and to prepare for each step ahead. All in all, steps might be modified or revisited according to findings. You can use the table of contents to navigate to each section and visual 👇Enjoy reading ! Table of Contents - 1. Understand, Clean and Format Data - 1.1. Load data into dataframes - 1.2. Reduce the memory size - 1.3. Information about the training datasets - 1.3.1. Building dataset - 1.3.2. Weather_train dataset - 1.3.3. Train dataset - 1.4. Information about the test datasets - 1.4.1 Test dataset - 1.4.2 Weather_test - 1.5. Findings from Understand, Clean, Format Data - 2. Exploratory Data Analysis - 2.1. Distribution of meter reading - 2.1.1. Consolidated distribution of meter reading - 2.1.2. Consolidated distribution of positive meter reading values - 2.1.3. Distribution of meter reading among different meter categories - 2.1.4. Distribution of positive meter reading values among different meter categories - 2.1.5. Average daily meter reading values over 2016 - 2.2. Meter reading VS weather_train data - 2.2.1. Prepare & merge dataframes - 2.2.2. Average daily weather variable values over 2016 - 2.2.3. Pairplot of meter reading vs weather data - 2.3. Meter reading VS building data categorical features - 2.3.1. Prepare & merge dataframes - 2.3.2. Meter reading distribution among primary uses - 2.3.3. Meter reading distribution among site id as violinplot - 2.4. Meter reading VS building data continuous features as scatterplots - 2.4.1. 
Scatter plot of meter reading VS square feet - 2.4.2. Scatter plot of meter reading VS age of the building - 2.4.3. Scatter plot of meter reading VS floor count - 2.5. Findings from exploratory data analysis - Conclusions **Imports**: I will use numpy and pandas for data munging and manipulation. For the visualizations I will explore some [plotly](https://plot.ly/python/) in this project and create interactive visuals where possible.
###Code
# for data manipulation
import numpy as np
import pandas as pd
import pandas_profiling as pp
pd.set_option('display.max_columns', 50)
pd.set_option('display.float_format', lambda x: '%.3f' % x)
# for date manipulation
from datetime import datetime
# for visualization: matplotlib
from matplotlib import pyplot as plt
from IPython.core.pylabtools import figsize
%matplotlib inline
# to display visuals in the notebook
# for visualization: seaborn
import seaborn as sns
sns.set_context(font_scale=2)
# for visualization: plotly
import plotly.offline as py
py.init_notebook_mode(connected=True)
import plotly.graph_objects as go
import plotly.express as px
import plotly.figure_factory as ff
from plotly.subplots import make_subplots
from plotly.offline import iplot
# to cleanup memory usage
import gc
# Input data files are available in the "../input/" directory.
# For example, running this (by clicking run or pressing Shift+Enter) will list all files under the input directory
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
# Any results you write to the current directory are saved as output.
###Output
_____no_output_____
###Markdown
1. Understand, Clean and Format Data Back to top The very first observation is that the training and test data span 5 different csv files. If you look at the [data tab of the competition](https://www.kaggle.com/c/ashrae-energy-prediction/data) you will see that:- `train.csv`, `test.csv`, `weather_train.csv` and `weather_test.csv` are time-series data, with hourly measurements.- `building_metadata.csv` contains the characteristics of a building such as: - site id of the building - primary use - square feet - year built- In the weather datasets, there are features related to wind, clouds, temperature and pressure.- The weather_train dataset is measured from 1 Jan, 2016 to 1 Jan, 2017.- The weather_test dataset spans from 1 Jan, 2017 to 1 Jan, 2019. So, using 1 year of data, we are going to predict the following 2 years of energy consumption of a building. Looking at `test.csv` and `sample_submission.csv`, predictions will be based on:- building_id- meter (energy consumption category)- timestamp 1.1. Load data into dataframes Back to top Time series data will be loaded by parsing the `timestamp` column, so that it is formatted as a datetime data type and used as the index.
###Code
# path
path = "/kaggle/input/ashrae-energy-prediction"
# train data
building = pd.read_csv(path + "/building_metadata.csv")
weather_train = pd.read_csv(path + "/weather_train.csv",
index_col=1, parse_dates = True)
train = pd.read_csv( path + "/train.csv",
index_col=2, parse_dates = True)
# look at the number of rows and columns
print('Size of the building dataset is', building.shape)
print('Size of the weather_train dataset is', weather_train.shape)
print('Size of the train dataset is', train.shape)
# test data
weather_test = pd.read_csv(path + "/weather_test.csv",
index_col=1, parse_dates = True)
test = pd.read_csv(path + "/test.csv",
index_col=3, parse_dates = True)
# submission data
sample_submission = pd.read_csv( path + "/sample_submission.csv")
# look at the number of rows and columns
print('Size of the weather_test dataset is', weather_test.shape)
print('Size of the test dataset is', test.shape)
print('Size of the sample_submission is', sample_submission.shape)
del sample_submission
gc.collect()
###Output
_____no_output_____
###Markdown
We are dealing with some big datasets here (20 and 40 million rows). We have 41 million rows to predict with the built model. To save some space in memory, I am going to delete unused dataframes and use a function built as part of this [popular notebook](https://www.kaggle.com/caesarlupum/ashrae-start-here-a-gentle-introduction#2.-Imports-) to reduce the memory usage of the datasets. 1.2. Reduce the memory size Back to top This function converts data types in such a way that they allocate less space in memory. Then it reports the size of the reduction.
###Code
## Function to reduce the DF size
def reduce_memory_usage(df, verbose=True):
numerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']
start_mem = df.memory_usage().sum() / 1024**2
for col in df.columns:
col_type = df[col].dtypes
if col_type in numerics:
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
df[col] = df[col].astype(np.float64)
end_mem = df.memory_usage().sum() / 1024**2
if verbose: print('Mem. usage decreased to {:5.2f} Mb ({:.1f}% reduction)'.format(end_mem, 100 * (start_mem - end_mem) / start_mem))
return df
reduce_memory_usage(building)
reduce_memory_usage(weather_train)
reduce_memory_usage(train)
reduce_memory_usage(weather_test)
reduce_memory_usage(test)
###Output
_____no_output_____
###Markdown
1.3. Information about the training datasets Back to top Since there are 3 csv files, I will use pandas_profiling to get a quick glance of the data for the datasets with less than 1 million rows. Pandas_profiling is a great library to display information about * Essentials* Quantile statistics* Descriptive statistics* Most frequent values* Histogram* Correlations (it even rejects a column if a collinear correlation is found) and it provides a Sample consisting of the first and last rows. Further details about the dataset can be observed by clicking on each tab and the Toggle Details buttons per column. The best part of pandas profiling: it delivers a whole bunch of information with just one line of code! If you want to dive deeper into pandas_profiling, you can check their [GitHub page](https://pandas-profiling.github.io/pandas-profiling/docs/). 1.3.1. Building dataset Back to top
###Code
pp.ProfileReport(building)
###Output
_____no_output_____
###Markdown
**Summary from the Report:*** `Building_id` is the primary key for this dataset.* Observations are coming from 1448 buildings.* `building_id` and `site_id` are collinear features.* More than 50% of the values are missing in the `floor_count` and `year_built` columns, which is easily seen in the first rows of the data.* Except for `primary_use`, all of the columns are numeric.* The `primary_use` column is categorical, most of the values being Education, Office and Public Assembly.* Looking at the `floor_count` histogram, data is gathered mostly from 1 to 5 floor buildings. Whether to keep or drop this column will be decided after looking at how this feature contributes to determining meter_reading.* Looking at the `square_feet` histogram, most of the buildings are smaller than 200,000 square feet. There are a few extremely large buildings: more than 800,000 square feet.* We have buildings of every age: from 2-year-old buildings to 100-year-old buildings.
###Code
building.sort_values(by="square_feet", ascending=True).tail()
###Output
_____no_output_____
###Markdown
Those extremely large buildings that lack year_built or floor_count values and can be named as outliers. I am going to determine how to handle outliers at the end of this notebook. 1.3.2. Weather_train dataset Back to top
###Code
pp.ProfileReport(weather_train)
###Output
_____no_output_____
###Markdown
**Summary from the report:*** Since this is time-series data, we have many more observations (139,772) than in the building dataset.* Except for timestamp, all of the features are numeric and in the following units and ranges:
| | Feature | Range | Description |
|---|---|---|---|
| 1 | air_temperature | -28 to 47 degrees Celsius | |
| 2 | cloud_coverage | 0 to 9 oktas | Portion of the sky covered in clouds |
| 3 | dew_temperature | -35 to 26 degrees Celsius | The temperature at which the air can no longer "hold" all of the water vapor. The dew temperature is always lower than (or equal to) the air temperature. |
| 4 | precip_depth_1_hr | -1 to 343 millimeters | The amount of rain, snow, hail, etc., that has fallen at a given place within a given period. |
| 5 | sea_level_pressure | 968 to 1046 millibar/hectopascals | The average atmospheric pressure at mean sea level |
| 6 | wind_direction | 0 to 360 degrees | Direction that the wind comes from |
| 7 | wind_speed | 0 to 19 meters per second | |
* We have an equal number of samples coming from each of the 16 different sites.* It seems there are some extreme observations in this dataset, which can be observed by clicking Toggle Details and the extreme values tab.* `cloud_coverage`, `precip_depth_1_hr`, `sea_level_pressure` and `wind_direction` have significantly high missing values. 1.3.3. Train dataset Back to top This is the biggest dataset amongst the training datasets, so pandas_profiling would take much longer to run on it. I will use handy pandas exploration functions to explore that dataset.
###Code
train.info()
print("Number of missing values in the train dataset")
train.isna().sum()
train.describe(include="all")
###Output
_____no_output_____
###Markdown
* `meter_reading` is the target that we are trying to predict.* `meter` is the meter category, representing: * 0: electricity * 1: chilled water * 2: steam * 3: hot water* `meter_reading` values range between 0 and 22 million. We are going to take a closer look at the max value and investigate the reasons behind it.* We don't have any missing values in the train dataset.
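As a quick illustrative check, the split of readings across these four categories can be inspected with the code-to-name mapping below (the same mapping is reused later in the notebook for plotting):
###Code
# illustrative check of the meter categories described above
meter_names = {0: 'electricity', 1: 'chilled water', 2: 'steam', 3: 'hot water'}
train['meter'].map(meter_names).value_counts()
###Output
_____no_output_____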
###Code
train.head()
###Output
_____no_output_____
###Markdown
1.4. Information about the test datasets Back to top 1.4.1. Test dataset
###Code
test.describe(include="all")
###Output
_____no_output_____
###Markdown
1.4.2 Weather_test Back to top
###Code
pp.ProfileReport(weather_test)
###Output
_____no_output_____
###Markdown
Weather_test's column values and ranges are consistent with the weather_train's column values and ranges.
###Code
del weather_test, test
gc.collect()
###Output
_____no_output_____
###Markdown
1.5. Findings from Understand, Clean, Format Data Back to top Relationships between tables:- building_id is the primary key of the building dataset.- building_id is the foreign key for the train dataset.- site_id is the foreign key for the weather_train dataset.---------------------------------------------------------In total, there are 15 unique columns in all training datasets excluding the target column. Those columns are self-explanatory and clear. Excluding timestamp, the rest are potential candidates for the feature set. `meter_reading` is the target we are trying to predict. `meter_reading` represents the energy consumption of a building over a 1-hour period in different meter reading categories. `timestamp` columns are converted to date-time format and it seems measurements are recorded in hourly periods. Although we have missing values and outliers in the training datasets, I will keep them all for now, and decide how to handle them at the end of the notebook. There are nearly 20 million `meter_reading` values in the train dataset, observed among 1448 buildings between 1 Jan, 2016 and 1 Jan, 2017; most of them being electricity meter observations. There are 40 million `meter_reading` values in the test dataset observed among the same buildings between 1 Jan, 2017 and 1 Jan, 2019. 2. Exploratory Data Analysis I will run a detailed exploratory data analysis by visualizing trends, correlations and distributions between target and feature variables. Thanks to the pandas profiling I have already observed the single-variable distributions of the building and weather_train columns. So, I will start by looking at the high-level distribution of meter reading values. 2.1. Distribution of meter reading Meter reading values will be visualized without any categories, among meter categories and as time-series data. 2.1.1. Consolidated distribution of meter reading Back to top
###Code
# set the plot size
figsize(12,10)
# set the histogram, mean and median
sns.distplot(train['meter_reading'],
kde=True)
plt.axvline(x=train.meter_reading.mean(),
linewidth=3, color='g', label="mean", alpha=0.5)
plt.axvline(x=train.meter_reading.median(),
linewidth=3, color='y', label="median", alpha=0.5)
# set title, legends and labels
plt.title("Distribution of Meter Reading", size=14)
plt.legend(["mean", "median"])
###Output
_____no_output_____
###Markdown
Our very first plot already conveys some information: meter reading values are highly skewed. Recall that meter_reading values range between 0 and 22 million; this picture shows that a high percentage of them are gathered around zero. Unfortunately, due to this high skewness it is impossible to visualize the raw meter reading values in a histogram. Because of this wide range and highly skewed data, the natural log(1 + meter_reading) will be visualized using [np.log1p](https://docs.scipy.org/doc/numpy/reference/generated/numpy.log1p.html). Since the natural logarithm of zero is minus infinity (a real number to the power of some real number is never 0), np.log1p maps 0 measurements to log(0 + 1) = 0, which keeps them in the visualization.
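A tiny sanity check of that transformation (illustrative only): `np.log1p(x)` computes log(1 + x), so a 0 kWh reading maps to 0 on the log scale, whereas plain `np.log(0)` would give minus infinity.
###Code
# quick check of why np.log1p is preferred over np.log here
print(np.log1p(0))                    # 0.0 -- zero readings stay on the plot
print(np.log1p([1, 100, 22000000]))   # grows slowly, taming the huge range of meter_reading
###Output
_____no_output_____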
###Code
# set the plot size
figsize(12,10)
# set the histogram, mean and median
sns.distplot(np.log1p(train['meter_reading']),
kde=True)
plt.axvline(x=np.log1p(train.meter_reading.mean()),
linewidth=3, color='g', label="mean", alpha=0.5)
plt.axvline(x=np.log1p(train.meter_reading.median()),
linewidth=3, color='y', label="median", alpha=0.5)
# set title, legends and labels
plt.title("Distribution of Logarithm(Meter Reading + 1) ", size=14)
plt.legend(["mean", "median"])
###Output
_____no_output_____
###Markdown
The distribution of logarithm(meter reading + 1) values, regardless of the category, is right-skewed. The median value is smaller than the mean value, proving this skewness. This skewness is caused by the significantly high number of 0 and 1 measurements in the `meter_reading`. Let's look at the definition to see if we can get a logical explanation for the 0 measurements:> They are the Energy consumption in kWh (or equivalent). This is the real data with measurement error, which we expect will impose a baseline level of modeling error. The explanation implies this is real data with some errors; due to this error, there may be some missed observations in the `meter_reading`. Thus, a high number of 0 meter reading values shows not only zero consumption but may also indicate some missing data in the `meter_reading`. Moving on with this suspicion, let's see what the distribution looks like if we exclude 0 measurements from the dataset. I think it is impossible for a building to consume 0 kWh of energy at a given time; every office or home has at least some fridges and other appliances running all the time. Thus, I will visualize meter reading excluding 0 measurements.
###Code
# create dataframe excluding 0 measurements of meter_reading and take the natural logarithm
# np.log is used this time because we don't have 0 values in the meter_reading
positive_train = train[train['meter_reading'] != 0]
positive_train['log_meter_reading'] = np.log(positive_train['meter_reading'])
# set the plot size
figsize(12,10)
# set the histogram, mean and median
sns.distplot(positive_train['log_meter_reading'],
kde=True)
plt.axvline(x=positive_train['log_meter_reading'].mean(),
linewidth=3, color='g', label="mean", alpha=0.5)
plt.axvline(x=positive_train['log_meter_reading'].median(),
linewidth=3, color='y', label="median", alpha=0.5)
# set title, legends and labels
plt.title("Distribution of Logarithm(Meter Reading) w/o 0 Measurements", size=14)
plt.legend(["mean", "median"])
###Output
_____no_output_____
###Markdown
Now, I have more concrete evidence that the 0 observations represent missing values. Moreover, this is the picture I expected to see in the first place: after dropping the zero values and taking the logarithm of the meter reading values, the distribution is almost perfectly normal. Data is centered around the mean, and the mean and median values are equal to each other. Taking the logarithm helped to lower the variance. Some immediately observed outliers for log_meter_reading are around -7 to -5 and greater than 10. Let's have a closer look at the outliers:
###Code
def outlier_function(df, col_name):
''' this function detects first and third quartile and interquartile range for a given column of a dataframe
then calculates upper and lower limits to determine outliers conservatively
returns the lower limit, the upper limit, and the number of outliers, respectively
'''
first_quartile = np.percentile(
np.array(df[col_name].tolist()), 25)
third_quartile = np.percentile(
np.array(df[col_name].tolist()), 75)
IQR = third_quartile - first_quartile
upper_limit = third_quartile+(3*IQR)
lower_limit = first_quartile-(3*IQR)
outlier_count = 0
for value in df[col_name].tolist():
if (value < lower_limit) | (value > upper_limit):
outlier_count +=1
return lower_limit, upper_limit, outlier_count
# percentage of outliers in the meter_reading
print("{} percent of {} are outliers."
.format((
(100 * outlier_function(train, 'meter_reading')[2])
/ len(train['meter_reading'])),
'meter_reading'))
train['meter_reading'].sort_values().tail()
positive_train['meter_reading'].sort_values().head()
###Output
_____no_output_____
###Markdown
Although I excluded 0s, there are still plenty of data points which are very close to 0.Out of 20 million observations 8% are detected with the logic of determining [extreme outliers](https://people.richland.edu/james/lecture/m170/ch03-pos.html). We are going to decide what to do with the outliers in the target data at the end of the notebook. 2.1.3. Distribution of meter reading among different meter categories Back to top
###Code
# distribution of the meter reading in meters without zeros
figsize(12,10)
#list of different meters
meters = sorted(train['meter'].unique().tolist())
# plot meter_reading distribution for each meter
for meter_type in meters:
subset = train[train['meter'] == meter_type]
sns.kdeplot(np.log1p(subset["meter_reading"]),
label=meter_type, linewidth=2)
# set title, legends and labels
plt.ylabel("Density")
plt.xlabel("Meter_reading")
plt.legend(['electricity', 'chilled water', 'steam', 'hot water'])
plt.title("Density of Logartihm(Meter Reading + 1) Among Different Meters", size=14)
###Output
_____no_output_____
###Markdown
Again, for visualization purposes, we are looking at the distribution of the np.log1p(meter_reading) values. One obvious thing is that a significant number of 0 observations are coming from hot water, chilled water and steam consumption, meaning we have fewer missing values and 0 observations in the electricity usage. This picture shows that meter reading values have a different distribution in each meter category; electricity consumption in particular is different from the others. Thus, meter is a significant variable for determining the meter_reading values. It is already included in the train and test datasets as a determinant factor. Let's visualize the meter reading values excluding 0 measurements for the different meter categories, using the dataset created earlier.
###Code
# distribution of the meter reading in meters without zeros
figsize(12,10)
# plot meter_reading distribution for each meter
for meter_type in meters:
subset = positive_train[positive_train['meter'] == meter_type]
sns.kdeplot(subset["log_meter_reading"],
label=meter_type, linewidth=2)
# set title, legends and labels
plt.ylabel("Density")
plt.xlabel("Log_meter_reading")
plt.legend(['electricity', 'chilled water', 'steam', 'hot water'])
plt.title("Density of Positive Logarithm(Meter Reading) Among Different Meters", size=14)
###Output
_____no_output_____
###Markdown
After dropping the zero values and taking the logarithm of the meter reading values: Electricity shows a slightly different distribution than the other categories. Chilled water and steam meter_reading show similar distributions with close mean values. Hot water has the least number of data points and has more spikes than the other categories. 2.1.5. Average daily meter reading values over 2016 Back to top
###Code
# upsample hourly observations to daily and aggregate by meter category
train_daily_avg_by_meter = (train.
groupby('meter').
meter_reading.
resample('d').mean().
reset_index())
# assign meter values as column headers to create tidy-form dataframe
tidy_train_daily_avg = (train_daily_avg_by_meter.
pivot(index='timestamp',
columns='meter',
values='meter_reading').
reset_index())
# rename column header back to meter categories
tidy_train_daily_avg.rename(columns = {0: "electricity",
1: "chilled_water",
2: "steam",
3: "hot_water"},
inplace=True)
###Output
_____no_output_____
###Markdown
**By clicking on the legend in each category, you can observe meter categories individually.**
###Code
# create meter and color dictionary
meter_dict = {'electricity': 'darkblue',
'chilled_water':'orange',
'steam': 'green',
'hot_water': 'red'
}
# create figure object and plot each meter category
fig = go.Figure()
for key in meter_dict:
fig.add_trace(go.Line(
x=tidy_train_daily_avg.timestamp,
y=tidy_train_daily_avg[key],
mode='lines',
name=key,
line_color=meter_dict[key]))
# add title and show figure
fig.update_layout(
title_text='Average Daily Energy Consumption in kWh',
xaxis_rangeslider_visible=True)
fig.show()
###Output
_____no_output_____
###Markdown
Electricity observations spread between 0 and 220 kWh. For the first half of the year consumption does not exceed 160 kWh; for the second half consumption increases and ranges between 160 and 220 kWh. In general, electricity consumption shows an increasing trend in 2016. Chilled water consumption ranges between 130 and 2500 kWh. It shows a steady increase up to 1000 kWh until September 2016. Between September and October, there are spikes in the consumption causing the range to go up to 2500 kWh. Starting from November it shows a downward trend. Steam consumption has the highest and most volatile range: 0 - 80,000 kWh. There is no obvious trend in the steam consumption and steam is utilized only in the first half of the year. For the rest of the year, consumption decreases drastically, which may indicate either that steam is not used for the rest of the year or that the errors (0 measurements) are coming from the steam category in the second half of 2016. There is an interesting spike on Nov 9, 2016. Hot water is also variable, however the data is consistent in itself. Hot water consumption is higher in the winter season and shows the lowest results between May and July. The data range is 0-1200 kWh, with only one data point at 1200 on Dec 19, 2016. Excluding this, data is spread between 0-1000 kWh. The lower consumption in the summer season is a useful trend for our ML model to catch. 2.2. Meter reading VS weather_train data Back to top 2.2.1. Prepare & merge dataframes Back to top I am going to visualize the available weather data and meter_reading values per meter category to see how each observation of cloud, temperature, pressure, precipitation and wind affects meter reading. Moreover, I am going to look for reasonable explanations of the extremes in the chilled water, steam and hot water consumption in the weather data. To aggregate hourly observations, weather train data will be upsampled to daily averages. After that, the two datasets will be merged on the timestamp column since this is the shared column between the two. **Weather Dataframe**
###Code
# upsample weather_train dataframe to get daily means
weather_train_daily_avg = (weather_train.
resample('d').
mean())
# align weather train dataframe with the train_daily_avg dataframe
weather_train_daily_avg.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
**Merge weather and train_daily_avg datasets**
###Code
weather_vs_meter_reading = (train_daily_avg_by_meter.
merge(weather_train_daily_avg,
on='timestamp',
how='left'))
# rename meter column
weather_vs_meter_reading['meter'] = (weather_vs_meter_reading['meter'].
map({0: 'electricity',
1: 'chilled_water',
2: 'steam',
3: 'hot_water'}))
###Output
_____no_output_____
###Markdown
2.2.2. Average daily weather variable values over 2016 Back to top Spikes and anomalies detected in the section *2.1.5. Average daily meter reading values over 2016* will be investigated while plotting the average daily values of the weather variables.
###Code
# create weather variables and color dictionary
weather_dict = {"air_temperature": "red",
"cloud_coverage": "orange",
"dew_temperature": "coral",
"precip_depth_1_hr": "olive",
"sea_level_pressure": "teal",
"wind_direction": "purple",
"wind_speed": "navy"
}
###Output
_____no_output_____
###Markdown
**By clicking on the legend in each category, you can observe meter categories individually.**
###Code
# create plotly object and plot weather variables against dates
fig = go.Figure()
for key in weather_dict:
fig.add_trace(go
.Line(x=weather_vs_meter_reading['timestamp'],
y=weather_vs_meter_reading[key],
name=key,
line_color=weather_dict[key]))
fig.update_layout(title_text='Time Series of Weather Variables')
fig.show()
###Output
_____no_output_____
###Markdown
Recall Dec 19, 2016, where there is a peak in the hot_water consumption. On that date, the 4th lowest air_temperature is recorded, 1 degree Celsius, which might explain the spike in the hot_water consumption. Obviously, the air temperature trend shows that these measurements were recorded somewhere in the Northern hemisphere. Recall from the definition that dew temperature is usually lower than air_temperature. If you look at air_temperature and dew_temperature together, it proves this statement. In particular, dew_temperature is 5 degrees lower than the air_temperature and shares the same trend as air_temperature. Average daily cloud coverage values are between 1 and 5. Higher cloud coverage indicates cloudy days, thus in winter there are more cloudy days. If we look at precip_depth_1_hr, half of the year is rainy, and half of the year seems to be dry. There are some days where average daily precip_depth_1_hr goes beyond 5. Sea level pressure does not change much over the year. Wind_direction will make more sense if we look at it as directions (north, south and so on), so I will convert it to directions. Average wind speed is around 3.5 and varies throughout the year. 2.2.3. Pairplot of meter reading vs weather data Back to top Pairplots are very useful to look at the relationships between several continuous variables in one chart. The Plotly versions of pairplots are scatterplots or the scatter matrix. To understand the relationship between meter_reading and the weather-related variables, a scatter matrix of the energy consumption data and weather data will be visualized, and relationships & correlations will be searched for.
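As an aside before the pairplot, the wind-direction conversion mentioned above could look like the sketch below; the `to_compass` helper is made up for illustration and is not used elsewhere in this notebook.
###Code
# illustrative sketch: bin wind_direction degrees into 8 compass labels
def to_compass(degrees):
    labels = ['N', 'NE', 'E', 'SE', 'S', 'SW', 'W', 'NW']
    return labels[int(((degrees + 22.5) % 360) // 45)]

print([to_compass(d) for d in [0, 45, 100, 200, 350]])   # ['N', 'NE', 'E', 'S', 'N']
###Output
_____no_output_____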
###Code
# fig = ff.create_scatterplotmatrix(
# weather_vs_meter_reading[["meter_reading",
# "air_temperature",
# "cloud_coverage",
# "dew_temperature",
# "precip_depth_1_hr",
# "sea_level_pressure",
# "wind_direction",
# "wind_speed",
# "meter"]], diag='histogram', index='meter',
# height=1400, width=1400)
# fig.update_layout(
# title='Weather Varaibles and Meter Reading',
# dragmode='select'
#)
# fig.show()
###Output
_____no_output_____
###Markdown
**By clicking on the legend in the right hand side, you can observe meter categories individually.**
###Code
fig = px.scatter_matrix(weather_vs_meter_reading,
dimensions=["meter_reading",
"air_temperature",
"cloud_coverage",
"dew_temperature",
"precip_depth_1_hr",
"sea_level_pressure",
"wind_direction",
"wind_speed"],
color="meter")
fig.update_layout(
    title='Weather Variables and Meter Reading',
dragmode='select',
width=1400,
height=1400,
hovermode='closest')
fig.update_traces(diagonal_visible=True)
fig.show()
###Output
_____no_output_____
###Markdown
**Electricity:**- As the air and dew temperature go up, electricity consumption increases.- Cloud coverage, sea level pressure, and precip_depth_1_hr show a positive trend with electricity consumption, although their correlations are not as strong as those of the temperature variables.- Wind direction and wind speed have almost no effect on electricity consumption.**Chilled water:**- As the air and dew temperature rise, chilled water consumption increases.- The other weather variables show a slightly weaker positive trend with chilled water consumption compared to the temperature variables.**Steam:**- Although the data range is wider and higher than that of the weather variables, air temperature, dew temperature and cloud coverage show a positive trend with steam consumption.- Precip_depth_1_hr has a small effect (with a positive trend) on steam consumption.- The rest of the weather variables do not significantly impact steam consumption.**Hot water:**- The other weather variables show a positive trend with hot water consumption.- Air and dew temperature show a negative trend with hot water consumption.One other important observation is that air and dew temperature are highly collinear (a quick numeric check follows below). All of the weather variables, whether their effect is slight or strong, have an effect on energy consumption. 2.3. Meter reading VS building data categorical features 2.3.1. Prepare & merge dataframes Back to top Meter reading values will be visualized against the categorical variables (primary_use and site_id) of the building data. To do that, the train data will first be grouped by building id and meter category and aggregated by mean.
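As a quick numeric check of the collinearity noted above, the correlations can be computed directly (a minimal sketch):
###Code
# sketch: pairwise correlations between meter_reading and the weather variables
weather_cols = ["meter_reading", "air_temperature", "dew_temperature", "cloud_coverage",
                "precip_depth_1_hr", "sea_level_pressure", "wind_direction", "wind_speed"]
weather_vs_meter_reading[weather_cols].corr().round(2)
###Output
_____no_output_____
###Markdown
Now, on to grouping the train data per building and meter category.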
###Code
# group train dataset per building and meter category
train_by_building = (train.
groupby(["building_id", "meter"]).
meter_reading.mean().
reset_index())
# merge grouped train dataframe with building dataset
building_w_meter_reading = (train_by_building.
merge(building,
on='building_id',
how='left'))
###Output
_____no_output_____
###Markdown
2.3.2. Meter reading distribution among primary uses Back to top Recall from the profiling report that 80% of the `primary_use` values consist of the top 5 values. I am going to map the top 5 primary_use values into a new column first, then look at the distribution of meter reading among the different primary uses.
###Code
# add log_meter_reading to visualize meter_reading distribution
building_w_meter_reading['log_meter_reading'] = np.log1p(building_w_meter_reading['meter_reading'])
# map primary use column
building_w_meter_reading['primary_use_mapped'] = (building_w_meter_reading['primary_use'].
map({'Office': 'Office',
'Education': 'Education',
'Entertainment/public assembly':'Entertainment/public',
'Lodging/residential': 'Residential',
'Public services': 'Public services'
}))
# replace the rest with Other
building_w_meter_reading['primary_use_mapped'].replace(np.nan,
'Other',
regex=True,
inplace=True)
building_w_meter_reading['meter'] = (building_w_meter_reading['meter'].
map({0: 'electricity',
1: 'chilled_water',
2: 'steam',
3: 'hot_water'
}))
# split building_w_meter_reading dataset per primary use category
education = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Education'])
office = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Office'])
entertainment_public = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Entertainment/public'])
residential = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Residential'])
public_services = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Public services'])
other = (building_w_meter_reading[building_w_meter_reading[
'primary_use_mapped'] == 'Other'])
# create distplot parameters as lists
hist_data = [education['log_meter_reading'],
office['log_meter_reading'],
entertainment_public['log_meter_reading'],
residential['log_meter_reading'],
public_services['log_meter_reading'],
other['log_meter_reading']]
group_labels = ['education', 'office', 'entertainment_public',
'residential', 'public_services', 'other' ]
colors = ['#333F44', '#37AA9C', '#94F3E4', '#66CCFF', '#2C89AB', '#0324A9']
# create KDE plot of log_meter_reading
fig = ff.create_distplot(hist_data, group_labels,
show_hist=False, colors=colors, show_rug=True)
fig.update_layout(title_text='Distribution of Logarithm Meter Reading among Primary Use')
fig.show()
###Output
_____no_output_____
###Markdown
All primary_use categories show a uni-modal distribution. We have fewer data points where log_meter_reading values are greater than 10 in all primary use categories. Median log_meter_reading values in education, residential and office are between 4.5 and 4.9, whereas the median values of entertainment, public services and other are between 3.9 and 4.3 (a quick check of these medians follows below).**Education:**- This primary use category has the most datapoints (38%) in the primary_use column.- Meter reading values are normally distributed between 0 and 10.- There is one outlier in this category whose log_meter_reading value is greater than 15.**Residential:**- The residential category makes up 10% of all primary_use values.- Meter reading values are spread between 1.5 and 8.- This primary use category has the narrowest data range.**Office**:- The office category has the second most datapoints (20%).- Log meter reading values are spread between 0.7 and 8.5.- There are several outliers greater than 10.**Entertainment/public:**- The entertainment/public category makes up 13% of all primary_use values.- Meter reading values are spread between 0.1 and 9.6.- The entertainment/public category has one outlier point greater than 10.**Public services:**- This category is distributed between 0.7 and 7.5, with 3 outlier points.**Other:**- Log meter reading values are spread between 0.2 and 8.2.- It has one outlier greater than 10.Although some of the primary_use meter reading distributions resemble each other, primary_use is still useful for determining meter_reading values, since each category covers a different range of log_meter_reading values. 2.3.3. Meter reading distribution among site id as violinplot Back to top To understand how meter reading values are distributed in each site_id, let's first understand the distribution of site_ids in the building dataset.
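As a quick sketch, the medians quoted above can be checked directly from the merged dataframe:
###Code
# sketch: median log_meter_reading per primary use category
(building_w_meter_reading
 .groupby('primary_use_mapped')['log_meter_reading']
 .median()
 .sort_values(ascending=False))
###Output
_____no_output_____
###Markdown
Now, back to the distribution of site_ids.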
###Code
# histogram of site_ids
fig = px.histogram(building, x="site_id")
fig.update_layout(title_text='Distribution Site IDs')
fig.show()
###Output
_____no_output_____
###Markdown
Most of the training set examples are coming from site ids 0, 2, 3, 4, 5, 9, 13, 14 and 15. Violin plots are one of [my favorite](https://towardsdatascience.com/recipes-for-the-visualizations-of-data-distributions-a1527a0faf77) data exploration tools, conveying the summary statistics and the distribution at the same time. They are a robust visualization when it comes to looking at a distribution among categories.
###Code
# create site id list
site_ids = building_w_meter_reading.site_id.unique().tolist()
# create plotly object and visualize the distribution
fig = go.Figure()
# add a violin plot for each site_id
for site_id in site_ids:
fig.add_trace(go.Violin(y=building_w_meter_reading
[building_w_meter_reading['site_id'] == site_id]
['log_meter_reading'],
name=site_id,
box_visible=True))
# set title and show the object
fig.update_layout(title_text='Distribution of Logarithm Meter Reading among Site ID')
fig.show()
###Output
_____no_output_____
###Markdown
* Site ids 0, 1, 2, 3, 4, 5, 8, 9, 14 and 15 have similar meter reading distributions.* Site ids 6 and 10 share almost the same meter_reading distribution and summary statistics.* Site id 13 has the widest meter reading value range, which goes beyond a log_meter_reading value of 10.* Site id 11 has the narrowest meter reading value range, centered around 5.* Site id 13 also shows the widest meter_reading distribution.As site_ids are highly correlated with building_ids, I might want to keep only one of them, which I will decide at the end of this notebook. 2.4. Meter reading VS building data continuous features as scatterplots Back to top The logarithm of the meter reading values in each meter category will be visualized against the continuous variables (square_feet, year_built and floor_count) of the building dataset. 2.4.1. Scatter plot of meter reading VS square feet Back to top
###Code
fig = px.scatter(building_w_meter_reading, x="square_feet", y="log_meter_reading",
color="meter", hover_data=['meter_reading'])
fig.update_layout(title_text='Meter Reading VS Square Feet Among Different Meters')
fig.show()
###Output
_____no_output_____
###Markdown
There is a clear connection between square feet and meter_reading values in all categories, which intuition also suggests: as the size of the building increases, it consumes more in each category (a short correlation check follows below). 2.4.2. Scatter plot of meter reading VS age of the building Back to top I will add one more column, `age`, derived from the `year_built` column, and use that instead.
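A minimal sketch to quantify this trend per meter category is a Spearman correlation between square_feet and log_meter_reading:
###Code
# sketch: Spearman correlation of square_feet vs. log_meter_reading per meter category
(building_w_meter_reading
 .groupby('meter')
 .apply(lambda g: g['square_feet'].corr(g['log_meter_reading'], method='spearman'))
 .round(2))
###Output
_____no_output_____
###Markdown
Next, the `age` column.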
###Code
currentYear = datetime.now().year
building_w_meter_reading['age'] = currentYear - building_w_meter_reading['year_built']
fig = px.scatter(building_w_meter_reading, x="age", y="log_meter_reading",
color="meter", hover_data=['meter_reading'])
fig.update_layout(title_text='Meter Reading VS Age of the Building Among Different Meters')
fig.show()
###Output
_____no_output_____
###Markdown
There is no obvious relationship between the age of the building and the log_meter_reading values of electricity and steam. There is an obvious relationship in the hot_water category: as the age of the building increases, hot water consumption increases. Another obvious relationship is that as the building age increases, chilled water consumption decreases. Maybe older buildings don't have chilled water supply infrastructure. 2.4.3. Scatter plot of meter reading VS floor count Back to top
###Code
fig = px.scatter(building_w_meter_reading, x="floor_count", y="log_meter_reading",
color="meter", hover_data=['meter_reading'])
fig.update_layout(title_text='Meter Reading VS Floor Count Among Different Meters')
fig.show()
###Output
_____no_output_____ |
docs/examples/fetchmgs.ipynb | ###Markdown
Fetch marker genes from a genome This example is adapted from the [fetchMGs](https://github.com/motu-tool/fetchMGs/blob/master/fetchMGs.pl) Perl script used to extract the 40 single-copy universal marker genes from a genome, annotating proteins to find the highest scoring proteins mapping to each of these marker genes.In this notebook, we show how to reproduce this kind of analysis, using `pyhmmer` instead of HMMER3 to perform the alignments and extract the bit scores.References * [Ciccarelli FD, Doerks T, von Mering C, Creevey CJ, Snel B, Bork P. *Toward automatic reconstruction of a highly resolved tree of life.* Science. 2006 Mar 3;311(5765):1283-7. Erratum in: Science. 2006 May 5;312(5774):697.](https://pubmed.ncbi.nlm.nih.gov/16513982/)* [Sorek R, Zhu Y, Creevey CJ, Francino MP, Bork P, Rubin EM. *Genome-wide experimental determination of barriers to horizontal gene transfer.* Science. 2007 Nov 30;318(5855):1449-52.](https://pubmed.ncbi.nlm.nih.gov/17947550/)
###Code
import pyhmmer
pyhmmer.__version__
###Output
_____no_output_____
###Markdown
Getting the cutoffsEach HMM has been calibrated and contains custom cutoffs, but they are not in Pfam format, so we need to use them externally. Let's start by downloading the file with these cutoffs from the GitHub repository of `fetchMG`:
###Code
import csv
import io
import urllib.request
url = "https://github.com/motu-tool/fetchMGs/raw/master/lib/MG_BitScoreCutoffs.allhits.txt"
cutoffs = {}
with urllib.request.urlopen(url) as f:
for line in csv.reader(io.TextIOWrapper(f), dialect="excel-tab"):
if not line[0].startswith("#"):
cutoffs[line[0]] = float(line[1])
###Output
_____no_output_____
###Markdown
Downloading the HMMsSince the HMMs for the universal marker genes are also hosted on the `fetchMG` GitHub repository, we can download them from there too. `pyhmmer.plan7.HMMFile` supports reading from a file-handle, so we can parse each HMM as we download it.
###Code
import urllib.request
import pyhmmer.plan7
baseurl = "https://github.com/motu-tool/fetchMGs/raw/master/lib/{}.hmm"
hmms = []
for cog in cutoffs:
with urllib.request.urlopen(baseurl.format(cog)) as f:
hmm = next(pyhmmer.plan7.HMMFile(f))
hmms.append(hmm)
###Output
_____no_output_____
###Markdown
Loading the sequencesNow we need protein sequences to annotate. Let's use our set of protein sequences identified in the chromosome of [Anaerococcus provencensis](https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?id=938293).
###Code
import pyhmmer.easel
with pyhmmer.easel.SequenceFile("data/seqs/938293.PRJEB85.HG003687.faa") as seqs_file:
seqs_file.set_digital(pyhmmer.easel.Alphabet.amino())
proteins = list(seqs_file)
###Output
_____no_output_____
###Markdown
Running the search pipelineWith the proteins loaded, let's make a `namedtuple` that will contain the data we need to extract for each `hmmsearch` hit: the name of the query gene, the name of the marker gene which produced a hit, and the bitscore for the alignment.
###Code
import collections
Result = collections.namedtuple("Result", ["query", "cog", "bitscore"])
###Output
_____no_output_____
###Markdown
Now we can run the search pipeline: we annotate all the proteins with all the HMMs using the `pyhmmer.hmmsearch` function. Each HMM gives us a `TopHits` instance to process: we extract only the hits that are above the bitscore cutoff for this particular HMM.
###Code
results = []
for top_hits in pyhmmer.hmmsearch(hmms, proteins):
for hit in top_hits:
cog = hit.best_domain.alignment.hmm_name.decode()
if hit.score > cutoffs[cog]:
results.append(Result(hit.name.decode(), cog, hit.score))
###Output
_____no_output_____
###Markdown
Filtering resultsNow that we have all the hits that pass the bitscore thresholds, we can create a dictionary that maps each query protein to its highest scoring bitscore alignment, like in the original script. If a protein has alignments to two different marker genes with the same score, that query is ignored.
###Code
best_results = {}
keep_query = set()
for result in results:
if result.query in best_results:
previous_bitscore = best_results[result.query].bitscore
if result.bitscore > previous_bitscore:
best_results[result.query] = result
keep_query.add(result.query)
elif result.bitscore == previous_bitscore:
            if best_results[result.query].cog != result.cog:
keep_query.remove(result.query)
else:
best_results[result.query] = result
keep_query.add(result.query)
###Output
_____no_output_____
###Markdown
Now we can get our final filtered results:
###Code
filtered_results = [best_results[k] for k in sorted(best_results) if k in keep_query]
###Output
_____no_output_____
###Markdown
We can print them to see which gene maps to which marker gene, with the score for each alignment. Let's look at the top 10 results:
###Code
for result in filtered_results[:10]:
print(result.query, "{:.1f}".format(result.bitscore), result.cog, sep="\t")
###Output
_____no_output_____ |
notebooks/DataAnalysis.ipynb | ###Markdown
**Stackoverflow keyword extraction** **Description** Stackoverflow is the largest online community of developers, where they can ask questions, learn and share their programming knowledge. It is a platform that every programmer uses in their workflow. With over 4 million registered users, the platform handles the exchange of over 10 million questions a year. To make things easy to navigate, the platform automatically tags each question based on the topic. Some of the most used tags are: java, python, javascript, etc. **Problem statement** Analyse the information in the question title and body to assign tags to new questions in the future.**Business value** It is the job of the platform to route the right question to the right users. E.g. you don't want to be sending a java-related question to a javascript developer. That would be a huge miss and would affect the user experience of the platform. Stackoverflow can make use of topic information about a question to forward it to the right subject matter expert. This is why the problem of predicting tags is of high value for ensuring a good user experience. 1. Pre-processing features (Question Title and Body)
###Code
%cd ..
#import some libraries
import pickle
import pandas as pd
from datatable import by, dt, f
import datatable, os, pandas as pd, numpy as np
from functools import partial
import matplotlib.pyplot as plt
import seaborn as sns
## dask dataframe parallel processing
import dask.dataframe as dd
from dask.distributed import Client
from dask.multiprocessing import get
## Text processing
import re
from nltk.corpus import stopwords
from nltk.stem.snowball import SnowballStemmer
from nltk.tokenize import word_tokenize # to break sentences into words
## split data
from dask_ml.model_selection import train_test_split
## sklearn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import chi2
## Base model without any hyperparameter tuning
from sklearn.multioutput import MultiOutputClassifier, ClassifierChain
from sklearn.linear_model import SGDClassifier
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
import warnings
warnings.filterwarnings('ignore')
from src.evaluation import print_eval
data_r = datatable.fread("./input/data_no_dupes_top50.csv")
data = data_r[:, f[:].remove([f["#tags"], f["count_duples"], f[ "Tags"]])]
data.head(2)
###Output
_____no_output_____
###Markdown
1.1 Plot class proportions and imbalances
###Code
# requires a datatable object
def plot_class_ratio_balance(_):
plt.figure(figsize=(17, 8))
plt.subplot(212)
_res = dt.rbind(_[:, dt.sum(f[:])], _[:, dt.count(f[:])])
_res_pd = _res.to_pandas().T.reset_index().rename(columns = {0: "occur", 1: "total", "index": "tag"})
_res_pd = _res_pd.assign(no_occur = _res_pd.total - _res_pd.occur, prop = _res_pd.occur / _res_pd.total)
_res_pd_melt = pd.melt(_res_pd, id_vars = "tag", value_vars=["occur", "no_occur"])
ax = sns.barplot(data = _res_pd_melt.assign(value = lambda r: r.value / _.shape[0]), x = "tag", y = "value", hue="variable")
plt.xticks(rotation=60)
__ = ax.get_yticks()
ax.set_yticklabels([f"{i:.0%}" for i in __])
leg = plt.legend()
leg.remove()
plt.subplot(211, sharex = ax)
plt.bar(_.sum().names, _.sum().to_numpy()[0])
plt.xticks(rotation=50)
# set the spacing between subplots
plt.subplots_adjust(left=0.1,
bottom=0.1,
right=0.9,
top=0.9,
wspace=0.4,
hspace=0.4)
plt.title("Class ratios and Class imbalance")
plt.show()
# X = data[:, ["Title", "Body"]]
# Y = data[:, data.names[2:]]
plot_class_ratio_balance(data[:, data.names[2:]])
###Output
/tmp/ipykernel_36157/3335243169.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_yticklabels([f"{i:.0%}" for i in __])
###Markdown
1.2 Sample 500k questions from overall data__The best way to sample (theoretically)__Sampling plays a key role in practical machine learning and data mining. * Efficient data processing for training models* Generation of training, validation and test sets The stratified version of sampling is typically used in classification tasks.* Class proportions remain the same in the sampled and raw data* It has been found to improve standard CV both in terms of bias and variance http://videolectures.net/ecmlpkdd2011_tsoumakas_stratification/?q=stratification%20multi%20label __BUT computation constraints__ force us to sample randomly, because full stratified sampling has exponential complexity (found after trial and error). __For simplification, we will sample 500k questions from the overall 2.3M questions__ (a short sketch of first-order stratification follows below).
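For reference, a minimal sketch of first-order stratification on a single tag column (a hypothetical helper; below we fall back to plain random sampling) could look like this:
###Code
# hypothetical sketch: first-order stratified sampling on one tag column, which
# preserves that tag's positive/negative ratio in the sample
def stratified_sample_on_tag(df, tag_col, frac, seed=42):
    return (df.groupby(tag_col, group_keys=False)
              .apply(lambda g: g.sample(frac=frac, random_state=seed)))
# example (assuming a tag column named 'python' exists among the label columns):
# sample_df = stratified_sample_on_tag(data.to_pandas(), 'python', frac=0.18)
###Output
_____no_output_____
###Markdown
With that noted, let's check what fraction of the data 500k rows corresponds to.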
###Code
# Number of datapoints we want
500000 / data.shape[0]
###Output
_____no_output_____
###Markdown
We only need 17 percent, or __500k datapoints__, of the overall data to proceed further. The reason is that in the next steps we are going to featurize the text data in X, which will drastically increase the complexity of the analysis. Plus, we have a lot of data, so we can afford to sample down to perform the analysis.
###Code
# We will make use of dask to perform sampling
# divide the data into partitions and take around 18 percent of the data from each partition
data_pd = data.to_pandas().reset_index() # just to make sure of order
data_pd.iloc[:, 3:] = data_pd.iloc[:, 3:].astype(int)
data_pd.head(1)
###Output
_____no_output_____
###Markdown
We only perform order 1 level stratification, to reduce complexity. `The above operation took 90 minutes to complete.`
###Code
# sample data from df
def sampler(df, p):
return df.sample(frac = p)
# partition the data into 12 parts
ddata_pd = dd.from_pandas(data_pd, npartitions=12)
# pick 18% of data from each partition randomly
res = ddata_pd.map_partitions(lambda df, p: sampler(df, p), p = 0.18).compute(scheduler="threads")
res.shape
###Output
_____no_output_____
###Markdown
We were able to sample roughly 500k points from the original data in 9 seconds
###Code
plot_class_ratio_balance(datatable.Frame(res.loc[:, res.columns[3:]]))
###Output
/tmp/ipykernel_36157/3335243169.py:14: UserWarning: FixedFormatter should only be used together with FixedLocator
ax.set_yticklabels([f"{i:.0%}" for i in __])
###Markdown
The good thing is that the class ratios are representative of the original dataset
###Code
# Save the data
res.to_csv("./input/500k_data_no_dupes_top50.csv")
# upload to blob storage
filename = "500k_data_no_dupes_top50.csv"
%run ./src/blob_handle.py --operation "upload" --filename $filename
# load the data
res = datatable.fread("./input/500k_data_no_dupes_top50.csv").to_pandas()
###Output
_____no_output_____
###Markdown
The 500k data is around __700MB__ in size 1.3 Divide data into X and Y
###Code
res.drop(columns = ["index"], inplace=True)
res.reset_index(drop=True, inplace = True)
X, Y = res.iloc[:, :2], res.iloc[:, 2:]
X.shape, Y.shape
###Output
_____no_output_____
###Markdown
1.4 Cleaning the input data XNow that we have sampled a manageable proportion of data from the original set, we are going to perform operations to clean the input data matrix X. Since we only have text features, we will perform the following set of operations on each row. Again, we will make use of the dask library to speed things up a notch.https://medium.com/mindorks/speeding-up-text-pre-processing-using-dask-45cc3ede1366 Steps - - Sample ---- _Done_- Remove code snippets- Remove special characters from the remaining text- Remove stop words- Remove HTML tags- Convert to lowercase- Stem the words using the snowball stemmer
###Code
def striphtml(data):
# find the html tags and remove them
patt = re.compile('<.*?>')
cleantxt = re.sub(patt, ' ', str(data))
return cleantxt
def clean_row(row):
stop_words = set(stopwords.words('english'))
stemmer = SnowballStemmer("english")
# questions_with_code = 0
# len_before = 0
len_after = 0
is_code = 0
title, question = row[0], row[1]
if "<code>" in question:
# questions_with_code += 1
is_code = 1
x = len(question) + len(title)
# len_before += x
# Find the code piece of question
code = str(re.findall(r'<code>(.*?)</code>', question, flags=re.DOTALL))
# remove the code
question = re.sub('<code>(.*?)</code>', '', question, flags=re.MULTILINE|re.DOTALL)
title=title.encode('utf-8')
# combine question and title, remove special characters, convert to lowercase
queti = str(title) + str(question)
queti = re.sub(r'[^A-Za-z]+',' ',queti).lower()
# get words
words = word_tokenize(str(queti))
    # Remove all single-letter words and stopwords from the question, except for the letter 'c'
queti_stop = ' '.join(str(stemmer.stem(j)) for j in words if j not in stop_words and (len(j)!=1 or j=='c'))
len_after += len(queti_stop)
# question_title(clean), len(cleaned), len_before, if_has_code, code
return queti_stop, len(queti_stop), x, is_code, code
# run the calculation across
dX = dd.from_pandas(X, npartitions=12)
Xres = dX.map_partitions(lambda df: df.apply(lambda row: clean_row(row), axis = 1, result_type="expand")).compute(scheduler="threads")
###Output
_____no_output_____
###Markdown
Time taken for above transformation is `10 minutes`
###Code
Xres.columns = ['queti_stop', 'len_queti_stop', 'len_pre', 'is_code', 'code']
Xres.head(2)
# Save the data
Xres.to_csv("./input/500k_X_clean_top50.csv")
Y.to_csv("./input/500k_Y_clean_top50.csv")
# read data
Xres = pd.read_csv("./input/500k_X_clean_top50.csv")
Y = pd.read_csv("./input/500k_Y_clean_top50.csv")
###Output
_____no_output_____
###Markdown
1.5 Create train and test setsWe split the data 80:20, with 80 percent of the data for training/validation and the rest for testing
###Code
%%time
Xtrain, Xtest, Ytrain, Ytest = train_test_split(Xres, Y, test_size=0.20, random_state=42)
Ytrain = Ytrain.iloc[:, 1:]
Ytest = Ytest.iloc[:, 1:]
Xtrain.shape, Ytrain.shape, Xtest.shape, Ytest.shape
###Output
CPU times: user 162 ms, sys: 7.89 ms, total: 170 ms
Wall time: 169 ms
###Markdown
Now the data has been split into train and test splits. We are ready to create features from the data X 1.6 Featurizing the text dataWe will be using TF-IDF based featurization of the text data, followed by a feature selection phase to select only the relevant features for the analysis
###Code
vectorizer = TfidfVectorizer(min_df=0.00009, ngram_range=(1, 2), tokenizer = lambda x: x.split(), max_features=200000, \
smooth_idf = True, sublinear_tf = False, norm = "l2")
## get corpus
xtrain_text = Xtrain["queti_stop"].tolist()
xtest_text = Xtest["queti_stop"].tolist()
## apply Tf-Idf
xtrain_text_v = vectorizer.fit_transform(xtrain_text)
xtest_text_v = vectorizer.transform(xtest_text)
del xtrain_text, xtest_text
print(type(xtrain_text_v))
xtrain_text_v.shape, xtest_text_v.shape
###Output
<class 'scipy.sparse.csr.csr_matrix'>
###Markdown
`Above`, the result is stored in sparse matrix format. After running TF-IDF, each line of text is transformed into a sparse vector with one dimension per retained term (about 84k here)
###Code
try:
xtrain_text_v.todense()
except Exception as e:
print(e)
###Output
Unable to allocate 261. GiB for an array with shape (415132, 84363) and data type float64
###Markdown
> The data is so big that it is not even possible to convert it back to dense form
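A minimal sketch of why the sparse form still fits in memory: the CSR matrix only stores the non-zero entries.
###Code
# sketch: approximate memory used by the CSR matrix (data + indices + indptr arrays)
def csr_memory_mb(m):
    return (m.data.nbytes + m.indices.nbytes + m.indptr.nbytes) / 1024 ** 2
csr_memory_mb(xtrain_text_v)
###Output
_____no_output_____
###Markdown
Next, we grab the feature names produced by the vectorizer.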
###Code
text_features = vectorizer.get_feature_names_out()
text_features[:10]
###Output
_____no_output_____
###Markdown
1.7 Feature selectionThe TF-IDF step returned a massive feature vector (tens of thousands of dimensions per row). Now, in order to achieve good results and prevent overfitting, we need to do some kind of feature selection to keep only the features that are actually useful for predicting the targets. To do feature selection, we have two options - 1. Mutual information test 2. Chi-square test We will go ahead and use the chi-square test of independence, since it is faster and performs well if the data size is reasonable.
###Code
Ytrain = Ytrain.astype(int)
Ytest = Ytest.astype(int)
xtrain_text_v, Ytrain.shape
# iterate through each of the classes
res_fs = []
for ii, class_col in enumerate(Ytrain):
print(f"{ii+1} Running for class - ", class_col + " "*20, end = "\r")
_ = [class_col]
chi, p = chi2(xtrain_text_v, Ytrain[class_col])
_.extend(p)
res_fs.append(_)
chi_df = pd.DataFrame(res_fs, columns=(["tags"] + text_features.tolist()))
chi_df.head(2)
sns.set_style("whitegrid")
sns.ecdfplot((chi_df.iloc[:, 1:] < 0.05).sum())
plt.title("Importance of a feature to different targets")
plt.xlim(-1, 10)
plt.xticks(range(0, 11))
plt.yticks([i/10 for i in range(0, 11)])
plt.show()
_ = (chi_df.iloc[:, 1:] <= 0.05).sum().replace([0], np.nan).dropna().sort_values()
chi_df.shape[1], "now -- >" ,_.shape[0]
###Output
_____no_output_____
###Markdown
`Using the above logic`, we keep a feature if it is significant (p <= 0.05) for at least one tag. Later we can raise this threshold (e.g., require a feature to be useful for 3 or more tags) to see how it affects performance (a short sketch follows below).
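A minimal sketch of the stricter variant (requiring significance for several tags) is shown here:
###Code
# sketch: keep only features with p <= 0.05 for at least `min_tags` tags
min_tags = 3
sig_counts = (chi_df.iloc[:, 1:] <= 0.05).sum()
stricter_features = sig_counts[sig_counts >= min_tags].index.tolist()
len(stricter_features)
###Output
_____no_output_____
###Markdown
For now, we proceed with the looser selection.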
###Code
_selected_features = _.index.tolist()
_selected_features[:4]
# store index of text features for easy access
text_features = text_features.tolist()
text_features_index = {}
for i, e in enumerate(text_features):
text_features_index[e] = i
## convert the selected features to dense format for train and test sets
from tqdm.notebook import tqdm
_selected_features_index = []
for s in (_selected_features):
_selected_features_index.append(text_features_index[s])
_selected_features_index[:4]
xtrain_text_v_10k = xtrain_text_v[:, _selected_features_index]
xtest_text_v_10k = xtest_text_v[:, _selected_features_index]
xtrain_text_v_10k
# save data
pickle.dump(_selected_features, open('./input/_selected_features.pickle', 'wb'))
pickle.dump(xtrain_text_v_10k, open('./input/xtrain_text_v_10k.pickle', 'wb'))
pickle.dump(xtest_text_v_10k, open('./input/xtest_text_v_10k.pickle', 'wb'))
Ytrain.to_csv('./input/Ytrain.csv')
Ytest.to_csv('./input/Ytest.csv')
# read saved data
import pickle
_selected_features = pickle.load(open('./input/_selected_features.pickle', 'rb'))
xtrain_text_v_10k = pickle.load(open('./input/xtrain_text_v_10k.pickle', 'rb'))
xtest_text_v_10k = pickle.load(open('./input/xtest_text_v_10k.pickle', 'rb'))
Ytrain = pd.read_csv('./input/Ytrain.csv')
Ytest = pd.read_csv('./input/Ytest.csv')
#Ytrain = Ytrain.iloc[:, 2:]
#Ytest = Ytest.iloc[:, 2:]
Ytrain.shape, Ytest.shape
###Output
_____no_output_____
###Markdown
2. Getting predictionsAs a first step towards model building, we will train a basic machine learning model, SGD, and we will run it with two different loss functions, log loss and hinge loss, to see which performs better on the data.We want to test **if**: 1. Using **dimensionality reduction** helps us to improve the performance of the baseline model? 2. If yes, then which dimensionality reduction method should we be using: PCA, UMAP, or LDA? Another focus of this section is to create a basic model and an evaluation methodology to test: 1. **How good** is the model overall? 2. What is the performance for **different tags**? 3. What are the most **important features** for each of the classes (a helper sketch follows below)?
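For question 3, a hypothetical helper like the one below (a sketch, usable once the classifiers are fitted) can pull the highest-weighted features per tag out of the per-tag linear models:
###Code
# hypothetical sketch: top-k features per tag from a fitted MultiOutputClassifier
# wrapping linear models (uses the estimators_ and coef_ attributes)
import numpy as np
def top_features_per_tag(multi_clf, feature_names, tag_names, k=10):
    top = {}
    for tag, est in zip(tag_names, multi_clf.estimators_):
        coefs = est.coef_.ravel()
        idx = np.argsort(coefs)[-k:][::-1]
        top[tag] = [feature_names[i] for i in idx]
    return top
# example (after fitting):
# top_features_per_tag(classifier_nb, _selected_features, Ytrain.columns.tolist())
###Output
_____no_output_____
###Markdown
With that helper sketched, let's train the baseline classifiers.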
###Code
## Base model without any hyperparameter tuning
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
# train 50 independent classifiers
# balanced, with more weight given to minority classes
classifier_b = MultiOutputClassifier(SGDClassifier(loss='log', alpha=0.0001, penalty='l1', class_weight='balanced'), n_jobs=-1)
# no balancing done using weights
classifier_nb = MultiOutputClassifier(SGDClassifier(loss='log', alpha=0.0001, penalty='l1'), n_jobs=-1)
# using a custom balance scheme to see where it fits
# class weight is a possible hyper parameter
#classifier_cb = MultiOutputClassifier(SGDClassifier(loss='log', alpha=0.0001, penalty='l1', class_weight={True: 0.80, False: 0.20}), n_jobs=-1)
classifier_b.fit(xtrain_text_v_10k, Ytrain)
classifier_nb.fit(xtrain_text_v_10k, Ytrain)
#classifier_cb.fit(xtrain_text_v_10k, Ytrain)
comp_res = []
# save the models to save time in future
pickle.dump(classifier_b, open("./model/classifier_b.pickle", "wb"))
pickle.dump(classifier_nb, open("./model/classifier_nb.pickle", "wb"))
#pickle.dump(classifier_cb, open("./model/classifier_cb.pickle", "wb"))
# read the saved models
classifier_b = pickle.load(open("./model/classifier_b.pickle", "rb"))
classifier_nb = pickle.load(open("./model/classifier_nb.pickle", "rb"))
#classifier_cb = pickle.load(open("./model/classifier_cb.pickle", "rb"))
###Output
_____no_output_____
###Markdown
2.1 Logistic Regression, no class weightsFor this model, we just pass the data without accounting for the class imbalance
###Code
print_eval(classifier_nb, xtrain_text_v_10k, Ytrain, xtest_text_v_10k, Ytest)
comp_res.append(("Log-Loss-not-balanced", 0.80, 0.34, 0.48))
###Output
_____no_output_____
###Markdown
Observations - 1. All models on average suffer from high precision, low recall - meaning the models make correct predictions but do not take risks on hard predictions (a possible threshold tweak is sketched below) 2. micro avg P, R, F1 -- 0.809613, 0.348177, 0.486942 2.2 Logistic Regression, balanced class weightsFor this model, we pass the same data but account for the class imbalance
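One hypothetical way to make the unbalanced model take more risk (a sketch, not tuned here) is to lower the positive-class decision threshold of the per-tag logistic models:
###Code
# hypothetical sketch: predict with a custom probability threshold instead of 0.5
import numpy as np
def predict_with_threshold(multi_clf, X, threshold=0.3):
    # predict_proba returns one (n_samples, 2) array per tag
    probas = multi_clf.predict_proba(X)
    return np.column_stack([p[:, 1] >= threshold for p in probas]).astype(int)
# example:
# y_pred_low_thr = predict_with_threshold(classifier_nb, xtest_text_v_10k, 0.3)
###Output
_____no_output_____
###Markdown
Now, the balanced logistic regression results.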
###Code
print_eval(classifier_b, xtrain_text_v_10k, Ytrain, xtest_text_v_10k, Ytest)
###Output
**************************************************
Hamming loss: 0.07403626763277577
Micro Precision: 0.2624251799044993
Micro Recall: 0.8525065543815047
Micro F1 score: 0.40131458998536756
**************************************************
###Markdown
`Recall has improved substantially`
###Code
comp_res.append(("Log-Loss-balanced", 0.26, 0.85, 0.40))
###Output
_____no_output_____
###Markdown
After using the balanced approach, recall improved drastically, but both precision and the F1 score dropped 2.3 SVM Classifier, balanced class weights
###Code
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline
classifier_b_svm = MultiOutputClassifier(LinearSVC(penalty='l1', class_weight='balanced'), n_jobs=-1)
classifier_b_svm.fit(xtrain_text_v_10k, Ytrain)
# save the models to save time in future
pickle.dump(classifier_b_svm, open("./model/classifier_b_svm.pickle", "wb"))
# load model
classifier_b_svm = pickle.load(open("./model/classifier_b_svm.pickle", "rb"))
print_eval(classifier_b_svm, xtrain_text_v_10k, Ytrain, xtest_text_v_10k, Ytest)
comp_res.append(("SVM-balanced", 0.46, 0.83, 0.59))
###Output
_____no_output_____
###Markdown
By using Linear SVM, we see a significant boost in the F1 score as well as the precision 2.4 SVM Classifier Chains, balanced class weights
###Code
## Implement classifier chain for SVM
from sklearn.multioutput import ClassifierChain
from sklearn.svm import LinearSVC
classifier_b_svm_chains = ClassifierChain(LinearSVC(penalty='l2', class_weight='balanced'))
classifier_b_svm_chains.fit(xtrain_text_v_10k, Ytrain)
# save the models to save time in future
pickle.dump(classifier_b_svm_chains, open("./model/classifier_b_svm_chains.pickle", "wb"))
from sklearn import metrics
y_true = Ytest
y_pred = classifier_b_svm_chains.predict(xtest_text_v_10k)
print("*"*50)
print("Hamming loss: ", metrics.hamming_loss(y_true, y_pred))
# print("Ranked avg precision: ", label_ranking_average_precision_score(y_true, y_pred))
print("Micro Precision: ", metrics.precision_score(y_true, y_pred, average='micro', zero_division=0))
print("Micro Recall: ", metrics.recall_score(y_true, y_pred, average='micro'))
print("Micro F1 score: ", metrics.f1_score(y_true, y_pred, average='micro', zero_division=0))
print("*"*50)
comp_res.append(("SVM-balanced-Chained", 0.51, 0.75, 0.60))
###Output
_____no_output_____
###Markdown
Using SVM with chains, we can see that there is roughly a 5-point boost in precision, but recall also dropped by around 8 points, with a minor improvement in the F1 score. So, chaining does help the models make more precise predictions.
###Code
pd.DataFrame(comp_res, columns = ['method', 'precision', 'recall', 'f1-score'])
###Output
_____no_output_____
###Markdown
__`Failed experiments :`__- **PCA** for dimensionality reduction: fails because of memory limitations- **Random Projections**: no significant performance improvements- **Stacking predictions** of the above models: no significant performance improvements- **Tree algorithms**: high training time requirements, not feasible ------ **Experiments below** 2.5.1 Train doc2vec on the cleaned dataWe make use of bigram phrase capture to capture important phrases
###Code
# read cleaned data, which we processed before TF-idf and splitting into
Xres = pd.read_csv("./input/500k_X_clean_top50.csv")
Y = pd.read_csv("./input/500k_Y_clean_top50.csv")
%%time
Xtrain, Xtest, Ytrain, Ytest = train_test_split(Xres, Y, test_size=0.20, random_state=42)
Ytrain = Ytrain.iloc[:, 1:]
Ytest = Ytest.iloc[:, 1:]
Xtrain.shape, Ytrain.shape, Xtest.shape, Ytest.shape
cleaned_text_train = Xtrain.queti_stop
cleaned_text_test = Xtest.queti_stop
cleaned_text_train[:3]
from tqdm.notebook import tqdm
# [ ['I', 'am' ..], [..], ... ]
corpus_train = []
tags_train = []
i = 0
for s in tqdm(cleaned_text_train):
corpus_train.append(word_tokenize(s))
tags_train.append(Ytrain.columns[Ytrain.iloc[i]].tolist())
i += 1
corpus_train = np.array(corpus_train)
tags_train = np.array(tags_train)
corpus_train[0], tags_train[:3]
from tqdm.notebook import tqdm
# [ ['I', 'am' ..], [..], ... ]
corpus_test = []
tags_test = []
i = 0
for s in tqdm(cleaned_text_test):
corpus_test.append(word_tokenize(s))
tags_test.append(Ytest.columns[Ytest.iloc[i]].tolist())
i += 1
corpus_test = np.array(corpus_test)
tags_test = np.array(tags_train)
corpus_test[0], tags_test[:3]
from gensim.models import Word2Vec
# To extract n-gram from text
from gensim.models.phrases import Phrases, Phraser
# Configuring Phrases() for bigram
bigram_train = Phrases(corpus_train, min_count=20, threshold=2, delimiter = '~')
# Intializing Phrases() for bigram
bigram_phraser_train = Phraser(bigram_train)
# test
bigram_test = Phrases(corpus_test, min_count=20, threshold=2, delimiter = '~')
# Intializing Phrases() for bigram
bigram_phraser_test = Phraser(bigram_test)
bigram_phraser_train[corpus_train[3]][:6]
# now create a corpus of bigram phrases extracted from original data
bigram_corpus_train = []
for s in tqdm(corpus_train):
bigram_corpus_train.append(bigram_phraser_train[s])
bigram_corpus_test = []
for s in tqdm(corpus_test):
bigram_corpus_test.append(bigram_phraser_test[s])
from gensim.models import Doc2Vec
from gensim.models.doc2vec import TaggedDocument
tagged_docs_train = [TaggedDocument(d, [i]) for i, d in enumerate(bigram_corpus_train)]
model_dbow = Doc2Vec(dm = 0, negative=5, hs=0, min_count=2, sample=0, vector_size=300, window=5, workers=8)
tagged_docs_train[0]
model_dbow.build_vocab(tagged_docs_train)
from sklearn import utils
for epoch in range(30):
print(f"Epoch {epoch}", end = "\r")
model_dbow.train(utils.shuffle([x for x in (tagged_docs_train)]), total_examples=len(tagged_docs_train), epochs=1)
model_dbow.alpha -= 0.002
model_dbow.min_alpha = model_dbow.alpha
model_dbow.dv[0].shape
model_dbow.infer_vector(bigram_corpus_test[0]).shape
train_dbow_vecs = []
for i in tqdm(range(len(tagged_docs_train))):
train_dbow_vecs.append(model_dbow.dv[i])
test_dbow_vecs = []
for i in tqdm(range(len(bigram_corpus_test))):
test_dbow_vecs.append(model_dbow.infer_vector(bigram_corpus_test[i]))
# we can see that the word2vec model finds related words, useful in the given context
model_dbow.wv.most_similar("java", topn=4)
# save the model
model_dbow.save('./model/model_dbow')
# load the model
model_dbow = Doc2Vec.load('./model/model_dbow')
from sklearn.ensemble import VotingClassifier
clf1 = SGDClassifier(loss='log', alpha=0.00001)
clf2 = SGDClassifier(loss='log', alpha=0.00001, class_weight='balanced')
estimator = VotingClassifier(estimators=[('sgd', clf1), ('sgd_balanced', clf2)], voting='soft', n_jobs=4, weights=[0.40, 0.60])
model_log_dv = MultiOutputClassifier(estimator=estimator, n_jobs=4)
model_log_dv.fit(train_dbow_vecs, Ytrain)
###Output
_____no_output_____
###Markdown
Above we train a voting classifier that combines the two SGD models with soft (probability-weighted) voting, weighted 0.40/0.60.
###Code
print_eval(model_log_dv, train_dbow_vecs, Ytrain, test_dbow_vecs, Ytest)
###Output
_____no_output_____
###Markdown
Reading in data
###Code
mitbih_train_df = pd.read_csv("../data/mitbih/mitbih_train.csv", header=None)
mitbih_test_df = pd.read_csv("../data/mitbih/mitbih_test.csv", header=None)
mitbih_train_df.head()
mitbih_train_df.shape
mitbih_test_df.head()
mitbih_test_df.shape
###Output
_____no_output_____
###Markdown
Looking at data distribution
###Code
mitbih_train_df[187] = mitbih_train_df[187].astype(int)
count = mitbih_train_df[187].value_counts()
labels = ["normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"]
plt.figure(figsize=(10, 10))
pie = plt.Circle((0, 0), 0.7, color="white")
plt.pie(count, labels=["Normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"], colors=["green", "blue", "yellow", "purple", "lightblue"], autopct='%1.0f%%')
p = plt.gcf()
p.gca().add_artist(pie)
plt.savefig(f"images/data_distro.pdf", bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
Looking at the ECG-signals for the different classes
###Code
samples = mitbih_train_df.groupby(187, group_keys=False).apply(lambda mitbih_train_df: mitbih_train_df.sample(1))
samples
plt.figure(figsize=(20, 20))
for i, name in enumerate(["Normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"]):
plt.subplot(3,3,i+1)
plt.xlabel("ms")
plt.ylabel("mV")
plt.plot(samples.iloc[i,:186])
plt.title(name)
plt.savefig("images/graphs_ecg_2.png", dpi=960)
###Output
_____no_output_____
###Markdown
Time series heatmap for the different classes
###Code
def heatmap(df, class_label, min_val, size, title):
    img = df.loc[df[187]==class_label].values
img = img[:, min_val:size]
img_flatten = img.flatten()
final = np.arange(min_val, size)
for _ in range(img.shape[0]-1):
tempo = np.arange(min_val, size)
final = np.concatenate((final, tempo), axis=None)
plt.hist2d(final, img_flatten, bins=(65, 65), cmap=plt.cm.jet)
plt.colorbar()
plt.title('2D Histogram - '+ title)
plt.figure(figsize=(20, 20))
for i, name in enumerate(["Normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"]):
plt.subplot(3,3,i+1)
plt.xlabel("ms")
plt.ylabel("mV")
heatmap(mitbih_train_df, i, 5, 70, name)
plt.savefig(f"images/2d_histogram.pdf", bbox_inches='tight')
plt.show()
from sklearn.utils import resample
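# balance the classes: downsample the dominant 'normal' class to 20,000 rows and
# upsample each minority class to 20,000 rows (sampling with replacement)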
df_1=mitbih_train_df[mitbih_train_df[187]==1]
df_2=mitbih_train_df[mitbih_train_df[187]==2]
df_3=mitbih_train_df[mitbih_train_df[187]==3]
df_4=mitbih_train_df[mitbih_train_df[187]==4]
df_0=(mitbih_train_df[mitbih_train_df[187]==0]).sample(n=20000,random_state=42)
df_1_upsample=resample(df_1,replace=True,n_samples=20000,random_state=123)
df_2_upsample=resample(df_2,replace=True,n_samples=20000,random_state=124)
df_3_upsample=resample(df_3,replace=True,n_samples=20000,random_state=125)
df_4_upsample=resample(df_4,replace=True,n_samples=20000,random_state=126)
train_df=pd.concat([df_0,df_1_upsample,df_2_upsample,df_3_upsample,df_4_upsample])
train_df[187] = train_df[187].astype(int)
count = train_df[187].value_counts()
labels = ["normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"]
plt.figure(figsize=(10, 10))
pie = plt.Circle((0, 0), 0.7, color="white")
plt.pie(count, labels=["normal beats", "Supra. beats", "Ventric. beats", "Fusion beats", "Unknown beats"], colors=["green", "blue", "yellow", "purple", "lightblue"], autopct='%1.0f%%')
p = plt.gcf()
p.gca().add_artist(pie)
plt.savefig(f"images/data_distribution_after_datapreprocessing.pdf", bbox_inches='tight')
plt.show()
###Output
_____no_output_____
###Markdown
STATS
###Code
def gen_seq(N):
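    # Monte Carlo estimate over 1,000,000 random permutations of 1..N: record the
    # first element plus the sum of absolute differences between adjacent elements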
out = []
for _ in range(1000000):
arr = np.arange(1, N+1)
np.random.shuffle(arr)
temp = arr[0]
for j in range(len(arr)-1):
temp += abs(arr[j] - arr[j+1])
out.append(temp)
return out
out_10 = gen_seq(10)
print('Mean of N = 10: {}'.format(np.mean(out_10)))
print('STD of N = 10: {}'.format(np.std(out_10)))
out_20 = gen_seq(20)
print('Mean of N = 20: {}'.format(np.mean(out_20)))
print('STD of N = 20: {}'.format(np.std(out_20)))
print('Prob greater than or equal 45 for N = 10: {}'.format(np.mean([i >= 45 for i in out_10])))
print('Prob greater than or equal 160 for N = 20: {}'.format(np.mean([i >= 160 for i in out_20])))
out_5 = gen_seq(5)
np.mean(out_5)
###Output
_____no_output_____ |
OpenCV_Recognize.ipynb | ###Markdown
OpenCV template recognition from http://docs.opencv.org/master/d4/dc6/tutorial_py_template_matching.htmlgsc.tab=0
###Code
! wget http://docs.opencv.org/master/res_mario.jpg
import cv2
import numpy as np
from matplotlib import pyplot as plt
from PIL import Image as PIL_Image
from IPython.display import Image as IpyImage
IpyImage(filename='res_mario.jpg')
###Output
_____no_output_____
###Markdown
Crop the image to make an initial test case
###Code
img_full = PIL_Image.open('res_mario.jpg')
img_half = img_full.crop((0,0,img_full.size[0]/2,img_full.size[1]))
img_half.save('mario_test1.jpg')
IpyImage(filename='mario_test1.jpg')
###Output
_____no_output_____
###Markdown
next grab a gold coin as template
###Code
source = PIL_Image.open('mario_test1.jpg')
coin = source.crop((100,113,110,129))
coin.save('coin.jpg')
IpyImage(filename = 'coin.jpg')
###Output
_____no_output_____
###Markdown
next process template
###Code
img_rgb = cv2.imread('mario_test1.jpg')
img_gray = cv2.cvtColor(img_rgb, cv2.COLOR_BGR2GRAY)
template = cv2.imread('coin.jpg',0)
w, h = template.shape[::-1]
res = cv2.matchTemplate(img_gray,template,cv2.TM_CCOEFF_NORMED)
threshold = 0.8
loc = np.where( res >= threshold)
for pt in zip(*loc[::-1]):
cv2.rectangle(img_rgb, pt, (pt[0] + w, pt[1] + h), (0,0,255), 2)
cv2.imwrite('res.jpg',img_rgb)
IpyImage(filename = 'res.jpg')
###Output
_____no_output_____ |
jupyter/Python_IV_Curve/2_IV_Model_Update/Match_Experimental_Data/IV_Model_Match_Experimental_Data.ipynb | ###Markdown
Source Scripts and Baseline Parameters
###Code
import math
import numpy as np
import matplotlib.pyplot as plt
import matplotlib
from scipy.optimize import minimize
from sklearn.metrics import mean_squared_error
# updated I-V relationship considering Knudsen diffusion (calculated from particle sizes)
def IV_new(oT,fT,J,pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
BV_alpha,BV_prexp,BV_Eact,
Fkn=True, de_a=1.0,ne_a=0.5,alpha_a=1.0,de_c=1.0,ne_c=0.5,alpha_c=1.0):
#-- B Koeppel
#-- 10-13-2014
#--
#-------------------------------------------------------------------
#-- --
#-- VoltageValue() --
#-- --
#-- V-I performance based on spreadsheet EC model --
#-- Updated to include SOA performance --
#-- --
#-------------------------------------------------------------------
#--
#-- Available Local Inputs From SOFC-MP
# oT =700.0 #Temperature oxidant (K)
# fT =700.0 #Temperature fuel (K)
# J=0.01 # Current density (A/cm2)
# pO2air=0.3 # Air side partial pressure O2 (atm)
# pN2air =0.7 #Air side partial pressure N2 (atm)
# pH2 = 0.1 #Fuel side partial pressure H2 (atm)
# pH2O =0.9 #Fuel side partial pressure H2O (atm)
# pCO=0.0 # Fuel side partial pressure CO (atm)
# pCO2=0.0 # Fuel side partial pressure CO2 (atm)
# pCH4=0.0 # Fuel side partial pressure CH4 (atm)
# pN2=0.0 # Fuel side partial pressure N2 (atm)
# pSys=1.0 #System pressure (atm)
#--nActiveCell # Cell number
#-- DistanceFromTheEdge Distance along path (m)
#-- DistanceFromTheEdgeRatio Relative distance along the path
#--
#-- Required Subroutine Outputs
#-- Z Voltage (V)
#--
#------------------------------------------------------------------
#--
#-- User-Assigned Geometry/Material Inputs
#-- th_a Thickness anode (microns)
#-- th_e Thickness electrolyte (microns)
#-- th_c Thickness cathode (microns)
#-- th_ic Thickness interconnect (microns)
#-- por_a Porosity anode (%)
#-- por_c Porosity cathode (%)
#-- tort_a Tortuosity anode
#-- tort_c Tortuosity cathode
#-- BV_alpha Butler-Volmer 'alpha' constant
#-- BV_prexp Butler-Volmer pre-exponential constant
#-- BV_Eact Butler-Volmer activation energy
#-- R_cont Contact resistance
#--
#-- User-Assigned Constants/Conversions
#-- R Ideal gas constant
#-- F Faraday's constant
#-- atm2Pa Conversion for pressure atm -> Pa
#-- mic2m Conversion for length micron -> m
#------------------------------------------------------------------
#--
#function VoltageValue()
#--
#--J=-J
#-- Assign Constants/Conversions
R=8.3145
F=96485.0
atm2Pa=101325.0
mic2m=1.0e-6
#--
#-- Assign Flags
BVflag=0 #-- 0=old method, 1=pressurized method
#--
#-- Assign Geometry/Material Data
th_a= 300.0
th_e= 10.0
th_c= 30.0
th_ic= 500.0
por_a= 40.0
por_c= 40.0
tort_a= 2.5
tort_c= 2.5
# BV_alpha= 0.43236
# BV_prexp= 5639.0
# BV_Eact= 79616.0
R_cont= 0.0
BV_alpha2a= 0.44
BV_prexp2a= 1.43E+04
BV_Eact2a= 8.00E+04
BV_gamma2a= 0.5
BV_alpha2f= 9.01
BV_prexp2f= 1.31E+07
BV_Eact2f= 8.00E+04
BV_gamma2f= 0.133
V_loss= 0.0
#--
#%-- Compute the local cell temperature
#------------------------------------------------------------------
Tk=(oT+fT)/2.0
Tc=Tk-273.0
#--
#%-- Compute the Nernst open circuit voltage
#------------------------------------------------------------------
Keq_dHo=-56930.0
Keq_A=6.75
Keq_B=-0.64
Keq_C=-0.08
Keq_L=-8.74
Keq_dG=Keq_dHo+Keq_A*Tk*math.log10(Tk)+Keq_B*Tk*Tk/1000+Keq_C*100000/Tk+Keq_L*Tk
Kequib=math.exp(-Keq_dG*4.184/R/Tk)
pO2anode=(pH2O/Kequib/pH2)**2
Voc=(R*Tk/4.0/F)*math.log(pO2air/pO2anode)
#--
#%-- Compute the ohmic polarization
#------------------------------------------------------------------
#-- Compute the electrolyte conductivity
s_eA=8.588e-10
s_eB=-1.101e-6
s_eC=4.679e-4
s_eD=-0.0654
s_e=s_eA*Tc**3+s_eB*Tc**2+s_eC*Tc+s_eD
#%-- Compute the interconnect conductivity
s_icA=0.069
s_icB=70.9
s_ic=1000000.0/(s_icA*Tc+s_icB)
#%-- Compute the cathode conductivity
s_cA=575955.0
s_cEa=0.117
s_c=(s_cA/Tk)*math.exp(-s_cEa/0.00008617/Tk)*(1.0-(0.018*por_c))
#%-- Compute the anode conductivity
s_aA=1000
s_a=s_aA*(1.0-(0.018*por_a))
#%-- Compute the effective cell resistivity
Ri=R_cont+(th_e/s_e+th_a/s_a+th_ic/s_ic+th_c/s_c)*0.0001
#%-- Compute the total ohmic loss
Ohmic=Ri*J
#--
#%-- Compute the activation polarization (old method or new pressurized method)
#------------------------------------------------------------------
if BVflag==0:
# -- Old method
i0=BV_prexp*math.exp(-BV_Eact/R/Tk)
BV=(R*Tk/BV_alpha/F)*math.log((J/2.0/i0)+math.sqrt((J/2.0/i0)**2+1))
else:
# -- New method
ioeff_f=BV_prexp2f*math.exp(-BV_Eact2f/R/Tk)*pO2anode**BV_gamma2f
ioeff_a=BV_prexp2a*math.exp(-BV_Eact2a/R/Tk)*pO2air**BV_gamma2a
eta_f=R*Tk/BV_alpha2f/F*math.log((J/2.0/ioeff_f)+math.sqrt((J/2.0/ioeff_f)**2+1))
eta_a=R*Tk/BV_alpha2a/F*math.log((J/2.0/ioeff_a)+math.sqrt((J/2.0/ioeff_a)**2+1))
BV=eta_f+eta_a
#--
#%-- Compute the diffusion coefficients
#------------------------------------------------------------------
#-- Make 0.0 species non-zero to make equations defined
if pCO<=0 :
pCOc=1e-16
else:
pCOc=pCO
if pCO2<=0 :
pCO2c=1e-16
else:
pCO2c=pCO2
Ptotal=pH2+pH2O+pCOc+pCO2c+pN2+pCH4
H2_mf=pH2/Ptotal
H2O_mf=pH2O/Ptotal
CO_mf=pCOc/Ptotal
CO2_mf=pCO2c/Ptotal
N2_mf=pN2/Ptotal
CH4_mf=pCH4/Ptotal
#-- Diffusion constants (empirical radii and molecular weights)
H2i=1.92
H2Oi=2.33
COi=2.66
CO2i=3.0
N2i=2.62
O2i=2.55
CH4i=2.9
H2ii=2.0 #unit [g/mol]
H2Oii=18.0 #unit [g/mol]
COii=28.0 #unit [g/mol]
CO2ii=44.0 #unit [g/mol]
N2ii=28.0 #unit [g/mol]
O2ii=32.0 #unit [g/mol]
CH4ii=16.0 #unit [g/mol]
#%-- Compute anode binary diffusion constants
H2H2O=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/H2Oii)/((H2i+H2Oi)**2)
H2CO=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/COii)/((H2i+COi)**2)
H2CO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/CO2ii)/((H2i+CO2i)**2)
H2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/N2ii)/((H2i+N2i)**2)
H2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/CH4ii)/((H2i+CH4i)**2)
O2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/O2ii+1/N2ii)/((O2i+N2i)**2)
H2OCO=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/COii)/((H2Oi+COi)**2)
H2OCO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/CO2ii)/((H2Oi+CO2i)**2)
H2ON2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/N2ii)/((H2Oi+N2i)**2)
H2OCH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/CH4ii)/((H2Oi+CH4i)**2)
N2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/N2ii+1/CH4ii)/((N2i+CH4i)**2)
COCO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/CO2ii)/((COi+CO2i)**2)
CON2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/N2ii)/((COi+N2i)**2)
COCH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/CH4ii)/((COi+CH4i)**2)
CO2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/CO2ii+1/N2ii)/((CO2i+N2i)**2)
CO2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/CO2ii+1/CH4ii)/((CO2i+CH4i)**2)
#%-- Compute anode unitary diffusion constants
H2_UD=(1-H2_mf)/(H2O_mf/H2H2O+CO_mf/H2CO+CO2_mf/H2CO2+N2_mf/H2N2+CH4_mf/H2CH4)
H2O_UD=(1-H2O_mf)/(H2_mf/H2H2O+CO_mf/H2OCO+CO2_mf/H2OCO2+N2_mf/H2ON2+CH4_mf/H2OCH4)
CO_UD=(1-CO_mf)/(H2_mf/H2CO+H2O_mf/H2OCO+CO2_mf/COCO2+N2_mf/CON2+CH4_mf/COCH4)
CO2_UD=(1-CO2_mf)/(H2_mf/H2CO2+H2O_mf/H2OCO2+CO_mf/COCO2+N2_mf/CO2N2+CH4_mf/CO2CH4)
N2_UD=(1-N2_mf)/(H2_mf/H2N2+H2O_mf/H2ON2+CO_mf/CON2+CO2_mf/CO2N2+CH4_mf/N2CH4)
CH4_UD=(1-CH4_mf)/(H2_mf/H2CH4+H2O_mf/H2OCH4+CO_mf/COCH4+CO2_mf/CO2CH4+N2_mf/N2CH4)
#%-- Compute anode adsorption and surface diffusion modifications
area_H2=math.pi*(H2i*10**-10)**2
area_H2O=math.pi*(H2Oi*10**-10)**2
area_CO=math.pi*(COi*10**-10)**2
area_CO2=math.pi*(CO2i*10**-10)**2
area_N2=math.pi*(N2i*10**-10)**2
area_O2=math.pi*(O2i*10**-10)**2
area_CH4=math.pi*(CH4i*10**-10)**2
pres_H2=max(0,pH2-J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(H2_UD*por_a/100)))
pres_H2O=max(0,pH2O+J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(H2O_UD*por_a/100)))
pres_CO=max(0,pCOc-J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(CO_UD*por_a/100)))
pres_CO2=max(0,pCO2c+J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(CO2_UD*por_a/100)))
pres_N2=max(0,pN2)
pres_O2=max(0,pO2anode)
pres_CH4=max(0,pCH4)
Qev_H2=0.425
Qev_H2O=0.549
Qev_CO=0.5
Qev_CO2=0.5
Qev_N2=0.5
Qev_O2=0.5
Qev_CH4=0.5
bP_H2=6.023*10**23*area_H2*10**-13/math.sqrt(2*math.pi*R*Tk*H2ii)*math.exp(Qev_H2/(0.026*Tk/298))*pres_H2
bP_H2O=6.023*10**23*area_H2O*10**-13/math.sqrt(2*math.pi*R*Tk*H2Oii)*math.exp(Qev_H2O/(0.026*Tk/298))*pres_H2O
bP_CO=6.023*10**23*area_CO*10**-13/math.sqrt(2*math.pi*R*Tk*COii)*math.exp(Qev_CO/(0.026*Tk/298))*pres_CO
bP_CO2=6.023*10**23*area_CO2*10**-13/math.sqrt(2*math.pi*R*Tk*CO2ii)*math.exp(Qev_CO2/(0.026*Tk/298))*pres_CO2
bP_N2=6.023*10**23*area_N2*10**-13/math.sqrt(2*math.pi*R*Tk*N2ii)*math.exp(Qev_N2/(0.026*Tk/298))*pres_N2
bP_O2=6.023*10**23*area_O2*10**-13/math.sqrt(2*math.pi*R*Tk*O2ii)*math.exp(Qev_O2/(0.026*Tk/298))*pres_O2
bP_CH4=6.023*10**23*area_CH4*10**-13/math.sqrt(2*math.pi*R*Tk*CH4ii)*math.exp(Qev_CH4/(0.026*Tk/298))*pres_CH4
bP_sum=bP_H2+bP_H2O+bP_CO+bP_CO2+bP_N2+bP_O2+bP_CH4
cov_H2=bP_H2/(1+bP_sum)
cov_H2O=bP_H2O/(1+bP_sum)
cov_CO=bP_CO/(1+bP_sum)
cov_CO2=bP_CO2/(1+bP_sum)
cov_N2=bP_N2/(1+bP_sum)
cov_O2=bP_O2/(1+bP_sum)
cov_CH4=bP_CH4/(1+bP_sum)
cov_sum=cov_H2+cov_H2O+cov_CO+cov_CO2+cov_N2+cov_O2+cov_CH4
fij_H2=cov_H2/cov_sum
fij_H2O=cov_H2O/cov_sum
fij_CO=cov_CO/cov_sum
fij_CO2=cov_CO2/cov_sum
fij_N2=cov_N2/cov_sum
fij_O2=cov_O2/cov_sum
fij_CH4=cov_CH4/cov_sum
DsurfH2th1=0.1
DsurfH2th2=4.51e-5
D_H2=H2_UD**fij_H2*((DsurfH2th1**(1-fij_H2)*DsurfH2th2**fij_H2)/(1-fij_H2))**(1-fij_H2)
D_H2O=H2O_UD**fij_H2O*(10**-4)**(1-fij_H2O)
D_CO=CO_UD**fij_CO*(10**-4)**(1-fij_CO)
D_CO2=CO2_UD**fij_CO2*(10**-4)**(1-fij_CO2)
D_N2=N2_UD**fij_N2*(10**-4)**(1-fij_N2)
D_O2=O2N2**fij_O2*(10**-4)**(1-fij_O2)
D_CH4=CH4_UD**fij_CH4*(10**-4)**(1-fij_CH4)
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
#-- Compute the effective Knudsen diffusion coefficient
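        #-- Knudsen diffusivity D_K = (d_pore/3)*sqrt(8*R*T/(pi*M)), scaled by porosity/tortuosity;
        #-- the pore diameter d0 below is estimated from particle size, porosity, and specific surface area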
A0_a=6/de_a*(10**-6)*(ne_a+(1-ne_a)*alpha_a**2)/(ne_a+(1-ne_a)*alpha_a**3)
d0_a=4/A0_a*(0.01*por_a)/(1-0.01*por_a)
# print('specific surface area: ', A0_a)
# print('pore diameter: ', d0_a)
DeffH2_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(H2ii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffH2O_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(H2Oii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffCO_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(COii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffCO2_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(CO2ii*10**(-3)))*por_a/tort_a*0.01*10**4
A0_c=6/de_c*(10**-6)*(ne_c+(1-ne_c)*alpha_c**2)/(ne_c+(1-ne_c)*alpha_c**3)
d0_c=4/A0_c*(0.01*por_c)/(1-0.01*por_c)
DeffO2_K=1/3*d0_c*math.sqrt(8*R*Tk/math.pi/(O2ii*10**(-3)))*por_c/tort_c*0.01*10**4
#---------------------------------------------------------------------------------------------------------------------
#%-- Compute the cathode concentration polarization
#------------------------------------------------------------------
Deffc=0.01*por_c*O2N2/tort_c
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
# print('Cathode: O2 ',Deffc, 'vs.', DeffO2_K, '[cm2/s]')
Deffc=(Deffc*DeffO2_K)/(Deffc+DeffO2_K)
#---------------------------------------------------------------------------------------------------------------------
ics=1.0e-8*(4.0*F*Ptotal*atm2Pa*Deffc)/(R*Tk*th_c*mic2m)*math.log(pSys/(pSys-pO2air))
#--ics=1.0e-8*(4.0*F*Ptotal*atm2Pa*Deffc)/(R*Tk*th_c*mic2m)*math.log(Ptotal/(Ptotal-pO2air))
Cath=(R*Tk/4.0/F)*math.log(1.0-(J/ics))
#--
#%-- Compute the anode concentration polarization
#------------------------------------------------------------------
DeffH2=D_H2
DeffH2O=0.01*H2O_UD*por_a/tort_a
DeffCO=0.01*CO_UD*por_a/tort_a
DeffCO2=0.01*CO2_UD*por_a/tort_a
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
# print('Anode: H2 Dffe_normal ',DeffH2, 'vs. Deff_Knu', DeffH2_K, '[cm2/s]')
# print('Anode: H2O Dffe_normal ',DeffH2O, 'vs. Deff_Knu', DeffH2O_K, '[cm2/s]')
# print('Anode: CO Dffe_normal ',DeffCO, 'vs. Deff_Knu', DeffCO_K, '[cm2/s]')
# print('Anode: CO2 Dffe_normal ',DeffCO2, 'vs. Deff_Knu', DeffCO2_K, '[cm2/s]')
DeffH2=(DeffH2*DeffH2_K)/(DeffH2+DeffH2_K)
DeffH2O=(DeffH2O*DeffH2O_K)/(DeffH2O+DeffH2O_K)
DeffCO=(DeffCO*DeffCO_K)/(DeffCO+DeffCO_K)
DeffCO2=(DeffCO2*DeffCO2_K)/(DeffCO2+DeffCO2_K)
#---------------------------------------------------------------------------------------------------------------------
alim=2*F*pH2*atm2Pa*DeffH2/(831.45*Tk*th_a)
blim=2*F*pH2O*atm2Pa*DeffH2O/(831.45*Tk*th_a)
clim=2*F*pCOc*atm2Pa*DeffCO/(831.45*Tk*th_a)
dlim=2*F*pCO2c*atm2Pa*DeffCO2/(831.45*Tk*th_a)
#-- Adjust calculation for iteration case of too high current requested
if J>(alim+clim) :
Jcalc=J
else:
Jcalc=J
OPa_A=(Jcalc+blim+dlim)/blim/dlim
OPa_B=(Jcalc*(alim*dlim+blim*clim)+blim*clim*dlim+alim*blim*dlim-alim*clim*dlim-alim*blim*clim)/alim/blim/clim/dlim
OPa_C=(Jcalc-alim-clim)/alim/clim
holdA1=OPa_A
holdB1=OPa_B
holdC1=OPa_C
stabcheck=OPa_B**2-4.0*OPa_A*OPa_C
stabcheck2=(-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A
# print('alim: ', alim)
# print('blim: ', blim)
# print('clim: ', clim)
# print('dlim: ', dlim)
# print('OPa_A: ', OPa_A)
# print('OPa_B: ', OPa_B)
# print('OPa_C: ', OPa_C)
# print('stabcheck: ', stabcheck)
# print('stabcheck2: ', stabcheck2)
if stabcheck>0 :
if stabcheck2>0 :
# print('stabcheck>0 and stabcheck2>0')
Anod=(R*Tk/2.0/F)*math.log((-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A)
holdA2=0
holdB2=0
holdC2=0
goober=1
# print('DeffH2: ', DeffH2)
else:
# print('stabcheck>0 and stabcheck2<0')
DeffH2=0.01*H2_UD*por_a/tort_a
DeffH2O=0.01*H2O_UD*por_a/tort_a
DeffCO=0.01*CO_UD*por_a/tort_a
DeffCO2=0.01*CO2_UD*por_a/tort_a
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
DeffH2=(DeffH2*DeffH2_K)/(DeffH2+DeffH2_K)
DeffH2O=(DeffH2O*DeffH2O_K)/(DeffH2O+DeffH2O_K)
DeffCO=(DeffCO*DeffCO_K)/(DeffCO+DeffCO_K)
DeffCO2=(DeffCO2*DeffCO2_K)/(DeffCO2+DeffCO2_K)
#---------------------------------------------------------------------------------------------------------------------
# print('DeffH2: ', DeffH2)
alim=2*F*pH2*atm2Pa*DeffH2/(831.45*Tk*th_a)
blim=2*F*pH2O*atm2Pa*DeffH2O/(831.45*Tk*th_a)
clim=2*F*pCOc*atm2Pa*DeffCO/(831.45*Tk*th_a)
dlim=2*F*pCO2c*atm2Pa*DeffCO2/(831.45*Tk*th_a)
OPa_A=(Jcalc+blim+dlim)/blim/dlim
OPa_B=(Jcalc*(alim*dlim+blim*clim)+blim*clim*dlim+alim*blim*dlim-alim*clim*dlim-alim*blim*clim)/alim/blim/clim/dlim
OPa_C=(Jcalc-alim-clim)/alim/clim
holdA2=OPa_A
holdB2=OPa_B
holdC2=OPa_C
Anod=(R*Tk/2.0/F)*math.log((-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A)
goober=2
#--
#%-- Compute the final voltage result
#------------------------------------------------------------------
# print(Voc,Ohmic,BV,Cath)
V=(Voc-Ohmic-BV+Cath+Anod)+V_loss #this is the original one for SOFC
#--file=io.open("vdetails.dat","a")
#V=(Voc+Ohmic+BV-Cath-Anod)+V_loss #SOEC proton
#Z=V #*1.1+0.05
# print(V,"(V)=",Voc,"(Voc)+",Ohmic,"(Ohmic)+",BV,"(BV)-",Cath,"(Cath)-",Anod,"Anod)")
#--Voc=(R*Tk/4.0/F)*math.log(pO2air/pO2anode)
#--file:write(Voc," ",Ohmic," ",BV," ",Cath," ",Anod," ",pN2air," ",pH2," ",pH2O," ",pCO," ",pCO2," ",pCH4,"\n")
#--pO2anode=(pH2O/Kequib/pH2)**2
#--file:write(Voc,"=",pO2air,"/",pO2anode," =",pH2O,"/",Kequib,"/",pH2,"\n")
#--file:close()
#--
#-- return the voltage value
return(V,Voc,Ohmic,BV,Cath,Anod)
# updated I-V relationship that accounts for Knudsen diffusion (calculated from the pore size)
def IV_new_2(oT,fT,J,pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
BV_alpha, BV_prexp, BV_Eact,V_loss=0.0, R_cont=0.0,
DsurfH2th1=0.1, DsurfH2th2=4.51e-5,Fkn=True, d0_am=0.28,d0_cm=0.28, th_e=10):
#-- B Koeppel
#-- 10-13-2014
#--
#-------------------------------------------------------------------
#-- --
#-- VoltageValue() --
#-- --
#-- V-I performance based on spreadsheet EC model --
#-- Updated to include SOA performance --
#-- --
#-------------------------------------------------------------------
#--
#-- Available Local Inputs From SOFC-MP
# oT =700.0 #Temperature oxidant (K)
# fT =700.0 #Temperature fuel (K)
# J=0.01 # Current density (A/cm2)
# pO2air=0.3 # Air side partial pressure O2 (atm)
# pN2air =0.7 #Air side partial pressure N2 (atm)
# pH2 = 0.1 #Fuel side partial pressure H2 (atm)
# pH2O =0.9 #Fuel side partial pressure H2O (atm)
# pCO=0.0 # Fuel side partial pressure CO (atm)
# pCO2=0.0 # Fuel side partial pressure CO2 (atm)
# pCH4=0.0 # Fuel side partial pressure CH4 (atm)
# pN2=0.0 # Fuel side partial pressure N2 (atm)
# pSys=1.0 #System pressure (atm)
#--nActiveCell # Cell number
#-- DistanceFromTheEdge Distance along path (m)
#-- DistanceFromTheEdgeRatio Relative distance along the path
#--
#-- Required Subroutine Outputs
#-- Z Voltage (V)
#--
#------------------------------------------------------------------
#--
#-- User-Assigned Geometry/Material Inputs
#-- th_a Thickness anode (microns)
#-- th_e Thickness electrolyte (microns)
#-- th_c Thickness cathode (microns)
#-- th_ic Thickness interconnect (microns)
#-- por_a Porosity anode (%)
#-- por_c Porosity cathode (%)
#-- tort_a Tortuosity anode
#-- tort_c Tortuosity cathode
#-- BV_alpha Butler-Volmer 'alpha' constant
#-- BV_prexp Butler-Volmer pre-exponential constant
#-- BV_Eact Butler-Volmer activation energy
#-- R_cont Contact resistance
#--
#-- User-Assigned Constants/Conversions
#-- R Ideal gas constant
#-- F Faraday's constant
#-- atm2Pa Conversion for pressure atm -> Pa
#-- mic2m Conversion for length micron -> m
#------------------------------------------------------------------
#--
#function VoltageValue()
#--
#--J=-J
#-- Assign Constants/Conversions
R=8.3145
F=96485.0
atm2Pa=101325.0
mic2m=1.0e-6
#--
#-- Assign Flags
BVflag=0 #-- 0=old method, 1=pressurized method
#--
#-- Assign Geometry/Material Data
th_a= 300.0
# th_e= 10.0
th_c= 30.0
th_ic= 500.0
por_a= 40.0
por_c= 40.0
tort_a= 2.5
tort_c= 2.5
# BV_alpha= 0.43236
# BV_prexp= 5639.0
# BV_Eact= 79616.0
# R_cont= 0.0
BV_alpha2a= 0.44
BV_prexp2a= 1.43E+04
BV_Eact2a= 8.00E+04
BV_gamma2a= 0.5
BV_alpha2f= 9.01
BV_prexp2f= 1.31E+07
BV_Eact2f= 8.00E+04
BV_gamma2f= 0.133
# V_loss= 0.0
#--
#%-- Compute the local cell temperature
#------------------------------------------------------------------
Tk=(oT+fT)/2.0
Tc=Tk-273.0
#--
#%-- Compute the Nernst open circuit voltage
#------------------------------------------------------------------
Keq_dHo=-56930.0
Keq_A=6.75
Keq_B=-0.64
Keq_C=-0.08
Keq_L=-8.74
Keq_dG=Keq_dHo+Keq_A*Tk*math.log10(Tk)+Keq_B*Tk*Tk/1000+Keq_C*100000/Tk+Keq_L*Tk
Kequib=math.exp(-Keq_dG*4.184/R/Tk)
pO2anode=(pH2O/Kequib/pH2)**2
Voc=(R*Tk/4.0/F)*math.log(pO2air/pO2anode)
#--
#%-- Compute the ohmic polarization
#------------------------------------------------------------------
#-- Compute the electrolyte conductivity
s_eA=8.588e-10
s_eB=-1.101e-6
s_eC=4.679e-4
s_eD=-0.0654
s_e=s_eA*Tc**3+s_eB*Tc**2+s_eC*Tc+s_eD
#%-- Compute the interconnect conductivity
s_icA=0.069
s_icB=70.9
s_ic=1000000.0/(s_icA*Tc+s_icB)
#%-- Compute the cathode conductivity
s_cA=575955.0
s_cEa=0.117
s_c=(s_cA/Tk)*math.exp(-s_cEa/0.00008617/Tk)*(1.0-(0.018*por_c))
#%-- Compute the anode conductivity
s_aA=1000
s_a=s_aA*(1.0-(0.018*por_a))
#%-- Compute the effective cell resistivity
Ri=R_cont+(th_e/s_e+th_a/s_a+th_ic/s_ic+th_c/s_c)*0.0001
#%-- Compute the total ohmic loss
Ohmic=Ri*J
#--
#%-- Compute the activation polarization (old method or new pressurized method)
#------------------------------------------------------------------
if BVflag==0:
# -- Old method
i0=BV_prexp*math.exp(-BV_Eact/R/Tk)
BV=(R*Tk/BV_alpha/F)*math.log((J/2.0/i0)+math.sqrt((J/2.0/i0)**2+1))
else:
# -- New method
ioeff_f=BV_prexp2f*math.exp(-BV_Eact2f/R/Tk)*pO2anode**BV_gamma2f
ioeff_a=BV_prexp2a*math.exp(-BV_Eact2a/R/Tk)*pO2air**BV_gamma2a
eta_f=R*Tk/BV_alpha2f/F*math.log((J/2.0/ioeff_f)+math.sqrt((J/2.0/ioeff_f)**2+1))
eta_a=R*Tk/BV_alpha2a/F*math.log((J/2.0/ioeff_a)+math.sqrt((J/2.0/ioeff_a)**2+1))
BV=eta_f+eta_a
#--
#%-- Compute the diffusion coefficients
#------------------------------------------------------------------
#-- Make 0.0 species non-zero to make equations defined
if pCO<=0 :
pCOc=1e-16
else:
pCOc=pCO
if pCO2<=0 :
pCO2c=1e-16
else:
pCO2c=pCO2
Ptotal=pH2+pH2O+pCOc+pCO2c+pN2+pCH4
H2_mf=pH2/Ptotal
H2O_mf=pH2O/Ptotal
CO_mf=pCOc/Ptotal
CO2_mf=pCO2c/Ptotal
N2_mf=pN2/Ptotal
CH4_mf=pCH4/Ptotal
#-- Diffusion constants (empirical radii and molecular weights)
H2i=1.92
H2Oi=2.33
COi=2.66
CO2i=3.0
N2i=2.62
O2i=2.55
CH4i=2.9
H2ii=2.0 #unit [g/mol]
H2Oii=18.0 #unit [g/mol]
COii=28.0 #unit [g/mol]
CO2ii=44.0 #unit [g/mol]
N2ii=28.0 #unit [g/mol]
O2ii=32.0 #unit [g/mol]
CH4ii=16.0 #unit [g/mol]
#%-- Compute anode binary diffusion constants
H2H2O=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/H2Oii)/((H2i+H2Oi)**2)
H2CO=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/COii)/((H2i+COi)**2)
H2CO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/CO2ii)/((H2i+CO2i)**2)
H2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/N2ii)/((H2i+N2i)**2)
H2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2ii+1/CH4ii)/((H2i+CH4i)**2)
O2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/O2ii+1/N2ii)/((O2i+N2i)**2)
H2OCO=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/COii)/((H2Oi+COi)**2)
H2OCO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/CO2ii)/((H2Oi+CO2i)**2)
H2ON2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/N2ii)/((H2Oi+N2i)**2)
H2OCH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/H2Oii+1/CH4ii)/((H2Oi+CH4i)**2)
N2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/N2ii+1/CH4ii)/((N2i+CH4i)**2)
COCO2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/CO2ii)/((COi+CO2i)**2)
CON2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/N2ii)/((COi+N2i)**2)
COCH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/COii+1/CH4ii)/((COi+CH4i)**2)
CO2N2=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/CO2ii+1/N2ii)/((CO2i+N2i)**2)
CO2CH4=(1e-3/Ptotal)*(Tk**1.75)*math.sqrt(1/CO2ii+1/CH4ii)/((CO2i+CH4i)**2)
#%-- Compute anode unitary diffusion constants
H2_UD=(1-H2_mf)/(H2O_mf/H2H2O+CO_mf/H2CO+CO2_mf/H2CO2+N2_mf/H2N2+CH4_mf/H2CH4)
H2O_UD=(1-H2O_mf)/(H2_mf/H2H2O+CO_mf/H2OCO+CO2_mf/H2OCO2+N2_mf/H2ON2+CH4_mf/H2OCH4)
CO_UD=(1-CO_mf)/(H2_mf/H2CO+H2O_mf/H2OCO+CO2_mf/COCO2+N2_mf/CON2+CH4_mf/COCH4)
CO2_UD=(1-CO2_mf)/(H2_mf/H2CO2+H2O_mf/H2OCO2+CO_mf/COCO2+N2_mf/CO2N2+CH4_mf/CO2CH4)
N2_UD=(1-N2_mf)/(H2_mf/H2N2+H2O_mf/H2ON2+CO_mf/CON2+CO2_mf/CO2N2+CH4_mf/N2CH4)
CH4_UD=(1-CH4_mf)/(H2_mf/H2CH4+H2O_mf/H2OCH4+CO_mf/COCH4+CO2_mf/CO2CH4+N2_mf/N2CH4)
#%-- Compute anode adsorption and surface diffusion modifications
area_H2=math.pi*(H2i*10**-10)**2
area_H2O=math.pi*(H2Oi*10**-10)**2
area_CO=math.pi*(COi*10**-10)**2
area_CO2=math.pi*(CO2i*10**-10)**2
area_N2=math.pi*(N2i*10**-10)**2
area_O2=math.pi*(O2i*10**-10)**2
area_CH4=math.pi*(CH4i*10**-10)**2
pres_H2=max(0,pH2-J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(H2_UD*por_a/100)))
pres_H2O=max(0,pH2O+J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(H2O_UD*por_a/100)))
pres_CO=max(0,pCOc-J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(CO_UD*por_a/100)))
pres_CO2=max(0,pCO2c+J*82.058*Tk*(th_a/10000)/(2*F)*(tort_a/(CO2_UD*por_a/100)))
pres_N2=max(0,pN2)
pres_O2=max(0,pO2anode)
pres_CH4=max(0,pCH4)
Qev_H2=0.425
Qev_H2O=0.549
Qev_CO=0.5
Qev_CO2=0.5
Qev_N2=0.5
Qev_O2=0.5
Qev_CH4=0.5
bP_H2=6.023*10**23*area_H2*10**-13/math.sqrt(2*math.pi*R*Tk*H2ii)*math.exp(Qev_H2/(0.026*Tk/298))*pres_H2
bP_H2O=6.023*10**23*area_H2O*10**-13/math.sqrt(2*math.pi*R*Tk*H2Oii)*math.exp(Qev_H2O/(0.026*Tk/298))*pres_H2O
bP_CO=6.023*10**23*area_CO*10**-13/math.sqrt(2*math.pi*R*Tk*COii)*math.exp(Qev_CO/(0.026*Tk/298))*pres_CO
bP_CO2=6.023*10**23*area_CO2*10**-13/math.sqrt(2*math.pi*R*Tk*CO2ii)*math.exp(Qev_CO2/(0.026*Tk/298))*pres_CO2
bP_N2=6.023*10**23*area_N2*10**-13/math.sqrt(2*math.pi*R*Tk*N2ii)*math.exp(Qev_N2/(0.026*Tk/298))*pres_N2
bP_O2=6.023*10**23*area_O2*10**-13/math.sqrt(2*math.pi*R*Tk*O2ii)*math.exp(Qev_O2/(0.026*Tk/298))*pres_O2
bP_CH4=6.023*10**23*area_CH4*10**-13/math.sqrt(2*math.pi*R*Tk*CH4ii)*math.exp(Qev_CH4/(0.026*Tk/298))*pres_CH4
bP_sum=bP_H2+bP_H2O+bP_CO+bP_CO2+bP_N2+bP_O2+bP_CH4
cov_H2=bP_H2/(1+bP_sum)
cov_H2O=bP_H2O/(1+bP_sum)
cov_CO=bP_CO/(1+bP_sum)
cov_CO2=bP_CO2/(1+bP_sum)
cov_N2=bP_N2/(1+bP_sum)
cov_O2=bP_O2/(1+bP_sum)
cov_CH4=bP_CH4/(1+bP_sum)
cov_sum=cov_H2+cov_H2O+cov_CO+cov_CO2+cov_N2+cov_O2+cov_CH4
fij_H2=cov_H2/cov_sum
fij_H2O=cov_H2O/cov_sum
fij_CO=cov_CO/cov_sum
fij_CO2=cov_CO2/cov_sum
fij_N2=cov_N2/cov_sum
fij_O2=cov_O2/cov_sum
fij_CH4=cov_CH4/cov_sum
# DsurfH2th1=0.1
# DsurfH2th2=4.51e-5
D_H2=H2_UD**fij_H2*((DsurfH2th1**(1-fij_H2)*DsurfH2th2**fij_H2)/(1-fij_H2))**(1-fij_H2)
D_H2O=H2O_UD**fij_H2O*(10**-4)**(1-fij_H2O)
D_CO=CO_UD**fij_CO*(10**-4)**(1-fij_CO)
D_CO2=CO2_UD**fij_CO2*(10**-4)**(1-fij_CO2)
D_N2=N2_UD**fij_N2*(10**-4)**(1-fij_N2)
D_O2=O2N2**fij_O2*(10**-4)**(1-fij_O2)
D_CH4=CH4_UD**fij_CH4*(10**-4)**(1-fij_CH4)
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
#-- Compute the effective Knudsen diffusion coefficient
d0_a=d0_am*(10**-6)
DeffH2_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(H2ii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffH2O_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(H2Oii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffCO_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(COii*10**(-3)))*por_a/tort_a*0.01*10**4
DeffCO2_K=1/3*d0_a*math.sqrt(8*R*Tk/math.pi/(CO2ii*10**(-3)))*por_a/tort_a*0.01*10**4
d0_c=d0_cm*(10**-6)
DeffO2_K=1/3*d0_c*math.sqrt(8*R*Tk/math.pi/(O2ii*10**(-3)))*por_c/tort_c*0.01*10**4
#---------------------------------------------------------------------------------------------------------------------
#%-- Compute the cathode concentration polarization
#------------------------------------------------------------------
Deffc=0.01*por_c*O2N2/tort_c
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
# print('Cathode: O2 ',Deffc, 'vs.', DeffO2_K, '[cm2/s]')
Deffc=(Deffc*DeffO2_K)/(Deffc+DeffO2_K)
#---------------------------------------------------------------------------------------------------------------------
ics=1.0e-8*(4.0*F*Ptotal*atm2Pa*Deffc)/(R*Tk*th_c*mic2m)*math.log(pSys/(pSys-pO2air))
#--ics=1.0e-8*(4.0*F*Ptotal*atm2Pa*Deffc)/(R*Tk*th_c*mic2m)*math.log(Ptotal/(Ptotal-pO2air))
Cath=(R*Tk/4.0/F)*math.log(1.0-(J/ics))
#--
#%-- Compute the anode concentration polarization
#------------------------------------------------------------------
DeffH2=D_H2
DeffH2O=0.01*H2O_UD*por_a/tort_a
DeffCO=0.01*CO_UD*por_a/tort_a
DeffCO2=0.01*CO2_UD*por_a/tort_a
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
# print('Anode: H2 Dffe_normal ',DeffH2, 'vs. Deff_Knu', DeffH2_K, '[cm2/s]')
# print('Anode: H2O Dffe_normal ',DeffH2O, 'vs. Deff_Knu', DeffH2O_K, '[cm2/s]')
# print('Anode: CO Dffe_normal ',DeffCO, 'vs. Deff_Knu', DeffCO_K, '[cm2/s]')
# print('Anode: CO2 Dffe_normal ',DeffCO2, 'vs. Deff_Knu', DeffCO2_K, '[cm2/s]')
DeffH2=(DeffH2*DeffH2_K)/(DeffH2+DeffH2_K)
DeffH2O=(DeffH2O*DeffH2O_K)/(DeffH2O+DeffH2O_K)
DeffCO=(DeffCO*DeffCO_K)/(DeffCO+DeffCO_K)
DeffCO2=(DeffCO2*DeffCO2_K)/(DeffCO2+DeffCO2_K)
#---------------------------------------------------------------------------------------------------------------------
alim=2*F*pH2*atm2Pa*DeffH2/(831.45*Tk*th_a)
blim=2*F*pH2O*atm2Pa*DeffH2O/(831.45*Tk*th_a)
clim=2*F*pCOc*atm2Pa*DeffCO/(831.45*Tk*th_a)
dlim=2*F*pCO2c*atm2Pa*DeffCO2/(831.45*Tk*th_a)
#-- Adjust calculation for iteration case of too high current requested
if J>(alim+clim) :
Jcalc=J
else:
Jcalc=J
OPa_A=(Jcalc+blim+dlim)/blim/dlim
OPa_B=(Jcalc*(alim*dlim+blim*clim)+blim*clim*dlim+alim*blim*dlim-alim*clim*dlim-alim*blim*clim)/alim/blim/clim/dlim
OPa_C=(Jcalc-alim-clim)/alim/clim
holdA1=OPa_A
holdB1=OPa_B
holdC1=OPa_C
stabcheck=OPa_B**2-4.0*OPa_A*OPa_C
stabcheck2=(-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A
# print('alim: ', alim)
# print('blim: ', blim)
# print('clim: ', clim)
# print('dlim: ', dlim)
# print('OPa_A: ', OPa_A)
# print('OPa_B: ', OPa_B)
# print('OPa_C: ', OPa_C)
# print('stabcheck: ', stabcheck)
# print('stabcheck2: ', stabcheck2)
if stabcheck>0 :
if stabcheck2>0 :
# print('stabcheck>0 and stabcheck2>0')
Anod=(R*Tk/2.0/F)*math.log((-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A)
holdA2=0
holdB2=0
holdC2=0
goober=1
# print('DeffH2: ', DeffH2)
else:
# print('stabcheck>0 and stabcheck2<0')
DeffH2=0.01*H2_UD*por_a/tort_a
DeffH2O=0.01*H2O_UD*por_a/tort_a
DeffCO=0.01*CO_UD*por_a/tort_a
DeffCO2=0.01*CO2_UD*por_a/tort_a
#---------------------------------------------------------------------------------------------------------------------
if Fkn==True:
DeffH2=(DeffH2*DeffH2_K)/(DeffH2+DeffH2_K)
DeffH2O=(DeffH2O*DeffH2O_K)/(DeffH2O+DeffH2O_K)
DeffCO=(DeffCO*DeffCO_K)/(DeffCO+DeffCO_K)
DeffCO2=(DeffCO2*DeffCO2_K)/(DeffCO2+DeffCO2_K)
#---------------------------------------------------------------------------------------------------------------------
# print('DeffH2: ', DeffH2)
alim=2*F*pH2*atm2Pa*DeffH2/(831.45*Tk*th_a)
blim=2*F*pH2O*atm2Pa*DeffH2O/(831.45*Tk*th_a)
clim=2*F*pCOc*atm2Pa*DeffCO/(831.45*Tk*th_a)
dlim=2*F*pCO2c*atm2Pa*DeffCO2/(831.45*Tk*th_a)
OPa_A=(Jcalc+blim+dlim)/blim/dlim
OPa_B=(Jcalc*(alim*dlim+blim*clim)+blim*clim*dlim+alim*blim*dlim-alim*clim*dlim-alim*blim*clim)/alim/blim/clim/dlim
OPa_C=(Jcalc-alim-clim)/alim/clim
holdA2=OPa_A
holdB2=OPa_B
holdC2=OPa_C
Anod=(R*Tk/2.0/F)*math.log((-OPa_B+math.sqrt(OPa_B**2-4.0*OPa_A*OPa_C))/2.0/OPa_A)
goober=2
#--
#%-- Compute the final voltage result
#------------------------------------------------------------------
# print(Voc,Ohmic,BV,Cath)
V=(Voc-Ohmic-BV+Cath+Anod)+V_loss #this is the original one for SOFC
#--file=io.open("vdetails.dat","a")
#V=(Voc+Ohmic+BV-Cath-Anod)+V_loss #SOEC proton
#Z=V #*1.1+0.05
# print(V,"(V)=",Voc,"(Voc)+",Ohmic,"(Ohmic)+",BV,"(BV)-",Cath,"(Cath)-",Anod,"Anod)")
#--Voc=(R*Tk/4.0/F)*math.log(pO2air/pO2anode)
#--file:write(Voc," ",Ohmic," ",BV," ",Cath," ",Anod," ",pN2air," ",pH2," ",pH2O," ",pCO," ",pCO2," ",pCH4,"\n")
#--pO2anode=(pH2O/Kequib/pH2)**2
#--file:write(Voc,"=",pO2air,"/",pO2anode," =",pH2O,"/",Kequib,"/",pH2,"\n")
#--file:close()
#--
#-- return the voltage value
return(V,Voc,Ohmic,BV,Cath,Anod)
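# Illustrative call of IV_new_2 (added example; the operating point is arbitrary but
# physically reasonable: ~700 C, 0.5 A/cm2, 97% H2 / 3% H2O fuel, the air composition used below)
V_demo, Voc_demo, Ohmic_demo, BV_demo, Cath_demo, Anod_demo = IV_new_2(
    973.15, 973.15, 0.5, 0.3, 0.7, 0.97, 0.03, 0.0, 0.0, 0.0, 0.0, 1.0,
    0.43236, 5639.0, 79616.0)
print('demo cell voltage [V]:', V_demo)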
# environment parameters
oT =700+273.15 #Temperature oxidant (K)
fT =700+273.15 #Temperature fuel (K)
pO2air=0.3 #Air side partial pressure O2 (atm)
pN2air =0.7 #Air side partial pressure N2 (atm)
# pH2 = 0.4375 #Fuel side partial pressure H2 (atm)
# pH2O =0.3125 #Fuel side partial pressure H2O (atm)
# pCO=0.0625 #Fuel side partial pressure CO (atm)
# pCO2=0.0625 #Fuel side partial pressure CO2 (atm)
# pCH4=0.125 #Fuel side partial pressure CH4 (atm)
# pN2=0.0 #Fuel side partial pressure N2 (atm)
pH2 = 0.97 #Fuel side partial pressure H2 (atm)
pH2O =0.03 #Fuel side partial pressure H2O (atm)
pCO=0.0 #Fuel side partial pressure CO (atm)
pCO2=0.0 #Fuel side partial pressure CO2 (atm)
pCH4=0.0 #Fuel side partial pressure CH4 (atm)
pN2=0.0 #Fuel side partial pressure N2 (atm)
pSys=1.0 #System pressure (atm)
# fuel cell property parameters
de_a = 0.61 #diameter of electrically conducting particles for anode 0.2-0.8
ne_a = 0.2 #number fraction of electrically conducting particles for anode 0.2-0.6
alpha_a = 0.475/0.61 #the particle size ratio of ionic to electronic conducting particles for anode 0.5-2.0
de_c = de_a #** #for cathode
ne_c = ne_a #** #for cathode
alpha_c = alpha_a #** #for cathode
d0_am = 0.228 # pore diameter [micrometers]
d0_cm = d0_am
# adjustable parameters for B-V loss (starting with these for optimization)
BV_alpha = 0.43236 #0.43236/3 **
BV_prexp = 5639 #**
BV_Eact = 79616 #**
###Output
_____no_output_____
###Markdown
Match Experimental Data in Literature (PNNL)
###Code
# comparison between JHM-2017 experimental data and IV_new predictions
# initialize and optimize
param_guess = (0.43236, 5639, 79616, 0.0, 0.0, 0.28, 0.28)
# BV_alpha, BV_prexp, BV_Eact,V_loss=0.0, R_cont=0.0,
# DsurfH2th1=0.1, DsurfH2th2=4.51e-5,
# Fkn=True, d0_am=0.28,d0_cm=0.28, th_e=10
Tlist = [650, 700, 750, 800, 850]
th_e = 10
param_fixed = [pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys]
def objective_function_JM(param_guess, param_fixed):
[pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys] = param_fixed
filename = './ExperimentalData/JM2017/AllData_JM.csv'
data = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
x = data[:,0]
y_exp = data[:,1]
Toper = data[:, 2]
y_model = np.zeros(len(y_exp))
for j in range(len(y_exp)):
y_model[j], Voc, Ohmic, BV, Cath, Anode = IV_new_2(Toper[j]+273.15,Toper[j]+273.15,x[j],
pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
param_guess[0], param_guess[1], param_guess[2],
param_guess[3], param_guess[4], 0.1, 4.51e-5,
True,param_guess[5],param_guess[6])
rmse = mean_squared_error(y_exp, y_model, squared=False)
return rmse
obj_guess = objective_function_JM(param_guess, param_fixed)
bnds = ((None, None), (None, None), (None, None), (None, 0), (0, None), (0.2, 2.0), (0.2, 2.0))
result = minimize(objective_function_JM, param_guess, args = (param_fixed), method = 'SLSQP', bounds = bnds)
print(result)
# plot and comparison
markerlist = ['o', 'v', 'd', '^', '+']
colorlist = ['r', 'g', 'b', 'c', 'm']
plt.figure(figsize=(17.5,6))
for i in range(len(Tlist)):
oT = Tlist[i]+273.15
fT = oT
# plot exp data
filename = './Resources_More/ExperimentalData/JM2017/'+str(Tlist[i])+'.csv'
data = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
plt.plot(data[:,0], data[:,1], colorlist[i]+markerlist[i], label = 'EXP '+str(Tlist[i]))
# initialize Jlist and Vlist
Jlist=np.linspace(np.amin(data[:,0]), np.amax(data[:,0]), num=20) #Current density, unit [A/cm2]
Vlist = np.zeros(20)
# plot pred data
for j in range(20):
Vlist[j], Voc, Ohmic, BV, Cath, Anode = IV_new_2(oT,fT,Jlist[j],pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
result.x[0],result.x[1],result.x[2],
result.x[3], result.x[4], 0.1, 4.51e-5,
True,result.x[5],result.x[6])
plt.plot(Jlist, Vlist, colorlist[i]+'-', label = 'IV Pred '+str(Tlist[i]))
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.legend(loc='upper right', fontsize=15)
plt.xlim(-0.1, 1.1)
plt.ylim(0.4, 1.1)
# ymin, ymax = plt.ylim()
# plt.ylim(ymin-(ymax-ymin)*0.0, ymax+(ymax-ymin)*0.0)
plt.xlabel('Current Density, J [A/cm2]', fontsize = 15)
plt.ylabel('Voltage, V [V]', fontsize = 15)
plt.title('EXP. VS. IV_new', fontsize = 15)
#comparison between Park-2020 experimental data (t8) and IV_new predictions
#initialize and optimize
param_guess = (0.43236, 5639, 79616, 0.0, 0.0, 0.28, 0.28)
bnds = ((None, None), (None, None), (None, None), (None, 0), (0, None), (0.2, 2.0), (0.2, 2.0))
# BV_alpha, BV_prexp, BV_Eact,V_loss=0.0, R_cont=0.0,
# DsurfH2th1=0.1, DsurfH2th2=4.51e-5,
# Fkn=True, d0_am=0.28,d0_cm=0.28, th_e=10
Tlist = [600, 700, 800]
th_e = 8
param_fixed = [pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,th_e]
def objective_function_BP(param_guess, param_fixed):
filename = './ExperimentalData/BP2020/AllData_BP.csv'
data = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
x = data[:,0]
y_exp = data[:,1]
Toper = data[:, 2]
[pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,th_e] = param_fixed
y_model = np.zeros(len(y_exp))
for j in range(len(y_exp)):
y_model[j], Voc, Ohmic, BV, Cath, Anode = IV_new_2(Toper[j]+273.15,Toper[j]+273.15,x[j],
pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
param_guess[0],param_guess[1],param_guess[2],
param_guess[3], param_guess[4],
0.1, 4.51e-5, True, param_guess[5], param_guess[6], th_e)
rmse = mean_squared_error(y_exp, y_model, squared=False)
return rmse
obj_guess = objective_function_BP(param_guess, param_fixed)
result = minimize(objective_function_BP, param_guess, args = (param_fixed), method = 'SLSQP', bounds = bnds)
print(result)
# plot and comparison
markerlist = ['o', 'v', 'd', '^', '+']
colorlist = ['r', 'g', 'b', 'c', 'm']
plt.figure(figsize=(17.5,6))
for i in range(len(Tlist)):
oT = Tlist[i]+273.15
fT = oT
# plot exp data
filename = './Resources_More/ExperimentalData/BP2020/t8/'+str(Tlist[i])+'.csv'
data = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
plt.plot(data[:,0], data[:,1], colorlist[i]+markerlist[i], label = 'EXP '+str(Tlist[i]))
# initialize Jlist and Vlist
Jlist=np.linspace(np.amin(data[:,0]), np.amax(data[:,0]), num=20) #Current density, unit [A/cm2]
Vlist = np.zeros(20)
# plot pred data
for j in range(20):
Vlist[j], Voc, Ohmic, BV, Cath, Anode = IV_new_2(oT,fT,Jlist[j],pO2air,pN2air,pH2,pH2O,pCO,pCO2,pCH4,pN2,pSys,
result.x[0],result.x[1],result.x[2],
result.x[3], result.x[4],
0.1, 4.51e-5, True, result.x[5], result.x[6], th_e)
plt.plot(Jlist, Vlist, colorlist[i]+'-', label = 'IV Pred '+str(Tlist[i]))
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.legend(loc='upper right', fontsize=15)
ymin, ymax = plt.ylim()
plt.ylim(ymin-(ymax-ymin)*0.0, ymax+(ymax-ymin)*0.0)
plt.xlabel('Current Density, J [A/cm2]', fontsize = 15)
plt.ylabel('Voltage, V [V]', fontsize = 15)
plt.title('EXP. VS. IV_new', fontsize = 15)
###Output
fun: 0.01942475200926286
jac: array([ 1.11753477e-02, -9.72067937e-07, -8.50763172e-07, 8.39315006e-03,
3.96854378e-01, -1.16512994e-03, -3.08273127e-04])
message: 'Optimization terminated successfully.'
nfev: 150
nit: 16
njev: 16
status: 0
success: True
x: array([ 2.69263711e-01, 5.63900022e+03, 7.96160001e+04, -5.98828589e-02,
0.00000000e+00, 2.00000000e+00, 3.72783474e-01])
2018/Quotes_XML.ipynb | ###Markdown
Scraping Quotes with LXML (XPath approach) Until now we were using CSS selectors to find an HTML element. This notebook uses another powerful approach called XPath (XML path) to match elements. We will use the lxml library to complete the task.Key points:- lxml - ugly, useful to get attributes and nested values, faster- bs - beautiful, useful to get structured values, slower- text_content() - function from lxml to get the text out of the tag,- xpath() - function from lxml to match elements,- tostring() - function from lxml to convert an HTML element to a string (both tag and text)
###Code
import requests
from lxml import html
from lxml.etree import tostring
url = "http://quotes.toscrape.com/"
#headers are used to provide identifying information to the server
response = requests.get(url,headers={"user-agent":"[email protected]"})
page = response.content
tree = html.document_fromstring(page) #parse the HTML into an lxml document tree
tostring(tree) #to see the HTML source
# find author names using xpath
[i.text_content() for i in tree.xpath("//small")]
# find tags using xpath
[i.text_content() for i in tree.xpath("//div/a")]
tree.xpath("//a[@class='tag']")
###Output
_____no_output_____ |
qutip-notebooks-master/examples/quantum-gates - Copy.ipynb | ###Markdown
QuTiP example: Quantum Gates and their usage Author: Anubhav Vardhan ([email protected])For more information about QuTiP see [http://qutip.org](http://qutip.org)
###Code
%matplotlib inline
from IPython.display import Image
from numpy import pi
from qutip import *
###Output
_____no_output_____
###Markdown
Introduction http://en.wikipedia.org/wiki/Quantum_gate Gates in QuTiP and their representation Controlled-PHASE
###Code
cphase(pi/2)
Image(filename='images/cphase.png')
###Output
_____no_output_____
###Markdown
Rotation about X-axis
###Code
rx(pi/2)
Image(filename='images/rx.png')
###Output
_____no_output_____
###Markdown
Rotation about Y-axis
###Code
ry(pi/2)
Image(filename='images/ry.png')
###Output
_____no_output_____
###Markdown
Rotation about Z-axis
###Code
rz(pi/2)
Image(filename='images/rz.png')
###Output
_____no_output_____
###Markdown
CNOT
###Code
cnot()
Image(filename='images/cnot.png')
###Output
_____no_output_____
###Markdown
CSIGN
###Code
csign()
Image(filename='images/csign.png')
###Output
_____no_output_____
###Markdown
Berkeley
###Code
berkeley()
Image(filename='images/berkeley.png')
###Output
_____no_output_____
###Markdown
SWAPalpha
###Code
swapalpha(pi/2)
Image(filename='images/swapalpha.png')
###Output
_____no_output_____
###Markdown
FREDKIN
###Code
fredkin()
Image(filename='images/fredkin.png')
###Output
_____no_output_____
###Markdown
TOFFOLI
###Code
toffoli()
Image(filename='images/toffoli.png')
###Output
_____no_output_____
###Markdown
SWAP
###Code
swap()
Image(filename='images/swap.png')
###Output
_____no_output_____
###Markdown
ISWAP
###Code
iswap()
Image(filename='images/iswap.png')
###Output
_____no_output_____
###Markdown
SQRTiSWAP
###Code
sqrtiswap()
Image(filename='images/sqrtiswap.png')
###Output
_____no_output_____
###Markdown
SQRTSWAP
###Code
sqrtswap()
Image(filename='images/sqrtswap.png')
###Output
_____no_output_____
###Markdown
SQRTNOT
###Code
sqrtnot()
Image(filename='images/sqrtnot.png')
###Output
_____no_output_____
###Markdown
HADAMARD
###Code
snot()
Image(filename='images/snot.png')
###Output
_____no_output_____
###Markdown
PHASEGATE
###Code
phasegate(pi/2)
Image(filename='images/phasegate.png')
###Output
_____no_output_____
###Markdown
GLOBALPHASE
###Code
globalphase(pi/2)
Image(filename='images/globalphase.png')
###Output
_____no_output_____
###Markdown
Mølmer–Sørensen gate
###Code
molmer_sorensen(pi/2)
###Output
_____no_output_____
###Markdown
Qubit rotation gate
###Code
qrot(pi/2, pi/4)
###Output
_____no_output_____
###Markdown
Expanding gates to larger qubit registers The examples above show how to generate matrix representations of the gates implemented in QuTiP, in their minimal qubit requirements. If the same gate is to be represented in a qubit register of size $N$, the optional keyword argument `N` can be specified when calling the gate function. For example, to generate the matrix for the CNOT gate for an $N=3$ qubit register:
###Code
cnot(N=3)
Image(filename='images/cnot310.png')
###Output
_____no_output_____
###Markdown
Furthermore, the control and target qubits (when applicable) can also be similarly specified using keyword arguments `control` and `target` (or in some cases `controls` or `targets`):
###Code
cnot(N=3, control=2, target=0)
Image(filename='images/cnot302.png')
###Output
_____no_output_____
###Markdown
Setup of a Qubit Circuit The gates implemented in QuTiP can be used to build any qubit circuit using the class QubitCircuit. The output can be obtained in the form of a unitary matrix or a latex representation. In the following example, we take a SWAP gate. It is known that a swap gate is equivalent to three CNOT gates applied in the given format.
###Code
N = 2
qc0 = QubitCircuit(N)
qc0.add_gate("SWAP", [0, 1], None)
qc0.png
U_list0 = qc0.propagators()
U0 = gate_sequence_product(U_list0)
U0
qc1 = QubitCircuit(N)
qc1.add_gate("CNOT", 0, 1)
qc1.add_gate("CNOT", 1, 0)
qc1.add_gate("CNOT", 0, 1)
qc1.png
U_list1 = qc1.propagators()
U1 = gate_sequence_product(U_list1)
U1
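# Added check (not part of the original notebook): the three-CNOT construction
# should reproduce the SWAP unitary (Qobj equality is compared within numerical tolerance).
print(U0 == U1)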
###Output
_____no_output_____
###Markdown
In place of manually converting the SWAP gate to CNOTs, it can be automatically converted using an inbuilt function in QubitCircuit
###Code
qc2 = qc0.resolve_gates("CNOT")
qc2.png
U_list2 = qc2.propagators()
U2 = gate_sequence_product(U_list2)
U2
###Output
_____no_output_____
###Markdown
Example of basis transformation
###Code
qc3 = QubitCircuit(3)
qc3.add_gate("CNOT", 1, 0)
qc3.add_gate("RX", 0, None, pi/2, r"\pi/2")
qc3.add_gate("RY", 1, None, pi/2, r"\pi/2")
qc3.add_gate("RZ", 2, None, pi/2, r"\pi/2")
qc3.add_gate("ISWAP", [1, 2])
qc3.png
U3 = gate_sequence_product(qc3.propagators())
U3
###Output
_____no_output_____
###Markdown
The transformation can either be only in terms of 2-qubit gates:
###Code
qc4 = qc3.resolve_gates("CNOT")
qc4.png
U4 = gate_sequence_product(qc4.propagators())
U4
qc5 = qc3.resolve_gates("ISWAP")
qc5.png
U5 = gate_sequence_product(qc5.propagators())
U5
###Output
_____no_output_____
###Markdown
Or the transformation can be in terms of any 2 single qubit rotation gates along with the 2-qubit gate.
###Code
qc6 = qc3.resolve_gates(["ISWAP", "RX", "RY"])
qc6.png
U6 = gate_sequence_product(qc6.propagators())
U6
qc7 = qc3.resolve_gates(["CNOT", "RZ", "RX"])
qc7.png
U7 = gate_sequence_product(qc7.propagators())
U7
###Output
_____no_output_____
###Markdown
Resolving non-adjacent interactions Interactions between non-adjacent qubits can be resolved by QubitCircuit to a series of adjacent interactions, which is useful for systems such as spin chain models.
###Code
qc8 = QubitCircuit(3)
qc8.add_gate("CNOT", 2, 0)
qc8.png
U8 = gate_sequence_product(qc8.propagators())
U8
qc9 = qc8.adjacent_gates()
qc9.png
U9 = gate_sequence_product(qc9.propagators())
U9
qc10 = qc9.resolve_gates("CNOT")
qc10.png
U10 = gate_sequence_product(qc10.propagators())
U10
###Output
_____no_output_____
###Markdown
User defined gates A user defined gate can be defined by a python function that takes at most one parameter and returns a `Qobj`; the dimension of the `Qobj` has to match the qubit system.
###Code
import numpy as np
def user_gate1(arg_value):
# controlled rotation X
    mat = np.zeros((4, 4), dtype=complex)
    mat[0, 0] = mat[1, 1] = 1.
    mat[2:4, 2:4] = rx(arg_value).full()  # embed the 2x2 RX block as a plain array
return Qobj(mat, dims=[[2, 2], [2, 2]])
def user_gate2():
# S gate
mat = np.array([[1., 0],
[0., 1.j]])
return Qobj(mat, dims=[[2], [2]])
###Output
_____no_output_____
###Markdown
To let the `QubitCircuit` process those gates, one can modify its `user_gates` attribute, which is a python dictionary in the form `{name: gate_function}`.
###Code
qc = QubitCircuit(2)
qc.user_gates = {"CTRLRX": user_gate1,
"S" : user_gate2}
###Output
_____no_output_____
###Markdown
When calling the `add_gate` method, the target qubits and the argument need to be given.
###Code
# qubit 0 controls qubit 1
qc.add_gate("CTRLRX", targets=[0,1], arg_value=pi/2)
# qubit 1 controls qubit 0
qc.add_gate("CTRLRX", targets=[1,0], arg_value=pi/2)
# a gate can also be added using the Gate class
g_T = Gate("S", targets=[1])
qc.add_gate("S", targets=[1])
props = qc.propagators()
props[0] # qubit 0 controls qubit 1
props[1] # qubit 1 controls qubit 0
props[2] # S gate acts on qubit 1
###Output
_____no_output_____
###Markdown
Software versions
###Code
from qutip.ipynbtools import version_table
version_table()
###Output
_____no_output_____ |
.ipynb_checkpoints/ML_LogisticRegression-checkpoint.ipynb | ###Markdown
Machine Learning - Andrew Ng ( Python Implementation) Logistic Regression Loading of Data
###Code
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
df=pd.read_csv("ex2data1.txt",header=None)
X=df.iloc[:,:-1].values
y=df.iloc[:,-1].values
df.head()
df.describe()
###Output
_____no_output_____
###Markdown
Plotting of Data
###Code
pos , neg = (y==1).reshape(100,1) , (y==0).reshape(100,1)
plt.scatter(X[pos[:,0],0],X[pos[:,0],1],c="r",marker="+")
plt.scatter(X[neg[:,0],0],X[neg[:,0],1],marker="o",s=10)
plt.xlabel("Exam 1 score")
plt.ylabel("Exam 2 score")
plt.legend(["Admitted","Not admitted"],loc=0)
###Output
_____no_output_____
###Markdown
Sigmoid function$ g(z) = \frac{1}{(1+e^{-z})}$
###Code
def sigmoid(z):
"""
return the sigmoid of z
"""
return 1/ (1 + np.exp(-z))
# testing the sigmoid function
sigmoid(0)
###Output
_____no_output_____
###Markdown
Compute the Cost Function and Gradient $J(\Theta) = \frac{1}{m} \sum_{i=1}^{m} [ -y^{(i)}\log(h_{\Theta}(x^{(i)})) - (1 - y^{(i)})\log(1 - h_{\Theta}(x^{(i)}))]$$ \frac{\partial J(\Theta)}{\partial \Theta_j} = \frac{1}{m} \sum_{i=1}^{m} (h_{\Theta}(x^{(i)}) - y^{(i)})x_j^{(i)}$
###Code
def costFunction(theta, X, y):
"""
Takes in numpy array theta, x and y and return the logistic regression cost function and gradient
"""
m=len(y)
predictions = sigmoid(np.dot(X,theta))
error = (-y * np.log(predictions)) - ((1-y)*np.log(1-predictions))
cost = 1/m * sum(error)
grad = 1/m * np.dot(X.transpose(),(predictions - y))
return cost[0] , grad
###Output
_____no_output_____
###Markdown
Feature scaling
###Code
def featureNormalization(X):
"""
    Take in a numpy array of X values and return the normalized X values together with
    the mean and standard deviation of each feature (so the same scaling can be reused later)
"""
mean=np.mean(X,axis=0)
std=np.std(X,axis=0)
X_norm = (X - mean)/std
return X_norm , mean , std
m , n = X.shape[0], X.shape[1]
X, X_mean, X_std = featureNormalization(X)
X= np.append(np.ones((m,1)),X,axis=1)
y=y.reshape(m,1)
initial_theta = np.zeros((n+1,1))
cost, grad= costFunction(initial_theta,X,y)
print("Cost of initial theta is",cost)
print("Gradient at initial theta (zeros):",grad)
###Output
Cost of initial theta is 0.693147180559946
Gradient at initial theta (zeros): [[-0.1 ]
[-0.28122914]
[-0.25098615]]
###Markdown
Gradient Descent
###Code
def gradientDescent(X,y,theta,alpha,num_iters):
"""
Take in numpy array X, y and theta and update theta by taking num_iters gradient steps
with learning rate of alpha
return theta and the list of the cost of theta during each iteration
"""
m=len(y)
J_history =[]
for i in range(num_iters):
cost, grad = costFunction(theta,X,y)
theta = theta - (alpha * grad)
J_history.append(cost)
return theta , J_history
theta , J_history = gradientDescent(X,y,initial_theta,1,400)
print("Theta optimized by gradient descent:",theta)
print("The cost of the optimized theta:",J_history[-1])
###Output
Theta optimized by gradient descent: [[1.65947664]
[3.8670477 ]
[3.60347302]]
The cost of the optimized theta: 0.20360044248226664
###Markdown
Plotting of Cost Function
###Code
plt.plot(J_history)
plt.xlabel("Iteration")
plt.ylabel("$J(\Theta)$")
plt.title("Cost function using Gradient Descent")
###Output
_____no_output_____
###Markdown
Plotting the decision boundary From Machine Learning Resources: $h_\Theta(x) = g(z)$, where $g$ is the sigmoid function and $z = \Theta^Tx$. Since $h_\Theta(x) \geq 0.5$ is interpreted as predicting class "1", $g(\Theta^Tx) \geq 0.5$ or equivalently $\Theta^Tx \geq 0$ predicts class "1". $\Theta_1 + \Theta_2x_2 + \Theta_3x_3 = 0$ is the decision boundary. Since we plot $x_2$ against $x_3$, the boundary line is given by the equation $ x_3 = \frac{-(\Theta_1+\Theta_2x_2)}{\Theta_3}$
###Code
plt.scatter(X[pos[:,0],1],X[pos[:,0],2],c="r",marker="+",label="Admitted")
plt.scatter(X[neg[:,0],1],X[neg[:,0],2],c="b",marker="x",label="Not admitted")
x_value= np.array([np.min(X[:,1]),np.max(X[:,1])])
y_value=-(theta[0] +theta[1]*x_value)/theta[2]
plt.plot(x_value,y_value, "g")
plt.xlabel("Exam 1 score")
plt.ylabel("Exam 2 score")
plt.legend(loc=0)
###Output
_____no_output_____
###Markdown
Prediction
###Code
def classifierPredict(theta,X):
"""
take in numpy array of theta and X and predict the class
"""
predictions = X.dot(theta)
    return predictions>0 # z > 0 is equivalent to sigmoid(z) > 0.5, i.e. predict class 1
x_test = np.array([45,85])
x_test = (x_test - X_mean)/X_std
x_test = np.append(np.ones(1),x_test)
prob = sigmoid(x_test.dot(theta))
print("For a student with scores 45 and 85, we predict an admission probability of",prob[0])
###Output
For a student with scores 45 and 85, we predict an admission probability of 0.7677628875792492
###Markdown
Accuracy on training set
###Code
p=classifierPredict(theta,X)
print("Train Accuracy:", sum(p==y)[0],"%")
###Output
Train Accuracy: 89 %
|
Experiments/3d_simple_nerf.ipynb | ###Markdown
Simplified 3D NeRF
###Code
!pip install -q livelossplot
import jax
from jax import random, grad, jit, vmap
from jax.config import config
from jax.lib import xla_bridge
import jax.numpy as np
from jax.experimental import stax
from jax.experimental import optimizers
from livelossplot import PlotLosses
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm as tqdm
import os
import numpy as onp
from jax.lib import xla_bridge
print(f'Using {xla_bridge.get_backend().platform}')
rng = random.PRNGKey(0)
###Output
Using gpu
###Markdown
Load Data
###Code
filename = 'lego_400.npz'
if not os.path.exists(filename):
!gdown --id 108jNfjPITTsTA0lE6Kpg7Ei53BUVL-4n # Lego
data = np.load(filename)
images = data['images']
poses = data['poses']
focal = data['focal']
H, W = images.shape[1:3]
images, val_images, test_images = np.split(images[...,:3], [100,107], axis=0) # split images into train / validation / test sets
poses, val_poses, test_poses = np.split(poses, [100,107], axis=0) # split the camera poses the same way
print(val_images.shape, test_images.shape, focal)
plt.imshow(test_images[0,...])
plt.show()
###Output
(7, 400, 400, 3) (13, 400, 400, 3) 555.5555155968841
###Markdown
Rendering Functions
###Code
def get_rays(H, W, focal, c2w):
i, j = np.meshgrid(np.arange(W), np.arange(H), indexing='xy')
    dirs = np.stack([(i-W*.5)/focal, -(j-H*.5)/focal, -np.ones_like(i)], -1) # transform pixel coordinates to camera coordinates (intrinsics)
    rays_d = np.sum(dirs[..., np.newaxis, :] * c2w[:3,:3], -1) # rotate ray directions from camera to world coordinates (extrinsic rotation)
    rays_o = np.broadcast_to(c2w[:3,-1], rays_d.shape) # ray origins are the camera position in world coordinates (extrinsic translation)
return np.stack([rays_o, rays_d], 0)
get_rays = jit(get_rays, static_argnums=(0, 1, 2,))
training_rays = np.stack([get_rays(H,W,focal,pose) for pose in poses], 1)
training_data = np.concatenate([training_rays, images[None]])
training_data = np.moveaxis(training_data, 0, -2)
training_data = onp.array(np.reshape(training_data, [-1, 3, 3]))
onp.random.shuffle(training_data)
training_data = np.array(training_data)
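# training_data now has shape [N_images*H*W, 3, 3]: one (ray origin, ray direction,
# RGB target) triple per pixel, shuffled once so batches can be read sequentially.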
def render_rays(apply_fn, params, avals, bvals, key, rays, near, far, N_samples, rand=False, allret=False):
rays_o, rays_d = rays
# Compute 3D query points
z_vals = np.linspace(near, far, N_samples)
if rand:
        z_vals += random.uniform(key, shape=list(rays_o.shape[:-1]) + [N_samples]) * (far-near)/N_samples # stratified sampling: jitter the sample depths
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None]
# Run network
    pts_flat = np.reshape(pts, [-1,3]) # flatten query points; if enabled, Fourier-feature encode positions before passing them to the network
if avals is not None:
pts_flat = np.concatenate([avals * np.sin(pts_flat @ bvals.T),
avals * np.cos(pts_flat @ bvals.T)], axis=-1)
raw = apply_fn(params, pts_flat)
raw = np.reshape(raw, list(pts.shape[:-1]) + [4])
# Compute opacities and colors
rgb, sigma_a = raw[...,:3], raw[...,3] #rgb value and volume density
sigma_a = jax.nn.relu(sigma_a)
rgb = jax.nn.sigmoid(rgb)
# Do volume rendering
dists = np.concatenate([z_vals[..., 1:] - z_vals[..., :-1], np.broadcast_to([1e10], z_vals[...,:1].shape)], -1)
alpha = 1.-np.exp(-sigma_a * dists)
trans = np.minimum(1., 1.-alpha + 1e-10)
trans = np.concatenate([np.ones_like(trans[...,:1]), trans[...,:-1]], -1)
weights = alpha * np.cumprod(trans, -1)
rgb_map = np.sum(weights[...,None] * rgb, -2)
acc_map = np.sum(weights, -1)
if False:
rgb_map = rgb_map + (1.-acc_map[..., None])
if not allret:
return rgb_map
depth_map = np.sum(weights * z_vals, -1)
return rgb_map, depth_map, acc_map
def render_fn_inner(params, avals, bvals, key, rays, rand, allret):
return render_rays(apply_fn, params, avals, bvals, key, rays, near=2., far=6., N_samples=N_samples, rand=rand, allret=allret)
render_fn_inner = jit(render_fn_inner, static_argnums=(5, 6,))
def render_fn(params, avals, bvals, key, rays, rand):
    chunk = 5 # render a few image rows per call to keep memory bounded
for i in range(0, rays.shape[1], chunk):
out = render_fn_inner(params, avals, bvals, key, rays[:,i:i+chunk], rand, True)
if i==0:
rets = out
else:
rets = [np.concatenate([a, b], 0) for a, b in zip(rets, out)]
return rets
###Output
_____no_output_____
###Markdown
Network Definition
###Code
def make_network(num_layers, num_channels): #network generation function
layers = []
for i in range(num_layers-1):
layers.append(stax.Dense(num_channels))
layers.append(stax.Relu)
layers.append(stax.Dense(4))
return stax.serial(*layers)
def loss_fn(params, avals, bvals, key, rays, target, stratified):
rgb = render_fn_inner(params, avals, bvals, key, rays, stratified, False)
l = np.mean(np.square(rgb - target))
return l
def train_model(lr, iters, avals, bvals, stratified, name='', plot_groups=None):
rng = random.PRNGKey(0)
if bvals is not None:
init_shape = (-1, bvals.shape[0]*2)
else:
init_shape = (-1, 3)
_, net_params = init_fn(rng, init_shape)
opt_init, opt_update, get_params = optimizers.adam(lr)
opt_state = opt_init(net_params)
@jit
def step_fn(i, opt_state, avals, bvals, key, rays, target):
params = get_params(opt_state)
g = grad(loss_fn)(params, avals, bvals, key, rays, target, stratified)
return opt_update(i, g, opt_state)
if plot_groups is not None:
plot_groups['PSNR'].append(f'{name}')
b_i = 0
xs = []
psnrs = []
import time
t = time.time()
t0 = t
for i in range(iters+1):
batch = training_data[b_i:b_i+batch_size]
b_i += batch_size
        rays = np.moveaxis(batch[:,:2], 1, 0) # ray origins and directions
        target = batch[:,2] # target pixel colors
if b_i >= training_data.shape[0]:
b_i = 0
rng, key = random.split(rng)
opt_state = step_fn(i, opt_state, avals, bvals, key, rays, target)
if i%1000==0 or i==iters:
psnr = []
print(i, (time.time() - t) / 200, 'secs per iter', (time.time()-t0)/60., 'total mins')
num_vals = val_poses.shape[0] if i==iters else 1
for v in range(num_vals):
# Render the holdout view for logging
rays = get_rays(H, W, focal, val_poses[v,...])
rng, key = random.split(rng)
rgb, depth, acc = render_fn(get_params(opt_state), avals, bvals, key, rays, False)
loss = np.mean(np.square(rgb - val_images[v,...]))
psnr.append(-10. * np.log10(loss))
psnr = np.mean(np.array(psnr))
psnrs.append(psnr)
xs.append(i)
if plot_groups is not None:
plotlosses_model.update({f'{name}':psnr}, current_step=i)
plotlosses_model.send()
t = time.time()
results = {
'state': get_params(opt_state),
'psnrs': psnrs,
'avals': avals,
'bvals': bvals,
'val_image': rgb,
'xs': xs
}
return results
###Output
_____no_output_____
###Markdown
Train network with different embeddings Our 3D input embedding is of the form: \> $\gamma(\mathbf v) = (a_0 \sin(\pi b_0^\top \mathbf v), a_0 \cos(\pi b_0^\top \mathbf v),a_1 \sin(\pi b_1^\top \mathbf v), a_1 \cos(\pi b_1^\top \mathbf v),...)$ This creates a kernel of the form: \> $k_\gamma(\mathbf v_1, \mathbf v_2) = \sum_{i=1}^m a_i^2 \cos(\pi b_i^\top (\mathbf v_1 - \mathbf v_2))$
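A minimal sketch of this mapping, added for illustration (the `fourier_features` helper and the `demo_*` values are made up here; as in the code below, the $\pi$ factor is folded into the choice of `bvals`):

```python
def fourier_features(v, avals, bvals):
    # gamma(v): one (sin, cos) pair of projections per row of bvals
    vp = v @ bvals.T
    return np.concatenate([avals * np.sin(vp), avals * np.cos(vp)], axis=-1)

demo_b = random.normal(rng, (256, 3)) * 10.  # 256 random frequencies for 3D points
demo_a = np.ones(256)                        # unit amplitudes a_i
print(fourier_features(np.zeros((5, 3)), demo_a, demo_b).shape)  # (5, 512)
```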
###Code
#@title Train Models
disable_jit = False #@param {type:"boolean"}
live_plot = True #@param {type:"boolean"}
reset_plots = True #@param {type:"boolean"}
#@markdown ##Network Params
lr = 5e-4#@param
batch_size = 1024 #@param
N_samples = 128 #@param
training_steps = 50000#@param
num_layers = 4#@param
layer_width = 256 #@param
stratified_sampling = True #@param {type:"boolean"}
rotate = True #@param {type:"boolean"}
#@markdown ##Encoder Params
embedding_size = 256 #@param
max_posenc_log_scale = 8#@param
#@markdown gaussian_scales should be a list of scales (things like np.arange(...) allowed)
gaussian_scales = [38] #@param
#@markdown
include_no_encoding = True #@param {type:"boolean"}
include_basic = True #@param {type:"boolean"}
include_posenc = False #@param {type:"boolean"}
include_new_posenc = True #@param {type:"boolean"}
include_gaussian = True #@param {type:"boolean"}
config.update('jax_disable_jit', disable_jit)
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
if include_basic:
bvals = np.eye(3)
avals = np.ones((bvals.shape[0]))
enc_dict['basic'] = (avals, bvals)
if include_posenc:
bvals = 2.**np.arange(max_posenc_log_scale)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc'] = (avals, bvals)
if include_new_posenc:
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
if rotate:
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_new'] = (avals, bvals)
if include_gaussian:
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
for scale in gaussian_scales:
enc_dict['gaussian_%.2f' % scale] = (avals, bvals * scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
# plotlosses_model = PlotLosses()
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs = {}
if include_no_encoding:
outputs['no_encoding'] = train_model(lr, training_steps, None, None, stratified_sampling, name='no encodings', plot_groups=plt_groups)
for k in tqdm(enc_dict, leave=False):
outputs[k] = train_model(lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
#@title Plot Results
bar_graph = True #@param {type:"boolean"}
renders_viz = True #@param {type:"boolean"}
names = list(outputs.keys())
xvals = np.arange(len(names))
test_value = np.array([outputs[n]['psnrs'][-1] for n in names])
inds = np.argsort(test_value)
names_sort = [names[i] for i in inds]
if bar_graph:
plt.figure(figsize=(15,5))
plt.bar(xvals, test_value[inds], alpha=.5)
# plt.xticks(xvals, names_sort, rotation=60)
plt.xticks([])
plt.ylim(test_value.min()-1, test_value.max()+1)
plt.title(f'PSNR of rendered view')
plt.table(cellText=[['%.2f' % x for x in test_value[inds].tolist()]],
rowLabels=['PSNR'],
colLabels=names_sort,
loc='bottom',
bbox=[0, -.2, 1, 0.2])
plt.show()
if renders_viz:
print('----------------------------------------')
print(' Test')
print('----------------------------------------')
plt.figure(figsize=(28,6))
for i, p in enumerate(names_sort):
plt.subplot(1,len(names)+1,i+1)
plt.imshow(outputs[p]['val_image'])
plt.title(p)
plt.subplot(1,len(names)+1,len(names)+1)
    plt.imshow(val_images[-1,...]) # ground-truth image for the last validation pose
plt.title('ground truth')
plt.show()
###Output
_____no_output_____
###Markdown
Grid Search
###Code
#@title Train Models
disable_jit = False #@param {type:"boolean"}
live_plot = True #@param {type:"boolean"}
reset_plots = True #@param {type:"boolean"}
#@markdown ##Network Params
lr = 5e-4#@param
batch_size = 1024 #@param
N_samples = 128 #@param
training_steps = 50000#@param
num_layers = 4#@param
layer_width = 256 #@param
stratified_sampling = True #@param {type:"boolean"}
#@markdown ##Encoder Params
embedding_size = 256 #@param
#@markdown gaussian_scales should be a list of scales (things like np.arange(...) allowed)
gaussian_scales = [8,12,14,15,16,17,18,19,20,21,22,23,24,26,28,32] #@param
config.update('jax_disable_jit', disable_jit)
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
for scale in gaussian_scales:
enc_dict['gaussian_%.2f' % scale] = (avals, bvals * scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
# plotlosses_model = PlotLosses()
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs = {}
if include_no_encoding:
outputs['no_encoding'] = train_model(lr, training_steps, None, None, stratified_sampling, name='no encoding', plot_groups=plt_groups)
grid_psnrs = []
for k in tqdm(enc_dict, leave=False):
out = train_model(lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
grid_psnrs.append(out['psnrs'][-1])
plt.plot(gaussian_scales, grid_psnrs)
print('best scale', gaussian_scales[np.argmax(np.array(grid_psnrs))])
###Output
_____no_output_____
###Markdown
Paper Experiments
###Code
live_plot = True
reset_plots = True
training_steps = 50000
lr = 5e-4
lr_no_encoding = 1e-2
lr_basic = 5e-3
batch_size = 1024
N_samples = 128
num_layers = 4
layer_width = 256
stratified_sampling = True
embedding_size = 256
max_posenc_log_scale = 8
gaussian_scale = 26
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
bvals = np.eye(3)
avals = np.ones((bvals.shape[0]))
enc_dict['basic'] = (avals, bvals)
bvals = 2.**np.arange(max_posenc_log_scale+1)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc'] = (avals, bvals)
bvals = 2.**np.arange(max_posenc_log_scale+1)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_rotated'] = (avals, bvals)
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc_new'] = (avals, bvals)
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_new_rotated'] = (avals, bvals)
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
enc_dict[f'gaussian_{gaussian_scale}'] = (avals, bvals * gaussian_scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs_paper = {}
outputs_paper['no_encoding'] = train_model(lr_no_encoding, training_steps, None, None, stratified_sampling, name='no encodings', plot_groups=plt_groups)
for k in tqdm(enc_dict, leave=False):
if 'basic' in k:
exp_lr = lr_basic
else:
exp_lr = lr
outputs_paper[k] = train_model(exp_lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
import imageio
for k in outputs_paper:
psnr = []
num_test = test_poses.shape[0]
state = outputs_paper[k]['state']
avals = outputs_paper[k]['avals']
bvals = outputs_paper[k]['bvals']
for v in range(num_test):
rays = get_rays(H, W, focal, test_poses[v,...])
rng, key = random.split(rng)
rgb, depth, acc = render_fn(state, avals, bvals, key, rays, False)
loss = np.mean(np.square(rgb - test_images[v,...]))
psnr.append(-10. * np.log10(loss))
if v in [1,4,6]:
imageio.imwrite(f'nerf_{k}_{v}.png', rgb)
psnr_mean = np.mean(np.array(psnr))
psnr_std = np.std(np.array(psnr))
print(f' {k}: %.3f, std: %.3f' % (psnr_mean, psnr_std))
###Output
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
###Markdown
Simplified 3D NeRF
###Code
!pip install -q livelossplot
import jax
from jax import random, grad, jit, vmap
from jax.config import config
from jax.lib import xla_bridge
import jax.numpy as np
from jax.experimental import stax
from jax.experimental import optimizers
from livelossplot import PlotLosses
import matplotlib.pyplot as plt
from tqdm.notebook import tqdm as tqdm
import os
import numpy as onp
from jax.lib import xla_bridge
print(f'Using {xla_bridge.get_backend().platform}')
rng = random.PRNGKey(0)
###Output
Using gpu
###Markdown
Load Data
###Code
filename = 'lego_400.npz'
if not os.path.exists(filename):
!gdown --id 108jNfjPITTsTA0lE6Kpg7Ei53BUVL-4n # Lego
data = np.load(filename)
images = data['images']
poses = data['poses']
focal = data['focal']
H, W = images.shape[1:3]
images, val_images, test_images = np.split(images[...,:3], [100,107], axis=0)
poses, val_poses, test_poses = np.split(poses, [100,107], axis=0)
print(val_images.shape, test_images.shape, focal)
plt.imshow(test_images[0,...])
plt.show()
###Output
(7, 400, 400, 3) (13, 400, 400, 3) 555.5555155968841
###Markdown
Rendering Functions
###Code
def get_rays(H, W, focal, c2w):
i, j = np.meshgrid(np.arange(W), np.arange(H), indexing='xy')
dirs = np.stack([(i-W*.5)/focal, -(j-H*.5)/focal, -np.ones_like(i)], -1)
rays_d = np.sum(dirs[..., np.newaxis, :] * c2w[:3,:3], -1)
rays_o = np.broadcast_to(c2w[:3,-1], rays_d.shape)
return np.stack([rays_o, rays_d], 0)
get_rays = jit(get_rays, static_argnums=(0, 1, 2,))
training_rays = np.stack([get_rays(H,W,focal,pose) for pose in poses], 1)
training_data = np.concatenate([training_rays, images[None]])
training_data = np.moveaxis(training_data, 0, -2)
training_data = onp.array(np.reshape(training_data, [-1, 3, 3]))
onp.random.shuffle(training_data)
training_data = np.array(training_data)
def render_rays(apply_fn, params, avals, bvals, key, rays, near, far, N_samples, rand=False, allret=False):
rays_o, rays_d = rays
# Compute 3D query points
z_vals = np.linspace(near, far, N_samples)
if rand:
z_vals += random.uniform(key, shape=list(rays_o.shape[:-1]) + [N_samples]) * (far-near)/N_samples
pts = rays_o[...,None,:] + rays_d[...,None,:] * z_vals[...,:,None]
# Run network
pts_flat = np.reshape(pts, [-1,3])
if avals is not None:
pts_flat = np.concatenate([avals * np.sin(pts_flat @ bvals.T),
avals * np.cos(pts_flat @ bvals.T)], axis=-1)
raw = apply_fn(params, pts_flat)
raw = np.reshape(raw, list(pts.shape[:-1]) + [4])
# Compute opacities and colors
rgb, sigma_a = raw[...,:3], raw[...,3]
sigma_a = jax.nn.relu(sigma_a)
rgb = jax.nn.sigmoid(rgb)
# Do volume rendering
dists = np.concatenate([z_vals[..., 1:] - z_vals[..., :-1], np.broadcast_to([1e10], z_vals[...,:1].shape)], -1)
alpha = 1.-np.exp(-sigma_a * dists)
trans = np.minimum(1., 1.-alpha + 1e-10)
trans = np.concatenate([np.ones_like(trans[...,:1]), trans[...,:-1]], -1)
weights = alpha * np.cumprod(trans, -1)
rgb_map = np.sum(weights[...,None] * rgb, -2)
acc_map = np.sum(weights, -1)
  if False:  # optionally composite the rendering onto a white background
rgb_map = rgb_map + (1.-acc_map[..., None])
if not allret:
return rgb_map
depth_map = np.sum(weights * z_vals, -1)
return rgb_map, depth_map, acc_map
def render_fn_inner(params, avals, bvals, key, rays, rand, allret):
return render_rays(apply_fn, params, avals, bvals, key, rays, near=2., far=6., N_samples=N_samples, rand=rand, allret=allret)
render_fn_inner = jit(render_fn_inner, static_argnums=(5, 6,))
def render_fn(params, avals, bvals, key, rays, rand):
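  # Render the rays in chunks (slices along axis 1) so each jitted call stays small in memory.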
chunk = 5
for i in range(0, rays.shape[1], chunk):
out = render_fn_inner(params, avals, bvals, key, rays[:,i:i+chunk], rand, True)
if i==0:
rets = out
else:
rets = [np.concatenate([a, b], 0) for a, b in zip(rets, out)]
return rets
###Output
_____no_output_____
###Markdown
Network Definition
###Code
def make_network(num_layers, num_channels):
layers = []
for i in range(num_layers-1):
layers.append(stax.Dense(num_channels))
layers.append(stax.Relu)
layers.append(stax.Dense(4))
return stax.serial(*layers)
def loss_fn(params, avals, bvals, key, rays, target, stratified):
rgb = render_fn_inner(params, avals, bvals, key, rays, stratified, False)
l = np.mean(np.square(rgb - target))
return l
def train_model(lr, iters, avals, bvals, stratified, name='', plot_groups=None):
rng = random.PRNGKey(0)
if bvals is not None:
init_shape = (-1, bvals.shape[0]*2)
else:
init_shape = (-1, 3)
_, net_params = init_fn(rng, init_shape)
opt_init, opt_update, get_params = optimizers.adam(lr)
opt_state = opt_init(net_params)
@jit
def step_fn(i, opt_state, avals, bvals, key, rays, target):
params = get_params(opt_state)
g = grad(loss_fn)(params, avals, bvals, key, rays, target, stratified)
return opt_update(i, g, opt_state)
if plot_groups is not None:
plot_groups['PSNR'].append(f'{name}')
b_i = 0
xs = []
psnrs = []
import time
t = time.time()
t0 = t
for i in range(iters+1):
batch = training_data[b_i:b_i+batch_size]
b_i += batch_size
rays = np.moveaxis(batch[:,:2], 1, 0)
target = batch[:,2]
if b_i >= training_data.shape[0]:
b_i = 0
rng, key = random.split(rng)
opt_state = step_fn(i, opt_state, avals, bvals, key, rays, target)
if i%1000==0 or i==iters:
psnr = []
print(i, (time.time() - t) / 200, 'secs per iter', (time.time()-t0)/60., 'total mins')
num_vals = val_poses.shape[0] if i==iters else 1
for v in range(num_vals):
# Render the holdout view for logging
rays = get_rays(H, W, focal, val_poses[v,...])
rng, key = random.split(rng)
rgb, depth, acc = render_fn(get_params(opt_state), avals, bvals, key, rays, False)
loss = np.mean(np.square(rgb - val_images[v,...]))
psnr.append(-10. * np.log10(loss))
psnr = np.mean(np.array(psnr))
psnrs.append(psnr)
xs.append(i)
if plot_groups is not None:
plotlosses_model.update({f'{name}':psnr}, current_step=i)
plotlosses_model.send()
t = time.time()
results = {
'state': get_params(opt_state),
'psnrs': psnrs,
'avals': avals,
'bvals': bvals,
'val_image': rgb,
'xs': xs
}
return results
###Output
_____no_output_____
###Markdown
Train network with different embeddings

Our 3D input embedding is of the form:

> $\gamma(\mathbf v) = (a_0 \sin(\pi b_0^\top \mathbf v), a_0 \cos(\pi b_0^\top \mathbf v), a_1 \sin(\pi b_1^\top \mathbf v), a_1 \cos(\pi b_1^\top \mathbf v), \ldots)$

This creates a kernel of the form:

> $k_\gamma(\mathbf v_1, \mathbf v_2) = \sum_{i=1}^m a_i^2 \cos(\pi b_i^\top (\mathbf v_1 - \mathbf v_2))$
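As a concrete illustration, here is a minimal sketch (added for exposition, with illustrative names) of this mapping and its induced kernel using `jax.numpy`; it mirrors the `avals`/`bvals` encoding applied in `render_rays` above, where the π factor of the formula is absorbed into `bvals`:

```python
import jax.numpy as np

def fourier_encode(v, avals, bvals):
    """Map points v (..., 3) to features (..., 2m), given amplitudes avals (m,) and frequencies bvals (m, 3)."""
    vb = v @ bvals.T  # (..., m) projections b_i^T v
    return np.concatenate([avals * np.sin(vb), avals * np.cos(vb)], axis=-1)

def fourier_kernel(v1, v2, avals, bvals):
    """Stationary kernel induced by the encoding; it depends only on the difference v1 - v2."""
    return np.sum(avals**2 * np.cos((v1 - v2) @ bvals.T), axis=-1)

# Sanity check: the dot product of two encodings equals the kernel value, e.g.
# np.dot(fourier_encode(v1, a, b), fourier_encode(v2, a, b)) == fourier_kernel(v1, v2, a, b)
```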
###Code
#@title Train Models
disable_jit = False #@param {type:"boolean"}
live_plot = True #@param {type:"boolean"}
reset_plots = True #@param {type:"boolean"}
#@markdown ##Network Params
lr = 5e-4#@param
batch_size = 1024 #@param
N_samples = 128 #@param
training_steps = 50000#@param
num_layers = 4#@param
layer_width = 256 #@param
stratified_sampling = True #@param {type:"boolean"}
rotate = True #@param {type:"boolean"}
#@markdown ##Encoder Params
embedding_size = 256 #@param
max_posenc_log_scale = 8#@param
#@markdown gaussian_scales should be a list of scales (things like np.arange(...) allowed)
gaussian_scales = [38] #@param
#@markdown
include_no_encoding = True #@param {type:"boolean"}
include_basic = True #@param {type:"boolean"}
include_posenc = False #@param {type:"boolean"}
include_new_posenc = True #@param {type:"boolean"}
include_gaussian = True #@param {type:"boolean"}
config.update('jax_disable_jit', disable_jit)
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
if include_basic:
bvals = np.eye(3)
avals = np.ones((bvals.shape[0]))
enc_dict['basic'] = (avals, bvals)
if include_posenc:
bvals = 2.**np.arange(max_posenc_log_scale)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc'] = (avals, bvals)
if include_new_posenc:
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
if rotate:
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_new'] = (avals, bvals)
if include_gaussian:
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
for scale in gaussian_scales:
enc_dict['gaussian_%.2f' % scale] = (avals, bvals * scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
# plotlosses_model = PlotLosses()
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs = {}
if include_no_encoding:
outputs['no_encoding'] = train_model(lr, training_steps, None, None, stratified_sampling, name='no encodings', plot_groups=plt_groups)
for k in tqdm(enc_dict, leave=False):
outputs[k] = train_model(lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
#@title Plot Results
bar_graph = True #@param {type:"boolean"}
renders_viz = True #@param {type:"boolean"}
names = list(outputs.keys())
xvals = np.arange(len(names))
test_value = np.array([outputs[n]['psnrs'][-1] for n in names])
inds = np.argsort(test_value)
names_sort = [names[i] for i in inds]
if bar_graph:
plt.figure(figsize=(15,5))
plt.bar(xvals, test_value[inds], alpha=.5)
# plt.xticks(xvals, names_sort, rotation=60)
plt.xticks([])
plt.ylim(test_value.min()-1, test_value.max()+1)
plt.title(f'PSNR of rendered view')
plt.table(cellText=[['%.2f' % x for x in test_value[inds].tolist()]],
rowLabels=['PSNR'],
colLabels=names_sort,
loc='bottom',
bbox=[0, -.2, 1, 0.2])
plt.show()
if renders_viz:
print('----------------------------------------')
print(' Test')
print('----------------------------------------')
plt.figure(figsize=(28,6))
for i, p in enumerate(names_sort):
plt.subplot(1,len(names)+1,i+1)
plt.imshow(outputs[p]['val_image'])
plt.title(p)
plt.subplot(1,len(names)+1,len(names)+1)
    plt.imshow(val_images[-1])  # ground-truth validation view corresponding to the stored 'val_image' renders
plt.title('ground truth')
plt.show()
###Output
_____no_output_____
###Markdown
Grid Search
###Code
#@title Train Models
disable_jit = False #@param {type:"boolean"}
live_plot = True #@param {type:"boolean"}
reset_plots = True #@param {type:"boolean"}
#@markdown ##Network Params
lr = 5e-4#@param
batch_size = 1024 #@param
N_samples = 128 #@param
training_steps = 50000#@param
num_layers = 4#@param
layer_width = 256 #@param
stratified_sampling = True #@param {type:"boolean"}
#@markdown ##Encoder Params
embedding_size = 256 #@param
#@markdown gaussian_scales should be a list of scales (things like np.arange(...) allowed)
gaussian_scales = [8,12,14,15,16,17,18,19,20,21,22,23,24,26,28,32] #@param
config.update('jax_disable_jit', disable_jit)
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
for scale in gaussian_scales:
enc_dict['gaussian_%.2f' % scale] = (avals, bvals * scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
# plotlosses_model = PlotLosses()
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs = {}
if include_no_encoding:
outputs['no_encoding'] = train_model(lr, training_steps, None, None, stratified_sampling, name='no encoding', plot_groups=plt_groups)
grid_psnrs = []
for k in tqdm(enc_dict, leave=False):
out = train_model(lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
grid_psnrs.append(out['psnrs'][-1])
plt.plot(gaussian_scales, grid_psnrs)
print('best scale', gaussian_scales[np.argmax(np.array(grid_psnrs))])
###Output
_____no_output_____
###Markdown
Paper Experiments
###Code
live_plot = True
reset_plots = True
training_steps = 50000
lr = 5e-4
lr_no_encoding = 1e-2
lr_basic = 5e-3
batch_size = 1024
N_samples = 128
num_layers = 4
layer_width = 256
stratified_sampling = True
embedding_size = 256
max_posenc_log_scale = 8
gaussian_scale = 26
init_fn, apply_fn = make_network(num_layers, layer_width)
enc_dict = {}
bvals = np.eye(3)
avals = np.ones((bvals.shape[0]))
enc_dict['basic'] = (avals, bvals)
bvals = 2.**np.arange(max_posenc_log_scale+1)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc'] = (avals, bvals)
bvals = 2.**np.arange(max_posenc_log_scale+1)
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_rotated'] = (avals, bvals)
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
avals = np.ones((bvals.shape[0]))
enc_dict['posenc_new'] = (avals, bvals)
bvals = 2.**np.linspace(0,max_posenc_log_scale,embedding_size//3) - 1
bvals = np.reshape(np.eye(3)*bvals[:,None,None], [len(bvals)*3, 3])
rot = np.array([[(2**.5)/2,-(2**.5)/2,0],[(2**.5)/2,(2**.5)/2,0],[0,0,1]])
bvals = bvals @ rot.T
rot = np.array([[1,0,0],[0,(2**.5)/2,-(2**.5)/2],[0,(2**.5)/2,(2**.5)/2]])
bvals = bvals @ rot.T
enc_dict['posenc_new_rotated'] = (avals, bvals)
bvals = random.normal(rng, (embedding_size, 3))
avals = np.ones((bvals.shape[0]))
enc_dict[f'gaussian_{gaussian_scale}'] = (avals, bvals * gaussian_scale)
if live_plot:
if reset_plots:
plt_groups = {'PSNR':[]}
plotlosses_model = PlotLosses(groups=plt_groups)
else:
plt_groups = None
if reset_plots:
outputs_paper = {}
outputs_paper['no_encoding'] = train_model(lr_no_encoding, training_steps, None, None, stratified_sampling, name='no encodings', plot_groups=plt_groups)
for k in tqdm(enc_dict, leave=False):
if 'basic' in k:
exp_lr = lr_basic
else:
exp_lr = lr
outputs_paper[k] = train_model(exp_lr, training_steps, *enc_dict[k], stratified_sampling, name=k, plot_groups=plt_groups)
import imageio
for k in outputs_paper:
psnr = []
num_test = test_poses.shape[0]
state = outputs_paper[k]['state']
avals = outputs_paper[k]['avals']
bvals = outputs_paper[k]['bvals']
for v in range(num_test):
rays = get_rays(H, W, focal, test_poses[v,...])
rng, key = random.split(rng)
rgb, depth, acc = render_fn(state, avals, bvals, key, rays, False)
loss = np.mean(np.square(rgb - test_images[v,...]))
psnr.append(-10. * np.log10(loss))
if v in [1,4,6]:
imageio.imwrite(f'nerf_{k}_{v}.png', rgb)
psnr_mean = np.mean(np.array(psnr))
psnr_std = np.std(np.array(psnr))
print(f' {k}: %.3f, std: %.3f' % (psnr_mean, psnr_std))
###Output
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
WARNING:imageio:Lossy conversion from float32 to uint8. Range [0, 1]. Convert image to uint8 prior to saving to suppress this warning.
|
traffic.ipynb | ###Markdown
Traffic

https://developer.here.com/documentation/traffic/topics/what-is.html
###Code
from datetime import datetime
from ipyrest import Api
from utils import latlon_for_address
from credentials import APP_ID, APP_CODE
start = datetime.utcnow().isoformat()
start
###Output
_____no_output_____
###Markdown
Traffic Incidents within a Bounding Box or via Proximity
###Code
url = 'https://traffic.api.here.com/traffic/6.0/incidents.json'
lat, lon = 52.5311, 13.3644
params = dict(
# bbox='52.5311,13.3644;52.5114,13.4035',
prox=f'{lat},{lon},1000',
criticality='minor',
app_id=APP_ID,
app_code=APP_CODE,
)
api = Api(url, params=params)
api
from ipywidgets import HTML
from ipyleaflet import Map, Polyline, Popup
m = Map(center=(lat, lon), zoom=13)
m
obj = api.resp.json()
obj['TRAFFICITEMS']['TRAFFICITEM'][0]['LOCATION']['GEOLOC']
for item in obj['TRAFFICITEMS']['TRAFFICITEM']:
geoloc = item['LOCATION']['GEOLOC']
typ = item['TRAFFICITEMTYPEDESC']
desc = item['RDSTMCLOCATIONS']['RDSTMC'][0]['ALERTC']['DESCRIPTION']
origin = geoloc['ORIGIN']
o_latlon = origin['LATITUDE'], origin['LONGITUDE']
to = geoloc['TO'][0]
t_latlon = to['LATITUDE'], to['LONGITUDE']
m += Popup(location=o_latlon, child=HTML(value=f'{typ}: {desc}'))
m += Polyline(locations=[o_latlon, t_latlon], color='red', fill=False)
print(o_latlon, t_latlon, f'{typ}: {desc}')
###Output
_____no_output_____
###Markdown
Remove outliers.
###Code
#mat[176][724] = None
#mat[176][726] = None
#mat[176][678] = None
for i in range(len(counters)):
cmat[120][i] = None
cmat[124][i] = None
cmat[177][i] = None
cmat[287][i] = None
cmat[329][i] = None
cmat[387][i] = None
cmat[409][i] = None
cmat[463][i] = None
for i in range(550,609):
cmat[176][i] = None
for i in range(540,609):
cmat[238][i] = None
for i in range(540,609):
cmat[239][i] = None
for i in range(540,609):
cmat[240][i] = None
for i in range(90, 789):
cmat[462][i] = None
for i in range(90,339):
cmat[488][i] = None
a = pandas.DataFrame(amat)
c = pandas.DataFrame(cmat)
tm = [datetime.datetime.strptime(row["@timestamp"], "%Y-%m-%dT%H:%M:%S.%f") for row in data]
t = pandas.Series(tm)
###Output
_____no_output_____
###Markdown
A naive plot
###Code
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
x=plt.plot(t,c)
c2=c.fillna(method="ffill").fillna(0).diff()
for i in range(c2.shape[0]):
for j in range(c2.shape[1]):
if c2.iloc[i,j] < 0:
print(i,j,c2.iloc[i,j])
###Output
109 90 -239.0
109 92 -210.0
109 95 -72154.0
109 96 -83902.0
109 98 -1.0
178 540 -4799479.0
178 542 -3641394.0
178 543 -2637.0
178 545 -6079237431.0
178 546 -634842520.0
178 550 -269217.0
178 552 -461293.0
178 555 -51297945.0
178 556 -550300171.0
178 558 -13091.0
178 590 -2813371.0
178 592 -3647929.0
178 595 -501334859.0
178 596 -4641986674.0
178 598 -13355.0
178 600 -612985.0
178 602 -750822.0
178 605 -88386251.0
178 606 -888854074.0
178 608 -8545.0
241 540 -2515461.0
241 542 -2093180.0
241 543 -447.0
241 545 -2378782405.0
241 546 -683006421.0
241 550 -537350.0
241 552 -678602.0
241 555 -182004326.0
241 556 -660569534.0
241 558 -9876.0
241 590 -1383846.0
241 592 -1619606.0
241 595 -447639406.0
241 596 -1506714963.0
241 598 -14177.0
241 600 -223309.0
241 602 -290710.0
241 605 -58949433.0
241 606 -224584514.0
241 608 -11150.0
364 450 -811521.0
364 452 -620352.0
364 455 -879416187.0
364 456 -274964241.0
364 460 -44388.0
364 462 -59029.0
364 465 -15060086.0
364 466 -41246238.0
364 468 -15642.0
364 500 -552569.0
364 502 -712326.0
364 505 -222312195.0
364 506 -826583845.0
364 508 -17128.0
364 510 -38308.0
364 512 -35477.0
364 515 -34505706.0
364 516 -13582003.0
364 518 -20717.0
385 450 -142526.0
385 452 -115973.0
385 455 -156711480.0
385 456 -57620270.0
385 460 -88291.0
385 462 -105576.0
385 465 -52919497.0
385 466 -119166830.0
385 468 -461.0
385 500 -26687.0
385 502 -34508.0
385 505 -4014078.0
385 506 -36640789.0
385 508 -368.0
385 510 -309.0
385 512 -1457.0
385 515 -36579.0
385 516 -287339.0
385 518 -241.0
393 810 -1417.0
393 812 -1304.0
393 815 -643089.0
393 816 -281699.0
393 820 -583.0
393 822 -663.0
393 825 -78586.0
393 826 -337084.0
393 828 -78.0
393 860 -655.0
393 862 -594.0
393 865 -159533.0
393 866 -275876.0
393 868 -32.0
396 190 -61496.0
396 192 -88234.0
396 195 -36126694.0
396 196 -49914069.0
396 198 -99334.0
396 230 -2999405.0
396 232 -3730036.0
396 235 -770282445.0
396 236 -4097267048.0
396 238 -4400.0
396 240 -980398.0
396 242 -1473614.0
396 245 -124459193.0
396 246 -1684805660.0
396 248 -8118.0
397 180 -5158092.0
397 182 -3922207.0
397 185 -5817601553.0
397 186 -928198997.0
397 188 -5.0
397 360 -858114.0
397 362 -684320.0
397 365 -857002173.0
397 366 -288434633.0
397 368 -5.0
397 370 -271561.0
397 372 -332464.0
397 375 -107351012.0
397 376 -361403701.0
397 378 -7679.0
397 410 -351216.0
397 412 -446232.0
397 415 -158886333.0
397 416 -448875168.0
397 418 -8703.0
397 420 -39731.0
397 422 -40800.0
397 425 -8098122.0
397 426 -34703004.0
397 428 -15297.0
398 270 -803213.0
398 272 -737151.0
398 275 -851825355.0
398 276 -206514022.0
398 278 -4.0
398 280 -54427.0
398 282 -47503.0
398 285 -46965718.0
398 286 -30288636.0
398 288 -9011.0
398 320 -590646.0
398 322 -640376.0
398 325 -133230972.0
398 326 -751930601.0
398 328 -3308.0
398 330 -63501.0
398 332 -72746.0
398 335 -11398153.0
398 336 -56788513.0
398 338 -7079.0
402 450 -114476.0
402 452 -110438.0
402 455 -83625920.0
402 456 -64490212.0
402 458 -1.0
402 460 -20710.0
402 462 -19380.0
402 465 -18281533.0
402 466 -8757575.0
402 468 -1090.0
402 500 -94107.0
402 502 -98853.0
402 505 -46963852.0
402 506 -80470681.0
402 508 -1000.0
402 510 -390.0
402 512 -424.0
402 515 -67089.0
402 516 -779067.0
402 518 -3955.0
408 270 -249415.0
408 272 -233503.0
408 275 -224638061.0
408 276 -135968059.0
408 280 -59000.0
408 282 -80213.0
408 285 -21432340.0
408 286 -90392167.0
408 288 -1721.0
408 320 -118211.0
408 322 -114663.0
408 325 -71709330.0
408 326 -85340151.0
408 328 -686.0
408 330 -58966.0
408 332 -58155.0
408 335 -42709848.0
408 336 -48864224.0
408 338 -2967.0
412 360 -798393.0
412 362 -586252.0
412 363 -387.0
412 365 -967453488.0
412 366 -90989617.0
412 370 -220856.0
412 372 -345809.0
412 375 -25795549.0
412 376 -476909038.0
412 378 -1315.0
412 410 -349730.0
412 412 -432552.0
412 415 -60492834.0
412 416 -467162187.0
412 418 -1083.0
412 420 -26515.0
412 422 -33615.0
412 425 -6117958.0
412 426 -28022646.0
412 428 -4038.0
416 540 -70241.0
416 542 -53626.0
416 545 -56359859.0
416 546 -17293944.0
416 548 -7.0
416 550 -4936.0
416 552 -6673.0
416 555 -784009.0
416 556 -6064485.0
416 558 -444.0
416 590 -1123.0
416 592 -8844.0
416 595 -4729795.0
416 596 -8911342.0
416 598 -713.0
416 600 -1026.0
416 602 -836.0
416 605 -189639.0
416 606 -574935.0
416 608 -2258.0
417 270 -386810.0
417 272 -287713.0
417 275 -410781311.0
417 276 -102984634.0
417 278 -1.0
417 280 -70338.0
417 282 -86419.0
417 285 -26788820.0
417 286 -89241592.0
417 288 -2456.0
417 320 -205156.0
417 322 -264668.0
417 325 -72061488.0
417 326 -261403647.0
417 328 -1383.0
417 330 -26214.0
417 332 -57170.0
417 335 -5094131.0
417 336 -61666816.0
417 338 -1789.0
418 556 -11697.0
418 558 -222.0
418 592 -7470.0
418 596 -27963213.0
418 598 -5.0
418 602 -285.0
433 820 -462002.0
433 822 -630110.0
433 825 -82984870.0
433 826 -754291802.0
433 828 -559.0
433 860 -1186426.0
433 862 -2686905.0
433 865 -101182663.0
433 866 -3942105674.0
433 868 -916.0
434 810 -3223884.0
434 812 -1586542.0
434 815 -4561408774.0
434 816 -180415325.0
442 630 -7203923.0
442 632 -6233460.0
442 633 -156.0
442 635 -7312212993.0
442 636 -2794174519.0
442 638 -3.0
442 640 -631424.0
442 642 -809419.0
442 645 -297026052.0
442 646 -817680157.0
442 648 -35424.0
442 680 -5075370.0
442 682 -5645750.0
442 685 -2357538313.0
442 686 -5716428543.0
442 688 -22965.0
442 690 -574293.0
442 692 -794447.0
442 695 -138741093.0
442 696 -779329809.0
442 698 -19743.0
445 640 -2418.0
445 642 -3446.0
445 645 -227067.0
445 646 -3112370.0
445 648 -342.0
445 680 -3910.0
445 682 -4640.0
445 685 -628563.0
445 686 -4703564.0
445 688 -175.0
445 690 -3773.0
445 692 -4970.0
445 695 -322343.0
445 696 -6030361.0
445 698 -37.0
464 90 -4064392.0
464 92 -3402103.0
464 95 -3994992658.0
464 96 -1558747071.0
464 98 -3.0
464 100 -209165.0
464 102 -278691.0
464 105 -34418444.0
464 106 -269611501.0
464 108 -9227.0
464 140 -2255999.0
464 142 -2681114.0
464 145 -1272968869.0
464 146 -2466904139.0
464 148 -7845.0
464 150 -929211.0
464 152 -1096438.0
464 155 -238365765.0
464 156 -1257770043.0
464 158 -7549.0
464 720 -6230944.0
464 722 -4539418.0
464 723 -950.0
464 725 -6291866890.0
464 726 -1331719458.0
464 728 -3.0
464 730 -594200.0
464 732 -752735.0
464 735 -138634752.0
464 736 -669381172.0
464 738 -44856.0
464 770 -3278021.0
464 772 -4549431.0
464 775 -1063722599.0
464 776 -4593174253.0
464 778 -38881.0
464 780 -764135.0
464 782 -1031162.0
464 785 -131919686.0
464 786 -1049946295.0
464 788 -46662.0
468 90 -53309.0
468 92 -31293.0
468 95 -60859717.0
468 96 -2869889.0
468 98 -2.0
468 100 -2505.0
468 102 -2424.0
468 105 -597249.0
468 106 -1020668.0
468 108 -909.0
468 140 -30817.0
468 142 -52824.0
468 145 -4909292.0
468 146 -64430318.0
468 148 -344.0
468 150 -1987.0
468 152 -1394.0
468 158 -2306.0
489 270 -2486864.0
489 272 -2423128.0
489 275 -2736898921.0
489 276 -1141771384.0
489 280 -303513.0
489 282 -234752.0
489 285 -211501836.0
489 286 -107160757.0
489 288 -18546.0
489 320 -1712843.0
489 322 -1576945.0
489 325 -780546480.0
489 326 -1771334564.0
489 328 -8949.0
489 330 -470720.0
489 332 -777009.0
489 335 -153742417.0
489 336 -865765608.0
489 338 -6336.0
###Markdown
Adjust the counters and plot

Counter values can wrap around, or reset when a device reboots, so the raw numbers occasionally jump backwards. Roughly speaking, it is enough to clamp the negative differences to 0.
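A minimal standalone sketch of the idea (toy numbers, not taken from this dataset):

```python
import pandas

counter = pandas.Series([100, 250, 400, 30, 180])  # the counter resets between samples 3 and 4
delta = counter.diff()        # the reset shows up as a large negative step
delta[delta < 0] = 0          # clamp negative steps to zero, as done in the cell below
print(delta.cumsum())         # a monotone cumulative count again
```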
###Code
c2[c2<0] = 0
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
x=plt.plot(t, c2.iloc[:,wlan_io].cumsum())
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
x=plt.plot(t, c2.iloc[:,wlan_io]/300*8)
###Output
_____no_output_____
###Markdown
Aggregate per device (total of upstream and downstream)
###Code
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
c3 = pandas.DataFrame([c2.iloc[:,wlan_io[16*i:16*(i+1)]].sum(axis=1).cumsum() for i in range(10)])
x = plt.plot(tm, c3.T)
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
c3 = pandas.DataFrame([c2.iloc[:,wlan_io[16*i:16*(i+1)]].sum(axis=1)/300*8 for i in range(10)])
x = plt.plot(tm, c3.T)
###Output
_____no_output_____
###Markdown
For a Minecraft workshop, budgeting around 10 Mbps seems about right.

Upstream vs. downstream

Downstream is, unsurprisingly, dominant. Interestingly, though, there are time windows where upstream and downstream are roughly the same.
###Code
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
c4 = pandas.concat([c2.iloc[:,wlan_io[0:160:2]].sum(axis=1), c2.iloc[:,wlan_io[1:160:2]].sum(axis=1)], axis=1)
x=plt.plot(tm, c4.cumsum())
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
c4 = pandas.concat([c2.iloc[:,wlan_io[0:160:2]].sum(axis=1), c2.iloc[:,wlan_io[1:160:2]].sum(axis=1)], axis=1)
x=plt.plot(tm, c4/300*8)
###Output
_____no_output_____
###Markdown
Grand total
###Code
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
x=plt.plot(t, c2.iloc[:,wlan_io].cumsum().sum(axis=1))
"{:,}".format(list(c2.iloc[:,wlan_io].cumsum().sum(axis=1))[-1])
###Output
_____no_output_____
###Markdown
On top of this, about 4.5 GB was used by traffic not counted here, so the total comes to roughly 83 GB. As expected, the second day was quieter.
###Code
plt.figure(figsize=(15,8))
plt.gca().yaxis.set_major_formatter(mtick.FormatStrFormatter('%.0f'))
x=plt.plot(t, c2.iloc[:,wlan_io].cumsum().sum(axis=1).diff()/300*8)
###Output
_____no_output_____
###Markdown
The population consists of traffics. A traffic consists of periods. The periods contain trolleybuses. Each trolleybus entry records the number of cars and the number of people.
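Concretely, one individual (a `Traffic`) is encoded as a `days × routes × 3` array of `[route id, cars, people]` triples, which is what the `toarray()` method below returns. A minimal sketch of that layout (toy numbers for illustration only):

```python
import numpy as np

ROUTES = (7, 10, 11, 12, 16, 24, 25, 29, 32)
days = 2
# One individual: for every day and every route, a [route id, number of cars, number of people] triple.
individual = np.array([[[r, 1, 1500] for r in ROUTES] for _ in range(days)])
print(individual.shape)  # (2, 9, 3) -> (days, routes, [id, cars, people])
```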
###Code
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
ROUTES = (7,10,11,12,16,24,25,29,32)
DAYS = 31
POPULATION_COUNT = 500
GENERATIONS = 150
class BKM321:
seat_places = 26
capacity = 115
count = 101
class Trolleybus:
def __init__(self,id,cars=1,people=1500,model=BKM321):
self.id = id
self.car = model()
self.cars = cars
self.people = people
class Traffic:
def __init__(self,tbuses,days=5,model=BKM321, random=True):
self.tbuses = []
self.days = days
for d in range(self.days):
self.tbuses.append([])
if random:
for t in tbuses:
cars = np.random.randint(1,model.count/len(ROUTES)+1)
people = np.random.randint(1,cars*model.capacity+1)
self.tbuses[d].append(Trolleybus(id=t,cars=cars,people=people))
def tolist(self):
return [[[t.id,t.cars,t.people] for t in period] for period in self.tbuses]
def toarray(self):
return np.array(self.tolist())
@staticmethod
def create(nparray,days):
traf_array = Traffic(ROUTES,days,random=False)
for d in range(len(nparray)):
for c in range(len(nparray[d])):
t = Trolleybus(id=nparray[d][c][0],cars=nparray[d][c][1],people=nparray[d][c][2])
traf_array.tbuses[d].append(t)
return traf_array
@staticmethod
def to_traffics(arrays,days):
for i in range(len(arrays)):
arrays[i] = Traffic.create(arrays[i],days)
return arrays
def __str__(self):
r = []
for d in range(len(self.tbuses)):
r.append(f"Day {d+1}: ")
for t in self.tbuses[d]:
r.append(f"{t.id} {t.cars} {t.people} | ")
r.append("\n")
return "".join(r)
def __repr__(self):
return self.__str__()
def generate_random_traffic(days, model = BKM321):
traffic = Traffic(tbuses = ROUTES, days = days, model = model)
return traffic
def population(n, days = 7):
population = []
for _ in range(n):
population.append(generate_random_traffic(days))
return population
def cost(traffic, traffic_goal):
errors = traffic - traffic_goal
cost = 0
for i in errors:
for j in i:
cost += abs(j[1] * j[2])
return cost
def get_population(generation):
inds = dict()
for i in range(len(generation)):
for j in range(len(generation[i].tbuses)):
for k in range(len(generation[i].tbuses[j])):
t = get_tbus(generation,i,j,k)
key = (i,j,k)
value = (t.cars,t.people)
inds[key] = value
return inds
def sort_population(individuals):
return sorted(individuals, key=lambda k: (individuals[k][0], individuals[k][1]))
def get_tbus(generation,i,j,k):
return generation[i].tbuses[j][k]
def select_best(generation, goal):
best = {}
for g in range(len(generation)):
best[g] = cost(generation[g].toarray(),goal)
_best = sorted(best, key=lambda k: best[k])
best = []
for b in _best[:POPULATION_COUNT//2]:
best.append(generation[b])
return best
def crossover(parents, offspring_count):
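    # Uniform crossover: every element of the offspring array is copied from
    # parent1 or parent2 according to a random 0/1 mask of the same shape.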
parents_count = len(parents)
offsprings = []
for i in range(offspring_count):
for j in range(parents_count):
parent1 = parents[np.random.randint(0, parents_count)].toarray()
parent2 = parents[np.random.randint(0, parents_count)].toarray()
parent1_mask = np.random.randint(0, 2, size = parent1.shape)
parent2_mask = np.logical_not(parent1_mask)
offspring = np.add(np.multiply(parent1, parent1_mask), np.multiply(parent2, parent2_mask))
offspring = np.array(offspring)
offsprings.append(Traffic.create(offspring,DAYS))
return offsprings
def mutation(individual, mutations_count):
size1 = individual.shape[0]
size2 = individual.shape[1]
for i in range(mutations_count):
day = np.random.randint(0, size1)
tbus = np.random.randint(0, size2)
#Cars
d = np.random.choice((-1,1))
individual[day,tbus,1] += d
#People
d = np.random.choice(np.arange(-5,6))
individual[day,tbus,2] += d
return individual
def mutate(offspring):
for i in range(0,len(offspring),2):
offspring[i] = mutation(offspring[i].toarray(), 2)
offspring[i] = Traffic.create(offspring[i],DAYS)
return offspring
def set_goal(days, goal_cars = 1, goal_people = 1500):
return np.array([[[r,goal_cars,goal_people] for r in ROUTES] for _ in range(days)])
def genetic_algorithm(generation, goal):
best = select_best(generation, goal)
offsprings = crossover(best, 2)
mutants = mutate(offsprings)
return mutants
goal = set_goal(days=DAYS,goal_cars=1,goal_people=1500)
generation = population(POPULATION_COUNT,DAYS)
_generation = generation
_fcost = 0
acc_list = []
for g in range(GENERATIONS):
generation = genetic_algorithm(generation,goal)
#Accuracy
_cost = cost(select_best(generation,goal)[0].toarray(),goal)
if _fcost == 0:
_fcost = _cost
acc = abs(round((1 - _cost / _fcost)*100,2))
acc_list.append(acc)
print(f"Generation's {g} cost: {_cost}, acc: {acc}%")
THE_BEST = select_best(generation,goal)
pd.DataFrame(THE_BEST[0].toarray().reshape(-1,3)[:DAYS], columns=("ID","Cars","People"))
plt.plot(acc_list)
###Output
_____no_output_____
###Markdown
Traffic Data ETL DemoThis is a demo ETL process that extracts traffic data stored as CSV in S3, transforms it, and loads it into Redshift. Boilerplate
###Code
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
glueContext = GlueContext(SparkContext.getOrCreate())
###Output
_____no_output_____
###Markdown
Load the data source
###Code
traffic = glueContext.create_dynamic_frame.from_catalog(
database = "traffic",
table_name = "traffic"
)
###Output
_____no_output_____
###Markdown
Drop unwanted fields
###Code
traffic = traffic.drop_fields([
'location',
'boundaries - zip codes',
'community areas',
'census tracts',
'wards',
':@computed_region_awaf_s7ux'
])
###Output
_____no_output_____
###Markdown
Map remaining fields
###Code
def reformat_date(record):
"""Change 'date of count' format from mm/dd/yyyy to yyyy-mm-dd."""
date = record['date of count']
month, day, year = date.split('/')
record['date of count'] = year + '-' + month + '-' + day
return record
def split_vehicle_volume(record):
"""Split 'vehicle volume by each direction of traffic' into
separate fields for each direction.
"""
volumes = record['vehicle volume by each direction of traffic']
direction_volumes = volumes.split('/')
for direction_volume in direction_volumes:
if ':' not in direction_volume:
continue
direction, volume = direction_volume.split(':')
volume = int(volume.strip())
direction = direction.lower()
if 'north' in direction:
record['vehicle volume north'] = volume
elif 'south' in direction:
record['vehicle volume south'] = volume
elif 'west' in direction:
record['vehicle volume west'] = volume
elif 'east' in direction:
record['vehicle volume east'] = volume
return record
# Convert date format and split vehicle volume field by direction
traffic = traffic.map(reformat_date).map(split_vehicle_volume)
# Map field names and types
traffic = traffic.apply_mapping([
('id', 'long', 'id', 'long'),
('traffic volume count location address', 'string', 'address', 'string'),
('street', 'string', 'street', 'string'),
('date of count', 'string', 'date_of_count', 'date'),
('total passing vehicle volume', 'long', 'total_passing_vehicle_volume', 'long'),
('vehicle volume north', 'int', 'vehicle_volume_north', 'long'),
('vehicle volume south', 'int', 'vehicle_volume_south', 'long'),
('vehicle volume west', 'int', 'vehicle_volume_west', 'long'),
('vehicle volume east', 'int', 'vehicle_volume_east', 'long'),
('latitude', 'double', 'latitude', 'double'),
('longitude', 'double', 'longitude', 'double'),
('zip codes', 'long', 'zip_codes', 'long'),
])
###Output
_____no_output_____
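For illustration, the two helper functions above can also be exercised on a plain Python dict (the record below is hypothetical; the exact text format of the direction field in the source data is an assumption):

```python
sample = {
    'date of count': '03/15/2019',
    'vehicle volume by each direction of traffic': 'North Bound: 1200 / South Bound: 1350',
}
record = split_vehicle_volume(reformat_date(sample))
print(record['date of count'], record['vehicle volume north'], record['vehicle volume south'])
# 2019-03-15 1200 1350
```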
###Markdown
Write data to Redshift
###Code
glueContext.write_dynamic_frame.from_jdbc_conf(frame=traffic,
catalog_connection='redshift',
connection_options={'dbtable': 'traffic', 'database': 'dev'},
redshift_tmp_dir='s3://rk-analytics-sandbox/temp-dir/'
)
###Output
_____no_output_____
###Markdown
###Code
import os
import tensorflow as tf
import cv2
import matplotlib.pyplot as plt
import random
%matplotlib inline
os.chdir("/content/drive/My Drive/CNN/traffic")
os.listdir()
import zipfile
fl = zipfile.ZipFile("./data.zip","r")
fl.extractall("./data")
train_path = "/content/drive/My Drive/CNN/traffic/data/train/"
test_path = "/content/drive/My Drive/CNN/traffic/data/test/"
for i in os.listdir(train_path):
print("%s folder _ file := %d"%(i,len(os.listdir(train_path + i))))
print("Folder in train dir := ",len(os.listdir(train_path)))
print("Folder in test dir := ",len(os.listdir(test_path)))
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=(1.0/255.0),width_shift_range=0.2,height_shift_range=0.2,zoom_range=0.2)
train_data = train_datagen.flow_from_directory(train_path,batch_size=32,target_size=(160,160),class_mode="categorical")
_,ax = plt.subplots(1,5,figsize=(20,5))
for i in range(5):
ax[i].imshow(train_data[0][0][0])
_,ax = plt.subplots(1,5,figsize=(20,5))
for i in range(5):
ax[i].imshow(train_data[0][0][31])
ax[i].set_title(train_data[0][1][31].argmax())
_,ax = plt.subplots(1,5,figsize=(20,5))
for i in range(5):
a = random.randint(0,32)
ax[i].imshow(train_data[0][0][a])
ax[i].set_title(train_data[0][1][a].argmax())
model1 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16,(3,3),activation="relu",input_shape=(160,160,3)),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Conv2D(32,(3,3),activation="relu"),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Conv2D(64,(3,3),activation="relu"),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128,activation="relu"),
tf.keras.layers.Dense(43,activation="softmax")
])
model1.summary()
model1.compile(optimizer=tf.keras.optimizers.RMSprop(),loss=tf.keras.losses.categorical_crossentropy,metrics=['acc'])
his1 = model1.fit(train_data,epochs=10,steps_per_epoch=train_data.n//32)
model1.predict_classes(train_data[0][0][0:5])
def prediction(ind,s,e):
pre = model1.predict_classes(train_data[ind][0][s:e+1])
print("Predicted value := ",pre)
print("Original value := ",end="")
for i in train_data[ind][1][s:e+1]:
print("%2d"%i.argmax(),end=" ")
prediction(0,0,5)
prediction(5,0,15)
prediction(10,0,20)
plt.plot(his1.history['acc'])
plt.plot(his1.history['loss'])
model2 = tf.keras.models.Sequential([
tf.keras.layers.Conv2D(16,(3,3),activation="relu",input_shape=(160,160,3)),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Conv2D(32,(3,3),activation="relu"),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Conv2D(64,(3,3),activation="relu"),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Conv2D(64,(3,3),activation="relu"),
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128,activation="relu"),
tf.keras.layers.Dense(128,activation="relu"),
tf.keras.layers.Dense(43,activation="softmax")
])
model2.summary()
model2.compile(optimizer=tf.keras.optimizers.RMSprop(),loss=tf.keras.losses.categorical_crossentropy,metrics=['acc'])
his2 = model2.fit(train_data,epochs=20,steps_per_epoch=train_data.n//32)
_,ax = plt.subplots(1,2,figsize=(12,4))
ax[0].plot(his2.history['acc'],'r')
ax[1].plot(his2.history['loss'])
###Output
_____no_output_____ |
Practices/Practice09_Conditionals.ipynb | ###Markdown
Practice with conditionals

Before we practice conditionals, let's review:

To execute a command when a condition is true, use `if`:
```
if [condition]:
    [command]
```
To execute a command when a condition is true, and execute something else otherwise, use `if/else`:
```
if [condition]:
    [command 1]
else:
    [command 2]
```
To execute a command when one condition is true, a different command if a second condition is true, and execute something else otherwise, use `if/elif/else`:
```
if [condition 1]:
    [command 1]
elif [condition 2]:
    [command 2]
else:
    [command 3]
```
Remember that commands in an `elif` will only run if the first condition is false AND the second condition is true.

Let's say we are making a smoothie. In order to make a big enough smoothie, we want at least 4 cups of ingredients.
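Before the exercise, here is the full pattern on a small made-up example (illustration only, not the exercise solution):

```
temperature = 58
if temperature > 75:
    print("It's hot outside")
elif temperature > 50:
    print("It's mild outside")   # this branch runs: the first test failed, this one passed
else:
    print("It's cold outside")
```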
###Code
strawberries = 1
bananas = 0.5
milk = 1
# create a variable ingredients that equals the sum of all our ingredients
# write an if statement that prints out "We have enough ingredients!" if we have at least 4 cups of ingredients
###Output
_____no_output_____
###Markdown
The code above will let us know if we have enough ingredients for our smoothie. But, if we don't have enough ingredients, the code won't print anything. Our code would be more informative if it also told us when we didn't have enough ingredients. Next, let's write code that also lets us know when we _don't_ have enough ingredients.
###Code
# write code that prints "We have enough ingredients" if we have at least 4 cups of ingredients
# and also prints "We don't have enough ingredients" if we have less than 4 cups of ingredients
###Output
We do not have enough ingredients.
###Markdown
It might also be useful to know if we have exactly 4 cups of ingredients. Add to the code above so that it lets us know when we have more than enough ingredients, exactly enough ingredients, or not enough ingredients.
###Code
# write code that prints informative messages when we have more than 4 cups of ingredients,
# exactly 4 cups of ingredients, or less than 4 cups of ingredients
###Output
we have exactly enough ingredients
###Markdown
**Challenge**: Suppose our blender can only fit up to 6 cups inside. Add to the above code so that it also warns us when we have too many ingredients.
###Code
# write an if/elif/else style statement that does the following:
# prints a message when we have exactly 4 cups of ingredients saying we have exactly the right amount of ingredients
# prints a message when we have less than 4 cups of ingredients say we do not have enough
# prints a message when we have 4-6 cups of ingredients saying we have more than enough
# prints a message otherwise that says we have too many ingredients
###Output
_____no_output_____ |
PlayGround/AI_ML_DL_DS/Introduction_Machine_Learning_Python_Pdf/Introduction.ipynb | ###Markdown
Introduction
###Code
from sklearn.datasets import load_iris
iris=load_iris()
iris
print(iris.keys())
print(iris['feature_names'])
iris['data'].shape
iris['target'].shape
type(iris['target_names'])
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(iris['data'],iris['target'],random_state=0)
X_train.shape
import matplotlib.pyplot as plt
fig, ax = plt.subplots(3, 3, figsize=(15, 15))
plt.suptitle("iris_pairplot")
for i in range(3):
for j in range(3):
ax[i, j].scatter(X_train[:, j], X_train[:, i + 1], c=y_train, s=60)
ax[i, j].set_xticks(())
ax[i, j].set_yticks(())
if i == 2:
ax[i, j].set_xlabel(iris['feature_names'][j])
if j == 0:
ax[i, j].set_ylabel(iris['feature_names'][i + 1])
if j > i:
ax[i, j].set_visible(False)
type(iris['data'])
###Output
_____no_output_____ |
docs/_downloads/9f87dd9ca21e4acb39a4a28644c8c8e9/data_tutorial.ipynb | ###Markdown
`Learn the Basics `_ || `Quickstart `_ || `Tensors `_ || **Dataset and DataLoader** || `Transforms `_ || `Build the Neural Network `_ || `Autograd `_ || `Optimization `_ || `Save and Load the Model `_

Dataset and DataLoader
==========================================================================

Code for processing data samples can get messy and hard to maintain; ideally, we want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives, ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``, that let you use pre-loaded datasets as well as your own data. ``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around the ``Dataset`` for easy access to the samples.

PyTorch domain libraries provide a number of pre-loaded datasets (such as FashionMNIST) that subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. They can be used to prototype and benchmark your model. You can find them here: `Image Datasets `_, `Text Datasets `_, and `Audio Datasets `_

Loading a Dataset
------------------------------------------------------------------------------------------

Let's look at an example of loading the `Fashion-MNIST `_ dataset from `TorchVision`. Fashion-MNIST is a dataset of Zalando's article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28x28 grayscale image and an associated label from one of 10 classes.

We load the `FashionMNIST Dataset `_ with the following parameters:
 - ``root`` is the path where the train/test data is stored.
 - ``train`` specifies the training or test dataset.
 - ``download=True`` downloads the data from the internet if it is not available at ``root``.
 - ``transform`` and ``target_transform`` specify the feature and label transformations.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
Iterating and Visualizing the Dataset
------------------------------------------------------------------------------------------

We can index a ``Dataset`` manually like a list: ``training_data[index]``. We use ``matplotlib`` to visualize some samples in our training data.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist

------------------------------------------------------------------------------------------

Creating a Custom Dataset for your files
------------------------------------------------------------------------------------------

A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. Looking at the implementation below, the FashionMNIST images are stored in the directory ``img_dir``, and their labels are stored separately in the CSV file ``annotations_file``. In the next sections we will break down what happens in each of these functions.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file, names=['file_name', 'label'])
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
###Output
_____no_output_____
###Markdown
__init__
^^^^^^^^^^^^^^^^^^^^

The __init__ function is run once when instantiating the Dataset object. Here we initialize the directory containing the images, the annotation file (annotation_file), and the two transforms (covered in more detail in the next section).

The labels.csv file looks like: ::

    tshirt1.jpg, 0
    tshirt2.jpg, 0
    ......
    ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file, names=['file_name', 'label'])
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__
^^^^^^^^^^^^^^^^^^^^

The __len__ function returns the number of samples in our dataset.

Example:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__
^^^^^^^^^^^^^^^^^^^^

The __getitem__ function loads and returns the sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts it to a tensor with ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and its label as a Python dict.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------

Preparing your data for training with DataLoaders
------------------------------------------------------------------------------------------

The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity behind a simple API.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Iterate through the DataLoader
------------------------------------------------------------------------------------------

Once the dataset has been loaded into the ``DataLoader``, we can iterate through it as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels, respectively). Because we specified ``shuffle=True``, the data is shuffled after we iterate over all batches. (For finer-grained control over the data loading order, take a look at `Samplers `_.)
###Code
# Display an image and its label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____
###Markdown
`파이토치(PyTorch) 기본 익히기 `_ ||`빠른 시작 `_ ||`텐서(Tensor) `_ ||**Dataset과 Dataloader** ||`변형(Transform) `_ ||`신경망 모델 구성하기 `_ ||`Autograd `_ ||`최적화(Optimization) `_ ||`모델 저장하고 불러오기 `_Dataset과 Dataloader========================================================================== 데이터 샘플을 처리하는 코드는 지저분(messy)하고 유지보수가 어려울 수 있습니다;더 나은 가독성(readability)과 모듈성(modularity)을 위해 데이터셋 코드를 모델 학습 코드로부터 분리하는 것이 이상적입니다.PyTorch는 ``torch.utils.data.DataLoader`` 와 ``torch.utils.data.Dataset`` 의 두 가지 데이터 기본 요소를제공하여 미리 준비해된(pre-loaded) 데이터셋 뿐만 아니라 가지고 있는 데이터를 사용할 수 있도록 합니다.``Dataset`` 은 샘플과 정답(label)을 저장하고, ``DataLoader`` 는 ``Dataset`` 을 샘플에 쉽게 접근할 수 있도록순회 가능한 객체(iterable)로 감쌉니다.PyTorch의 도메인 특화 라이브러리들은 (FashionMNIST와 같은) 다양한 미리 준비해둔(pre-loaded) 데이터셋을 제공합니다.데이터셋은 ``torch.utils.data.Dataset`` 의 하위 클래스로 개별 데이터를 특정하는 함수가 구현되어 있습니다.이러한 데이터셋은 모델을 만들어보고(prototype) 성능을 측정(benchmark)하는데 사용할 수 있습니다.여기에서 데이터셋들을 찾아볼 수 있습니다:`이미지 데이터셋 `_,`텍스트 데이터셋 `_ 및`오디오 데이터셋 `_ 데이터셋 불러오기------------------------------------------------------------------------------------------`TorchVision` 에서 `Fashion-MNIST `_ 데이터셋을불러오는 예제를 살펴보겠습니다. Fashion-MNIST는 Zalando의 기사 이미지 데이터셋으로 60,000개의 학습 예제와 10,000개의 테스트 예제로 이루어져 있습니다.각 예제는 흑백(grayscale)의 28x28 이미지와 10개 분류(class) 중 하나인 정답(label)으로 구성됩니다.다음 매개변수들을 사용하여 `FashionMNIST 데이터셋 `_ 을 불러옵니다: - ``root`` 는 학습/테스트 데이터가 저장되는 경로입니다. - ``train`` 은 학습용 또는 테스트용 데이터셋 여부를 지정합니다. - ``download=True`` 는 ``root`` 에 데이터가 없는 경우 인터넷에서 다운로드합니다. - ``transform`` 과 ``target_transform`` 은 특징(feature)과 정답(label) 변형(transform)을 지정합니다.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
데이터셋을 순회하고 시각화하기------------------------------------------------------------------------------------------``Dataset`` 에 리스트(list)처럼 직접 접근(index)할 수 있습니다: ``training_data[index]``.``matplotlib`` 을 사용하여 학습 데이터의 일부를 시각화해보겠습니다.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist ------------------------------------------------------------------------------------------ 파일에서 사용자 정의 데이터셋 만들기------------------------------------------------------------------------------------------사용자 정의 Dataset 클래스는 반드시 3개 함수를 구현해야 합니다: `__init__`, `__len__`, and `__getitem__`.아래 구현을 살펴보면 FashionMNIST 이미지들은 ``img_dir`` 디렉토리에 저장되고, 정답은 ``annotations_file`` csv 파일에별도로 저장됩니다.다음 장에서 각 함수들에서 일어나는 일들을 자세히 살펴보겠습니다.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
###Output
_____no_output_____
###Markdown
__init__^^^^^^^^^^^^^^^^^^^^__init__ 함수는 Dataset 객체가 생성(instantiate)될 때 한 번만 실행됩니다.여기서는 이미지와 주석 파일(annotation_file)이 포함된 디렉토리와 (다음 장에서 자세히 살펴볼) 두가지변형(transform)을 초기화합니다.labels.csv 파일은 다음과 같습니다: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__^^^^^^^^^^^^^^^^^^^^__len__ 함수는 데이터셋의 샘플 개수를 반환합니다.예:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__^^^^^^^^^^^^^^^^^^^^__getitem__ 함수는 주어진 인덱스 ``idx`` 에 해당하는 샘플을 데이터셋에서 불러오고 반환합니다.인덱스를 기반으로, 디스크에서 이미지의 위치를 식별하고, ``read_image`` 를 사용하여 이미지를 텐서로 변환하고, ``self.img_labels`` 의 csv 데이터로부터해당하는 정답(label)을 가져오고, (해당하는 경우) 변형(transform) 함수들을 호출한 뒤, 텐서 이미지와 라벨을 Python 사전(dict)형으로 반환합니다.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------ DataLoader로 학습용 데이터 준비하기------------------------------------------------------------------------------------------``Dataset`` 은 데이터셋의 특징(feature)을 가져오고 하나의 샘플에 정답(label)을 지정하는 일을 한 번에 합니다.모델을 학습할 때, 일반적으로 샘플들을 "미니배치(minibatch)"로 전달하고, 매 에폭(epoch)마다 데이터를 다시 섞어서 과적합(overfit)을 막고,Python의 ``multiprocessing`` 을 사용하여 데이터 검색 속도를 높이려고 합니다.``DataLoader`` 는 간단한 API로 이러한 복잡한 과정들을 추상화한 순회 가능한 객체(iterable)입니다.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
DataLoader를 통해 순회하기(iterate)------------------------------------------------------------------------------------------``DataLoader`` 에 데이터셋을 불러온 뒤에는 필요에 따라 데이터셋을 순회(iterate)할 수 있습니다.아래의 각 순회(iteration)는 (각각 ``batch_size=64`` 의 특징(feature)과 정답(label)을 포함하는) ``train_features`` 와``train_labels`` 의 묶음(batch)을 반환합니다. ``shuffle=True`` 로 지정했으므로, 모든 배치를 순회한 뒤 데이터가 섞입니다.(데이터 불러오기 순서를 보다 세밀하게(finer-grained) 제어하려면 `Samplers `_를 살펴보세요.)
###Code
# 이미지와 정답(label)을 표시합니다.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____
###Markdown
`파이토치(PyTorch) 기본 익히기 `_ ||`빠른 시작 `_ ||`텐서(Tensor) `_ ||**Dataset과 Dataloader** ||`변형(Transform) `_ ||`신경망 모델 구성하기 `_ ||`Autograd `_ ||`최적화(Optimization) `_ ||`모델 저장하고 불러오기 `_Dataset과 Dataloader========================================================================== 데이터 샘플을 처리하는 코드는 지저분(messy)하고 유지보수가 어려울 수 있습니다;더 나은 가독성(readability)과 모듈성(modularity)을 위해 데이터셋 코드를 모델 학습 코드로부터 분리하는 것이 이상적입니다.PyTorch는 ``torch.utils.data.DataLoader`` 와 ``torch.utils.data.Dataset`` 의 두 가지 데이터 기본 요소를제공하여 미리 준비해된(pre-loaded) 데이터셋 뿐만 아니라 가지고 있는 데이터를 사용할 수 있도록 합니다.``Dataset`` 은 샘플과 정답(label)을 저장하고, ``DataLoader`` 는 ``Dataset`` 을 샘플에 쉽게 접근할 수 있도록반복 가능한 객체(iterable)로 감쌉니다.PyTorch의 도메인 특화 라이브러리들은 (FashionMNIST와 같은) 다양한 미리 준비해둔(pre-loaded) 데이터셋을 제공합니다.데이터셋은 ``torch.utils.data.Dataset`` 의 하위 클래스로 개별 데이터를 특정하는 함수가 구현되어 있습니다.이러한 데이터셋은 모델을 만들어보고(prototype) 성능을 측정(benchmark)하는데 사용할 수 있습니다.여기에서 데이터셋들을 찾아볼 수 있습니다:`이미지 데이터셋 `_,`텍스트 데이터셋 `_ 및`오디오 데이터셋 `_ 데이터셋 불러오기------------------------------------------------------------------------------------------`TorchVision` 에서 `Fashion-MNIST `_ 데이터셋을불러오는 예제를 살펴보겠습니다. Fashion-MNIST는 Zalando의 기사 이미지 데이터셋으로 60,000개의 학습 예제와 10,000개의 테스트 예제로 이루어져 있습니다.각 예제는 흑백(grayscale)의 28x28 이미지와 10개 분류(class) 중 하나인 정답(label)으로 구성됩니다.다음 매개변수들을 사용하여 `FashionMNIST 데이터셋 `_ 을 불러옵니다: - ``root`` 는 학습/테스트 데이터가 저장되는 경로입니다. - ``train`` 은 학습용 또는 테스트용 데이터셋 여부를 지정합니다. - ``download=True`` 는 ``root`` 에 데이터가 없는 경우 인터넷에서 다운로드합니다. - ``transform`` 과 ``target_transform`` 은 특징(feature)과 정답(label) 변형(transform)을 지정합니다.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
데이터셋을 반복하고 시각화하기------------------------------------------------------------------------------------------``Dataset`` 에 리스트(list)처럼 직접 접근(index)할 수 있습니다: ``training_data[index]``.``matplotlib`` 을 사용하여 학습 데이터의 일부를 시각화해보겠습니다.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist ------------------------------------------------------------------------------------------ 파일에서 사용자 정의 데이터셋 만들기------------------------------------------------------------------------------------------사용자 정의 Dataset 클래스는 반드시 3개 함수를 구현해야 합니다: `__init__`, `__len__`, and `__getitem__`.아래 구현을 살펴보면 FashionMNIST 이미지들은 ``img_dir`` 디렉토리에 저장되고, 정답은 ``annotations_file`` csv 파일에별도로 저장됩니다.다음 장에서 각 함수들에서 일어나는 일들을 자세히 살펴보겠습니다.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
###Output
_____no_output_____
###Markdown
__init__^^^^^^^^^^^^^^^^^^^^The __init__ function is run once when instantiating the Dataset object. We initialize the directory containing the images, the annotations file, and both transforms (covered in more detail in the next section). The labels.csv file looks like: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__^^^^^^^^^^^^^^^^^^^^The __len__ function returns the number of samples in our dataset. Example:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__^^^^^^^^^^^^^^^^^^^^The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts it to a tensor using ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and the corresponding label in a Python dict.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------ Preparing your data for training with DataLoaders------------------------------------------------------------------------------------------The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity for us in an easy API.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Iterate through the DataLoader------------------------------------------------------------------------------------------We have loaded the dataset into the ``DataLoader`` and can iterate through it as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). Because we specified ``shuffle=True``, the data is reshuffled after we iterate over all batches. (For finer-grained control over the data loading order, take a look at `Samplers `_.)
###Code
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____
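###Markdown
The note above points to Samplers for finer-grained control over the loading order. As a minimal sketch (the ``SubsetRandomSampler`` and the choice of 10,000 indices below are illustrative, not part of the original tutorial), a sampler can take the place of ``shuffle=True``:
###Code
from torch.utils.data import DataLoader, SubsetRandomSampler

# Illustrative: draw only the first 10,000 training samples, in random order.
# A sampler yields indices, so it replaces the shuffle= argument.
subset_sampler = SubsetRandomSampler(range(10000))
sampled_loader = DataLoader(training_data, batch_size=64, sampler=subset_sampler)

images, labels = next(iter(sampled_loader))
print(images.shape, labels.shape)
###Output
_____no_output_____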
###Markdown
`Learn the Basics `_ ||`Quickstart `_ ||`Tensors `_ ||**Datasets & DataLoaders** ||`Transforms `_ ||`Build the Neural Network `_ ||`Autograd `_ ||`Optimization `_ ||`Save & Load Model `_ Datasets & DataLoaders========================================================================== Code for processing data samples can get messy and hard to maintain; ideally, we want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives, ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``, that let you use pre-loaded datasets as well as your own data. ``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around the ``Dataset`` to enable easy access to the samples. PyTorch domain-specific libraries provide a number of pre-loaded datasets (such as FashionMNIST) that subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. They can be used to prototype and benchmark your model. You can find them here: `Image Datasets `_, `Text Datasets `_ and `Audio Datasets `_ Loading a Dataset------------------------------------------------------------------------------------------Here is an example of how to load the `Fashion-MNIST `_ dataset from `TorchVision`. Fashion-MNIST is a dataset of Zalando's article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28x28 grayscale image and an associated label from one of 10 classes. We load the `FashionMNIST Dataset `_ with the following parameters: - ``root`` is the path where the train/test data is stored. - ``train`` specifies the training or test dataset. - ``download=True`` downloads the data from the internet if it's not available at ``root``. - ``transform`` and ``target_transform`` specify the feature and label transformations.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor, Lambda
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
Iterating and Visualizing the Dataset------------------------------------------------------------------------------------------We can index ``Dataset`` manually like a list: ``training_data[index]``. We use ``matplotlib`` to visualize some samples in our training data.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist ------------------------------------------------------------------------------------------ Creating a Custom Dataset for your files------------------------------------------------------------------------------------------A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. Take a look at this implementation: the FashionMNIST images are stored in a directory ``img_dir``, and their labels are stored separately in a CSV file ``annotations_file``. In the next sections, we'll break down what's happening in each of these functions.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
__init__^^^^^^^^^^^^^^^^^^^^The __init__ function is run once when instantiating the Dataset object. We initialize the directory containing the images, the annotations file, and both transforms (covered in more detail in the next section). The labels.csv file looks like: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__^^^^^^^^^^^^^^^^^^^^The __len__ function returns the number of samples in our dataset. Example:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__^^^^^^^^^^^^^^^^^^^^The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts it to a tensor using ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and the corresponding label in a Python dict.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------ Preparing your data for training with DataLoaders------------------------------------------------------------------------------------------The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity for us in an easy API.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
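###Markdown
The ``multiprocessing`` mentioned above corresponds to the ``num_workers`` argument of ``DataLoader``; a small sketch (the worker count here is illustrative, not part of the original tutorial):
###Code
# Illustrative: two worker processes load and collate batches in parallel with training.
train_dataloader_mp = DataLoader(training_data, batch_size=64, shuffle=True, num_workers=2)
###Output
_____no_output_____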
###Markdown
Iterate through the DataLoader------------------------------------------------------------------------------------------We have loaded the dataset into the ``DataLoader`` and can iterate through it as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). Because we specified ``shuffle=True``, the data is reshuffled after we iterate over all batches. (For finer-grained control over the data loading order, take a look at `Samplers `_.)
###Code
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____
###Markdown
`Learn the Basics `_ ||`Quickstart `_ ||`Tensors `_ ||**Datasets & DataLoaders** ||`Transforms `_ ||`Build the Neural Network `_ ||`Autograd `_ ||`Optimization `_ ||`Save & Load Model `_ Datasets & DataLoaders========================================================================== Code for processing data samples can get messy and hard to maintain; ideally, we want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives, ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``, that let you use pre-loaded datasets as well as your own data. ``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around the ``Dataset`` to enable easy access to the samples. PyTorch domain-specific libraries provide a number of pre-loaded datasets (such as FashionMNIST) that subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. They can be used to prototype and benchmark your model. You can find them here: `Image Datasets `_, `Text Datasets `_ and `Audio Datasets `_ Loading a Dataset------------------------------------------------------------------------------------------Here is an example of how to load the `Fashion-MNIST `_ dataset from `TorchVision`. Fashion-MNIST is a dataset of Zalando's article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28x28 grayscale image and an associated label from one of 10 classes. We load the `FashionMNIST Dataset `_ with the following parameters: - ``root`` is the path where the train/test data is stored. - ``train`` specifies the training or test dataset. - ``download=True`` downloads the data from the internet if it's not available at ``root``. - ``transform`` and ``target_transform`` specify the feature and label transformations.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
Iterating and Visualizing the Dataset------------------------------------------------------------------------------------------We can index ``Dataset`` manually like a list: ``training_data[index]``. We use ``matplotlib`` to visualize some samples in our training data.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist ------------------------------------------------------------------------------------------ Creating a Custom Dataset for your files------------------------------------------------------------------------------------------A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. Take a look at this implementation: the FashionMNIST images are stored in a directory ``img_dir``, and their labels are stored separately in a CSV file ``annotations_file``. In the next sections, we'll break down what's happening in each of these functions.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
###Output
_____no_output_____
###Markdown
__init__^^^^^^^^^^^^^^^^^^^^The __init__ function is run once when instantiating the Dataset object. We initialize the directory containing the images, the annotations file, and both transforms (covered in more detail in the next section). The labels.csv file looks like: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__^^^^^^^^^^^^^^^^^^^^The __len__ function returns the number of samples in our dataset. Example:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__^^^^^^^^^^^^^^^^^^^^The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts it to a tensor using ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and the corresponding label in a Python dict.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------ Preparing your data for training with DataLoaders------------------------------------------------------------------------------------------The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity for us in an easy API.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Iterate through the DataLoader------------------------------------------------------------------------------------------We have loaded the dataset into the ``DataLoader`` and can iterate through it as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). Because we specified ``shuffle=True``, the data is reshuffled after we iterate over all batches. (For finer-grained control over the data loading order, take a look at `Samplers `_.)
###Code
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____
###Markdown
`Learn the Basics `_ ||`Quickstart `_ ||`Tensors `_ ||**Datasets & DataLoaders** ||`Transforms `_ ||`Build the Neural Network `_ ||`Autograd `_ ||`Optimization `_ ||`Save & Load Model `_ Datasets & DataLoaders========================================================================== Code for processing data samples can get messy and hard to maintain; ideally, we want our dataset code to be decoupled from our model training code for better readability and modularity. PyTorch provides two data primitives, ``torch.utils.data.DataLoader`` and ``torch.utils.data.Dataset``, that let you use pre-loaded datasets as well as your own data. ``Dataset`` stores the samples and their corresponding labels, and ``DataLoader`` wraps an iterable around the ``Dataset`` to enable easy access to the samples. PyTorch domain-specific libraries provide a number of pre-loaded datasets (such as FashionMNIST) that subclass ``torch.utils.data.Dataset`` and implement functions specific to the particular data. They can be used to prototype and benchmark your model. You can find them here: `Image Datasets `_, `Text Datasets `_ and `Audio Datasets `_ Loading a Dataset------------------------------------------------------------------------------------------Here is an example of how to load the `Fashion-MNIST `_ dataset from `TorchVision`. Fashion-MNIST is a dataset of Zalando's article images consisting of 60,000 training examples and 10,000 test examples. Each example comprises a 28x28 grayscale image and an associated label from one of 10 classes. We load the `FashionMNIST Dataset `_ with the following parameters: - ``root`` is the path where the train/test data is stored. - ``train`` specifies the training or test dataset. - ``download=True`` downloads the data from the internet if it's not available at ``root``. - ``transform`` and ``target_transform`` specify the feature and label transformations.
###Code
import torch
from torch.utils.data import Dataset
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.FashionMNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
test_data = datasets.FashionMNIST(
root="data",
train=False,
download=True,
transform=ToTensor()
)
###Output
_____no_output_____
###Markdown
Iterating and Visualizing the Dataset------------------------------------------------------------------------------------------We can index ``Dataset`` manually like a list: ``training_data[index]``. We use ``matplotlib`` to visualize some samples in our training data.
###Code
labels_map = {
0: "T-Shirt",
1: "Trouser",
2: "Pullover",
3: "Dress",
4: "Coat",
5: "Sandal",
6: "Shirt",
7: "Sneaker",
8: "Bag",
9: "Ankle Boot",
}
figure = plt.figure(figsize=(8, 8))
cols, rows = 3, 3
for i in range(1, cols * rows + 1):
sample_idx = torch.randint(len(training_data), size=(1,)).item()
img, label = training_data[sample_idx]
figure.add_subplot(rows, cols, i)
plt.title(labels_map[label])
plt.axis("off")
plt.imshow(img.squeeze(), cmap="gray")
plt.show()
###Output
_____no_output_____
###Markdown
.. .. figure:: /_static/img/basics/fashion_mnist.png :alt: fashion_mnist ------------------------------------------------------------------------------------------ Creating a Custom Dataset for your files------------------------------------------------------------------------------------------A custom Dataset class must implement three functions: `__init__`, `__len__`, and `__getitem__`. Take a look at this implementation: the FashionMNIST images are stored in a directory ``img_dir``, and their labels are stored separately in a CSV file ``annotations_file``. In the next sections, we'll break down what's happening in each of these functions.
###Code
import os
import pandas as pd
from torchvision.io import read_image
class CustomImageDataset(Dataset):
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file, names=['file_name', 'label'])
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.img_labels)
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
return image, label
###Output
_____no_output_____
###Markdown
__init__^^^^^^^^^^^^^^^^^^^^The __init__ function is run once when instantiating the Dataset object. We initialize the directory containing the images, the annotations file, and both transforms (covered in more detail in the next section). The labels.csv file looks like: :: tshirt1.jpg, 0 tshirt2.jpg, 0 ...... ankleboot999.jpg, 9
###Code
def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
self.img_labels = pd.read_csv(annotations_file)
self.img_dir = img_dir
self.transform = transform
self.target_transform = target_transform
###Output
_____no_output_____
###Markdown
__len__^^^^^^^^^^^^^^^^^^^^The __len__ function returns the number of samples in our dataset. Example:
###Code
def __len__(self):
return len(self.img_labels)
###Output
_____no_output_____
###Markdown
__getitem__^^^^^^^^^^^^^^^^^^^^The __getitem__ function loads and returns a sample from the dataset at the given index ``idx``. Based on the index, it identifies the image's location on disk, converts it to a tensor using ``read_image``, retrieves the corresponding label from the csv data in ``self.img_labels``, calls the transform functions on them (if applicable), and returns the tensor image and the corresponding label in a Python dict.
###Code
def __getitem__(self, idx):
img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0])
image = read_image(img_path)
label = self.img_labels.iloc[idx, 1]
if self.transform:
image = self.transform(image)
if self.target_transform:
label = self.target_transform(label)
sample = {"image": image, "label": label}
return sample
###Output
_____no_output_____
###Markdown
------------------------------------------------------------------------------------------ Preparing your data for training with DataLoaders------------------------------------------------------------------------------------------The ``Dataset`` retrieves our dataset's features and labels one sample at a time. While training a model, we typically want to pass samples in "minibatches", reshuffle the data at every epoch to reduce model overfitting, and use Python's ``multiprocessing`` to speed up data retrieval. ``DataLoader`` is an iterable that abstracts this complexity for us in an easy API.
###Code
from torch.utils.data import DataLoader
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
test_dataloader = DataLoader(test_data, batch_size=64, shuffle=True)
###Output
_____no_output_____
###Markdown
Iterate through the DataLoader------------------------------------------------------------------------------------------We have loaded the dataset into the ``DataLoader`` and can iterate through it as needed. Each iteration below returns a batch of ``train_features`` and ``train_labels`` (containing ``batch_size=64`` features and labels respectively). Because we specified ``shuffle=True``, the data is reshuffled after we iterate over all batches. (For finer-grained control over the data loading order, take a look at `Samplers `_.)
###Code
# Display image and label.
train_features, train_labels = next(iter(train_dataloader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
img = train_features[0].squeeze()
label = train_labels[0]
plt.imshow(img, cmap="gray")
plt.show()
print(f"Label: {label}")
###Output
_____no_output_____ |
examples/from-benchmarks/manyCells - 5*20 cells with 2 lines of code and 2 outputs per cell (10% of markdown).ipynb | ###Markdown
MD cell 0 MD cell 1 MD cell 2 MD cell 3 MD cell 4 MD cell 5 MD cell 6 MD cell 7 MD cell 8 MD cell 9
###Code
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
import sys
print('1')
print('2', file=sys.stderr)
###Output
1
|
notebooks/Processamento de Pistas de Vento.ipynb | ###Markdown
By Geraldo Maciel and Mateus Barbosa --- 1. Initial import This cell must always be executed first. It is responsible for importing the source code of the module to be used.
###Code
import sys
import os
sys.path.insert(0, os.path.join(os.path.dirname(os.getcwd()), 'scripts'))
from windfetchs import *
###Output
_____no_output_____
###Markdown
--- 2. Creating and importing projects and shapefiles The module only accepts shapefiles in the following projections: • EPSG:32718 • EPSG:32719 • EPSG:32720 • EPSG:32721 • EPSG:32722 • EPSG:32723 • EPSG:32724 • EPSG:32725 More information about these projections can be obtained at http://www.spatialreference.org/ using the codes listed above.
###Code
widgets_main = WidgetsMain()
widgets_main.display()
###Output
_____no_output_____
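###Markdown
A hedged aside (not part of the original workflow, and it assumes ``geopandas`` is installed alongside this module): before importing a shapefile, its CRS can be checked and, if necessary, reprojected to one of the accepted EPSG codes listed above.
###Code
# Sketch only: the file name and the target EPSG code are hypothetical examples.
import geopandas as gpd

gdf = gpd.read_file("reservoir.shp")   # hypothetical shapefile
print(gdf.crs)                         # inspect the current projection
gdf = gdf.to_crs(epsg=32723)           # reproject to an accepted UTM zone
gdf.to_file("reservoir_32723.shp")
###Output
_____no_output_____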
###Markdown
--- 3. Grid
###Code
widgets_grid = WidgetsGrid()
widgets_grid.display()
###Output
_____no_output_____
###Markdown
--- 3.1. Grid processing
###Code
widgets_main.grid = widgets_grid.outputs(widgets_main.shapefile, widgets_main.project_dirs)
###Output
_____no_output_____
###Markdown
--- 4. Selecting methods and directions for wind fetches
###Code
widgets_wind_fetchs = WidgetsWindFetchs()
widgets_wind_fetchs.display()
###Output
_____no_output_____
###Markdown
--- 4.1. Wind fetch processing
###Code
widgets_wind_fetchs.outputs(widgets_main.grid, widgets_main.project_dirs)
###Output
_____no_output_____
###Markdown
--- 5. Importing and selecting color schemes/intervals for generating images of the processed wind fetches More information about the colormaps that can be used is available at: • https://matplotlib.org/examples/color/colormaps_reference.html • https://matplotlib.org/cmocean/
###Code
widgets_wind_fetchs_visualization = WidgetsWindFetchsVisualization()
widgets_wind_fetchs_visualization.display()
###Output
_____no_output_____
###Markdown
--- 5.1. Generating images of the processed wind fetches
###Code
widgets_wind_fetchs_visualization.outputs(widgets_main.grid, widgets_main.project_dirs)
###Output
_____no_output_____ |
docs/source/tutorials/1-getting-started/what-are-periodogram-objects.ipynb | ###Markdown
What are `Periodogram` objects? *Lightkurve* has a class specifically for dealing with periodograms of time series data. This can be useful for finding the periods of variable stars. Below is a quick example of how to find the period of an [eclipsing binary star](https://en.wikipedia.org/wiki/Binary_starEclipsing_binaries).First, let's grab a light curve file from the Kepler data archive. We'll use the object named KIC 10264202, which is an eclipsing binary observed by the original Kepler mission. We're just going to use one quarter of data for this demo.
###Code
from lightkurve import search_lightcurve
lc = search_lightcurve('KIC 10264202', author="Kepler", quarter=10, cadence="long").download().remove_nans()
###Output
_____no_output_____
###Markdown
Let's plot the light curve to see what we're working with.
###Code
%matplotlib inline
lc.scatter();
###Output
_____no_output_____
###Markdown
This light curve looks like it has some structure in it! Let's use the periodogram class to find the rotation period. You can create a periodogram from the `KeplerLightCurve` object by using the `to_periodogram` method.
###Code
pg = lc.to_periodogram(oversample_factor=1)
###Output
_____no_output_____
###Markdown
Now we can plot the periodogram in the same way that we plot the original light curve.
###Code
pg.plot();
###Output
_____no_output_____
###Markdown
This looks like there is a huge signal at a certain frequency! Let's plot it in period space, so that we can see what period the oscillation is occurring at.
###Code
pg.plot(view='period', scale='log');
###Output
_____no_output_____
###Markdown
This looks like a very fast period. We can access the full period and power data as follows:
###Code
pg.period
pg.power
###Output
_____no_output_____
###Markdown
In this case we simply want to know the period that corresponds to the highest peak in the periodogram. We can directly access this value using the convenient `period_at_max_power` property:
###Code
pg.period_at_max_power
###Output
_____no_output_____
###Markdown
We can then use this period to fold our light curve:
###Code
lc.fold(period=pg.period_at_max_power).scatter();
###Output
_____no_output_____
###Markdown
Oops, the eclipses do not line up nicely. This does not look like the correct period of this eclipsing binary!As is quite common for eclipsing binaries with deep secondary eclipses, we have found a harmonic of the period of the eclipsing binary. Let's plot it again with quadruple the period.
###Code
lc.fold(period=4*pg.period_at_max_power, wrap_phase=0.2).scatter();
###Output
_____no_output_____
###Markdown
That looks better, but the eclipses still don't seem to line up as well as they could.Let's try to get a more precise period by increasing the number of points in our periodogram using the `oversample_factor` parameter and by constraining the range of the period value:
###Code
import astropy.units as u
pg = lc.to_periodogram(minimum_period=0.9*u.day, maximum_period=1.2*u.day, oversample_factor=10)
pg.period_at_max_power
lc.fold(period=pg.period_at_max_power, wrap_phase=0.2).scatter();
###Output
_____no_output_____
###Markdown
What are `Periodogram` objects? *Lightkurve* has a class specifically for dealing with periodograms of time series data. This can be useful for finding the periods of variable stars. Below is a quick example of how to find the period of an [eclipsing binary star](https://en.wikipedia.org/wiki/Binary_starEclipsing_binaries).First, let's grab a light curve file from the Kepler data archive. We'll use the object named KIC 10264202, which is an eclipsing binary observed by the original Kepler mission. We're just going to use one quarter of data for this demo.
###Code
from lightkurve import search_lightcurve
lc = search_lightcurve('KIC 10264202', author="Kepler", quarter=10, cadence="long").download().remove_nans()
###Output
_____no_output_____
###Markdown
Let's plot the light curve to see what we're working with.
###Code
%matplotlib inline
lc.scatter();
###Output
_____no_output_____
###Markdown
This light curve looks like it has some structure in it! Let's use the periodogram class to find the rotation period. You can create a periodogram from the `KeplerLightCurve` object by using the [to_periodogram](https://docs.lightkurve.org/reference/api/lightkurve.LightCurve.to_periodogram.html?highlight=to_periodogram) method.
###Code
pg = lc.to_periodogram(oversample_factor=1)
###Output
_____no_output_____
###Markdown
Now we can plot the periodogram in the same way that we plot the original light curve.
###Code
pg.plot();
###Output
_____no_output_____
###Markdown
This looks like there is a huge signal at a certain frequency! Let's plot it in period space, so that we can see what period the oscillation is occurring at.
###Code
pg.plot(view='period', scale='log');
###Output
_____no_output_____
###Markdown
This looks like a very fast period. We can access the full [period](https://docs.lightkurve.org/reference/api/lightkurve.periodogram.Periodogram.period.html?highlight=periodlightkurve.periodogram.Periodogram.period) and [power](https://docs.lightkurve.org/reference/api/lightkurve.periodogram.Periodogram.power.html?highlight=powerlightkurve.periodogram.Periodogram.power) data as follows:
###Code
pg.period
pg.power
###Output
_____no_output_____
###Markdown
In this case we simply want to know the period that corresponds to the highest peak in the periodogram. We can directly access this value using the convenient [period_at_max_power](https://docs.lightkurve.org/reference/api/lightkurve.periodogram.Periodogram.period_at_max_power.html?highlight=period_at_max_power) property:
###Code
pg.period_at_max_power
###Output
_____no_output_____
###Markdown
We can then use this period to fold our light curve:
###Code
lc.fold(period=pg.period_at_max_power).scatter();
###Output
_____no_output_____
###Markdown
Oops, the eclipses do not line up nicely. This does not look like the correct period of this eclipsing binary!As is quite common for eclipsing binaries with deep secondary eclipses, we have found a harmonic of the period of the eclipsing binary. Let's plot it again with quadruple the period.
###Code
lc.fold(period=4*pg.period_at_max_power, wrap_phase=0.2).scatter();
###Output
_____no_output_____
###Markdown
That looks better, but the eclipses still don't seem to line up as well as they could.Let's try to get a more precise period by increasing the number of points in our periodogram using the `oversample_factor` parameter and by constraining the range of the period value:
###Code
import astropy.units as u
pg = lc.to_periodogram(minimum_period=0.9*u.day, maximum_period=1.2*u.day, oversample_factor=10)
pg.period_at_max_power
lc.fold(period=pg.period_at_max_power, wrap_phase=0.2).scatter();
###Output
_____no_output_____ |
projecten data science/Predicting Credit Card Approvals/notebook.ipynb | ###Markdown
1. Credit card applicationsCommercial banks receive a lot of applications for credit cards. Many of them get rejected for many reasons, like high loan balances, low income levels, or too many inquiries on an individual's credit report, for example. Manually analyzing these applications is mundane, error-prone, and time-consuming (and time is money!). Luckily, this task can be automated with the power of machine learning and pretty much every commercial bank does so nowadays. In this notebook, we will build an automatic credit card approval predictor using machine learning techniques, just like the real banks do.We'll use the Credit Card Approval dataset from the UCI Machine Learning Repository. The structure of this notebook is as follows:First, we will start off by loading and viewing the dataset.We will see that the dataset has a mixture of both numerical and non-numerical features, that it contains values from different ranges, plus that it contains a number of missing entries.We will have to preprocess the dataset to ensure the machine learning model we choose can make good predictions.After our data is in good shape, we will do some exploratory data analysis to build our intuitions.Finally, we will build a machine learning model that can predict if an individual's application for a credit card will be accepted.First, loading and viewing the dataset. We find that since this data is confidential, the contributor of the dataset has anonymized the feature names.
###Code
# Import pandas
import pandas as pd
# Load dataset
cc_apps = pd.read_csv("datasets/cc_approvals.data",header=None)
# Inspect data
cc_apps.head()
###Output
_____no_output_____
###Markdown
2. Inspecting the applicationsThe output may appear a bit confusing at its first sight, but let's try to figure out the most important features of a credit card application. The features of this dataset have been anonymized to protect the privacy, but this blog gives us a pretty good overview of the probable features. The probable features in a typical credit card application are Gender, Age, Debt, Married, BankCustomer, EducationLevel, Ethnicity, YearsEmployed, PriorDefault, Employed, CreditScore, DriversLicense, Citizen, ZipCode, Income and finally the ApprovalStatus. This gives us a pretty good starting point, and we can map these features with respect to the columns in the output. As we can see from our first glance at the data, the dataset has a mixture of numerical and non-numerical features. This can be fixed with some preprocessing, but before we do that, let's learn about the dataset a bit more to see if there are other dataset issues that need to be fixed.
###Code
# Print summary statistics
cc_apps_description = cc_apps.describe()
print(cc_apps_description)
print("\n")
# Print DataFrame information
cc_apps_info = cc_apps.info()
print(cc_apps_info)
print("\n")
# Inspect missing values in the dataset
cc_apps.tail()
###Output
2 7 10 14
count 690.000000 690.000000 690.00000 690.000000
mean 4.758725 2.223406 2.40000 1017.385507
std 4.978163 3.346513 4.86294 5210.102598
min 0.000000 0.000000 0.00000 0.000000
25% 1.000000 0.165000 0.00000 0.000000
50% 2.750000 1.000000 0.00000 5.000000
75% 7.207500 2.625000 3.00000 395.500000
max 28.000000 28.500000 67.00000 100000.000000
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 690 entries, 0 to 689
Data columns (total 16 columns):
0 690 non-null object
1 690 non-null object
2 690 non-null float64
3 690 non-null object
4 690 non-null object
5 690 non-null object
6 690 non-null object
7 690 non-null float64
8 690 non-null object
9 690 non-null object
10 690 non-null int64
11 690 non-null object
12 690 non-null object
13 690 non-null object
14 690 non-null int64
15 690 non-null object
dtypes: float64(2), int64(2), object(12)
memory usage: 86.3+ KB
None
###Markdown
3. Handling the missing values (part i)We've uncovered some issues that will affect the performance of our machine learning model(s) if they go unchanged:Our dataset contains both numeric and non-numeric data (specifically data that are of float64, int64 and object types). Specifically, the features 2, 7, 10 and 14 contain numeric values (of types float64, float64, int64 and int64 respectively) and all the other features contain non-numeric values.The dataset also contains values from several ranges. Some features have a value range of 0 - 28, some have a range of 2 - 67, and some have a range of 1017 - 100000. Apart from these, we can get useful statistical information (like mean, max, and min) about the features that have numerical values. Finally, the dataset has missing values, which we'll take care of in this task. The missing values in the dataset are labeled with '?', which can be seen in the last cell's output.Now, let's temporarily replace these missing value question marks with NaN.
###Code
# Import numpy
import numpy as np
# Inspect missing values in the dataset
print(cc_apps.isnull().values.sum())
# Replace the '?'s with NaN
cc_apps = cc_apps.replace('?',np.NaN)
# Inspect the missing values again
cc_apps.tail(17)
###Output
0
###Markdown
4. Handling the missing values (part ii)We replaced all the question marks with NaNs. This is going to help us in the next missing value treatment that we are going to perform. An important question that gets raised here is why we are giving so much importance to missing values. Can't they just be ignored? Ignoring missing values can affect the performance of a machine learning model heavily. If we ignore the missing values, our machine learning model may miss out on information about the dataset that could be useful for its training. Also, there are many models, such as LDA, which cannot handle missing values implicitly. So, to avoid this problem, we are going to impute the missing values with a strategy called mean imputation.
###Code
# Impute the missing values with mean imputation
cc_apps.fillna(cc_apps.mean(), inplace=True)
# Count the number of NaNs in the dataset to verify
cc_apps.isnull().values.sum()
###Output
_____no_output_____
###Markdown
5. Handling the missing values (part iii)We have successfully taken care of the missing values present in the numeric columns. There are still some missing values to be imputed for columns 0, 1, 3, 4, 5, 6 and 13. All of these columns contain non-numeric data, and this is why the mean imputation strategy would not work here. This needs a different treatment. We are going to impute these missing values with the most frequent values as present in the respective columns. This is good practice when it comes to imputing missing values for categorical data in general.
###Code
# Iterate over each column of cc_apps
for col in cc_apps.columns:
# Check if the column is of object type
if cc_apps[col].dtypes == 'object':
# Impute with the most frequent value
cc_apps[col] = cc_apps[col].fillna(cc_apps[col].value_counts().index[0])
# Count the number of NaNs in the dataset and print the counts to verify
cc_apps.isnull().values.sum()
###Output
_____no_output_____
###Markdown
6. Preprocessing the data (part i)The missing values are now successfully handled. There is still some minor but essential data preprocessing needed before we proceed towards building our machine learning model. We are going to divide these remaining preprocessing steps into three main tasks: convert the non-numeric data into numeric, split the data into train and test sets, and scale the feature values to a uniform range. First, we will be converting all the non-numeric values into numeric ones. We do this not only because it results in faster computation, but also because many machine learning models (like XGBoost), and especially the ones developed using scikit-learn, require the data to be in a strictly numeric format. We will do this by using a technique called label encoding.
###Code
# Import LabelEncoder
from sklearn.preprocessing import LabelEncoder
# Instantiate LabelEncoder
le = LabelEncoder()
# Iterate over all the values of each column and extract their dtypes
for col in cc_apps.columns:
# Compare if the dtype is object
if cc_apps[col].dtype=='object':
# Use LabelEncoder to do the numeric transformation
cc_apps[col]=le.fit_transform(cc_apps[col])
###Output
_____no_output_____
###Markdown
7. Splitting the dataset into train and test setsWe have successfully converted all the non-numeric values to numeric ones.Now, we will split our data into train set and test set to prepare our data for two different phases of machine learning modeling: training and testing. Ideally, no information from the test data should be used to scale the training data or should be used to direct the training process of a machine learning model. Hence, we first split the data and then apply the scaling.Also, features like DriversLicense and ZipCode are not as important as the other features in the dataset for predicting credit card approvals. We should drop them to design our machine learning model with the best set of features. In Data Science literature, this is often referred to as feature selection.
###Code
from sklearn.model_selection import train_test_split
# Drop the features 11 and 13 and convert the DataFrame to a NumPy array
cc_apps = cc_apps.drop([cc_apps.columns[10],cc_apps.columns[13]] , axis=1)
cc_apps = cc_apps.values
# Segregate features and labels into separate variables
X,y = cc_apps[:,0:13] , cc_apps[:,13]
# Split into train and test sets
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=42)
###Output
_____no_output_____
###Markdown
8. Preprocessing the data (part ii)The data is now split into two separate sets - train and test sets respectively. We are only left with one final preprocessing step of scaling before we can fit a machine learning model to the data. Now, let's try to understand what these scaled values mean in the real world. Let's use CreditScore as an example. The credit score of a person is their creditworthiness based on their credit history. The higher this number, the more financially trustworthy a person is considered to be. So, a CreditScore of 1 is the highest since we're rescaling all the values to the range of 0-1.
###Code
# Import MinMaxScaler
from sklearn.preprocessing import MinMaxScaler
# Instantiate MinMaxScaler and use it to rescale X_train and X_test
scaler = MinMaxScaler(feature_range=(0, 1))
rescaledX_train = scaler.fit_transform(X_train)
rescaledX_test = scaler.transform(X_test)  # transform only; the scaler is already fit on the training set
###Output
_____no_output_____
###Markdown
9. Fitting a logistic regression model to the train setEssentially, predicting if a credit card application will be approved or not is a classification task. According to UCI, our dataset contains more instances that correspond to "Denied" status than instances corresponding to "Approved" status. Specifically, out of 690 instances, there are 383 (55.5%) applications that got denied and 307 (44.5%) applications that got approved. This gives us a benchmark. A good machine learning model should be able to accurately predict the status of the applications with respect to these statistics.Which model should we pick? A question to ask is: are the features that affect the credit card approval decision process correlated with each other? Although we can measure correlation, that is outside the scope of this notebook, so we'll rely on our intuition that they indeed are correlated for now. Because of this correlation, we'll take advantage of the fact that generalized linear models perform well in these cases. Let's start our machine learning modeling with a Logistic Regression model (a generalized linear model).
###Code
from sklearn.linear_model import LogisticRegression
# Instantiate a LogisticRegression classifier with default parameter values
logreg = LogisticRegression()
# Fit logreg to the train set
logreg.fit(rescaledX_train,y_train)
###Output
_____no_output_____
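###Markdown
The correlation question raised above is left out of scope by the notebook, but as a rough, hypothetical check one could inspect pairwise correlations of the label-encoded columns (this sketch assumes ``cc_apps`` is still a pandas DataFrame, i.e. it would be run before the ``.values`` conversion in Task 7).
###Code
# Hypothetical sketch: pairwise Pearson correlations of the label-encoded columns.
corr_matrix = cc_apps.corr()
# List the strongest absolute correlations, excluding the diagonal (which is always 1.0)
strongest = corr_matrix.abs().unstack().sort_values(ascending=False)
print(strongest[strongest < 1.0].head(10))
###Output
_____no_output_____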
###Markdown
10. Making predictions and evaluating performanceBut how well does our model perform? We will now evaluate our model on the test set with respect to classification accuracy. But we will also take a look the model's confusion matrix. In the case of predicting credit card applications, it is equally important to see if our machine learning model is able to predict the approval status of the applications as denied that originally got denied. If our model is not performing well in this aspect, then it might end up approving the application that should have been approved. The confusion matrix helps us to view our model's performance from these aspects.
###Code
from sklearn.metrics import confusion_matrix
# Use logreg to predict instances from the test set and store it
y_pred = logreg.predict(rescaledX_test)
# Get the accuracy score of logreg model and print it
print("Accuracy of logistic regression classifier: ", logreg.score(rescaledX_test,Y_test))
# Print the confusion matrix of the logreg model
print(confusion_matrix(y_test, y_pred))
###Output
_____no_output_____
###Markdown
11. Grid searching and making the model perform betterOur model was pretty good! It was able to yield an accuracy score of almost 84%.For the confusion matrix, the first element of the of the first row of the confusion matrix denotes the true negatives meaning the number of negative instances (denied applications) predicted by the model correctly. And the last element of the second row of the confusion matrix denotes the true positives meaning the number of positive instances (approved applications) predicted by the model correctly.Let's see if we can do better. We can perform a grid search of the model parameters to improve the model's ability to predict credit card approvals.scikit-learn's implementation of logistic regression consists of different hyperparameters but we will grid search over the following two:tolmax_iter
###Code
# Import GridSearchCV
from sklearn.model_selection import GridSearchCV
# Define the grid of values for tol and max_iter (example search values)
tol = [0.01, 0.001, 0.0001]
max_iter = [100, 150, 200]
# Create a dictionary where tol and max_iter are keys and the lists of their values are corresponding values
param_grid = dict(tol=tol, max_iter=max_iter)
###Output
_____no_output_____
###Markdown
12. Finding the best performing modelWe have defined the grid of hyperparameter values and converted them into a single dictionary format which GridSearchCV() expects as one of its parameters. Now, we will begin the grid search to see which values perform best.We will instantiate GridSearchCV() with our earlier logreg model with all the data we have. Instead of passing train and test sets separately, we will supply X (scaled version) and y. We will also instruct GridSearchCV() to perform a cross-validation of five folds.We'll end the notebook by storing the best-achieved score and the respective best parameters.While building this credit card predictor, we tackled some of the most widely-known preprocessing steps such as scaling, label encoding, and missing value imputation. We finished with some machine learning to predict if a person's application for a credit card would get approved or not given some information about that person.
###Code
# Instantiate GridSearchCV with the required parameters
grid_model = GridSearchCV(estimator=logreg, param_grid=param_grid, cv=5)
# Use scaler to rescale X and assign it to rescaledX
rescaledX = scaler.fit_transform(X)
# Fit data to grid_model
grid_model_result = grid_model.fit(rescaledX, y)
# Summarize results
best_score, best_params = grid_model_result.best_score_, grid_model_result.best_params_
print("Best: %f using %s" % (best_score, best_params))
###Output
_____no_output_____ |
notebooks/features/responsible_ai/Interpretability - Tabular SHAP explainer.ipynb | ###Markdown
Interpretability - Tabular SHAP explainerIn this example, we use Kernel SHAP to explain a tabular classification model built from the Adults Census dataset.First we import the packages and define some UDFs we will need later.
###Code
import pyspark
from synapse.ml.explainers import *
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.sql.types import *
from pyspark.sql.functions import *
import pandas as pd
vec_access = udf(lambda v, i: float(v[i]), FloatType())
vec2array = udf(lambda vec: vec.toArray().tolist(), ArrayType(FloatType()))
###Output
_____no_output_____
###Markdown
Now let's read the data and train a simple binary classification model.
###Code
df = spark.read.parquet("wasbs://[email protected]/AdultCensusIncome.parquet")
labelIndexer = StringIndexer(inputCol="income", outputCol="label", stringOrderType="alphabetAsc").fit(df)
print("Label index assigment: " + str(set(zip(labelIndexer.labels, [0, 1]))))
training = labelIndexer.transform(df).cache()
display(training)
categorical_features = [
"workclass",
"education",
"marital-status",
"occupation",
"relationship",
"race",
"sex",
"native-country",
]
categorical_features_idx = [col + "_idx" for col in categorical_features]
categorical_features_enc = [col + "_enc" for col in categorical_features]
numeric_features = ["age", "education-num", "capital-gain", "capital-loss", "hours-per-week"]
strIndexer = StringIndexer(inputCols=categorical_features, outputCols=categorical_features_idx)
onehotEnc = OneHotEncoder(inputCols=categorical_features_idx, outputCols=categorical_features_enc)
vectAssem = VectorAssembler(inputCols=categorical_features_enc + numeric_features, outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", weightCol="fnlwgt")
pipeline = Pipeline(stages=[strIndexer, onehotEnc, vectAssem, lr])
model = pipeline.fit(training)
###Output
_____no_output_____
###Markdown
After the model is trained, we randomly select some observations to be explained.
###Code
explain_instances = model.transform(training).orderBy(rand()).limit(5).repartition(200).cache()
display(explain_instances)
###Output
_____no_output_____
###Markdown
We create a TabularSHAP explainer, set the input columns to all the features the model takes, specify the model and the target output column we are trying to explain. In this case, we are trying to explain the "probability" output which is a vector of length 2, and we are only looking at class 1 probability. Specify targetClasses to `[0, 1]` if you want to explain class 0 and 1 probability at the same time. Finally we sample 100 rows from the training data for background data, which is used for integrating out features in Kernel SHAP.
###Code
shap = TabularSHAP(
inputCols=categorical_features + numeric_features,
outputCol="shapValues",
numSamples=5000,
model=model,
targetCol="probability",
targetClasses=[1],
backgroundData=broadcast(training.orderBy(rand()).limit(100).cache()),
)
shap_df = shap.transform(explain_instances)
###Output
_____no_output_____
###Markdown
Once we have the resulting dataframe, we extract the class 1 probability of the model output, the SHAP values for the target class, the original features and the true label. Then we convert it to a pandas dataframe for visualization. For each observation, the first element in the SHAP values vector is the base value (the mean output of the background dataset), and each of the following elements is the SHAP value for the corresponding feature.
###Code
shaps = (
shap_df.withColumn("probability", vec_access(col("probability"), lit(1)))
.withColumn("shapValues", vec2array(col("shapValues").getItem(0)))
.select(["shapValues", "probability", "label"] + categorical_features + numeric_features)
)
shaps_local = shaps.toPandas()
shaps_local.sort_values("probability", ascending=False, inplace=True, ignore_index=True)
pd.set_option("display.max_colwidth", None)
shaps_local
###Output
_____no_output_____
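###Markdown
As a quick sanity check of the layout described above (a sketch, not part of the original example): the base value plus the per-feature SHAP values should approximately reconstruct the model's class-1 probability for each row, since Kernel SHAP estimates an additive explanation.
###Code
# Sketch: verify base value + sum of per-feature SHAP values ~= predicted probability.
row = shaps_local.iloc[0]
reconstructed = row["shapValues"][0] + sum(row["shapValues"][1:])
print(f"reconstructed: {reconstructed:.4f}, model probability: {row['probability']:.4f}")
###Output
_____no_output_____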
###Markdown
We use plotly subplot to visualize the SHAP values.
###Code
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import pandas as pd
features = categorical_features + numeric_features
features_with_base = ["Base"] + features
rows = shaps_local.shape[0]
fig = make_subplots(
rows=rows,
cols=1,
subplot_titles="Probability: " + shaps_local["probability"].apply("{:.2%}".format) + "; Label: " + shaps_local["label"].astype(str),
)
for index, row in shaps_local.iterrows():
feature_values = [0] + [row[feature] for feature in features]
shap_values = row["shapValues"]
list_of_tuples = list(zip(features_with_base, feature_values, shap_values))
shap_pdf = pd.DataFrame(list_of_tuples, columns=["name", "value", "shap"])
fig.add_trace(
go.Bar(x=shap_pdf["name"], y=shap_pdf["shap"], hovertext="value: " + shap_pdf["value"].astype(str)),
row=index + 1,
col=1,
)
fig.update_yaxes(range=[-1, 1], fixedrange=True, zerolinecolor="black")
fig.update_xaxes(type="category", tickangle=45, fixedrange=True)
fig.update_layout(height=400 * rows, title_text="SHAP explanations")
fig.show()
###Output
_____no_output_____
###Markdown
Interpretability - Tabular SHAP explainer In this example, we use Kernel SHAP to explain a tabular classification model built from the Adult Census Income dataset. First, we import the packages and define some UDFs we will need later.
###Code
import pyspark
from synapse.ml.explainers import *
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.sql.types import *
from pyspark.sql.functions import *
import pandas as pd
import os
if os.environ.get("AZURE_SERVICE", None) == "Microsoft.ProjectArcadia":
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
from notebookutils.visualization import display
vec_access = udf(lambda v, i: float(v[i]), FloatType())
vec2array = udf(lambda vec: vec.toArray().tolist(), ArrayType(FloatType()))
###Output
_____no_output_____
###Markdown
Now let's read the data and train a simple binary classification model.
###Code
df = spark.read.parquet(
"wasbs://[email protected]/AdultCensusIncome.parquet"
)
labelIndexer = StringIndexer(
inputCol="income", outputCol="label", stringOrderType="alphabetAsc"
).fit(df)
print("Label index assigment: " + str(set(zip(labelIndexer.labels, [0, 1]))))
training = labelIndexer.transform(df).cache()
display(training)
categorical_features = [
"workclass",
"education",
"marital-status",
"occupation",
"relationship",
"race",
"sex",
"native-country",
]
categorical_features_idx = [col + "_idx" for col in categorical_features]
categorical_features_enc = [col + "_enc" for col in categorical_features]
numeric_features = [
"age",
"education-num",
"capital-gain",
"capital-loss",
"hours-per-week",
]
strIndexer = StringIndexer(
inputCols=categorical_features, outputCols=categorical_features_idx
)
onehotEnc = OneHotEncoder(
inputCols=categorical_features_idx, outputCols=categorical_features_enc
)
vectAssem = VectorAssembler(
inputCols=categorical_features_enc + numeric_features, outputCol="features"
)
lr = LogisticRegression(featuresCol="features", labelCol="label", weightCol="fnlwgt")
pipeline = Pipeline(stages=[strIndexer, onehotEnc, vectAssem, lr])
model = pipeline.fit(training)
###Output
_____no_output_____
###Markdown
After the model is trained, we randomly select some observations to be explained.
###Code
explain_instances = (
model.transform(training).orderBy(rand()).limit(5).repartition(200).cache()
)
display(explain_instances)
###Output
_____no_output_____
###Markdown
We create a TabularSHAP explainer, set the input columns to all the features the model takes, and specify the model and the target output column we are trying to explain. In this case, we are trying to explain the "probability" output, which is a vector of length 2, and we are only looking at the class 1 probability. Set targetClasses to `[0, 1]` if you want to explain the class 0 and class 1 probabilities at the same time. Finally, we sample 100 rows from the training data as background data, which is used for integrating out features in Kernel SHAP.
###Code
shap = TabularSHAP(
inputCols=categorical_features + numeric_features,
outputCol="shapValues",
numSamples=5000,
model=model,
targetCol="probability",
targetClasses=[1],
backgroundData=broadcast(training.orderBy(rand()).limit(100).cache()),
)
shap_df = shap.transform(explain_instances)
###Output
_____no_output_____
###Markdown
Once we have the resulting dataframe, we extract the class 1 probability of the model output, the SHAP values for the target class, the original features, and the true label. Then we convert it to a pandas dataframe for visualization. For each observation, the first element in the SHAP values vector is the base value (the mean output of the background dataset), and each of the following elements is the SHAP value for the corresponding feature.
###Code
shaps = (
shap_df.withColumn("probability", vec_access(col("probability"), lit(1)))
.withColumn("shapValues", vec2array(col("shapValues").getItem(0)))
.select(
["shapValues", "probability", "label"] + categorical_features + numeric_features
)
)
shaps_local = shaps.toPandas()
shaps_local.sort_values("probability", ascending=False, inplace=True, ignore_index=True)
pd.set_option("display.max_colwidth", None)
shaps_local
###Output
_____no_output_____
###Markdown
We use plotly subplot to visualize the SHAP values.
###Code
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import pandas as pd
features = categorical_features + numeric_features
features_with_base = ["Base"] + features
rows = shaps_local.shape[0]
fig = make_subplots(
rows=rows,
cols=1,
subplot_titles="Probability: "
+ shaps_local["probability"].apply("{:.2%}".format)
+ "; Label: "
+ shaps_local["label"].astype(str),
)
for index, row in shaps_local.iterrows():
feature_values = [0] + [row[feature] for feature in features]
shap_values = row["shapValues"]
list_of_tuples = list(zip(features_with_base, feature_values, shap_values))
shap_pdf = pd.DataFrame(list_of_tuples, columns=["name", "value", "shap"])
fig.add_trace(
go.Bar(
x=shap_pdf["name"],
y=shap_pdf["shap"],
hovertext="value: " + shap_pdf["value"].astype(str),
),
row=index + 1,
col=1,
)
fig.update_yaxes(range=[-1, 1], fixedrange=True, zerolinecolor="black")
fig.update_xaxes(type="category", tickangle=45, fixedrange=True)
fig.update_layout(height=400 * rows, title_text="SHAP explanations")
fig.show()
###Output
_____no_output_____
###Markdown
Interpretability - Tabular SHAP explainer In this example, we use Kernel SHAP to explain a tabular classification model built from the Adult Census Income dataset. First, we import the packages and define some UDFs we will need later.
###Code
import pyspark
from synapse.ml.explainers import *
from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import StringIndexer, OneHotEncoder, VectorAssembler
from pyspark.sql.types import *
from pyspark.sql.functions import *
import pandas as pd
vec_access = udf(lambda v, i: float(v[i]), FloatType())
vec2array = udf(lambda vec: vec.toArray().tolist(), ArrayType(FloatType()))
###Output
_____no_output_____
###Markdown
Now let's read the data and train a simple binary classification model.
###Code
df = spark.read.parquet("wasbs://[email protected]/AdultCensusIncome.parquet")
labelIndexer = StringIndexer(inputCol="income", outputCol="label", stringOrderType="alphabetAsc").fit(df)
print("Label index assigment: " + str(set(zip(labelIndexer.labels, [0, 1]))))
training = labelIndexer.transform(df)
display(training)
categorical_features = [
"workclass",
"education",
"marital-status",
"occupation",
"relationship",
"race",
"sex",
"native-country",
]
categorical_features_idx = [col + "_idx" for col in categorical_features]
categorical_features_enc = [col + "_enc" for col in categorical_features]
numeric_features = ["age", "education-num", "capital-gain", "capital-loss", "hours-per-week"]
strIndexer = StringIndexer(inputCols=categorical_features, outputCols=categorical_features_idx)
onehotEnc = OneHotEncoder(inputCols=categorical_features_idx, outputCols=categorical_features_enc)
vectAssem = VectorAssembler(inputCols=categorical_features_enc + numeric_features, outputCol="features")
lr = LogisticRegression(featuresCol="features", labelCol="label", weightCol="fnlwgt")
pipeline = Pipeline(stages=[strIndexer, onehotEnc, vectAssem, lr])
model = pipeline.fit(training)
###Output
_____no_output_____
###Markdown
After the model is trained, we randomly select some observations to be explained.
###Code
explain_instances = model.transform(training).orderBy(rand()).limit(5).repartition(200).cache()
display(explain_instances)
###Output
_____no_output_____
###Markdown
We create a TabularSHAP explainer, set the input columns to all the features the model takes, and specify the model and the target output column we are trying to explain. In this case, we are trying to explain the "probability" output, which is a vector of length 2, and we are only looking at the class 1 probability. Set targetClasses to `[0, 1]` if you want to explain the class 0 and class 1 probabilities at the same time. Finally, we sample 100 rows from the training data as background data, which is used for integrating out features in Kernel SHAP.
###Code
shap = TabularSHAP(
inputCols=categorical_features + numeric_features,
outputCol="shapValues",
numSamples=5000,
model=model,
targetCol="probability",
targetClasses=[1],
backgroundData=broadcast(training.orderBy(rand()).limit(100).cache()),
)
shap_df = shap.transform(explain_instances)
###Output
_____no_output_____
###Markdown
Once we have the resulting dataframe, we extract the class 1 probability of the model output, the SHAP values for the target class, the original features, and the true label. Then we convert it to a pandas dataframe for visualization. For each observation, the first element in the SHAP values vector is the base value (the mean output of the background dataset), and each of the following elements is the SHAP value for the corresponding feature.
###Code
shaps = (
shap_df.withColumn("probability", vec_access(col("probability"), lit(1)))
.withColumn("shapValues", vec2array(col("shapValues").getItem(0)))
.select(["shapValues", "probability", "label"] + categorical_features + numeric_features)
)
shaps_local = shaps.toPandas()
shaps_local.sort_values("probability", ascending=False, inplace=True, ignore_index=True)
pd.set_option("display.max_colwidth", None)
shaps_local
###Output
_____no_output_____
###Markdown
We use plotly subplot to visualize the SHAP values.
###Code
from plotly.subplots import make_subplots
import plotly.graph_objects as go
import pandas as pd
features = categorical_features + numeric_features
features_with_base = ["Base"] + features
rows = shaps_local.shape[0]
fig = make_subplots(
rows=rows,
cols=1,
subplot_titles="Probability: " + shaps_local["probability"].apply("{:.2%}".format) + "; Label: " + shaps_local["label"].astype(str),
)
for index, row in shaps_local.iterrows():
feature_values = [0] + [row[feature] for feature in features]
shap_values = row["shapValues"]
list_of_tuples = list(zip(features_with_base, feature_values, shap_values))
shap_pdf = pd.DataFrame(list_of_tuples, columns=["name", "value", "shap"])
fig.add_trace(
go.Bar(x=shap_pdf["name"], y=shap_pdf["shap"], hovertext="value: " + shap_pdf["value"].astype(str)),
row=index + 1,
col=1,
)
fig.update_yaxes(range=[-1, 1], fixedrange=True, zerolinecolor="black")
fig.update_xaxes(type="category", tickangle=45, fixedrange=True)
fig.update_layout(height=400 * rows, title_text="SHAP explanations")
fig.show()
###Output
_____no_output_____ |
Notebooks/Lec 17 - Model Misspecification.ipynb | ###Markdown
Model MisspecificationFor instance, we may assume that the dependent variable is a linear function of two independent variables. If the model is not correctly specified, regression assumptions will be violated and the model will not be accurate. Below we define and explain many of the common model specification errors. Exclusion of important variablesIf we omit a variable which is uncorrelated with the variables that we do include, we will simply not explain the dependent variable as well as we could. However, if the omitted variable (say, $X_2$) is correlated with the included variable ($X_1$), then the omission additionally affects the model. The coefficient of $X_1$ and the constant term in the regression will be biased by trying to compensate for the omission of $X_2$. This can lead us to overestimate the effect of $X_1$ on the dependent variable. Also, estimated values of the coefficients and the estimated standard errors will be inconsistent.In particular, we may be led to believe that two variables have a causal relationship because of their high correlation, when in fact they are both caused by a third. For instance, if two stocks both follow the market, or two quantities both tend to increase with time, they will be highly correlated.
###Code
# Pull the pricing data for our two stocks and S&P 500
start = '2013-01-01'
end = '2015-01-01'
bench = get_pricing('SPY', fields='price', start_date=start, end_date=end) #S&P 500
a1 = get_pricing('LRCX', fields='price', start_date=start, end_date=end) #Lam Research
a2 = get_pricing('AAPL', fields='price', start_date=start, end_date=end) #Apple
# Perform linear regression and print R-squared values
slr12 = regression.linear_model.OLS(a2, sm.add_constant(a1)).fit()
slrb1 = regression.linear_model.OLS(a1, sm.add_constant(bench)).fit()
slrb2 = regression.linear_model.OLS(a2, sm.add_constant(bench)).fit()
print "R-squared values of linear regression"
print "LRCX and AAPL: ", slr12.rsquared
print "LRCX and SPY: ", slrb1.rsquared
print "AAPL and SPY: ", slrb2.rsquared
plt.plot(bench)
plt.plot(a1)
plt.plot(a2)
plt.legend()
###Output
R-squared values of linear regression
LRCX and AAPL: 0.920633448032
LRCX and SPY: 0.88185329246
AAPL and SPY: 0.824172499467
###Markdown
It's hard to see consistency, or lack of it, since it is asymptotic and probabilistic. However, we can extend our sample period to see how the R-squared value changes. The correlation between the stocks and the market seems to persist, while the correlation between the two stocks drops. So we would be better off predicting the stock prices from the market price than from each other.
###Code
# Pull pricing data from further back
start = '2009-01-01'
end = '2015-01-01'
bench = get_pricing('SPY', fields='price', start_date=start, end_date=end)
a1 = get_pricing('LRCX', fields='price', start_date=start, end_date=end)
a2 = get_pricing('AAPL', fields='price', start_date=start, end_date=end)
# Perform linear regression and print R-squared values
slr12 = regression.linear_model.OLS(a2, sm.add_constant(a1)).fit()
slrb1 = regression.linear_model.OLS(a1, sm.add_constant(bench)).fit()
slrb2 = regression.linear_model.OLS(a2, sm.add_constant(bench)).fit()
print "R-squared values of linear regression"
print "LRCX and AAPL: ", slr12.rsquared
print "LRCX and SPY: ", slrb1.rsquared
print "AAPL and SPY: ", slrb2.rsquared
plt.plot(bench)
plt.plot(a1)
plt.plot(a2)
plt.legend()
###Output
R-squared values of linear regression
LRCX and AAPL: 0.530905362926
LRCX and SPY: 0.737620243278
AAPL and SPY: 0.793107070407
###Markdown
The best way to avoid this issue is to choose the independent variables which you have reason to believe will be good predictors of the dependent variable before starting the regression analysis. "Before" is key: it's important not to pick variables just based on how good the regression analysis looks because that leads to overfitting. Inclusion of unnecessary variablesConversely, we can have a model which includes too many independent variables. If we include a truly unnecessary variable, we will have a lower adjusted R-squared and less precise estimates of the other regression coefficients. That is, our analysis of the model will be weakened, but the model itself will not change.If we include variables that are only mostly irrelevant, however, we can artificially improve the fit and the R-squared of our model by adding bits of the slightly-correlated variables to conform to the sample data. This runs the risk of overfitting, since the small adjustments we make are sample-specific. For example, below we run a regression with PEP price as the independent variable and PG price as the dependent variable (which makes some sense as they are in the same sector) and then run another regression with three random other stocks added in.
###Code
# Load one year's worth of pricing data for five different assets
start = '2014-01-01'
end = '2015-01-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end) #Pepsico
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end) #McDonald's
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end) #Athenahealth
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end) #Dow Jones Industrial Average
y = get_pricing('PG', fields='price', start_date=start, end_date=end) #Procter & Gamble
# Build a linear model using only x1 to explain y
slr = regression.linear_model.OLS(y, sm.add_constant(x1)).fit()
slr_prediction = slr.params[0] + slr.params[1]*x1
# Run multiple linear regression using x1, x2, x3, x4 to explain y
mlr = regression.linear_model.OLS(y, sm.add_constant(np.column_stack((x1,x2,x3,x4)))).fit()
mlr_prediction = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Compute adjusted R-squared for the two different models
print 'SLR R-squared:', slr.rsquared_adj
print 'MLR R-squared:', mlr.rsquared_adj
# Plot y along with the two different predictions
y.plot()
slr_prediction.plot()
mlr_prediction.plot()
plt.legend(['PG', 'SLR', 'MLR']);
###Output
SLR R-squared: 0.714538080242
MLR R-squared: 0.888347333447
###Markdown
We are able to tune the model with more variables more precisely to the data. Note that although adjusted R-squared penalizes us for using more variables, the number of samples here is so large that the adjustment is tiny. Let's see what happens if we use the same linear models to predict the price of PG for the next six months:
###Code
# Load a year and a half of pricing data
start = '2015-01-01'
end = '2015-06-01'
x1 = get_pricing('PEP', fields='price', start_date=start, end_date=end)
x2 = get_pricing('MCD', fields='price', start_date=start, end_date=end)
x3 = get_pricing('ATHN', fields='price', start_date=start, end_date=end)
x4 = get_pricing('DOW', fields='price', start_date=start, end_date=end)
y = get_pricing('PG', fields='price', start_date=start, end_date=end)
# Extend our model from before to the new time period
slr_prediction2 = slr.params[0] + slr.params[1]*x1
mlr_prediction2 = mlr.params[0] + mlr.params[1]*x1 + mlr.params[2]*x2 + mlr.params[3]*x3 + mlr.params[4]*x4
# Compute adjusted R-squared over the extended time period
adj = float(len(y) - 1)/(len(y) - 5) # Compute adjustment factor
SST = sum((y - np.mean(y))**2)
SSRs = sum((slr_prediction2 - y)**2)
print 'SLR R-squared:', 1 - adj*SSRs/SST
SSRm = sum((mlr_prediction2 - y)**2)
print 'MLR R-squared:', 1 - adj*SSRm/SST
# Plot y along with the two different predictions
y.plot()
slr_prediction2.plot()
mlr_prediction2.plot()
plt.legend(['PG', 'SLR', 'MLR']);
###Output
SLR R-squared: -0.742493773006
MLR R-squared: -1.74144128938
|
worksheets/Week 02 - Worksheet 1 - Syntax - if statements.ipynb | ###Markdown
Week 01, Worksheet 0: Python syntax (`if` statements)This worksheet will invite you to tinker with the examples, as they are live code cells. Instead of the normal fill-in-the-blank style of notebook, feel free to mess with the code directly. Remember that -- to test things out -- the Sandbox is available to you as well.While grading, you will see a line referencing the following characters: \ud83c\udf89\ud83c\udf89\ud83c\udf89These refer to the characters: 🎉🎉🎉 `if` I complete this worksheet...`if` statements depart a bit from our traditional Python syntax. Whereas we've been focusing on assignment, and now relative equality, our work takes us into an area of programming which contemplates _how code actually_ runs. Why talk about this now instead of last week? Because we're going to mess with it a bit. Flow of control (or control flow)How do we understand the following code snippet to run?```python We have five widgetswidgets = 5 The Professor gives us five morewidgets += 5 Due to a complex social situation, we owe 9/10 of our widgets to friendswidgets -= .90 * widgets While once rich with widgets, we now have...print(widgets)```I can hear you through the internet: TOP TO BOTTOM! You're right. And code will _generally_ still follow this rule, which implies (for the above code):* Variables must be created before we can use them* If the value of a variable changes over time, the most recent assignment "wins"* Whatever the value of the variable is at the end of the code is the final valueThese will all still be true, but sometimes (depending on circumstances), we can jump ahead a bit. Back to `if` statementsSometimes we want to make different decisions in our code based on whether or not a condition is `True`. In this case, we engage branching logic to assist us in our programmatic decision-making by using the `if` statement. This statement takes the general form of:```pythonif CONDITION: Functionality if true```Here, `CONDITION` substitutes for some `boolean` value or expression which _must be true_. Notice also that the line following the `if` portion of the statement is **_indented 4 spaces_**. Indentation is an important part of the Python language: it identifies what _belongs to_ this branch of our "branching logic": the ` Functionality if true` portion should only work if the `CONDITION` is true. If not, it skips it. So:```pythonif widgets > 5: print("We're better off than we were before!")```But, as we know from our example, we're not better off -- we actually only have `1` widget left! We need to be able to accommodate this.```pythonif widgets > 5: print("We're better off than we were before!")else: print("Somewhere we lost some widgets...")```Here, we use an `else` clause to indicate what to do if the `CONDITION` (in this case, `widgets > 5`) isn't true.Let's say, for the sake of example, that if we're completely out of widgets we want to do something else. We have the following situation:* If `widgets > 5`, we're rich* If `widgets > 0`, at least we have a widget left* If `widgets == 0`, we're probably sadHow do we model that in code?```pythonif widgets > 5: print("We're rich!")elif widgets > 0: print("At least we still have one.")else: print("Oh no! No widgets. But, look! The Professor gave us one!") widgets +=1print(widgets)```Okay, okay, so I was nice. But, there are two things to notice about the above code:* There's this new thing called the `elif` or "else if"* We can use as many statements as we want in a branchBoth are important.
First, we can always add as many conditions as we want at any point. We already know about `relational` and `logical` combinations (`widgets > 0`), but the `elif` or "else if" allows us to do something else _in very specific cases_.In addition, we can write as many statements or expressions as we want in an `if` clause, as we see in the `else` branch of the statement above.Of course, we could go overboard:
###Code
# TINKER AWAY!
# We have five widgets
widgets = 5
# The Professor gives us five more
widgets += 5
# Due to a complex social situation, we owe 9/10 of our widgets to friends
widgets -= .90 * widgets
if widgets > 5:
print("We're rich!")
elif widgets == 4:
print("Not as many as before, but not so bad.")
elif widgets > 1:
print("Hm. We lost several somewhere...")
else:
print("Oh no! We only have the 1 widget! But, look -- The Professor gave us another one!")
widgets +=1
print("In the end, we have " + str(int(widgets)) + " widgets.")
###Output
_____no_output_____
###Markdown
Answer these questions about the above example. Feel free to modify the code above to check your work. 1. How many widgets do we need to have to display the message `Not as many as before, but not so bad.`? 4 2. How many widgets will trigger the message `Hm. We lost several somewhere...`? 2,3 3. Assign various numbers to the first statement (`widgets = 5`). Is there a way to trick the statement? Yes, but it's probably hard to find. What does this have to do with that "flow of control" thing?Good question. Thanks for asking.As we see in our examples above, our code still runs from top to bottom. However, we skip portions of it that do or don't run based on various conditions that vary as to their relative "truthiness." So, we can think of this as a frustration of the flow of control, not a negation of it. It diagrams like this:Each of the paths ends at `print(widgets)`, but as we can see the value of `widgets` at that moment is contingent on which "branch" the statement follows. A note on assignmentsThis means that, occasionally, we need to assign variables _outside_ of our structures in order to use them -- this follows the rule that variables have to _exist_ before we can modify or call on them. Imagine the following:```python We can't do this; truthiness hasn't been created yetif True: truthiness += 1``` Here, we have new options to assign variables to either `0` or `""` values. We can also assign variables to `None` -- essentially, _nothing_. It's kind of like someone in Congress voting "present."To do this, all we have to do is remember our data types, and work accordingly:```pythona_number = 0a_number = Nonea_string = ""a_string = Nonea_boolean = None```
###Code
# This comes in handy below with counter
truthiness = 0
if True:
truthiness += 1
###Output
_____no_output_____
###Markdown
The puzzle boxFor this worksheet, we're going to imagine that we have a brand new puzzle box with `5` buttons, each of which has two states ("pressed", "unpressed"). Our goal is to demonstrate how to open all of the doors of the puzzle box using the correct combinations of buttons. We need to combine our knowledge of these buttons (`booleans`) with our new power of the `if` statement to figure out how to express the solution combination.Here's how the box works:* certain combinations of buttons add to a tracking variable called `counter`* certain combinations of buttons subtract from `counter`* some combinations have no effect* one combination resets the `counter` to `0`* if `counter` is `5` and the "final" combination is entered, the box opens and should print the `message`The following combinations have effects:|Combination number |`button_one` |`button_two` |`button_three` |`button_four` |`button_five`| Effect on `counter`||-------------------|-------------|-------------|---------------|--------------|-------------|------------------||1|`True` |`False` |`True`|`True`|`False`|`+2`||2|`True` |`False` |`True`|`False`|`False`|`+1`||3|`True` |`False` |`False`|`True`|`False`|set to `0`||4|`True` |`False` |`False`|`False`|`False`|`+2`||5|`True` |`True` |`True`|`True`|`False`|`-1`||6|`False` |`False` |`False`|`True`|`False`|`+1`||7|`True` |`False` |`True`|`True`|`True`|`-2`||Final|`False` |`False` |`True`|`True`|`False`|The final combination!|Tips:* This activity uses the `not` operator from the last worksheet. For example, `Combination 1`:```pythonif button_one and button_three and button_four and not button_two and not button_five:```* You will also need to reset buttons to `False` or `True` between steps* Use the function `buttons()` (copy and paste exactly) at any time in the code to check button and counter status * This is not a "built-in" function, and is unique to this program; we will learn how to do this in the coming weeks
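As an illustration of how a single row of the table translates into code (this uses Combination 5, which is not necessarily one you need for the solution -- working out the right sequence is still up to you):
```python
# Sketch: Combination 5 (buttons one through four pressed, five unpressed) subtracts 1
if button_one and button_two and button_three and button_four and not button_five:
    counter -= 1
```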
###Code
# THIS FUNCTION IS INTENDED TO BE HELPFUL TO YOU.
# Use buttons() to reference the following cell to print the values of the buttons to check them
def buttons():
print("Button one:\t" + str(button_one))
print("Button two:\t" + str(button_two))
print("Button three:\t" + str(button_three))
print("Button four:\t" + str(button_four))
print("Button five:\t" + str(button_five))
print("Counter is:\t" + str(counter))
# Set up counter
counter = 0
# Set up message
message = "🎉🎉🎉You've opened the puzzle box!🎉🎉🎉"
# Initial state of buttons
button_one = False # button_one off
button_two = False # button_two off
button_three = False # button_three off
button_four = False # button_four off
button_five = False # button_five off
# Space to try combinations; keep in mind that you will change values one at a time by assigning them
# TODO
# Space to write the final combination
# TODO
###Output
_____no_output_____
###Markdown
Week 01, Worksheet 0: Python syntax (`if` statements)This worksheet will invite you to tinker with the examples, as they are live code cells. Instead of the normal fill-in-the-blank style of notebook, feel free to mess with the code directly. Remember that -- to test things out -- the Sandbox is available to you as well.While grading, you will see a line referencing the following characters: \ud83c\udf89\ud83c\udf89\ud83c\udf89These refer to the characters: 🎉🎉🎉 `if` I complete this worksheet...`if` statements depart a bit from our traditional Python syntax. Whereas we've been focusing on assignment, and now relative equality, our work takes us into an area of programming which contemplates _how code actually_ runs. Why talk about this now instead of last week? Because we're going to mess with it a bit. Flow of control (or control flow)How do we understand the following code snippet to run?```python We have five widgetswidgets = 5 The Professor gives us five morewidgets += 5 Due to a complex social situation, we owe 9/10 of our widgets to friendswidgets -= .90 * widgets While once rich with widgets, we now have...print(widgets)```I can hear you through the internet: TOP TO BOTTOM! You're right. And code will _generally_ still follow this rule, which implies (for the above code):* Variables must be created before we can use them* If the value of a variable changes over time, the most recent assignment "wins"* Whatever the value of the variable is at the end of the code is the final valueThese will all still be true, but sometimes (depending on circumstances), we can jump ahead a bit. Back to `if` statementsSometimes we want to make different decisions in our code based on whether or not a condition is `True`. In this case, we engage branching logic to assist us in our programmatic decision-making by using the `if` statement. This statement takes the general form of:```pythonif CONDITION: Functionality if true```Here, `CONDITION` substitutes for some `boolean` value or expression which _must be true_. Notice also that the line following the `if` portion of the statement is **_indented 4 spaces_**. Indentation is an important part of the Python language: it identifies what _belongs to_ this branch of our "branching logic": the ` Functionality if true` portion should only work if the `CONDITION` is true. If not, it skips it. So:```pythonif widgets > 5: print("We're better off than we were before!")```But, as we know from our example, we're not better off -- we actually only have `1` widget left! We need to be able to accommodate this.```pythonif widgets > 5: print("We're better off than we were before!")else: print("Somewhere we lost some widgets...")```Here, we use an `else` clause to indicate what to do if the `CONDITION` (in this case, `widgets > 5`) isn't true.Let's say, for the sake of example, that if we're completely out of widgets we want to do something else. We have the following situation:* If `widgets > 5`, we're rich* If `widgets > 0`, at least we have a widget left* If `widgets == 0`, we're probably sadHow do we model that in code?```pythonif widgets > 5: print("We're rich!")elif widgets > 0: print("At least we still have one.")else: print("Oh no! No widgets. But, look! The Professor gave us one!") widgets +=1print(widgets)```Okay, okay, so I was nice. But, there are two things to notice about the above code:* There's this new thing called the `elif` or "else if"* We can use as many statements as we want in a branchBoth are important.
First, we can always add as many conditions as we want at any point. We already know about `relational` and `logical` combinations (`widgets > 0`), but the `elif` or "else if" allows us to do something else _in very specific cases_.In addition, we can write as many statements or expressions as we want in an `if` clause, as we see in the `else` branch of the statement above.Of course, we could go overboard:
###Code
# TINKER AWAY!
# We have five widgets
widgets = 5
# The Professor gives us five more
widgets += 5
# Due to a complex social situation, we owe 9/10 of our widgets to friends
widgets -= .90 * widgets
if widgets > 5:
print("We're rich!")
elif widgets == 4:
print("Not as many as before, but not so bad.")
elif widgets > 1:
print("Hm. We lost several somewhere...")
else:
print("Oh no! We only have the 1 widget! But, look -- The Professor gave us another one!")
widgets +=1
print("In the end, we have " + str(int(widgets)) + " widgets.")
###Output
Oh no! We only have the 1 widget! But, look -- The Professor gave us another one!
In the end, we have 2 widgets.
###Markdown
Answer these questions about the above example. Feel free to modify the code above to check your work. 1. How many widgets do we need to have to display the message `Not as many as before, but not so bad.`? 4 2. How many widgets will trigger the message `Hm. We lost several somewhere...`? 2,3 3. Assign various numbers to the first statement (`widgets = 5`). Is there a way to trick the statement? Yes, but it's probably hard to find. What does this have to do with that "flow of control" thing?Good question. Thanks for asking.As we see in our examples above, our code still runs from top to bottom. However, we skip portions of it that do or don't run based on various conditions that vary as to their relative "truthiness." So, we can think of this as a frustration of the flow of control, not a negation of it. It diagrams like this:Each of the paths ends at `print(widgets)`, but as we can see the value of `widgets` at that moment is contingent on which "branch" the statement follows. A note on assignmentsThis means that, occasionally, we need to assign variables _outside_ of our structures in order to use them -- this follows the rule that variables have to _exist_ before we can modify or call on them. Imagine the following:```python We can't do this; truthiness hasn't been created yetif True: truthiness += 1``` Here, we have new options to assign variables to either `0` or `""` values. We can also assign variables to `None` -- essentially, _nothing_. It's kind of like someone in Congress voting "present."To do this, all we have to do is remember our data types, and work accordingly:```pythona_number = 0a_number = Nonea_string = ""a_string = Nonea_boolean = None```
###Code
# This comes in handy below with counter
truthiness = 0
if True:
truthiness += 1
###Output
_____no_output_____
###Markdown
The puzzle boxFor this worksheet, we're going to imagine that we have a brand new puzzle box with `5` buttons, each of which has two states ("pressed", "unpressed"). Our goal is to demonstrate how to open all of the doors of the puzzle box using the correct combinations of buttons. We need to combine our knowledge of these buttons (`booleans`) with our new power of the `if` statement to figure out how to express the solution combination.Here's how the box works:* certain combinations of buttons add to a tracking variable called `counter`* certain combinations of buttons subtract from `counter`* some combinations have no effect* one combination resets the `counter` to `0`* if `counter` is `5` and the "final" combination is entered, the box opens and should print the `message`The following combinations have effects:|Combination number |`button_one` |`button_two` |`button_three` |`button_four` |`button_five`| Effect on `counter`||-------------------|-------------|-------------|---------------|--------------|-------------|------------------||1|`True` |`False` |`True`|`True`|`False`|`+2`||2|`True` |`False` |`True`|`False`|`False`|`+1`||3|`True` |`False` |`False`|`True`|`False`|set to `0`||4|`True` |`False` |`False`|`False`|`False`|`+2`||5|`True` |`True` |`True`|`True`|`False`|`-1`||6|`False` |`False` |`False`|`True`|`False`|`+1`||7|`True` |`False` |`True`|`True`|`True`|`-2`||Final|`False` |`False` |`True`|`True`|`False`|The final combination!|Tips:* This activity uses the `not` operator from the last worksheet. For example, `Combination 1`:```pythonif button_one and button_three and button_four and not button_two and not button_five:```* You will also need to reset buttons to `False` or `True` between steps* Use the function `buttons()` (copy and paste exactly) at any time in the code to check button and counter status * This is not a "built-in" function, and is unique to this program; we will learn how to do this in the coming weeks
###Code
# THIS FUNCTION IS INTENDED TO BE HELPFUL TO YOU.
# Use buttons() to reference the following cell to print the values of the buttons to check them
def buttons():
print("Button one:\t" + str(button_one))
print("Button two:\t" + str(button_two))
print("Button three:\t" + str(button_three))
print("Button four:\t" + str(button_four))
print("Button five:\t" + str(button_five))
print("Counter is:\t" + str(counter))
# Set up counter
counter = 0
# Set up message
message = "🎉🎉🎉You've opened the puzzle box!🎉🎉🎉"
# Initial state of buttons
button_one = False # button_one off
button_two = False # button_two off
button_three = False # button_three off
button_four = False # button_four off
button_five = False # button_five off
# Space to try combinations; keep in mind that you will change values one at a time by assigning them
button_one = True
button_three = True
button_four = True
if button_one and button_three and button_four and not button_two and not button_five:
counter +=2
button_three = False
button_four = False
if button_one and not button_two and not button_three and not button_four and not button_five:
counter += 2
button_one = False
button_four = True
if button_four and not button_one and not button_two and not button_three and not button_five:
counter +=1
# Space to write the final combination
button_one = False # button_one off
button_two = False # button_two off
button_three = True # button_three on
button_four = True # button_four on
button_five = False # button_five off
if counter == 5 and button_three and button_four and not button_one and not button_two and not button_five:
print(message)
else:
print("Not quite yet.")
###Output
🎉🎉🎉You've opened the puzzle box!🎉🎉🎉
|
Implementations/FY21/ACC_BGD_National/Step 6 - Population catchment area generation.ipynb | ###Markdown
Step 6 - Catchment area generation This notebook converts OD matrices into spatial catchment areas for destinations. The catchment area extents are determined by travel-time cutoffs (1 hour, 2 hours, etc.). Note that catchments are exclusive -- even if an origin location in reality has effective access to 2 or more destinations, it is only considered part of the nearest destination's catchment.
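As a minimal illustration of that exclusivity rule (toy origin and destination IDs with made-up travel times, not project data):
```python
import pandas as pd

# Toy OD matrix: rows are origins, columns are destinations, values are travel times in minutes
od = pd.DataFrame({"D1": [10, 50, 70], "D2": [35, 20, 40]}, index=["O1", "O2", "O3"])

nearest_dest = od.idxmin(axis=1)      # each origin is assigned to exactly one (nearest) destination
within_cutoff = od.min(axis=1) <= 60  # origins beyond the 60-minute cutoff fall outside any catchment
print(nearest_dest[within_cutoff])
```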
###Code
import os, sys
import time
# data science basics
import pandas as pd
import geopandas as gpd
import numpy as np
# vector data basics
import shapely
from shapely import wkt
from shapely.wkt import loads
from shapely.ops import transform
from shapely.geometry import Point, MultiPoint
# raster data basics
import rasterio
from rasterio.profiles import DefaultGTiffProfile
from rasterio.transform import from_origin
from rasterio.features import rasterize
# other
import pyproj
import geopy
###Output
_____no_output_____
###Markdown
Setup Functions
###Code
import warnings
def fxn():
warnings.warn("deprecated", DeprecationWarning)
with warnings.catch_warnings():
warnings.simplefilter("ignore")
fxn()
# function for sorting alphanumerically
import re
def sorted_nicely( l ):
""" Sort the given iterable in the way that humans expect."""
convert = lambda text: int(text) if text.isdigit() else text
alphanum_key = lambda key: [ convert(c) for c in re.split('([0-9]+)', key) ]
return sorted(l, key = alphanum_key)
# function for sorting matrices smallest to largest, by origin ID then destination ID
def sort_od_matrix(od_matrix):
# sort by O_IDs, then dest node IDs
od_matrix = od_matrix.sort_values('Unnamed: 0').reindex(sorted_nicely(od_matrix.columns), axis=1)
# reset O_ID column to the front
od_matrix = od_matrix[ ['Unnamed: 0'] + [ col for col in od_matrix.columns if col != 'Unnamed: 0' ] ]
# set the Dest_ID column back to index so the shape is the same as the dWeight shape
    od_matrix.set_index('Unnamed: 0',inplace=True)
    return od_matrix
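# Hypothetical usage sketch (the path below is illustrative, not a project file):
#   od_matrix = pd.read_csv('intermediate/example_od_matrix.csv')
#   od_matrix = sort_od_matrix(od_matrix)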
###Output
_____no_output_____
###Markdown
Parameters
###Code
# pd.set_option('max_columns',None)
simplif_meters = 25
source_epsg = 4326
target_epsg = 3106
# WorldPop data parameters
# constraint_status = 'constrained'
constraint_status = 'unconstrained'
# wp_res = 100
wp_res = 250
# wp_res = '1k'
# Production date for outputs being used
# prod_date = '210312'
# prod_date = '210329'
prod_date = '210503'
###Output
_____no_output_____
###Markdown
File paths
###Code
# Local folders
input_pth = r'inputs\\dests'
interm_pth = r'intermediate'
fin_pth = r'final'
res_pth = r'results'
# Shared drive folders
tab_pth = r'../../../Tabular'
geo_pth = r'../../../GEO'
origin_folder = r'..\..\..\GEO\Population'
###Output
_____no_output_____
###Markdown
Define destinations
###Code
# # Looping lists
# dest_lst = ['All_cities', 'Minor_cities', 'Dhaka_Chitt',\
# 'Dry_ports', 'River_ports', 'Deep_sea_ports',\
# 'All_SEZs', 'Functioning_SEZs']
dest_lst = ['All_SEZs', 'Functioning_SEZs']
# destination = ['current_PopOrig_all_cities', 'current_PopOrig_deep_sea_ports', 'current_PopOrig_DhakaChitt', 'current_PopOrig_dry_ports', 'current_PopOrig_minor_cities',\
# 'current_CityOrig_all_cities', 'current_CityOrig_deep_sea_ports', 'current_CityOrig_DhakaChitt', 'current_CityOrig_dry_ports', 'current_CityOrig_minor_cities']
# Looping dicts
# dests_time_filt_dct = {'All_cities_PopOrigins' : {}, 'Deep_sea_ports_PopOrigins' : {}, 'Dhaka_Chitt_PopOrigins' : {}, 'Dry_ports_PopOrigins' : {}, 'Minor_cities_PopOrigins' : {},\
# 'All_cities_CityOrigins' : {}, 'Deep_sea_ports_CityOrigins' : {}, 'Dhaka_Chitt_CityOrigins' : {}, 'Dry_ports_CityOrigins' : {}, 'Minor_cities_CityOrigins': {}}
# dests_pop_time_filt_dct = {'All_cities_PopOrigins' : {}, 'Deep_sea_ports_PopOrigins' : {}, 'Dhaka_Chitt_PopOrigins' : {}, 'Dry_ports_PopOrigins' : {}, 'Minor_cities_PopOrigins' : {},\
# 'All_cities_CityOrigins' : {}, 'Deep_sea_ports_CityOrigins' : {}, 'Dhaka_Chitt_CityOrigins' : {}, 'Dry_ports_CityOrigins' : {}, 'Minor_cities_CityOrigins': {}}
# dests_time_filt_dct = {'All_cities' : {}, 'Minor_cities' : {}, 'Dhaka_Chitt' : {},\
# 'Dry_ports' : {}, 'River_ports' : {}, 'Deep_sea_ports': {},\
# 'All_SEZs' : {}, 'Functioning_SEZs' : {}}
# dests_pop_time_filt_dct = {'All_cities' : {}, 'Minor_cities' : {}, 'Dhaka_Chitt' : {},\
# 'Dry_ports' : {}, 'River_ports' : {}, 'Deep_sea_ports' : {},\
# 'All_SEZs' : {}, 'Functioning_SEZs' : {}}
dests_time_filt_dct = {'All_SEZs' : {}, 'Functioning_SEZs' : {}}
dests_pop_time_filt_dct = {'All_SEZs' : {}, 'Functioning_SEZs' : {}}
# # rename stuff that's badly named
# import re
# for key in dests_pop_time_filt_dct.keys():
# print(re.search('^(.*?_){2}',key).group())
###Output
_____no_output_____
###Markdown
Time filters to loop over (in minutes)
###Code
# Minute-wise time cutoffs as needed
# time_filters = [60, 90, 120, 180]
time_filters = [15,30,45,60,90]
###Output
_____no_output_____
###Markdown
Tabular data transformations New, per destination
###Code
# Loop over each destination, computing all the relevant, filtered and aggregated dataframes for later computational usage
for dest,v in dests_pop_time_filt_dct.items():
print(dest)
# read in od grid for calculations
dest_origs = pd.read_csv(os.path.join(res_pth,prod_date,f'final_od_grid_{dest}_PopOrigins_{constraint_status}_{wp_res}m_res_{simplif_meters}m_simplification.csv'))
# make dest_origs spatial
dest_origs['geometry'] = dest_origs['geometry'].apply(wkt.loads)
dest_origs_gdf = gpd.GeoDataFrame(dest_origs,geometry='geometry')
dest_origs_gdf['lon'] = dest_origs_gdf.geometry.x
dest_origs_gdf['lat'] = dest_origs_gdf.geometry.y
# Calculate raw filtered origin dataframes, populate to a dict
raw_time_filt_dct = {}
for t in time_filters:
df = dest_origs_gdf[dest_origs_gdf['PLOT_TIME_MINS'] <= t]
raw_time_filt_dct.update({t:df})
# Calculate the aggregate population for the *nearest* destination per time range
pop_time_filt_dct = {}
for k, v in raw_time_filt_dct.items():
df = pd.pivot_table(v,values='VALUE',index='D_ID',aggfunc='sum')\
.rename(columns={'VALUE' : 'Pop'})\
.reset_index()
pop_time_filt_dct.update({k:df})
# Insert these dics of filtered, aggregated data frames as the values of the master destination dict
dests_time_filt_dct[dest] = raw_time_filt_dct
dests_pop_time_filt_dct[dest] = pop_time_filt_dct
###Output
All_SEZs
Functioning_SEZs
###Markdown
Consolidate population per time band per destination and export to a shapefile
###Code
for dest_key, val_dct in dests_pop_time_filt_dct.items():
print(dest_key)
dest_gdf = pd.read_csv(os.path.join(fin_pth,prod_date,f'{dest_key}_{constraint_status}_{wp_res}m_res_{simplif_meters}m_simplification_snapped.csv'))
# # rename Destination columns
# if 'City' in dest_gdf.columns:
# dest_gdf.rename({'City':'Destination'},axis=1,inplace=True)
# elif 'RIVER_PORT' in dest.gdf.columns:
# dest_gdf.rename({'RIVER_PORT':'Destination'},axis=1,inplace=True)
# else:
# None
# load geometry of GDF
dest_gdf['geometry'] = dest_gdf['geometry'].apply(wkt.loads)
dest_gdf = gpd.GeoDataFrame(dest_gdf,geometry='geometry')
dest_gdf.rename({'NN':'D_ID'},axis=1,inplace=True)
dest_gdf.sort_values(by='D_ID',inplace=True)
# Merge the population with the destination GDF, then replace the val_dct with that, renamed for interpretability
for t, val_df in val_dct.items():
time_cutoff = str(t) + 'min'
dest_gdf = pd.merge(dest_gdf,val_df.rename(columns={'Pop' : time_cutoff}),how="left",on='D_ID')
# print(dest_gdf.head())
# print(val_df.head())
val_dct.update({t:dest_gdf[['D_ID','Destination',time_cutoff]].rename(columns={time_cutoff : 'Pop'})})
dest_gdf.to_file(os.path.join(res_pth,prod_date,f"spatial/{dest_key}_catchment_pops.shp"),driver="ESRI Shapefile")
dest_gdf.filter(regex='Destination|min').head(10)
###Output
_____no_output_____
###Markdown
Convert CSV to raster Two options: the rasterio approach below (https://stackoverflow.com/questions/62472750/how-to-rasterize-a-pandas-dataframe-with-many-points-per-pixel) or the gdal_grid method (https://gis.stackexchange.com/questions/254330/python-gdal-grid-correct-use). Also useful: https://gis.stackexchange.com/questions/279953/numpy-array-to-gtiff-using-rasterio-without-source-raster
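A hedged sketch of the "numpy array to GTiff using rasterio without a source raster" route referenced above; the array shape, origin coordinates, resolution, and output path below are placeholders, not project values:
```python
import numpy as np
import rasterio
from rasterio.transform import from_origin

arr = np.zeros((100, 100), dtype="float32")        # placeholder catchment array
transform = from_origin(300000, 700000, 250, 250)  # west, north, x-resolution, y-resolution (placeholders)

with rasterio.open(
    "results/example_catch.tif", "w", driver="GTiff",
    height=arr.shape[0], width=arr.shape[1], count=1,
    dtype=arr.dtype, crs="EPSG:3106", transform=transform, nodata=-9999,
) as dst:
    dst.write(arr, 1)
```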
###Code
def extent_catch(dest,filt_val,filtered_df,pop=False):
# read in existing worldpop raster to provide metadata conditions for new layers
with rasterio.open(os.path.join(origin_folder,f'WorldPop/{constraint_status}/bgd_ppp_2020_UNadj_{constraint_status}_{wp_res}m_3106.tif')) as wp_src:
prof = wp_src.profile
if pop == False:
None
else:
prof.update(count=2) # set number of bands
# Rasterize by nearest destination ID and filter out above maximum value
with rasterio.open(f"results/{prod_date}/spatial/{dest}_{filt_val}min_catch.tif", 'w+',**prof) as out:
out.nodata = -9999
# Read in the respective bands for later writing
out_arr1 = out.read(1)
# create a generator of geom, value pairs to use in rasterizing, then rasterize
dest_shapes = ((geom, dest_id) for geom, dest_id in zip(filtered_df.geometry, filtered_df["D_ID"].astype(int)))
dest_burned = rasterize(shapes=dest_shapes, fill=0, out=out_arr1, transform=out.transform)
# write band
out.write_band(1, dest_burned)
if pop == False:
None
else:
out_arr2 = out.read(2)
time_shapes = ((geom, time_to_reach) for geom, time_to_reach in zip(filtered_df.geometry, filtered_df["PLOT_TIME_MINS"].astype(int)))
time_burned = rasterize(shapes=time_shapes, fill=0, out=out_arr2, transform=out.transform)
out.write_band(2, time_burned)
###Output
_____no_output_____
###Markdown
Loop over each filtered dataframe of origins and output as a raster to the prod_date folder
###Code
for dest_key, val_dct in dests_time_filt_dct.items():
print(dest_key)
for t, v in val_dct.items():
print(t)
extent_catch(dest_key,t,v,pop=True)
###Output
All_SEZs
15
30
45
60
90
Functioning_SEZs
15
30
45
60
90
###Markdown
Convert raster to polygon
###Code
# Define a function to convert rasters to polygons and join in the population covered by each catchment
def catch_rast_to_poly(dest_name, catch_rast, rast_profile, time_filt, dest_pop_df):
# Start timer
func_start = time.time()
# open each created raster
with rasterio.open(catch_rast, 'r',**rast_profile) as rast:
# populate geoms list
results = (
{'properties': {'D_ID': v}, 'geometry': s}
for i, (s, v)
in enumerate(
rasterio.features.shapes(rast.read(1), transform=rast.transform)))
geoms = list(results)
# convert to GDF, clean up, and dissolve
catch_poly = gpd.GeoDataFrame.from_features(geoms)
catch_poly['D_ID'] = catch_poly['D_ID'].astype(int)
catch_poly['D_ID'].replace(-9999,0,inplace=True) # replace nulls with 0
catch_poly = catch_poly.dissolve(by='D_ID')
# join in total population, drop uncovered areas
catch_poly = pd.merge(catch_poly,dest_pop_df,how='left',on='D_ID')
catch_poly = catch_poly[catch_poly['D_ID'] != 0]
catch_poly.crs = f"EPSG:{target_epsg}"
catch_poly = catch_poly.to_crs(source_epsg)
# export to shapefile
catch_poly.to_file(f"results/{prod_date}/spatial/{dest_name}_{time_filt}min_catch_poly.shp",driver="ESRI Shapefile")
    # Report function time
    func_end = time.time()
    print(f'time elapsed for polygonizing {dest_name}')
    print(str((func_end - func_start) / 60) + ' minutes')
    return catch_poly
###Output
_____no_output_____
###Markdown
Create polygons from the catchment rasters and join in the populations covered for each destination
###Code
with rasterio.open(os.path.join(origin_folder,f'WorldPop/{constraint_status}/bgd_ppp_2020_UNadj_{constraint_status}_{wp_res}m_{source_epsg}.tif')) as wp_src:
prof = wp_src.profile
prof.update(count=2)
for dest_key, val_dct in dests_pop_time_filt_dct.items():
# Keep track of which destination is being processed
print(dest_key)
for t, val_df in val_dct.items():
# Keep track of which travel time is being processed
print(t)
catch_rast = f"results/{prod_date}/spatial/{dest_key}_{t}min_catch.tif"
catch_rast_to_poly(dest_name=dest_key,catch_rast=catch_rast,rast_profile = prof,time_filt=t,dest_pop_df=val_df)
###Output
All_SEZs
15
30
45
60
90
Functioning_SEZs
15
30
45
60
90
|
tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb | ###Markdown
[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb) **Detect sentence similarity** 1. Colab Setup
###Code
# Install PySpark and Spark NLP
! pip install -q pyspark==3.1.2 spark-nlp
###Output
_____no_output_____
###Markdown
2. Start the Spark session
###Code
import os
import json
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp.base import *
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the USE model If you change the model, re-run the cell that creates the pipeline so the pipeline will use the new model.
###Code
# If you change the model, re-run all the cells below.
# Applicable models: tfhub_use, tfhub_use_lg
MODEL_NAME = "tfhub_use"
os.environ['MODEL_NAME'] = MODEL_NAME
###Output
_____no_output_____
###Markdown
4. Some sample examples
###Code
# To compare the similarity of sentences, enter them as strings in this list.
text_list = [
"Sign up for our mailing list to get free offers and updates about our products!",
"Subscribe to notifications to receive information about discounts and new offerings.",
"Send in your information for a chance to win big in our Summer Sweepstakes!",
"After filling out this form, you will receive a confirmation email to complete your signup.",
"It was raining, so I waited beneath the balcony outside the cafe.",
"I stayed under the deck of the cafe because it was rainy outside.",
"I like the cafe down the street because it's not too loud in there.",
"The coffee shop near where I live is quiet, so I like to go there.",
"Web traffic analysis shows that most Internet users browse on mobile nowadays.",
"The analytics show that modern web users mostly use their phone instead of their computers."
]
###Output
_____no_output_____
###Markdown
Write the input sentences into a single file.
###Code
! mkdir inputs
! mkdir inputs/$MODEL_NAME
with open(f'inputs/{MODEL_NAME}/sentences.txt', 'w') as input_file:
for text in text_list:
input_file.write(text + '\n')
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
# Transforms the input text into a document usable by the SparkNLP pipeline.
document_assembler = DocumentAssembler()
document_assembler.setInputCol('text')
document_assembler.setOutputCol('document')
# Separates the text into individual tokens (words and punctuation).
tokenizer = Tokenizer()
tokenizer.setInputCols(['document'])
tokenizer.setOutputCol('token')
# Encodes the text as a single vector representing semantic features.
sentence_encoder = UniversalSentenceEncoder.pretrained(name=MODEL_NAME)
sentence_encoder.setInputCols(['document', 'token'])
sentence_encoder.setOutputCol('sentence_embeddings')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sentence_encoder
])
# Fit the model to an empty data frame so it can be used on inputs.
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
###Markdown
6. Run the pipeline This method will get the similarity of the embeddings of each pair of sentences in the list of sentences passed in. The similarity is returned as a matrix, where (0, 2), for example, represents the similarity of input sentence 0 and input sentence 2.
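As a usage sketch (illustrative; it relies on the `get_similarity` function defined in the next cell), individual pairs can be read straight out of the returned matrix:
```python
similarities = get_similarity(text_list)
print(similarities[0, 1])  # similarity of input sentences 0 and 1
print(similarities[0, 2])  # similarity of input sentences 0 and 2
```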
###Code
def get_similarity(input_list):
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = light_pipeline.transform(df)
embeddings = []
for r in result.collect():
embeddings.append(r.sentence_embeddings[0].embeddings)
embeddings_matrix = np.array(embeddings)
return np.matmul(embeddings_matrix, embeddings_matrix.transpose())
###Output
_____no_output_____
###Markdown
Write the computed similarities to a CSV file.
###Code
! mkdir outputs
! mkdir outputs/$MODEL_NAME
np.savetxt(f'outputs/{MODEL_NAME}/similarities.csv',
get_similarity(text_list),
delimiter=',')
###Output
_____no_output_____
###Markdown
7. Visualize results This method plots the gets the similarity of the sentences in the list using the method above, then it plots those similarities as a heatmap where dark red means "very similar" and pale yellow means "not similar at all".
###Code
import seaborn as sns
def plot_similarity(input_list):
g = sns.heatmap(
get_similarity(input_list),
xticklabels=input_list,
yticklabels=input_list,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(input_list, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity(text_list)
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb) **Detect sentence similarity** 1. Colab Setup
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
! pip install --ignore-installed spark-nlp
###Output
_____no_output_____
###Markdown
2. Start the Spark session
###Code
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp.base import *
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the USE model If you change the model, re-run the cell that creates the pipeline so the pipeline will use the new model.
###Code
# If you change the model, re-run all the cells below.
# Applicable models: tfhub_use, tfhub_use_lg
MODEL_NAME = "tfhub_use"
os.environ['MODEL_NAME'] = MODEL_NAME
###Output
_____no_output_____
###Markdown
4. Some example sentences
###Code
# To compare the similarity of sentences, enter them as strings in this list.
text_list = [
"Sign up for our mailing list to get free offers and updates about our products!",
"Subscribe to notifications to receive information about discounts and new offerings.",
"Send in your information for a chance to win big in our Summer Sweepstakes!",
"After filling out this form, you will receive a confirmation email to complete your signup.",
"It was raining, so I waited beneath the balcony outside the cafe.",
"I stayed under the deck of the cafe because it was rainy outside.",
"I like the cafe down the street because it's not too loud in there.",
"The coffee shop near where I live is quiet, so I like to go there.",
"Web traffic analysis shows that most Internet users browse on mobile nowadays.",
"The analytics show that modern web users mostly use their phone instead of their computers."
]
###Output
_____no_output_____
###Markdown
Write the input sentences into a single file.
###Code
! mkdir inputs
! mkdir inputs/$MODEL_NAME
with open(f'inputs/{MODEL_NAME}/sentences.txt', 'w') as input_file:
for text in text_list:
input_file.write(text + '\n')
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
# Transforms the input text into a document usable by the SparkNLP pipeline.
document_assembler = DocumentAssembler()
document_assembler.setInputCol('text')
document_assembler.setOutputCol('document')
# Separates the text into individual tokens (words and punctuation).
tokenizer = Tokenizer()
tokenizer.setInputCols(['document'])
tokenizer.setOutputCol('token')
# Encodes the text as a single vector representing semantic features.
sentence_encoder = UniversalSentenceEncoder.pretrained(name=MODEL_NAME)
sentence_encoder.setInputCols(['document', 'token'])
sentence_encoder.setOutputCol('sentence_embeddings')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sentence_encoder
])
# Fit the model to an empty data frame so it can be used on inputs.
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
_____no_output_____
###Markdown
6. Run the pipeline This method will get the similarity of the embeddings of each pair of sentences in the list of sentences passed in. The similarity is returned as a matrix, where (0, 2), for example, represents the similarity of input sentence 0 and input sentence 2.
###Code
def get_similarity(input_list):
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = light_pipeline.transform(df)
embeddings = []
for r in result.collect():
embeddings.append(r.sentence_embeddings[0].embeddings)
embeddings_matrix = np.array(embeddings)
return np.matmul(embeddings_matrix, embeddings_matrix.transpose())
###Output
_____no_output_____
###Markdown
Write the computed similarities to a CSV file.
###Code
! mkdir outputs
! mkdir outputs/$MODEL_NAME
np.savetxt(f'outputs/{MODEL_NAME}/similarities.csv',
get_similarity(text_list),
delimiter=',')
###Output
_____no_output_____
###Markdown
7. Visualize results This method gets the similarity of the sentences in the list using the method above, then plots those similarities as a heatmap where dark red means "very similar" and pale yellow means "not similar at all".
###Code
import seaborn as sns
def plot_similarity(input_list):
g = sns.heatmap(
get_similarity(input_list),
xticklabels=input_list,
yticklabels=input_list,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(input_list, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity(text_list)
###Output
_____no_output_____
###Markdown
[](https://githubtocolab.com/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb) **Detect sentence similarity** 1. Colab Setup
###Code
!wget http://setup.johnsnowlabs.com/colab.sh -O - | bash
# !bash colab.sh
# -p is for pyspark
# -s is for spark-nlp
# !bash colab.sh -p 3.1.1 -s 3.0.1
# by default they are set to the latest
###Output
openjdk version "11.0.10" 2021-01-19
OpenJDK Runtime Environment (build 11.0.10+9-Ubuntu-0ubuntu1.18.04)
OpenJDK 64-Bit Server VM (build 11.0.10+9-Ubuntu-0ubuntu1.18.04, mixed mode, sharing)
setup Colab for PySpark 3.1.1 and Spark NLP 3.0.0
[K |████████████████████████████████| 212.3MB 76kB/s
[K |████████████████████████████████| 143kB 39.9MB/s
[K |████████████████████████████████| 204kB 50.0MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
###Markdown
2. Start the Spark session
###Code
import os
import json
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp.base import *
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the USE model If you change the model, re-run the cell that creates the pipeline so the pipeline will use the new model.
###Code
# If you change the model, re-run all the cells below.
# Applicable models: tfhub_use, tfhub_use_lg
MODEL_NAME = "tfhub_use"
os.environ['MODEL_NAME'] = MODEL_NAME
###Output
_____no_output_____
###Markdown
4. Some example sentences
###Code
# To compare the similarity of sentences, enter them as strings in this list.
text_list = [
"Sign up for our mailing list to get free offers and updates about our products!",
"Subscribe to notifications to receive information about discounts and new offerings.",
"Send in your information for a chance to win big in our Summer Sweepstakes!",
"After filling out this form, you will receive a confirmation email to complete your signup.",
"It was raining, so I waited beneath the balcony outside the cafe.",
"I stayed under the deck of the cafe because it was rainy outside.",
"I like the cafe down the street because it's not too loud in there.",
"The coffee shop near where I live is quiet, so I like to go there.",
"Web traffic analysis shows that most Internet users browse on mobile nowadays.",
"The analytics show that modern web users mostly use their phone instead of their computers."
]
###Output
_____no_output_____
###Markdown
Write the input sentences into a single file.
###Code
! mkdir inputs
! mkdir inputs/$MODEL_NAME
with open(f'inputs/{MODEL_NAME}/sentences.txt', 'w') as input_file:
for text in text_list:
input_file.write(text + '\n')
###Output
_____no_output_____
###Markdown
5. Define Spark NLP pipeline
###Code
# Transforms the input text into a document usable by the SparkNLP pipeline.
document_assembler = DocumentAssembler()
document_assembler.setInputCol('text')
document_assembler.setOutputCol('document')
# Separates the text into individual tokens (words and punctuation).
tokenizer = Tokenizer()
tokenizer.setInputCols(['document'])
tokenizer.setOutputCol('token')
# Encodes the text as a single vector representing semantic features.
sentence_encoder = UniversalSentenceEncoder.pretrained(name=MODEL_NAME)
sentence_encoder.setInputCols(['document', 'token'])
sentence_encoder.setOutputCol('sentence_embeddings')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sentence_encoder
])
# Fit the model to an empty data frame so it can be used on inputs.
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
###Markdown
6. Run the pipeline This method will get the similarity of the embeddings of each pair of sentences in the list of sentences passed in. The similarity is returned as a matrix, where (0, 2), for example, represents the similarity of input sentence 0 and input sentence 2.
###Code
def get_similarity(input_list):
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = light_pipeline.transform(df)
embeddings = []
for r in result.collect():
embeddings.append(r.sentence_embeddings[0].embeddings)
embeddings_matrix = np.array(embeddings)
return np.matmul(embeddings_matrix, embeddings_matrix.transpose())
###Output
_____no_output_____
###Markdown
Write the computed similarities to a CSV file.
###Code
! mkdir outputs
! mkdir outputs/$MODEL_NAME
np.savetxt(f'outputs/{MODEL_NAME}/similarities.csv',
get_similarity(text_list),
delimiter=',')
###Output
_____no_output_____
###Markdown
7. Visualize results This method gets the similarity of the sentences in the list using the method above, then plots those similarities as a heatmap where dark red means "very similar" and pale yellow means "not similar at all".
###Code
import seaborn as sns
def plot_similarity(input_list):
g = sns.heatmap(
get_similarity(input_list),
xticklabels=input_list,
yticklabels=input_list,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(input_list, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity(text_list)
###Output
_____no_output_____
###Markdown
[](https://colab.research.google.com/github/JohnSnowLabs/spark-nlp-workshop/blob/master/tutorials/streamlit_notebooks/SENTENCE_SIMILARITY.ipynb) **Detect sentence similarity** 1. Colab Setup
###Code
# Install Java
! apt-get update -qq
! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null
! java -version
# Install pyspark
! pip install --ignore-installed -q pyspark==2.4.4
# Install SparkNLP
! pip install --ignore-installed spark-nlp
###Output
openjdk version "11.0.8" 2020-07-14
OpenJDK Runtime Environment (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1)
OpenJDK 64-Bit Server VM (build 11.0.8+10-post-Ubuntu-0ubuntu118.04.1, mixed mode, sharing)
[K |████████████████████████████████| 215.7MB 63kB/s
[K |████████████████████████████████| 204kB 45.5MB/s
[?25h Building wheel for pyspark (setup.py) ... [?25l[?25hdone
Collecting spark-nlp
[?25l Downloading https://files.pythonhosted.org/packages/b5/a2/5c2e18a65784442ded6f6c58af175ca4d99649337de569fac55b04d7ed8e/spark_nlp-2.5.5-py2.py3-none-any.whl (124kB)
[K |████████████████████████████████| 133kB 2.8MB/s
[?25hInstalling collected packages: spark-nlp
Successfully installed spark-nlp-2.5.5
###Markdown
2. Start the Spark session
###Code
import os
import json
os.environ['JAVA_HOME'] = "/usr/lib/jvm/java-8-openjdk-amd64"
import pandas as pd
import numpy as np
from pyspark.ml import Pipeline
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
import sparknlp
from sparknlp.annotator import *
from sparknlp.base import *
from sparknlp.pretrained import PretrainedPipeline
spark = sparknlp.start()
###Output
_____no_output_____
###Markdown
3. Select the USE model If you change the model, re-run the cell that creates the pipeline so the pipeline will use the new model.
###Code
# If you change the model, re-run all the cells below.
# Applicable models: tfhub_use, tfhub_use_lg
MODEL_NAME = "tfhub_use"
os.environ['MODEL_NAME'] = MODEL_NAME
###Output
_____no_output_____
###Markdown
4. Some example sentences
###Code
# To compare the similarity of sentences, enter them as strings in this list.
text_list = [
"Sign up for our mailing list to get free offers and updates about our products!",
"Subscribe to notifications to receive information about discounts and new offerings.",
"Send in your information for a chance to win big in our Summer Sweepstakes!",
"After filling out this form, you will receive a confirmation email to complete your signup.",
"It was raining, so I waited beneath the balcony outside the cafe.",
"I stayed under the deck of the cafe because it was rainy outside.",
"I like the cafe down the street because it's not too loud in there.",
"The coffee shop near where I live is quiet, so I like to go there.",
"Web traffic analysis shows that most Internet users browse on mobile nowadays.",
"The analytics show that modern web users mostly use their phone instead of their computers."
]
###Output
_____no_output_____
###Markdown
Write the input sentences into a single file.
###Code
! mkdir inputs
! mkdir inputs/$MODEL_NAME
with open(f'inputs/{MODEL_NAME}/sentences.txt', 'w') as input_file:
for text in text_list:
input_file.write(text + '\n')
###Output
mkdir: cannot create directory ‘inputs’: File exists
mkdir: cannot create directory ‘inputs/tfhub_use’: File exists
###Markdown
5. Define Spark NLP pipeline
###Code
# Transforms the input text into a document usable by the SparkNLP pipeline.
document_assembler = DocumentAssembler()
document_assembler.setInputCol('text')
document_assembler.setOutputCol('document')
# Separates the text into individual tokens (words and punctuation).
tokenizer = Tokenizer()
tokenizer.setInputCols(['document'])
tokenizer.setOutputCol('token')
# Encodes the text as a single vector representing semantic features.
sentence_encoder = UniversalSentenceEncoder.pretrained(name=MODEL_NAME)
sentence_encoder.setInputCols(['document', 'token'])
sentence_encoder.setOutputCol('sentence_embeddings')
nlp_pipeline = Pipeline(stages=[
document_assembler,
tokenizer,
sentence_encoder
])
# Fit the model to an empty data frame so it can be used on inputs.
empty_df = spark.createDataFrame([['']]).toDF('text')
pipeline_model = nlp_pipeline.fit(empty_df)
light_pipeline = LightPipeline(pipeline_model)
###Output
tfhub_use download started this may take some time.
Approximate size to download 923.7 MB
[OK!]
###Markdown
6. Run the pipeline This method will get the similarity of the embeddings of each pair of sentences in the list of sentences passed in. The similarity is returned as a matrix, where (0, 2), for example, represents the similarity of input sentence 0 and input sentence 2.
###Code
def get_similarity(input_list):
df = spark.createDataFrame(pd.DataFrame({'text': input_list}))
result = light_pipeline.transform(df)
embeddings = []
for r in result.collect():
embeddings.append(r.sentence_embeddings[0].embeddings)
embeddings_matrix = np.array(embeddings)
return np.matmul(embeddings_matrix, embeddings_matrix.transpose())
###Output
_____no_output_____
###Markdown
Write the computed similarities to a CSV file.
###Code
! mkdir outputs
! mkdir outputs/$MODEL_NAME
np.savetxt(f'outputs/{MODEL_NAME}/similarities.csv',
get_similarity(text_list),
delimiter=',')
###Output
mkdir: cannot create directory ‘outputs’: File exists
mkdir: cannot create directory ‘outputs/tfhub_use’: File exists
###Markdown
7. Visualize results This method gets the similarity of the sentences in the list using the method above, then plots those similarities as a heatmap where dark red means "very similar" and pale yellow means "not similar at all".
###Code
import seaborn as sns
def plot_similarity(input_list):
g = sns.heatmap(
get_similarity(input_list),
xticklabels=input_list,
yticklabels=input_list,
vmin=0,
vmax=1,
cmap="YlOrRd")
g.set_xticklabels(input_list, rotation=90)
g.set_title("Semantic Textual Similarity")
plot_similarity(text_list)
###Output
_____no_output_____ |
stevens_developmental_microglia_2019/calculating_index_200609.ipynb | ###Markdown
Concatenating the files and calculating the index on the microglia dataset from [this publication](https://www.cell.com/immunity/fulltext/S1074-7613(18)30485-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS1074761318304850%3Fshowall%3Dtrue); the data can be found [here](https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE121654)
###Code
import numpy as np
import pandas as pd
import os
### okay, so we want to concatenate all of the files together
files = os.listdir('E:\\DATA\\rna_seq_datasets\\stevens_developmental_microglia_2019')
index = pd.read_csv('E:\\DATA\\rna_seq_datasets\\stevens_developmental_microglia_2019\\' + files[3], delimiter = '\t', index_col = 0).index
df_output = pd.DataFrame(index = index)
for file in files:
if '.txt' in file:
df = pd.read_csv('E:\\DATA\\rna_seq_datasets\\stevens_developmental_microglia_2019\\' + file, delimiter = '\t', index_col = 0)
df_final = pd.DataFrame(df.mean(axis = 1))
df_final.rename(columns = {0:file}, inplace = True)
df_output = df_output.join(df_final)
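# A roughly equivalent, more idiomatic sketch of the same per-sample mean-expression
# matrix built in one step with pd.concat, assuming every .txt file shares the same
# gene index; `df_output_alt` is a hypothetical name and is not used below.
df_output_alt = pd.concat(
    {f: pd.read_csv('E:\\DATA\\rna_seq_datasets\\stevens_developmental_microglia_2019\\' + f,
                    delimiter='\t', index_col=0).mean(axis=1)
     for f in files if '.txt' in f},
    axis=1)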
i = -1
columns = list(df_output.columns)
for column in columns:
i = i + 1
columns[i] = column[11:-8]
df_output.columns = columns
df_output
import developmental_index as dvp
scaled_df = dvp.scale_expression(df_output)
df_index = pd.read_csv('C:\\Users\\Ben\\Dropbox\\bilbo_lab_spr2020\\microglia-seq_website\\microglia-seq\\mdi\\unique_data_index_gene_list.csv', index_col = 0)
df_index.index = df_index.index.str.capitalize()
merged = dvp.import_index(scaled_df, df_index)
raw_index = dvp.generate_index(merged, merged.columns[0:-2])
index = dvp.scale_index(raw_index)
samples = list(index['index'].values)
### i think i might want to wait to build these labels until after the index is calculated....
age = ['Unknown'] * len(samples)
sex = ['Unknown'] * len(samples)
tx = ['N/A'] * len(samples)
i = -1
for sample in samples:
i = i + 1
if 'E14' in sample:
age[i] = 'E14'
if 'P4' in sample:
age[i] = 'P4'
if 'P5' in sample:
age[i] = 'P4'
if 'P30' in sample:
age[i] = 'P30'
if 'P100' in sample:
age[i] = 'P100'
if 'Old' in sample:
age[i] = 'Old'
i = -1
for sample in samples:
i = i + 1
if 'F' in sample:
sex[i] = 'F'
if 'M' in sample:
sex[i] = 'M'
if 'Male' in sample:
sex[i] = 'M'
if 'male' in sample:
sex[i] = 'M'
if 'female' in sample:
sex[i] = 'F'
if 'Female' in sample:
sex[i] = 'F'
i = -1
for sample in samples:
i = i + 1
if 'SALINE' in sample:
tx[i] = 'Sal'
if 'LPC' in sample:
tx[i] = 'Lpc'
index['age'] = age
index['sex'] = sex
index['tx'] = tx
index.rename(columns = {0:''})
###Output
_____no_output_____ |
scripts/amazon-ml-xlnet.ipynb | ###Markdown
XLNET
###Code
# Assumed imports for the cells below; `temp` is assumed to be a DataFrame with
# 'WHOLE SENTENCE' and 'BROWSE_NODE_ID' columns prepared in an earlier step.
import time
import copy
import pandas as pd
import torch
import torch.nn as nn
import torch.optim as optim
from torch.utils.data import Dataset, DataLoader
from sklearn.model_selection import train_test_split
from transformers import XLNetTokenizer, XLNetForSequenceClassification
from tqdm import tqdm
dev=torch.device('cuda')
temp["BROWSE_NODE_ID"].value_counts()
train_text, val_text, train_labels, val_labels = train_test_split(temp['WHOLE SENTENCE'], temp['BROWSE_NODE_ID'],
test_size=0.05)
train_text=train_text.reset_index(drop=True)
train_labels=train_labels.reset_index(drop=True)
val_text=val_text.reset_index(drop=True)
val_labels=val_labels.reset_index(drop=True)
tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased')
seq_len = [len(i.split()) for i in train_text]
pd.Series(seq_len).hist(bins = 30)
max_seq_len = 64
class amazonDataset(Dataset):
def __init__(self,text,label,tokenizer):
self.sentence=text
self.label=label
self.tokenizer=tokenizer
def __len__(self):
return len(self.sentence)
def __getitem__(self,idx):
inp_tokens=self.tokenizer.encode_plus(self.sentence[idx],
padding="max_length",
add_special_tokens=True,
max_length=max_seq_len,
truncation=True)
inp_id=inp_tokens.input_ids
inp_mask=inp_tokens.attention_mask
inp_type_ids=inp_tokens.token_type_ids
labels=self.label[idx]
return {
# "text":self.sentence,
"input_ids":torch.tensor(inp_id, dtype=torch.long),
"input_attention_mask":torch.tensor(inp_mask, dtype=torch.long),
"input_type_ids":torch.tensor(inp_type_ids, dtype=torch.long),
"labels":torch.tensor(labels, dtype=torch.float)
}
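# Hypothetical sanity check of what one tokenized example looks like, using the
# class defined above; the index 0 is arbitrary and `_example` is not used later.
_example = amazonDataset(train_text, train_labels, tokenizer)[0]
print(_example["input_ids"].shape, _example["input_attention_mask"].shape)
print(_example["labels"])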
train_dataset = amazonDataset(train_text, train_labels, tokenizer)
val_dataset = amazonDataset(val_text, val_labels, tokenizer)
train_dataloader=DataLoader(train_dataset,
batch_size=164,
shuffle=True,
num_workers=2,
pin_memory=True)
val_dataloader=DataLoader(val_dataset,
batch_size=164,
shuffle=False,
num_workers=2,
pin_memory=True)
dataloaders={'train':train_dataloader, 'eval':val_dataloader }
dataset_sizes={'train':len(train_dataset), 'eval':len(val_dataset)}
# class BERTBaseUncased(nn.Module):
# def __init__(self):
# super(BERTBaseUncased, self).__init__()
# self.bert=AutoModel.from_pretrained('bert-base-uncased')
# self.dropout = nn.Dropout(0.1)
# self.relu = nn.ReLU()
# self.fc1 = nn.Linear(768,9919)
# def forward(self,ids,mask,token_type_ids):
# a, o2 = self.bert(
# ids,
# attention_mask=mask,
# token_type_ids=token_type_ids)
# bo=self.dropout(o2)
# output=self.fc1(bo)
# return output
# model=XLNetForSequenceClassification.from_pretrained('xlnet-base-cased',
# num_labels=9919)
# print(model)
model=torch.load("../input/amazon-ml-models/XLNetNoset.pth")
print(model)
model.to(dev)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-5, momentum=0.9)
exp_lr_scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
def train_fn(model,loss_fn,optimizer,scheduler,num_epochs=1):
since=time.time()
best_wts=copy.deepcopy(model.state_dict())
best_loss=float('inf')
for epoch in range(num_epochs):
print(f'Epoch:{epoch}/{num_epochs}')
print('-'*10)
for mode in ['train','eval']:
if mode=='train':
model.train()
elif mode=='eval':
model.eval()
running_loss=0.0
running_corrects=0.0
for data in tqdm(dataloaders[mode]):
input_ids = data["input_ids"].to(dev, dtype=torch.long)
labels = data['labels'].to(dev, dtype=torch.long)
mask = data["input_attention_mask"].to(dev, dtype=torch.long)
token_type_ids = data['input_type_ids'].to(dev, dtype=torch.long)
optimizer.zero_grad()
with torch.set_grad_enabled(mode=='train'):
outputs=model(
input_ids =input_ids,
attention_mask=mask,
token_type_ids=token_type_ids,
labels=labels
)
loss, logits=outputs.loss, outputs.logits
_,preds=torch.max(logits,1)
if mode=='train':
loss.backward()
optimizer.step()
running_loss += loss.item()
running_corrects += torch.sum(preds == labels.data)
if mode == 'train':
scheduler.step()
epoch_loss=running_loss/dataset_sizes[mode]
epoch_accuracy=running_corrects.double()/dataset_sizes[mode]
print('{} Loss: {:.4f} Acc: {:.4f}'.format(
mode, epoch_loss, epoch_accuracy))
if mode=='eval' and epoch_loss<best_loss:
best_wts=copy.deepcopy(model.state_dict())
best_acc=epoch_accuracy
best_loss=epoch_loss
print()
time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(
time_elapsed // 60, time_elapsed % 60))
print('Best val loss: {:4f}'.format(best_loss))
print('Best val Acc: {:4f}'.format(best_acc))
model.load_state_dict(best_wts)
return model
model = train_fn(model,
criterion,
optimizer,
exp_lr_scheduler,
num_epochs=3)
torch.save(model,"XLNetNoset.pth")
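# Hypothetical inference sketch with the fine-tuned model; `sample_title` is a
# made-up product title used only for illustration.
model.eval()
sample_title = "stainless steel water bottle 1 litre"
enc = tokenizer.encode_plus(sample_title,
                            padding="max_length",
                            add_special_tokens=True,
                            max_length=max_seq_len,
                            truncation=True,
                            return_tensors="pt")
with torch.no_grad():
    out = model(input_ids=enc["input_ids"].to(dev),
                attention_mask=enc["attention_mask"].to(dev),
                token_type_ids=enc["token_type_ids"].to(dev))
predicted_node = out.logits.argmax(dim=-1).item()
print(predicted_node)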
###Output
_____no_output_____ |
Tarea_2_Pilas.ipynb | ###Markdown
###Code
class Stack:
def __init__(self):
self.__datos = []
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1] #ultimo elemento (-1)
def pop(self):
return self.__datos.pop()
def push (self, value):
self.__datos.append(value)
def get_lenght(self):
return len(self.__datos)
def to_string(self):
print("|-------------|")
for x in self.__datos[::-1]:
print(f"{x}")
print("|-------------|\n")
c=('+++++++++++++++++++++++++++++++ Programa C +++++++++++++++++++++++++++++++\n\n'
+'#include <stdio.h>\n'
+'int main(int argc, char const *argv[]){\n'
+'float L=0.0f, R=0.0f;\n'
+'printf(\"AREA DE UN CUADRADO\n\");'
+'printf(\"INGRESE UN LADO DEL CUADRADO: \");\n'
+'scanf(\"%f\",&L);// entrada del valor\n'
+'R=L*L;\n'
+'printf(\"EL ARE DEL CUADRADO ES: %.2f\n\",R);// imprime el resultado\n'
+'return 0;\n}')
j = ('+++++++++++++++++++++++++++++++ Programa java +++++++++++++++++++++++++++++++\n\n'
+'private void jButton1MouseClicked(java.awt.event.MouseEvent evt) \n'
+'\tString snum1, snum2;\n'
+'\tdouble num1, num2, resultado;\n'
+'\tsnum1 = JOptionPane.showInputDialog(this, "Valor del primer numero", "Suma", WIDTH);\n'
+'\tnum1 = Double.parseDouble(snum1);\n'
+'\tsnum2 = JOptionPane.showInputDialog(this, "Valor del segundo numero", "Suma", WIDTH);\n'
+'\tnum2 = Double.parseDouble(snum2);\n'
+'\tresultado = num1 + num2 + 5;\n'
+'\tJOptionPane.showMessageDialog(this, "La suma de \n\n" + num1 + " + " + num2 + " + 5 = " + resultado, "Resultado", WIDTH);\n\n'
+'}')
v=[c,j]
balanceo=Stack()
try:
for i in v:
print(i)
for x in i:
#print(x)
if x == '(' or x== '{' or x=='[':
balanceo.push('@')
elif x == ')' or x== '}' or x==']':
balanceo.pop()
#print(balanceo.to_string())
if balanceo.is_empty() :
print("\nla cadena esta balanceada.\n".upper())
else: print("\nla cadena no esta balanceada.\n".upper())
except: print("\nla cadena no esta balanceada.\n".upper())
###Output
+++++++++++++++++++++++++++++++ Programa C +++++++++++++++++++++++++++++++
#include <stdio.h>
int main(int argc, char const *argv[]){
float L=0.0f, R=0.0f;
printf("AREA DE UN CUADRADO
");printf("INGRESE UN LADO DEL CUADRADO: ");
scanf("%f",&L);// entrada del valor
R=L*L;
printf("EL ARE DEL CUADRADO ES: %.2f
",R);// imprime el resultado
return 0;
}
LA CADENA ESTA BALANCEADA.
+++++++++++++++++++++++++++++++ Programa java +++++++++++++++++++++++++++++++
private void jButton1MouseClicked(java.awt.event.MouseEvent evt)
String snum1, snum2;
double num1, num2, resultado;
snum1 = JOptionPane.showInputDialog(this, "Valor del primer numero", "Suma", WIDTH);
num1 = Double.parseDouble(snum1);
snum2 = JOptionPane.showInputDialog(this, "Valor del segundo numero", "Suma", WIDTH);
num2 = Double.parseDouble(snum2);
resultado = num1 + num2 + 5;
JOptionPane.showMessageDialog(this, "La suma de
" + num1 + " + " + num2 + " + 5 = " + resultado, "Resultado", WIDTH);
}
LA CADENA NO ESTA BALANCEADA.
###Markdown
Task Write a program that validates the balancing of '( )' , '[ ]' , '{ }' in C and Java language programs, taking the class code as a base
###Code
class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("-------------------------")
for i in self.__datos[-1::-1]:
print(i)
print("-------------------------")
print()
def balanceo(cadena):
parentesis=Stack()
corchetes=Stack()
picocorchetes=Stack()
aux=0
for i in range (len(cadena)):
if cadena[i]=="(":
parentesis.push("@")
if cadena[i]==")":
try:
parentesis.pop()
except:
aux=1
pass
if cadena[i]=="[":
corchetes.push("@")
if cadena[i]=="]":
try:
corchetes.pop()
except:
aux=1
pass
if cadena[i]=="(":
picocorchetes.push("@")
if cadena[i]==")":
try:
picocorchetes.pop()
except:
aux=1
pass
if (parentesis.is_empty()==True) and (corchetes.is_empty()==True) and (picocorchetes.is_empty()==True) and (aux==0):
print("La cadena está balanceada")
else:
print("La cadena está desbalanceada")
def main():
a="""arr=[]
palindromos=[]
contador=0
for i in range(len(h)):
for j in range(len(m)):
arr.append(h[i]+":"+m[j])
print(arr)
for k in range(0,len(arr)):
if (arr[k][0]==arr[k][4]) and (arr[k][1]==arr[k][3]):
print(arr[k])
contador+=1
print("los palindromos encontrados fueron:"
print(contador)"""
balanceo(a)
b="""class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("-------------------------")
for i in self.__datos[-1::-1]:
print(i)
print("-------------------------")
print()"""
balanceo(b)
c=""")class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("-------------------------")
for i in self.__datos[-1::-1]:
print(i)
print("-------------------------")
print()"""
balanceo(c)
main()
###Output
La cadena está desbalanceada
La cadena está balanceada
La cadena está desbalanceada
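###Markdown
A minimal single-stack sketch of the same balance check, assuming you also want opening and closing symbols to match by type (e.g. '(' must be closed by ')', not ']'); this is an added illustration with made-up test strings, not one of the original submissions:
###Code
# Hypothetical compact checker: push opening symbols, pop and compare type on closing ones.
def esta_balanceado(codigo):
    pares = {')': '(', ']': '[', '}': '{'}
    pila = []
    for c in codigo:
        if c in '([{':
            pila.append(c)
        elif c in ')]}':
            if not pila or pila.pop() != pares[c]:
                return False
    return not pila

print(esta_balanceado("int main(){ int a[3]; return 0; }"))  # True
print(esta_balanceado("printf(\"hola\";"))                   # False
###Output
_____no_output_____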
###Markdown
###Code
class Stack:
def __init__(self):
self.__datos = []
def is_empty(self):
return len(self.__datos) == 0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push(self,valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("|-----------------|")
for i in self.__datos[-1::-1]:
print(i)
print("|-----------------|")
print()
def main():
pila1=Stack()
pila2=Stack()
pila3=Stack()
b1=0
b2=0
b3=0
p= """int main() {
int opcion;
char r='S';
printf("Calculadora Basica\n");
while(r=='S')
{
printf("Selecciona una opcion\n\nSuma:1\t\tResta:2\t\tMultiplicacion: 3\t\tDivision: 4\t\t");
printf("\n\nOpcion: ");
scanf("%i", &opcion);
if(opcion==1)
{
int x,y;
printf("\n\nElegiste Suma\n\nIntroduce primer valor: ");
scanf("%i", &x);
printf("\nIntroduce Segundo valor: ");
scanf("%i", &y);
int resultado=x+y;
printf("\nEl resultado de %i + %i es= %i \n",x,y,resultado);
}
if(opcion==2)
{
int x,y;
printf("\n\nElegiste Resta\n\nIntroduce primer valor: ");
scanf("%i", &x);
printf("\nIntroduce Segundo valor: ");
scanf("%i", &y);
int resultado=x-y;
printf("\nEl resultado de %i - %i es= %i \n",x,y,resultado);
}
if(opcion==3)
{
int x,y;
printf("\n\nElegiste Multiplicacion\n\nIntroduce primer valor: ");
scanf("%i", &x);
printf("\nIntroduce Segundo valor: ");
scanf("%i", &y);
int resultado=x*y;
printf("\n\nEl resultado de %i * %i es= %i \n",x,y,resultado);
}
if(opcion==4)
{
float x,y;
printf("\n\nElegiste Division\n\nIntroduce primer valor: ");
scanf("%f", &x);
printf("\nIntroduce Segundo valor: ");
scanf("%f", &y);
float resultado=x/y;
printf("\n\nEl resultado de %f / %f es= %f \n",x,y,resultado);
}
if((opcion<1)||(opcion>4))
{
printf("\n\nEsa opción no el valida\n");
}
printf("\n\n¿Quiere hacer otra operacion?[S/N]: ");
scanf("%s", &r);
if(r=='S')
{
system ("cls");
}
}
printf("\n\nUsted ha usado la calculadora basica\n\n");
system("pause");
return 0;
}
"""
for i in range(0,len(p)):
if p[i]=="(":
pila1.push("@")
if p[i]==")":
try:
pila1.pop()
except:
b1=1
for i in range(0,len(p)):
if p[i]=="[":
pila2.push("@")
if p[i]=="]":
try:
pila2.pop()
except:
b2=1
for i in range(0,len(p)):
if p[i]=="{":
pila3.push("@")
if p[i]=="}":
try:
pila3.pop()
except:
b3=1
if b1==0 and b2==0 and b3==0 and pila1.get_length()==0 and pila2.get_length()==0 and pila3.get_length()==0:
print("El programa está balanceado")
else:
print("El programa no está balanceado")
main()
###Output
_____no_output_____
###Markdown
###Code
# Stack (Pilas) class code from the lesson
class Stack:
def __init__(self):
self.__datos= []
def is_empty(self):
return len(self.__datos)==0
def get_top(self):
return self.__datos[-1]
def pop(self):
return self.__datos.pop()
def push (self, valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("|--------|")
for i in self.__datos[-1::-1]:
print(i)
print("|--------|\n")
print()
def main():
pila1 = Stack()
pila2 = Stack()
pila3 = Stack()
x1=0
x2=0
x3=0
balancear= """
int main() {
int opcion;
char r='S';
printf("Convertidor de Unidades\n");
while(r=='S')
{
printf("Selecciona una opcion\n\nCm a Mts:1\t\tCm a Ft:2\t\tCm a Yd: 3\n\n");
printf("Mts a Cm:4\t\tMts a Ft:5\t\tMts a Yd: 6\n\n");
printf("Ft a Cm:7\t\tFt a Mts:8\t\tFt a Yd: 9\n\n");
printf("Yd a Cm:10\t\tYd a Mts:11\t\tYd a Ft: 12\n\n");
printf("Opcion: ");
scanf("%i", &opcion);
if(opcion==1)
{
float x;
printf("Elegiste Cm a Mts\nInserta Cm: ");
scanf("%f", &x);
float resultado=x*0.01;
printf("En Metros son: %f Mts.\n",resultado);
}
if(opcion==2)
{
float x;
printf("Elegiste Cm a Ft\nInserta Cm: ");
scanf("%f", &x);
float resultado=x/30.48;
printf("En Pies son: %f Ft. \n",resultado);
}
if(opcion==3)
{
float x;
printf("Elegiste Cm a Yd\nInserta Cm: ");
scanf("%f", &x);
float resultado=x/91.44;
printf("En Yardas son: %f Yd. \n",resultado);
}
if(opcion==4)
{
float x;
printf("Elegiste Mts a Cm\nInserta Mt: ");
scanf("%f", &x);
float resultado=x*100;
printf("En Centimetros son: %f Cm. \n",resultado);
}
if(opcion==5)
{
float x;
printf("Elegiste Mt a Ft\nInserta Mt: ");
scanf("%f", &x);
float resultado=x*3.28084;
printf("En Pies son: %f Ft. \n",resultado);
}
if(opcion==6)
{
float x;
printf("Elegiste Mt a Yd\nInserta Mt: ");
scanf("%f", &x);
float resultado=x*1.09361;
printf("En Yardas son: %f Yd. \n",resultado);
}
if(opcion==7)
{
float x;
printf("Elegiste Ft a Cm\nInserta Ft: ");
scanf("%f", &x);
float resultado=x*30.48;
printf("En Centimetros son: %f Cm. \n",resultado);
}
if(opcion==8)
{
float x;
printf("Elegiste Ft a Mt\nInserta Ft: ");
scanf("%f", &x);
float resultado=x*.3048;
printf("En Metros son: %f Mt. \n",resultado);
}
if(opcion==9)
{
float x;
printf("Elegiste Ft a Yd\nInserta Ft: ");
scanf("%f", &x);
float resultado=x/3;
printf("En Yardas son: %f Yd. \n",resultado);
}
if(opcion==10)
{
float x;
printf("Elegiste Yd a Cm\nInserta Yd: ");
scanf("%f", &x);
float resultado=x*91.44;
printf("En Centimetros son: %f Cm. \n",resultado);
}
if(opcion==11)
{
float x;
printf("Elegiste Yd a Mt\nInserta Yd: ");
scanf("%f", &x);
float resultado=x*0.9144;
printf("En Metros son: %f Mt. \n",resultado);
}
if(opcion==12)
{
float x;
printf("Elegiste Yd a Ft\nInnserta Yd: ");
scanf("%f", &x);
float resultado=x*3;
printf("En Pies son: %f Ft. \n",resultado);
}
if((opcion<1)||(opcion>12))
{
printf("La opcion que elegiste no es valida \n");
}
printf("\n\n Desea hacer otra operacion?[S/N]: ");
scanf("%s", &r);
if(r=='S')
{
system ("cls");
}
}
printf("\n\nGracias por ocupar el convertidor\n\n");
system("pause");
return 0;
}
"""
    for i in range(0,len(balancear)):
        if balancear[i]=="(":
            pila1.push("@")
        if balancear[i]==")":
            try:
                pila1.pop()
            except:
                x1=1
    for i in range(0,len(balancear)):
        if balancear[i]=="[":
            pila2.push("@")
        if balancear[i]=="]":
            try:
                pila2.pop()
            except:
                x2=1
    for i in range(0,len(balancear)):
        if balancear[i]=="{":
            pila3.push("@")
        if balancear[i]=="}":
            try:
                pila3.pop()
            except:
                x3=1
if x1==0 and x2==0 and x3==0 and pila1.get_length()==0 and pila2.get_length()==0 and pila3.get_length()==0:
print("Válido: El programa está balanceado UwU")
else:
print("Inválido: El programa no está balanceado :C")
main()
###Output
Válido: El programa está balanceado UwU
###Markdown
###Code
class Stack:
def _init_(self):
self.__datos = []
def is_empty(self):
return len(self.__datos) == 0
def det_top(self):
return self.__datos[-1]
def pop(self):
try:
return self.__datos.pop()
except:
self.__datos=["@"]
def push(self , valor):
self.__datos.append(valor)
def get_length(self):
return len(self.__datos)
def to_string(self):
print("-------------")
for puntero in self.__datos[::-1]:
print(puntero)
print("-------------")
def chequeo(codigo):
parentesis = Stack()
parentesis._init_()
corchetes = Stack()
corchetes._init_()
llaves = Stack()
llaves._init_()
for puntero in codigo:
if (puntero == "("):
parentesis.push("@")
elif (puntero == ")"):
parentesis.pop()
elif (puntero == "["):
corchetes.push("@")
elif (puntero == "]"):
corchetes.pop()
elif (puntero == "{"):
llaves.push("@")
elif (puntero == "}"):
llaves.pop()
print(f"-----La equivalencia de parentesis es: {parentesis.is_empty()}-----")
print(f"-----La equivalencia de corchetes es: {corchetes.is_empty()}------")
print(f"-----La equivalencia de llaves es: {llaves.is_empty()}----------")
#Codigo Hola Mundo en Java Correcto
codigo ="public class HolaMundo{public static void main (String [ ] args){System.out.println ("+"Hola Mundo"+");}}"
print(codigo)
chequeo(codigo)
#Codigo Hola Mundo en Java Incorrecto SIN PARENTESIS Y SINCORCHETES DE CIERRE
codigo ="public class HolaMundo{public static void main (String [ args){System.out.println ("+"Hola Mundo"+";}}"
print(codigo)
chequeo(codigo)
#Codigo Hola Mundo en Java Incorrecto SIN LLAVES Y SIN CORCHETES DE INICIO
codigo ="public class HolaMundo public static void main (String ] args){System.out.println ("+"Hola Mundo"+");}}"
print(codigo)
chequeo(codigo)
###Output
public class HolaMundo{public static void main (String [ ] args){System.out.println (Hola Mundo);}}
-----La equivalencia de parentesis es: True-----
-----La equivalencia de corchetes es: True------
-----La equivalencia de llaves es: True----------
public class HolaMundo{public static void main (String [ args){System.out.println (Hola Mundo;}}
-----La equivalencia de parentesis es: False-----
-----La equivalencia de corchetes es: False------
-----La equivalencia de llaves es: True----------
public class HolaMundo public static void main (String ] args){System.out.println (Hola Mundo);}}
-----La equivalencia de parentesis es: True-----
-----La equivalencia de corchetes es: False------
-----La equivalencia de llaves es: False----------
###Markdown
###Code
class Stack:
def __init__(self):
self.__datos=[]
def is_empty(self):
return len(self.__datos)== 0
def get_top(self):
return self.__datos[-1]
def pop (self):
return self.__datos.pop()
def push (self, valor):
self.__datos.append(valor)
def get_lenght(self):
return len(self.__datos)
def to_string(self):
for ele in self.__datos [-1 : :-1]:
print(f" { ele } ")
pila1=Stack()
for c in "System.out.println((({[[)}":
if c in "({[":pila1.push("@")
elif c in ")]}": pila1.pop()
print(f"La pila está vacia? {pila1.is_empty()}")
print(f"La pila tiene {pila1.get_lenght()} elementos")
pila1.to_string()
#In this example, the string is "unbalanced", but if the brackets are placed correctly, the stack will end up empty
print("------------------------------")
#In the following example, the string is balanced and the stack ends up empty.
pila2=Stack()
for c in "System.out.println(){}[]":
if c in "({[":pila2.push("@")
elif c in ")]}": pila2.pop()
print(f"La pila está vacia? {pila2.is_empty()}")
print(f"La pila tiene {pila2.get_lenght()} elementos")
pila2.to_string()
###Output
La pila está vacia? False
La pila tiene 4 elementos
@
@
@
@
------------------------------
La pila está vacia? True
La pila tiene 0 elementos
|
nb/Synthetic_Images_Antibiogram.ipynb | ###Markdown
IntroductionGiven the shortage of antibiogram images for training, I created this notebook to produce synthetic images of antibiograms. The synthetic images are generated by taking foreground images and pasting them onto a background image. Foreground images usually contain one of the following:* A single type of "antimicrobial disk" * A figure of "zone of inhibition". Each figure of "zone of inhibition" also contains an "antimicrobial disk" at its center.Background images, on the other hand, contain a single "Petri dish".In the first part of this notebook, I pasted as foreground two PNG formatted images, one of a single disk and a second image of a zone of inhibition. The background was a larger image of a "Petri dish". Foreground images must be smaller than the background images for this to work.To see clearly what I mean, please go ahead and execute the cells below; in the "Visualize Dataset" section, you will see the foreground and background images mentioned above.The second part of this notebook contains the code to find the contours, segmentations and bounding boxes of the "antimicrobial disks" and "zones of inhibition" (with their respective antimicrobial disks).In the final part, I converted all the data found in the second part into a JSON format file.**Note: foreground and background images were created using GIMP. This process is not included in this repo. For more information, please see the reference section at the end of the README page.** Clone and Install
###Code
# Clone this repo.
!git clone https://github.com/chho-work/biovision.git
# Optional: install mypy lib
#!pip install -q mypy
# Install Detectron2
# Remember to restart the runtime when you install it for the first time.
# Install only once per session.
!pip install -q git+https://github.com/facebookresearch/fvcore.git
!git clone -q https://github.com/facebookresearch/detectron2 detectron2_repo
!pip install -e detectron2_repo
###Output
_____no_output_____
###Markdown
Setup Import Library
###Code
import os
import random
import numpy as np
import json
import pandas as pd
from skimage import measure
from shapely.geometry import Polygon, MultiPolygon
from pathlib import Path
from PIL import Image
from typing import Tuple, List, Callable
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.pyplot import imshow
%matplotlib inline
###Output
_____no_output_____
###Markdown
Path to Data
###Code
# base directory for images
base_path_image = Path('/content/biovision/data/images')
# annotation directory for annotated data
coco_annot_path = base_path_image.joinpath('coco_annotation')
coco_annot_path.mkdir(parents=True, exist_ok=True)
# background image directory
back_path = base_path_image.joinpath('background')
# foreground image directory
fore_path = base_path_image.joinpath('foreground')
# output file directory
output_path = base_path_image.joinpath('output')
# mask image directory
mask_path = base_path_image.joinpath('masks')
mask_path.mkdir(parents=True, exist_ok=True)
# train directory containing all the training images
train_path = base_path_image.joinpath('train')
train_path.mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
Clean Up Directories* To be used when errors related to [".ipynb checkpoints"](https://stackoverflow.com/questions/46421663/what-are-jupyter-notebook-checkpoint-files-for) occur.* This cell removes ".ipynb_checkpoints" when it appears in the directories.
###Code
# Note: use the below commands to remove ".ipynb_checkpoints" if it exists.
!rm -r /content/BIODL/data/annotations/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/background/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/foreground/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/output/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/sample/.ipynb_checkpoints
!rm -r /content/BIODL/data/images/disks/.ipynb_checkpoints
###Output
rm: cannot remove '/content/BIODL/data/annotations/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/background/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/foreground/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/output/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/sample/.ipynb_checkpoints': No such file or directory
rm: cannot remove '/content/BIODL/data/images/disks/.ipynb_checkpoints': No such file or directory
###Markdown
Utilities Type Checking(Optional)* Example of data type checking using ["mypy"](https://docs.python.org/3/library/typing.html).* "mypy" is still not supported interactively in Colab, but it can be useful if you decide to convert the notebook to a script.* It is not strictly necessary that you use the following cells.* It does not work for every function in this notebook.
###Code
# For type checking using Python "mypy".
%%writefile type_checking.py
import os
import random
import numpy as np
import json
import pandas as pd
from skimage import measure
from shapely.geometry import Polygon, MultiPolygon
from pathlib import Path
from PIL import Image
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import matplotlib.patches as patches
from matplotlib.pyplot import imshow
from typing import Dict, List, Set, Tuple
def displaySingleImage(pathName:Path, figsize: Tuple[int, int] = (6, 6)) -> Tuple[plt.show]:
_img = str(pathName)
_image = mpimg.imread(_img)
plt.figure(figsize=figsize)
plt.imshow(_image)
plt.title(f'File name: {pathName.name}\nImage size h x w x c {_image.shape}',
loc='left',
fontsize=13)
return plt.show()
# Purposely gave the wrong data type parameter.
displaySingleImage("monkey", figsize=(3.8, 3))
# Execute "mypy" on the toy sample.
!mypy type_checking.py
# Some errors are related to no third party library support.
# https://mypy.readthedocs.io/en/stable/running_mypy.html#:~:text=If%20you%20are%20getting%20a%20%E2%80%9CSkipping%20analyzing%20X%3A%20found%20module,but%20no%20corresponding%20type%20hints.&text=Searching%20to%20see%20if%20there,to%20your%20third%20party%20library.
###Output
type_checking.py:3: [1m[31merror:[m Skipping analyzing 'numpy': found module but no type hints or library stubs[m
type_checking.py:3: [34mnote:[m See [4mhttps://mypy.readthedocs.io/en/latest/running_mypy.html#missing-imports[m
type_checking.py:5: [1m[31merror:[m Skipping analyzing 'pandas': found module but no type hints or library stubs[m
type_checking.py:6: [1m[31merror:[m Skipping analyzing 'skimage': found module but no type hints or library stubs[m
type_checking.py:7: [1m[31merror:[m Skipping analyzing 'shapely.geometry': found module but no type hints or library stubs[m
type_checking.py:10: [1m[31merror:[m Skipping analyzing 'PIL': found module but no type hints or library stubs[m
type_checking.py:12: [1m[31merror:[m Skipping analyzing 'matplotlib.image': found module but no type hints or library stubs[m
type_checking.py:12: [1m[31merror:[m Skipping analyzing 'matplotlib': found module but no type hints or library stubs[m
type_checking.py:13: [1m[31merror:[m Skipping analyzing 'matplotlib.pyplot': found module but no type hints or library stubs[m
type_checking.py:14: [1m[31merror:[m Skipping analyzing 'matplotlib.patches': found module but no type hints or library stubs[m
type_checking.py:30: [1m[31merror:[m Argument 1 to [m[1m"displaySingleImage"[m has incompatible type [m[1m"str"[m; expected [m[1m"Path"[m[m
type_checking.py:30: [1m[31merror:[m Argument [m[1m"figsize"[m to [m[1m"displaySingleImage"[m has incompatible type [m[1m"Tuple[float, int]"[m; expected [m[1m"Tuple[int, int]"[m[m
[1m[31mFound 11 errors in 1 file (checked 1 source file)[m
###Markdown
General Utils* List paths* Display image* Create random coordinates
###Code
# Iterate over the files of a directory and filter in a list.
Path.ls = lambda x: sorted(list(x.iterdir()))
# Displays a single image given the path to image. Change figure size(optional) with parameter figsize.
# Returns a displayed image with file name, and image size.
def displaySingleImage(pathName:Path, figsize:Tuple[int, int] = (6, 6)) -> Callable:
_img = str(pathName)
_image = mpimg.imread(_img)
plt.figure(figsize=figsize)
plt.imshow(_image)
plt.title(f'File name: {pathName.name}\nImage size h x w x c {_image.shape}',
loc='left',
fontsize=13)
return plt.show() # plt.show() removes <matplotlib.image.AxesImage...>
# Create a series of coordinates given the initial coordinate(init_x, init_y).
# Create a series in the X-axis using axis="x" or in the Y-axis using axis="y".
# Generate the number of coordinates with generate=<number of coordinates>
# Returns a list with the coordinates.
def series(axis:str, init_x:int, init_y:int, generate:int) -> list:
_series = []
for i in range(generate):
_result = []
if axis=='x' and generate > 0:
_range = range((init_x), (init_x + generate))
_coordinate = [init_x + i, init_y]
elif axis=='y' and generate > 0:
_range = range((init_y), (init_y + generate))
_coordinate = [init_x, init_y + i]
elif axis=='x' and generate == 0:
_coordinate = []
elif axis=='y' and generate == 0:
_coordinate = []
else:
assert axis=='x' or axis=='y', "Please choose x or y."
_coord_0 = _coordinate[0]
_result.append(_coord_0)
_coord_1 = _coordinate[1]
_result.append(_coord_1)
_series.append(_result)
return _series
# Get the random generated coordinates, given the initial xy coordinates.
# Choose the number of coordinates to create with x_generate and y_generate.
# Returns a single randomly generated coordinate.
def getRandomCoordinate(init_x:int, init_y:int, x_generate:int, y_generate:int) -> Tuple[int, int]:
_coord_x = series("x", init_x, init_y, x_generate) # generate x-axis coordinates
_coord_y = series("y", init_x, init_y, y_generate) # generate y-axis coordinates
_coord = (_coord_x + _coord_y)
# randomly choose one generated coordinate
_random_coord = tuple(random.choice(_coord))
return _random_coord
###Output
_____no_output_____
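###Markdown
A quick usage sketch of the coordinate helpers above, with values similar to the ones used later in the notebook; the numbers are only illustrative:
###Code
# Hypothetical example: build candidate paste positions along a horizontal strip
# starting at (310, 200) and pick one at random.
example_coord = getRandomCoordinate(init_x=310, init_y=200, x_generate=5, y_generate=1)
print(example_coord)  # e.g. (312, 200), a single (x, y) tuple chosen at random
###Output
_____no_output_____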
###Markdown
Utils to Create New Images* Paste smaller images into a bigger one* Create mask image* Composite image
###Code
# Find image size given path to image.
# Returns a tuple with image size
def imageSize(path2Image:Path) -> Tuple[int, int]:
_image = Image.open(path2Image)
return _image.size
# Create a new image with 4 channels(RGBA), of the same size as the background image.
# Paste foreground image(s) into the newly created RGBA image, at the given coordinates.
# Parameters: background image size, a list with foreground images and coordinates to paste foreground images.
# Returns an image:
# 1) of the same size as the background images
# 2) in RGBA format
# 3) with foreground images at specific coordinates
def imageNewForeground(imageSize:Callable,
img_fore:List[Path],
coordinates:Tuple[int, int]):
# Newly created image in black color with the size of background image.
new_fore = Image.new("RGBA", imageSize, color=(0, 0, 0, 0))
# Lock foreground images with coordinates
match = zip(img_fore, coordinates)
for fore, coord in match:
# Open foreground image.
fore_img = Image.open(fore)
# Paste foreground to background image in the given coordinate.
new_fore.paste(fore_img, tuple(coord))
return new_fore
# Create a single channel image(greyscale) of the same size as the background image.
# Paste foreground image(s) in alpha channel into the newly created greyscale image, at the given coordinates.
# Parameters: background image size, a list with foreground images, coordinates to paste foreground images and file name.
# Returns a mask image:
# 1) of the same size as the background images
# 2) with foreground images at specific coordinates
# 3) image is saved at mask_path.
def imageNewMask(imageSize:Callable,
img_fore:List[Path],
coordinates:Tuple[int, int], fname_mask:str):
# Create a 8-bit pixels(range 0-255), black color image with the same size as background image.
new_mask = Image.new("L", imageSize, color=0)
coordinates = list(coordinates)
match = zip(img_fore, coordinates)
for fore, coord in match:
fore_img = Image.open(fore)
# Get the alpha channel image.
fore_img_mask = fore_img.getchannel(3)
# Paste the foreground alpha into the new mask.
new_mask.paste(fore_img_mask, tuple(coord))
_path_mask = mask_path.joinpath(fname_mask)
new_mask.save(_path_mask)
return new_mask, _path_mask
# Create a new composited image using transparent mask
# Paste background image with generated images from imageNewForeground() and imageNewMask().
# Parameters: image generated from imageNewForeground(), background image paths,
# image generated from imageNewMask()and file name.
# Returns a composited image:
# 1) of the same size as the background images
# 2) composited with foreground and mask images
# 3) image is saved at train_path.
def imageNewComposite(imageNewForeground:Callable,
pathBackgroundImage:Path,
imageNewMask:Callable,
fname_composite:str):
# Open a background image
back = Image.open(pathBackgroundImage)
# Composite image is blended from a foreground, background and mask, all of the same size.
composite = Image.composite(imageNewForeground, back, imageNewMask)
# Save the new composited image.
_path_composite = train_path.joinpath(fname_composite)
composite.save(_path_composite)
return _path_composite, _path_composite
###Output
_____no_output_____
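###Markdown
To make the compositing step less opaque, here is a tiny self-contained sketch of how `Image.composite` uses the mask; the toy solid-color images below are made up and are not the antibiogram data:
###Code
# Hypothetical illustration: where the mask is 255 the foreground wins,
# where it is 0 the background shows through.
from PIL import Image
toy_fore = Image.new("RGB", (4, 4), color=(255, 0, 0))   # red "foreground"
toy_back = Image.new("RGB", (4, 4), color=(0, 0, 255))   # blue "background"
toy_mask = Image.new("L", (4, 4), color=0)
toy_mask.paste(255, (0, 0, 2, 4))                        # left half opaque
toy_comp = Image.composite(toy_fore, toy_back, toy_mask)
print(toy_comp.getpixel((0, 0)), toy_comp.getpixel((3, 0)))  # (255, 0, 0) (0, 0, 255)
###Output
_____no_output_____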
###Markdown
Utils for Contours, Segmentation and BBox* Find figure contours* Convert contours to segmentation* Find bounding box(bbox)
###Code
# Find object contours of foreground image.
def findContours(imageSize:Callable, pathForegroundImage:Path, coordinates:Tuple[int, int]):
_new_mask = Image.new("L", imageSize, color=0)
_fore_img = Image.open(pathForegroundImage)
_fore_img_mask = _fore_img.getchannel(3)
_new_mask.paste(_fore_img_mask, (coordinates))
_new_fname = pathForegroundImage.stem + ".png"
_path_mask = mask_path.joinpath(_new_fname)
_new_mask.save(_path_mask)
contours = measure.find_contours(_new_mask, 0.8)
new_contours = []
for cont in contours:
        for i in range(len(cont)):
_contours = (cont[i][1], cont[i][0])
new_contours.append(_contours)
new_contours_array = np.array(new_contours)
re = new_contours_array.shape[0]
new_contours_array = new_contours_array.reshape(-1, re, 2)
return new_contours_array, new_contours_array.size, _path_mask
# Convert contours to segmentation given
def contours2Segmentations(contours_array, tolerance:float=1.0, preserve_topology:bool=False):
segmentations = []
polygons = []
for contour in contours_array:
poly = Polygon(contour)
poly = poly.simplify(tolerance, preserve_topology=preserve_topology)
polygons.append(poly)
seg = np.array(poly.exterior.coords).ravel().tolist()
segmentations.append(seg)
return polygons, segmentations
# Find bbox of the given polygon.
def seg2BBoxArea(polygons):
multi_poly = MultiPolygon(polygons)
x, y, max_x, max_y = multi_poly.bounds
width = max_x - x
height = max_y - y
bbox = [x, y, width, height]
area = multi_poly.area
return [bbox, area]
###Output
_____no_output_____
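###Markdown
A small worked example of the contour-to-bbox conversion above, using a made-up square contour instead of a real mask so the numbers are easy to verify by hand:
###Code
# Hypothetical check: a 10x10 square placed at (5, 5) should give bbox [5, 5, 10, 10] and area 100.
square_contour = np.array([[[5, 5], [15, 5], [15, 15], [5, 15], [5, 5]]], dtype=float)
square_polys, square_segs = contours2Segmentations(square_contour)
print(square_segs)
print(seg2BBoxArea(square_polys))  # expected: [[5.0, 5.0, 10.0, 10.0], 100.0]
###Output
_____no_output_____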
###Markdown
Utils to Create JSON * Create dictionaries to compose into JSON format
###Code
# Add "images" to dict.
def addImagesDict(pathComposite:Path, fileName:str, image_id:int) -> dict:
_image = mpimg.imread(pathComposite)
height, width, _ = _image.shape
return {"images": [{"file_name": fileName, "id": image_id, "height": height, "width": width}]}
# Add "categories" to dict
def addCategoriesDict(supercategory:str, category_id:int, category_name:str) -> dict:
return {"categories": [{"supercategory": supercategory, "id": category_id, "name": category_name}]}
# Add "annotations" to dict
def addAnnotationsDict(segmentation:list,
iscrowd:int,
area:float,
image_id:int,
bbox:list,
category_id:int,
id:int) -> dict:
return {"annotations": [{"segmentation": segmentation, "iscrowd": iscrowd, "area": area, "image_id":image_id, "bbox": bbox, "category_id": category_id, "id": id}]}
###Output
_____no_output_____
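###Markdown
To show how the three helpers above fit together, here is a sketch that assembles one image, one category and one annotation into a single COCO-style dictionary and writes it out; the values are placeholders, and `buildCocoDict` plus the file name 'example_coco.json' are my own glue rather than part of the helpers:
###Code
# Hypothetical assembly: merge the dicts produced by the add*Dict helpers and dump to JSON.
def buildCocoDict(images_dict, categories_dict, annotations_dict) -> dict:
    return {"images": images_dict["images"],
            "categories": categories_dict["categories"],
            "annotations": annotations_dict["annotations"]}

toy_images = {"images": [{"file_name": "synthetic_antibiogram_1.jpg", "id": 1, "height": 600, "width": 600}]}
toy_categories = addCategoriesDict("antibiogram", 1, "antimicrobial_disk")
toy_annotations = addAnnotationsDict(segmentation=[[0.0, 0.0, 10.0, 0.0, 10.0, 10.0]],
                                     iscrowd=0, area=50.0, image_id=1,
                                     bbox=[0.0, 0.0, 10.0, 10.0], category_id=1, id=1)
coco_dict = buildCocoDict(toy_images, toy_categories, toy_annotations)
with open(coco_annot_path.joinpath('example_coco.json'), 'w') as f:
    json.dump(coco_dict, f)
###Output
_____no_output_____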
###Markdown
Visualize Dataset* First we list all image files in foreground and background directories.* Visualize all images in these directories.
###Code
# List foreground dir
print(f'Total of {len(fore_path.ls())} images')
fore_path.ls()
# List background dir
print(f'Total of {len(back_path.ls())} images')
back_path.ls()
###Output
Total of 1 images
###Markdown
As mentioned in the introduction, the following four cells contain images of:1) Foreground image 1: single antimicrobial disk image2) Foreground image 2: zone of inhibition image with its center containing an antimicrobial disk3) The mentioned antimicrobial disk in image 2 as a stand-alone image4) Background image: Petri dish imageWe will paste 1) and 2) on 4). We can find the contours of foreground images 1) and 2). But we also need the contour of the disk at the center of the zone of inhibition, image 2). Therefore, I had to create a separate image of the disk at the center of the zone of inhibition, image 3).With the images mentioned above, we can find contours of:a) Foreground image 1, single diskb) Foreground image 2, zone of inhibitionc) Foreground image 2, disk at the center of the zone of inhibitionPlease note the following:* The second image (zone of inhibition) and the third image (antimicrobial disk at the center of the zone of inhibition) must be the same size.* You can add more than one single antimicrobial disk, but you need to make sure that all antimicrobial disk foreground images you add are the same size.* The same size rule also applies to zone of inhibition images.
###Code
# This is a foreground image of a single type of antimicrobial disk.
# I also call this type of foreground image, disk/only images.
fore1 = fore_path.ls()[0]
displaySingleImage(fore1, figsize=(3, 3))
# This is a zone of inhibition type foreground image.
# At its center, it also contains an antimicrobial disk.
# I also call this type of image a zone image.
fore2 = fore_path.ls()[1]
displaySingleImage(fore2, figsize=(3, 3))
# In this foreground image, I displayed the previous image without zone of inhibition.
# In other words, I showed only the antimicrobial disk at the center of the previous zone of inhibition image.
# I also call this disk/zone images.
fore3 = fore_path.ls()[2]
displaySingleImage(fore3, figsize=(3, 3))
# background image
back = back_path.ls()[0]
displaySingleImage(back, figsize=(5, 5))
###Output
_____no_output_____
###Markdown
Synthetic Image Create Synthetic ImageIn this section:* Generate coordinates to paste the different foreground images.* Paste different foreground images into a background image.* Composite a synthetic image from the foreground, background and mask.
###Code
# Paths to foreground and backgrond images
AML25_disk = fore_path.ls()[0] # foreground -> coord_AML25
CT25_zone = fore_path.ls()[1] # foreground -> coord_CT25
CT25_disk_zone = fore_path.ls()[2] # foreground -> coord_CT25, use the same coordinate as CT25_zone
back_petri = back_path.ls()[0] # background
###Output
_____no_output_____
###Markdown
Note: ***Remember to give "image_id" a new number every time you want to create a new synthetic image.***
###Code
# Select an id number.
image_id = 1 # must be int
# Declare image name, to be saved in train directory and used for training
synthetic_fname = "synthetic_antibiogram_" + str(image_id) + ".jpg"
# Declare image name, to be saved in mask directory
mask_fname = "synthetic_mask_" + str(image_id) + ".png"
# Declare image name, to be saved in coco annotation directory
json_fname = "synthetic_json_" + str(image_id) + ".json"
###Output
_____no_output_____
###Markdown
**Generate new sets of coordinate by executing the following cell.**
###Code
# Generate random coordinates.
# To generate another set of random coordinates, re-execute this cell.
coord_AML25 = getRandomCoordinate(init_x=310, init_y=200, x_generate=300, y_generate=1) # coordinate for disk only image
coord_CT25 = getRandomCoordinate(init_x=190, init_y=350, x_generate=100, y_generate=100) # coordinate for disk/zone + zone images
# Create synthetic image and paste on the coordinates generated in above cells.
# Note: CT25_zone and CT25_disk_zone are to be pasted in the same coordinate(coord_CT25).
coordinates = [coord_AML25, coord_CT25]
image_foreground = [AML25_disk, CT25_zone, CT25_disk_zone]
# Background image.
background_img = back_path.ls()[0]
back_size = imageSize(background_img)
# A synthetic image is generated by composing two images(new_foreground, background_img) with
# a mask(new_mask).
new_foreground = imageNewForeground(back_size, image_foreground, coordinates)
new_mask, path_mask = imageNewMask(back_size, image_foreground, coordinates, mask_fname)
# Save the generated image in train dir.
path_composite_img, path_train = imageNewComposite(new_foreground,
background_img,
new_mask,
synthetic_fname)
# Synthetic image
print(f'File saved in "{path_train}"')
displaySingleImage(path_composite_img)
# Mask of the synthetic image
print(f'File saved in "{path_mask}"')
displaySingleImage(path_mask)
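# Hypothetical sketch (an assumption, not the actual imageNewComposite implementation,
# which is defined earlier in this notebook): the compositing step boils down to PIL's
# Image.composite, which takes pixels from the foreground wherever the mask is white
# and from the background everywhere else.
from PIL import Image

def sketch_composite(foreground_path, background_path, mask_path, out_path):
    fg = Image.open(foreground_path).convert("RGB")
    bg = Image.open(background_path).convert("RGB").resize(fg.size)
    mask = Image.open(mask_path).convert("L").resize(fg.size)  # single-channel mask
    Image.composite(fg, bg, mask).save(out_path)

# Illustrative usage with made-up paths:
# sketch_composite("foreground.png", "petri_dish.jpg", "mask.png", "sketch_composite.jpg")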
###Output
File saved in "/content/biovision/data/images/train/synthetic_antibiogram_1.jpg"
###Markdown
Segmentation and BBox In this section: * Find the contour of each foreground image. * Convert the contours into segmentations. * Find each segmentation's bbox. We have 3 foreground images in 2 coordinates. Please note that: * Foreground images CT25_zone and CT25_disk_zone will use the same coordinate. * This is because the CT25_disk_zone image resides on top of CT25_zone; in other words, they are part of the same image. Foreground Image AML_25 Segmentation and BBOX
###Code
# Foreground image AML_25(disk/only) in coordinate AML_25
contours_AML25, size_AML25, path_mask_AML25 = findContours(back_size, image_foreground[0], coordinates[0])
polygons_AML25, segment_AML25 = contours2Segmentations(contours_AML25)
print(f'Segmentation AML25: {segment_AML25}')
bbox_AML25, area_AML25 = seg2BBoxArea(polygons_AML25)
print(f'BBox coordinates AML25: {bbox_AML25}')
print(f'BBox area AML25: {area_AML25}')
displaySingleImage(path_mask_AML25, figsize=(4, 4))
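# Hypothetical sketch (an assumption) of what helpers such as contours2Segmentations and
# seg2BBoxArea typically do: an OpenCV contour of shape (N, 1, 2) is flattened into the
# COCO polygon format [x1, y1, x2, y2, ...], the bbox is [x_min, y_min, width, height],
# and the area can be taken from the polygon itself (shoelace formula).
import numpy as np

def contour_to_coco(contour):
    pts = np.asarray(contour, dtype=float).reshape(-1, 2)
    segmentation = pts.flatten().tolist()
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    bbox = [float(x_min), float(y_min), float(x_max - x_min), float(y_max - y_min)]
    x, y = pts[:, 0], pts[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))  # shoelace formula
    return segmentation, bbox, float(area)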
###Output
Segmentation AML25: [[479.0, 272.2, 484.0, 269.2, 496.2, 256.0, 496.2, 217.0, 483.0, 201.8, 479.0, 199.8, 446.0, 199.07272727272726, 440.0, 200.8, 429.8, 210.0, 424.8, 217.0, 424.8, 254.0, 427.8, 259.0, 442.0, 272.2, 479.0, 272.2]]
BBox coordinates AML25: [424.8, 199.07272727272726, 71.39999999999998, 73.12727272727273]
BBox area AML25: 4658.761818181816
###Markdown
Foreground Image CT25 Segmentation and BBOX * Foreground images CT25_zone and CT25_disk_zone will use the same coordinate. * This is because the CT25_disk_zone image resides on top of CT25_zone; in other words, they are part of the same image.
###Code
# Foreground image CT25(zone) in coordinate CT_25
contours_CT25_zone, size_CT25_zone, path_mask_CT25_zone = findContours(back_size, image_foreground[1], coordinates[1])
polygons_CT25_zone, segment_CT25_zone = contours2Segmentations(contours_CT25_zone)
print(f'Segmentation CT25 Zone: {segment_CT25_zone}')
bbox_CT25_zone, area_CT25_zone = seg2BBoxArea(polygons_CT25_zone)
print(f'BBox coordinates CT25 Zone: {bbox_CT25_zone}')
print(f'BBox area CT25 Zone: {area_CT25_zone}')
displaySingleImage(path_mask_CT25_zone, figsize=(4, 4))
# Foreground image CT25(disk/zone) in coordinate CT_25
contours_CT25_diskZone, size_CT25_diskZone, path_mask_CT25_diskZone = findContours(back_size, image_foreground[2], coordinates[1])
polygons_CT25_diskZone, segment_CT25_diskZone = contours2Segmentations(contours_CT25_diskZone)
print(f'Segmentation CT25 Disk/Zone: {segment_CT25_diskZone}')
bbox_CT25_diskZone, area_CT25_diskZone = seg2BBoxArea(polygons_CT25_diskZone)
print(f'BBox coordinates CT25 Disk/Zone: {bbox_CT25_diskZone}')
print(f'BBox area CT25 Disk/Zone: {area_CT25_diskZone}')
print(path_mask_CT25_diskZone)
displaySingleImage(path_mask_CT25_diskZone, figsize=(4, 4))
###Output
Segmentation CT25 Disk/Zone: [[383.0, 486.6, 386.0, 481.9, 392.0, 481.8666666666667, 398.0, 479.6, 409.8, 469.0, 413.6, 464.0, 417.8666666666667, 444.0, 412.6, 428.0, 397.0, 411.4, 386.0, 408.4, 370.2, 409.0, 364.0, 406.1333333333333, 359.4, 410.0, 359.8, 414.0, 354.0, 414.1333333333333, 351.4, 416.0, 352.1142857142857, 422.0, 344.1333333333333, 434.0, 343.1333333333333, 450.0, 346.4, 463.0, 350.0, 468.2, 361.0, 476.73333333333335, 361.4, 483.0, 365.0, 486.6, 368.0, 486.6, 371.0, 480.9, 376.2, 483.0, 379.0, 486.6, 383.0, 486.6]]
BBox coordinates CT25 Disk/Zone: [343.1333333333333, 406.1333333333333, 74.73333333333335, 80.4666666666667]
BBox area CT25 Disk/Zone: 4469.251428571431
/content/biovision/data/images/masks/zone_CT25_disk.png
###Markdown
Preprocess the Data for JSON Format * Convert the segmentation and bbox points into JSON format. * To fulfill the basic COCO format requirement, the JSON file that we need to create should contain the following keys: "images", "categories" and "annotations". Key: "images" Sample: {'file_name': , 'height': int, 'id': int, 'width': int} Important: * Check that "image_id" is correct and verify that no other image uses the same image_id. The value of each image_id must be unique; in other words, you CANNOT have two image_ids with the same value. * The same applies to "filename". * "image_id" and "filename" should match: there is only one "image_id" for a given "filename". Ex: image_id = 0; synthetic_image_1.jpg image_id = 1; synthetic_image_2.jpg image_id = 2; synthetic_image_3.jpg ...
###Code
id_image = image_id
filename = synthetic_fname
images_dict = addImagesDict(path_composite_img, filename, id_image)
images_dict_0 = images_dict['images'][0]
images_dict_0
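# Optional sanity check (an assumption, not part of the original pipeline): because every
# image_id/file_name pair must be unique across all generated annotation files, a small
# scan of the coco_annotation directory can catch duplicates before training.
import json

def check_unique_image_ids(annotation_dir):
    seen = {}
    for json_path in sorted(annotation_dir.glob("*.json")):
        with open(json_path) as f:
            for img in json.load(f).get("images", []):
                if img["id"] in seen:
                    print(f'Duplicate image id {img["id"]}: {json_path.name} and {seen[img["id"]]}')
                seen[img["id"]] = json_path.name
    print(f'Checked {len(seen)} unique image ids')

# check_unique_image_ids(coco_annot_path)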
###Output
_____no_output_____
###Markdown
Key: "categories"Sample:{'id': 0, 'name': 'disk', 'supercategory': 'none'}
###Code
# First category
supercategory = "none"
id_disk = 0
name_disk = "disk"
category_disk_dict = addCategoriesDict(supercategory, id_disk, name_disk)
disk_dict = (category_disk_dict['categories'][0])
# Second category
supercategory = "none"
id_zone = 1
name_zone = "zone"
category_zone_dict = addCategoriesDict(supercategory, id_zone, name_zone)
zone_dict = (category_zone_dict['categories'][0])
# Make sure you execute this only once, otherwise it will append new categories.
categories_dict = {'categories':[]}
categories_dict['categories'].append(disk_dict)
categories_dict['categories'].append(zone_dict)
cat_0 = categories_dict['categories'][0]
cat_1 = categories_dict['categories'][1]
cat_0, cat_1
###Output
_____no_output_____
###Markdown
Key: "annotations"Sample:{'area': float, 'bbox': , 'category_id': int, 'id': int, 'image_id': int, 'iscrowd': int, 'segmentation': }Reminder:* category disk: 0* category zone: 1* iscrowd = 0We have 3 annotations data, one for each foreground image.
###Code
iscrowd = 0 # This nb does not support RLE.
# Annotation for AML_25
category_id = 0 # disk image
id_annot_1 = 1
annotations_dict_AML_25 = addAnnotationsDict(segment_AML25,
iscrowd,
area_AML25,
image_id,
bbox_AML25,
category_id,
id_annot_1)
annot_dict_AML_25 = (annotations_dict_AML_25['annotations'][0])
print(annot_dict_AML_25)
# Annotation for CT25 Zone
category_id = 1 # zone image
id_annot_2 = 2
annotations_dict_CT5_zone = addAnnotationsDict(segment_CT25_zone,
iscrowd,
area_CT25_zone,
image_id,
bbox_CT25_zone,
category_id,
id_annot_2)
annot_dict_CT5_zone = (annotations_dict_CT5_zone['annotations'][0])
print(annot_dict_CT5_zone)
# Annotation for CT25 Disk/Zone
category_id = 0
id_annot_3 = 3
annotations_dict_CT25_diskZone = addAnnotationsDict(segment_CT25_diskZone,
iscrowd,
area_CT25_diskZone,
image_id,
bbox_CT25_diskZone,
category_id,
id_annot_3)
annot_dict_CT25_diskZone = (annotations_dict_CT25_diskZone['annotations'][0])
print(annot_dict_CT25_diskZone)
###Output
{'segmentation': [[383.0, 486.6, 386.0, 481.9, 392.0, 481.8666666666667, 398.0, 479.6, 409.8, 469.0, 413.6, 464.0, 417.8666666666667, 444.0, 412.6, 428.0, 397.0, 411.4, 386.0, 408.4, 370.2, 409.0, 364.0, 406.1333333333333, 359.4, 410.0, 359.8, 414.0, 354.0, 414.1333333333333, 351.4, 416.0, 352.1142857142857, 422.0, 344.1333333333333, 434.0, 343.1333333333333, 450.0, 346.4, 463.0, 350.0, 468.2, 361.0, 476.73333333333335, 361.4, 483.0, 365.0, 486.6, 368.0, 486.6, 371.0, 480.9, 376.2, 483.0, 379.0, 486.6, 383.0, 486.6]], 'iscrowd': 0, 'area': 4469.251428571431, 'image_id': 1, 'bbox': [343.1333333333333, 406.1333333333333, 74.73333333333335, 80.4666666666667], 'category_id': 0, 'id': 3}
###Markdown
Convert to JSON Format (Customized COCO) Now that we have the "images", "categories", and "annotations" keys, we can go ahead and create a JSON file that fulfills the COCO format requirement.
###Code
# Create an empty-valued dictionary with the keys that you will use in your custom (COCO format) JSON file.
coco_json = {"images":[], "categories":[], "annotations":[]}
# Append values in images key.
coco_json['images'].append(images_dict_0)
coco_json
# Append values in categories key.
coco_json['categories'].append(cat_0)
coco_json['categories'].append(cat_1)
coco_json
# Append values in annotations key.
annotations_coco = [annot_dict_AML_25, annot_dict_CT5_zone, annot_dict_CT25_diskZone]
for _annot in annotations_coco:
coco_json['annotations'].append(_annot)
print(coco_json)
# Save the image annotation in the coco_annotation directory.
json_coco = json.loads(json.dumps(coco_json, indent=4))
synthetic_json = coco_annot_path.joinpath(json_fname)
with open(synthetic_json, 'w') as coco:
json.dump(json_coco, coco, indent=4)
# Print synthetic_json.
with open(synthetic_json) as json_file:
data = json.load(json_file)
for key, value in data.items():
print(key)
print(value)
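# Optional structural check (an assumption, not part of the original workflow). A stricter
# validation is to load the file with the COCO API, e.g.
#   from pycocotools.coco import COCO; COCO(str(synthetic_json))
def validate_coco_file(json_path):
    with open(json_path) as f:
        coco_data = json.load(f)
    assert {"images", "categories", "annotations"} <= set(coco_data), "missing top-level keys"
    image_ids = {img["id"] for img in coco_data["images"]}
    assert all(a["image_id"] in image_ids for a in coco_data["annotations"]), "orphan annotation"
    print(f'{json_path} looks structurally valid')

# validate_coco_file(synthetic_json)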
###Output
images
[{'file_name': 'synthetic_antibiogram_1.jpg', 'id': 1, 'height': 754, 'width': 980}]
categories
[{'supercategory': 'none', 'id': 0, 'name': 'disk'}, {'supercategory': 'none', 'id': 1, 'name': 'zone'}]
annotations
[{'segmentation': [[479.0, 272.2, 484.0, 269.2, 496.2, 256.0, 496.2, 217.0, 483.0, 201.8, 479.0, 199.8, 446.0, 199.07272727272726, 440.0, 200.8, 429.8, 210.0, 424.8, 217.0, 424.8, 254.0, 427.8, 259.0, 442.0, 272.2, 479.0, 272.2]], 'iscrowd': 0, 'area': 4658.761818181816, 'image_id': 1, 'bbox': [424.8, 199.07272727272726, 71.39999999999998, 73.12727272727273], 'category_id': 0, 'id': 1}, {'segmentation': [[405.0, 549.2, 417.0, 540.2, 442.0, 526.2, 450.2, 518.0, 462.2, 501.0, 466.2, 491.0, 474.2, 478.0, 474.84, 434.0, 472.2, 405.0, 466.2, 396.0, 457.0, 386.8, 452.0, 381.8, 444.0, 377.8, 431.0, 363.8, 418.0, 358.8, 409.0, 352.8, 402.0, 351.8, 398.0, 353.8, 390.0, 353.8, 382.0, 349.8, 372.0, 351.8, 359.0, 351.8, 345.0, 357.8, 327.0, 369.8, 320.0, 372.8, 307.0, 382.8, 297.8, 394.0, 293.8, 402.0, 286.8, 424.0, 285.8, 471.0, 290.8, 478.0, 292.8, 486.0, 297.8, 495.0, 308.8, 507.0, 313.8, 515.0, 328.0, 527.2, 337.0, 531.2, 350.0, 540.2, 362.0, 544.2, 368.0, 549.2, 405.0, 549.2]], 'iscrowd': 0, 'area': 29577.839999999997, 'image_id': 1, 'bbox': [285.8, 349.8, 189.03999999999996, 199.40000000000003], 'category_id': 1, 'id': 2}, {'segmentation': [[383.0, 486.6, 386.0, 481.9, 392.0, 481.8666666666667, 398.0, 479.6, 409.8, 469.0, 413.6, 464.0, 417.8666666666667, 444.0, 412.6, 428.0, 397.0, 411.4, 386.0, 408.4, 370.2, 409.0, 364.0, 406.1333333333333, 359.4, 410.0, 359.8, 414.0, 354.0, 414.1333333333333, 351.4, 416.0, 352.1142857142857, 422.0, 344.1333333333333, 434.0, 343.1333333333333, 450.0, 346.4, 463.0, 350.0, 468.2, 361.0, 476.73333333333335, 361.4, 483.0, 365.0, 486.6, 368.0, 486.6, 371.0, 480.9, 376.2, 483.0, 379.0, 486.6, 383.0, 486.6]], 'iscrowd': 0, 'area': 4469.251428571431, 'image_id': 1, 'bbox': [343.1333333333333, 406.1333333333333, 74.73333333333335, 80.4666666666667], 'category_id': 0, 'id': 3}]
###Markdown
View Image in Detectron2 We will train, validate and perform inference on the data in Detectron2 in another notebook. Here we use the same library's Visualizer API to view the annotated image. Import Library and Visualize
###Code
from detectron2.data.datasets import register_coco_instances
instance_name = "Synthetic_Image"
train_dir = str(train_path)
register_coco_instances(instance_name,
{},
synthetic_json,
train_dir)
from detectron2.data import MetadataCatalog, DatasetCatalog
metadata = MetadataCatalog.get(instance_name)
dataset_dicts = DatasetCatalog.get(instance_name)
import random
import cv2
from detectron2.utils.visualizer import Visualizer
from google.colab.patches import cv2_imshow
for d in random.sample(dataset_dicts, 1):
img = cv2.imread(d["file_name"])
visualizer = Visualizer(img[:, :, ::-1], metadata=metadata, scale=0.5)
vis = visualizer.draw_dataset_dict(d)
cv2_imshow(vis.get_image()[:, :, ::-1])
vis.save('/content/synthetic_annotation.jpg')
###Output
_____no_output_____
###Markdown
Pack Data and Upload to Local Drive * You can download the data to your local drive or to your Google Drive. * You can use the following cells to zip all the annotations and images created when generating the synthetic images and download them to your local drive.
###Code
# Zip all created contents, both images and annotations.
#!zip -r /content/<fname>.zip /content/biovision/data/images/coco_annotation
#!zip -r /content/<fname>.zip /content/biovision/data/images/train
# Download to your local drive
from google.colab import files
# files.download("/content/<fname>.zip")
# files.download("/content/<fname>.zip")
###Output
_____no_output_____ |
fashionMNIST-5.ipynb | ###Markdown
###Code
# Imports
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
import matplotlib.pyplot as plt
import numpy as np
from copy import deepcopy
from torch.utils.data import Dataset
from torchvision import datasets, models, transforms
from torchvision.transforms import ToTensor
import copy
# Enable GPU if available
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using {device} device")
def build_resnet_model():
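# Adapt the pretrained ResNet-50 to this task: swap the first conv for a 1-channel
# (grayscale) version, freeze all parameters (note that the replacement conv1 is frozen
# too, so it keeps its random initialization), and attach a fresh 10-class linear head;
# only model.fc is trained.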
model = models.resnet50(pretrained=True)
model.conv1 = nn.Conv2d(1, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
for param in model.parameters():
param.requires_grad = False
num_ftrs = model.fc.in_features
model.fc = nn.Linear(num_ftrs, 10)
return model
def build_vgg_model():
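# Same pattern as the ResNet builder: replace the first VGG-19 conv with a 1-channel
# version, freeze all pretrained weights (the new first conv included), and train only
# the new 10-class classifier[6] head.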
model = models.vgg19(pretrained=True)
model.features[0] = nn.Conv2d(1, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
for param in model.parameters():
param.requires_grad = False
num_ftrs = model.classifier[6].in_features
model.classifier[6] = nn.Linear(num_ftrs, 10)
return model
class SubsampleDataset(Dataset):
def __init__(self, data, targets):
self.data = data
self.targets = targets
def __getitem__(self, index):
x = self.data[index]
y = self.targets[index]
return x, y
def __len__(self):
return len(self.data)
def get_trainloader(sample_size):
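# Build a small, roughly class-balanced training subset: keep up to about `sample_size`
# images per digit class and stop once roughly sample_size*10 images have been collected,
# then wrap the subset in a DataLoader.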
train_transformer = transforms.Compose([transforms.Resize(224),transforms.ToTensor()])
train_set = torchvision.datasets.MNIST('./data', download=True, train=True, transform=train_transformer)
counter = np.zeros(10)
train_set_subsample = SubsampleDataset([], [])
for data in train_set:
image = data[0]
label = data[1]
if counter[label] <= sample_size:
train_set_subsample.data.append(deepcopy(image))
train_set_subsample.targets.append(deepcopy(label))
counter[label] += 1
if sum(counter) >= sample_size*10:
break
return torch.utils.data.DataLoader(train_set_subsample, batch_size=batch_size, shuffle=True, num_workers=2)
def get_testloader():
test_transformer = transforms.Compose([
transforms.Resize(224),
transforms.ToTensor(),
])
test_set = torchvision.datasets.MNIST('./data', download=True, train=False, transform=test_transformer)
return torch.utils.data.DataLoader(test_set, batch_size=batch_size, shuffle=False, num_workers=2)
# train model
def train(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
model.train()
for batch, (X, y) in enumerate(dataloader):
X, y = X.to(device), y.to(device)
# Compute prediction error
pred = model(X)
loss = loss_fn(pred, y)
# Backpropagation
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
print(f"Train Loss: {loss:>7f}")
# test
def test(dataloader, model, loss_fn):
size = len(dataloader.dataset)
num_batches = len(dataloader)
model.eval()
test_loss, correct = 0, 0
with torch.no_grad():
for X, y in dataloader:
X, y = X.to(device), y.to(device)
pred = model(X)
test_loss += loss_fn(pred, y).item()
correct += (pred.argmax(1) == y).type(torch.float).sum().item()
test_loss /= num_batches
correct /= size
print(f"Test Error: Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \n")
# parameters
epochs = 30
batch_size = 100
sizes = [10, 20, 30, 40, 50, 60, 70, 80, 90]
learning_rate=0.01
momentum=0.9
# resnet experiment
for sample_size in sizes:
model = build_resnet_model()
model.to(device)
optimizer=optim.SGD(model.fc.parameters(), lr=learning_rate, momentum=momentum)
loss_function = nn.CrossEntropyLoss()
trainloader = get_trainloader(sample_size)
testloader = get_testloader()
for t in range(epochs):
print(f"Model: Resnet, Size: {sample_size}, Epoch {t+1}\n-------------------------------")
train(trainloader, model, loss_function, optimizer)
test(testloader, model, loss_function)
# vgg experiment
for sample_size in sizes:
model = build_vgg_model()
model.to(device)
optimizer=optim.SGD(model.parameters(), lr=learning_rate, momentum=momentum)
loss_function = nn.CrossEntropyLoss()
trainloader = get_trainloader(sample_size)
testloader = get_testloader()
for t in range(epochs):
print(f"Model: Resnet, Size: {sample_size}, Epoch {t+1}\n-------------------------------")
train(trainloader, model, loss_function, optimizer)
test(testloader, model, loss_function)
###Output
Model: Resnet, Size: 10, Epoch 1
-------------------------------
Train Loss: 2.301983
Test Error: Accuracy: 11.5%, Avg loss: 2.271293
Model: Resnet, Size: 10, Epoch 2
-------------------------------
Train Loss: 2.292219
Test Error: Accuracy: 17.2%, Avg loss: 2.248342
Model: Resnet, Size: 10, Epoch 3
-------------------------------
Train Loss: 2.235973
Test Error: Accuracy: 26.8%, Avg loss: 2.222175
Model: Resnet, Size: 10, Epoch 4
-------------------------------
Train Loss: 2.191041
Test Error: Accuracy: 39.6%, Avg loss: 2.180398
Model: Resnet, Size: 10, Epoch 5
-------------------------------
Train Loss: 2.100339
Test Error: Accuracy: 42.1%, Avg loss: 2.121052
Model: Resnet, Size: 10, Epoch 6
-------------------------------
Train Loss: 2.035792
Test Error: Accuracy: 41.4%, Avg loss: 2.048936
Model: Resnet, Size: 10, Epoch 7
-------------------------------
Train Loss: 2.008691
Test Error: Accuracy: 43.1%, Avg loss: 1.970290
Model: Resnet, Size: 10, Epoch 8
-------------------------------
Train Loss: 1.910486
Test Error: Accuracy: 46.3%, Avg loss: 1.891961
Model: Resnet, Size: 10, Epoch 9
-------------------------------
Train Loss: 1.835727
Test Error: Accuracy: 48.4%, Avg loss: 1.819674
Model: Resnet, Size: 10, Epoch 10
-------------------------------
Train Loss: 1.729425
Test Error: Accuracy: 50.0%, Avg loss: 1.759874
Model: Resnet, Size: 10, Epoch 11
-------------------------------
Train Loss: 1.700410
Test Error: Accuracy: 50.5%, Avg loss: 1.709221
Model: Resnet, Size: 10, Epoch 12
-------------------------------
Train Loss: 1.611257
Test Error: Accuracy: 50.4%, Avg loss: 1.662817
Model: Resnet, Size: 10, Epoch 13
-------------------------------
Train Loss: 1.601552
Test Error: Accuracy: 50.6%, Avg loss: 1.621731
Model: Resnet, Size: 10, Epoch 14
-------------------------------
Train Loss: 1.422911
Test Error: Accuracy: 50.9%, Avg loss: 1.582528
Model: Resnet, Size: 10, Epoch 15
-------------------------------
Train Loss: 1.457706
Test Error: Accuracy: 51.2%, Avg loss: 1.544066
Model: Resnet, Size: 10, Epoch 16
-------------------------------
Train Loss: 1.326765
Test Error: Accuracy: 54.7%, Avg loss: 1.499668
Model: Resnet, Size: 10, Epoch 17
-------------------------------
Train Loss: 1.352792
Test Error: Accuracy: 58.3%, Avg loss: 1.457874
Model: Resnet, Size: 10, Epoch 18
-------------------------------
Train Loss: 1.279053
Test Error: Accuracy: 60.2%, Avg loss: 1.423604
Model: Resnet, Size: 10, Epoch 19
-------------------------------
Train Loss: 1.230533
Test Error: Accuracy: 59.6%, Avg loss: 1.397534
Model: Resnet, Size: 10, Epoch 20
-------------------------------
Train Loss: 1.263430
Test Error: Accuracy: 60.2%, Avg loss: 1.371310
Model: Resnet, Size: 10, Epoch 21
-------------------------------
Train Loss: 1.185581
Test Error: Accuracy: 61.2%, Avg loss: 1.344081
Model: Resnet, Size: 10, Epoch 22
-------------------------------
Train Loss: 1.211096
Test Error: Accuracy: 61.9%, Avg loss: 1.316580
Model: Resnet, Size: 10, Epoch 23
-------------------------------
Train Loss: 1.137997
Test Error: Accuracy: 62.0%, Avg loss: 1.294573
Model: Resnet, Size: 10, Epoch 24
-------------------------------
Train Loss: 1.137295
Test Error: Accuracy: 61.9%, Avg loss: 1.272763
Model: Resnet, Size: 10, Epoch 25
-------------------------------
Train Loss: 1.025087
Test Error: Accuracy: 62.4%, Avg loss: 1.250689
Model: Resnet, Size: 10, Epoch 26
-------------------------------
Train Loss: 1.039406
Test Error: Accuracy: 63.1%, Avg loss: 1.231301
Model: Resnet, Size: 10, Epoch 27
-------------------------------
Train Loss: 0.976269
Test Error: Accuracy: 63.3%, Avg loss: 1.218632
Model: Resnet, Size: 10, Epoch 28
-------------------------------
Train Loss: 1.041719
Test Error: Accuracy: 63.9%, Avg loss: 1.206803
Model: Resnet, Size: 10, Epoch 29
-------------------------------
Train Loss: 1.004355
Test Error: Accuracy: 64.5%, Avg loss: 1.193279
Model: Resnet, Size: 10, Epoch 30
-------------------------------
Train Loss: 1.026520
Test Error: Accuracy: 64.6%, Avg loss: 1.182829
Model: Resnet, Size: 20, Epoch 1
-------------------------------
Train Loss: 2.323709
Test Error: Accuracy: 10.2%, Avg loss: 2.308446
Model: Resnet, Size: 20, Epoch 2
-------------------------------
Train Loss: 2.316402
Test Error: Accuracy: 7.6%, Avg loss: 2.317115
Model: Resnet, Size: 20, Epoch 3
-------------------------------
Train Loss: 2.321569
Test Error: Accuracy: 11.3%, Avg loss: 2.315037
Model: Resnet, Size: 20, Epoch 4
-------------------------------
Train Loss: 2.356754
Test Error: Accuracy: 10.3%, Avg loss: 2.304370
Model: Resnet, Size: 20, Epoch 5
-------------------------------
Train Loss: 2.335059
Test Error: Accuracy: 9.8%, Avg loss: 2.310603
Model: Resnet, Size: 20, Epoch 6
-------------------------------
Train Loss: 2.321003
Test Error: Accuracy: 10.1%, Avg loss: 2.298542
Model: Resnet, Size: 20, Epoch 7
-------------------------------
Train Loss: 2.318574
Test Error: Accuracy: 11.3%, Avg loss: 2.306999
Model: Resnet, Size: 20, Epoch 8
-------------------------------
Train Loss: 2.295167
Test Error: Accuracy: 9.8%, Avg loss: 2.303487
Model: Resnet, Size: 20, Epoch 9
-------------------------------
Train Loss: 2.326913
Test Error: Accuracy: 10.5%, Avg loss: 2.290117
Model: Resnet, Size: 20, Epoch 10
-------------------------------
Train Loss: 2.374970
Test Error: Accuracy: 10.4%, Avg loss: 2.292667
Model: Resnet, Size: 20, Epoch 11
-------------------------------
Train Loss: 2.367107
Test Error: Accuracy: 11.4%, Avg loss: 2.284272
Model: Resnet, Size: 20, Epoch 12
-------------------------------
Train Loss: 2.351051
Test Error: Accuracy: 9.8%, Avg loss: 2.293300
Model: Resnet, Size: 20, Epoch 13
-------------------------------
Train Loss: 2.318155
Test Error: Accuracy: 23.4%, Avg loss: 2.291486
Model: Resnet, Size: 20, Epoch 14
-------------------------------
Train Loss: 2.350888
Test Error: Accuracy: 15.8%, Avg loss: 2.278242
Model: Resnet, Size: 20, Epoch 15
-------------------------------
Train Loss: 2.337543
Test Error: Accuracy: 15.7%, Avg loss: 2.267628
Model: Resnet, Size: 20, Epoch 16
-------------------------------
Train Loss: 2.323187
Test Error: Accuracy: 25.9%, Avg loss: 2.261710
Model: Resnet, Size: 20, Epoch 17
-------------------------------
Train Loss: 2.306069
Test Error: Accuracy: 19.2%, Avg loss: 2.263967
Model: Resnet, Size: 20, Epoch 18
-------------------------------
Train Loss: 2.325718
Test Error: Accuracy: 19.3%, Avg loss: 2.264016
Model: Resnet, Size: 20, Epoch 19
-------------------------------
Train Loss: 2.296652
Test Error: Accuracy: 18.1%, Avg loss: 2.259777
Model: Resnet, Size: 20, Epoch 20
-------------------------------
Train Loss: 2.281996
Test Error: Accuracy: 14.8%, Avg loss: 2.252940
Model: Resnet, Size: 20, Epoch 21
-------------------------------
Train Loss: 2.331332
Test Error: Accuracy: 11.3%, Avg loss: 2.254627
Model: Resnet, Size: 20, Epoch 22
-------------------------------
Train Loss: 2.351653
Test Error: Accuracy: 23.4%, Avg loss: 2.249899
Model: Resnet, Size: 20, Epoch 23
-------------------------------
Train Loss: 2.319525
Test Error: Accuracy: 9.8%, Avg loss: 2.250124
Model: Resnet, Size: 20, Epoch 24
-------------------------------
Train Loss: 2.322842
Test Error: Accuracy: 22.9%, Avg loss: 2.245923
Model: Resnet, Size: 20, Epoch 25
-------------------------------
Train Loss: 2.295096
Test Error: Accuracy: 22.9%, Avg loss: 2.240863
Model: Resnet, Size: 20, Epoch 26
-------------------------------
Train Loss: 2.336189
Test Error: Accuracy: 19.1%, Avg loss: 2.245780
Model: Resnet, Size: 20, Epoch 27
-------------------------------
Train Loss: 2.321564
Test Error: Accuracy: 9.9%, Avg loss: 2.240586
Model: Resnet, Size: 20, Epoch 28
-------------------------------
Train Loss: 2.344712
Test Error: Accuracy: 17.9%, Avg loss: 2.236546
Model: Resnet, Size: 20, Epoch 29
-------------------------------
Train Loss: 2.288113
Test Error: Accuracy: 19.2%, Avg loss: 2.243131
Model: Resnet, Size: 20, Epoch 30
-------------------------------
Train Loss: 2.315703
Test Error: Accuracy: 28.7%, Avg loss: 2.227361
Model: Resnet, Size: 30, Epoch 1
-------------------------------
Train Loss: 2.274686
Test Error: Accuracy: 9.7%, Avg loss: 2.302421
Model: Resnet, Size: 30, Epoch 2
-------------------------------
Train Loss: 2.304553
Test Error: Accuracy: 18.7%, Avg loss: 2.283104
Model: Resnet, Size: 30, Epoch 3
-------------------------------
Train Loss: 2.271912
Test Error: Accuracy: 16.4%, Avg loss: 2.269465
Model: Resnet, Size: 30, Epoch 4
-------------------------------
Train Loss: 2.245185
Test Error: Accuracy: 18.4%, Avg loss: 2.218505
Model: Resnet, Size: 30, Epoch 5
-------------------------------
Train Loss: 2.295144
Test Error: Accuracy: 26.1%, Avg loss: 2.179644
Model: Resnet, Size: 30, Epoch 6
-------------------------------
Train Loss: 2.196661
Test Error: Accuracy: 27.9%, Avg loss: 2.167939
Model: Resnet, Size: 30, Epoch 7
-------------------------------
Train Loss: 2.136851
Test Error: Accuracy: 33.1%, Avg loss: 2.105034
Model: Resnet, Size: 30, Epoch 8
-------------------------------
Train Loss: 2.090703
Test Error: Accuracy: 36.8%, Avg loss: 2.069726
Model: Resnet, Size: 30, Epoch 9
-------------------------------
Train Loss: 2.009847
Test Error: Accuracy: 17.7%, Avg loss: 2.079021
Model: Resnet, Size: 30, Epoch 10
-------------------------------
Train Loss: 2.131943
Test Error: Accuracy: 39.6%, Avg loss: 2.011069
Model: Resnet, Size: 30, Epoch 11
-------------------------------
Train Loss: 2.176607
Test Error: Accuracy: 41.7%, Avg loss: 1.987473
Model: Resnet, Size: 30, Epoch 12
-------------------------------
Train Loss: 1.966668
Test Error: Accuracy: 39.2%, Avg loss: 1.985653
Model: Resnet, Size: 30, Epoch 13
-------------------------------
Train Loss: 2.046799
Test Error: Accuracy: 37.8%, Avg loss: 1.944018
Model: Resnet, Size: 30, Epoch 14
-------------------------------
Train Loss: 1.990231
Test Error: Accuracy: 43.1%, Avg loss: 1.928470
Model: Resnet, Size: 30, Epoch 15
-------------------------------
Train Loss: 2.031476
Test Error: Accuracy: 42.6%, Avg loss: 1.896400
Model: Resnet, Size: 30, Epoch 16
-------------------------------
Train Loss: 1.930811
Test Error: Accuracy: 39.9%, Avg loss: 1.897009
Model: Resnet, Size: 30, Epoch 17
-------------------------------
Train Loss: 2.109744
Test Error: Accuracy: 43.8%, Avg loss: 1.864070
Model: Resnet, Size: 30, Epoch 18
-------------------------------
Train Loss: 2.015763
Test Error: Accuracy: 44.8%, Avg loss: 1.837176
Model: Resnet, Size: 30, Epoch 19
-------------------------------
Train Loss: 1.882130
Test Error: Accuracy: 37.1%, Avg loss: 1.865926
Model: Resnet, Size: 30, Epoch 20
-------------------------------
Train Loss: 2.034643
Test Error: Accuracy: 48.4%, Avg loss: 1.807385
Model: Resnet, Size: 30, Epoch 21
-------------------------------
Train Loss: 1.979377
Test Error: Accuracy: 45.5%, Avg loss: 1.795633
Model: Resnet, Size: 30, Epoch 22
-------------------------------
Train Loss: 1.802460
Test Error: Accuracy: 42.8%, Avg loss: 1.794149
Model: Resnet, Size: 30, Epoch 23
-------------------------------
Train Loss: 1.915007
Test Error: Accuracy: 50.9%, Avg loss: 1.757544
Model: Resnet, Size: 30, Epoch 24
-------------------------------
Train Loss: 1.826748
Test Error: Accuracy: 53.2%, Avg loss: 1.735832
Model: Resnet, Size: 30, Epoch 25
-------------------------------
Train Loss: 1.997048
Test Error: Accuracy: 45.9%, Avg loss: 1.750242
Model: Resnet, Size: 30, Epoch 26
-------------------------------
Train Loss: 1.982626
Test Error: Accuracy: 53.1%, Avg loss: 1.718977
Model: Resnet, Size: 30, Epoch 27
-------------------------------
Train Loss: 1.890839
Test Error: Accuracy: 47.3%, Avg loss: 1.740617
Model: Resnet, Size: 30, Epoch 28
-------------------------------
Train Loss: 1.999574
Test Error: Accuracy: 52.4%, Avg loss: 1.705772
Model: Resnet, Size: 30, Epoch 29
-------------------------------
Train Loss: 1.783511
Test Error: Accuracy: 52.6%, Avg loss: 1.691816
Model: Resnet, Size: 30, Epoch 30
-------------------------------
Train Loss: 1.831020
Test Error: Accuracy: 49.8%, Avg loss: 1.711427
Model: Resnet, Size: 40, Epoch 1
-------------------------------
Train Loss: 2.375866
Test Error: Accuracy: 12.6%, Avg loss: 2.208342
Model: Resnet, Size: 40, Epoch 2
-------------------------------
Train Loss: 2.209149
Test Error: Accuracy: 38.3%, Avg loss: 1.997617
Model: Resnet, Size: 40, Epoch 3
-------------------------------
Train Loss: 2.022115
Test Error: Accuracy: 51.5%, Avg loss: 1.778246
Model: Resnet, Size: 40, Epoch 4
-------------------------------
Train Loss: 1.824383
Test Error: Accuracy: 46.0%, Avg loss: 1.611917
Model: Resnet, Size: 40, Epoch 5
-------------------------------
Train Loss: 1.640113
Test Error: Accuracy: 55.9%, Avg loss: 1.454471
Model: Resnet, Size: 40, Epoch 6
-------------------------------
Train Loss: 1.577572
Test Error: Accuracy: 58.9%, Avg loss: 1.346284
Model: Resnet, Size: 40, Epoch 7
-------------------------------
Train Loss: 1.278899
Test Error: Accuracy: 64.1%, Avg loss: 1.232380
Model: Resnet, Size: 40, Epoch 8
-------------------------------
Train Loss: 1.258083
Test Error: Accuracy: 69.1%, Avg loss: 1.155725
Model: Resnet, Size: 40, Epoch 9
-------------------------------
Train Loss: 1.294443
Test Error: Accuracy: 66.6%, Avg loss: 1.117256
Model: Resnet, Size: 40, Epoch 10
-------------------------------
Train Loss: 1.088430
Test Error: Accuracy: 67.4%, Avg loss: 1.078145
Model: Resnet, Size: 40, Epoch 11
-------------------------------
Train Loss: 1.052037
Test Error: Accuracy: 71.3%, Avg loss: 1.031456
Model: Resnet, Size: 40, Epoch 12
-------------------------------
Train Loss: 1.179438
Test Error: Accuracy: 73.0%, Avg loss: 0.985527
Model: Resnet, Size: 40, Epoch 13
-------------------------------
Train Loss: 1.119224
Test Error: Accuracy: 73.0%, Avg loss: 0.972817
Model: Resnet, Size: 40, Epoch 14
-------------------------------
Train Loss: 1.019917
Test Error: Accuracy: 72.2%, Avg loss: 0.957684
Model: Resnet, Size: 40, Epoch 15
-------------------------------
Train Loss: 1.097564
Test Error: Accuracy: 73.5%, Avg loss: 0.946633
Model: Resnet, Size: 40, Epoch 16
-------------------------------
Train Loss: 0.823179
Test Error: Accuracy: 71.6%, Avg loss: 0.934551
Model: Resnet, Size: 40, Epoch 17
-------------------------------
Train Loss: 0.923182
Test Error: Accuracy: 75.1%, Avg loss: 0.891172
Model: Resnet, Size: 40, Epoch 18
-------------------------------
Train Loss: 0.786721
Test Error: Accuracy: 76.3%, Avg loss: 0.865399
Model: Resnet, Size: 40, Epoch 19
-------------------------------
Train Loss: 0.874915
Test Error: Accuracy: 73.9%, Avg loss: 0.878638
Model: Resnet, Size: 40, Epoch 20
-------------------------------
Train Loss: 0.892012
Test Error: Accuracy: 75.1%, Avg loss: 0.863894
Model: Resnet, Size: 40, Epoch 21
-------------------------------
Train Loss: 0.858579
Test Error: Accuracy: 73.4%, Avg loss: 0.869918
Model: Resnet, Size: 40, Epoch 22
-------------------------------
Train Loss: 0.733748
Test Error: Accuracy: 76.4%, Avg loss: 0.823428
Model: Resnet, Size: 40, Epoch 23
-------------------------------
Train Loss: 0.828302
Test Error: Accuracy: 76.5%, Avg loss: 0.822030
Model: Resnet, Size: 40, Epoch 24
-------------------------------
Train Loss: 0.959027
Test Error: Accuracy: 76.7%, Avg loss: 0.808784
Model: Resnet, Size: 40, Epoch 25
-------------------------------
Train Loss: 0.976270
Test Error: Accuracy: 75.7%, Avg loss: 0.822505
Model: Resnet, Size: 40, Epoch 26
-------------------------------
Train Loss: 0.712155
Test Error: Accuracy: 75.7%, Avg loss: 0.834623
Model: Resnet, Size: 40, Epoch 27
-------------------------------
Train Loss: 0.957431
Test Error: Accuracy: 76.9%, Avg loss: 0.791550
Model: Resnet, Size: 40, Epoch 28
-------------------------------
Train Loss: 0.988295
Test Error: Accuracy: 77.5%, Avg loss: 0.788193
Model: Resnet, Size: 40, Epoch 29
-------------------------------
Train Loss: 0.696338
Test Error: Accuracy: 77.3%, Avg loss: 0.790133
Model: Resnet, Size: 40, Epoch 30
-------------------------------
Train Loss: 0.824472
Test Error: Accuracy: 77.3%, Avg loss: 0.766831
Model: Resnet, Size: 50, Epoch 1
-------------------------------
Train Loss: 2.351234
Test Error: Accuracy: 10.3%, Avg loss: 2.328089
Model: Resnet, Size: 50, Epoch 2
-------------------------------
Train Loss: 2.450995
Test Error: Accuracy: 17.9%, Avg loss: 2.256214
Model: Resnet, Size: 50, Epoch 3
-------------------------------
Train Loss: 2.330314
Test Error: Accuracy: 33.8%, Avg loss: 2.087570
Model: Resnet, Size: 50, Epoch 4
-------------------------------
Train Loss: 2.141202
Test Error: Accuracy: 36.7%, Avg loss: 1.996277
Model: Resnet, Size: 50, Epoch 5
-------------------------------
Train Loss: 2.043556
Test Error: Accuracy: 38.1%, Avg loss: 1.916874
Model: Resnet, Size: 50, Epoch 6
-------------------------------
Train Loss: 2.040507
Test Error: Accuracy: 46.9%, Avg loss: 1.843702
Model: Resnet, Size: 50, Epoch 7
-------------------------------
Train Loss: 1.898519
Test Error: Accuracy: 47.7%, Avg loss: 1.803582
Model: Resnet, Size: 50, Epoch 8
-------------------------------
Train Loss: 1.908322
Test Error: Accuracy: 47.5%, Avg loss: 1.759681
Model: Resnet, Size: 50, Epoch 9
-------------------------------
Train Loss: 1.989569
Test Error: Accuracy: 43.4%, Avg loss: 1.730658
Model: Resnet, Size: 50, Epoch 10
-------------------------------
Train Loss: 1.761289
Test Error: Accuracy: 45.7%, Avg loss: 1.707438
Model: Resnet, Size: 50, Epoch 11
-------------------------------
Train Loss: 1.807453
Test Error: Accuracy: 49.7%, Avg loss: 1.691342
Model: Resnet, Size: 50, Epoch 12
-------------------------------
Train Loss: 1.974722
Test Error: Accuracy: 50.9%, Avg loss: 1.679353
Model: Resnet, Size: 50, Epoch 13
-------------------------------
Train Loss: 1.812451
Test Error: Accuracy: 48.0%, Avg loss: 1.621871
Model: Resnet, Size: 50, Epoch 14
-------------------------------
Train Loss: 1.621148
Test Error: Accuracy: 49.3%, Avg loss: 1.669130
Model: Resnet, Size: 50, Epoch 15
-------------------------------
Train Loss: 1.741130
Test Error: Accuracy: 49.0%, Avg loss: 1.600331
Model: Resnet, Size: 50, Epoch 16
-------------------------------
Train Loss: 1.788379
Test Error: Accuracy: 51.9%, Avg loss: 1.565065
Model: Resnet, Size: 50, Epoch 17
-------------------------------
Train Loss: 1.886110
Test Error: Accuracy: 46.7%, Avg loss: 1.572266
Model: Resnet, Size: 50, Epoch 18
-------------------------------
Train Loss: 1.763919
Test Error: Accuracy: 53.0%, Avg loss: 1.521902
Model: Resnet, Size: 50, Epoch 19
-------------------------------
Train Loss: 1.880243
Test Error: Accuracy: 53.2%, Avg loss: 1.543615
Model: Resnet, Size: 50, Epoch 20
-------------------------------
Train Loss: 1.777204
Test Error: Accuracy: 52.9%, Avg loss: 1.537799
Model: Resnet, Size: 50, Epoch 21
-------------------------------
Train Loss: 1.738821
Test Error: Accuracy: 54.6%, Avg loss: 1.515767
Model: Resnet, Size: 50, Epoch 22
-------------------------------
Train Loss: 1.703941
Test Error: Accuracy: 52.8%, Avg loss: 1.527537
Model: Resnet, Size: 50, Epoch 23
-------------------------------
Train Loss: 1.501722
Test Error: Accuracy: 60.0%, Avg loss: 1.481290
Model: Resnet, Size: 50, Epoch 24
-------------------------------
Train Loss: 1.600759
Test Error: Accuracy: 53.3%, Avg loss: 1.479309
Model: Resnet, Size: 50, Epoch 25
-------------------------------
Train Loss: 1.744212
Test Error: Accuracy: 58.6%, Avg loss: 1.466406
Model: Resnet, Size: 50, Epoch 26
-------------------------------
Train Loss: 1.745041
Test Error: Accuracy: 49.9%, Avg loss: 1.496491
Model: Resnet, Size: 50, Epoch 27
-------------------------------
Train Loss: 1.736536
Test Error: Accuracy: 56.5%, Avg loss: 1.448550
Model: Resnet, Size: 50, Epoch 28
-------------------------------
Train Loss: 1.653199
Test Error: Accuracy: 54.2%, Avg loss: 1.463546
Model: Resnet, Size: 50, Epoch 29
-------------------------------
Train Loss: 1.524191
Test Error: Accuracy: 59.4%, Avg loss: 1.435129
Model: Resnet, Size: 50, Epoch 30
-------------------------------
Train Loss: 1.705352
Test Error: Accuracy: 54.6%, Avg loss: 1.430916
Model: Resnet, Size: 60, Epoch 1
-------------------------------
Train Loss: 2.321162
Test Error: Accuracy: 21.1%, Avg loss: 2.230772
Model: Resnet, Size: 60, Epoch 2
-------------------------------
Train Loss: 2.239422
Test Error: Accuracy: 38.1%, Avg loss: 2.115117
Model: Resnet, Size: 60, Epoch 3
-------------------------------
Train Loss: 2.164280
Test Error: Accuracy: 32.1%, Avg loss: 1.958188
Model: Resnet, Size: 60, Epoch 4
-------------------------------
Train Loss: 2.052208
Test Error: Accuracy: 44.2%, Avg loss: 1.833998
Model: Resnet, Size: 60, Epoch 5
-------------------------------
Train Loss: 1.977115
Test Error: Accuracy: 44.6%, Avg loss: 1.739370
Model: Resnet, Size: 60, Epoch 6
-------------------------------
Train Loss: 1.891433
Test Error: Accuracy: 49.6%, Avg loss: 1.675339
Model: Resnet, Size: 60, Epoch 7
-------------------------------
Train Loss: 1.705280
Test Error: Accuracy: 45.4%, Avg loss: 1.626842
Model: Resnet, Size: 60, Epoch 8
-------------------------------
Train Loss: 1.641647
Test Error: Accuracy: 49.6%, Avg loss: 1.564317
Model: Resnet, Size: 60, Epoch 9
-------------------------------
Train Loss: 1.724091
Test Error: Accuracy: 55.1%, Avg loss: 1.515266
Model: Resnet, Size: 60, Epoch 10
-------------------------------
Train Loss: 1.584554
Test Error: Accuracy: 50.0%, Avg loss: 1.533783
Model: Resnet, Size: 60, Epoch 11
-------------------------------
Train Loss: 1.499392
Test Error: Accuracy: 55.7%, Avg loss: 1.479754
Model: Resnet, Size: 60, Epoch 12
-------------------------------
Train Loss: 1.567584
Test Error: Accuracy: 50.2%, Avg loss: 1.482337
Model: Resnet, Size: 60, Epoch 13
-------------------------------
Train Loss: 1.593207
Test Error: Accuracy: 59.7%, Avg loss: 1.404447
Model: Resnet, Size: 60, Epoch 14
-------------------------------
Train Loss: 1.676635
Test Error: Accuracy: 59.6%, Avg loss: 1.400905
Model: Resnet, Size: 60, Epoch 15
-------------------------------
Train Loss: 1.508629
Test Error: Accuracy: 57.6%, Avg loss: 1.373498
Model: Resnet, Size: 60, Epoch 16
-------------------------------
Train Loss: 1.500761
Test Error: Accuracy: 59.6%, Avg loss: 1.370295
Model: Resnet, Size: 60, Epoch 17
-------------------------------
Train Loss: 1.566680
Test Error: Accuracy: 57.6%, Avg loss: 1.343242
Model: Resnet, Size: 60, Epoch 18
-------------------------------
Train Loss: 1.520072
Test Error: Accuracy: 59.7%, Avg loss: 1.328614
Model: Resnet, Size: 60, Epoch 19
-------------------------------
Train Loss: 1.580534
Test Error: Accuracy: 57.3%, Avg loss: 1.345446
Model: Resnet, Size: 60, Epoch 20
-------------------------------
Train Loss: 1.443282
Test Error: Accuracy: 56.3%, Avg loss: 1.331901
Model: Resnet, Size: 60, Epoch 21
-------------------------------
Train Loss: 1.500133
Test Error: Accuracy: 60.4%, Avg loss: 1.294208
Model: Resnet, Size: 60, Epoch 22
-------------------------------
Train Loss: 1.395770
Test Error: Accuracy: 59.1%, Avg loss: 1.275872
Model: Resnet, Size: 60, Epoch 23
-------------------------------
Train Loss: 1.367614
Test Error: Accuracy: 62.9%, Avg loss: 1.255292
Model: Resnet, Size: 60, Epoch 24
-------------------------------
Train Loss: 1.359241
Test Error: Accuracy: 61.7%, Avg loss: 1.239934
Model: Resnet, Size: 60, Epoch 25
-------------------------------
Train Loss: 1.504385
Test Error: Accuracy: 62.1%, Avg loss: 1.212051
Model: Resnet, Size: 60, Epoch 26
-------------------------------
Train Loss: 1.370265
Test Error: Accuracy: 63.9%, Avg loss: 1.212234
Model: Resnet, Size: 60, Epoch 27
-------------------------------
Train Loss: 1.166321
Test Error: Accuracy: 60.0%, Avg loss: 1.226192
Model: Resnet, Size: 60, Epoch 28
-------------------------------
Train Loss: 1.538485
Test Error: Accuracy: 63.7%, Avg loss: 1.189012
Model: Resnet, Size: 60, Epoch 29
-------------------------------
Train Loss: 1.441568
Test Error: Accuracy: 71.7%, Avg loss: 1.158906
Model: Resnet, Size: 60, Epoch 30
-------------------------------
Train Loss: 1.463110
Test Error: Accuracy: 61.2%, Avg loss: 1.205806
Model: Resnet, Size: 70, Epoch 1
-------------------------------
Train Loss: 2.354496
Test Error: Accuracy: 14.6%, Avg loss: 2.340076
Model: Resnet, Size: 70, Epoch 2
-------------------------------
Train Loss: 2.356266
Test Error: Accuracy: 10.4%, Avg loss: 2.268727
Model: Resnet, Size: 70, Epoch 3
-------------------------------
Train Loss: 2.230090
Test Error: Accuracy: 11.6%, Avg loss: 2.169272
Model: Resnet, Size: 70, Epoch 4
-------------------------------
Train Loss: 2.275096
Test Error: Accuracy: 21.8%, Avg loss: 2.071900
Model: Resnet, Size: 70, Epoch 5
-------------------------------
Train Loss: 2.235683
Test Error: Accuracy: 30.7%, Avg loss: 2.072952
Model: Resnet, Size: 70, Epoch 6
-------------------------------
Train Loss: 2.162395
Test Error: Accuracy: 40.0%, Avg loss: 1.967731
Model: Resnet, Size: 70, Epoch 7
-------------------------------
Train Loss: 1.995836
Test Error: Accuracy: 32.8%, Avg loss: 1.972740
Model: Resnet, Size: 70, Epoch 8
-------------------------------
Train Loss: 2.168135
Test Error: Accuracy: 33.8%, Avg loss: 1.904766
Model: Resnet, Size: 70, Epoch 9
-------------------------------
Train Loss: 2.064331
Test Error: Accuracy: 38.0%, Avg loss: 1.822179
Model: Resnet, Size: 70, Epoch 10
-------------------------------
Train Loss: 2.132960
Test Error: Accuracy: 39.2%, Avg loss: 1.864885
Model: Resnet, Size: 70, Epoch 11
-------------------------------
Train Loss: 2.048521
Test Error: Accuracy: 39.1%, Avg loss: 1.832785
Model: Resnet, Size: 70, Epoch 12
-------------------------------
Train Loss: 1.799026
Test Error: Accuracy: 35.9%, Avg loss: 1.860621
Model: Resnet, Size: 70, Epoch 13
-------------------------------
Train Loss: 1.947151
Test Error: Accuracy: 42.0%, Avg loss: 1.719549
Model: Resnet, Size: 70, Epoch 14
-------------------------------
Train Loss: 1.987869
Test Error: Accuracy: 42.0%, Avg loss: 1.714318
Model: Resnet, Size: 70, Epoch 15
-------------------------------
Train Loss: 1.839800
Test Error: Accuracy: 38.4%, Avg loss: 1.731825
Model: Resnet, Size: 70, Epoch 16
-------------------------------
Train Loss: 1.786609
Test Error: Accuracy: 52.6%, Avg loss: 1.661864
Model: Resnet, Size: 70, Epoch 17
-------------------------------
Train Loss: 1.886091
Test Error: Accuracy: 49.4%, Avg loss: 1.663081
Model: Resnet, Size: 70, Epoch 18
-------------------------------
Train Loss: 1.903739
Test Error: Accuracy: 48.8%, Avg loss: 1.644975
Model: Resnet, Size: 70, Epoch 19
-------------------------------
Train Loss: 1.827214
Test Error: Accuracy: 39.8%, Avg loss: 1.692792
Model: Resnet, Size: 70, Epoch 20
-------------------------------
Train Loss: 1.952472
Test Error: Accuracy: 36.3%, Avg loss: 1.726948
Model: Resnet, Size: 70, Epoch 21
-------------------------------
Train Loss: 1.878207
Test Error: Accuracy: 41.9%, Avg loss: 1.682338
Model: Resnet, Size: 70, Epoch 22
-------------------------------
Train Loss: 2.004186
Test Error: Accuracy: 43.9%, Avg loss: 1.632111
Model: Resnet, Size: 70, Epoch 23
-------------------------------
Train Loss: 1.876932
Test Error: Accuracy: 37.4%, Avg loss: 1.691873
Model: Resnet, Size: 70, Epoch 24
-------------------------------
Train Loss: 1.914191
Test Error: Accuracy: 41.5%, Avg loss: 1.727794
Model: Resnet, Size: 70, Epoch 25
-------------------------------
Train Loss: 2.002362
Test Error: Accuracy: 48.1%, Avg loss: 1.561070
Model: Resnet, Size: 70, Epoch 26
-------------------------------
Train Loss: 1.769259
Test Error: Accuracy: 42.8%, Avg loss: 1.674906
Model: Resnet, Size: 70, Epoch 27
-------------------------------
Train Loss: 1.902918
Test Error: Accuracy: 44.7%, Avg loss: 1.592273
Model: Resnet, Size: 70, Epoch 28
-------------------------------
Train Loss: 1.948112
Test Error: Accuracy: 42.4%, Avg loss: 1.598361
Model: Resnet, Size: 70, Epoch 29
-------------------------------
Train Loss: 1.855297
Test Error: Accuracy: 53.9%, Avg loss: 1.550490
Model: Resnet, Size: 70, Epoch 30
-------------------------------
Train Loss: 1.696175
Test Error: Accuracy: 42.7%, Avg loss: 1.600954
Model: Resnet, Size: 80, Epoch 1
-------------------------------
Train Loss: 2.376816
Test Error: Accuracy: 21.4%, Avg loss: 2.141352
Model: Resnet, Size: 80, Epoch 2
-------------------------------
Train Loss: 2.187439
Test Error: Accuracy: 32.4%, Avg loss: 1.825418
Model: Resnet, Size: 80, Epoch 3
-------------------------------
Train Loss: 1.851284
Test Error: Accuracy: 39.8%, Avg loss: 1.592200
Model: Resnet, Size: 80, Epoch 4
-------------------------------
Train Loss: 1.764883
Test Error: Accuracy: 44.0%, Avg loss: 1.483323
Model: Resnet, Size: 80, Epoch 5
-------------------------------
Train Loss: 1.631015
Test Error: Accuracy: 55.9%, Avg loss: 1.302346
Model: Resnet, Size: 80, Epoch 6
-------------------------------
Train Loss: 1.423474
Test Error: Accuracy: 61.8%, Avg loss: 1.203237
Model: Resnet, Size: 80, Epoch 7
-------------------------------
Train Loss: 1.312032
Test Error: Accuracy: 58.9%, Avg loss: 1.219624
Model: Resnet, Size: 80, Epoch 8
-------------------------------
Train Loss: 1.532045
Test Error: Accuracy: 68.5%, Avg loss: 1.095554
Model: Resnet, Size: 80, Epoch 9
-------------------------------
Train Loss: 1.107368
Test Error: Accuracy: 64.1%, Avg loss: 1.130952
Model: Resnet, Size: 80, Epoch 10
-------------------------------
Train Loss: 1.448038
Test Error: Accuracy: 66.2%, Avg loss: 1.088232
Model: Resnet, Size: 80, Epoch 11
-------------------------------
Train Loss: 1.130029
Test Error: Accuracy: 71.9%, Avg loss: 0.977451
Model: Resnet, Size: 80, Epoch 12
-------------------------------
Train Loss: 1.127069
Test Error: Accuracy: 72.0%, Avg loss: 0.967574
Model: Resnet, Size: 80, Epoch 13
-------------------------------
Train Loss: 1.093421
Test Error: Accuracy: 65.0%, Avg loss: 1.065477
Model: Resnet, Size: 80, Epoch 14
-------------------------------
Train Loss: 1.266386
Test Error: Accuracy: 67.0%, Avg loss: 0.996014
Model: Resnet, Size: 80, Epoch 15
-------------------------------
Train Loss: 1.055012
Test Error: Accuracy: 74.0%, Avg loss: 0.903812
Model: Resnet, Size: 80, Epoch 16
-------------------------------
Train Loss: 1.057240
Test Error: Accuracy: 67.4%, Avg loss: 0.978096
Model: Resnet, Size: 80, Epoch 17
-------------------------------
Train Loss: 1.155659
Test Error: Accuracy: 76.4%, Avg loss: 0.858372
Model: Resnet, Size: 80, Epoch 18
-------------------------------
Train Loss: 1.111396
Test Error: Accuracy: 72.4%, Avg loss: 0.911172
Model: Resnet, Size: 80, Epoch 19
-------------------------------
Train Loss: 0.942246
Test Error: Accuracy: 78.1%, Avg loss: 0.822497
Model: Resnet, Size: 80, Epoch 20
-------------------------------
Train Loss: 1.089227
Test Error: Accuracy: 72.5%, Avg loss: 0.915341
Model: Resnet, Size: 80, Epoch 21
-------------------------------
Train Loss: 1.106079
Test Error: Accuracy: 72.3%, Avg loss: 0.881971
Model: Resnet, Size: 80, Epoch 22
-------------------------------
Train Loss: 0.898839
Test Error: Accuracy: 73.6%, Avg loss: 0.853841
Model: Resnet, Size: 80, Epoch 23
-------------------------------
Train Loss: 1.119820
Test Error: Accuracy: 73.9%, Avg loss: 0.859119
Model: Resnet, Size: 80, Epoch 24
-------------------------------
Train Loss: 0.830776
Test Error: Accuracy: 73.4%, Avg loss: 0.833770
Model: Resnet, Size: 80, Epoch 25
-------------------------------
Train Loss: 0.934367
Test Error: Accuracy: 75.4%, Avg loss: 0.842417
Model: Resnet, Size: 80, Epoch 26
-------------------------------
Train Loss: 0.926458
Test Error: Accuracy: 76.5%, Avg loss: 0.803780
Model: Resnet, Size: 80, Epoch 27
-------------------------------
Train Loss: 0.964014
Test Error: Accuracy: 81.1%, Avg loss: 0.757150
Model: Resnet, Size: 80, Epoch 28
-------------------------------
Train Loss: 0.830291
Test Error: Accuracy: 77.0%, Avg loss: 0.782399
Model: Resnet, Size: 80, Epoch 29
-------------------------------
Train Loss: 1.159842
Test Error: Accuracy: 71.4%, Avg loss: 0.956551
Model: Resnet, Size: 80, Epoch 30
-------------------------------
Train Loss: 1.307734
Test Error: Accuracy: 77.3%, Avg loss: 0.767030
Model: Resnet, Size: 90, Epoch 1
-------------------------------
Train Loss: 2.272749
Test Error: Accuracy: 15.3%, Avg loss: 2.390102
Model: Resnet, Size: 90, Epoch 2
-------------------------------
Train Loss: 2.383688
Test Error: Accuracy: 12.2%, Avg loss: 2.318429
Model: Resnet, Size: 90, Epoch 3
-------------------------------
Train Loss: 2.306314
Test Error: Accuracy: 17.3%, Avg loss: 2.270910
Model: Resnet, Size: 90, Epoch 4
-------------------------------
Train Loss: 2.310758
Test Error: Accuracy: 21.9%, Avg loss: 2.171025
Model: Resnet, Size: 90, Epoch 5
-------------------------------
Train Loss: 2.206148
Test Error: Accuracy: 31.4%, Avg loss: 2.109096
Model: Resnet, Size: 90, Epoch 6
-------------------------------
Train Loss: 2.207768
Test Error: Accuracy: 32.0%, Avg loss: 2.064589
Model: Resnet, Size: 90, Epoch 7
-------------------------------
Train Loss: 2.170941
Test Error: Accuracy: 31.4%, Avg loss: 2.037484
Model: Resnet, Size: 90, Epoch 8
-------------------------------
Train Loss: 2.106835
Test Error: Accuracy: 24.2%, Avg loss: 1.978384
Model: Resnet, Size: 90, Epoch 9
-------------------------------
Train Loss: 2.116201
Test Error: Accuracy: 23.1%, Avg loss: 2.063090
Model: Resnet, Size: 90, Epoch 10
-------------------------------
Train Loss: 2.184778
Test Error: Accuracy: 30.2%, Avg loss: 2.029463
Model: Resnet, Size: 90, Epoch 11
-------------------------------
Train Loss: 2.299899
Test Error: Accuracy: 26.9%, Avg loss: 2.068666
Model: Resnet, Size: 90, Epoch 12
-------------------------------
Train Loss: 2.330736
Test Error: Accuracy: 29.9%, Avg loss: 1.917887
Model: Resnet, Size: 90, Epoch 13
-------------------------------
Train Loss: 2.170053
Test Error: Accuracy: 26.4%, Avg loss: 1.901212
Model: Resnet, Size: 90, Epoch 14
-------------------------------
Train Loss: 2.172714
Test Error: Accuracy: 37.1%, Avg loss: 1.877135
Model: Resnet, Size: 90, Epoch 15
-------------------------------
Train Loss: 2.032565
Test Error: Accuracy: 41.0%, Avg loss: 1.837603
Model: Resnet, Size: 90, Epoch 16
-------------------------------
Train Loss: 2.145882
Test Error: Accuracy: 35.1%, Avg loss: 1.858580
Model: Resnet, Size: 90, Epoch 17
-------------------------------
Train Loss: 2.126173
Test Error: Accuracy: 42.8%, Avg loss: 1.807864
Model: Resnet, Size: 90, Epoch 18
-------------------------------
Train Loss: 1.965152
Test Error: Accuracy: 49.4%, Avg loss: 1.810413
Model: Resnet, Size: 90, Epoch 19
-------------------------------
Train Loss: 1.964378
Test Error: Accuracy: 35.6%, Avg loss: 1.835019
Model: Resnet, Size: 90, Epoch 20
-------------------------------
Train Loss: 2.015275
Test Error: Accuracy: 25.9%, Avg loss: 1.898507
Model: Resnet, Size: 90, Epoch 21
-------------------------------
Train Loss: 2.177117
Test Error: Accuracy: 40.0%, Avg loss: 1.816810
Model: Resnet, Size: 90, Epoch 22
-------------------------------
Train Loss: 1.963034
Test Error: Accuracy: 30.0%, Avg loss: 1.815655
Model: Resnet, Size: 90, Epoch 23
-------------------------------
Train Loss: 2.153277
Test Error: Accuracy: 29.7%, Avg loss: 1.879870
Model: Resnet, Size: 90, Epoch 24
-------------------------------
Train Loss: 2.148444
Test Error: Accuracy: 33.5%, Avg loss: 1.830304
Model: Resnet, Size: 90, Epoch 25
-------------------------------
Train Loss: 1.972238
Test Error: Accuracy: 36.0%, Avg loss: 1.848754
Model: Resnet, Size: 90, Epoch 26
-------------------------------
Train Loss: 1.934706
Test Error: Accuracy: 38.1%, Avg loss: 1.813560
Model: Resnet, Size: 90, Epoch 27
-------------------------------
Train Loss: 1.961267
Test Error: Accuracy: 38.5%, Avg loss: 1.817021
Model: Resnet, Size: 90, Epoch 28
-------------------------------
Train Loss: 2.152897
Test Error: Accuracy: 34.2%, Avg loss: 1.770912
Model: Resnet, Size: 90, Epoch 29
-------------------------------
Train Loss: 1.952006
Test Error: Accuracy: 40.1%, Avg loss: 1.768525
Model: Resnet, Size: 90, Epoch 30
-------------------------------
Train Loss: 1.991476
Test Error: Accuracy: 32.6%, Avg loss: 1.785120
|
python/coursera_python/deeplearning_ai_Andrew_Ng/1_NN_DL/work/Week 2/Python Basics with Numpy/Python Basics With Numpy v3.ipynb | ###Markdown
Python Basics with Numpy (optional assignment)Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need. **Instructions:**- You will be using Python 3.- Avoid using for-loops and while-loops, unless you are explicitly told to do so.- Do not modify the ( GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.- After coding your function, run the cell right below it to check if your result is correct.**After this assignment you will:**- Be able to use iPython Notebooks- Be able to use numpy functions and numpy matrix/vector operations- Understand the concept of "broadcasting"- Be able to vectorize codeLet's get started! About iPython Notebooks iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the START CODE HERE and END CODE HERE comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook. We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
###Code
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
###Output
test: Hello World
###Markdown
**Expected output**:test: Hello World **What you need to remember**:- Run your cells using SHIFT+ENTER (or "Run cell")- Write code in the designated areas using Python 3 only- Do not modify the code outside of the designated areas 1 - Building basic functions with numpy Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments. 1.1 - sigmoid function, np.exp() Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.**Reminder**:$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
###Code
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-x))
### END CODE HERE ###
return s
basic_sigmoid(3)
###Output
_____no_output_____
###Markdown
**Expected Output**: ** basic_sigmoid(3) ** 0.9525741268224334 Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
###Code
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
###Output
_____no_output_____
###Markdown
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
###Code
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
###Output
[ 2.71828183 7.3890561 20.08553692]
###Markdown
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
###Code
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
###Output
[4 5 6]
###Markdown
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html). You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.**Exercise**: Implement the sigmoid function using numpy. **Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\\end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\\end{pmatrix}\tag{1} $$
###Code
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
###Output
_____no_output_____
###Markdown
**Expected Output**: **sigmoid([1,2,3])** array([ 0.73105858, 0.88079708, 0.95257413]) 1.2 - Sigmoid gradientAs you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$You often code this function in two steps:1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.2. Compute $\sigma'(x) = s(1-s)$
###Code
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
###Output
sigmoid_derivative(x) = [ 0.19661193 0.10499359 0.04517666]
###Markdown
**Expected Output**: **sigmoid_derivative([1,2,3])** [ 0.19661193 0.10499359 0.04517666] 1.3 - Reshaping arrays Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html). - X.shape is used to get the shape (dimension) of a matrix/vector X. - X.reshape(...) is used to reshape X into some other dimension. For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:``` pythonv = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c```- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
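As an ungraded aside (separate from the graded `image2vector` function below): when only the total number of elements matters, numpy can also infer one dimension if you pass `-1` to `reshape`, which avoids hardcoding sizes. A minimal sketch with an illustrative array:

```python
import numpy as np

image = np.random.rand(3, 3, 2)   # illustrative (length, height, depth) array
v = image.reshape(-1, 1)          # numpy infers 3*3*2 = 18 rows
print(v.shape)                    # (18, 1)
```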
###Code
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2],1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
###Output
image2vector(image) = [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]
###Markdown
**Expected Output**: **image2vector(image)** [[ 0.67826139] [ 0.29380381] [ 0.90714982] [ 0.52835647] [ 0.4215251 ] [ 0.45017551] [ 0.92814219] [ 0.96677647] [ 0.85304703] [ 0.52351845] [ 0.19981397] [ 0.27417313] [ 0.60659855] [ 0.00533165] [ 0.10820313] [ 0.49978937] [ 0.34144279] [ 0.94630077]] 1.4 - Normalizing rowsAnother common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).For example, if $$x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
###Code
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x,axis=1,keepdims=True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
###Output
normalizeRows(x) = [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]
###Markdown
**Expected Output**: **normalizeRows(x)** [[ 0. 0.6 0.8 ] [ 0.13736056 0.82416338 0.54944226]] **Note**:In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now! 1.5 - Broadcasting and the softmax function A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html). **Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.**Instructions**:- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && ... && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && ... && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} $ - $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}\end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}\end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ ... \\ softmax\text{(last row of x)} \\\end{pmatrix} $$
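Before implementing softmax, here is a minimal, ungraded sketch of broadcasting on its own (the arrays are illustrative): numpy stretches each operand along any axis of size 1 so that arrays of different shapes can be combined element-wise.

```python
import numpy as np

a = np.array([[1.0], [2.0], [3.0]])       # shape (3, 1)
b = np.array([[10.0, 20.0, 30.0, 40.0]])  # shape (1, 4)

# a is broadcast across the 4 columns and b across the 3 rows,
# so the sum has shape (3, 4) with no explicit loop.
c = a + b
print(c.shape)   # (3, 4)
```

This is exactly the mechanism that lets you divide x_exp (shape (n, m)) by x_sum (shape (n, 1)) in the softmax implementation below.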
###Code
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp,axis=1,keepdims=True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
###Output
softmax(x) = [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]
###Markdown
**Expected Output**: **softmax(x)** [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04 1.21052389e-04] [ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04 8.01252314e-04]] **Note**:- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning. **What you need to remember:**- np.exp(x) works for any np.array x and applies the exponential function to every coordinate- the sigmoid function and its gradient- image2vector is commonly used in deep learning- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs. - numpy has efficient built-in functions- broadcasting is extremely useful 2) Vectorization In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
###Code
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
###Output
dot = 278
----- Computation time = 0.1594069999999448ms
outer = [[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[63 14 14 63 0 63 14 35 0 0 63 14 35 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[81 18 18 81 0 81 18 45 0 0 81 18 45 0 0]
[18 4 4 18 0 18 4 10 0 0 18 4 10 0 0]
[45 10 10 45 0 45 10 25 0 0 45 10 25 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
[ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]]
----- Computation time = 0.16285900000001519ms
elementwise multiplication = [81 4 10 0 0 63 10 0 0 0 81 4 25 0 0]
----- Computation time = 0.11784699999983772ms
gdot = [ 11.33517785 21.63781173 18.58226734]
----- Computation time = 0.5110849999998557ms
###Markdown
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger. **Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication. 2.1 Implement the L1 and L2 loss functions**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.**Reminder**:- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.- L1 loss is defined as:$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
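A minimal, ungraded sketch of the distinction between `np.dot()` and element-wise multiplication, using small illustrative vectors:

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])

print(np.dot(u, v))       # 32.0 -> inner product, a single scalar
print(np.multiply(u, v))  # [ 4. 10. 18.] -> element-wise product
print(u * v)              # [ 4. 10. 18.] -> same as np.multiply
```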
###Code
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
###Output
L1 = 1.1
###Markdown
**Expected Output**: **L1** 1.1 **Exercise**: Implement the numpy vectorized version of the L2 loss. There are several ways of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$. - L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
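As an ungraded sketch of two equivalent ways to write the sum of squares (the vector below is illustrative, not the graded test case):

```python
import numpy as np

diff = np.array([0.1, -0.2, 0.3])   # stands in for y - yhat

l2_dot = np.dot(diff, diff)         # sum of squares via the inner product
l2_sum = np.sum(np.square(diff))    # same value via element-wise squaring
assert np.isclose(l2_dot, l2_sum)
```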
###Code
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.dot(y-yhat,y-yhat))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
###Output
L2 = 0.43
|
joint/dev/ML_ISTA/Jere/Run_Models_SVHN_J.ipynb | ###Markdown
Model Baseline
###Code
Loss_test_0 = np.zeros((EPOCH,))
Acc_test_0 = np.zeros((EPOCH,))
Acc_train_0 = np.zeros((EPOCH,))
print('\n\t\t\t\t\tTraining Baseline\n')
T = 0
RHO = 0
model_0 = mds.ML_ISTA_NET(m1,m2,m3)
if cudaopt:
model_0.cuda()
optimizer = torch.optim.Adam(model_0.parameters(), lr = 0.0001, eps = EPS,weight_decay=1e-3)
bar = progressbar.ProgressBar()
for epoch in range(EPOCH):
bar.update((epoch+1)/EPOCH*100)
# train 1 epoch
model_0.train()
train_correct = 0
for step, (x, y) in enumerate(train_loader):
b_x = Variable(x) # batch of input images
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
encoded, scores = model_0(b_x,T,RHO)
train_pred = scores.data.max(1, keepdim=True)[1]
train_correct += train_pred.eq(b_y.data.view_as(train_pred)).long().cpu().sum()
loss = F.nll_loss(scores, b_y) # negative log likelihood
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
Acc_train_0[epoch] = 100 * float(train_correct) /float(len(train_loader.dataset))
# testing
model_0.eval()
correct = 0
test_loss = 0
for step, (x, y) in enumerate(test_loader):
b_x = Variable(x) # batch of input images
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
gamma, scores = model_0(b_x,T,RHO)
test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
pred = scores.data.max(1, keepdim=True)[1]
correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader.dataset)
Loss_test_0[epoch] = test_loss
Acc_test_0[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
# torch.save(model_0.state_dict(), 'cnn_model.pt')
###Output
100% (100 of 100) |#######################| Elapsed Time: 0:05:25 Time: 0:05:25
N/A% (0 of 100) | | Elapsed Time: 0:00:00 ETA: --:--:--
###Markdown
Joint ISTA
###Code
Loss_test_jista_r = np.zeros((EPOCH,))
Acc_test_jista_r = np.zeros((EPOCH,))
Acc_train_jista_r = np.zeros((EPOCH,))
print('\n\t\t\t\t\tTraining ML-JISTA \n')
model_jnn = mds.ML_JISTA_NET_J(m1,m2,m3)
if cudaopt:
model_jnn.cuda()
optimizer = torch.optim.Adam(model_jnn.parameters(), lr = 0.0001, eps = EPS,weight_decay=2e-1)
bar = progressbar.ProgressBar()
for epoch in range(EPOCH):
# print("Epoch: " + str(int(epoch)))
bar.update((epoch+1)/EPOCH*100)
# train 1 epoch
model_jnn.train()
train_correct = 0
for step, (x, y) in enumerate(train_loader):
b_x = Variable(x) # batch of input images
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
encoded, scores = model_jnn.forward_joint(b_x, b_y, T, RHO)
classes_aux = b_y
train_pred = scores.data.max(1, keepdim=True)[1]
train_correct += train_pred.eq(b_y.data.view_as(train_pred)).long().cpu().sum()
loss = F.nll_loss(scores, b_y) # negative log likelihood
optimizer.zero_grad() # clear gradients for this training step
loss.backward() # backpropagation, compute gradients
optimizer.step() # apply gradients
Acc_train_jista_r[epoch] = 100 * float(train_correct) /float(len(train_loader.dataset))
# testing
model_jnn.eval()
correct = 0
test_loss = 0
for step, (x, y) in enumerate(test_loader):
b_x = Variable(x) # batch of input images
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
gamma, scores = model_jnn.forward(b_x,T,RHO)
test_loss += F.nll_loss(scores, b_y, size_average=False).data[0]
pred = scores.data.max(1, keepdim=True)[1]
correct += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
test_loss /= len(test_loader.dataset)
Loss_test_jista_r[epoch] = test_loss
Acc_test_jista_r[epoch] = 100 * float(correct) /float(len(test_loader.dataset))
# print("Performance at epoch " + str(int(epoch)) + ": " + str(Acc_test_ista_r[epoch]))
# torch.save(model_jnn.state_dict(), 'mljista_model.pt')
# Displaying joint sparsity pattern:
plt.figure(figsize=(20,20))
plt.spy(np.concatenate((encoded[classes_aux==0,:].data.cpu().numpy().T,
encoded[classes_aux==1,:].data.cpu().numpy().T,
encoded[classes_aux==2,:].data.cpu().numpy().T,
encoded[classes_aux==3,:].data.cpu().numpy().T,
encoded[classes_aux==4,:].data.cpu().numpy().T,
encoded[classes_aux==5,:].data.cpu().numpy().T),
axis=1))
###Output
_____no_output_____
###Markdown
Plot accuracies
###Code
plt.style.use('default')
fig = plt.figure(figsize=(6,4))
plt.plot(Acc_train_0, ':b', linewidth = 2,label='ML-ISTA train')
plt.plot(Acc_test_0,'b',linewidth = 2,label='ML-ISTA test')
# plt.plot(Acc_test_ista_r, linewidth = 2,label = 'ML-ISTA')
plt.plot(Acc_train_jista_r,':r', linewidth = 2,label = 'ML-JISTA train')
plt.plot(Acc_test_jista_r,'r', linewidth = 2,label = 'ML-JISTA test')
# plt.plot(Acc_test_fista_r, linewidth = 2,label = 'ML-FISTA')
plt.grid('on')
plt.title('Train and Test Accuracy - 0 Unfoldings')
plt.legend()
plt.axis([0, EPOCH-1, 0, 101])
plt.show()
###Output
_____no_output_____
###Markdown
Robustness?
###Code
SIGMAS = np.arange(0,200,10)
Acc_0_Robustness = np.zeros((len(SIGMAS),1))
Acc_JS_Robustness = np.zeros((len(SIGMAS),1))
for s in range(len(SIGMAS)):
model_0.eval()
correct_0 = 0
correct_js = 0
for step, (x, y) in enumerate(test_loader):
x = x + SIGMAS[s]/255 * torch.randn(x.shape)
b_x = Variable(x) # batch of input images
b_y = Variable(y) # batch label
if cudaopt:
b_y, b_x = b_y.cuda(), b_x.cuda()
# baseline
gamma, scores = model_0.forward(b_x,T,RHO)
pred = scores.data.max(1, keepdim=True)[1]
correct_0 += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
# Joint Sparse
gamma, scores = model_jnn.forward(b_x,T,RHO)
pred = scores.data.max(1, keepdim=True)[1]
correct_js += pred.eq(b_y.data.view_as(pred)).long().cpu().sum()
Acc_0_Robustness[s] = 100 * float(correct_0) /float(len(test_loader.dataset))
Acc_JS_Robustness[s] = 100 * float(correct_js) /float(len(test_loader.dataset))
# Plots
plt.style.use('default')
fig = plt.figure(figsize=(6,4))
plt.plot(SIGMAS,Acc_0_Robustness, linewidth = 2,label='ML-ISTA')
plt.plot(SIGMAS,Acc_JS_Robustness, linewidth = 2,label='ML-JISTA')
plt.grid('on')
plt.title('Robustness - 0 Unfoldings')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Visualise global filters of baseline
###Code
cols = 10
rows = 10
indices = random.sample(range(m3), cols*rows)
dict1 = model_0.W3
atom1_dim = dict1.shape[3]
print(dict1.shape)
dict2 = F.conv_transpose2d(dict1, model_0.W2, stride=model_0.strd1, dilation=1)
atom2_dim = dict2.shape[3]
print(dict2.shape)
dict3 = F.conv_transpose2d(dict2, model_0.W1, stride=model_0.strd2, dilation=1)
atom3_dim = dict3.shape[3]
print(dict3.shape)
idx = 1
plt.figure(figsize=(10,10))
for j in range(rows):
for i in range(cols):
plt.subplot(cols,rows,idx)
plt.imshow(np.reshape(dict3.cpu().data.numpy()[idx-1], (atom3_dim, atom3_dim)), cmap='gray')
plt.axis('off')
idx+=1
plt.show()
###Output
_____no_output_____
###Markdown
Visualise global filters of JISTA
###Code
cols = 10
rows = 10
indices = random.sample(range(m3), cols*rows)
dict1 = model_jnn.W3
atom1_dim = dict1.shape[3]
print(dict1.shape)
dict2 = F.conv_transpose2d(dict1, model_jnn.W2, stride=model_jnn.strd1, dilation=1)
atom2_dim = dict2.shape[3]
print(dict2.shape)
dict3 = F.conv_transpose2d(dict2, model_jnn.W1, stride=model_jnn.strd2, dilation=1)
atom3_dim = dict3.shape[3]
print(dict3.shape)
idx = 1
plt.figure(figsize=(10,10))
for j in range(rows):
for i in range(cols):
plt.subplot(cols,rows,idx)
plt.imshow(np.reshape(dict3.cpu().data.numpy()[idx-1], (atom3_dim, atom3_dim)), cmap='gray')
plt.axis('off')
idx+=1
plt.show()
###Output
_____no_output_____ |
Conditional/Evaluation/Generation.ipynb | ###Markdown
MLE Generation
###Code
# location to store results
os.makedirs(config.task.name,exist_ok=True)
print(config.task.mle_weights_path)
mle_weights = torch.load(config.task.mle_weights_path)
model.generator.load_state_dict(mle_weights)
temperatures = (np.arange(10) + 1) / 10.0
for temperature in temperatures:
f_write = open(f"{config.task.name}/mle_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
Beam Search
###Code
# f_write = open(f"{config.task.name}/mle_beam_4.txt", "w")
# for batch in tqdm.tqdm(valid_dataloader):
# batch = collate_fn(batch)
# batch = move_to_device(batch, device)
# ground_truth = batch["target_text"]
# results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
# generated = []
# for i in range(len(results["tokens"])):
# res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
# generated.append(res)
# for gt, gen in zip(ground_truth, generated):
# f_write.write(json.dumps([gt, gen]))
# f_write.write("\n")
# f_write.close()
###Output
_____no_output_____
###Markdown
TextGAIL Generation
###Code
print(config.task.textgail_weights_path)
textgail_weights = torch.load(config.task.textgail_weights_path)
model.load_state_dict(textgail_weights)
for temperature in temperatures:
f_write = open(f"{config.task.name}/textgail_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
Beam Search
###Code
f_write = open(f"{config.task.name}/textgail_no_pretrain2_beam_4.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
MLE Generation
###Code
# location to store results
os.makedirs(config.task.name,exist_ok=True)
print(config.task.mle_weights_path)
mle_weights = torch.load(config.task.mle_weights_path)
model.generator.load_state_dict(mle_weights)
temperatures = (np.arange(10) + 1) / 10.0
for temperature in temperatures:
f_write = open(f"{config.task.name}/mle_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
100%|██████████| 189/189 [01:01<00:00, 3.05it/s]
100%|██████████| 189/189 [01:11<00:00, 2.64it/s]
100%|██████████| 189/189 [01:18<00:00, 2.41it/s]
100%|██████████| 189/189 [01:45<00:00, 1.79it/s]
100%|██████████| 189/189 [02:07<00:00, 1.48it/s]
100%|██████████| 189/189 [02:29<00:00, 1.27it/s]
52%|█████▏ | 99/189 [01:32<01:23, 1.07it/s]
###Markdown
Beam Search
###Code
# f_write = open(f"{config.task.name}/mle_beam_4.txt", "w")
# for batch in tqdm.tqdm(valid_dataloader):
# batch = collate_fn(batch)
# batch = move_to_device(batch, device)
# ground_truth = batch["target_text"]
# results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
# generated = []
# for i in range(len(results["tokens"])):
# res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
# generated.append(res)
# for gt, gen in zip(ground_truth, generated):
# f_write.write(json.dumps([gt, gen]))
# f_write.write("\n")
# f_write.close()
###Output
_____no_output_____
###Markdown
TextGAIL Generation
###Code
print(config.task.textgail_weights_path)
textgail_weights = torch.load(config.task.textgail_weights_path)
model.load_state_dict(textgail_weights)
for temperature in temperatures:
f_write = open(f"{config.task.name}/textgail_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
Beam Search
###Code
f_write = open(f"{config.task.name}/textgail_no_pretrain2_beam_4.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
hydra._internal.GlobalHydra().get_state().clear()
import hydra
from hydra.experimental import initialize, compose
import numpy as np
import tqdm
import json
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import RobertaTokenizer
from omegaconf import DictConfig
from torchfly.text.decode import TransformerDecoder
from torchfly.common import set_random_seed, move_to_device
from configure_dataloader import DataLoaderHandler
from model import Generator, TextGAILModel
import logging
import os
import pathlib
random_seed = 1
set_random_seed(random_seed)
hydra.experimental.initialize("config")
# config = hydra.experimental.compose("config.yaml")
hydra.core.global_hydra.GlobalHydra().instance().clear()
with initialize(config_path='config'):
config = compose(config_name="config")
# cfg = hydra.experimental.compose(config_file='config.yaml', overrides=[])
print(config.pretty())
config
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
dataloader_handler = DataLoaderHandler(config)
valid_dataloader = dataloader_handler.test_dataloader(config)
collate_fn = valid_dataloader.dataset.collate_fn
device = torch.device("cuda")
model = TextGAILModel(config)
model = model.cuda()
decoder = TransformerDecoder(config.decode)
decoder.register_generator(model.generator.decoder)
decoder.register_tokenizer(tokenizer)
decoder.prepare_model_inputs_for_generation = model.generator.prepare_model_inputs_for_generation
###Output
_____no_output_____
###Markdown
MLE Generation
###Code
# location to store results
os.makedirs(config.task.name,exist_ok=True)
print(config.task.mle_weights_path)
mle_weights = torch.load(config.task.mle_weights_path)
model.generator.load_state_dict(mle_weights)
temperatures = (np.arange(10) + 1) / 10.0
for temperature in temperatures:
f_write = open(f"{config.task.name}/mle_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
Beam Search
###Code
# f_write = open(f"{config.task.name}/mle_beam_4.txt", "w")
# for batch in tqdm.tqdm(valid_dataloader):
# batch = collate_fn(batch)
# batch = move_to_device(batch, device)
# ground_truth = batch["target_text"]
# results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
# generated = []
# for i in range(len(results["tokens"])):
# res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
# generated.append(res)
# for gt, gen in zip(ground_truth, generated):
# f_write.write(json.dumps([gt, gen]))
# f_write.write("\n")
# f_write.close()
###Output
_____no_output_____
###Markdown
TextGAIL Generation
###Code
print(config.task.textgail_weights_path)
textgail_weights = torch.load(config.task.textgail_weights_path)
model.load_state_dict(textgail_weights)
for temperature in temperatures:
f_write = open(f"{config.task.name}/textgail_{temperature}_{random_seed}.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], temperature=temperature)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____
###Markdown
Beam Search
###Code
f_write = open(f"{config.task.name}/textgail_no_pretrain2_beam_4.txt", "w")
for batch in tqdm.tqdm(valid_dataloader):
batch = collate_fn(batch)
batch = move_to_device(batch, device)
ground_truth = batch["target_text"]
results = decoder.generate(batch["source_token_ids"], do_sample=False, num_beams=4)
generated = []
for i in range(len(results["tokens"])):
res = tokenizer.decode(results["tokens"][i][0][1:-1].tolist())
generated.append(res)
for gt, gen in zip(ground_truth, generated):
f_write.write(json.dumps([gt, gen]))
f_write.write("\n")
f_write.close()
###Output
_____no_output_____ |