run_local/07_intro.ipynb
###Markdown
Parameterisation

Once the potential energy function to be used for a particular interaction has been determined, it is then necessary to **parameterise** the function. Consider, for example, the parameterisation of the Lennard-Jones potential model. In this model it is necessary to determine two parameters, $\sigma$ and $\varepsilon$: $\sigma$ is the distance at which the potential energy between the two particles is zero, and $-\varepsilon$ is the potential energy at the equilibrium separation. Values for each of these must be determined for each pair of atoms in our system.

How to parameterise a potential model?

The purpose of parameterisation is to develop a potential energy model that is able to **accurately reproduce** the relative energy of a given interaction. This may also be thought of as the model that reproduces the structure accurately. Parameters should really be obtained by optimising them with respect to a **more accurate** technique than classical simulation. Commonly, this involves either experimental measurements, e.g. X-ray crystallography, or quantum mechanical calculations; we will focus on the latter. More can be found out about quantum mechanical calculations in the textbooks mentioned in the introduction (in particular Jeremy Harvey's Computational Chemistry Primer [[1](references)]). However, for our current purposes we only need to remember that quantum calculations are more accurate than classical simulations.

Parameterising a Lennard-Jones interaction

We will stick with the example of a Lennard-Jones interaction; however, the arguments and methods discussed are **extensible to all different interaction types**. To generate the potential energy model between two particles of argon, we could conduct quantum mechanical calculations at a range of interatomic separations, from 3.5 to 6.5 Å, finding the energy between the two particles at each separation. The Python code below plots the energy against distance obtained from such a quantum mechanical calculation.
###Code
import matplotlib.pyplot as plt
import numpy as np
r = np.arange(3.5, 7., 0.5)
energy = np.array([0.1374, -0.0195, -0.0218,
-0.0133, -0.0076, -0.0043,
-0.0025])
energy_err = energy * 0.1
plt.errorbar(r, energy, yerr=energy_err,
marker='o', ls='')
plt.xlabel(r'$r$/Å')
plt.ylabel(r'$E$/eV')
plt.show()
###Output
_____no_output_____
###Markdown
We can already see that the general shape of the curve is similar to a Lennard-Jones (or Buckingham) interaction. There is a well near the **equilibrium bond distance** and a steep incline as the particles come close together. It is then possible to fit a Lennard-Jones function to this data; the Python code below does so using a simple least-squares fit.
###Code
from scipy.optimize import curve_fit
def lj_energy(r, epsilon, sigma):
"""
Implementation of the Lennard-Jones potential
to calculate the energy of the interaction.
Parameters
----------
r: float
Distance between two particles (Å)
epsilon: float
Potential energy at the equilibrium bond
length (eV)
sigma: float
Distance at which the potential energy is
zero (Å)
Returns
-------
float
Energy of the van der Waals interaction (eV)
"""
return 4 * epsilon * np.power(
sigma / r, 12) - 4 * epsilon * np.power(
sigma / r, 6)
popt, pcov = curve_fit(lj_energy, r, energy,
sigma=energy_err)
print('Best value for ε = {:.2e} eV'.format(
popt[0]))
print('Best value for σ = {:.2f} Å'.format(
popt[1]))
###Output
Best value for ε = 2.02e-02 eV
Best value for σ = 3.81 Å
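###Markdown
As a quick sanity check (a minimal sketch, not part of the original analysis), the Lennard-Jones form has its minimum at $r_{\mathrm{min}} = 2^{1/6}\sigma$ with depth $-\varepsilon$, so we can confirm that the fitted parameters give a sensible well position and depth.
###Code
# Position and depth of the potential minimum implied by the fitted parameters
r_min = np.power(2, 1 / 6) * popt[1]
print('r_min = {:.2f} Å'.format(r_min))
print('E(r_min) = {:.2e} eV (expected -ε = {:.2e} eV)'.format(
    lj_energy(r_min, popt[0], popt[1]), -popt[0]))
###Output
_____no_output_____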
###Markdown
These values are similar to those from Rahman [[2](References)]. However, the agreement can be assessed more easily by plotting the Lennard-Jones function with the fitted values together with the quantum mechanical data. The fitted curve agrees with many of the data points, although it is clear that at short distances it would be necessary to perform further quantum mechanical calculations.
###Code
plt.errorbar(r, energy, yerr=energy_err, marker='o', ls='')
x = np.linspace(3.5, 7, 1000)
plt.plot(x, lj_energy(x, popt[0], popt[1]))
plt.xlabel(r'$r$/Å')
plt.ylabel(r'$E$/eV')
plt.show()
###Output
_____no_output_____
data/jupyter/09-2-Ingress.ipynb
###Markdown
Exercise: 09-2 Ingress
-------------------
Source: the book Microservices Rezepte (Microservices Recipes)
- - -
The example consists of three microservices: **Order**, **Customer** and **Catalog**. **Order** uses **Catalog** and **Customer** via their REST interfaces. In addition, each microservice serves a few HTML pages.

Instead of the Apache web server, which is configured as a [reverse proxy](https://github.com/ewolff/microservice-kubernetes/blob/master/microservice-kubernetes-demo/apache/000-default.conf), the Kubernetes Ingress resource is used.
###Code
# ! kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/apache.yaml (obsolete!)
! kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/catalog.yaml
! kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/customer.yaml
! kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/order.yaml
! kubectl apply -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/postgres.yaml
###Output
_____no_output_____
###Markdown
After starting the services, we create the Ingress resources:
###Code
%%bash
cat <<%EOF% | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /\$2
name: order
namespace: ms-kubernetes
labels:
app: order
spec:
rules:
- http:
paths:
- path: /order/
pathType: Prefix
backend:
service:
name: order
port:
number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: catalog
namespace: ms-kubernetes
labels:
app: catalog
spec:
rules:
- http:
paths:
- path: /catalog
pathType: Prefix
backend:
service:
name: catalog
port:
number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: customer
namespace: ms-kubernetes
labels:
app: customer
spec:
rules:
- http:
paths:
- path: /customer
pathType: Prefix
backend:
service:
name: customer
port:
number: 8080
%EOF%
###Output
_____no_output_____
###Markdown
Checking the created resources
###Code
! kubectl get all,ingress -n ms-kubernetes
###Output
_____no_output_____
###Markdown
We check that everything works correctly using `curl` (on Windows: `Invoke-WebRequest`).
###Code
%%bash
export SERVER=$(kubectl config view -o=jsonpath='{ .clusters[0].cluster.server }' | sed -e "s/6443/30443/")
echo "Kunden ${SERVER}/customer"
curl -k ${SERVER}/customer
echo "Produkte ${SERVER}/catalog"
curl -k ${SERVER}/catalog
# echo "Bestellung ${SERVER}/order"
# curl -k ${SERVER}/order/<Order-id>
###Output
_____no_output_____
###Markdown
***
Ingress Service (nginx Server)

In the current environment, an nginx server provides the Ingress functionality. This server runs as pods in the namespace ingress-nginx. We can print this server's configuration file:
###Code
! kubectl exec deployments/nginx-ingress-controller -n ingress-nginx -- cat /etc/nginx/nginx.conf | grep location
###Output
_____no_output_____
###Markdown
For testing, the `kubectl apply -f -` that creates the Ingress resources can be replaced by `kubectl delete -f -`, and the command above can then be run again. The `location` entries for `customer`, `catalog` and `order` should then no longer be present.
- - -
Clean up
###Code
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/apache.yaml
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/catalog.yaml
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/customer.yaml
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/hystrix.yaml
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/order.yaml
! kubectl delete -f https://raw.githubusercontent.com/mc-b/misegr/master/ewolff/ms-kubernetes/postgres.yaml
###Output
_____no_output_____
titanic-machine-learning-from-disaster/titanic-v1.ipynb
###Markdown
Titanic: Machine Learning from Disaster

Import Dependencies
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from sklearn.model_selection import train_test_split
from jupyterthemes import jtplot
import csv
jtplot.style()
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis and Data Cleaning
###Code
data = pd.read_csv('train.csv')
# test_data = pd.read_csv('test.csv')
data.head()
#check total null values in each column
print(data.isnull().sum())
# plot of survival
f, ax = plt.subplots(1,figsize=(10,8))
data['Survived'].value_counts().plot.pie(autopct='%1.1f%%',ax=ax);
data['Survived'].value_counts()
# see survival and sex relation
data.groupby(['Sex','Survived'])['Survived'].count().plot(kind='bar');
pd.crosstab(data.Pclass, data.Survived, margins=True)
pd.crosstab([data.Sex, data.Survived], data.Pclass,margins=True)
print('Oldest Passenger was of:',data['Age'].max(),'Years')
print('Youngest Passenger was of:',data['Age'].min(),'Years')
print('Average Age on the ship:',data['Age'].mean(),'Years')
data['Initial'] = data.Name.str.extract('([A-Za-z]+)\.', expand=True)
data.head()
data.groupby('Initial')['Name'].count()
# there are some errors in data, let's fix them
data['Initial'].replace(['Mlle', 'Mme', 'Ms', 'Dr','Major','Lady','Countess','Jonkheer','Col','Rev','Capt','Sir','Don'],['Miss', 'Miss', 'Miss','Mr','Mr','Mrs','Mrs','Other','Other','Other','Mr','Mr','Mr'], inplace=True)
data.groupby('Initial')['Age'].mean()
## Assigning the NaN Values with the Ceil values of the mean ages
data.loc[(data.Age.isnull())&(data.Initial=='Mr'),'Age']=33
data.loc[(data.Age.isnull())&(data.Initial=='Mrs'),'Age']=36
data.loc[(data.Age.isnull())&(data.Initial=='Master'),'Age']=5
data.loc[(data.Age.isnull())&(data.Initial=='Miss'),'Age']=22
data.loc[(data.Age.isnull())&(data.Initial=='Other'),'Age']=46
data.Age.isnull().any() #check for nan values in age
data['Embarked'].fillna('S',inplace=True)
data['Age_band']=0
data.loc[data['Age']<=16,'Age_band']=0
data.loc[(data['Age']>16)&(data['Age']<=32),'Age_band']=1
data.loc[(data['Age']>32)&(data['Age']<=48),'Age_band']=2
data.loc[(data['Age']>48)&(data['Age']<=64),'Age_band']=3
data.loc[data['Age']>64,'Age_band']=4
data['Sex'].replace(['male','female'],[0,1],inplace=True)
data['Embarked'].replace(['S','C','Q'],[0,1,2],inplace=True)
data['Initial'].replace(['Mr','Mrs','Miss','Master','Other'],[0,1,2,3,4],inplace=True)
data['Age_band'].value_counts().to_frame()
data.head(2)
###Output
_____no_output_____
###Markdown
Predictive Modeling
###Code
train, test = train_test_split(data, test_size=0.3,random_state=0,stratify=data['Survived'])
X_train = train[['Pclass', 'Sex', 'Age_band', 'Embarked', 'Initial']].values
X_train = X_train.T.astype(float)
X_test = test[['Pclass', 'Sex', 'Age_band', 'Embarked', 'Initial']].values
X_test = X_test.T.astype(float)
Y_train = train['Survived'].values
Y_train = Y_train.reshape(1, Y_train.shape[0])
Y_test = test['Survived'].values
Y_test = Y_test.reshape(1, Y_test.shape[0])
print(X_train.shape, X_test.shape)
print(Y_train.shape, Y_test.shape)
###Output
_____no_output_____
###Markdown
DNN
###Code
def Initialize_parameters_deep(layer_dims):
np.random.seed(3)
parameters = {}
for l in range(1, len(layer_dims)):
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l-1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
return parameters
def sigmoid(Z):
return 1 / (1 + np.exp(-1 * Z))
def relu(Z):
return np.maximum(0, Z)
def linear_activation_forward(A_prev, W, b, activation):
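    # Single forward step: linear transform Z = W.A_prev + b followed by the chosen
    # activation; both the linear inputs and Z are cached for use in backpropagation.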
Z = np.dot(W, A_prev) + b
linear_cache = (A_prev, W, b)
if activation == 'sigmoid':
A = sigmoid(Z)
elif activation == 'relu':
A = relu(Z)
activation_cache = Z
cache = (linear_cache, activation_cache)
return A, cache
def forward_propogation(X, parameters):
A_prev = X
L = len(parameters)//2
caches = []
for l in range(1, L):
Wl = parameters['W' + str(l)]
bl = parameters['b' + str(l)]
A_prev, cache = linear_activation_forward(A_prev, Wl, bl, 'relu')
caches.append(cache)
AL, cache = linear_activation_forward(A_prev, parameters['W' + str(L)], parameters['b' + str(L)], 'sigmoid')
caches.append(cache)
return AL, caches
# Note: np.multiply (and X * Y) is element-wise multiplication, unlike np.dot (matrix product)
def compute_cost(AL, Y):
m = Y.shape[1]
cost = -1 / m * np.sum((Y * np.log(AL) + ((1 - Y) * np.log(1 - AL))))
cost = np.squeeze(cost)
return cost
def sigmoid_backward(dA, activation_cache):
Z = activation_cache
A = sigmoid(Z)
dZ = dA * A * (1 - A)
return dZ
def relu_backward(dA, activation_cache):
Z = activation_cache
dZ = np.array(dA, copy=True)
dZ[Z <= 0] = 0
return dZ
def linear_activation_backward(dA, cache, activation):
linear_cache, activation_cache = cache
if activation == 'sigmoid':
dZ = sigmoid_backward(dA, activation_cache)
elif activation == 'relu':
dZ = relu_backward(dA, activation_cache)
A_prev, W, b = linear_cache
m = A_prev.shape[1]
dW = 1 / m * np.dot(dZ, A_prev.T)
db = 1 / m * np.sum(dZ, axis=1, keepdims=True)
dA_prev = np.dot(W.T, dZ)
return dA_prev, dW, db
def backward_propogation(AL, Y, caches):
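    # Backward pass: start from the gradient of the cross-entropy loss w.r.t. the
    # sigmoid output (dAL), then walk the cached layers in reverse, collecting
    # dW and db for every layer.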
L = len(caches)
grads = {}
dAL = - np.divide(Y, AL) + np.divide(1 - Y, 1 - AL)
grads['dA' + str(L)], grads['dW' + str(L)], grads['db' + str(L)] = linear_activation_backward(dAL, caches[L-1], 'sigmoid')
A_prev = AL
for l in range(L-1, 0, -1):
cache = caches[l-1]
dA = grads['dA' + str(l+1)]
dA_prev, dW, db = linear_activation_backward(dA, cache, 'relu')
grads['dA' + str(l)] = dA_prev
grads['dW' + str(l)] = dW
grads['db' + str(l)] = db
return grads
def update_parameters(parameters, grads, learning_rate):
for l in range(1, len(parameters)//2 + 1 ):
parameters['W' + str(l)] -= learning_rate * grads['dW' + str(l)]
parameters['b' + str(l)] -= learning_rate * grads['db' + str(l)]
return parameters
def the_model(X, Y, layers_dims, learning_rate, num_iterations, print_cost=True):
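    # Full training loop: initialise the parameters, then repeatedly run the forward
    # pass, compute the cost, backpropagate, and apply a gradient-descent update.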
np.random.seed(1)
costs = []
parameters = Initialize_parameters_deep(layers_dims)
# parameters = np.load('parameters.npy').item()
for i in range(num_iterations+1):
AL, caches = forward_propogation(X, parameters)
cost = compute_cost(AL, Y)
grads = backward_propogation(AL, Y, caches)
parameters = update_parameters(parameters, grads, learning_rate)
if (i%50000==0):
print('Cost at iteration %s is %s' %(i, cost))
if(i%10000==0):
costs.append(cost)
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()
np.save("parameters", parameters)
return parameters
def predictAccuracy(X, Y, parameters):
m = X.shape[1]
p = np.zeros((1, m))
probas, caches = forward_propogation(X, parameters)
# convert probas to 0/1 predictions
for i in range(0, probas.shape[1]):
if probas[0, i] > 0.4:
p[0, i] = 1
else:
p[0, i] = 0
print("Accuracy: " + str(np.sum((p == Y)) / m))
return np.squeeze(p)
%%time
layers_dims = [5, 10, 1]
parameters = the_model(X_train, Y_train, layers_dims, learning_rate=0.001, num_iterations=200000, print_cost=True)
%%time
prob = predictAccuracy(X_train, Y_train, parameters)
%%time
prob = predictAccuracy(X_test, Y_test, parameters)
np.save("parameters-v1", parameters)
###Output
_____no_output_____
###Markdown
Evaluation Time!

Test Data cleaning
###Code
test_data = pd.read_csv('test.csv')
test_data.isnull().sum()
test_data['Initial'] = test_data.Name.str.extract('([A-Za-z]+)\.', expand=True)
test_data.head()
test_data.groupby('Initial')['Age'].count()
test_data['Initial'].replace(['Col', 'Dona','Dr', 'Ms', 'Rev'], ['Other', 'Miss', 'Mr', 'Miss', 'Other'], inplace=True)
test_data.groupby('Initial')['Age'].count()
test_data.groupby('Initial')['Age'].mean()
## Assigning the NaN Values with the Ceil values of the mean ages
test_data.loc[(test_data.Age.isnull())&(test_data.Initial=='Mr'),'Age']=33
test_data.loc[(test_data.Age.isnull())&(test_data.Initial=='Mrs'),'Age']=39
test_data.loc[(test_data.Age.isnull())&(test_data.Initial=='Master'),'Age']=7
test_data.loc[(test_data.Age.isnull())&(test_data.Initial=='Miss'),'Age']=22
test_data.loc[(test_data.Age.isnull())&(test_data.Initial=='Other'),'Age']=43
test_data['Age_band']=0
test_data.loc[test_data['Age']<=16,'Age_band']=0
test_data.loc[(test_data['Age']>16)&(test_data['Age']<=32),'Age_band']=1
test_data.loc[(test_data['Age']>32)&(test_data['Age']<=48),'Age_band']=2
test_data.loc[(test_data['Age']>48)&(test_data['Age']<=64),'Age_band']=3
test_data.loc[test_data['Age']>64,'Age_band']=4
data['Age_band'].value_counts().to_frame()
test_data['Sex'].replace(['male','female'],[0,1],inplace=True)
test_data['Embarked'].replace(['S','C','Q'],[0,1,2],inplace=True)
test_data['Initial'].replace(['Mr','Mrs','Miss','Master','Other'],[0,1,2,3,4],inplace=True)
###Output
_____no_output_____
###Markdown
Run Model on Test data
###Code
X = test_data[['Pclass', 'Sex', 'Age_band', 'Embarked', 'Initial']].values
X = X.T.astype(float)
X.shape
def predict(X, parameters):
m = X.shape[1]
p = np.zeros((1, m))
probas, caches = forward_propogation(X, parameters)
for i in range(0, probas.shape[1]):
if probas[0, i] > 0.4:
p[0, i] = 1
else:
p[0, i] = 0
return np.squeeze(p)
Y = predict(X, parameters)
###Output
_____no_output_____
###Markdown
Generate csv file for submission
###Code
with open('submission-v1.csv', 'w') as file:
writer = csv.writer(file)
writer.writerow(['PassengerId', 'Survived'])
for index, row in test_data.iterrows():
writer.writerow([row['PassengerId'], int(Y[index])])
###Output
_____no_output_____
JNotebooks/tutorial15_generative_adversarial_networks.ipynb
###Markdown
Generative Adversarial Networks

In this tutorial, we will cover a simple example of a Generative Adversarial Network (GAN), where the goal is to create synthetic digit images from uniform random noise input.

The learning goals of this tutorial are:
- Introduce GANs using a simple example;
- Illustrate how to define a simple GAN using TensorFlow and Keras.
###Code
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
# Specific to my computer
physical_devices = tf.config.experimental.list_physical_devices('GPU')
tf.config.experimental.set_memory_growth(physical_devices[0], True)
def generator(input_dim = (100,), dim = 7, nchannels = 1, dropout = 0.25, kshape = (5,5)):
random_input = tf.keras.layers.Input(input_dim)
x1 = tf.keras.layers.Dense(dim*dim*nchannels, activation = 'relu')(random_input)
x2 = tf.keras.layers.BatchNormalization(momentum = 0.9)(x1)
x3 = tf.keras.layers.Reshape((dim,dim,nchannels))(x2)
x4 = tf.keras.layers.Dropout(dropout)(x3)
x5 = tf.keras.layers.UpSampling2D((2,2))(x4)
x6 = tf.keras.layers.Conv2D(200, kshape, padding='same', activation = 'relu')(x5)
x7 = tf.keras.layers.BatchNormalization(momentum=0.9)(x6)
x8 = tf.keras.layers.Conv2D(200, kshape, padding='same', activation = 'relu')(x7)
x9 = tf.keras.layers.BatchNormalization(momentum=0.9)(x8)
x10 = tf.keras.layers.UpSampling2D((2,2))(x9)
x11 = tf.keras.layers.Conv2D(100, kshape, padding='same', activation = 'relu')(x10)
x12 = tf.keras.layers.BatchNormalization(momentum=0.9)(x11)
x13 = tf.keras.layers.Conv2D(100, kshape, padding='same', activation = 'relu')(x12)
x14 = tf.keras.layers.BatchNormalization(momentum=0.9)(x13)
x15 = tf.keras.layers.Conv2D(50, kshape, padding='same', activation = 'relu')(x14)
x16 = tf.keras.layers.BatchNormalization(momentum=0.9)(x15)
x17 = tf.keras.layers.Conv2D(30, kshape, padding='same', activation = 'relu')(x16)
x18 = tf.keras.layers.Conv2D(1, kshape, padding='same', activation = 'sigmoid')(x17)
model = tf.keras.models.Model(inputs=random_input, outputs=x18)
return model
def discriminator(ishape = (28,28,1), dropout = 0.25, kshape = (3,3)):
model_input = tf.keras.layers.Input(shape = ishape)
x1 = tf.keras.layers.Conv2D(48, (3,3), padding='same', activation='relu')(model_input)
x2 = tf.keras.layers.Conv2D(48, (3,3), padding='same', activation='relu')(x1)
x3 = tf.keras.layers.Dropout(0.25)(x2)
x4 = tf.keras.layers.MaxPool2D((2,2))(x3)
x5 = tf.keras.layers.Conv2D(96, (3,3), padding='same', activation='relu')(x4)
x6 = tf.keras.layers.Conv2D(96, (3,3), padding='same', activation='relu')(x5)
x7 = tf.keras.layers.Dropout(0.25)(x6)
flat = tf.keras.layers.Flatten()(x7)
out = tf.keras.layers.Dense(1, activation = 'sigmoid')(flat)
model = tf.keras.models.Model(inputs = model_input, outputs = out)
return model
# Defining the discriminator model
optimizer_d = tf.keras.optimizers.RMSprop(lr = 0.0008, clipvalue = 1.0, decay = 6e-8)
discriminator_model = discriminator()
discriminator_model.compile(loss = "binary_crossentropy", optimizer = optimizer_d, metrics = ["accuracy"])
optimizer_gan= tf.keras.optimizers.RMSprop(lr = 0.0004, clipvalue = 1.0, decay = 3e-8)
generator_model = generator()
random_input = tf.keras.layers.Input((100,))
discriminator_model.trainable = False
out = discriminator_model(generator_model(random_input))
gan_model = tf.keras.models.Model(inputs = random_input, outputs = out)
gan_model.compile(loss = "binary_crossentropy", optimizer = optimizer_gan, metrics = ["accuracy"])
gan_model.summary()
generator_model.summary()
discriminator_model.summary()
(X_dev,_),_ = tf.keras.datasets.mnist.load_data()
indexes = np.arange(X_dev.shape[0], dtype = int)
np.random.shuffle(indexes)
X_dev = X_dev[indexes]
X_dev = X_dev/255
X_dev = X_dev[:,:,:,np.newaxis]
batch_size = 96
a_loss_history = []
d_loss_history = []
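# Warm-up: pre-train the discriminator for 20 batches on a mix of real MNIST
# images (label 1) and generator fakes (label 0) before adversarial training starts.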
for ii in range(20):
true_images = X_dev[np.random.randint(0,X_dev.shape[0], size = batch_size)]
noise = np.random.uniform(-1,1, size = [batch_size, 100])
fake_images = generator_model.predict(noise)
x = np.concatenate((true_images,fake_images), axis = 0)
y = np.ones([2*batch_size,1])
y[batch_size:,:] = 0
discriminator_model.train_on_batch(x,y)
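# Adversarial training: each iteration first updates the discriminator on a real+fake
# batch, then updates the generator through the frozen discriminator by asking it to
# produce fakes that get labelled as real (target 1).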
for ii in range(20000):
true_images = X_dev[np.random.randint(0,X_dev.shape[0], size = batch_size)]
noise = np.random.uniform(-1,1, size = [batch_size, 100])
fake_images = generator_model.predict(noise)
x = np.concatenate((true_images,fake_images), axis = 0)
y = np.ones([2*batch_size,1])
y[batch_size:,:] = 0
d_loss_history.append(discriminator_model.train_on_batch(x,y))
y = np.ones([batch_size,1])
noise = np.random.uniform(-1,1, size = [batch_size, 100])
a_loss_history.append(gan_model.train_on_batch(noise,y))
a_loss_history = np.array(a_loss_history)
d_loss_history = np.array(d_loss_history)
plt.plot()
plt.plot(a_loss_history[:,1], label = "GAN loss")
plt.plot(d_loss_history[:,1], label = "Discriminator loss")
plt.legend()
plt.show()
noise = np.random.uniform(-1,1, size = [10, 100])
fake_images = generator_model.predict(noise)
for ii in range(10):
plt.figure()
plt.imshow(fake_images[ii,:,:,0], cmap = "gray")
plt.show()
###Output
_____no_output_____
code/notebooks/synthetic_tests/model_multibody_shallow-seated/generating_grid.ipynb
###Markdown
Generating observation points

Notebook to build a dictionary with the properties of a set of observation points.

Import libraries
###Code
%matplotlib inline
import string as st
import sys
import numpy as np
import matplotlib.pyplot as plt
import cPickle as pickle
import datetime
from fatiando.gridder import regular
from IPython.display import Markdown as md
from IPython.display import display as dp
notebook_name = 'generating_grid.ipynb'
###Output
_____no_output_____
###Markdown
Importing My package
###Code
dir_modules = '../../../mypackage'
sys.path.append(dir_modules)
import auxiliary_functions as func
###Output
_____no_output_____
###Markdown
List of saved files
###Code
saved_files = []
###Output
_____no_output_____
###Markdown
2D grid of points

Regular grid
###Code
regular_grid = dict()
regular_grid['area'] = [-6500.,5500.,-5500.,6500.]
regular_grid['Nx'],regular_grid['Ny'] = 25, 25
regular_grid['shape'] = (regular_grid['Nx'],regular_grid['Ny'])
regular_grid['z_obs'] = 0.
regular_grid['N'] = regular_grid['Nx']*regular_grid['Ny']
regular_grid['x'],regular_grid['y'],regular_grid['z'] = regular(regular_grid['area'],regular_grid['shape'],regular_grid['z_obs'])
###Output
_____no_output_____
###Markdown
Regular grid spacing
###Code
regular_grid['dx'] = (regular_grid['area'][1] - regular_grid['area'][0])/(regular_grid['Nx']-1.)
print 'dx = %.1f m' % regular_grid['dx']
regular_grid['dy'] = (regular_grid['area'][3] - regular_grid['area'][2])/(regular_grid['Ny']-1)
print 'dy = %.1f m' % regular_grid['dy']
###Output
dy = 500.0 m
###Markdown
Visualization of the observation points
###Code
title_font = 20
bottom_font = 18
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(9,9), tight_layout=True)
plt.title('Regular grid (%.0f,%.0f) ' % (regular_grid['Nx'],regular_grid['Ny']), fontsize=title_font)
plt.plot(regular_grid['y'], regular_grid['x'],'k.')
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.ylim(np.min(regular_grid['x']),np.max(regular_grid['x']))
plt.xlim(np.min(regular_grid['y']),np.max(regular_grid['y']))
plt.tick_params(labelsize=15)
file_name = 'figs/regular/grid_regular'
plt.savefig(file_name+'.png',dpi=300)
saved_files.append(file_name+'.png')
plt.show()
###Output
/home/andrelreis/anaconda3/envs/py2/lib/python2.7/site-packages/matplotlib/figure.py:2299: UserWarning: This figure includes Axes that are not compatible with tight_layout, so results might be incorrect.
warnings.warn("This figure includes Axes that are not compatible "
###Markdown
Generating .pickle file
###Code
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
regular_grid['metadata'] = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
file_name = 'data/regular_grid.pickle'
with open(file_name, 'w') as f:
pickle.dump(regular_grid, f)
saved_files.append(file_name)
###Output
_____no_output_____
###Markdown
Points simulating an airborne survey
###Code
airborne_survey = dict()
airborne_survey['area'] = [-6500.,5500.,-5500.,6500.]
airborne_survey['Nx'],airborne_survey['Ny'] = 49, 25
airborne_survey['shape'] = (airborne_survey['Nx'],airborne_survey['Ny'])
airborne_survey['z_obs'] = -100.
airborne_survey['N'] = airborne_survey['Nx']*airborne_survey['Ny']
airborne_survey['x'],airborne_survey['y'],airborne_survey['z'] = regular(airborne_survey['area'],airborne_survey['shape'],airborne_survey['z_obs'])
###Output
_____no_output_____
###Markdown
Airborne survey spacing
###Code
airborne_survey['dx'] = (airborne_survey['area'][1] - airborne_survey['area'][0])/(airborne_survey['Nx']-1.)
airborne_survey['dy'] = (airborne_survey['area'][3] - airborne_survey['area'][2])/(airborne_survey['Ny']-1)
print 'dx = %.1f m' % airborne_survey['dx']
print 'dy = %.1f m' % airborne_survey['dy']
print 'Number of data : %.1f ' % airborne_survey['N']
###Output
dx = 250.0 m
dy = 500.0 m
Number of data : 1225.0
###Markdown
Visualization of the observation points
###Code
title_font = 20
bottom_font = 18
saturation_factor = 1.
plt.close('all')
plt.figure(figsize=(9,9), tight_layout=True)
plt.title('Airborne lines(%.0f,%.0f) ' % (airborne_survey['Nx'],airborne_survey['Ny']), fontsize=title_font)
plt.plot(airborne_survey['y'], airborne_survey['x'],'k.')
plt.xlabel('y (m)', fontsize = title_font)
plt.ylabel('x (m)', fontsize = title_font)
plt.ylim(np.min(airborne_survey['x']),np.max(airborne_survey['x']))
plt.xlim(np.min(airborne_survey['y']),np.max(airborne_survey['y']))
plt.tick_params(labelsize=15)
file_name = 'figs/airborne/airborne_lines'
plt.savefig(file_name+'.png',dpi=300)
saved_files.append(file_name+'.png')
plt.show()
###Output
_____no_output_____
###Markdown
Generating .pickle file
###Code
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
airborne_survey['metadata'] = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
file_name = 'data/airborne_survey.pickle'
with open(file_name, 'w') as f:
pickle.dump(airborne_survey, f)
saved_files.append(file_name)
###Output
_____no_output_____
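###Markdown
As a quick check (a minimal sketch, assuming the same Python 2 / `cPickle` environment used above), the saved dictionaries can be read back with `pickle.load`:
###Code
with open('data/airborne_survey.pickle') as f:
    survey = pickle.load(f)
print survey['metadata']
print survey['shape'], survey['N']
###Output
_____no_output_____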
###Markdown
Saved files
###Code
with open('reports/report_%s.md' % notebook_name[:st.index(notebook_name, '.')], 'w') as q:
q.write('# Saved files \n')
now = datetime.datetime.utcnow().strftime('%d %B %Y %H:%M:%S UTC')
header = 'Generated by {name} on {date}'.format(date=now, name=notebook_name)
q.write('\n\n'+header+'\n\n')
for i, sf in enumerate(saved_files):
print '%d %s' % (i+1,sf)
q.write('* `%s` \n' % (sf))
###Output
1 figs/regular/grid_regular.png
2 data/regular_grid.pickle
3 figs/airborne/airborne_lines.png
4 data/airborne_survey.pickle
notebook/4_classification/9_CFU/c_9_CFU.ipynb
###Markdown
Split in train and validation

The validation set is shared across the different techniques for comparison; the split is stratified to keep the classes balanced.
###Code
attributes = [col for col in df.columns if col != 'IsBadBuy']
X = df[attributes].values
y = df['IsBadBuy']
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, stratify=y)
###Output
_____no_output_____
###Markdown
Sampling Method

We chose to use undersampling, since it was the approach that gave the best results with the Decision Tree.
###Code
rus = RandomUnderSampler(random_state=42)
print('Resampled dataset shape %s' % Counter(y_train))
X_train_res, y_train_res = rus.fit_resample(X_train, y_train)
print('Resampled dataset shape %s' % Counter(y_train_res))
###Output
Resampled dataset shape Counter({0: 4670, 1: 4670})
###Markdown
Naive Bayes
###Code
gnb = GaussianNB()
%%timeit -n 1
gnb.fit(X_train_res, y_train_res)
gnb.fit(X_train_res, y_train_res)
%%timeit -n 1
gnb.predict(X_val)
y_pred = gnb.predict(X_val)
y_train_pred = gnb.predict(X_train_res)
#y_pred = gnb.fit(X_train_res, y_train_res).predict(X_val)
print("Number of mislabeled points out of a total %d points : %d" % (X_val.shape[0], (y_val != y_pred).sum()))
###Output
Number of mislabeled points out of a total 16482 points : 5877
###Markdown
35.7% misclassified on the validation set.

Analyze the results
###Code
roc_auc_models_u = []
for i in range(0,len(cnfs)):
fpr, tpr, _ = roc_curve(y_train_res, y_pred_trains_u[i])
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train_res, y_pred_trains_u[i], average=None)
print("model {} - roc_auc: {}".format(i, roc_auc))
roc_auc_models_u.append(roc_auc)
fpr, tpr, _ = roc_curve(y_train_res, y_train_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train_res, y_train_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
print('Train Accuracy %s' % accuracy_score(y_train_res, y_train_pred))
print('Train F1-score %s' % f1_score(y_train_res, y_train_pred, average=None))
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
print('Val Accuracy %s' % accuracy_score(y_val, y_pred))
print('Val F1-score %s' % f1_score(y_val,y_pred, average=None))
%matplotlib inline
plt.figure(figsize=(8, 5))
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tick_params(axis='both', which='major')
plt.legend(loc="lower right", fontsize=14, frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
Naive Bayes with SMOTE
###Code
sm = SMOTE(random_state=42)
print('Resampled dataset shape %s' % Counter(y_train))
X_train_res_smote, y_train_res_smote = sm.fit_resample(X_train, y_train)
print('Resampled dataset shape %s' % Counter(y_train_res_smote))
gnb2 = GaussianNB()
gnb2.fit(X_train_res_smote, y_train_res_smote)
%%timeit -n 1
gnb2.predict(X_val)
#y_pred_over from the previous cell is not saved
y_pred_smote = gnb2.predict(X_val)
print("Number of mislabeled points out of a total %d points : %d" % (X_val.shape[0], (y_val != y_pred_smote).sum()))
###Output
Number of mislabeled points out of a total 16482 points : 7631
###Markdown
46.3% misclassified.

Naive Bayes with oversampling
###Code
ros = RandomOverSampler(random_state=42)
print('Resampled dataset shape %s' % Counter(y_train))
X_train_res_over, y_train_res_over = ros.fit_resample(X_train, y_train)
print('Resampled dataset shape %s' % Counter(y_train_res_over))
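# Note (added): gnb2 is not refit on the oversampled data here, so the predictions
# below still come from the model trained on the SMOTE data above, which is why the
# misclassification count matches the SMOTE run.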
%%timeit -n 1
gnb2.predict(X_val)
y_pred_over = gnb2.predict(X_val)
print("Number of mislabeled points out of a total %d points : %d" % (X_val.shape[0], (y_val != y_pred_over).sum()))
###Output
Number of mislabeled points out of a total 16482 points : 7631
###Markdown
46.3% misclassified.

Random Forest

I saw that Random Forest is not required.

Gridsearch
###Code
param_list = {'n_estimators': list(np.arange(2, 100)),
'criterion': ['gini', 'entropy'],
'max_depth': [None] + list(np.arange(2, 100)),
'min_samples_split': list(np.arange(2, 100)),
'min_samples_leaf': list(np.arange(1, 100)),
}
new_params = {'randomforestclassifier__' + key: param_list[key] for key in param_list}
skf = StratifiedKFold(n_splits=3)
clf = RandomForestClassifier(n_estimators=2, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1)
imba_pipeline = make_pipeline(RandomUnderSampler(), clf)
scoring = ['accuracy', 'f1', 'roc_auc' ]
random_search = RandomizedSearchCV(imba_pipeline, param_distributions=new_params, n_iter=1000, cv=skf, scoring=scoring, refit = 'roc_auc', n_jobs = 4, verbose = 1, return_train_score=True)
random_search.fit(X_train, y_train)
cnfs = report_multiple(random_search.cv_results_, n_top=3, scoring = 'roc_auc')
###Output
Fitting 3 folds for each of 1000 candidates, totalling 3000 fits
###Markdown
Perform Classification
###Code
models_u = []
y_pred_vals_u = []
y_pred_trains_u = []
hyper_ps = random_search.cv_results_
for cnf in cnfs.values():
n_estimators = cnf['randomforestclassifier__n_estimators']
criterion = cnf['randomforestclassifier__criterion']
max_depth = cnf['randomforestclassifier__max_depth']
min_samples_split = cnf['randomforestclassifier__min_samples_split']
min_samples_leaf = cnf['randomforestclassifier__min_samples_leaf']
clf = RandomForestClassifier(n_estimators=n_estimators, criterion=criterion, max_depth=max_depth, min_samples_split=min_samples_split, min_samples_leaf=min_samples_leaf)
clf = clf.fit(X_train_res, y_train_res)
models_u.append(clf)
y_pred = clf.predict(X_val)
y_pred_tr = clf.predict(X_train_res)
y_pred_vals_u.append(y_pred)
y_pred_trains_u.append(y_pred_tr)
###Output
_____no_output_____
###Markdown
Analyze the classification results
###Code
for i in range(0,len(cnfs)):
print("model {}".format(i))
print('Train Accuracy %s' % accuracy_score(y_train_res, y_pred_trains_u[i]))
print('Train F1-score %s' % f1_score(y_train_res, y_pred_trains_u[i], average=None))
print()
print('Test Accuracy %s' % accuracy_score(y_val, y_pred_vals_u[i]))
print('Test F1-score %s' % f1_score(y_val, y_pred_vals_u[i], average=None))
print(classification_report(y_val, y_pred_vals_u[i]))
print(confusion_matrix(y_val, y_pred_vals_u[i]))
roc_auc_models_u = []
for i in range(0,len(cnfs)):
fpr, tpr, _ = roc_curve(y_train_res, y_pred_trains_u[i])
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train_res, y_pred_trains_u[i], average=None)
print("model {} - roc_auc: {}".format(i, roc_auc))
roc_auc_models_u.append(roc_auc)
roc_auc_models_u = []
for i in range(0,len(cnfs)):
fpr, tpr, _ = roc_curve(y_val, y_pred_vals_u[i])
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred_vals_u[i], average=None)
print("model {} - roc_auc: {}".format(i, roc_auc))
roc_auc_models_u.append(roc_auc)
###Output
model 0 - roc_auc: 0.6357027295908513
model 1 - roc_auc: 0.6439090605527069
model 2 - roc_auc: 0.6403717829132193
###Markdown
Choose the best model

As the best model I choose model 2, since it is the one with the best ROC AUC.

{'randomforestclassifier__n_estimators': 87, 'randomforestclassifier__min_samples_split': 23, 'randomforestclassifier__min_samples_leaf': 3, 'randomforestclassifier__max_depth': 59, 'randomforestclassifier__criterion': 'gini'}
###Code
clf = RandomForestClassifier(n_estimators=87, criterion='gini', max_depth=59, min_samples_split=23, min_samples_leaf=3)
%%timeit -n 1
clf.fit(X_train_res, y_train_res)
%%timeit -n 1
clf.predict(X_val)
y_pred = clf.predict(X_val)
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
%matplotlib inline
plt.figure(figsize=(8, 5))
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tick_params(axis='both', which='major')
plt.legend(loc="lower right", fontsize=14, frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
Features importance
###Code
for col, imp in zip(attributes, clf.feature_importances_):
print(col, imp)
importances = clf.feature_importances_
# Sort feature importances in descending order
indices = np.argsort(importances)[::-1]
df1 = df
del df1['IsBadBuy']
# Rearrange feature names so they match the sorted feature importances
names = [df1.columns[i] for i in indices]
# Create plot
plt.figure(figsize=(15, 5))
# Create plot title
plt.title("Feature Importance")
# Add bars
plt.bar(range(X.shape[1]), importances[indices])
# Add feature names as x-axis labels
plt.xticks(range(X.shape[1]), names, rotation=90)
# Show plot
plt.show()
###Output
_____no_output_____
###Markdown
K-NN Gridsearch
###Code
param_list = {'n_neighbors': list(np.arange(2, 200)),
'weights': ['uniform', 'distance'],
'algorithm': ['auto'],
'leaf_size': list(np.arange(2, 200)),
}
new_params = {'kneighborsclassifier__' + key: param_list[key] for key in param_list}
skf = StratifiedKFold(n_splits=3)
clf = KNeighborsClassifier(n_neighbors=2, weights='uniform', algorithm='auto', leaf_size=2)
imba_pipeline = make_pipeline(RandomUnderSampler(), clf)
scoring = ['accuracy', 'f1', 'roc_auc' ]
random_search = RandomizedSearchCV(imba_pipeline, param_distributions=new_params, n_iter=1000, cv=skf, scoring=scoring, refit = 'roc_auc', n_jobs = 4, verbose = 1, return_train_score=True)
random_search.fit(X_train, y_train)
cnfs = report_multiple(random_search.cv_results_, n_top=3, scoring = 'roc_auc')
###Output
Fitting 3 folds for each of 1000 candidates, totalling 3000 fits
###Markdown
I cannot tell whether it has overfitted or not. I will try it on the external validation set and then try another grid.

Perform Classification
###Code
models_u = []
y_pred_vals_u = []
y_pred_trains_u = []
hyper_ps = random_search.cv_results_
for cnf in cnfs.values():
n_neighbors = cnf['kneighborsclassifier__n_neighbors']
weights = cnf['kneighborsclassifier__weights']
algorithm = cnf['kneighborsclassifier__algorithm']
leaf_size = cnf['kneighborsclassifier__leaf_size']
clf = KNeighborsClassifier(n_neighbors=n_neighbors, weights=weights, algorithm=algorithm, leaf_size=leaf_size)
clf = clf.fit(X_train_res, y_train_res)
models_u.append(clf)
y_pred = clf.predict(X_val)
y_pred_tr = clf.predict(X_train_res)
y_pred_vals_u.append(y_pred)
y_pred_trains_u.append(y_pred_tr)
###Output
_____no_output_____
###Markdown
Analyze the classification results
###Code
for i in range(0,len(cnfs)):
print("model {}".format(i))
print('Train Accuracy %s' % accuracy_score(y_train_res, y_pred_trains_u[i]))
print('Train F1-score %s' % f1_score(y_train_res, y_pred_trains_u[i], average=None))
print()
print('Test Accuracy %s' % accuracy_score(y_val, y_pred_vals_u[i]))
print('Test F1-score %s' % f1_score(y_val, y_pred_vals_u[i], average=None))
print(classification_report(y_val, y_pred_vals_u[i]))
print(confusion_matrix(y_val, y_pred_vals_u[i]))
roc_auc_models_u = []
for i in range(0,len(cnfs)):
fpr, tpr, _ = roc_curve(y_train_res, y_pred_trains_u[i])
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train_res, y_pred_trains_u[i], average=None)
print("model {} - roc_auc: {}".format(i, roc_auc))
roc_auc_models_u.append(roc_auc)
roc_auc_models_u = []
for i in range(0,len(cnfs)):
fpr, tpr, _ = roc_curve(y_val, y_pred_vals_u[i])
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred_vals_u[i], average=None)
print("model {} - roc_auc: {}".format(i, roc_auc))
roc_auc_models_u.append(roc_auc)
###Output
model 0 - roc_auc: 0.6102710825086516
model 1 - roc_auc: 0.6096736481750294
model 2 - roc_auc: 0.6075214495449303
###Markdown
Choose the best model

The best model would appear to be model 0, even though it is probably overfitting.

{'kneighborsclassifier__weights': 'distance', 'kneighborsclassifier__n_neighbors': 135, 'kneighborsclassifier__leaf_size': 142, 'kneighborsclassifier__algorithm': 'auto'}
###Code
neigh = KNeighborsClassifier(n_neighbors=135, weights='distance', algorithm='auto', leaf_size=142)
%%timeit -n 1
neigh.fit(X_train_res, y_train_res)
neigh.fit(X_train_res, y_train_res)
%%timeit -n 1
neigh.predict(X_val)
y_pred = neigh.predict(X_val)
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
%matplotlib inline
plt.figure(figsize=(8, 5))
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tick_params(axis='both', which='major')
plt.legend(loc="lower right", fontsize=14, frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
Another gridsearch
###Code
param_list = {'n_neighbors': [200, 300, 400, 500, 600, 700, 800, 1000],
'weights': ['distance'],
'algorithm': ['auto'],
'leaf_size': [100, 130, 160, 190, 220],
}
new_params = {'kneighborsclassifier__' + key: param_list[key] for key in param_list}
skf = StratifiedKFold(n_splits=3)
clf = KNeighborsClassifier(n_neighbors=200, weights='distance', algorithm='auto', leaf_size=100)
imba_pipeline = make_pipeline(RandomUnderSampler(), clf)
scoring = ['accuracy', 'f1', 'roc_auc' ]
random_search = RandomizedSearchCV(imba_pipeline, param_distributions=new_params, n_iter=1000, cv=skf, scoring=scoring, refit = 'roc_auc', n_jobs = 4, verbose = 1, return_train_score=True)
random_search.fit(X_train, y_train)
cnfs = report_multiple(random_search.cv_results_, n_top=3, scoring = 'roc_auc')
###Output
C:\Users\Giulia\Anaconda3\lib\site-packages\sklearn\model_selection\_search.py:281: UserWarning: The total space of parameters 40 is smaller than n_iter=1000. Running 40 iterations. For exhaustive searches, use GridSearchCV.
% (grid_size, self.n_iter, grid_size), UserWarning)
[Parallel(n_jobs=4)]: Using backend LokyBackend with 4 concurrent workers.
###Markdown
Other tests
###Code
neigh = KNeighborsClassifier(n_neighbors=9000, weights='distance', algorithm='auto', leaf_size=150)
neigh.fit(X_train_res, y_train_res)
y_pred = neigh.predict(X_val)
y_pred_train = neigh.predict(X_train_res)
fpr, tpr, _ = roc_curve(y_train_res, y_pred_train)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_train_res, y_pred_train, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
###Output
model 0 - roc_auc: 1.0
model 0 - roc_auc: 0.5910739812673513
###Markdown
I gradually increased the number of neighbours to try to improve the ROC on the validation set and worsen it on the training set, but nothing changed (with `weights='distance'`, each training point is its own nearest neighbour at distance zero, so predictions on the training set stay perfect regardless of the number of neighbours).

Choose the final best model

I take the model selected after the first grid search.
###Code
neigh = KNeighborsClassifier(n_neighbors=135, weights='distance', algorithm='auto', leaf_size=142)
%%timeit -n 1
neigh.fit(X_train_res, y_train_res)
neigh.fit(X_train_res, y_train_res)
%%timeit -n 1
neigh.predict(X_val)
y_pred = neigh.predict(X_val)
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
%matplotlib inline
plt.figure(figsize=(8, 5))
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tick_params(axis='both', which='major')
plt.legend(loc="lower right", fontsize=14, frameon=False)
plt.show()
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
dtc = DecisionTreeClassifier(criterion='gini', max_depth=5, min_samples_split=26, min_samples_leaf=25)
%%timeit -n 1
dtc.fit(X_train_res, y_train_res)
dtc.fit(X_train_res, y_train_res)
%%timeit -n 1
dtc.predict(X_val)
y_pred = dtc.predict(X_val)
fpr, tpr, _ = roc_curve(y_val, y_pred)
roc_auc = auc(fpr, tpr)
roc_auc = roc_auc_score(y_val, y_pred, average=None)
print("model {} - roc_auc: {}".format(0, roc_auc))
%matplotlib inline
plt.figure(figsize=(8, 5))
plt.plot(fpr, tpr, label='ROC curve (area = %0.2f)' % (roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.tick_params(axis='both', which='major')
plt.legend(loc="lower right", fontsize=14, frameon=False)
plt.show()
###Output
_____no_output_____
d2l-en/mxnet/chapter_linear-networks/softmax-regression-scratch.ipynb
###Markdown
Implementation of Softmax Regression from Scratch
:label:`sec_softmax_scratch`

Just as we implemented linear regression from scratch, we believe that softmax regression is similarly fundamental and you ought to know the gory details of how to implement it yourself. We will work with the Fashion-MNIST dataset, just introduced in :numref:`sec_fashion_mnist`, setting up a data iterator with batch size 256.
###Code
from d2l import mxnet as d2l
from mxnet import autograd, np, npx, gluon
from IPython import display
npx.set_np()
batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
###Output
_____no_output_____
###Markdown
Initializing Model Parameters

As in our linear regression example, each example here will be represented by a fixed-length vector. Each example in the raw dataset is a $28 \times 28$ image. In this section, we will flatten each image, treating them as vectors of length 784. In the future, we will talk about more sophisticated strategies for exploiting the spatial structure in images, but for now we treat each pixel location as just another feature.

Recall that in softmax regression, we have as many outputs as there are classes. Because our dataset has 10 classes, our network will have an output dimension of 10. Consequently, our weights will constitute a $784 \times 10$ matrix and the biases will constitute a $1 \times 10$ row vector. As with linear regression, we will initialize our weights `W` with Gaussian noise and our biases to take the initial value 0.
###Code
num_inputs = 784
num_outputs = 10
W = np.random.normal(0, 0.01, (num_inputs, num_outputs))
b = np.zeros(num_outputs)
W.attach_grad()
b.attach_grad()
###Output
_____no_output_____
###Markdown
Defining the Softmax Operation

Before implementing the softmax regression model, let us briefly review how the sum operator works along specific dimensions in a tensor, as discussed in :numref:`subseq_lin-alg-reduction` and :numref:`subseq_lin-alg-non-reduction`. Given a matrix `X` we can sum over all elements (by default) or only over elements in the same axis, i.e., the same column (axis 0) or the same row (axis 1). Note that if `X` is a tensor with shape (2, 3) and we sum over the columns, the result will be a vector with shape (3,). When invoking the sum operator, we can specify to keep the number of axes in the original tensor, rather than collapsing out the dimension that we summed over. This will result in a two-dimensional tensor with shape (1, 3).
###Code
X = np.array([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
X.sum(0, keepdims=True), X.sum(1, keepdims=True)
###Output
_____no_output_____
###Markdown
We are now ready to implement the softmax operation. Recall that softmax consists of three steps: i) we exponentiate each term (using `exp`); ii) we sum over each row (we have one row per example in the batch) to get the normalization constant for each example; iii) we divide each row by its normalization constant, ensuring that the result sums to 1. Before looking at the code, let us recall how this looks expressed as an equation:

$$\mathrm{softmax}(\mathbf{X})_{ij} = \frac{\exp(\mathbf{X}_{ij})}{\sum_k \exp(\mathbf{X}_{ik})}.$$

The denominator, or normalization constant, is also sometimes called the *partition function* (and its logarithm is called the log-partition function). The origins of that name are in [statistical physics](https://en.wikipedia.org/wiki/Partition_function_(statistical_mechanics)) where a related equation models the distribution over an ensemble of particles.
###Code
def softmax(X):
X_exp = np.exp(X)
partition = X_exp.sum(1, keepdims=True)
return X_exp / partition # The broadcasting mechanism is applied here
###Output
_____no_output_____
###Markdown
As you can see, for any random input, we turn each element into a non-negative number. Moreover, each row sums up to 1, as is required for a probability.
###Code
X = np.random.normal(0, 1, (2, 5))
X_prob = softmax(X)
X_prob, X_prob.sum(1)
###Output
_____no_output_____
###Markdown
Note that while this looks correct mathematically, we were a bit sloppy in our implementation because we failed to take precautions against numerical overflow or underflow due to large or very small elements of the matrix.

Defining the Model

Now that we have defined the softmax operation, we can implement the softmax regression model. The below code defines how the input is mapped to the output through the network. Note that we flatten each original image in the batch into a vector using the `reshape` function before passing the data through our model.
###Code
def net(X):
return softmax(np.dot(X.reshape((-1, W.shape[0])), W) + b)
###Output
_____no_output_____
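###Markdown
As noted above, the softmax implementation takes no precautions against numerical overflow. A common remedy, shown here as a minimal sketch that is not part of the original text (and assuming the MXNet `np` namespace supports `max` with `keepdims`, as NumPy does), is to subtract the row-wise maximum before exponentiating: softmax is invariant to this shift, but it keeps the arguments of `exp` small.
###Code
def stable_softmax(X):
    # softmax(X) == softmax(X - c) for any per-row constant c, so subtracting the
    # row-wise maximum leaves the result unchanged while preventing overflow in exp
    X_shifted = X - X.max(axis=1, keepdims=True)
    X_exp = np.exp(X_shifted)
    return X_exp / X_exp.sum(1, keepdims=True)
###Output
_____no_output_____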
###Markdown
Defining the Loss Function

Next, we need to implement the cross-entropy loss function, as introduced in :numref:`sec_softmax`. This may be the most common loss function in all of deep learning because, at the moment, classification problems far outnumber regression problems.

Recall that cross-entropy takes the negative log-likelihood of the predicted probability assigned to the true label. Rather than iterating over the predictions with a Python for-loop (which tends to be inefficient), we can pick all elements by a single operator. Below, we create sample data `y_hat` with 2 examples of predicted probabilities over 3 classes and their corresponding labels `y`. With `y` we know that in the first example the first class is the correct prediction and in the second example the third class is the ground-truth. Using `y` as the indices of the probabilities in `y_hat`, we pick the probability of the first class in the first example and the probability of the third class in the second example.
###Code
y = np.array([0, 2])
y_hat = np.array([[0.1, 0.3, 0.6], [0.3, 0.2, 0.5]])
y_hat[[0, 1], y]
###Output
_____no_output_____
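###Markdown
For reference (an added note, consistent with the text above): with a one-hot label vector $\mathbf{y}$, the cross-entropy loss for a single example is $$l(\mathbf{y}, \hat{\mathbf{y}}) = -\sum_{j} y_j \log \hat{y}_j = -\log \hat{y}_{\text{true class}},$$ which is exactly the negative log of the indexed probability picked out above.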
###Markdown
Now we can implement the cross-entropy loss function efficiently with just one line of code.
###Code
def cross_entropy(y_hat, y):
return - np.log(y_hat[range(len(y_hat)), y])
cross_entropy(y_hat, y)
###Output
_____no_output_____
###Markdown
Classification Accuracy

Given the predicted probability distribution `y_hat`, we typically choose the class with the highest predicted probability whenever we must output a hard prediction. Indeed, many applications require that we make a choice. Gmail must categorize an email into "Primary", "Social", "Updates", or "Forums". It might estimate probabilities internally, but at the end of the day it has to choose one among the classes.

When predictions are consistent with the label class `y`, they are correct. The classification accuracy is the fraction of all predictions that are correct. Although it can be difficult to optimize accuracy directly (it is not differentiable), it is often the performance measure that we care most about, and we will nearly always report it when training classifiers.

To compute accuracy we do the following. First, if `y_hat` is a matrix, we assume that the second dimension stores prediction scores for each class. We use `argmax` to obtain the predicted class by the index for the largest entry in each row. Then we compare the predicted class with the ground-truth `y` elementwise. Since the equality operator `==` is sensitive to data types, we convert `y_hat`'s data type to match that of `y`. The result is a tensor containing entries of 0 (false) and 1 (true). Taking the sum yields the number of correct predictions.
###Code
def accuracy(y_hat, y): #@save
"""Compute the number of correct predictions."""
if len(y_hat.shape) > 1 and y_hat.shape[1] > 1:
y_hat = y_hat.argmax(axis=1)
cmp = y_hat.astype(y.dtype) == y
return float(cmp.astype(y.dtype).sum())
###Output
_____no_output_____
###Markdown
We will continue to use the variables `y_hat` and `y` defined before as the predicted probability distributions and labels, respectively. We can see that the first example's prediction class is 2 (the largest element of the row is 0.6 with the index 2), which is inconsistent with the actual label, 0. The second example's prediction class is 2 (the largest element of the row is 0.5 with the index of 2), which is consistent with the actual label, 2. Therefore, the classification accuracy rate for these two examples is 0.5.
###Code
accuracy(y_hat, y) / len(y)
###Output
_____no_output_____
###Markdown
Similarly, we can evaluate the accuracy for any model `net` on a dataset that is accessed via the data iterator `data_iter`.
###Code
def evaluate_accuracy(net, data_iter): #@save
"""Compute the accuracy for a model on a dataset."""
metric = Accumulator(2) # No. of correct predictions, no. of predictions
for X, y in data_iter:
metric.add(accuracy(net(X), y), y.size)
return metric[0] / metric[1]
###Output
_____no_output_____
###Markdown
Here `Accumulator` is a utility class to accumulate sums over multiple variables. In the above `evaluate_accuracy` function, we create 2 variables in the `Accumulator` instance for storing both the number of correct predictions and the number of predictions, respectively. Both will be accumulated over time as we iterate over the dataset.
###Code
class Accumulator: #@save
"""For accumulating sums over `n` variables."""
def __init__(self, n):
self.data = [0.0] * n
def add(self, *args):
self.data = [a + float(b) for a, b in zip(self.data, args)]
def reset(self):
self.data = [0.0] * len(self.data)
def __getitem__(self, idx):
return self.data[idx]
###Output
_____no_output_____
###Markdown
Because we initialized the `net` model with random weights, the accuracy of this model should be close to random guessing, i.e., 0.1 for 10 classes.
###Code
evaluate_accuracy(net, test_iter)
###Output
_____no_output_____
###Markdown
**Training** The training loop for softmax regression should look strikingly familiar if you read through our implementation of linear regression in :numref:`sec_linear_scratch`. Here we refactor the implementation to make it reusable. First, we define a function to train for one epoch. Note that `updater` is a general function to update the model parameters, which accepts the batch size as an argument. It can be either a wrapper of the `d2l.sgd` function or a framework's built-in optimization function.
###Code
def train_epoch_ch3(net, train_iter, loss, updater): #@save
"""Train a model within one epoch (defined in Chapter 3)."""
# Sum of training loss, sum of training accuracy, no. of examples
metric = Accumulator(3)
if isinstance(updater, gluon.Trainer):
updater = updater.step
for X, y in train_iter:
# Compute gradients and update parameters
with autograd.record():
y_hat = net(X)
l = loss(y_hat, y)
l.backward()
updater(X.shape[0])
metric.add(float(l.sum()), accuracy(y_hat, y), y.size)
# Return training loss and training accuracy
return metric[0] / metric[2], metric[1] / metric[2]
###Output
_____no_output_____
###Markdown
Before showing the implementation of the training function, we define a utility class that plots data in animation. Again, it aims to simplify code in the rest of the book.
###Code
class Animator: #@save
"""For plotting data in animation."""
def __init__(self, xlabel=None, ylabel=None, legend=None, xlim=None,
ylim=None, xscale='linear', yscale='linear',
fmts=('-', 'm--', 'g-.', 'r:'), nrows=1, ncols=1,
figsize=(3.5, 2.5)):
# Incrementally plot multiple lines
if legend is None:
legend = []
d2l.use_svg_display()
self.fig, self.axes = d2l.plt.subplots(nrows, ncols, figsize=figsize)
if nrows * ncols == 1:
self.axes = [self.axes, ]
# Use a lambda function to capture arguments
self.config_axes = lambda: d2l.set_axes(
self.axes[0], xlabel, ylabel, xlim, ylim, xscale, yscale, legend)
self.X, self.Y, self.fmts = None, None, fmts
def add(self, x, y):
# Add multiple data points into the figure
if not hasattr(y, "__len__"):
y = [y]
n = len(y)
if not hasattr(x, "__len__"):
x = [x] * n
if not self.X:
self.X = [[] for _ in range(n)]
if not self.Y:
self.Y = [[] for _ in range(n)]
for i, (a, b) in enumerate(zip(x, y)):
if a is not None and b is not None:
self.X[i].append(a)
self.Y[i].append(b)
self.axes[0].cla()
for x, y, fmt in zip(self.X, self.Y, self.fmts):
self.axes[0].plot(x, y, fmt)
self.config_axes()
display.display(self.fig)
display.clear_output(wait=True)
###Output
_____no_output_____
###Markdown
The following training function then trains a model `net` on a training dataset accessed via `train_iter` for multiple epochs, which is specified by `num_epochs`. At the end of each epoch, the model is evaluated on a testing dataset accessed via `test_iter`. We will leverage the `Animator` class to visualize the training progress.
###Code
def train_ch3(net, train_iter, test_iter, loss, num_epochs, updater): #@save
"""Train a model (defined in Chapter 3)."""
animator = Animator(xlabel='epoch', xlim=[1, num_epochs], ylim=[0.3, 0.9],
legend=['train loss', 'train acc', 'test acc'])
for epoch in range(num_epochs):
train_metrics = train_epoch_ch3(net, train_iter, loss, updater)
test_acc = evaluate_accuracy(net, test_iter)
animator.add(epoch + 1, train_metrics + (test_acc,))
train_loss, train_acc = train_metrics
assert train_loss < 0.5, train_loss
assert train_acc <= 1 and train_acc > 0.7, train_acc
assert test_acc <= 1 and test_acc > 0.7, test_acc
###Output
_____no_output_____
###Markdown
As an implementation from scratch, we use the minibatch stochastic gradient descent defined in :numref:`sec_linear_scratch` to optimize the loss function of the model with a learning rate of 0.1.
###Code
lr = 0.1
def updater(batch_size):
return d2l.sgd([W, b], lr, batch_size)
###Output
_____no_output_____
###Markdown
Now we train the model for 10 epochs. Note that both the number of epochs (`num_epochs`) and the learning rate (`lr`) are adjustable hyperparameters. By changing their values, we may be able to increase the classification accuracy of the model.
###Code
num_epochs = 10
train_ch3(net, train_iter, test_iter, cross_entropy, num_epochs, updater)
###Output
_____no_output_____
###Markdown
**Prediction** Now that training is complete, our model is ready to classify some images. Given a series of images, we will compare their actual labels (first line of text output) and the predictions from the model (second line of text output).
###Code
def predict_ch3(net, test_iter, n=6): #@save
"""Predict labels (defined in Chapter 3)."""
for X, y in test_iter:
break
trues = d2l.get_fashion_mnist_labels(y)
preds = d2l.get_fashion_mnist_labels(net(X).argmax(axis=1))
titles = [true + '\n' + pred for true, pred in zip(trues, preds)]
d2l.show_images(X[0:n].reshape((n, 28, 28)), 1, n, titles=titles[0:n])
predict_ch3(net, test_iter)
###Output
_____no_output_____
|
agreg/public2012_D5_OCaml.ipynb
|
###Markdown
Table of Contents 1 Agrégation externe de mathématiques, texte d’exercice diffusé en 20121.1 Épreuve de modélisation, option informatique1.2 Proposition d'implémentation, en OCaml1.2.1 Pour l'option informatique (D) de l'agrégation de mathématiques (en France).1.3 Exercice requis1.4 Choix de structure de données1.4.1 En OCaml1.4.2 En Python1.5 Réponse1.5.1 On fait quelques exemples...1.5.2 Si une hypothèse n'est pas vérifié1.6 Bonus : deux autres méthodes (droites inférieure et supérieure)1.7 Illustration1.7.1 Par la sélection de Bresenham1.7.2 Par la sélection inférieure1.7.3 Par la sélection supérieure1.8 Autres bonus : calculer le mot binaire codant les déplacements1.9 Conclusion1.10 Attention Agrégation externe de mathématiques, texte d’exercice diffusé en 2012 Épreuve de modélisation, option informatique > - Ce [notebook Jupyter](http://jupyter.org/), utilisant [OCaml](https://ocaml.org/) (via le [kernel Ocaml](https://github.com/akabe/ocaml-jupyter/)), est une correction [non officielle](https://github.com/Naereen/notebooks/tree/master/agreg) d'un texte de modélisation pour l'option informatique de l'agrégation externe de mathématiques.> - Il s'agit du texte [public2012-D5](http://agreg.org/Textes/public2012-D5.pdf).> - Cette tentative de correction partielle a été rédigée par [Lilian Besson](http://perso.crans.org/besson/) ([sur GitHub ?](https://github.com/Naereen/), [sur Bitbucket ?](https://bitbucket.org/lbesson)), et [est open-source](https://github.com/Naereen/notebooks/blob/master/agreg/public2012_D5_OCaml.ipynb).> - J'avais déjà rédigé une solution, pendant ma propre préparation à l'agrégation en 2013/2014, voir [ce fichier](https://perso.crans.org/besson/agreg/m/29-04/code_Public2012-D5.html).> Retour ?> - Vous avez trouvé un bug ? → [Signalez-le moi svp !](https://github.com/Naereen/notebooks/issues/new), merci d'avance.> - Vous avez une question ? → [Posez la svp !](https://github.com/Naereen/ama.fr) [](https://GitHub.com/Naereen/ama.fr)---- *Proposition* d'implémentation, en [OCaml](https://ocaml.org/) Pour [l'option informatique (D)](http://www.dit.ens-rennes.fr/agregation-option-d/programme-de-l-option-informatique-de-l-agregation-de-mathematiques-48358.kjsp) de l'[agrégation de mathématiques](http://agreg.org/) (en France). **Attention** : ce document ne prétend pas être LA correction du texte, mais **un exemple de solution**.Je me suis inspiré des propositions d'implémentations rédigées par les élèves qui ont préparé ce texte en 3h50 le lundi 13 mai 2019.---- Exercice requisL'exercice de programmation était en page 2/8 du texte, après l'explication du problème et de l'algorithme de Bresenham.> Écrire un programme permettant de représenter le segment $[A B]$, où $A= (a_1,a_2)$ et $B=(b_1,b_2)$, en suivant l'algorithme de Bresenham.> On supposera que $a_1<b_1$, $a_2 \leq b_2$ et que la pente $\alpha$ de la droite est inférieure à $1$.> La sortie du programme sera la liste des couples $(x_i,y_i)$ des points représentant le segment.Attention, on rappelle que le rapport du jury précise explicitement que dans les exercices de programmation **liste de …** signifie *liste* OU *tableau*, au choix du candidat ou de la candidate. ---- Choix de structure de donnéesSoit $n = b_1 - a_1 \in\mathbb{N}$.Ici, on connaît à l'avance le nombre de points que doit contenir la solution, donc utiliser un tableau de $n+1$ points est une bonne idée. En OCamlOn va préférer :```ocamllet segment = Array.make (n+1) (a1, a2) in...for i = 1 to n do let xi, yi = ..., ... 
in segment.(i) <- (xi, yi);done```à :```ocamllet segment = ref [(a1, a2)] in...for i = 1 to n do let xi, yi = ..., ... in segment := (xi, yi) :: !segment;done``` En PythonOn pourrait de même créer un tableau dès le début.On va préférer :```pythonsegment = [ (0,0) for i in range(n+1) ]segment = [ (0,0) ] * (n+1)...for i in range(n): xi, yi = ..., ... segment[i] = (xi, yi)```à :```pythonsegment = [ (a1, a2) ]...for i in range(n): xi, yi = ..., ... segment.append(xi, yi)``` ---- Réponse On utilise un type `point` pour représenter les points de coordonées entières $(x, y) \in\mathbb{Z}^2$, cela facilitera l'affichage des signatures :
###Code
type point = (int * int);;
let point_a : point = (0, 0)
and point_b : point = (4, 3);;
type segment = point array;;
###Output
_____no_output_____
###Markdown
The following function returns an array of $n+1$ points representing the segment $[a, b]$, obtained with Bresenham's algorithm. - Time complexity: $\mathcal{O}(n)$ - Memory complexity: $\mathcal{O}(n)$, where n = b1 - a1 (in every case).
###Code
let bresenham (a : point) (b : point) : segment =
let a1, a2 = a
and b1, b2 = b in
let n = b1 - a1 in
let segment_ab = Array.make (n+1) a in
let alpha_normalisee = b2 - a2 in (* pente normalisée, ie alpha*n dans *)
let erreur = ref 0 in
let y_tilde = ref a2 in
for i = 1 to n-1 do
if 2 * (!erreur + alpha_normalisee) <= n then
erreur := !erreur + alpha_normalisee
else begin
erreur := !erreur + alpha_normalisee - n;
y_tilde := !y_tilde + 1;
end;
segment_ab.(i) <- (a1 + i, !y_tilde);
done;
segment_ab.(n) <- b;
segment_ab
;;
###Output
_____no_output_____
###Markdown
Let's try a few examples...
###Code
bresenham (0, 0) (5, 2);;
bresenham (0, 0) (5, 5);;
###Output
_____no_output_____
###Markdown
**If an assumption is not satisfied** We check that the order of the arguments matters: the program requires $a_1 < b_1$ and $a_2 \leq b_2$:
###Code
bresenham (0, 0) (-5, 2);;
###Output
_____no_output_____
###Markdown
If the slope is $\alpha>1$, the program does not do what we would hope, because its assumptions are not satisfied:
###Code
bresenham (0, 0) (0, 2);;
###Output
_____no_output_____
###Markdown
---- **Bonus: two other methods (lower and upper lines)** This is not required by the text, but we can easily implement the method that follows the line as closely as possible from below, and the one that follows it as closely as possible from above. - For the first one, it is fairly easy and we can again work only with integers: - Time complexity: $\mathcal{O}(n)$ - Memory complexity: $\mathcal{O}(n)$, where n = b1 - a1 (in every case).
###Code
let au_plus_pres_inferieurement (a : point) (b : point) : segment =
let a1, a2 = a
and b1, b2 = b in
let n = b1 - a1 in
let segment_ab = Array.make (n+1) a in
let alpha_normalisee = b2 - a2 in (* pente normalisée, ie alpha*n dans *)
for i = 1 to n-1 do
(* on laisse la division entière faire la partie inférieure *)
segment_ab.(i) <- (a1 + i, (alpha_normalisee * i + a2 * (b1-a1)) / (b1 -a1));
done;
segment_ab.(n) <- b;
segment_ab
;;
###Output
_____no_output_____
###Markdown
On the same examples, we can see the difference when the slope is $\alpha<1$:
###Code
bresenham (0, 0) (5, 2);;
au_plus_pres_inferieurement (0, 0) (5, 2);;
bresenham (0, 0) (5, 5);;
au_plus_pres_inferieurement (0, 0) (5, 5);;
###Output
_____no_output_____
###Markdown
- For the line followed as closely as possible from above, we will illustrate the use of floating-point arithmetic and of the `ceil` function. - Time complexity: $\mathcal{O}(n)$ - Memory complexity: $\mathcal{O}(n)$, where n = b1 - a1 (in every case).
###Code
ceil;;
let ceil_to_int x = int_of_float (ceil x);;
let au_plus_pres_superieurement (a : point) (b : point) : segment =
let a1, a2 = a
and b1, b2 = b in
let n = b1 - a1 in
let segment_ab = Array.make (n+1) a in
let alpha = (float_of_int (b2 - a2)) /. (float_of_int n) in (* pente normalisée, ie alpha*n dans *)
for i = 1 to n-1 do
segment_ab.(i) <- (a1 + i, ceil_to_int ((float_of_int a2) +. alpha *. (float_of_int i)));
done;
segment_ab.(n) <- b;
segment_ab
;;
###Output
_____no_output_____
###Markdown
On the same examples, we can see the difference when the slope is $\alpha<1$:
###Code
bresenham (0, 0) (5, 2);;
au_plus_pres_superieurement (0, 0) (5, 2);;
bresenham (0, 0) (5, 5);;
au_plus_pres_superieurement (0, 0) (5, 5);;
###Output
_____no_output_____
###Markdown
---- **Illustration** As a bonus, we show an illustration (at the board, we would simply draw the pictures). By Bresenham's selection. By the lower selection. By the upper selection. ---- **Another bonus: computing the binary word encoding the moves** If we use, for example, the line followed as closely as possible from below, the following function returns the sequence of horizontal or diagonal moves needed to follow the segment $[a, b]$.
###Code
type mot_binaire = bool array;;
let deplacements (a : point) (b : point) : mot_binaire =
let a1, a2 = a
and b1, b2 = b in
let n = b1 - a1 in
let mot_binaire_ab : mot_binaire = Array.make n false in
let alpha_normalisee = b2 - a2 in (* pente normalisée, ie alpha*n dans *)
let y0 = ref 0 and y1 = ref 0 in
for i = 1 to n do
y0 := !y1;
(* on laisse la division entière faire la partie inférieure *)
y1 := (alpha_normalisee * i + a2 * (b1-a1)) / (b1 -a1);
mot_binaire_ab.(i-1) <- !y0 != !y1;
done;
mot_binaire_ab
;;
###Output
_____no_output_____
###Markdown
On the same examples, we can see the difference when the slope is $\alpha<1$:
###Code
au_plus_pres_inferieurement (0, 0) (5, 2);;
deplacements (0, 0) (5, 2);;
###Output
_____no_output_____
###Markdown
The returned word is $(0 0 1 0 1)$, as expected. And if the slope is $\alpha=1$, the word will be $(11111)$.
###Code
au_plus_pres_inferieurement (0, 0) (5, 5);;
deplacements (0, 0) (5, 5);;
###Output
_____no_output_____
|
openmdao/docs/openmdao_book/features/core_features/working_with_groups/add_subsystem.ipynb
|
###Markdown
Adding Subsystems to a Group and Promoting VariablesTo add a Component or another Group to a Group, use the `add_subsystem` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.add_subsystem :noindex:``` Usage Add a Component to a Group
###Code
import openmdao.api as om
p = om.Problem()
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.setup();
print(p.get_val('comp1.a'))
print(p.get_val('comp1.b'))
from openmdao.utils.assert_utils import assert_near_equal
assert(p.get_val('comp1.a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
###Markdown
```{note}Group names must be Pythonic, so they can only contain alphanumeric characters plus the underscore. In addition, the first character in the group name must be a letter of the alphabet. Also, the system name should not duplicate any method or attribute of the `System` API.``` Promote the input and output of a ComponentBecause the promoted names of `indep.a` and `comp.a` are the same, `indep.a` is automatically connected to `comp1.a`.```{note}Inputs are always accessed using unpromoted names even when they arepromoted, because promoted input names may not be unique. The unpromoted nameis the full system path to the variable from the point of view of the callingsystem. Accessing the variables through the Problem as in this example meansthat the unpromoted name and the full or absolute pathname are the same.```
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('a', 3.0),
promotes_outputs=['a'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a'),
promotes_inputs=['a'])
p.setup()
p.run_model()
print(p.get_val('a'))
print(p.get_val('comp1.b'))
assert(p.get_val('a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
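For contrast, here is a small sketch (not taken from this documentation page) that wires the same two systems together with an explicit `connect` call instead of relying on matching promoted names; the result is identical.

```python
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('a', 3.0))
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a'))
# connect the output 'indep.a' to the input 'comp1.a' by hand
p.model.connect('indep.a', 'comp1.a')
p.setup()
p.run_model()
print(p.get_val('comp1.b'))  # 6.0, same as the promotion-based version above
```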
###Markdown
Add two Components to a Group nested within another Group
###Code
p = om.Problem()
p.model.add_subsystem('G1', om.Group())
p.model.G1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.model.G1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
p.setup()
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp1.b'))
print(p.get_val('G1.comp2.a'))
print(p.get_val('G1.comp2.b'))
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp1.b') == 6.0)
assert(p.get_val('G1.comp2.a') == 4.0)
assert(p.get_val('G1.comp2.b') == 12.0)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components to subgroup levelIn this example, there are two inputs promoted to the same name, sothe promoted name *G1.a* is not unique.
###Code
# promotes from bottom level up 1
p = om.Problem()
g1 = p.model.add_subsystem('G1', om.Group())
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0),
promotes_inputs=['a'], promotes_outputs=['b'])
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0),
promotes_inputs=['a'])
g1.set_input_defaults('a', val=3.5)
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('G1.b'))
# output G1.comp2.b is not promoted
print(p.get_val('G1.comp2.b'))
# use unpromoted names for the following 2 promoted inputs
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp2.a'))
assert(p.get_val('G1.b') == 6.0)
assert(p.get_val('G1.comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.5)
assert(p.get_val('G1.comp2.a') == 3.5)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components from subgroup level up to top level
###Code
# promotes up from G1 level
p = om.Problem()
g1 = om.Group()
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
# use glob pattern 'comp?.a' to promote both comp1.a and comp2.a
# use glob pattern 'comp?.b' to promote both comp1.b and comp2.b
p.model.add_subsystem('G1', g1,
promotes_inputs=['comp?.a'],
promotes_outputs=['comp?.b'])
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('comp1.b'), 6.0)
# output G1.comp2.b is promoted
print(p.get_val('comp2.b'), 12.0)
# access both promoted inputs using unpromoted names.
print(p.get_val('G1.comp1.a'), 3.0)
print(p.get_val('G1.comp2.a'), 4.0)
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp2.a') == 4.0)
###Output
_____no_output_____
###Markdown
Promote with an alias to connect an input to a source
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('aa', 3.0),
promotes=['aa'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*aa'),
promotes_inputs=['aa'])
# here we alias 'a' to 'aa' so that it will be automatically
# connected to the independent variable 'aa'.
p.model.add_subsystem('comp2', om.ExecComp('b=3.0*a'),
promotes_inputs=[('a', 'aa')])
p.setup()
p.run_model()
print(p.get_val('comp1.b'))
print(p.get_val('comp2.b'))
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 9.0)
###Output
_____no_output_____
###Markdown
(group-promotion)= Promote Inputs and Outputs After Adding SubsystemsIt is also possible to promote inputs and outputs after a subsystem has been addedto a Group using the `promotes` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.promotes :noindex:``` Usage Promote any subsystem inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp1', any=['*'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('x'))
assert(top.get_val('x') == 5)
###Output
_____no_output_____
###Markdown
Promote specific inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp2', inputs=['a'], outputs=['b'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('a'))
print(top.get_val('b'))
assert(top.get_val('a') == 1)
assert(top.get_val('b') == 1)
###Output
_____no_output_____
###Markdown
Specifying source shape and source indices for promoted inputs of a groupThe arg `src_shape` can be passed to `promotes` or `set_input_defaults` calls in order tospecify the shape of the source that the input is expecting. This allows an output havinga different shape to be connected to an input by specifying `src_indices` in the `connect`or `promotes` call, even if there are other `src_indices` specified at lower levels in thesystem tree for the same input(s). This basically allows you to specify the 'connection interface'for a given Group, making it easier to use that Group in other models without having to modifyits internal `src_indices` based on the shape of whatever sources are connected to its inputsin a given model.Note that if multiple inputs are promoted to the same name then their `src_shape` must match,but their `src_indices` may be different.Below is an example of applying multiple `src_indices` to the same promoted input at differentsystem tree levels.
###Code
import numpy as np
p = om.Problem()
G = p.model.add_subsystem('G', om.Group())
# At the top level, we assume that the source has a shape of (3,3), and after we
# slice it with [:,:-1], lower levels will see their source having a shape of (3,2)
p.model.promotes('G', inputs=['x'], src_indices=om.slicer[:,:-1], src_shape=(3, 3))
# This specifies that G.x assumes a source shape of (3,2)
G.set_input_defaults('x', src_shape=(3, 2))
g1 = G.add_subsystem('g1', om.Group(), promotes_inputs=['x'])
g1.add_subsystem('C1', om.ExecComp('y = 3*x', shape=3))
# C1.x has a shape of 3, so we apply a slice of [:, 1] to our source which has a shape
# of (3,2) to give us our final shape of 3.
g1.promotes('C1', inputs=['x'], src_indices=om.slicer[:, 1], src_shape=(3, 2))
g2 = G.add_subsystem('g2', om.Group(), promotes_inputs=['x'])
g2.add_subsystem('C2', om.ExecComp('y = 2*x', shape=2))
# C2.x has a shape of 2, so we apply flat source indices of [1,5] to our source which has
# a shape of (3,2) to give us our final shape of 2.
g2.promotes('C2', inputs=['x'], src_indices=[1, 5], src_shape=(3, 2), flat_src_indices=True)
p.setup()
inp = np.arange(9).reshape((3,3)) + 1.
p.set_val('x', inp)
p.run_model()
print(p['x'])
print(p['G.g1.C1.y'])
print(p['G.g2.C2.y'])
assert_near_equal(p['x'], inp)
assert_near_equal(p['G.g1.C1.y'], inp[:, :-1][:, 1]*3.)
assert_near_equal(p['G.g2.C2.y'], inp[:, :-1].flatten()[[1,5]]*2.)
###Output
_____no_output_____
###Markdown
Adding Subsystems to a Group and Promoting VariablesTo add a Component or another Group to a Group, use the `add_subsystem` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.add_subsystem :noindex:``` Usage Add a Component to a Group
###Code
p = om.Problem()
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.setup();
print(p.get_val('comp1.a'))
print(p.get_val('comp1.b'))
from openmdao.utils.assert_utils import assert_near_equal
assert(p.get_val('comp1.a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
###Markdown
```{note}Group names must be Pythonic, so they can only contain alphanumeric characters plus the underscore. In addition, the first character in the group name must be a letter of the alphabet. Also, the system name should not duplicate any method or attribute of the `System` API.``` Promote the input and output of a ComponentBecause the promoted names of `indep.a` and `comp.a` are the same, `indep.a` is automatically connected to `comp1.a`.```{note}Inputs are always accessed using unpromoted names even when they arepromoted, because promoted input names may not be unique. The unpromoted nameis the full system path to the variable from the point of view of the callingsystem. Accessing the variables through the Problem as in this example meansthat the unpromoted name and the full or absolute pathname are the same.```
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('a', 3.0),
promotes_outputs=['a'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a'),
promotes_inputs=['a'])
p.setup()
p.run_model()
print(p.get_val('a'))
print(p.get_val('comp1.b'))
assert(p.get_val('a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
###Markdown
Add two Components to a Group nested within another Group
###Code
p = om.Problem()
p.model.add_subsystem('G1', om.Group())
p.model.G1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.model.G1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
p.setup()
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp1.b'))
print(p.get_val('G1.comp2.a'))
print(p.get_val('G1.comp2.b'))
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp1.b') == 6.0)
assert(p.get_val('G1.comp2.a') == 4.0)
assert(p.get_val('G1.comp2.b') == 12.0)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components to subgroup levelIn this example, there are two inputs promoted to the same name, sothe promoted name *G1.a* is not unique.
###Code
# promotes from bottom level up 1
p = om.Problem()
g1 = p.model.add_subsystem('G1', om.Group())
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0),
promotes_inputs=['a'], promotes_outputs=['b'])
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0),
promotes_inputs=['a'])
g1.set_input_defaults('a', val=3.5)
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('G1.b'))
# output G1.comp2.b is not promoted
print(p.get_val('G1.comp2.b'))
# use unpromoted names for the following 2 promoted inputs
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp2.a'))
assert(p.get_val('G1.b') == 6.0)
assert(p.get_val('G1.comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.5)
assert(p.get_val('G1.comp2.a') == 3.5)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components from subgroup level up to top level
###Code
# promotes up from G1 level
p = om.Problem()
g1 = om.Group()
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
# use glob pattern 'comp?.a' to promote both comp1.a and comp2.a
# use glob pattern 'comp?.b' to promote both comp1.b and comp2.b
p.model.add_subsystem('G1', g1,
promotes_inputs=['comp?.a'],
promotes_outputs=['comp?.b'])
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('comp1.b'), 6.0)
# output G1.comp2.b is promoted
print(p.get_val('comp2.b'), 12.0)
# access both promoted inputs using unpromoted names.
print(p.get_val('G1.comp1.a'), 3.0)
print(p.get_val('G1.comp2.a'), 4.0)
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp2.a') == 4.0)
###Output
_____no_output_____
###Markdown
Promote with an alias to connect an input to a source
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('aa', 3.0),
promotes=['aa'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*aa'),
promotes_inputs=['aa'])
# here we alias 'a' to 'aa' so that it will be automatically
# connected to the independent variable 'aa'.
p.model.add_subsystem('comp2', om.ExecComp('b=3.0*a'),
promotes_inputs=[('a', 'aa')])
p.setup()
p.run_model()
print(p.get_val('comp1.b'))
print(p.get_val('comp2.b'))
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 9.0)
###Output
_____no_output_____
###Markdown
(group-promotion)= Promote Inputs and Outputs After Adding SubsystemsIt is also possible to promote inputs and outputs after a subsystem has been addedto a Group using the `promotes` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.promotes :noindex:``` Usage Promote any subsystem inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp1', any=['*'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('x'))
assert(top.get_val('x') == 5)
###Output
_____no_output_____
###Markdown
Promote specific inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp2', inputs=['a'], outputs=['b'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('a'))
print(top.get_val('b'))
assert(top.get_val('a') == 1)
assert(top.get_val('b') == 1)
###Output
_____no_output_____
###Markdown
Specifying source shape and source indices for promoted inputs of a groupThe arg `src_shape` can be passed to `promotes` or `set_input_defaults` calls in order tospecify the shape of the source that the input is expecting. This allows an output havinga different shape to be connected to an input by specifying `src_indices` in the `connect`or `promotes` call, even if there are other `src_indices` specified at lower levels in thesystem tree for the same input(s). This basically allows you to specify the 'connection interface'for a given Group, making it easier to use that Group in other models without having to modifyits internal `src_indices` based on the shape of whatever sources are connected to its inputsin a given model.Note that if multiple inputs are promoted to the same name then their `src_shape` must match,but their `src_indices` may be different.Below is an example of applying multiple `src_indices` to the same promoted input at differentsystem tree levels.
###Code
import numpy as np
p = om.Problem()
G = p.model.add_subsystem('G', om.Group())
# At the top level, we assume that the source has a shape of (3,3), and after we
# slice it with [:,:-1], lower levels will see their source having a shape of (3,2)
p.model.promotes('G', inputs=['x'], src_indices=om.slicer[:,:-1], src_shape=(3,3))
# This specifies that G.x assumes a source shape of (3,2)
G.set_input_defaults('x', src_shape=(3,2))
g1 = G.add_subsystem('g1', om.Group(), promotes_inputs=['x'])
g1.add_subsystem('C1', om.ExecComp('y = 3*x', shape=3))
# C1.x has a shape of 3, so we apply a slice of [:, 1] to our source which has a shape
# of (3,2) to give us our final shape of 3.
g1.promotes('C1', inputs=['x'], src_indices=om.slicer[:, 1], src_shape=(3,2), flat_src_indices=True)
g2 = G.add_subsystem('g2', om.Group(), promotes_inputs=['x'])
g2.add_subsystem('C2', om.ExecComp('y = 2*x', shape=2))
# C2.x has a shape of 2, so we apply flat source indices of [1,5] to our source which has
# a shape of (3,2) to give us our final shape of 2.
g2.promotes('C2', inputs=['x'], src_indices=[1,5], src_shape=(3,2), flat_src_indices=True)
p.setup()
inp = np.arange(9).reshape((3,3)) + 1.
p.set_val('x', inp[:, :-1])
p.run_model()
print(p['x'])
print(p['G.g1.C1.y'])
print(p['G.g2.C2.y'])
assert_near_equal(p['x'], inp[:, :-1])
assert_near_equal(p['G.g1.C1.y'], inp[:, :-1][:, 1]*3.)
assert_near_equal(p['G.g2.C2.y'], inp[:, :-1].flatten()[[1,5]]*2.)
###Output
_____no_output_____
###Markdown
Adding Subsystems to a Group and Promoting VariablesTo add a Component or another Group to a Group, use the `add_subsystem` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.add_subsystem :noindex:``` Usage Add a Component to a Group
###Code
import openmdao.api as om
p = om.Problem()
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.setup();
print(p.get_val('comp1.a'))
print(p.get_val('comp1.b'))
from openmdao.utils.assert_utils import assert_near_equal
assert(p.get_val('comp1.a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
###Markdown
```{note}Group names must be Pythonic, so they can only contain alphanumeric characters plus the underscore. In addition, the first character in the group name must be a letter of the alphabet. Also, the system name should not duplicate any method or attribute of the `System` API.``` Promote the input and output of a ComponentBecause the promoted names of `indep.a` and `comp.a` are the same, `indep.a` is automatically connected to `comp1.a`.```{note}Inputs are always accessed using unpromoted names even when they arepromoted, because promoted input names may not be unique. The unpromoted nameis the full system path to the variable from the point of view of the callingsystem. Accessing the variables through the Problem as in this example meansthat the unpromoted name and the full or absolute pathname are the same.```
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('a', 3.0),
promotes_outputs=['a'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*a'),
promotes_inputs=['a'])
p.setup()
p.run_model()
print(p.get_val('a'))
print(p.get_val('comp1.b'))
assert(p.get_val('a') == 3.0)
assert(p.get_val('comp1.b') == 6.0)
###Output
_____no_output_____
###Markdown
Add two Components to a Group nested within another Group
###Code
p = om.Problem()
p.model.add_subsystem('G1', om.Group())
p.model.G1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
p.model.G1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
p.setup()
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp1.b'))
print(p.get_val('G1.comp2.a'))
print(p.get_val('G1.comp2.b'))
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp1.b') == 6.0)
assert(p.get_val('G1.comp2.a') == 4.0)
assert(p.get_val('G1.comp2.b') == 12.0)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components to subgroup levelIn this example, there are two inputs promoted to the same name, sothe promoted name *G1.a* is not unique.
###Code
# promotes from bottom level up 1
p = om.Problem()
g1 = p.model.add_subsystem('G1', om.Group())
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0),
promotes_inputs=['a'], promotes_outputs=['b'])
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0),
promotes_inputs=['a'])
g1.set_input_defaults('a', val=3.5)
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('G1.b'))
# output G1.comp2.b is not promoted
print(p.get_val('G1.comp2.b'))
# use unpromoted names for the following 2 promoted inputs
print(p.get_val('G1.comp1.a'))
print(p.get_val('G1.comp2.a'))
assert(p.get_val('G1.b') == 6.0)
assert(p.get_val('G1.comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.5)
assert(p.get_val('G1.comp2.a') == 3.5)
###Output
_____no_output_____
###Markdown
Promote the input and output of Components from subgroup level up to top level
###Code
# promotes up from G1 level
p = om.Problem()
g1 = om.Group()
g1.add_subsystem('comp1', om.ExecComp('b=2.0*a', a=3.0, b=6.0))
g1.add_subsystem('comp2', om.ExecComp('b=3.0*a', a=4.0, b=12.0))
# use glob pattern 'comp?.a' to promote both comp1.a and comp2.a
# use glob pattern 'comp?.b' to promote both comp1.b and comp2.b
p.model.add_subsystem('G1', g1,
promotes_inputs=['comp?.a'],
promotes_outputs=['comp?.b'])
p.setup()
# output G1.comp1.b is promoted
print(p.get_val('comp1.b'), 6.0)
# output G1.comp2.b is promoted
print(p.get_val('comp2.b'), 12.0)
# access both promoted inputs using unpromoted names.
print(p.get_val('G1.comp1.a'), 3.0)
print(p.get_val('G1.comp2.a'), 4.0)
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 12.0)
assert(p.get_val('G1.comp1.a') == 3.0)
assert(p.get_val('G1.comp2.a') == 4.0)
###Output
_____no_output_____
###Markdown
Promote with an alias to connect an input to a source
###Code
p = om.Problem()
p.model.add_subsystem('indep', om.IndepVarComp('aa', 3.0),
promotes=['aa'])
p.model.add_subsystem('comp1', om.ExecComp('b=2.0*aa'),
promotes_inputs=['aa'])
# here we alias 'a' to 'aa' so that it will be automatically
# connected to the independent variable 'aa'.
p.model.add_subsystem('comp2', om.ExecComp('b=3.0*a'),
promotes_inputs=[('a', 'aa')])
p.setup()
p.run_model()
print(p.get_val('comp1.b'))
print(p.get_val('comp2.b'))
assert(p.get_val('comp1.b') == 6.0)
assert(p.get_val('comp2.b') == 9.0)
###Output
_____no_output_____
###Markdown
(group-promotion)= Promote Inputs and Outputs After Adding SubsystemsIt is also possible to promote inputs and outputs after a subsystem has been addedto a Group using the `promotes` method.```{eval-rst} .. automethod:: openmdao.core.group.Group.promotes :noindex:``` Usage Promote any subsystem inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp1', any=['*'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('x'))
assert(top.get_val('x') == 5)
###Output
_____no_output_____
###Markdown
Promote specific inputs and outputs from the configure function
###Code
class SimpleGroup(om.Group):
def setup(self):
self.add_subsystem('comp1', om.IndepVarComp('x', 5.0))
self.add_subsystem('comp2', om.ExecComp('b=2*a'))
def configure(self):
self.promotes('comp2', inputs=['a'], outputs=['b'])
top = om.Problem(model=SimpleGroup())
top.setup()
print(top.get_val('a'))
print(top.get_val('b'))
assert(top.get_val('a') == 1)
assert(top.get_val('b') == 1)
###Output
_____no_output_____
###Markdown
Specifying source shape and source indices for promoted inputs of a groupThe arg `src_shape` can be passed to `promotes` or `set_input_defaults` calls in order tospecify the shape of the source that the input is expecting. This allows an output havinga different shape to be connected to an input by specifying `src_indices` in the `connect`or `promotes` call, even if there are other `src_indices` specified at lower levels in thesystem tree for the same input(s). This basically allows you to specify the 'connection interface'for a given Group, making it easier to use that Group in other models without having to modifyits internal `src_indices` based on the shape of whatever sources are connected to its inputsin a given model.Note that if multiple inputs are promoted to the same name then their `src_shape` must match,but their `src_indices` may be different.Below is an example of applying multiple `src_indices` to the same promoted input at differentsystem tree levels.
###Code
import numpy as np
p = om.Problem()
G = p.model.add_subsystem('G', om.Group())
# At the top level, we assume that the source has a shape of (3,3), and after we
# slice it with [:,:-1], lower levels will see their source having a shape of (3,2)
p.model.promotes('G', inputs=['x'], src_indices=om.slicer[:,:-1], src_shape=(3, 3))
# This specifies that G.x assumes a source shape of (3,2)
G.set_input_defaults('x', src_shape=(3, 2))
g1 = G.add_subsystem('g1', om.Group(), promotes_inputs=['x'])
g1.add_subsystem('C1', om.ExecComp('y = 3*x', shape=3))
# C1.x has a shape of 3, so we apply a slice of [:, 1] to our source which has a shape
# of (3,2) to give us our final shape of 3.
g1.promotes('C1', inputs=['x'], src_indices=om.slicer[:, 1], src_shape=(3, 2))
g2 = G.add_subsystem('g2', om.Group(), promotes_inputs=['x'])
g2.add_subsystem('C2', om.ExecComp('y = 2*x', shape=2))
# C2.x has a shape of 2, so we apply flat source indices of [1,5] to our source which has
# a shape of (3,2) to give us our final shape of 2.
g2.promotes('C2', inputs=['x'], src_indices=[1, 5], src_shape=(3, 2), flat_src_indices=True)
p.setup()
inp = np.arange(9).reshape((3,3)) + 1.
p.set_val('x', inp[:, :-1])
p.run_model()
print(p['x'])
print(p['G.g1.C1.y'])
print(p['G.g2.C2.y'])
assert_near_equal(p['x'], inp[:, :-1])
assert_near_equal(p['G.g1.C1.y'], inp[:, :-1][:, 1]*3.)
assert_near_equal(p['G.g2.C2.y'], inp[:, :-1].flatten()[[1,5]]*2.)
###Output
_____no_output_____
|
src/lab/scraping/scraping#1.ipynb
|
###Markdown
**Web Scraping using Beautiful Soup** https://www.datacamp.com/community/tutorials/web-scraping-using-python?utm_source=mybridge My first contact with scraping, pandas, numpy, matplotlib and seaborn. *_*
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
from urllib.request import urlopen
from bs4 import BeautifulSoup
url = "http://www.hubertiming.com/results/2017GPTR10K"
html = urlopen(url)
soup = BeautifulSoup(html, 'lxml')# Tem que instalar o lxml
type(soup)
# Get the title
title = soup.title
print(title)
text = soup.get_text()
print(text)
soup.find_all('a')
all_links = soup.find_all('a')
for link in all_links:
print(link.get("href"))
rows = soup.find_all('tr')
print(rows[:10])
list_rows = []
for row in rows:
row_td = row.find_all('td') # Pegando a ultima linha
list_rows.append(row.find_all('td'))
print(row_td)
type(row_td)
print(list_rows)
# Limpando as tags html
str_cells = str(list_rows)
cleantext = BeautifulSoup(str_cells, "lxml").get_text()
print(cleantext)
import re
list_rows = []
for row in rows:
cells = row.find_all('td')
str_cells = str(cells)
clean = re.compile('<.*?>')
clean2 = (re.sub(clean, '', str_cells))
list_rows.append(clean2)
print(clean2)
type(clean2)
df = pd.DataFrame(list_rows)
df.head(10)
###Output
_____no_output_____
###Markdown
Data Manipulation and Cleaning
###Code
df1 = df[0].str.split(',', expand=True)
df1.head(10)
df1[0] = df1[0].str.strip('[')
# df1[0] = df1[1].str.strip(']')
df1.head(10)
col_labels = soup.find_all('th')
col_labels
all_header = []
col_str = str(col_labels)
cleantext2 = BeautifulSoup(col_str, "lxml").get_text()
cleantext2
print(type(cleantext2))
all_header.append(cleantext2)
print(all_header)
df2 = pd.DataFrame(all_header)
df2.head()
df3 = df2[0].str.split(',', expand=True)
df3.head()
frames = [df3, df1]
frames
df4 = pd.concat(frames)
df4
df4.head(10)
df5 = df4.rename(columns=df4.iloc[0])
df5.head(10)
df5.info()
print()
print(df5.shape)
print()
print('the table has 597 rows and 14 columns')
print('dropping all rows that contain missing values')
df6 = df5.dropna(axis=0, how='any')
df6.head()
df7 = df6.drop(df6.index[0])
df7.head()
df7.rename(columns={'[Place': 'Place'}, inplace=True)
df7.rename(columns={' Team]': 'Team'}, inplace=True)
df7.head()
df7['Team'] = df7['Team'].str.strip(']')
df7.head()
###Output
_____no_output_____
###Markdown
Data Analysis and Visualization
###Code
time_list = df7[' Chip Time'].tolist()
time_list[:10]
time_mins = []
for i in time_list:
h, m, s = i.split(':')
math = (int(h) * 3600 + int(m) * 60 + int(s))/60
time_mins.append(math)
df7['Runner_mins'] = time_mins # Adiciona uma nova coluna com valores em minutos.
df7.head()
df7.describe(include=[np.number]) # C A R A M B A . . . > .O_O.
from pylab import rcParams
rcParams['figure.figsize'] = 15, 5
df7.boxplot(column='Runner_mins')
plt.grid(True, axis='y')
plt.ylabel('Chip Time')
plt.xticks([1], ['Runners'])
x = df7['Runner_mins']
ax = sns.distplot(x, hist=True, kde=True, rug=False, color='m', bins=25, hist_kws={'edgecolor':'black'})
plt.show
x = df7['Runner_mins']
ax = sns.distplot(x, hist=True, kde=True, rug=False, color='m', bins=84, hist_kws={'edgecolor':'black'})
plt.show
f_fuko = df7.loc[df7[' Gender'] == ' F']['Runner_mins']
m_fuko = df7.loc[df7[' Gender'] == ' M']['Runner_mins']
sns.distplot(f_fuko, hist=True, kde=True, rug=False, hist_kws={'edgecolor':'black'}, label='Female')
sns.distplot(m_fuko, hist=False, kde=True, rug=False, hist_kws={'edgecolor':'black'}, label='Male')
plt.legend()
f_fuko = df7.loc[df7[' Gender'] == ' F']['Runner_mins']
m_fuko = df7.loc[df7[' Gender'] == ' M']['Runner_mins']
sns.distplot(f_fuko, hist=True, kde=True, rug=False, hist_kws={'edgecolor':'black'}, label='Female')
sns.distplot(m_fuko, hist=True, kde=True, rug=False, hist_kws={'edgecolor':'black'}, label='Male')
plt.legend()
g_stats = df7.groupby(" Gender", as_index=True).describe()
print(g_stats)
df7.boxplot(column='Runner_mins', by=' Gender')
plt.ylabel('Chip Time')
plt.suptitle("")
###Output
_____no_output_____
|
nbs/bert_visualize.ipynb
|
###Markdown
**Bert Visualize** > Visualize masked language modeling transformer model
###Code
# default_exp bert_visualize
# !pip install transformers
from transformers import AutoModelForMaskedLM,AutoTokenizer
# export
from forgebox.imports import *
from forgebox.config import Config
from forgebox.static_file import open_static
from jinja2 import Template
from forgebox.html import DOM
from uuid import uuid4
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased",use_fast=True)
###Output
_____no_output_____
###Markdown
A piece of sample text
###Code
text = """I must not [MASK].
Fear is the mind-killer.
Fear is the little [MASK] that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner [MASK] to see its path.
Where the fear has gone there will be nothing.
Only I will remain."""
# export
class MLMVisualizer:
def __init__(self,model,tokenizer):
super().__init__()
self.model = model
self.tokenizer = tokenizer
@classmethod
def from_pretrained(cls,
tag:"str, like how you use from_pretrained from transformers"
):
obj = cls(
model = AutoModelForMaskedLM.from_pretrained(tag),
tokenizer = AutoTokenizer.from_pretrained(tag,use_fast=True),
)
return obj
def tok(self,text:str,)->[
torch.FloatTensor,
torch.BoolTensor,
list,
]:
"""
A specific way of tokenizing.
with pytorch tensor as input
with mask tensor specifying where's the [MASK] token
with offset mapping marking the positions
in format of list in list
"""
tokenized = self.tokenizer(
text,
return_tensors = "pt",
return_offsets_mapping=True
)
x = tokenized['input_ids']
offset_mapping = tokenized['offset_mapping']
mask = x==self.tokenizer.mask_token_id
if len(offset_mapping.shape)==3:
offset_mapping=offset_mapping[0]
return x,mask,offset_mapping
vis = MLMVisualizer.from_pretrained("bert-base-uncased")
# export
softmax = nn.Softmax(dim=-1)
def li(x,)->np.array:
if torch.is_tensor(x):
x=x.cpu().numpy()
return x.tolist()
def infer_logits(
vis,
y_pred,
mask) -> Config:
logits = softmax(y_pred[mask])
pred_idx = logits.argmax(-1)
return Config(
logits=logits,
pred_idx=pred_idx,
pred_tokens = vis.tokenizer.convert_ids_to_tokens(pred_idx)
)
MLMVisualizer.infer_logits = infer_logits
def predict_text(
vis,
text,
)->Config:
with torch.no_grad():
x,mask,mapper=vis.tok(text)
y_pred,attention = vis.model(x,output_attentions=True)
infered = vis.infer_logits(y_pred,mask)
return Config(
text = text,
x = li(x),
mask = li(mask),
mapper = li(mapper),
# y_pred = li(y_pred),
# logits = li(infered.logits),
pred_idx=li(infered.pred_idx),
pred_tokens =infered.pred_tokens,
attention = list(map(li,attention)),
)
MLMVisualizer.predict_text = predict_text
def visualize(vis,
text):
result = vis.predict_text(text)
vis.visualize_result(result)
def visualize_result(vis, result: Config):
template = Template(open_static('mlm/visual.html'))
js = open_static('mlm/visual.js')
text = result.text
delattr(result, 'text')
output_id = str(uuid4())
page = template.render(data=json.dumps(result),
text=text,
output_id=output_id,
mlm_visual_js=js)
DOM(page, "div",)()
MLMVisualizer.visualize = visualize
MLMVisualizer.visualize_result = visualize_result
%%time
result = predict_text(vis,text)
%%time
vis.visualize(text)
###Output
_____no_output_____
###Markdown
Different size of model
###Code
model = AutoModelForMaskedLM.from_pretrained("google/electra-small-generator")
tokenizer = AutoTokenizer.from_pretrained("google/electra-small-generator",use_fast=True)
vis = MLMVisualizer(model,tokenizer)
vis.visualize(text)
###Output
_____no_output_____
|
chap6/chapter_6_exercises.ipynb
|
###Markdown
**Exercise 1** Write a program to calculate the factorial of a positive integer input by the user. Recall that the factorial function is given by x! = x(x − 1)(x − 2)...(2)(1) so that 1! = 1, 2! = 2, 3! = 6, 4! = 24, ... (a) Write the factorial function using a Python while loop. (b) Write the factorial function using a Python for loop. Check your programs to make sure they work for 1, 2, 3, 5, and beyond, but especially for the first 5 integers.
###Code
#using while
x = 5
fac = x
if x < 0:
print('Negativo!')
elif x < 2:
print('Fatorial = ', 1)
else:
y = x - 1
counter = 1
while counter < (fac):
x = x * (y)
y = y - 1
counter = counter + 1
#print('Contador = ', counter)
print('Fatorial = ', x)
# using for
x = 5
if x < 0:
print('Negativo!')
elif x < 2:
print('Fatorial = ', 1)
else:
for i in range(1, x, 1):
x = x * i
print('Fatorial = ', x)
# using while (peguei na web)
x = 5
if x < 0:
print('Negativo!')
else:
factorial = 1
while x > 1:
factorial = factorial * x
x = x - 1
print(factorial)
#using math.factorial(x)
import math
x = 5
print(math.factorial(x))
###Output
120
###Markdown
**Exercise 2** The following Python program finds the smallest non-trivial (not 1) prime factor of a positive integer: `n = int(raw_input("Input an integer > 1: "))`, `i = 2`, `while (n % i) != 0: i += 1`, `print("The smallest factor of n is:", i)`. (a) Type this program into your computer and verify that it works as advertised. Then briefly explain how it works and why the while loop always terminates. (b) Modify the program so that it tells you if the integer input is a prime number or not. If it is not a prime number, write your program so that it prints out the smallest prime factor. Using your program verify that the following integers are prime numbers: 101, 8191, 947431.
###Code
n = int(input("Input an integer > 1: "))
i = 2
while (n % i) != 0:
i += 1
if i == n:
print(n, 'is a prime number')
else:
print('The smallest factor of', n, 'is:', i)
###Output
Input an integer > 1: 5464879483137
The smallest factor of 5464879483137 is: 3
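Not required by the exercise, but useful when explaining why the loop terminates: trial division only ever needs to test divisors up to the square root of n, because a composite number always has a factor no larger than its square root. A sketch of that speed-up (the function name is just illustrative):

```python
def smallest_factor(n):
    """Smallest non-trivial factor of n, or n itself when n is prime."""
    if n % 2 == 0:
        return 2
    i = 3
    while i * i <= n:      # no need to test past sqrt(n)
        if n % i == 0:
            return i
        i += 2             # even divisors were already ruled out above
    return n

for n in (101, 8191, 947431, 5464879483137):
    f = smallest_factor(n)
    print(n, 'is prime' if f == n else 'has smallest factor %d' % f)
```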
###Markdown
**Exercise 3** Consider the matrix list `x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]`. Write a list comprehension to extract the last column of the matrix `[3, 6, 9]`. Write another list comprehension to create a vector of twice the square of the middle column `[8, 50, 128]`.
###Code
import numpy as np
x = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
last = [x[i][2] for i in range(3)]
print(last)
middle = [x[i][1] for i in range(3)]
print(middle)
twice_square = list(
    2 * (np.array(middle) * np.array(middle)))  # quick-and-dirty workaround via numpy
print(twice_square)
# outra forma
twice_square = [2 * y**2 for y in middle]  # cleaner: a plain list comprehension
print(twice_square)
###Output
[3, 6, 9]
[2, 5, 8]
[8, 50, 128]
[8, 50, 128]
###Markdown
**Exercise 4** Write a program that calculates the value of an investment after some number of years specified by the user if (a) the principal is compounded annually, (b) the principal is compounded monthly, (c) the principal is compounded daily. Your program should ask the user for the initial investment (principal), the interest rate in percent, and the number of years the money will be invested (allow for fractional years). For an initial investment of \\$ 1000 at an interest rate of 6 \%, after 10 years I get \\$ 1790.85 when compounded annually, \\$ 1819.40 when compounded monthly, and \\$ 1822.03 when compounded daily, assuming 12 months in a year and 365.24 days in a year, where the monthly interest rate is the annual rate divided by 12 and the daily rate is the annual rate divided by 365 (don’t worry about leap years).
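Before the interactive program below, a quick closed-form check of the quoted figures (not part of the solution itself) using the compound-interest formula FV = P(1 + r/m)^(mt):

```python
P, r, t = 1000.0, 0.06, 10.0   # principal, annual rate, years
for label, m in [('annually', 1), ('monthly', 12), ('daily', 365)]:
    fv = P * (1 + r / m) ** (m * t)
    print('compounded {}: $ {:.2f}'.format(label, fv))
```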
###Code
# user input
print('Programa Capitalismo Selvagem \n')
principal = float(input('Forneça o principal do investimento: '))
int_rate_years = float(
input('Forneça a taxa de juros anualizada (em %) do investimento: '))
time_years = float(input('Forneça o tempo (em anos) do investimento: '))
# convertendo o valor da porcentagem para usar nas contas
int_rate_years = int_rate_years / 100
comp_type = 'amd'
while (comp_type not in ('a', 'm', 'd')):
comp_type = input(
'Informe a forma de capitalização dos juros: anual(a), mensal(m) ou diária(d): '
)
if comp_type == 'a':
time = time_years
int_rate = int_rate_years
elif comp_type == 'm':
time = time_years * 12
int_rate = int_rate_years / 12
elif comp_type == 'd':
time = time_years * 365
int_rate = int_rate_years / 365
# fórmula de juros compostos
future_value = principal * (1 + int_rate)**(time)
# apresentando a resposta
print('Montante final: $ {0:0.2f}'.format(future_value))
###Output
Programa Capitalismo Selvagem
Forneça o principal do investimento: 1000
Forneça a taxa de juros anualizada (em %) do investimento: 6
Forneça o tempo (em anos) do investimento: 10
Informe a forma de capitalização dos juros: anual(a), mensal(m) ou diária(d): d
Montante final: $ 1822.03
###Markdown
**Exercise 5** Write a program that determines the day of the week for any given calendar date after January 1, 1900, which was a Monday. Your program will need to take into account leap years, which occur in every year that is divisible by 4, except for years that are divisible by 100 but are not divisible by 400. For example, 1900 was not a leap year, but 2000 was a leap year. Test that your program gives the following answers: Monday 1900 January 1, Tuesday 1933 December 5, Wednesday 1993 June 23, Thursday 1953 January 15, Friday 1963 November 22, Saturday 1919 June 28, Sunday 2005 August 28. See: http://babel.pocoo.org/en/latest/dates.html, https://docs.python.org/2/library/datetime.html, http://strftime.org/
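The cells below lean on `datetime` and `babel`; as a complement, here is a sketch that does the bookkeeping by hand, as the exercise intends, by counting the days elapsed since Monday, January 1, 1900 with the stated leap-year rule (only valid for dates on or after that day):

```python
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def day_of_week(year, month, day):
    """Day of the week for any date on or after 1900-01-01 (a Monday)."""
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    total = 0
    for y in range(1900, year):
        total += 366 if is_leap(y) else 365
    for m in range(month - 1):
        total += days_in_month[m]
    if month > 2 and is_leap(year):
        total += 1                      # leap-day already passed this year
    total += day - 1                    # days elapsed since 1900-01-01
    names = ['Monday', 'Tuesday', 'Wednesday', 'Thursday',
             'Friday', 'Saturday', 'Sunday']
    return names[total % 7]

for date in [(1900, 1, 1), (1933, 12, 5), (1993, 6, 23), (1953, 1, 15),
             (1963, 11, 22), (1919, 6, 28), (2005, 8, 28)]:
    print(date, day_of_week(*date))
```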
###Code
import datetime
from babel.dates import format_date, format_datetime, format_time
date_entry = input('Enter a date in DD-MM-YYYY format: ')
day, month, year = map(int, date_entry.split('-'))
date1 = datetime.date(year, month, day)
print(date1.strftime('%A, %Y %B %d'))
print(format_date(date1, format='full', locale='pt_BR'))
###Output
Enter a date in DD-MM-YYYY format: 05-12-1933
Tuesday, 1933 December 05
terça-feira, 5 de dezembro de 1933
###Markdown
Example of using `datetime`
###Code
import time
import datetime
print("Time in seconds since the epoch: %s" % time.time())
print("Current date and time: ", datetime.datetime.now())
print("Or like this: ", datetime.datetime.now().strftime("%y-%m-%d-%H-%M"))
print("Current year: ", datetime.date.today().strftime("%Y"))
print("Month of year: ", datetime.date.today().strftime("%B"))
print("Week number of the year: ", datetime.date.today().strftime("%W"))
print("Weekday of the week: ", datetime.date.today().strftime("%w"))
print("Day of year: ", datetime.date.today().strftime("%j"))
print("Day of the month : ", datetime.date.today().strftime("%d"))
print("Day of week: ", datetime.date.today().strftime("%A"))
###Output
Time in seconds since the epoch: 1517939124.8658943
Current date and time: 2018-02-06 15:45:24.866123
Or like this: 18-02-06-15-45
Current year: 2018
Month of year: February
Week number of the year: 06
Weekday of the week: 2
Day of year: 037
Day of the month : 06
Day of week: Tuesday
|
ContextManager.ipynb
|
###Markdown
[Back to PyCampNextLevel Outline](PyCampNextLevel.ipynb) Context Managers and SQLContext manager types are defined by their two characteristic methods, ```__enter__``` and ```__exit__```. As a Python programmer, you're free to make up applications for this grammar. Its purpose is to provide a "scope specific" object you will typically want to open and close at the start and end of the scope; however, this is not the only pattern one might use. Allowing the "scope object" to continue beyond the scope is certainly an option.Let's check out the pattern, which is based on a class.
###Code
class CM:
def __enter__(self):
print("Entering...")
self.a = [1,2,3]
return self # <--- as self
def __exit__(self, *oops):
"""
If an exception occurs in the scope (indented block)
then instead of None, None, None coming into __exit__,
        the arguments will carry the details of the exception. *oops scoops
the three arguments into a single tuple, however this is
not the required parameter pattern. Just deal with three
arguments.
"""
if oops[0]:
print("Exception in play...")
print("Handling it...")
return True
print("Exiting")
with CM() as obj:
print("Within the scope {}".format(obj.a))
print("obj is still alive: {}".format(obj.a))
with CM() as obj:
print("Within the scope {}".format(obj.a))
raise Exception
print("obj is still alive: {}".format(obj.a))
###Output
Entering...
Within the scope [1, 2, 3]
Exception in play...
Handling it...
obj is still alive: [1, 2, 3]
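For comparison (not in the original notebook), the same enter/handle/exit behaviour can be written with the standard-library `contextlib.contextmanager` decorator, where the code before `yield` plays the role of `__enter__` and the code around/after it plays the role of `__exit__`:

```python
from contextlib import contextmanager

@contextmanager
def cm():
    print("Entering...")
    a = [1, 2, 3]
    try:
        yield a                        # bound by "with cm() as a:"
    except Exception:
        print("Exception in play...")  # swallowing here is like returning True from __exit__
        print("Handling it...")
    else:
        print("Exiting")

with cm() as a:
    print("Within the scope {}".format(a))
```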
###Markdown
Hexworld Game```hexworld.py``` uses a lot of Python keywords and constructs, including the context manager feature. The Game class has ```__enter__``` and ```__exit__``` methods to help structure the flow.
###Code
import hexworld
help(hexworld.Game)
###Output
Help on class Game in module hexworld:
class Game(builtins.object)
| Game(player)
|
| Will the player score more than 100 points before the
| allowed number of turns, max_turns, runs out?
|
| Designed for use in a try block with a while True loop.
| The only way to escape the loop is by means of an
| exception. However Quitter is handled by __exit__
| whereas Winner and Loser propagate outside the context.
|
| Methods defined here:
|
| __enter__(self)
| As you enter a context, you must go through here
|
| __exit__(self, *oops)
| As you leave a context, you must go through here
|
| __init__(self, player)
| Initialize self. See help(type(self)) for accurate signature.
|
| turn_to_play(self)
|
| ----------------------------------------------------------------------
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
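Based only on the docstring above, a hypothetical usage sketch might look like the following. It assumes `hexworld` also exposes `Winner`, `Loser` and `Quitter` exception classes and that `Game` accepts whatever player object `turn_to_play` expects; none of this is shown in the original notebook:

```python
import hexworld

def play(player):
    try:
        # Quitter raised inside the block is absorbed by Game.__exit__
        with hexworld.Game(player) as game:
            while True:
                game.turn_to_play()
    except hexworld.Winner:
        print("player scored more than 100 points")
    except hexworld.Loser:
        print("player ran out of turns")
```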
###Markdown
Airports With SQL ```airports.db``` is a SQLite database, which is basically a single self-contained file designed to work with the standard SQL database API, called the DBI.Let's call out to the operating system just to get some stats on the file.
###Code
! ls -g ./data/airports.*
###Output
-rwxr-xr-x@ 1 staff 475136 Apr 2 22:15 [31m./data/airports.db[m[m
###Markdown
We have another way of looking into a file's details, through the ```os``` module.
###Code
import os
r = os.stat("./data/airports.db")
r.st_size
###Output
_____no_output_____
###Markdown
OK, let's turn to using the ```sqlite3``` module in the Standard Library.
###Code
import sqlite3 as sql
type(sql)
con = sql.connect("./data/airports.db")
cursor = con.cursor()
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
print(cursor.fetchall())
result = cursor.execute("PRAGMA table_info('Airports')").fetchall()
result
result = cursor.execute("SELECT * FROM Airports WHERE iata='SFO'").fetchall()
result
result = cursor.execute("SELECT * FROM Airports WHERE iso='US'")
us_airports = result.fetchall()
# print(us_airports)
us_airports[10]
###Output
_____no_output_____
###Markdown
###Code
import sqlite3 as sql
class Airport:
"""
    Context manager designed to retrieve data from airports.db
as a tuple, for use in scope. The database remains open
until the scope is exited.
"""
def __init__(self, code):
self.code = code # e.g. SFO, PDX...
def __enter__(self):
self.connect = sql.connect("./data/airports.db")
self.cursor = self.connect.cursor()
# use a tuple to substitute into ? placeholders
results = self.cursor.execute(
"SELECT * FROM Airports WHERE iata= ?", (self.code,))
self.data = results.fetchall()
return self
def __exit__(self, *oops):
# no error handling yet
self.connect.close()
with Airport("HSK") as airport:
print(airport.data)
print("indented part")
print("the context")
print("context")
with Airport("PDX") as airport:
print(airport.data)
###Output
[('PDX', 'US', 'Portland International Airport', 'NA', 'airport', 45.588997, -122.5929, 'large', 1)]
###Markdown
Note that the ```airport``` object keeps a live connection and cursor throughout the scope of the context.
###Code
with Airport("LAX") as airport:
airport.cursor.execute(
"SELECT name FROM Airports WHERE iata = ?",
("PDX",)) # or any arbitrary airport, just to show this degree of freedom
print(airport.cursor.fetchall())
###Output
[('Portland International Airport',)]
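A side note not taken from the original notebook: `sqlite3` connection objects are themselves context managers, but in a different sense — the `with` block wraps a transaction (commit on success, rollback on error) and does not close the connection, so `close()` is still your job:

```python
import sqlite3 as sql

con = sql.connect("./data/airports.db")
try:
    with con:  # transaction scope: commits on success, rolls back on exception
        con.execute("SELECT count(*) FROM Airports")
finally:
    con.close()  # closing remains explicit
```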
community_tutorials_and_guides/rf_demo.ipynb
|
###Markdown
Random Forest Classification**Authorship**Original Author: Saloni JainLast Edit: Taurean Dyer, 9/25/2019**Test System Specs**Test System Hardware: GV100Test System Software: Ubuntu 18.04RAPIDS Version: 0.10.0a - Docker InstallDriver: 410.79CUDA: 10.0**Known Working Systems**RAPIDS Versions: 0.4, 0.5, 0.5.1, 0.6, 0.6.1, 0.7, 0.8, 0.9, 0.10 IntroThe Random Forest algorithm is a classification algorithm which builds several decision trees, and aggregates each of their outputs to make a prediction. This makes it more robust to overfitting.In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the RandomForest model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.htmlThis notebook demonstrates fitting a RandomForestClassifier on the Higgs dataset. It is a binary classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not. The notebook also compares the performance (accuracy and speed) with sklearn's parallel RandomForestClassifier implementation.
###Code
from cuml import RandomForestClassifier as cuRF
from sklearn.ensemble import RandomForestClassifier as sklRF
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import cudf
import numpy as np
import pandas as pd
import os
from urllib.request import urlretrieve
import gzip
###Output
_____no_output_____
###Markdown
Helper function to download and extract the Higgs dataset
###Code
def download_higgs(compressed_filepath, decompressed_filepath):
higgs_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz'
if not os.path.isfile(compressed_filepath):
urlretrieve(higgs_url, compressed_filepath)
if not os.path.isfile(decompressed_filepath):
cf = gzip.GzipFile(compressed_filepath)
with open(decompressed_filepath, 'wb') as df:
df.write(cf.read())
###Output
_____no_output_____
###Markdown
Download Higgs data and read using cudf
###Code
data_dir = '../data/rf/'
if not os.path.exists(data_dir):
print('creating rf data directory')
os.system('mkdir ../data/rf')
!ls ../data/rf
compressed_filepath = data_dir+'HIGGS.csv.gz' # Set this as path for gzipped Higgs data file, if you already have
decompressed_filepath = data_dir+'HIGGS.csv' # Set this as path for decompressed Higgs data file, if you already have
download_higgs(compressed_filepath, decompressed_filepath)
col_names = ['label'] + ["col-{}".format(i) for i in range(2, 30)] # Assign column names
dtypes_ls = ['int32'] + ['float32' for _ in range(2, 30)] # Assign dtypes to each column
data = cudf.read_csv(decompressed_filepath, names=col_names, dtype=dtypes_ls)
data.head()
###Output
_____no_output_____
###Markdown
Make train test splits
###Code
X, y = data[data.columns.difference(['label'])].as_matrix(), data['label'].to_array() # Separate data into X and y
del data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500_000)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(10500000, 28) (10500000,) (500000, 28) (500000,)
###Markdown
You can consult the RandomForestClassifier docstring to check all the parameters, but here are some of the more important ones: 1. n_estimators: (default = 10) number of trees in the forest.2. max_depth: (default = -1) Maximum tree depth. Unlimited (i.e., until leaves are pure), if -1.3. n_bins: (default = 8) Number of bins used by the split algorithm.Note on `n_bins`: Reducing `n_bins` shrinks the histograms used to compute which tree nodes to split. This reduction improves training time, but if you reduce it too low, you may harm model accuracy.
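As a rough illustration of that trade-off (this sketch is not part of the original benchmark; it assumes `cu_rf_params`, `X_train`, `y_train`, `X_test`, `y_test` and `accuracy_score` as defined in the surrounding cells), one could time a few `n_bins` settings on a subsample:

```python
import time

for n_bins in (4, 8, 16):
    params = dict(cu_rf_params, n_bins=n_bins)   # cu_rf_params is defined in the next cell
    model = cuRF(**params)
    start = time.time()
    model.fit(X_train[:100_000], y_train[:100_000])
    elapsed = time.time() - start
    acc = accuracy_score(y_test, model.predict(X_test))
    print("n_bins={:2d}  train {:5.1f}s  accuracy {:.4f}".format(n_bins, elapsed, acc))
```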
###Code
# cuml Random Forest params
cu_rf_params = {
'n_estimators': 25,
'max_depth': 13,
'n_bins': 15,
}
###Output
_____no_output_____
###Markdown
The methods that can be used with the RandomForestClassifier are:1. fit: Fit the model with X and y.2. get_params: Sklearn style return parameter state3. predict: Predicts the y for X.4. set_params: Sklearn style set parameter state to dictionary of params.5. cross_validate: Predicts the accuracy of the model for X. Note on input to `fit` method: Since `fit` is processed on the GPU, it can accept `cudf` dataframes or `numpy` arrays
###Code
%%time
# Train cuml RF
cu_rf = cuRF(**cu_rf_params)
cu_rf.fit(X_train, y_train)
###Output
[W] [11:40:10.733225] Using experimental backend for growing trees
CPU times: user 1min 8s, sys: 35.9 s, total: 1min 44s
Wall time: 51.9 s
###Markdown
Set Sklearn params and fit RandomForestClassifier
###Code
# sklearn Random Forest params
skl_rf_params = {
'n_estimators': 25,
'max_depth': 13,
}
%%time
# Train sklearn RF in parallel
skl_rf = sklRF(**skl_rf_params, n_jobs=20)
skl_rf.fit(X_train, y_train)
###Output
CPU times: user 48min 10s, sys: 2h 8min 9s, total: 2h 56min 20s
Wall time: 41min 8s
###Markdown
Predict and compare cuml and sklearn RandomForestClassifier Note on input to cuml `predict` method: Since `predict` is processed on the CPU, it can only accept `numpy` arrays
###Code
# Predict
print("cuml RF Accuracy Score: ", accuracy_score(cu_rf.predict(X_test), y_test))
print("sklearn RF Accuracy Score: ", accuracy_score(skl_rf.predict(X_test), y_test))
###Output
cuml RF Accuracy Score: 0.716686
sklearn RF Accuracy Score: 0.722672
###Markdown
Random Forest Classification**Authorship**Original Author: Saloni JainLast Edit: Taurean Dyer, 9/25/2019**Test System Specs**Test System Hardware: GV100Test System Software: Ubuntu 18.04RAPIDS Version: 0.10.0a - Docker InstallDriver: 410.79CUDA: 10.0**Known Working Systems**RAPIDS Versions: 0.4, 0.5, 0.5.1, 0.6, 0.6.1, 0.7, 0.8, 0.9, 0.10 IntroThe Random Forest algorithm is a classification algorithm which builds several decision trees, and aggregates each of their outputs to make a prediction. This makes it more robust to overfitting.In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the RandomForest model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.htmlThis notebook demonstrates fitting a RandomForestClassifier on the Higgs dataset. It is a binary classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not. The notebook also compares the performance (accuracy and speed) with sklearn's parallel RandomForestClassifier implementation.
###Code
from cuml import RandomForestClassifier as cuRF
from sklearn.ensemble import RandomForestClassifier as sklRF
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import cudf
import numpy as np
import pandas as pd
import os
from urllib.request import urlretrieve
import gzip
###Output
_____no_output_____
###Markdown
Helper function to download and extract the Higgs dataset
###Code
def download_higgs(compressed_filepath, decompressed_filepath):
higgs_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz'
if not os.path.isfile(compressed_filepath):
urlretrieve(higgs_url, compressed_filepath)
if not os.path.isfile(decompressed_filepath):
cf = gzip.GzipFile(compressed_filepath)
with open(decompressed_filepath, 'wb') as df:
df.write(cf.read())
###Output
_____no_output_____
###Markdown
Download Higgs data and read using cudf
###Code
data_dir = '../data/rf/'
if not os.path.exists(data_dir):
print('creating rf data directory')
os.system('mkdir ../data/rf')
!ls ../data/rf
compressed_filepath = data_dir+'HIGGS.csv.gz' # Set this as path for gzipped Higgs data file, if you already have
decompressed_filepath = data_dir+'HIGGS.csv' # Set this as path for decompressed Higgs data file, if you already have
download_higgs(compressed_filepath, decompressed_filepath)
col_names = ['label'] + ["col-{}".format(i) for i in range(2, 30)] # Assign column names
dtypes_ls = ['int32'] + ['float32' for _ in range(2, 30)] # Assign dtypes to each column
data = cudf.read_csv(decompressed_filepath, names=col_names, dtype=dtypes_ls)
data.head()
###Output
_____no_output_____
###Markdown
Make train test splits
###Code
X, y = data[data.columns.difference(['label'])].as_matrix(), data['label'].to_array() # Separate data into X and y
del data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500_000)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
_____no_output_____
###Markdown
You can consult the RandomForestClassifier docstring to check all the parameters, but here are some of the more important ones: 1. n_estimators: (default = 10) number of trees in the forest.2. max_depth: (default = -1) Maximum tree depth. Unlimited (i.e., until leaves are pure), if -1.3. n_bins: (default = 8) Number of bins used by the split algorithm.Note on `n_bins`: Reducing `n_bins` shrinks the histograms used to compute which tree nodes to split. This reduction improves training time, but if you reduce it too low, you may harm model accuracy.
###Code
# cuml Random Forest params
cu_rf_params = {
'n_estimators': 25,
'max_depth': 13,
'n_bins': 15,
}
###Output
_____no_output_____
###Markdown
The methods that can be used with the RandomForestClassifier are:1. fit: Fit the model with X and y.2. get_params: Sklearn style return parameter state3. predict: Predicts the y for X.4. set_params: Sklearn style set parameter state to dictionary of params.5. cross_validate: Predicts the accuracy of the model for X. Note on input to `fit` method: Since `fit` is processed on the GPU, it can accept `cudf` dataframes or `numpy` arrays
###Code
%%time
# Train cuml RF
cu_rf = cuRF(**cu_rf_params)
cu_rf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Set Sklearn params and fit RandomForestClassifier
###Code
# sklearn Random Forest params
skl_rf_params = {
'n_estimators': 25,
'max_depth': 13,
}
%%time
# Train sklearn RF in parallel
skl_rf = sklRF(**skl_rf_params, n_jobs=20)
skl_rf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict and compare cuml and sklearn RandomForestClassifier Note on input to cuml `predict` method: Since `predict` is processed on the CPU, it can only accept `numpy` arrays
###Code
# Predict
print("cuml RF Accuracy Score: ", accuracy_score(cu_rf.predict(X_test), y_test))
# print("sklearn RF Accuracy Score: ", accuracy_score(skl_rf.predict(X_test), y_test))
###Output
_____no_output_____
###Markdown
Random Forest Classification**Authorship**Original Author: Saloni JainLast Edit: Charles Blackmon-Luca, 4/5/2022**Test System Specs**Test System Hardware: GV100Test System Software: Ubuntu 20.04RAPIDS Version: 22.04a - Docker InstallDriver: 495.44CUDA: 11.5 IntroThe Random Forest algorithm is a classification algorithm which builds several decision trees, and aggregates each of their outputs to make a prediction. This makes it more robust to overfitting.In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the RandomForest model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.htmlThis notebook demonstrates fitting a RandomForestClassifier on the Higgs dataset. It is a binary classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not. The notebook also compares the performance (accuracy and speed) with sklearn's parallel RandomForestClassifier implementation.
###Code
from cuml import RandomForestClassifier as cuRF
from sklearn.ensemble import RandomForestClassifier as sklRF
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import cudf
import numpy as np
import pandas as pd
import os
from urllib.request import urlretrieve
import gzip
###Output
_____no_output_____
###Markdown
Helper function to download and extract the Higgs dataset
###Code
def download_higgs(compressed_filepath, decompressed_filepath):
higgs_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz'
if not os.path.isfile(compressed_filepath):
urlretrieve(higgs_url, compressed_filepath)
if not os.path.isfile(decompressed_filepath):
cf = gzip.GzipFile(compressed_filepath)
with open(decompressed_filepath, 'wb') as df:
df.write(cf.read())
###Output
_____no_output_____
###Markdown
Download Higgs data and read using cudf
###Code
data_dir = '../data/rf/'
if not os.path.exists(data_dir):
print('creating rf data directory')
os.system('mkdir ../data/rf')
!ls ../data/rf
compressed_filepath = data_dir+'HIGGS.csv.gz' # Set this as path for gzipped Higgs data file, if you already have
decompressed_filepath = data_dir+'HIGGS.csv' # Set this as path for decompressed Higgs data file, if you already have
download_higgs(compressed_filepath, decompressed_filepath)
col_names = ['label'] + ["col-{}".format(i) for i in range(2, 30)] # Assign column names
dtypes_ls = ['int32'] + ['float32' for _ in range(2, 30)] # Assign dtypes to each column
data = cudf.read_csv(decompressed_filepath, names=col_names, dtype=dtypes_ls)
data.head()
###Output
_____no_output_____
###Markdown
Make train test splits
###Code
X, y = data[data.columns.difference(['label'])].to_numpy(), data['label'].to_numpy() # Separate data into X and y
del data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500_000)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(10500000, 28) (10500000,) (500000, 28) (500000,)
###Markdown
You can consult the RandomForestClassifier docstring to check all the parameters, but here are some of the more important ones: 1. n_estimators: (default = 10) number of trees in the forest.2. max_depth: (default = -1) Maximum tree depth. Unlimited (i.e., until leaves are pure), if -1.3. n_bins: (default = 8) Number of bins used by the split algorithm.Note on `n_bins`: Reducing `n_bins` shrinks the histograms used to compute which tree nodes to split. This reduction improves training time, but if you reduce it too low, you may harm model accuracy.
###Code
# cuml Random Forest params
cu_rf_params = {
'n_estimators': 25,
'max_depth': 13,
'n_bins': 15,
}
###Output
_____no_output_____
###Markdown
The methods that can be used with the RandomForestClassifier are:1. fit: Fit the model with X and y.2. get_params: Sklearn style return parameter state3. predict: Predicts the y for X.4. set_params: Sklearn style set parameter state to dictionary of params.5. cross_validate: Predicts the accuracy of the model for X. Note on input to `fit` method: Since `fit` is processed on the GPU, it can accept `cudf` dataframes or `numpy` arrays
###Code
%%time
# Train cuml RF
cu_rf = cuRF(**cu_rf_params)
cu_rf.fit(X_train, y_train)
###Output
CPU times: user 18.2 s, sys: 12.6 s, total: 30.7 s
Wall time: 11.3 s
###Markdown
Set Sklearn params and fit RandomForestClassifier
###Code
# sklearn Random Forest params
skl_rf_params = {
'n_estimators': 25,
'max_depth': 13,
}
%%time
# Train sklearn RF in parallel
skl_rf = sklRF(**skl_rf_params, n_jobs=20)
skl_rf.fit(X_train, y_train)
###Output
CPU times: user 38min 13s, sys: 12.8 s, total: 38min 26s
Wall time: 3min
###Markdown
Predict and compare cuml and sklearn RandomForestClassifier Note on input to cuml `predict` method: Since `predict` is processed on the CPU, it can only accept `numpy` arrays
###Code
# Predict
print("cuml RF Accuracy Score: ", accuracy_score(cu_rf.predict(X_test), y_test))
print("sklearn RF Accuracy Score: ", accuracy_score(skl_rf.predict(X_test), y_test))
###Output
cuml RF Accuracy Score: 0.718828
sklearn RF Accuracy Score: 0.722448
###Markdown
Random Forest Classification**Authorship**Original Author: Saloni JainLast Edit: Taurean Dyer, 9/25/2019**Test System Specs**Test System Hardware: GV100Test System Software: Ubuntu 18.04RAPIDS Version: 0.10.0a - Docker InstallDriver: 410.79CUDA: 10.0**Known Working Systems**RAPIDS Versions: 0.4, 0.5, 0.5.1, 0.6, 0.6.1, 0.7, 0.8, 0.9, 0.10 IntroThe Random Forest algorithm is a classification algorithm which builds several decision trees, and aggregates each of their outputs to make a prediction. This makes it more robust to overfitting.In order to convert your dataset to cudf format please read the cudf documentation on https://rapidsai.github.io/projects/cudf/en/latest/. For additional information on the RandomForest model please refer to the documentation on https://rapidsai.github.io/projects/cuml/en/latest/index.htmlThis notebook demonstrates fitting a RandomForestClassifier on the Higgs dataset. It is a binary classification problem to distinguish between a signal process which produces Higgs bosons and a background process which does not. The notebook also compares the performance (accuracy and speed) with sklearn's parallel RandomForestClassifier implementation.
###Code
from cuml import RandomForestClassifier as cuRF
from sklearn.ensemble import RandomForestClassifier as sklRF
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
import cudf
import numpy as np
import pandas as pd
import os
from urllib.request import urlretrieve
import gzip
###Output
_____no_output_____
###Markdown
Helper function to download and extract the Higgs dataset
###Code
def download_higgs(compressed_filepath, decompressed_filepath):
higgs_url = 'https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz'
if not os.path.isfile(compressed_filepath):
urlretrieve(higgs_url, compressed_filepath)
if not os.path.isfile(decompressed_filepath):
cf = gzip.GzipFile(compressed_filepath)
with open(decompressed_filepath, 'wb') as df:
df.write(cf.read())
###Output
_____no_output_____
###Markdown
Download Higgs data and read using cudf
###Code
data_dir = '../data/rf/'
if not os.path.exists(data_dir):
print('creating rf data directory')
os.system('mkdir ../data/rf')
!ls ../data/rf
compressed_filepath = data_dir+'HIGGS.csv.gz' # Set this as path for gzipped Higgs data file, if you already have
decompressed_filepath = data_dir+'HIGGS.csv' # Set this as path for decompressed Higgs data file, if you already have
download_higgs(compressed_filepath, decompressed_filepath)
col_names = ['label'] + ["col-{}".format(i) for i in range(2, 30)] # Assign column names
dtypes_ls = ['int32'] + ['float32' for _ in range(2, 30)] # Assign dtypes to each column
data = cudf.read_csv(decompressed_filepath, names=col_names, dtype=dtypes_ls)
data.head()
###Output
_____no_output_____
###Markdown
Make train test splits
###Code
X, y = data[data.columns.difference(['label'])].as_matrix(), data['label'].to_array() # Separate data into X and y
del data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=500_000)
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
###Output
(10500000, 28) (10500000,) (500000, 28) (500000,)
###Markdown
You can consult the RandomForestClassifier docstring to check all the parameters, but here are some of the more important ones: 1. n_estimators: (default = 10) number of trees in the forest.2. max_depth: (default = -1) Maximum tree depth. Unlimited (i.e., until leaves are pure), if -1.3. n_bins: (default = 8) Number of bins used by the split algorithm.Note on `n_bins`: Reducing `n_bins` shrinks the histograms used to compute which tree nodes to split. This reduction improves training time, but if you reduce it too low, you may harm model accuracy.
###Code
# cuml Random Forest params
cu_rf_params = {
'n_estimators': 25,
'max_depth': 13,
'n_bins': 15,
}
###Output
_____no_output_____
###Markdown
The methods that can be used with the RandomForestClassifier are:1. fit: Fit the model with X and y.2. get_params: Sklearn style return parameter state3. predict: Predicts the y for X.4. set_params: Sklearn style set parameter state to dictionary of params.5. cross_validate: Predicts the accuracy of the model for X. Note on input to `fit` method: Since `fit` is processed on the GPU, it can accept `cudf` dataframes or `numpy` arrays
###Code
%%time
# Train cuml RF
cu_rf = cuRF(**cu_rf_params)
cu_rf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Set Sklearn params and fit RandomForestClassifier
###Code
# sklearn Random Forest params
skl_rf_params = {
'n_estimators': 25,
'max_depth': 13,
}
%%time
# Train sklearn RF in parallel
skl_rf = sklRF(**skl_rf_params, n_jobs=20)
skl_rf.fit(X_train, y_train)
###Output
_____no_output_____
###Markdown
Predict and compare cuml and sklearn RandomForestClassifier Note on input to cuml `predict` method: Since `predict` is processed on the CPU, it can only accept `numpy` arrays
###Code
# Predict
print("cuml RF Accuracy Score: ", accuracy_score(cu_rf.predict(X_test), y_test))
print("sklearn RF Accuracy Score: ", accuracy_score(skl_rf.predict(X_test), y_test))
###Output
_____no_output_____
|
Chapter02/Chapter 2.ipynb
|
###Markdown
Setting up a SparkContext
###Code
from pyspark import SparkContext
sc = SparkContext('local', 'hands on PySpark')
visitors = [10, 3, 35, 25, 41, 9, 29]
df_visitors = sc.parallelize(visitors)
df_visitors_yearly = df_visitors.map(lambda x: x*365).collect()
print(df_visitors_yearly)
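# parallelize() turns the local Python list into an RDD; map() is a lazy
# transformation, and collect() pulls the transformed results back to the driver.
# sc.stop()  # uncomment to release the SparkContext when finished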
###Output
_____no_output_____
###Markdown
Hands-On Data Preprocessing in PythonLearn how to effectively prepare data for successful data analytics AUTHOR: Dr. Roy Jafari Chapter 2: Review of another core module: Matplotlib
###Code
#from previous chapter
import pandas as pd
import numpy as np
adult_df = pd.read_csv('adult.csv')
import matplotlib.pyplot as plt
plt.hist(adult_df.age)
plt.show()
plt.boxplot(adult_df.age, vert=False)
plt.show()
amz_df = pd.read_csv('Amazon Stock.csv')
apl_df = pd.read_csv('Apple Stock.csv')
plt.plot(amz_df.Close)
plt.plot(apl_df.Close)
plt.show()
plt.scatter(apl_df.Close,amz_df.Close)
plt.show()
plt.plot(amz_df.Close)
plt.plot(apl_df.Close)
plt.title('Line plots of Amazon and Apple stock prices from 2000 to 2020')
plt.ylabel('Closing Price')
plt.show()
plt.plot(amz_df.Close, label='Amazon')
plt.plot(apl_df.Close, label='Apple')
plt.title('Line plots of Amazon and Apple stock prices from 2000 to 2020')
plt.ylabel('Closing Price')
plt.legend()
plt.show()
plt.plot(amz_df.Close, label='Amazon')
plt.plot(apl_df.Close, label='Apple')
plt.title('Line plots of Amazon and Apple stock prices from 2000 to 2020')
plt.ylabel('Closing Price')
plt.xticks([0,500,1000,1500,2000,2500,3000,3500,4000,4500,5000,5500],
rotation=90)
plt.legend()
plt.show()
plt.plot(amz_df.Close, label='Amazon')
plt.plot(apl_df.Close, label='Apple')
plt.title('Line plots of Amazon and Apple stock prices from 2000 to 2020')
plt.ylabel('Closing Price')
plt.legend()
plt.xticks(np.arange(0,len(amz_df),250),amz_df.Date[0:len(amz_df):250],
rotation=90)
plt.show()
plt.scatter(apl_df.Close,amz_df.Close, marker = 'x', color='green')
plt.title('Amazon and Apple stock prices in 2000 to 2020')
plt.xlabel('Apple price ($)')
plt.ylabel('Amazon price ($)')
plt.show()
plt.subplot(2,1,1)
plt.hist(adult_df.age)
plt.title('Histogram')
plt.ylabel('Age')
plt.subplot(2,1,2)
plt.boxplot(adult_df.age, vert=False)
plt.title('Boxplot')
plt.yticks([1],['Age'])
plt.tight_layout()
plt.show()
plt.figure(figsize=(9,6))
plt.subplot(2,1,1)
plt.hist(adult_df.age)
plt.title('Histogram')
plt.ylabel('Age')
plt.subplot(2,1,2)
plt.boxplot(adult_df.age, vert=False)
plt.title('Boxplot')
plt.yticks([1],['Age'])
plt.tight_layout()
plt.show()
Numerical_colums = ['age', 'education-num', 'capitalGain', 'capitalLoss', 'hoursPerWeek']
plt.figure(figsize=(20,5))
for i,col in enumerate(Numerical_colums):
plt.subplot(2,5,i+1)
plt.hist(adult_df[col])
plt.title(col)
for i,col in enumerate(Numerical_colums):
plt.subplot(2,5,i+6)
plt.boxplot(adult_df[col],vert=False)
plt.yticks([])
plt.tight_layout()
plt.savefig('ColumnsVsiaulization.png', dpi=900)
###Output
_____no_output_____
|
HRvsAge_errorbarplots.ipynb
|
###Markdown
(1) Import data from the Rose19 paper: https://iopscience.iop.org/article/10.3847/1538-4357/ab0704/pdf basic tools and dataset: https://github.com/benjaminrose/MC-Age/tree/master/data (data might be outdated) full MCMC chain and some other detailed data: https://zenodo.org/record/3875482 data prep
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.ticker import MultipleLocator
data = pd.read_csv('data/HRvsAge_Median+STD+Bounds.csv')
data.head(3)
# prepare an array for uneven errorbars
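# matplotlib's errorbar() accepts xerr/yerr of shape (2, N):
# row 0 holds the lower (minus) errors, row 1 the upper (plus) errors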
xerr = np.array([data['Age_median'].values - data['Age_lower'].values ,data['Age_upper'].values - data['Age_median'].values])
###Output
_____no_output_____
###Markdown
plot
###Code
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,7))
ax1.errorbar(data['Age_global'],data['HR'],yerr=data['HR_err'],xerr=data['Age_global_err'],fmt='ko',lw=0.5)
ax2.errorbar(data['Age_median'],data['HR'],yerr=data['HR_err'],xerr=xerr,fmt='ko',lw=0.5)
ax1.set_title('Lee20 Fig.2',fontsize=20)
ax2.set_title('Dataset provided by Rose19: non-Gaussian',fontsize=20)
for ax in [ax1,ax2]:
ax.set_xlabel('Global Age [Gyr]',fontsize=17)
ax.set_ylabel('Hubble Residual [mag]',fontsize=17)
ax.set_xlim(0,14)
ax.set_ylim(-0.75,0.75)
ax.tick_params(which='major', length=10, direction='in',right=True,top=True)
ax.tick_params(which='minor', length=5, direction='in',right=True,top=True)
ax.xaxis.set_major_locator(MultipleLocator(5))
ax.xaxis.set_minor_locator(MultipleLocator(1))
ax.yaxis.set_major_locator(MultipleLocator(0.5))
ax.yaxis.set_minor_locator(MultipleLocator(0.1))
plt.tight_layout()
###Output
_____no_output_____
###Markdown
sanity check:In this data, 'HR','HR_err','Age_global','Age_global_err' are taken from their paper's Table 1+7, while Age_median and bounds are taken from their dataset. So let's check if these two sets of data are consistent with each other.
###Code
fig,(ax1,ax2) = plt.subplots(1,2,figsize=(15,7))
ax1.scatter(data['Age_global'],data['Age_median'],c='k',s=5)
ax2.scatter(data['Age_global_err'],(data['Age_upper']-data['Age_lower'])/2,c='k',s=5)
ax1.set_title('Ages',fontsize=20)
ax2.set_title('Size of errorbars',fontsize=20)
ax1.set_xlabel('Age_global [Gyr]',fontsize=17)
ax1.set_ylabel('Age_median [Gyr]',fontsize=17)
ax1.set_xlim(0,12)
ax1.set_ylim(0,12)
ax2.set_xlabel('Age_global_err [Gyr]',fontsize=17)
ax2.set_ylabel('(Age_upper - Age_lower)/2 [Gyr]',fontsize=17)
ax2.set_xlim(0,5)
ax2.set_ylim(0,5)
plt.tight_layout()
###Output
_____no_output_____
|
Projekty/Projekt1/Grupa3/StaronSzypulaUrbala/Drzewo_decyzyjne.ipynb
|
###Markdown
Initial preprocessing
###Code
import pandas as pd
import numpy as np
import math
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import StandardScaler
dane = pd.read_csv('cervical-cancer_csv.csv')
# removing columns
dane = dane.drop(['STDs:cervical condylomatosis',
'STDs:vaginal condylomatosis',
'STDs:pelvic inflammatory disease',
'STDs:genital herpes',
'STDs:molluscum contagiosum',
'STDs:AIDS',
'STDs:Hepatitis B',
'STDs:HPV', 'Dx:CIN'], axis=1)
# filling in missing values and encoding categorical variables
def column_nodata(df, column_name):
df[column_name + "_null"] = df[column_name].apply(lambda x: 1 if pd.isnull(x) else 0)
df[column_name] = df[column_name].fillna(0)
def replace_in_column(df, column_name, src, dst):
df[column_name] = df[column_name].replace(to_replace=src, value=dst)
replace_in_column(dane, 'STDs (number)', [3, 4], 2)
replace_in_column(dane, 'STDs: Number of diagnosis', [2,3], 1)
nodata_categories = [
'Smokes',
'Hormonal Contraceptives',
'IUD',
'STDs',
'STDs (number)',
'STDs:condylomatosis',
'STDs:vulvo-perineal condylomatosis',
'STDs:syphilis',
'STDs:HIV'
]
for category in nodata_categories:
column_nodata(dane, category)
dane = pd.concat([dane, pd.get_dummies(dane['STDs (number)'], prefix='STDs_')],axis=1)
dane.drop(['STDs (number)'],axis=1, inplace=True)
# removing NAs - dropping observations
num2 = ['Smokes (years)', 'Smokes (packs/year)', 'First sexual intercourse', 'Number of sexual partners']
narows = []
for i in range (len(dane)):
for j in num2:
if math.isnan(dane.loc[i, j]) :
narows.append(i)
break
dane = dane.drop(narows)
dane.index = range(len(dane))
# standardization
numerical = ['Age', 'Number of sexual partners', 'First sexual intercourse', 'Num of pregnancies', 'Smokes (years)',
'Smokes (packs/year)', 'Hormonal Contraceptives (years)', 'IUD (years)', 'STDs: Time since first diagnosis',
'STDs: Time since last diagnosis']
scaler = StandardScaler()
dane_scaled = scaler.fit_transform(dane[numerical])
d2 = pd.DataFrame(dane_scaled, columns = numerical)
dane[numerical] = d2[numerical]
# removing NAs - imputation
imp = dane[[ 'Num of pregnancies', 'Hormonal Contraceptives (years)', 'IUD (years)' ]]
dane[[ 'Num of pregnancies', 'Hormonal Contraceptives (years)', 'IUD (years)' ]] = imp.fillna(0)
# creating a single target
targets = ['Hinselmann', 'Schiller', 'Citology', 'Biopsy']
def has_cancer(row):
for target in targets:
if row[target] == 1:
return 1
return 0
dane['cancer'] = dane.apply(lambda row: has_cancer(row), axis=1)
dane = dane.drop(targets, axis=1)
# variant without the diagnosis-time columns
dane_without = dane.drop(columns=['STDs: Time since first diagnosis', 'STDs: Time since last diagnosis'])
###Output
_____no_output_____
###Markdown
Unified functions for all models
###Code
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from sklearn.metrics import precision_score
from sklearn.metrics import recall_score
from sklearn.metrics import f1_score
from sklearn.metrics import roc_auc_score
# splitting the dataset into training and test sets
def default_split(X, y):
return train_test_split(X, y, test_size=0.2, random_state=2137)
# scoring
def scoring(y_test, y_predicted):
print("ACC = ", accuracy_score(y_test, y_predicted))
print("PREC = ", precision_score(y_test, y_predicted))
print("RECALL = ", recall_score(y_test, y_predicted))
print("F1 = ", f1_score(y_test, y_predicted))
print("FPR = ", roc_auc_score(y_test, y_predicted))
# extracting y
def extract_y(data):
y = data[["cancer"]]
return data.drop(["cancer"], axis=1), y
###Output
_____no_output_____
###Markdown
Decision tree Data without the diagnosis columns
###Code
# preparing the data
X, y = extract_y(dane_without)
X_train, X_test, y_train, y_test = default_split(X, y)
print(X.shape, X_train.shape, X_test.shape)
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
scoring(y_test, y_predicted)
###Output
ACC = 0.779874213836478
PREC = 0.18518518518518517
RECALL = 0.2777777777777778
F1 = 0.22222222222222224
FPR = 0.5608747044917257
###Markdown
Diagnosis columns with NA -> -1 after standardization
###Code
# preparing the data
X, y = extract_y(dane)
X = X.fillna(-1)
X_train, X_test, y_train, y_test = default_split(X, y)
print(X.shape, X_train.shape, X_test.shape)
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
scoring(y_test, y_predicted)
###Output
ACC = 0.7987421383647799
PREC = 0.23076923076923078
RECALL = 0.3333333333333333
F1 = 0.27272727272727276
FPR = 0.5957446808510638
###Markdown
Diagnosis columns NA -> -1 before standardization
###Code
import pandas as pd
import numpy as np
import math
import warnings
warnings.filterwarnings('ignore')
from sklearn.preprocessing import StandardScaler
dane = pd.read_csv('cervical-cancer_csv.csv')
# removing columns
dane = dane.drop(['STDs:cervical condylomatosis',
'STDs:vaginal condylomatosis',
'STDs:pelvic inflammatory disease',
'STDs:genital herpes',
'STDs:molluscum contagiosum',
'STDs:AIDS',
'STDs:Hepatitis B',
'STDs:HPV', 'Dx:CIN'], axis=1)
# filling in missing values and encoding categorical variables
def column_nodata(df, column_name):
df[column_name + "_null"] = df[column_name].apply(lambda x: 1 if pd.isnull(x) else 0)
df[column_name] = df[column_name].fillna(0)
def replace_in_column(df, column_name, src, dst):
df[column_name] = df[column_name].replace(to_replace=src, value=dst)
replace_in_column(dane, 'STDs (number)', [3, 4], 2)
replace_in_column(dane, 'STDs: Number of diagnosis', [2,3], 1)
nodata_categories = [
'Smokes',
'Hormonal Contraceptives',
'IUD',
'STDs',
'STDs (number)',
'STDs:condylomatosis',
'STDs:vulvo-perineal condylomatosis',
'STDs:syphilis',
'STDs:HIV'
]
for category in nodata_categories:
column_nodata(dane, category)
dane = pd.concat([dane, pd.get_dummies(dane['STDs (number)'], prefix='STDs_')],axis=1)
dane.drop(['STDs (number)'],axis=1, inplace=True)
# removing NAs - dropping observations
num2 = ['Smokes (years)', 'Smokes (packs/year)', 'First sexual intercourse', 'Number of sexual partners']
narows = []
for i in range (len(dane)):
for j in num2:
if math.isnan(dane.loc[i, j]) :
narows.append(i)
break
dane = dane.drop(narows)
dane.index = range(len(dane))
imp = dane[['STDs: Time since first diagnosis', 'STDs: Time since last diagnosis']]
dane[['STDs: Time since first diagnosis', 'STDs: Time since last diagnosis']] = imp.fillna(-1)
# standardization
numerical = ['Age', 'Number of sexual partners', 'First sexual intercourse', 'Num of pregnancies', 'Smokes (years)',
'Smokes (packs/year)', 'Hormonal Contraceptives (years)', 'IUD (years)', 'STDs: Time since first diagnosis',
'STDs: Time since last diagnosis']
scaler = StandardScaler()
dane_scaled = scaler.fit_transform(dane[numerical])
d2 = pd.DataFrame(dane_scaled, columns = numerical)
dane[numerical] = d2[numerical]
# removing NAs - imputation
imp = dane[[ 'Num of pregnancies', 'Hormonal Contraceptives (years)', 'IUD (years)' ]]
dane[[ 'Num of pregnancies', 'Hormonal Contraceptives (years)', 'IUD (years)' ]] = imp.fillna(0)
# creating a single target
targets = ['Hinselmann', 'Schiller', 'Citology', 'Biopsy']
def has_cancer(row):
for target in targets:
if row[target] == 1:
return 1
return 0
dane['cancer'] = dane.apply(lambda row: has_cancer(row), axis=1)
dane = dane.drop(targets, axis=1)
# preparing the data
X, y = extract_y(dane)
X_train, X_test, y_train, y_test = default_split(X, y)
print(X.shape, X_train.shape, X_test.shape)
from sklearn.tree import DecisionTreeClassifier
model = DecisionTreeClassifier()
model.fit(X_train, y_train)
y_predicted = model.predict(X_test)
scoring(y_test, y_predicted)
###Output
ACC = 0.7861635220125787
PREC = 0.19230769230769232
RECALL = 0.2777777777777778
F1 = 0.2272727272727273
FPR = 0.5644208037825059
|
doc/ipython_notebooks_src/tutorial-relaxing-and-plotting-a-nanodisk.ipynb
|
###Markdown
In Finmag, we have lots of pre-made meshes which you can use for simple geometric shapes. These are located in the finmag.util.meshes module and include:* box* cylinder* ellipsoid* elliptic_cylinder* elliptic_nanodisk* nanodisk* regular_polygon* regular_polygon_extruded* sphere* truncated_coneMeshes created from these functions can be directly used with Finmag without any problems. For more complex structures, you can use programmes such as Netgen or Gmsh directly and then convert to the Dolfin XML mesh format. Dolfin is the underlying Finite Element library which Finmag is built on.*Remember, the bigger the mesh, the longer simulations will take! The demagnetising field in particular is proportional to $M^{4/3}$, where $M$ is the number of surface nodes.* Here, we'll just create a small nanodisk mesh:
###Code
import gc
import finmag
# NOTE: the import paths below are assumed for the objects used later in this notebook
from finmag.energies import Exchange, DMI, Demag, Zeeman
from finmag.util.consts import mu0
d = 100 # diameter (nm)
t = 10 # thickness (nm)
h = 2.5 # Discretisation length (nm)
mesh = finmag.util.meshes.nanodisk(d, t, h, save_result=False)
###Output
[2019-01-23 23:03:33] DEBUG: Using netgen to convert /tmp/tmpU3ihmD.geo to DIFFPACK format.
[2019-01-23 23:03:38] DEBUG: Done!
[2019-01-23 23:03:38] DEBUG: Using dolfin-convert to convert /tmp/tmpU3ihmD.grid to xml format.
[2019-01-23 23:03:39] DEBUG: Compressing /tmp/tmpU3ihmD.xml
[2019-01-23 23:03:39] DEBUG: Removing file '/tmp/tmpU3ihmD.xml.gz' because mesh is created on the fly.
[2019-01-23 23:03:39] DEBUG: Removing file '/tmp/tmpU3ihmD.geo' because mesh is created on the fly.
###Markdown
We now create a simulation object. This basically comes in a few steps:* Create a sim (finmag.Simulation or finmag.NormalModeSimulation)* Set properties (Ms, the initial magnetisation, the damping constant, etc).* Add energy terms (Exchange, DMI, Zeeman, Demagnetising field, etc).Then, we normally relax the system to find a metastable state (i.e. where the magnetisation is not changing).Finally, we may then go on to evolve the system further, perhaps after adding a new energy or changing the applied field - (sim.run_until)Alternatively, we might compute properties around the metastable state - for e.g. the normal modes (sim.compute_normal_modes, if finmag.NormalModeSimulation was used).Here, we'll just setup the system and initialise with a first approximation to a Skyrmion state:
###Code
B = 0
alpha = 1.0
Ms = 384e3
A = 8.78e-12
D = 1.58e-3
sim = finmag.Simulation(mesh, Ms, unit_length=1e-9)
def m_init(pos):
x, y, z = pos
if x**2 + y**2 <= (d/4) ** 2:
return (0, 0, 1)
else:
return (0, 0, -1)
sim.set_m(m_init)
sim.add(Exchange(A))
sim.add(DMI(D))
sim.add(Demag())
if B != 0:
sim.add(Zeeman((0, 0, B*1e-3/mu0)))
###Output
[2019-01-23 23:03:39] INFO: Finmag logging output will be written to file: '/home/rp20g15/eigenmodes-fd-test/submission-scripts/unnamed.log' (any old content will be overwritten).
[2019-01-23 23:03:39] DEBUG: Creating DataWriter for file 'unnamed.ndt'
[2019-01-23 23:03:39] INFO: Creating Sim object name='unnamed', instance_id=0 (rank=0/1).
[2019-01-23 23:03:39] DEBUG: Total number of Sim objects in this session: 1
[2019-01-23 23:03:39] INFO: <Mesh of topological dimension 3 (tetrahedra) with 4946 vertices and 19127 cells, ordered>
/usr/local/lib/python2.7/dist-packages/aeon/timer.py:35: UserWarning: You are nesting measurements in __init__::LLG.
warnings.warn("You are nesting measurements in {}::{}.".format(name, group))
[2019-01-23 23:03:39] DEBUG: Creating LLG object.
[2019-01-23 23:03:40] DEBUG: Creating Exchange object with method box-matrix-petsc, in Jacobian.
[2019-01-23 23:03:40] DEBUG: Adding interaction Exchange to simulation.
[2019-01-23 23:03:40] DEBUG: Creating DMI object with method box-matrix-petsc, in Jacobian.
[2019-01-23 23:03:40] DEBUG: Adding interaction DMI to simulation.
[2019-01-23 23:03:40] DEBUG: Creating Demag object with solver 'FK'.
[2019-01-23 23:03:40] DEBUG: Adding interaction Demag to simulation.
[2019-01-23 23:03:40] DEBUG: Using Krylov solver for demag.
[2019-01-23 23:03:40] DEBUG: Boundary element matrix uses 82.38 MB of memory.
###Markdown
Now we relax the system:
###Code
sim.relax(stopping_dmdt=0.1)
###Output
[2019-01-23 23:03:40] INFO: Simulation will run until relaxation of the magnetisation.
[2019-01-23 23:03:40] DEBUG: Relaxation parameters: stopping_dmdt=0.1 (degrees per nanosecond), dt_limit=1e-10, dmdt_increased_counter_limit=10
[2019-01-23 23:03:40] INFO: Creating integrator with backend sundials and arguments {'reltol': 1e-06, 'abstol': 1e-06}.
[2019-01-23 23:03:41] DEBUG: Updating get method for steps in TableWriter(name=unnamed.ndt)
[2019-01-23 23:03:41] DEBUG: Updating get method for last_step_dt in TableWriter(name=unnamed.ndt)
[2019-01-23 23:03:41] DEBUG: Updating get method for dmdt in TableWriter(name=unnamed.ndt)
/usr/local/lib/python2.7/dist-packages/aeon/timer.py:35: UserWarning: You are nesting measurements in compute_field::DMI.
warnings.warn("You are nesting measurements in {}::{}.".format(name, group))
/usr/local/lib/python2.7/dist-packages/aeon/timer.py:35: UserWarning: You are nesting measurements in compute_field::Exchange.
warnings.warn("You are nesting measurements in {}::{}.".format(name, group))
/usr/local/lib/python2.7/dist-packages/aeon/timer.py:35: UserWarning: You are nesting measurements in compute_field::FKDemag.
warnings.warn("You are nesting measurements in {}::{}.".format(name, group))
[2019-01-23 23:03:42] DEBUG: At t=2e-14, last_dmdt=6.56e+05 * stopping_dmdt, next dt=1e-14.
[2019-01-23 23:03:43] DEBUG: At t=3e-14, last_dmdt=6.31e+05 * stopping_dmdt, next dt=1e-14.
[2019-01-23 23:03:43] DEBUG: At t=4.5e-14, last_dmdt=6e+05 * stopping_dmdt, next dt=1.5e-14.
[2019-01-23 23:03:43] DEBUG: At t=6.75e-14, last_dmdt=6.24e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:44] DEBUG: At t=9e-14, last_dmdt=6.66e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:44] DEBUG: At t=1.13e-13, last_dmdt=7.11e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:44] DEBUG: At t=1.35e-13, last_dmdt=7.59e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:45] DEBUG: At t=1.58e-13, last_dmdt=8.1e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:45] DEBUG: At t=1.8e-13, last_dmdt=8.71e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:45] DEBUG: At t=2.03e-13, last_dmdt=9.52e+05 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:45] DEBUG: At t=2.25e-13, last_dmdt=1.04e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:46] DEBUG: At t=2.48e-13, last_dmdt=1.13e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:46] DEBUG: At t=2.7e-13, last_dmdt=1.22e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:46] DEBUG: At t=2.93e-13, last_dmdt=1.32e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:46] DEBUG: At t=3.15e-13, last_dmdt=1.42e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:46] DEBUG: At t=3.38e-13, last_dmdt=1.53e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=3.6e-13, last_dmdt=1.64e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=3.83e-13, last_dmdt=1.75e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=4.05e-13, last_dmdt=1.87e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=4.28e-13, last_dmdt=1.99e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=4.5e-13, last_dmdt=2.11e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:47] DEBUG: At t=4.73e-13, last_dmdt=2.23e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:48] DEBUG: At t=4.95e-13, last_dmdt=2.35e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:48] DEBUG: At t=5.17e-13, last_dmdt=2.47e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:48] DEBUG: At t=5.4e-13, last_dmdt=2.58e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:48] DEBUG: At t=5.62e-13, last_dmdt=2.7e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:48] DEBUG: At t=5.85e-13, last_dmdt=2.81e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=6.07e-13, last_dmdt=2.91e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=6.3e-13, last_dmdt=3.01e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=6.52e-13, last_dmdt=3.1e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=6.75e-13, last_dmdt=3.19e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=6.97e-13, last_dmdt=3.26e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:49] DEBUG: At t=7.2e-13, last_dmdt=3.33e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=7.42e-13, last_dmdt=3.39e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=7.65e-13, last_dmdt=3.44e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=7.87e-13, last_dmdt=3.49e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=8.1e-13, last_dmdt=3.53e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=8.32e-13, last_dmdt=3.56e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=8.55e-13, last_dmdt=3.58e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:50] DEBUG: At t=8.77e-13, last_dmdt=3.6e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:51] DEBUG: At t=9e-13, last_dmdt=3.6e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:51] DEBUG: At t=9.22e-13, last_dmdt=3.6e+06 * stopping_dmdt, next dt=2.25e-14.
[2019-01-23 23:03:51] DEBUG: At t=9.56e-13, last_dmdt=3.57e+06 * stopping_dmdt, next dt=3.37e-14.
[2019-01-23 23:03:51] DEBUG: At t=1.01e-12, last_dmdt=3.49e+06 * stopping_dmdt, next dt=5.06e-14.
[2019-01-23 23:03:52] DEBUG: At t=1.08e-12, last_dmdt=3.24e+06 * stopping_dmdt, next dt=7.59e-14.
[2019-01-23 23:03:52] DEBUG: At t=1.2e-12, last_dmdt=2.56e+06 * stopping_dmdt, next dt=1.14e-13.
[2019-01-23 23:03:54] DEBUG: At t=1.37e-12, last_dmdt=1.66e+06 * stopping_dmdt, next dt=1.71e-13.
[2019-01-23 23:03:55] DEBUG: At t=1.62e-12, last_dmdt=1.23e+06 * stopping_dmdt, next dt=2.56e-13.
[2019-01-23 23:03:57] DEBUG: At t=2.01e-12, last_dmdt=8.73e+05 * stopping_dmdt, next dt=3.84e-13.
[2019-01-23 23:03:59] DEBUG: At t=2.58e-12, last_dmdt=4.82e+05 * stopping_dmdt, next dt=5.77e-13.
[2019-01-23 23:04:01] DEBUG: At t=3.45e-12, last_dmdt=3.04e+05 * stopping_dmdt, next dt=8.65e-13.
[2019-01-23 23:04:04] DEBUG: At t=4.75e-12, last_dmdt=2.05e+05 * stopping_dmdt, next dt=1.3e-12.
[2019-01-23 23:04:08] DEBUG: At t=6.69e-12, last_dmdt=1.42e+05 * stopping_dmdt, next dt=1.95e-12.
[2019-01-23 23:04:13] DEBUG: At t=9.61e-12, last_dmdt=1.54e+05 * stopping_dmdt, next dt=2.92e-12.
[2019-01-23 23:04:23] DEBUG: At t=1.25e-11, last_dmdt=1.19e+05 * stopping_dmdt, next dt=2.92e-12.
[2019-01-23 23:04:32] DEBUG: At t=1.69e-11, last_dmdt=6.24e+04 * stopping_dmdt, next dt=4.38e-12.
[2019-01-23 23:04:38] DEBUG: At t=2.35e-11, last_dmdt=2.87e+04 * stopping_dmdt, next dt=6.57e-12.
[2019-01-23 23:04:46] DEBUG: At t=3.33e-11, last_dmdt=1.19e+04 * stopping_dmdt, next dt=9.85e-12.
[2019-01-23 23:05:00] DEBUG: At t=4.81e-11, last_dmdt=5.63e+03 * stopping_dmdt, next dt=1.48e-11.
[2019-01-23 23:05:15] DEBUG: At t=7.03e-11, last_dmdt=6.99e+03 * stopping_dmdt, next dt=2.22e-11.
[2019-01-23 23:05:32] DEBUG: At t=9.24e-11, last_dmdt=8.4e+03 * stopping_dmdt, next dt=2.22e-11.
[2019-01-23 23:05:59] DEBUG: At t=1.15e-10, last_dmdt=9.37e+03 * stopping_dmdt, next dt=2.22e-11.
[2019-01-23 23:06:22] DEBUG: At t=1.37e-10, last_dmdt=9.7e+03 * stopping_dmdt, next dt=2.22e-11.
[2019-01-23 23:06:40] DEBUG: At t=1.59e-10, last_dmdt=9e+03 * stopping_dmdt, next dt=2.22e-11.
[2019-01-23 23:07:10] DEBUG: At t=1.92e-10, last_dmdt=6.91e+03 * stopping_dmdt, next dt=3.33e-11.
[2019-01-23 23:07:47] DEBUG: At t=2.42e-10, last_dmdt=3.34e+03 * stopping_dmdt, next dt=4.99e-11.
[2019-01-23 23:08:47] DEBUG: At t=3.17e-10, last_dmdt=1.71e+03 * stopping_dmdt, next dt=7.48e-11.
[2019-01-23 23:10:06] DEBUG: At t=4.17e-10, last_dmdt=1.3e+03 * stopping_dmdt, next dt=1e-10.
[2019-01-23 23:11:22] DEBUG: At t=5.17e-10, last_dmdt=241 * stopping_dmdt, next dt=1e-10.
[2019-01-23 23:12:41] DEBUG: At t=6.17e-10, last_dmdt=202 * stopping_dmdt, next dt=1e-10.
[2019-01-23 23:13:56] DEBUG: At t=7.17e-10, last_dmdt=56.9 * stopping_dmdt, next dt=1e-10.
[2019-01-23 23:14:47] DEBUG: At t=8.17e-10, last_dmdt=32.7 * stopping_dmdt, next dt=1e-10.
###Markdown
We can save a VTK file, which can then be visualised in ParaView or alternative plotting tools:
###Code
sim.save_vtk('test_new_vtk.pvd', overwrite=True)
###Output
[2019-01-23 23:22:00] WARNING: Removing file 'test_new_vtk.pvd' and all associated .vtu files (because overwrite=True).
[2019-01-23 23:22:01] DEBUG: Saved field at t=1.61690263469e-09 to file 'test_new_vtk.pvd' (snapshot #0; saving took 0.106 seconds).
###Markdown
Alternatively, we can create a function which uses Matplotlib to plot the simulation results.
###Code
finmag.util.plot_m(sim, component='all', filename='skyrmion.pdf', extent=1.0, z=1.0,
gridpoints=[200, 200], cmap='RdBu')
###Output
_____no_output_____
###Markdown
If running multiple Finmag simulations in a single Python session, it is important to shut down each simulation object, as otherwise objects can stay in memory. To do this:
###Code
import gc  # garbage-collector module used below; imported here in case it is not already available
sim.shutdown()
gc.collect()
###Output
[2019-01-23 23:22:33] INFO: Shutting down Simulation object finmag.Simulation(name='unnamed', instance_id=0) with <Mesh of topological dimension 3 (tetrahedra) with 4946 vertices and 19127 cells, ordered>
[2019-01-23 23:22:33] DEBUG: 0 other Simulation instances alive.
[2019-01-23 23:22:33] DEBUG: shutdown(): 1-refcount 5 for unnamed
[2019-01-23 23:22:33] DEBUG: 'Deletinging all get methods in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for H_Demag in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for dmdt in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for E_total in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for H_Exchange in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for m in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for steps in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for E_Exchange in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for H_DMI in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for time in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for H_total in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for E_Demag in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for last_step_dt in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: 'Deleting' get method for E_DMI in TableWriter(name=unnamed.ndt)
[2019-01-23 23:22:33] DEBUG: shutdown(): 2-refcount 5 for unnamed
[2019-01-23 23:22:33] DEBUG: shutdown(): 3-refcount 4 for unnamed
[2019-01-23 23:22:33] DEBUG: Removing scheduled items:
[2019-01-23 23:22:33] DEBUG: shutdown(): 4-refcount 4 for unnamed
[2019-01-23 23:22:33] DEBUG: shutdown(): 5-refcount 4 for unnamed
[2019-01-23 23:22:33] DEBUG: shutdown(): 6-refcount 4 for unnamed
[2019-01-23 23:22:33] INFO: Closing logging_handler <logging.handlers.RotatingFileHandler object at 0x2aaab22e46d0> for sim object unnamed
[2019-01-23 23:22:33] DEBUG: shutdown(): 7-refcount 4 for unnamed
|
tools/script.ipynb
|
###Markdown
Loading the corpus
###Code
import os
with open('corpuses/coed.txt', encoding='utf-8') as f:
words = [line.rstrip() for line in f]
print(f'{len(words)} words loaded.')
###Output
75754 words loaded.
###Markdown
Filtering the corpus
###Code
# Remove all words with non-alpha characters
import string
valid_letters = set([letter for letter in string.ascii_lowercase])
words = list(filter(lambda word: all((letter in valid_letters for letter in word)), words))
# Remove words shorter than 3 characters, and larger than 9 characters
words = list(filter(lambda word: len(word) >= 3 and len(word) <= 9, words))
# Remove all capitalised words
words = list(filter(lambda word: word[0].islower(), words))
words = set(words)
with open('corpuses/coed_adverbs_with_ly.txt', encoding='utf-8') as f:
adverbs_with_ly = set([line.rstrip() for line in f])
words = words - adverbs_with_ly
with open('corpuses/coed_plurals.txt', encoding='utf-8') as f:
plurals = set([line.rstrip() for line in f])
words = words - plurals
with open('corpuses/coed_tenses_and_participles.txt', encoding='utf-8') as f:
tenses_and_participles = set([line.rstrip() for line in f])
words = words - tenses_and_participles
with open('corpuses/coed_abbreviations.txt', encoding='utf-8') as f:
abbreviations = set([line.rstrip() for line in f])
words = words - abbreviations
with open('corpuses/google_profane_words.txt', encoding='utf-8') as f:
profanities = set([line.rstrip() for line in f])
words = words - profanities
with open('corpuses/words58k.txt', encoding='utf-8') as f:
words58k = set([line.rstrip() for line in f])
words = words.intersection(words58k)
words = sorted(list(words))
print(f"{len(words)} valid words")
###Output
18651 valid words
###Markdown
Save as a new corpus
###Code
with open('corpuses/glypoon.txt', 'w+', encoding='utf-8') as f:
f.write('\n'.join(sorted(words)))
###Output
_____no_output_____
###Markdown
Group words by length
###Code
import pandas
words_df = pandas.DataFrame(words, columns=['word'])
words_df['length'] = words_df.apply(lambda row: len(row['word']), axis=1)
words_df.head()
df_group_by_length = words_df.groupby(by='length')['word'] \
.apply(list) \
.reset_index(name='words')
df_group_by_length['count'] = df_group_by_length.apply(lambda row: len(row['words']), axis=1)
df_group_by_length
###Output
_____no_output_____
###Markdown
Select a random word of length K
###Code
import random
K = 8
words_with_length_k = df_group_by_length.loc[df_group_by_length['length'] == K]['words'].values[0]
chosen_word = random.choice(words_with_length_k)
print(chosen_word)
###Output
fanlight
###Markdown
Find pangram words
###Code
import random
from collections import Counter
MIN_ANSWER_LENGTH = 4 # Minimum answer length
MIN_NUM_ANSWERS = 20 # Minimum number of answers
MAX_NUM_ANSWERS = 35 # Maximum number of answers
answers_by_keyword = {}
for keyword in chosen_word: # For each possible letter to use as the 'center word'
answers_by_keyword[keyword] = []
for word in words:
if len(word) < MIN_ANSWER_LENGTH:
continue
if keyword not in word:
continue
if not Counter(word) - Counter(chosen_word):
answers_by_keyword[keyword].append(word)
solutions = []
for keyword, answers in answers_by_keyword.items():
if len(answers) >= MIN_NUM_ANSWERS and len(answers) <= MAX_NUM_ANSWERS:
solutions.append((chosen_word, keyword, sorted(answers)))
print(f'{len(solutions)} possible solution(s) found.')
if solutions:
solution = random.choice(solutions)
print(f'Full word: {solution[0]}')
print(f'Center letter: {solution[1]}')
print(f'{len(solution[2])} answers: {", ".join(solution[2])}')
###Output
5 possible solution(s) found.
Full word: fanlight
Center letter: f
24 answers: fail, fain, faint, faith, fang, fanlight, fatling, fiat, fight, filth, final, fitna, flag, flan, flat, flight, fling, flint, flit, gift, haft, half, lift, naif
###Markdown
Export as a JSON file
###Code
import json
import random
OUTPUT_FILE_NAME = 'answers.json'
letters = [char for char in solution[0]]
random.shuffle(letters)
letters.remove(solution[1])
letters.insert(0, solution[1])
json_solution = {
'letters': letters,
'answers': solution[2]
}
with open(OUTPUT_FILE_NAME, 'w+') as solution_file:
json.dump(json_solution, solution_file, indent=4)
###Output
_____no_output_____
###Markdown
Outline of the approach: use train.txt and val.txt to obtain the path of every video file; for each video, run openpose to generate the prediction files; pack the prediction files into data and labels; and store the data and labels in the corresponding output files.
###Code
# class Struct():
# pass
# arg=Struct()
# arg.data_path="../data/Skating"
# arg.out_folder="../output/Skating"
# arg.trainfile=os.path.join(arg.data_path,"train.csv")
# arg.testfile=os.path.join(arg.data_path,"val.csv")
# arg.labelfile=os.path.join(arg.data_path,"classInd.csv")
# arg.openpose="/home/jiangdong/opt/openpose/build"
# arg.model_folder="../models"
# openpose = '{}/examples/openpose/openpose.bin'.format(arg.openpose)
# def _count_lines(filename):
# with open(filename) as f:
# count=-1
# for count,_ in enumerate(f):
# pass
# count+=1
# return count
# def _video_loader(filename):
# with open(filename) as f:
# for line in f.readlines():
# info=line.strip()
# video_name , _,label=info.split(" ")
# yield video_name,str(int(label)+1)
# def pose_estimation(openpose,out_folder,video_path,model_folder,info):
# video_name=video_path.split('/')[-1].split('.')[0]
# output_snippets_dir=os.path.join(out_folder,'openpose_estimation/snippets/{}'.format(video_name))
# output_sequence_dir = os.path.join(out_folder,'data/')
# output_sequence_path = '{}/{}.json'.format(output_sequence_dir, video_name)
# # pose estimation
# openpose_args = dict(
# video=video_path,
# write_json=output_snippets_dir,
# display=0,
# render_pose=0,
# model_pose='COCO',
# model_folder=model_folder)
# command_line = openpose + ' '
# command_line += ' '.join(['--{} {}'.format(k, v) for k, v in openpose_args.items()])
# shutil.rmtree(output_snippets_dir, ignore_errors=True)
# os.makedirs(output_snippets_dir)
# print(command_line)
# os.system(command_line)
# # pack openpose ouputs
# video = utils.video.get_video_frames(video_path)
# height, width, _ = video[0].shape
# video_info = utils.openpose.json_pack(
# output_snippets_dir, video_name, width, height, label_index=info["label_index"],label=info["label"])
# if not os.path.exists(output_sequence_dir):
# os.makedirs(output_sequence_dir)
# with open(output_sequence_path, 'w') as outfile:
# json.dump(video_info, outfile)
# if len(video_info['data']) == 0:
# print('Can not find pose estimation results of %s'%(video_name))
# return
# else:
# print('%s Pose estimation complete.'%(video_name))
# print(os.getcwd())
# label_names={}
# with open(arg.labelfile) as lf:
# for line in lf.readlines():
# index,label_name=line.strip().split(" ")
# label_names[index]=label_name
# print(label_names)
# part = ['train', 'val']
# for p in part:
# csvfile=os.path.join(arg.data_path,"{}.csv".format(p))
# label_file={}
# total_count = _count_lines(csvfile)
# count=0
# for nameinfo,label in _video_loader(csvfile):
# try:
# filename=nameinfo.split('/')[3]+".mp4"
# category=filename.split("_")[0]
# info={}
# info['label_index']=int(label)
# info['has_skeleton']=True
# info['label']=label_names[label]
# label_file[filename]=info
# video_path = os.path.join(arg.data_path,category,filename)
# pose_estimation(openpose,arg.out_folder,video_path,arg.model_folder,info)
# count+=1
# print("%4.2f %% of %s has been processed"%(count*100/total_count,p))
# except Exception as e:
# print(e)
# label_save_path=os.path.join(arg.out_folder,"{}_label.json".format(p))
# with open(label_save_path,"w") as f:
# json.dump(label_file,f)
line="/share/SkatingFlow/3Lutz_n28_p10_g04"
print(line.split('/')[3].split("_")[0])
###Output
_____no_output_____
###Markdown
Loading the corpus
###Code
import os
with open('corpuses/coed.txt', encoding='utf-8') as f:
words = [line.rstrip() for line in f]
print(f'{len(words)} words loaded.')
###Output
75754 words loaded.
###Markdown
Filtering the corpus
###Code
# Remove all words with non-alpha characters
import string
valid_letters = set([letter for letter in string.ascii_lowercase])
words = list(filter(lambda word: all((letter in valid_letters for letter in word)), words))
# Remove words shorter than 3 characters, and larger than 9 characters
words = list(filter(lambda word: len(word) >= 3 and len(word) <= 9, words))
# Remove all capitalised words
words = list(filter(lambda word: word[0].islower(), words))
words = set(words)
with open('corpuses/coed_adverbs_with_ly.txt', encoding='utf-8') as f:
adverbs_with_ly = set([line.rstrip() for line in f])
words = words - adverbs_with_ly
with open('corpuses/coed_plurals.txt', encoding='utf-8') as f:
plurals = set([line.rstrip() for line in f])
words = words - plurals
with open('corpuses/coed_tenses_and_participles.txt', encoding='utf-8') as f:
tenses_and_participles = set([line.rstrip() for line in f])
words = words - tenses_and_participles
with open('corpuses/coed_abbreviations.txt', encoding='utf-8') as f:
abbreviations = set([line.rstrip() for line in f])
words = words - abbreviations
with open('corpuses/google_profane_words.txt', encoding='utf-8') as f:
profanities = set([line.rstrip() for line in f])
words = words - profanities
words = sorted(list(words))
print(f"{len(words)} valid words")
###Output
32706 valid words
###Markdown
Save as a new corpus
###Code
with open('corpuses/glypoon.txt', 'w+', encoding='utf-8') as f:
f.write('\n'.join(sorted(words)))
###Output
_____no_output_____
###Markdown
Group words by length
###Code
import pandas
words_df = pandas.DataFrame(words, columns=['word'])
words_df['length'] = words_df.apply(lambda row: len(row['word']), axis=1)
words_df.head()
df_group_by_length = words_df.groupby(by='length')['word'] \
.apply(list) \
.reset_index(name='words')
df_group_by_length['count'] = df_group_by_length.apply(lambda row: len(row['words']), axis=1)
df_group_by_length
###Output
_____no_output_____
###Markdown
Select a random word of length K
###Code
import random
K = 8
words_with_length_k = df_group_by_length.loc[df_group_by_length['length'] == K]['words'].values[0]
chosen_word = random.choice(words_with_length_k)
print(chosen_word)
###Output
fanlight
###Markdown
Find pangram words
###Code
import random
from collections import Counter
MIN_ANSWER_LENGTH = 4 # Minimum answer length
MIN_NUM_ANSWERS = 20 # Minimum number of answers
MAX_NUM_ANSWERS = 35 # Maximum number of answers
answers_by_keyword = {}
for keyword in chosen_word: # For each possible letter to use as the 'center word'
answers_by_keyword[keyword] = []
for word in words:
if len(word) < MIN_ANSWER_LENGTH:
continue
if keyword not in word:
continue
if not Counter(word) - Counter(chosen_word):
answers_by_keyword[keyword].append(word)
solutions = []
for keyword, answers in answers_by_keyword.items():
if len(answers) >= MIN_NUM_ANSWERS and len(answers) <= MAX_NUM_ANSWERS:
solutions.append((chosen_word, keyword, sorted(answers)))
print(f'{len(solutions)} possible solution(s) found.')
if solutions:
solution = random.choice(solutions)
print(f'Full word: {solution[0]}')
print(f'Center letter: {solution[1]}')
print(f'{len(solution[2])} answers: {", ".join(solution[2])}')
###Output
5 possible solution(s) found.
Full word: fanlight
Center letter: f
24 answers: fail, fain, faint, faith, fang, fanlight, fatling, fiat, fight, filth, final, fitna, flag, flan, flat, flight, fling, flint, flit, gift, haft, half, lift, naif
###Markdown
Export as a JSON file
###Code
import json
import random
OUTPUT_FILE_NAME = 'answers.json'
letters = [char for char in solution[0]]
random.shuffle(letters)
letters.remove(solution[1])
letters.insert(0, solution[1])
json_solution = {
'letters': letters,
'answers': solution[2]
}
with open(OUTPUT_FILE_NAME, 'w+') as solution_file:
json.dump(json_solution, solution_file, indent=4)
###Output
_____no_output_____
|
Movieflix.ipynb
|
###Markdown
A movie can make it to the top of the list even if only a single user has given it a five-star rating, so the statistics above can be misleading. A genuinely good movie will usually receive a high rating from a large number of users, so we will also look at the total number of ratings per movie.
###Code
df.groupby('title')['rating'].count().sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Now we can see some really good movies at the top, which supports our point that good movies normally receive higher ratings. Both the average rating per movie and the number of ratings per movie are therefore important attributes, so let's create a new dataframe that contains both of them. What is each movie's average rating, and how many users voted for it?
###Code
# create a dataframe
data = pd.DataFrame(df.groupby('title')['rating'].mean())
data['rating_counts'] = pd.DataFrame(df['title'].value_counts())
data.sort_values(by=['rating_counts', 'rating'],ascending=False).head()
###Output
_____no_output_____
###Markdown
Exploring different types of Recommender Systems 1. Content-based filtering using cosine similarity 2. Collaborative Filtering using K-Nearest Neighbours 3. Collaborative Filtering using Pearson's Coefficient 4. Collaborative Filtering using Singular-Value Decomposition (SVD) 1. Content-based filtering using cosine similarity
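Before working with the full genre matrix, here is a minimal sketch (with made-up multi-hot genre vectors, not the real data) of how cosine similarity scores genre overlap between movies:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical genre vectors (columns: action, comedy, drama, sci-fi)
movie_a = np.array([[1, 0, 0, 1]])   # action + sci-fi
movie_b = np.array([[1, 1, 0, 1]])   # action + comedy + sci-fi
movie_c = np.array([[0, 0, 1, 0]])   # drama only

print(cosine_similarity(movie_a, movie_b))  # high similarity: the two movies share most genres
print(cosine_similarity(movie_a, movie_c))  # zero similarity: no genres in common
```

The same idea is applied below to every pair of rows of the count-vectorised genre matrix.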
###Code
lemmatizer = WordNetLemmatizer()
genres = df_movies["genres"]
lemmatized = []
for i in range(len(genres)):
temp = genres[i].lower()
temp = temp.split("|")
temp = [lemmatizer.lemmatize(word) for word in temp]
lemmatized.append(" ".join(temp))
movies_dataset = pd.DataFrame(lemmatized, columns=["genres"], index=df_movies["title"])
movies_dataset
cv = CountVectorizer()
genre_cv = cv.fit_transform(movies_dataset["genres"]).toarray()
genre_cv
print("Genres coresponding to the count vector are :\n",cv.get_feature_names())
genre_dataset = df_movies[['movieId', 'title']]
genre_dataset = genre_dataset.join(pd.DataFrame(genre_cv))
genre_dataset.head(-10)
similarities = cosine_similarity(genre_cv)
similarities.shape
user_id = 2  # For this user, let's recommend movies based on their most recently watched movie
timestamp = df_ratings.loc[df_ratings["userId"] == user_id]
latest_movieId_watched_by_user = timestamp.sort_values(by="timestamp",ascending=False)["movieId"].values[0]
latest_movieId_watched_by_user
movie_index = df_movies.loc[df_movies['movieId'] == latest_movieId_watched_by_user,["title"]].index[0]
genre_dataset.loc[genre_dataset['movieId'] == 1356,:]
movie_index = df_movies.loc[df_movies['movieId'] == latest_movieId_watched_by_user,["title"]].index[0]
similarity_values = pd.Series(similarities[movie_index])
similarity_values.sort_values(ascending=False)
similar_movie_indexes = list(similarity_values.sort_values(ascending=False).index)
similar_movie_indexes.remove(movie_index)
similarity_values_list = list(similarity_values.sort_values(ascending=False))
def get_movie_by_index(idx):
return movies_dataset.index[idx]
def get_movie_by_id(movie_id):
return df_movies.loc[df_movies['movieId'] == movie_id,['title']].values[0][0]
get_movie_by_index(1102)
get_movie_by_id(1356)
uid = int(input("Enter your User ID: "))
no_of_recs = int(input("Enter number of movie recommendations you want: "))
timestamp = df_ratings.loc[df_ratings["userId"] == uid]
latest_movieId_watched_by_user = timestamp.sort_values(by="timestamp",ascending=False)["movieId"].values[0]
movie_index = df_movies.loc[df_movies['movieId'] == latest_movieId_watched_by_user,["title"]].index[0]
similarity_values = pd.Series(similarities[movie_index])
similar_movie_indexes = list(similarity_values.sort_values(ascending=False).index)
similar_movie_indexes.remove(movie_index)
similarity_values_list = list(similarity_values.sort_values(ascending=False))
similarity_values_list.pop(0)  # drop the query movie's own (maximal) similarity so the values line up with similar_movie_indexes
print("The latest movie watched by you is: ", get_movie_by_id(latest_movieId_watched_by_user))
print("\nBased on your latest movie watched, here are top 10 recommendations we think you may like: ")
for i in range(no_of_recs):
print(f'{i+1}. {get_movie_by_index(similar_movie_indexes[i])}, Similarity: {similarity_values_list[i]}')
###Output
Enter your User ID: 21
Enter number of movie recommendations you want: 30
The latest movie watched by you is: Futurama: Bender's Game (2008)
Based on your latest movie watched, here are top 10 recommendations we think you may like:
1. Aqua Teen Hunger Force Colon Movie Film for Theaters (2007), Similarity: 0.9999999999999997
2. The Amazing Screw-On Head (2006), Similarity: 0.9354143466934851
3. Dragon Ball Z: Dead Zone (Doragon bôru Z 1: Ora no Gohan wo kaese) (1989), Similarity: 0.9258200997725515
4. Immortel (ad vitam) (Immortal) (2004), Similarity: 0.9258200997725515
5. Justice League: War (2014), Similarity: 0.9258200997725515
6. Final Fantasy VII: Advent Children (2004), Similarity: 0.9258200997725515
7. Green Lantern: First Flight (2009), Similarity: 0.9258200997725515
8. Justice League: The New Frontier (2008), Similarity: 0.9258200997725515
9. Heavy Metal 2000 (2000), Similarity: 0.9258200997725515
10. Justice League: The Flashpoint Paradox (2013), Similarity: 0.9258200997725515
11. Dead Leaves (2004), Similarity: 0.9258200997725515
12. Batman/Superman Movie, The (1998), Similarity: 0.9258200997725515
13. Super Mario Bros. (1993), Similarity: 0.8571428571428569
14. Meet the Robinsons (2007), Similarity: 0.8571428571428569
15. Home (2015), Similarity: 0.8571428571428569
16. Chicken Little (2005), Similarity: 0.8571428571428569
17. Laputa: Castle in the Sky (Tenkû no shiro Rapyuta) (1986), Similarity: 0.8571428571428569
18. Free Birds (2013), Similarity: 0.8571428571428569
19. Space Jam (1996), Similarity: 0.8571428571428569
20. Kung Fury (2015), Similarity: 0.8571428571428569
21. Futurama: Into the Wild Green Yonder (2009), Similarity: 0.8451542547285164
22. FLCL (2000), Similarity: 0.8451542547285164
23. Appleseed (Appurushîdo) (2004), Similarity: 0.8451542547285164
24. Hellboy II: The Golden Army (2008), Similarity: 0.8451542547285164
25. Adventures of Pluto Nash, The (2002), Similarity: 0.8451542547285164
26. Time Bandits (1981), Similarity: 0.8451542547285164
27. Star Wars: Episode VII - The Force Awakens (2015), Similarity: 0.8451542547285164
28. Fifth Element, The (1997), Similarity: 0.8451542547285164
29. Wolverine, The (2013), Similarity: 0.8451542547285164
30. Star Wars: The Clone Wars (2008), Similarity: 0.8451542547285164
###Markdown
Thus, we recommend films to a user based on the genres of the latest movie they watched and generate the top-N recommendations, using cosine similarity to find movies with similar genres. 2. Movie recommender system based on collaborative filtering using KNN
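As an illustrative sketch of the neighbour lookup used in this section (made-up genre-count profiles, not the real data), scikit-learn's NearestNeighbors returns the users whose profiles are closest to a query user:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

# Hypothetical rows = users, columns = number of movies rated in each genre
profiles = np.array([[10, 2, 0],
                     [ 9, 3, 1],
                     [ 0, 1, 12],
                     [ 1, 0, 10]])

nn = NearestNeighbors(n_neighbors=2)
nn.fit(profiles)
distances, indices = nn.kneighbors([[10, 1, 0]])
print(indices)  # [[0 1]] -- the two users with the most similar genre profiles
```

Below, the same lookup is performed on the per-user genre counts built from the real ratings, and the similar user's movies are then ranked by their average rating.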
###Code
df = df_movies.merge(df_ratings)
users_dataset = df.loc[:,["userId","movieId","title","genres","rating"]]
df_ratings = users_dataset.loc[:,["title","rating"]].groupby("title").mean()
genres = users_dataset["genres"]
lemmatizer = WordNetLemmatizer()
lemmatized = []
for i in range(len(genres)):
temp = genres[i].split("|")
for j in range(len(temp)):
temp[j] = lemmatizer.lemmatize(temp[j])
lemmatized.append(" ".join(temp))
cv = CountVectorizer()
genre_cv = cv.fit_transform(lemmatized).toarray()
genres = pd.DataFrame(genre_cv,columns=cv.get_feature_names())
users_dataset = users_dataset.iloc[:,:-2]
users_dataset = users_dataset.join(genres)
users_dataset
final_dataset = users_dataset.drop(['movieId', 'title'], axis=1)
genre_wise_count = final_dataset.groupby("userId").sum()
ratings = df_ratings.copy()
ratings = ratings.reset_index()
genre_wise_count
from sklearn.neighbors import NearestNeighbors
X = genre_wise_count.iloc[:,:].values
classifier = NearestNeighbors()
classifier.fit(X)
user_id = int(input("Enter your User ID: "))
no_of_recs = int(input("Enter number of movie recommendations you want: "))
neighbors = classifier.kneighbors([X[user_id-1]],n_neighbors=10,return_distance=False)
current_user = users_dataset.loc[users_dataset["userId"] == neighbors[0][0],:]["title"].values
similar_user = users_dataset.loc[users_dataset["userId"] == neighbors[0][1],:]["title"].values
movies_list = [movie for movie in similar_user if movie not in current_user]
ratings_list = [ratings.loc[ratings.title == movie, : ]['rating'].values for movie in movies_list]
ratings_list = [float(rating) for rating in ratings_list]
movie_rating = [(movie, rating) for movie, rating in zip(movies_list, ratings_list)]
movie_rating.sort(reverse=True, key = lambda x: x[1])
print("Recommended Movies are: ")
for i in range(no_of_recs):
print(f"{i+1}. {movie_rating[i][0]}, Average Rating: {movie_rating[i][1]}")
###Output
Enter your User ID: 21
Enter number of movie recommendations you want: 30
Recommended Movies are:
1. All Quiet on the Western Front (1930), Average Rating: 4.5
2. Among Giants (1998), Average Rating: 4.5
3. Harold and Maude (1971), Average Rating: 4.287878787878788
4. North by Northwest (1959), Average Rating: 4.273972602739726
5. Fargo (1996), Average Rating: 4.2711442786069655
6. American Flyers (1985), Average Rating: 4.25
7. Anatomy of a Murder (1959), Average Rating: 4.25
8. Star Wars: Episode V - The Empire Strikes Back (1980), Average Rating: 4.228070175438597
9. Annie Hall (1977), Average Rating: 4.205882352941177
10. All About Eve (1950), Average Rating: 4.203703703703703
11. American Beauty (1999), Average Rating: 4.157407407407407
12. Aliens (1986), Average Rating: 4.146496815286624
13. 39 Steps, The (1935), Average Rating: 4.108695652173913
14. Manhattan (1979), Average Rating: 4.1
15. Amadeus (1984), Average Rating: 4.087628865979381
16. Affair to Remember, An (1957), Average Rating: 4.071428571428571
17. Alien (1979), Average Rating: 4.064102564102564
18. Stand by Me (1986), Average Rating: 4.063725490196078
19. Affliction (1997), Average Rating: 4.05
20. Brazil (1985), Average Rating: 4.0479452054794525
21. Rain Man (1988), Average Rating: 3.97787610619469
22. Sound of Music, The (1965), Average Rating: 3.9655172413793105
23. African Queen, The (1951), Average Rating: 3.9649122807017543
24. Mary Poppins (1964), Average Rating: 3.962962962962963
25. Terminator 2: Judgment Day (1991), Average Rating: 3.960474308300395
26. 2001: A Space Odyssey (1968), Average Rating: 3.9603174603174605
27. Searching for Bobby Fischer (1993), Average Rating: 3.9363636363636365
28. American Graffiti (1973), Average Rating: 3.9342105263157894
29. Room with a View, A (1986), Average Rating: 3.9210526315789473
30. Platoon (1986), Average Rating: 3.9156626506024095
###Markdown
Hence, we built a recommender system based on collaborative filtering with K-Nearest Neighbours (KNN). This lets a user get suggestions drawn from similar users, with the recommendations ranked by the average user rating of each movie. 3. Collaborative Filtering using Pearson's Coefficient Similarity
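As a minimal sketch (made-up rating vectors, not the real data) of how Pearson's coefficient scores agreement between two users' ratings:

```python
import numpy as np

# Hypothetical ratings by three users on the same four movies
user_a = np.array([5.0, 4.0, 1.0, 2.0])
user_b = np.array([4.5, 4.0, 2.0, 1.5])   # similar taste to user_a
user_c = np.array([1.0, 2.0, 5.0, 4.0])   # roughly opposite taste

print(np.corrcoef(user_a, user_b)[0, 1])  # close to +1
print(np.corrcoef(user_a, user_c)[0, 1])  # strongly negative
```

The function below computes exactly this coefficient between the query user's rating row and every other user, and uses the coefficients as weights when predicting unseen ratings.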
###Code
def get_recommendations_collab():
data_path = 'dataset/'
movies_filename = 'movies.csv'
ratings_filename = 'ratings.csv'
df_ratings = pd.read_csv(
os.path.join(data_path, ratings_filename),
usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'}
)
df_movies = pd.read_csv(
os.path.join(data_path, movies_filename),
usecols=['movieId', 'title'],
dtype={'movieId': 'int32', 'title': 'str'})
# df_ratings=df_ratings[:2000000]
movie_features = df_ratings.pivot(
index='userId',
columns='movieId',
values='rating'
).fillna(0).to_numpy()
user_id = int(input("Enter your User ID: "))
no_of_recs = int(input("Enter number of movie recommendations you want: "))
user_id -= 1
similarities = []
for i in range(movie_features.shape[0]):
similarities.append((np.corrcoef(movie_features[user_id], movie_features[i])[0, 1], i))
similarities.sort(reverse=True)
#print(similarities)
denom = sum([e[0] for e in similarities])
#ratings_user = df_movie_features.iloc[user_id].copy()
new_ratings = []
for i in range(movie_features[user_id].shape[0]):
if not movie_features[user_id][i] > 1e-8:
num = 0
# print(i)
for y in similarities:
num += y[0]*movie_features[y[1]][i]
new_ratings.append((num / denom, i))
new_ratings.sort(reverse=True)
print('\nRecommendations for you:')
for e in new_ratings[:no_of_recs]:
print(df_movies.iloc[e[1]]['title'])
get_recommendations_collab()
###Output
Enter your User ID: 21
Enter number of movie recommendations you want: 30
Recommendations for you:
Shawshank Redemption, The (1994)
Forrest Gump (1994)
Fight Club (1999)
Silence of the Lambs, The (1991)
Terminator 2: Judgment Day (1991)
Jurassic Park (1993)
Braveheart (1995)
American Beauty (1999)
Back to the Future (1985)
Usual Suspects, The (1995)
Toy Story (1995)
Godfather, The (1972)
Sixth Sense, The (1999)
Seven (a.k.a. Se7en) (1995)
Saving Private Ryan (1998)
Princess Bride, The (1987)
Gladiator (2000)
Memento (2000)
Aliens (1986)
Shrek (2001)
Terminator, The (1984)
Twelve Monkeys (a.k.a. 12 Monkeys) (1995)
Fugitive, The (1993)
Alien (1979)
Die Hard (1988)
Blade Runner (1982)
Men in Black (a.k.a. MIB) (1997)
Fargo (1996)
Léon: The Professional (a.k.a. The Professional) (Léon) (1994)
Eddie Izzard: Dress to Kill (1999)
###Markdown
Hence, we built a recommender system based on collaborative filtering with Pearson's coefficient similarity. This lets a user get suggestions drawn from similar users, with the recommendations ranked by a similarity-weighted predicted rating for each movie. 4. Collaborative filtering using SVD
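As a toy sketch of the idea behind this section (a made-up 4x4 ratings matrix; the notebook additionally mean-centres the real matrix before factorising): a truncated SVD factorises the matrix, and multiplying the factors back gives dense predicted scores, including for the unrated (zero) entries:

```python
import numpy as np
from scipy.sparse.linalg import svds

# Hypothetical users x movies ratings (0 = not rated)
R = np.array([[5., 4., 0., 1.],
              [4., 5., 1., 0.],
              [0., 1., 5., 4.],
              [1., 0., 4., 5.]])

U, sigma, Vt = svds(R, k=2)            # keep only the 2 largest singular values
R_approx = U @ np.diag(sigma) @ Vt     # low-rank reconstruction: predicted scores
print(np.round(R_approx, 2))
```

The real pipeline below does the same with several latent factors on the de-meaned user-movie matrix and then adds the per-user mean back before ranking the predictions.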
###Code
data_path = 'dataset/'
movies_filename = 'movies.csv'
ratings_filename = 'ratings.csv'
df_movies = pd.read_csv(
os.path.join(data_path, movies_filename),
usecols=['movieId', 'title'],
dtype={'movieId': 'int32', 'title': 'str'})
df_ratings = pd.read_csv(
os.path.join(data_path, ratings_filename),
usecols=['userId', 'movieId', 'rating'],
dtype={'userId': 'int32', 'movieId': 'int32', 'rating': 'float32'}
)
# df_ratings=df_ratings[:2000000]
df_movie_features = df_ratings.pivot(
index='userId',
columns='movieId',
values='rating'
).fillna(0)
R = df_movie_features.values
user_ratings_mean = np.mean(R, axis = 1)
R_demeaned = R - user_ratings_mean.reshape(-1, 1)
U, sigma, Vt = svds(R_demeaned)
# That Sigma returned is just the values instead of a diagonal matrix.
# This is useful, but since we are going to leverage matrix multiplication to get predictions
# we'll convert it to the diagonal matrix form.
sigma = np.diag(sigma)
all_user_predicted_ratings = np.dot(np.dot(U, sigma), Vt) + user_ratings_mean.reshape(-1, 1)
preds_df = pd.DataFrame(all_user_predicted_ratings, columns = df_movie_features.columns)
preds_df.head()
def recommend_movies(preds_df, userID, movies_df, original_ratings_df, num_recommendations=5):
# Get and sort the user's predictions
user_row_number = userID - 1 # UserID starts at 1, not 0
sorted_user_predictions = preds_df.iloc[user_row_number].sort_values(ascending=False) # UserID starts at 1
# Get the user's data and merge in the movie information.
user_data = original_ratings_df[original_ratings_df.userId == (userID)]
# print(user_data)
user_full = (user_data.merge(movies_df, how = 'left', left_on = 'movieId', right_on = 'movieId').
sort_values(['rating'], ascending=False)
)
#print(user_full)
# Recommend the highest predicted rating movies that the user hasn't seen yet.
recommendations = (movies_df[~movies_df['movieId'].isin(user_full['movieId'])]).merge(pd.DataFrame(sorted_user_predictions).reset_index(), how = 'left', left_on = 'movieId',
right_on = 'movieId').rename(columns = {user_row_number: 'Predictions'}).sort_values('Predictions', ascending = False).iloc[:num_recommendations, :-1]
return user_full, recommendations
user_id = int(input("Enter your User ID: "))
no_of_recs = int(input("Enter number of movie recommendations you want: "))
already_rated, predictions = recommend_movies(preds_df, user_id, df_movies, df_ratings, no_of_recs)
print("The movies recommended for you are: \n")
print('\n'.join(predictions['title'].values))
###Output
Enter your User ID: 21
Enter number of movie recommendations you want: 30
The movies recommended for you are:
Fight Club (1999)
Shawshank Redemption, The (1994)
Forrest Gump (1994)
American Beauty (1999)
Memento (2000)
Batman Begins (2005)
V for Vendetta (2006)
Gladiator (2000)
Godfather, The (1972)
Eternal Sunshine of the Spotless Mind (2004)
American History X (1998)
Sixth Sense, The (1999)
Departed, The (2006)
Back to the Future (1985)
Kill Bill: Vol. 1 (2003)
Usual Suspects, The (1995)
Silence of the Lambs, The (1991)
Seven (a.k.a. Se7en) (1995)
Sin City (2005)
Finding Nemo (2003)
Bourne Identity, The (2002)
Kill Bill: Vol. 2 (2004)
Prestige, The (2006)
Incredibles, The (2004)
Saving Private Ryan (1998)
Shrek (2001)
WALL·E (2008)
Beautiful Mind, A (2001)
Ocean's Eleven (2001)
Léon: The Professional (a.k.a. The Professional) (Léon) (1994)
###Markdown
Problem Statement: Recommendation systems are everywhere, be it an online shopping app, a movie streaming app or a music streaming service, and they all recommend products tailored to their target customers. Many different methods exist for building recommender systems. You are supposed to work on the IMDB dataset to build a movie recommendation system. Group Members: 1. 1711036 - Akshay Padte 2. 1711059 - Girish Thatte 3. 1711064 - Abdeali Arsiwala 4. 1711071 - Kaustubh Damania 5. 1711072 - Arghyadeep Das 6. 1711076 - Mihir Gada Importing necessary libraries
###Code
import numpy as np # numeric computations
import pandas as pd # data processing
import matplotlib.pyplot as plt # plotting graphs
import warnings # warnings
plt.style.use('seaborn') # Chaning the plot style
warnings.filterwarnings("ignore") # to ignore any warnings
import os
from nltk.stem import WordNetLemmatizer
from scipy.sparse.linalg import svds
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity
import nltk
nltk.download('wordnet')
###Output
[nltk_data] Downloading package wordnet to
[nltk_data] /home/arghyadeep99/nltk_data...
[nltk_data] Package wordnet is already up-to-date!
###Markdown
Loading the dataset
###Code
# Loading the IMDB dataset into pandas dataframe
df_movies = pd.read_csv('./dataset/movies.csv') # reading movies.csv file
df_ratings = pd.read_csv('./dataset/ratings.csv') # reading ratings.csv file
###Output
_____no_output_____
###Markdown
Exploratory Data Analysis (EDA) EDA involves looking at and describing the data set from various angles and then summarizing it. It is helpful in analyzing the distribution and statistics of our data.
###Code
# shape attribute tells us a number of tuples and feature variables in our dataset
print("Shape of df_movies: ", df_movies.shape)
print("Shape of df_ratings: ", df_ratings.shape)
# print top 10 rows of dataframe - movies
df_movies.head(10)
# print top 10 rows of dataframe - ratings
df_ratings.head(10)
###Output
_____no_output_____
###Markdown
So, the ratings dataset has: 1. userId - unique for each user 2. movieId - we can take the title of the movie from the movies dataset 3. rating - the rating given by each user to a movie
###Code
df_movies.info()
df_ratings.info()
df_ratings.describe()
# getting the number of movies under each genre
genrewise_movies_count = {}
for genres in df_movies["genres"]:
for genre in genres.split("|"):
genrewise_movies_count[genre] = genrewise_movies_count.get(genre, 0) + 1
print("Number of unique genres: ", len(list(genrewise_movies_count)))
genrewise_movies_count
g = df_ratings.groupby('userId')['rating'].count()
topUsers = g.sort_values(ascending=False)[:15]
g = df_ratings.groupby('movieId')['rating'].count()
topMovies = g.sort_values(ascending=False)[:15]
top_r = df_ratings.join(topUsers, rsuffix='_r', how='inner', on='userId')
top_r = top_r.join(topMovies, rsuffix='_r', how='inner', on='movieId')
pd.crosstab(top_r.userId, top_r.movieId, top_r.rating, aggfunc=np.sum)
###Output
_____no_output_____
###Markdown
Data Visualisation Barplot of Genre-wise movies
###Code
# Barplot of Genres vs No. of movies
genres = list(genrewise_movies_count.keys())
counts = list(genrewise_movies_count.values())
fig = plt.figure(figsize = (22, 8))
# creating the bar plot
plt.bar(genres, counts, color ='blue', width = 0.4)
plt.xlabel("Genres")
plt.ylabel("No. of movies")
plt.title("Genres vs No. of movies")
plt.show()
###Output
_____no_output_____
###Markdown
Analysis: A large number of movies fall under the Drama and Comedy genres, while some movies are not listed under any genre. Scatter plot for MovieId vs Number of users voted
###Code
number_of_users_voted = df_ratings.groupby('movieId')['rating'].agg('count')
number_of_movies_voted = df_ratings.groupby('userId')['rating'].agg('count')
fig = plt.figure(figsize = (15, 6))
plt.scatter(number_of_users_voted.index, number_of_users_voted, color='indigo')
plt.axhline(y = 10, color = 'r')
plt.xlabel('MovieId')
plt.ylabel('Number of users voted')
plt.title('MovieId vs Number of users voted')
plt.show()
###Output
_____no_output_____
###Markdown
**Analysis: Movies with MovieId 0 to 500 are voted on by a larger number of users**
###Code
# Merge both the datasets
df = pd.merge(df_movies, df_ratings, on = 'movieId')
print(df.shape)
df.head(10)
import re
def find_year(row):
year = re.search('(\d\d\d\d)', row)
if year is None:
print(row)
return None
return year.group().strip()
find_year("Hello there, welcome to (2020) of (20201) here")
df['year'] = df['title'].apply(find_year)
df['year'].head()
df.title = df.title.apply(lambda x: (x.strip())[:-7])
df.drop(['timestamp'], axis=1, inplace=True)
df.head(10)
# Groupby all movie titles together and find their mean ratings average rating of each movie.
df.groupby('title')['rating'].mean().head()
###Output
_____no_output_____
###Markdown
The average ratings are not sorted, so let's sort the movies in descending order of their average rating.
###Code
# Sort movies based on ratings from highest to lowest
df.groupby('title')['rating'].mean().sort_values(ascending = False)
###Output
_____no_output_____
|
Notebook/imperatif/Imperatif_Procedures_fonctions.ipynb
|
###Markdown
Procedures and functions * Structure a program as a set of procedures or functions, arising from stepwise refinement. * Factor out similar pieces of code. * A set of functions defines new expressions or elementary operations: functions in a library (for example sorting, mathematical and statistical functions, all the functions you have previously written and can reuse, etc.). * A procedure returns no value (no result) and defines a new instruction (it modifies the state of the program): for example print(). In Python (as in C, Java, ...) a procedure is a function that returns None and modifies something. Functions A function is a relation from an element of a source set to a unique element of a target set, for example: > plus : NxN → N > (x,y) → x+y Algorithmics: starting from the specification or definition of a function f, write one or more algorithms describing the steps for computing f(x), and prove that these algorithms are correct. In C, for example, everything is a function; in Java everything is a method; in Python, as we will see, we can do a bit of everything! Procedure? Function parameters A procedure is a kind of function without a result! If results are computed, we must find a way to return them by modifying the environment (the state) => an instruction. The parameters of a function or procedure appearing in the header of a function are called "dummy parameters" (or "formal parameters"). Their role is to describe, within the body of the function, what it must do. Their scope is limited to the definition of the function concerned; they therefore do not conflict with any local variables of other functions. For example, in C: int f(int x) ... and in Python: def f(x): ... The function call, kinds of parameters The parameters supplied when the function is used (called) are named "actual parameters" (effective parameters). The "actual parameters" are transmitted to the "formal parameters" in 2 ways: * by value * or by reference In Python: a function is created according to the following scheme: def nom_de_la_fonction(parametre1, parametre2, parametre3, parametreN): block of instructions * **def**, a keyword that is the abbreviation of "define" and that opens every function definition. * **The name of the function**, which is named exactly like a variable (we will see later that this is no accident). Do not use the name of an already-instantiated variable to name a function. * **The list of parameters** that will be supplied when the function is called. The parameters are separated by commas and the list is enclosed in opening and closing parentheses (the spaces are optional but improve readability). * **The colon**, as always, which closes the line. With Python the way parameters are passed is more subtle: * in C all passing is by value * in Java primitive types are passed by value, objects by reference * in Python everything is passed by reference! In fact because everything is an object! But with Python some data are immutable and others mutable (see the short sketch below). Procedure and procedure call, the Python case In addition to conditions and repetition, calling procedures makes it possible to structure and factor algorithms; it corresponds to an element of refinement. In Python everything is a function or a procedure, so if there is no result (a procedure), as with print, the answer is => None
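A minimal sketch of this last point about mutable and immutable data (illustrative only): rebinding an immutable argument inside a function has no effect on the caller, while mutating a mutable argument is visible to the caller.

```python
def rebind(x):
    x = x + 1          # creates a new int object; the caller's variable is untouched

def mutate(lst):
    lst.append(99)     # modifies the very list object the caller passed in

n = 10
rebind(n)
print(n)               # 10

values = [1, 2, 3]
mutate(values)
print(values)          # [1, 2, 3, 99]
```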
###Code
def f(x,y):
return x+y
f(2,3)
f("ab","cd")
###Output
_____no_output_____
###Markdown
Example of functions; they can be recursive
###Code
def fact(n):
if (n == 0): return 1
else: return n * fact(n-1)
fact(3)
###Output
_____no_output_____
###Markdown
A few examples: the influence of mutable vs. immutable data
###Code
import time
def super_concat(n):
l = range(n)
resultat = ""
for nombre in l:
resultat += str(nombre)
return resultat
d = time.time()
super_concat(10000000)
print(time.time() - d)
def better_concat(n):
l = range(n)
resultat = []
for nombre in l:
resultat.append(str(nombre))
return ''.join(resultat)
d = time.time()
better_concat(10000000)
print(time.time() - d)
def encore_autre_concat(n):
i = 0
r=""
while (i<n):
r += str(i)
i += 1
return r
d = time.time()
encore_autre_concat(10000000)
print(time.time() - d)
###Output
2.797510862350464
|
Deep Learning with Keras_TensorFlow/minimum daily temp (with LSTM RNN in Keras-TensorFlow).ipynb
|
###Markdown
Data description: This dataset describes the minimum daily temperatures over 10 years (1981-1990) in the city of Melbourne, Australia. The units are degrees Celsius and there are 3650 observations. The source of the data is credited as the Australian Bureau of Meteorology. Workflow: - Load the Time Series (TS) with the Pandas library - Prepare the data, i.e. convert the problem to a supervised ML problem - Build and evaluate the RNN model: - Fit the best RNN model - Evaluate the model by in-sample prediction: calculate RMSE - Forecast the future trend: out-of-sample prediction. Note: For data exploration of this TS, please refer to the notebook of my alternative solution with the "Seasonal ARIMA model"
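As a minimal sketch of the "convert the problem to a supervised ML problem" step listed above (toy numbers, not the actual dataset), each target value is paired with the preceding lagged observations:

```python
import numpy as np

series = np.array([20.7, 17.9, 18.8, 14.6, 15.8])
lags = 2
X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
y = series[lags:]
print(X)  # [[20.7 17.9] [17.9 18.8] [18.8 14.6]]
print(y)  # [18.8 14.6 15.8]
```

The `prepare_data` helper later in this notebook performs essentially the same transformation (it just drops one extra trailing sample).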
###Code
import keras
import sklearn
import tensorflow as tf
import numpy as np
from scipy import stats
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import metrics
from sklearn import preprocessing
import random as rn
import math
%matplotlib inline
from keras import backend as K
session_conf = tf.ConfigProto(intra_op_parallelism_threads=5, inter_op_parallelism_threads=5)
sess = tf.Session(graph=tf.get_default_graph(), config=session_conf)
K.set_session(sess)
import warnings
warnings.filterwarnings("ignore")
# Load data using Series.from_csv
from pandas import Series
#TS = Series.from_csv('C:/Users/rhash/Documents/Datasets/Time Series analysis/daily-minimum-temperatures.csv', header=0)
# Load data using pandas.read_csv
# in case, specify your own date parsing function and use the date_parser argument
from pandas import read_csv
from pandas import datetime
#def parser(x):
# return datetime.strptime('190'+x, '%Y-%m')
TS = read_csv('C:/Users/rhash/Documents/Datasets/Time Series analysis/daily-minimum-temperatures.csv', header=0, parse_dates=[0], index_col=0, squeeze=True)
print(TS.head())
#TS=pd.to_numeric(TS, errors='coerce')
TS=pd.to_numeric(TS, errors='coerce')
TS.dropna(inplace=True)
data=pd.DataFrame(TS.values)
data.describe()
# prepare the data (i.e. convert problem to a supervised ML problem)
def prepare_data(data, lags=1):
"""
Create lagged data from an input time series
"""
X, y = [], []
for row in range(len(data) - lags - 1):
a = data[row:(row + lags), 0]
X.append(a)
y.append(data[row + lags, 0])
return np.array(X), np.array(y)
# normalize the dataset
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(data)
# split into train and test sets
train = dataset[0:2920, :]
test = dataset[2920:, :]
# LSTM RNN model: _________________________________________________________________
from keras.models import Sequential, Model
from keras.layers import Dense, LSTM, Dropout, average, Input, merge, concatenate
from keras.layers.merge import concatenate
from keras.regularizers import l2, l1
from keras.callbacks import EarlyStopping, ModelCheckpoint
from sklearn.utils.class_weight import compute_sample_weight
from keras.layers.normalization import BatchNormalization
np.random.seed(42)
rn.seed(42)
tf.set_random_seed(42)
# reshape into X=t and Y=t+1
lags = 2
X_train, y_train = prepare_data(train, lags)
X_test, y_test = prepare_data(test, lags)
# reshape input to be [samples, time steps, features]
X_train = np.reshape(X_train, (X_train.shape[0], lags, 1))
X_test = np.reshape(X_test, (X_test.shape[0], lags, 1))
# create and fit the LSTM network
mdl = Sequential()
mdl.add(Dense(50, input_shape=(lags, 1), activation='relu'))
mdl.add(LSTM(80, activation='relu'))
#mdl.add(Dropout(0.2))
mdl.add(Dense(1))
mdl.compile(loss='mean_squared_error', optimizer='adam')
monitor=EarlyStopping(monitor='loss', min_delta=0.001, patience=30, verbose=1, mode='auto')
checkpointer = ModelCheckpoint(filepath="min_temp_weights.hdf5", verbose=0, save_best_only=True) # save best model
history=mdl.fit(X_train, y_train, epochs=30, batch_size=1, validation_data=(X_test, y_test),
callbacks=[monitor, checkpointer], verbose=0)
mdl.load_weights('min_temp_weights.hdf5') # load weights from best model
# To measure RMSE and evaluate the RNN model:
from sklearn.metrics import mean_squared_error
# make predictions
train_predict = mdl.predict(X_train)
test_predict = mdl.predict(X_test)
# invert transformation
train_predict = scaler.inverse_transform(pd.DataFrame(train_predict))
y_train = scaler.inverse_transform(pd.DataFrame(y_train))
test_predict = scaler.inverse_transform(pd.DataFrame(test_predict))
y_test = scaler.inverse_transform(pd.DataFrame(y_test))
# calculate root mean squared error
train_score = math.sqrt(mean_squared_error(y_train, train_predict[:,0]))
print('Train Score: {:.2f} RMSE'.format(train_score))
test_score = math.sqrt(mean_squared_error(y_test, test_predict[:,0]))
print('Test Score: {:.2f} RMSE'.format(test_score))
# list all data in history
#print(history.history.keys())
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'test'], loc='upper left')
plt.show()
# shift train predictions for plotting
train_predict_plot =np.full(data.shape, np.nan)
train_predict_plot[lags:len(train_predict)+lags, :] = train_predict
# shift test predictions for plotting
test_predict_plot =np.full(data.shape, np.nan)
test_predict_plot[len(train_predict) + (lags * 2)+1:len(data)-1, :] = test_predict
# plot observation and predictions
plt.figure(figsize=(12,7))
plt.plot(data, label='Observed', color='#006699');
plt.plot(train_predict_plot, label='Prediction for Train Set', color='#006699', alpha=0.5);
plt.plot(test_predict_plot, label='Prediction for Test Set', color='#ff0066');
plt.legend(loc='upper left')
plt.title('LSTM Recurrent Neural Net')
plt.show()
plt.figure(figsize=(8,6))
mse = mean_squared_error(y_test, test_predict[:,0])
plt.title('Prediction quality: {:.2f} MSE ({:.2f} RMSE)'.format(mse, math.sqrt(mse)))
plt.plot(y_test.reshape(-1, 1), label='Observed', color='#006699')
plt.plot(test_predict.reshape(-1, 1), label='Prediction', color='#ff0066')
plt.legend(loc='upper left');
plt.show()
###Output
_____no_output_____
|
Convolutional Neural Networks/CNN-step-by-step.ipynb
|
###Markdown
Convolutional Neural Networks: Step by StepWelcome to Course 4's first assignment! In this assignment, you will implement convolutional (CONV) and pooling (POOL) layers in numpy, including both forward propagation and (optionally) backward propagation. **Notation**:- Superscript $[l]$ denotes an object of the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object from the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input. - Lowerscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$, assuming this is a fully connected (FC) layer. - $n_H$, $n_W$ and $n_C$ denote respectively the height, width and number of channels of a given layer. If you want to reference a specific layer $l$, you can also write $n_H^{[l]}$, $n_W^{[l]}$, $n_C^{[l]}$. - $n_{H_{prev}}$, $n_{W_{prev}}$ and $n_{C_{prev}}$ denote respectively the height, width and number of channels of the previous layer. If referencing a specific layer $l$, this could also be denoted $n_H^{[l-1]}$, $n_W^{[l-1]}$, $n_C^{[l-1]}$. We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! 1 - PackagesLet's first import all the packages that you will need during this assignment. - [numpy](www.numpy.org) is the fundamental package for scientific computing with Python.- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work.
###Code
import numpy as np
import h5py
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
###Output
_____no_output_____
###Markdown
2 - Outline of the AssignmentYou will be implementing the building blocks of a convolutional neural network! Each function you will implement will have detailed instructions that will walk you through the steps needed:- Convolution functions, including: - Zero Padding - Convolve window - Convolution forward - Convolution backward (optional)- Pooling functions, including: - Pooling forward - Create mask - Distribute value - Pooling backward (optional) This notebook will ask you to implement these functions from scratch in `numpy`. In the next notebook, you will use the TensorFlow equivalents of these functions to build the following model:**Note** that for every forward function, there is its corresponding backward equivalent. Hence, at every step of your forward module you will store some parameters in a cache. These parameters are used to compute gradients during backpropagation. 3 - Convolutional Neural NetworksAlthough programming frameworks make convolutions easy to use, they remain one of the hardest concepts to understand in Deep Learning. A convolution layer transforms an input volume into an output volume of different size, as shown below. In this part, you will build every step of the convolution layer. You will first implement two helper functions: one for zero padding and the other for computing the convolution function itself. 3.1 - Zero-PaddingZero-padding adds zeros around the border of an image: **Figure 1** : **Zero-Padding** Image (3 channels, RGB) with a padding of 2. The main benefits of padding are the following:- It allows you to use a CONV layer without necessarily shrinking the height and width of the volumes. This is important for building deeper networks, since otherwise the height/width would shrink as you go to deeper layers. An important special case is the "same" convolution, in which the height/width is exactly preserved after one layer. - It helps us keep more of the information at the border of an image. Without padding, very few values at the next layer would be affected by pixels as the edges of an image.**Exercise**: Implement the following function, which pads all the images of a batch of examples X with zeros. [Use np.pad](https://docs.scipy.org/doc/numpy/reference/generated/numpy.pad.html). Note if you want to pad the array "a" of shape $(5,5,5,5,5)$ with `pad = 1` for the 2nd dimension, `pad = 3` for the 4th dimension and `pad = 0` for the rest, you would do:```pythona = np.pad(a, ((0,0), (1,1), (0,0), (3,3), (0,0)), 'constant', constant_values = (..,..))```
###Code
# GRADED FUNCTION: zero_pad
def zero_pad(X, pad):
"""
Pad with zeros all images of the dataset X. The padding is applied to the height and width of an image,
as illustrated in Figure 1.
Argument:
X -- python numpy array of shape (m, n_H, n_W, n_C) representing a batch of m images
pad -- integer, amount of padding around each image on vertical and horizontal dimensions
Returns:
X_pad -- padded image of shape (m, n_H + 2*pad, n_W + 2*pad, n_C)
"""
### START CODE HERE ### (≈ 1 line)
X_pad = np.pad(X, ((0, 0), (pad, pad), (pad, pad), (0, 0)), 'constant')
### END CODE HERE ###
return X_pad
np.random.seed(1)
x = np.random.randn(4, 3, 3, 2)
x_pad = zero_pad(x, 2)
print ("x.shape =", x.shape)
print ("x_pad.shape =", x_pad.shape)
print ("x[1,1] =", x[1,1])
print ("x_pad[1,1] =", x_pad[1,1])
fig, axarr = plt.subplots(1, 2)
axarr[0].set_title('x')
axarr[0].imshow(x[0,:,:,0])
axarr[1].set_title('x_pad')
axarr[1].imshow(x_pad[0,:,:,0])
###Output
x.shape = (4, 3, 3, 2)
x_pad.shape = (4, 7, 7, 2)
x[1,1] = [[ 0.90085595 -0.68372786]
[-0.12289023 -0.93576943]
[-0.26788808 0.53035547]]
x_pad[1,1] = [[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]
[ 0. 0.]]
###Markdown
**Expected Output**: **x.shape**: (4, 3, 3, 2) **x_pad.shape**: (4, 7, 7, 2) **x[1,1]**: [[ 0.90085595 -0.68372786] [-0.12289023 -0.93576943] [-0.26788808 0.53035547]] **x_pad[1,1]**: [[ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.] [ 0. 0.]] 3.2 - Single step of convolution In this part, implement a single step of convolution, in which you apply the filter to a single position of the input. This will be used to build a convolutional unit, which: - Takes an input volume - Applies a filter at every position of the input- Outputs another volume (usually of different size) **Figure 2** : **Convolution operation** with a filter of 2x2 and a stride of 1 (stride = amount you move the window each time you slide) In a computer vision application, each value in the matrix on the left corresponds to a single pixel value, and we convolve a 3x3 filter with the image by multiplying its values element-wise with the original matrix, then summing them up and adding a bias. In this first step of the exercise, you will implement a single step of convolution, corresponding to applying a filter to just one of the positions to get a single real-valued output. Later in this notebook, you'll apply this function to multiple positions of the input to implement the full convolutional operation. **Exercise**: Implement conv_single_step(). [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.sum.html).
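For intuition, here is a tiny worked example of this single step on a made-up 2x2, single-channel slice; the graded function below generalises it to 3D slices:

```python
import numpy as np

a_slice = np.array([[1., 2.],
                    [3., 4.]])
W_small = np.array([[0., 1.],
                    [1., 0.]])
b_small = 0.5

Z_small = np.sum(a_slice * W_small) + b_small   # (2 + 3) + 0.5 = 5.5
print(Z_small)
```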
###Code
# GRADED FUNCTION: conv_single_step
def conv_single_step(a_slice_prev, W, b):
"""
Apply one filter defined by parameters W on a single slice (a_slice_prev) of the output activation
of the previous layer.
Arguments:
a_slice_prev -- slice of input data of shape (f, f, n_C_prev)
W -- Weight parameters contained in a window - matrix of shape (f, f, n_C_prev)
b -- Bias parameters contained in a window - matrix of shape (1, 1, 1)
Returns:
Z -- a scalar value, result of convolving the sliding window (W, b) on a slice x of the input data
"""
### START CODE HERE ### (≈ 2 lines of code)
# Element-wise product between a_slice and W. Do not add the bias yet.
s = np.multiply(a_slice_prev, W)
# Sum over all entries of the volume s.
Z = np.sum(s)
# Add bias b to Z. Cast b to a float() so that Z results in a scalar value.
Z = Z + float(b)
### END CODE HERE ###
return Z
np.random.seed(1)
a_slice_prev = np.random.randn(4, 4, 3)
W = np.random.randn(4, 4, 3)
b = np.random.randn(1, 1, 1)
Z = conv_single_step(a_slice_prev, W, b)
print("Z =", Z)
###Output
Z = -6.99908945068
###Markdown
**Expected Output**: **Z** -6.99908945068 3.3 - Convolutional Neural Networks - Forward passIn the forward pass, you will take many filters and convolve them on the input. Each 'convolution' gives you a 2D matrix output. You will then stack these outputs to get a 3D volume: **Exercise**: Implement the function below to convolve the filters W on an input activation A_prev. This function takes as input A_prev, the activations output by the previous layer (for a batch of m inputs), F filters/weights denoted by W, and a bias vector denoted by b, where each filter has its own (single) bias. Finally you also have access to the hyperparameters dictionary which contains the stride and the padding. **Hint**: 1. To select a 2x2 slice at the upper left corner of a matrix "a_prev" (shape (5,5,3)), you would do:```pythona_slice_prev = a_prev[0:2,0:2,:]```This will be useful when you will define `a_slice_prev` below, using the `start/end` indexes you will define.2. To define a_slice you will need to first define its corners `vert_start`, `vert_end`, `horiz_start` and `horiz_end`. This figure may be helpful for you to find how each of the corner can be defined using h, w, f and s in the code below. **Figure 3** : **Definition of a slice using vertical and horizontal start/end (with a 2x2 filter)** This figure shows only a single channel. **Reminder**:The formulas relating the output shape of the convolution to the input shape is:$$ n_H = \lfloor \frac{n_{H_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f + 2 \times pad}{stride} \rfloor +1 $$$$ n_C = \text{number of filters used in the convolution}$$For this exercise, we won't worry about vectorization, and will just implement everything with for-loops.
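As a quick sanity check of the shape formulas above, using the same hyperparameters as the test cell further below (a 4x4 input, f = 2, pad = 2, stride = 2):

```python
n_H_prev, f, pad, stride = 4, 2, 2, 2
n_H = int((n_H_prev - f + 2 * pad) / stride) + 1
print(n_H)   # 4, so the output volume Z has shape (m, 4, 4, n_C)
```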
###Code
# GRADED FUNCTION: conv_forward
def conv_forward(A_prev, W, b, hparameters):
"""
Implements the forward propagation for a convolution function
Arguments:
A_prev -- output activations of the previous layer, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
W -- Weights, numpy array of shape (f, f, n_C_prev, n_C)
b -- Biases, numpy array of shape (1, 1, 1, n_C)
hparameters -- python dictionary containing "stride" and "pad"
Returns:
Z -- conv output, numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward() function
"""
### START CODE HERE ###
# Retrieve dimensions from A_prev's shape (≈1 line)
(m, n_H_prev, n_W_prev, n_C_prev) = np.shape(A_prev)
# Retrieve dimensions from W's shape (≈1 line)
(f, f, n_C_prev, n_C) = np.shape(W)
# Retrieve information from "hparameters" (≈2 lines)
stride = hparameters["stride"]
pad = hparameters["pad"]
# Compute the dimensions of the CONV output volume using the formula given above. Hint: use int() to floor. (≈2 lines)
n_H = int((n_H_prev - f + 2 * pad)/stride) + 1
n_W = int((n_W_prev - f + 2 * pad)/stride) + 1
# Initialize the output volume Z with zeros. (≈1 line)
Z = np.zeros((m, n_H, n_W, n_C))
# Create A_prev_pad by padding A_prev
A_prev_pad = zero_pad(A_prev, pad)
height = 0
width = 0
for i in range(m): # loop over the batch of training examples
a_prev_pad = A_prev_pad[i, :, :, :] # Select ith training example's padded activation
height = 0
for h in range(n_H): # loop over vertical axis of the output volume
width = 0
for w in range(n_W): # loop over horizontal axis of the output volume
for c in range(n_C): # loop over channels (= #filters) of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = height
vert_end = f + height
horiz_start = width
horiz_end = f + width
# Use the corners to define the (3D) slice of a_prev_pad (See Hint above the cell). (≈1 line)
a_slice_prev = a_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
# Convolve the (3D) slice with the correct filter W and bias b, to get back one output neuron. (≈1 line)
Z[i, h, w, c] = conv_single_step(a_slice_prev, W[:, :, :, c], b[:, :, :, c])
width = width + stride
height = height + stride
### END CODE HERE ###
# Making sure your output shape is correct
assert(Z.shape == (m, n_H, n_W, n_C))
# Save information in "cache" for the backprop
cache = (A_prev, W, b, hparameters)
return Z, cache
np.random.seed(1)
A_prev = np.random.randn(10,4,4,3)
W = np.random.randn(2,2,3,8)
b = np.random.randn(1,1,1,8)
hparameters = {"pad" : 2,
"stride": 2}
Z, cache_conv = conv_forward(A_prev, W, b, hparameters)
print("Z's mean =", np.mean(Z))
print("Z[3,2,1] =", Z[3,2,1])
print("cache_conv[0][1][2][3] =", cache_conv[0][1][2][3])
###Output
Z's mean = 0.0489952035289
Z[3,2,1] = [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437
5.18531798 8.75898442]
cache_conv[0][1][2][3] = [-0.20075807 0.18656139 0.41005165]
###Markdown
**Expected Output**: **Z's mean** 0.0489952035289 **Z[3,2,1]** [-0.61490741 -6.7439236 -2.55153897 1.75698377 3.56208902 0.53036437 5.18531798 8.75898442] **cache_conv[0][1][2][3]** [-0.20075807 0.18656139 0.41005165] Finally, a CONV layer should also contain an activation, in which case we would add the following line of code:```python Convolve the window to get back one output neuronZ[i, h, w, c] = ... Apply activationA[i, h, w, c] = activation(Z[i, h, w, c])```You don't need to do it here. 4 - Pooling layer The pooling (POOL) layer reduces the height and width of the input. It helps reduce computation, as well as helps make feature detectors more invariant to their position in the input. The two types of pooling layers are: - Max-pooling layer: slides an ($f, f$) window over the input and stores the max value of the window in the output.- Average-pooling layer: slides an ($f, f$) window over the input and stores the average value of the window in the output.These pooling layers have no parameters for backpropagation to train. However, they have hyperparameters such as the window size $f$. This specifies the height and width of the fxf window you would compute a max or average over. 4.1 - Forward PoolingNow, you are going to implement MAX-POOL and AVG-POOL, in the same function. **Exercise**: Implement the forward pass of the pooling layer. Follow the hints in the comments below.**Reminder**:As there's no padding, the formulas binding the output shape of the pooling to the input shape are:$$ n_H = \lfloor \frac{n_{H_{prev}} - f}{stride} \rfloor +1 $$$$ n_W = \lfloor \frac{n_{W_{prev}} - f}{stride} \rfloor +1 $$$$ n_C = n_{C_{prev}}$$
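Here is a small sketch of what a single pooling window computes, together with the no-padding shape formula evaluated for the f = 3, stride = 2, 4x4 input used by the test cell further below; the 2x2 window values are made up for illustration and this is not the graded function.

```python
# What a single pooling window computes, plus the no-padding shape formula
# with the f = 3, stride = 2, 4x4 input used by the test cell below. Illustration only.
import numpy as np

window = np.array([[1., 4.], [2., 3.]])   # a single window of one channel
print(np.max(window))    # 4.0 -> stored by max-pooling
print(np.mean(window))   # 2.5 -> stored by average-pooling

n_H_prev, f, stride = 4, 3, 2
n_H = int(1 + (n_H_prev - f) / stride)
print(n_H)               # 1, matching the 1x1 spatial output below
```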
###Code
# GRADED FUNCTION: pool_forward
def pool_forward(A_prev, hparameters, mode = "max"):
"""
Implements the forward pass of the pooling layer
Arguments:
A_prev -- Input data, numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
hparameters -- python dictionary containing "f" and "stride"
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
A -- output of the pool layer, a numpy array of shape (m, n_H, n_W, n_C)
cache -- cache used in the backward pass of the pooling layer, contains the input and hparameters
"""
# Retrieve dimensions from the input shape
(m, n_H_prev, n_W_prev, n_C_prev) = A_prev.shape
# Retrieve hyperparameters from "hparameters"
f = hparameters["f"]
stride = hparameters["stride"]
# Define the dimensions of the output
n_H = int(1 + (n_H_prev - f) / stride)
n_W = int(1 + (n_W_prev - f) / stride)
n_C = n_C_prev
# Initialize output matrix A
A = np.zeros((m, n_H, n_W, n_C))
### START CODE HERE ###
height = 0
width = 0
for i in range(m): # loop over the training examples
height = 0
for h in range(n_H): # loop on the vertical axis of the output volume
width = 0
for w in range(n_W): # loop on the horizontal axis of the output volume
for c in range (n_C): # loop over the channels of the output volume
# Find the corners of the current "slice" (≈4 lines)
vert_start = height
vert_end = height + f
horiz_start = width
horiz_end = width + f
# Use the corners to define the current slice on the ith training example of A_prev, channel c. (≈1 line)
a_prev_slice = A_prev[i, vert_start:vert_end, horiz_start:horiz_end, c]
                    # Compute the pooling operation on the slice. Use an if statement to differentiate the modes. Use np.max/np.mean.
if mode == "max":
A[i, h, w, c] = np.max(a_prev_slice)
elif mode == "average":
A[i, h, w, c] = np.mean(a_prev_slice)
width = width + stride
height = height + stride
### END CODE HERE ###
# Store the input and hparameters in "cache" for pool_backward()
cache = (A_prev, hparameters)
# Making sure your output shape is correct
assert(A.shape == (m, n_H, n_W, n_C))
return A, cache
np.random.seed(1)
A_prev = np.random.randn(2, 4, 4, 3)
hparameters = {"stride" : 2, "f": 3}
A, cache = pool_forward(A_prev, hparameters)
print("mode = max")
print("A =", A)
print()
A, cache = pool_forward(A_prev, hparameters, mode = "average")
print("mode = average")
print("A =", A)
###Output
mode = max
A = [[[[ 1.74481176 0.86540763 1.13376944]]]
[[[ 1.13162939 1.51981682 2.18557541]]]]
mode = average
A = [[[[ 0.02105773 -0.20328806 -0.40389855]]]
[[[-0.22154621 0.51716526 0.48155844]]]]
###Markdown
**Expected Output:** A = [[[[ 1.74481176 0.86540763 1.13376944]]] [[[ 1.13162939 1.51981682 2.18557541]]]] A = [[[[ 0.02105773 -0.20328806 -0.40389855]]] [[[-0.22154621 0.51716526 0.48155844]]]] Congratulations! You have now implemented the forward passes of all the layers of a convolutional network. The remainder of this notebook is optional, and will not be graded. 5 - Backpropagation in convolutional neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers don't need to bother with the details of the backward pass. The backward pass for convolutional networks is complicated. If you wish, however, you can work through this optional portion of the notebook to get a sense of what backprop in a convolutional network looks like. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in convolutional neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are not trivial and we did not derive them in lecture, but we present them briefly below. 5.1 - Convolutional layer backward pass Let's start by implementing the backward pass for a CONV layer. 5.1.1 - Computing dA:This is the formula for computing $dA$ with respect to the cost for a certain filter $W_c$ and a given training example:$$ dA += \sum _{h=0} ^{n_H} \sum_{w=0} ^{n_W} W_c \times dZ_{hw} \tag{1}$$Where $W_c$ is a filter and $dZ_{hw}$ is a scalar corresponding to the gradient of the cost with respect to the output of the conv layer Z at the hth row and wth column (corresponding to the dot product taken at the ith stride left and jth stride down). Note that at each time, we multiply the same filter $W_c$ by a different dZ when updating dA. We do so mainly because when computing the forward propagation, each filter is dotted and summed by a different a_slice. Therefore when computing the backprop for dA, we are just adding the gradients of all the a_slices. In code, inside the appropriate for-loops, this formula translates into:```pythonda_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += W[:,:,:,c] * dZ[i, h, w, c]``` 5.1.2 - Computing dW:This is the formula for computing $dW_c$ ($dW_c$ is the derivative of one filter) with respect to the loss:$$ dW_c += \sum _{h=0} ^{n_H} \sum_{w=0} ^ {n_W} a_{slice} \times dZ_{hw} \tag{2}$$Where $a_{slice}$ corresponds to the slice which was used to generate the activation $Z_{ij}$. Hence, this ends up giving us the gradient for $W$ with respect to that slice. Since it is the same $W$, we will just add up all such gradients to get $dW$. In code, inside the appropriate for-loops, this formula translates into:```pythondW[:,:,:,c] += a_slice * dZ[i, h, w, c]``` 5.1.3 - Computing db:This is the formula for computing $db$ with respect to the cost for a certain filter $W_c$:$$ db = \sum_h \sum_w dZ_{hw} \tag{3}$$As you have previously seen in basic neural networks, db is computed by summing $dZ$. In this case, you are just summing over all the gradients of the conv output (Z) with respect to the cost. In code, inside the appropriate for-loops, this formula translates into:```pythondb[:,:,:,c] += dZ[i, h, w, c]```**Exercise**: Implement the `conv_backward` function below.
You should sum over all the training examples, filters, heights, and widths. You should then compute the derivatives using formulas 1, 2 and 3 above.
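Before the fill-in template, here is a hedged sketch of what the three updates look like for one output position (i, h, w, c), assuming the same slicing convention as conv_forward above; the toy shapes and the 0.5 upstream gradient are made up for illustration.

```python
# Hedged sketch of the three gradient updates for one output position (i, h, w, c),
# assuming the same slicing convention as conv_forward above. Toy shapes and the
# 0.5 upstream gradient are made up for illustration.
import numpy as np

f, n_C_prev = 2, 3
np.random.seed(1)
a_slice = np.random.randn(f, f, n_C_prev)   # window that produced Z[i, h, w, c]
W_c = np.random.randn(f, f, n_C_prev)       # filter number c, i.e. W[:, :, :, c]
dZ_ihwc = 0.5                               # upstream gradient dZ[i, h, w, c]

da_slice = W_c * dZ_ihwc       # added into da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :]
dW_update = a_slice * dZ_ihwc  # added into dW[:, :, :, c]
db_update = dZ_ihwc            # added into db[:, :, :, c]
print(da_slice.shape, dW_update.shape)   # (2, 2, 3) (2, 2, 3)
```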
###Code
def conv_backward(dZ, cache):
"""
Implement the backward propagation for a convolution function
Arguments:
dZ -- gradient of the cost with respect to the output of the conv layer (Z), numpy array of shape (m, n_H, n_W, n_C)
cache -- cache of values needed for the conv_backward(), output of conv_forward()
Returns:
dA_prev -- gradient of the cost with respect to the input of the conv layer (A_prev),
numpy array of shape (m, n_H_prev, n_W_prev, n_C_prev)
dW -- gradient of the cost with respect to the weights of the conv layer (W)
numpy array of shape (f, f, n_C_prev, n_C)
db -- gradient of the cost with respect to the biases of the conv layer (b)
numpy array of shape (1, 1, 1, n_C)
"""
### START CODE HERE ###
# Retrieve information from "cache"
(A_prev, W, b, hparameters) = None
# Retrieve dimensions from A_prev's shape
(m, n_H_prev, n_W_prev, n_C_prev) = None
# Retrieve dimensions from W's shape
(f, f, n_C_prev, n_C) = None
# Retrieve information from "hparameters"
stride = None
pad = None
# Retrieve dimensions from dZ's shape
(m, n_H, n_W, n_C) = None
# Initialize dA_prev, dW, db with the correct shapes
dA_prev = None
dW = None
db = None
# Pad A_prev and dA_prev
A_prev_pad = None
dA_prev_pad = None
for i in range(None): # loop over the training examples
# select ith training example from A_prev_pad and dA_prev_pad
a_prev_pad = None
da_prev_pad = None
for h in range(None): # loop over vertical axis of the output volume
for w in range(None): # loop over horizontal axis of the output volume
for c in range(None): # loop over the channels of the output volume
# Find the corners of the current "slice"
vert_start = None
vert_end = None
horiz_start = None
horiz_end = None
# Use the corners to define the slice from a_prev_pad
a_slice = None
# Update gradients for the window and the filter's parameters using the code formulas given above
da_prev_pad[vert_start:vert_end, horiz_start:horiz_end, :] += None
dW[:,:,:,c] += None
db[:,:,:,c] += None
        # Set the ith training example's dA_prev to the unpadded da_prev_pad (Hint: use X[pad:-pad, pad:-pad, :])
dA_prev[i, :, :, :] = None
### END CODE HERE ###
# Making sure your output shape is correct
assert(dA_prev.shape == (m, n_H_prev, n_W_prev, n_C_prev))
return dA_prev, dW, db
np.random.seed(1)
dA, dW, db = conv_backward(Z, cache_conv)
print("dA_mean =", np.mean(dA))
print("dW_mean =", np.mean(dW))
print("db_mean =", np.mean(db))
###Output
_____no_output_____
###Markdown
** Expected Output: ** **dA_mean** 1.45243777754 **dW_mean** 1.72699145831 **db_mean** 7.83923256462 5.2 Pooling layer - backward passNext, let's implement the backward pass for the pooling layer, starting with the MAX-POOL layer. Even though a pooling layer has no parameters for backprop to update, you still need to backpropagate the gradient through the pooling layer in order to compute gradients for layers that came before the pooling layer. 5.2.1 Max pooling - backward pass Before jumping into the backpropagation of the pooling layer, you are going to build a helper function called `create_mask_from_window()` which does the following: $$ X = \begin{bmatrix}1 && 3 \\4 && 2\end{bmatrix} \quad \rightarrow \quad M =\begin{bmatrix}0 && 0 \\1 && 0\end{bmatrix}\tag{4}$$As you can see, this function creates a "mask" matrix which keeps track of where the maximum of the matrix is. True (1) indicates the position of the maximum in X, the other entries are False (0). You'll see later that the backward pass for average pooling will be similar to this but using a different mask. **Exercise**: Implement `create_mask_from_window()`. This function will be helpful for pooling backward. Hints:- [np.max()]() may be helpful. It computes the maximum of an array.- If you have a matrix X and a scalar x: `A = (X == x)` will return a matrix A of the same size as X such that:```A[i,j] = True if X[i,j] = xA[i,j] = False if X[i,j] != x```- Here, you don't need to consider cases where there are several maxima in a matrix.
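As a minimal sketch of equation (4) using the hint above (the toy matrix is the one from the equation):

```python
# One-line sketch of the masking idea in equation (4), using the hint above.
import numpy as np

x = np.array([[1., 3.], [4., 2.]])
mask = (x == np.max(x))   # True only where x attains its maximum
print(mask)
# [[False False]
#  [ True False]]
```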
###Code
def create_mask_from_window(x):
"""
Creates a mask from an input matrix x, to identify the max entry of x.
Arguments:
x -- Array of shape (f, f)
Returns:
mask -- Array of the same shape as window, contains a True at the position corresponding to the max entry of x.
"""
### START CODE HERE ### (≈1 line)
mask = None
### END CODE HERE ###
return mask
np.random.seed(1)
x = np.random.randn(2,3)
mask = create_mask_from_window(x)
print('x = ', x)
print("mask = ", mask)
###Output
_____no_output_____
###Markdown
**Expected Output:** **x =**[[ 1.62434536 -0.61175641 -0.52817175] [-1.07296862 0.86540763 -2.3015387 ]] **mask =**[[ True False False] [False False False]] Why do we keep track of the position of the max? It's because this is the input value that ultimately influenced the output, and therefore the cost. Backprop is computing gradients with respect to the cost, so anything that influences the ultimate cost should have a non-zero gradient. So, backprop will "propagate" the gradient back to this particular input value that had influenced the cost. 5.2.2 - Average pooling - backward pass In max pooling, for each input window, all the "influence" on the output came from a single input value--the max. In average pooling, every element of the input window has equal influence on the output. So to implement backprop, you will now implement a helper function that reflects this.For example if we did average pooling in the forward pass using a 2x2 filter, then the mask you'll use for the backward pass will look like: $$ dZ = 1 \quad \rightarrow \quad dZ =\begin{bmatrix}1/4 && 1/4 \\1/4 && 1/4\end{bmatrix}\tag{5}$$This implies that each position in the $dZ$ matrix contributes equally to output because in the forward pass, we took an average. **Exercise**: Implement the function below to equally distribute a value dz through a matrix of dimension shape. [Hint](https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.ones.html)
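A minimal sketch of the distribution idea, using the dz = 2 and (2, 2) shape from the test cell and expected output below:

```python
# Sketch of spreading a gradient evenly over an average-pooling window, using the
# dz = 2 and (2, 2) shape from the test cell and expected output below.
import numpy as np

dz = 2.
n_H, n_W = 2, 2
a = np.ones((n_H, n_W)) * dz / (n_H * n_W)
print(a)
# [[0.5 0.5]
#  [0.5 0.5]]
```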
###Code
def distribute_value(dz, shape):
"""
Distributes the input value in the matrix of dimension shape
Arguments:
dz -- input scalar
shape -- the shape (n_H, n_W) of the output matrix for which we want to distribute the value of dz
Returns:
a -- Array of size (n_H, n_W) for which we distributed the value of dz
"""
### START CODE HERE ###
# Retrieve dimensions from shape (≈1 line)
(n_H, n_W) = None
# Compute the value to distribute on the matrix (≈1 line)
average = None
# Create a matrix where every entry is the "average" value (≈1 line)
a = None
### END CODE HERE ###
return a
a = distribute_value(2, (2,2))
print('distributed value =', a)
###Output
_____no_output_____
###Markdown
**Expected Output**: distributed_value =[[ 0.5 0.5] [ 0.5 0.5]] 5.2.3 Putting it together: Pooling backward You now have everything you need to compute backward propagation on a pooling layer.**Exercise**: Implement the `pool_backward` function in both modes (`"max"` and `"average"`). You will once again use 4 for-loops (iterating over training examples, height, width, and channels). You should use an `if/elif` statement to see if the mode is equal to `'max'` or `'average'`. If it is equal to 'average' you should use the `distribute_value()` function you implemented above to create a matrix of the same shape as `a_slice`. Otherwise, the mode is equal to '`max`', and you will create a mask with `create_mask_from_window()` and multiply it by the corresponding value of dZ.
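Before the template, here is a hedged sketch of the per-window update in both modes for a single output position, reusing the toy window from the masking sketch above; the variable names mirror the template below, but the values are made up for illustration.

```python
# Hedged sketch of the per-window update in both modes for a single output position;
# variable names mirror the template below, the toy values are for illustration only.
import numpy as np

f = 2
a_prev_slice = np.array([[1., 3.], [4., 2.]])   # forward-pass input window
dA_value = 0.8                                  # upstream gradient for this window's output

da_window_max = (a_prev_slice == np.max(a_prev_slice)) * dA_value   # "max": all gradient goes to the max entry
da_window_avg = np.ones((f, f)) * dA_value / (f * f)                # "average": gradient spread evenly
print(da_window_max)
print(da_window_avg)
```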
###Code
def pool_backward(dA, cache, mode = "max"):
"""
Implements the backward pass of the pooling layer
Arguments:
dA -- gradient of cost with respect to the output of the pooling layer, same shape as A
cache -- cache output from the forward pass of the pooling layer, contains the layer's input and hparameters
mode -- the pooling mode you would like to use, defined as a string ("max" or "average")
Returns:
dA_prev -- gradient of cost with respect to the input of the pooling layer, same shape as A_prev
"""
### START CODE HERE ###
# Retrieve information from cache (≈1 line)
(A_prev, hparameters) = None
# Retrieve hyperparameters from "hparameters" (≈2 lines)
stride = None
f = None
# Retrieve dimensions from A_prev's shape and dA's shape (≈2 lines)
m, n_H_prev, n_W_prev, n_C_prev = None
m, n_H, n_W, n_C = None
# Initialize dA_prev with zeros (≈1 line)
dA_prev = None
for i in range(None): # loop over the training examples
# select training example from A_prev (≈1 line)
a_prev = None
for h in range(None): # loop on the vertical axis
for w in range(None): # loop on the horizontal axis
for c in range(None): # loop over the channels (depth)
# Find the corners of the current "slice" (≈4 lines)
vert_start = None
vert_end = None
horiz_start = None
horiz_end = None
# Compute the backward propagation in both modes.
if mode == "max":
# Use the corners and "c" to define the current slice from a_prev (≈1 line)
a_prev_slice = None
# Create the mask from a_prev_slice (≈1 line)
mask = None
# Set dA_prev to be dA_prev + (the mask multiplied by the correct entry of dA) (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
elif mode == "average":
# Get the value a from dA (≈1 line)
da = None
# Define the shape of the filter as fxf (≈1 line)
shape = None
# Distribute it to get the correct slice of dA_prev. i.e. Add the distributed value of da. (≈1 line)
dA_prev[i, vert_start: vert_end, horiz_start: horiz_end, c] += None
### END CODE ###
# Making sure your output shape is correct
assert(dA_prev.shape == A_prev.shape)
return dA_prev
np.random.seed(1)
A_prev = np.random.randn(5, 5, 3, 2)
hparameters = {"stride" : 1, "f": 2}
A, cache = pool_forward(A_prev, hparameters)
dA = np.random.randn(5, 4, 2, 2)
dA_prev = pool_backward(dA, cache, mode = "max")
print("mode = max")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
print()
dA_prev = pool_backward(dA, cache, mode = "average")
print("mode = average")
print('mean of dA = ', np.mean(dA))
print('dA_prev[1,1] = ', dA_prev[1,1])
###Output
_____no_output_____
|
Planar_data_classification/Planar_data_classification/Planar_data_classification_5.ipynb
|
###Markdown
Import packagesImport the necessary packages needed.
###Code
# import statements
pass
###Output
_____no_output_____
###Markdown
Training the modelIt is time to run the model and see how it performs on a planar dataset. Run the following code to test your model with a single hidden layer of $n_h$ hidden units.- Use `nn_model()` to calculate the model parameters on the X,Y data imported in *lab_2_1*.- Use `predict()` to calculate the model predictions on X and plot the decision boundaries.
###Code
# Build a model with a n_h-dimensional hidden layer
parameters = None
# Plot the decision boundary
plot_decision_boundary(lambda x: predict(None, x.T), X, Y.ravel())
plt.title("Decision Boundary for hidden layer size " + str(4))
###Output
_____no_output_____
###Markdown
**Expected Output**: **Cost after iteration 9000** 0.218607
###Code
# Print accuracy
predictions = predict(parameters, X)
print ('Accuracy: %d' % float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100) + '%')
###Output
_____no_output_____
###Markdown
**Expected Output**: **Accuracy** 90% Accuracy is really high compared to Logistic Regression. The model has learnt the leaf patterns of the flower! Neural networks are able to learn even highly non-linear decision boundaries, unlike logistic regression. Now, let's try out several hidden layer sizes. Tuning hidden layer size In the following code, populate the *hidden_layer_sizes* list with different values such as $[1, 2, 3, 4, 5, 20, 50]$. It may take a few minutes. You will observe different behaviors of the model for various hidden layer sizes.
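For example, the list could be populated with the sizes suggested above (those values come from the text; feel free to try others):

```python
# Example population of the list, using the sizes suggested in the text above.
hidden_layer_sizes = [1, 2, 3, 4, 5, 20, 50]
```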
###Code
# This may take a few minutes to run
plt.figure(figsize=(16, 32))
# populate with different layer size
hidden_layer_sizes = [None]
for i, n_h in enumerate(hidden_layer_sizes):
plt.subplot(5, 2, i+1)
plt.title('Hidden Layer of size %d' % n_h)
parameters = nn_model(X, Y, n_h, num_iterations = 5000)
plot_decision_boundary(lambda x: predict(parameters, x.T), X, Y.ravel())
predictions = predict(parameters, X)
accuracy = float((np.dot(Y,predictions.T) + np.dot(1-Y,1-predictions.T))/float(Y.size)*100)
print ("Accuracy for {} hidden units: {} %".format(n_h, accuracy))
###Output
_____no_output_____
|
Auto_downloaders/Autodownloader_CMEMS/get_data_CMEMS.ipynb
|
###Markdown
Automated CMEMS downloader for all operating systems This routine contains an example of how to use Python to set up an automated downloader of CMEMS data. Version: 1.1 Author: B loveday, PML Notes: 1. The python-motu client must be installed
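The script below shells out to the motu client through subprocess; here is a stripped-down sketch of that pattern. The echo command is just a stand-in so the snippet runs anywhere; the real script substitutes the assembled motu-client call.

```python
# Stripped-down sketch of the pattern used below: build a command string, run it
# through subprocess, then check the exit code. The echo command is a stand-in so
# the snippet runs anywhere; the real script substitutes the motu-client call.
import subprocess

def run_command(command):
    """Run a shell command and return True if it exited with code 0."""
    process = subprocess.Popen(command, stdout=subprocess.PIPE,
                               stderr=subprocess.STDOUT, shell=True)
    process.communicate()
    return process.returncode == 0

print(run_command("echo hello"))   # True
```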
###Code
#!/usr/bin/env python
#-imports-----------------------------------------------------------------------
import os, sys, shutil
import argparse
import logging
import datetime
import subprocess
#-functions---------------------------------------------------------------------
def download_data(Command, logging, verbose=False):
processed_state = 'Downloaded ok'
logging.info('Launching download CMD: '+Command)
try:
        process = subprocess.Popen(Command, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, shell=True)
process.wait()
# Poll process for new output until finished
while True:
nextline = process.stdout.readline()
if nextline == '' and process.poll() is not None:
break
if nextline !='':
logging.info(nextline)
if 'Error' in nextline:
processed_state = nextline
sys.stdout.flush()
output = process.communicate()[0]
exitCode = process.returncode
if (exitCode == 0):
logging.info('Downloading successful')
processed_flag = True
else:
logging.error('Something went wrong in downloading: see above')
processed_flag = False
except:
logging.info('Downloading unsuccessful')
processed_flag = False
processed_state = 'Unknown Error'
return processed_flag, processed_state
#-default parameters------------------------------------------------------------
DEFAULT_LOG_PATH = os.getcwd()
#-args--------------------------------------------------------------------------
#parser = argparse.ArgumentParser()
#args = parser.parse_args()
#-main--------------------------------------------------------------------------
if __name__ == "__main__":
# preliminary stuff
logfile = os.path.join(DEFAULT_LOG_PATH,"CMEMS_DOWNLOAD_"+datetime.datetime.now().strftime('%Y%m%d_%H%M')+".log")
verbose=False
# set file logger
try:
if os.path.exists(logfile):
os.remove(logfile)
print("logging to: "+logfile)
logging.basicConfig(filename=logfile,level=logging.DEBUG)
except:
print("Failed to set logger")
# set our variables
motu_path = os.getcwd()
username = 'your username'
password = 'your password'
outdir = os.path.join(motu_path,'Data')
product_id = 'dataset-duacs-nrt-blacksea-merged-allsat-phy-l4-v3'
service_id = 'SEALEVEL_BS_PHY_L4_NRT_OBSERVATIONS_008_041-TDS'
date_min = datetime.datetime(2018,2,3)
date_max = datetime.datetime(2018,2,4)
lonmin = 27.0625
lonmax = 41.9375
latmin = 40.0625
latmax = 46.9375
variables = ['sla','ugosa','vgosa']
# clear the output directory and make a new one
if os.path.exists(outdir):
shutil.rmtree(outdir)
os.mkdir(outdir)
# set variables
v_string=' --variable '
all_variables = ' '
for vv in variables:
all_variables=v_string+"'"+vv+"'"+all_variables
# loop through dates
this_date = date_min
while this_date <= date_max:
date_format=this_date.strftime('%Y-%m-%d')
outname = product_id+'_'+date_format+'.nc'
        print('---------------------')
print('Saving to: '+outname)
this_date = this_date + datetime.timedelta(days=1)
CMD="python "+motu_path+"/motu-client-python/motu-client.py --user '"+username+"' --pwd '"+password+"' --motu 'http://motu.sltac.cls.fr/motu-web/Motu' --service-id "+service_id+" --product-id '"+product_id+"' --longitude-min '"+str(lonmin)+" ' --longitude-max '"+str(lonmax)+"' --latitude-min '"+str(latmin)+"' --latitude-max '"+str(latmax)+"' --date-min '"+date_format+"' --date-max '"+date_format+"' "+all_variables+" --out-dir '"+outdir+"' --out-name '"+outname+"'"
if verbose:
            print(CMD)
flag, state = download_data(CMD,logging)
###Output
logging to: /Users/benloveday/Documents/Code/Autodownloader_CMEMS/CMEMS_DOWNLOAD_20180204_1105.log
---------------------
Saving to: dataset-duacs-nrt-blacksea-merged-allsat-phy-l4-v3_2018-02-03.nc
---------------------
Saving to: dataset-duacs-nrt-blacksea-merged-allsat-phy-l4-v3_2018-02-04.nc
|
Machine Learning/Applications/Streamlit/Fake News Classification/models/Gated Recurrent Units.ipynb
|
###Markdown
We have all seen fake news forwards on our WhatsApp messages. Generally, these articles are generated by bots and internet trolls and are used with the intent to intrigue the audience and mislead them. Fake news can be very dangerous as it can spread misinformation and incite rage among the public. It is now becoming a serious problem in India due to more and more people using social media and lower levels of digital awareness. Demo Here's a screen recording of the Model in action. I copied an article from an authentic and reputable news source, pasted it into the text block and ran inference. As you can see, the model gave the correct prediction of the article being Real.  Approach UsedBidirectional Recurrent Neural Networks (BRNN) connect two hidden layers of opposite directions to the same output. With this form of generative deep learning, the output layer can get information from past (backwards) and future (forward) states simultaneously. Invented in 1997 by Schuster and Paliwal, BRNNs were introduced to increase the amount of input information available to the network. Standard recurrent neural networks (RNNs) also have restrictions as the future input information cannot be reached from the current state. On the contrary, BRNNs do not require their input data to be fixed. Moreover, their future input information is reachable from the current state. Importing Libraries In this notebook I'd like to continue the work of [Atish Adhikari](https://www.kaggle.com/atishadhikari). In his [notebook](https://www.kaggle.com/atishadhikari/fake-news-cleaning-word2vec-lstm-99-accuracy), he proposes a novel approach for News Classification.We'll use the following modules: * [numpy](https://numpy.org/doc/stable/reference/index.html)* [pandas](https://pandas.pydata.org/docs/reference/index.html)* [tensorflow](https://www.tensorflow.org/api_docs/python/tf)* [tensorflow_datasets](https://www.tensorflow.org/datasets/overview?hl=en)
###Code
import numpy as np # For Linear Algebra
import pandas as pd # For I/O, Data Transformation
import tensorflow as tf # Tensorflow
import tensorflow_datasets as tfds # For the SubTextEncoder
import os
for dirname, _, filenames in os.walk('/kaggle/input'):
for filename in filenames:
print(os.path.join(dirname, filename))
###Output
/kaggle/input/fake-and-real-news-dataset/True.csv
/kaggle/input/fake-and-real-news-dataset/Fake.csv
###Markdown
Pre-Processing and Cleaning The original dataset doesn't have any class variables associated with the instances. Thus, to enable supervised learning we add another "**class**" variable to the DataFrames. Also, to get a reliable and authentic score for classification we concatenate the "**text**" and "**title**" columns. We then drop the redundant columns from both the DataFrames. Then, we just make a single DataFrame out of both the DataFrames.
###Code
fakedataset = pd.read_csv("/kaggle/input/fake-and-real-news-dataset/Fake.csv") # Make a DataFrame for Fake News
realdataset = pd.read_csv("/kaggle/input/fake-and-real-news-dataset/True.csv") # Make a DataFrame for Real News
realdataset["class"] = 1 # Adding Class to Real News
fakedataset["class"] = 0 # Adding Class to Fake News
realdataset["text"] = realdataset["title"] + " " + realdataset["text"] # Concatenating Text and Title into a single column for Real News DataFrame
fakedataset["text"] = fakedataset["title"] + " " + fakedataset["text"] # Concatenating Text and Title into a single column for Fake News DataFrame
realdataset = realdataset.drop(["subject", "date", "title"], axis = 1) # Removing Redundant features from Real News DataFrame
fakedataset = fakedataset.drop(["subject", "date", "title"], axis = 1) # Removing Redundant features from Fake News DataFrame
dataset = realdataset.append(fakedataset, ignore_index = True) # Making a Single DataFrame
del realdataset, fakedataset
###Output
_____no_output_____
###Markdown
Encoding the Corpus To encode the corpus, we use the [**SubwordTextEncoder**](https://www.tensorflow.org/datasets/api_docs/python/tfds/features/text/SubwordTextEncoder) from tfds.features.text's **build_from_corpus** function. We set a vocab_size of 10,000 and then use the "**text**" column from the DataFrame.
###Code
vocab_size = 10000
encoder = tfds.deprecated.text.SubwordTextEncoder.build_from_corpus(dataset["text"], vocab_size)
###Output
_____no_output_____
###Markdown
Here, we create a function to encode the DataFrame by looping through all the sentences in the corpus, with "**post**" padding using the [**tf.keras.preprocessing.sequence.pad_sequences()**](https://www.tensorflow.org/api_docs/python/tf/keras/preprocessing/sequence/pad_sequences?hl=en) function.
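As a standalone illustration of what "post" padding does (the toy token lists below are made up; the real call receives the encoder output for the whole corpus):

```python
# Standalone illustration of "post" padding; the toy token lists are made up,
# the real call receives the encoder output.
import tensorflow as tf

tokenized = [[5, 3, 8], [2, 7]]
padded = tf.keras.preprocessing.sequence.pad_sequences(tokenized, padding="post")
print(padded)
# [[5 3 8]
#  [2 7 0]]
```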
###Code
def enc(dataframe):
tokenized = []
for sentence in dataframe["text"].values:
tokenized.append(encoder.encode(sentence))
out = tf.keras.preprocessing.sequence.pad_sequences(tokenized, padding = "post")
return out
x = enc(dataset)
###Output
_____no_output_____
###Markdown
Using the "**class**" column of the Dataset for Supervised Training of the Model
###Code
y = dataset["class"]
print(y)
###Output
0 1
1 1
2 1
3 1
4 1
..
44893 0
44894 0
44895 0
44896 0
44897 0
Name: class, Length: 44898, dtype: int64
###Markdown
Model Definition Here, we define our Model with the following layers:* [Embedding Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Embedding?hl=en)* [Bidirectional GRU Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional?hl=en) with 64 units* [Bidirectional GRU Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Bidirectional?hl=en) with 32 units* [Dense Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?hl=en) with 64 units* [Dropout Layer](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dropout?hl=en) with a 50% droprate* [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense?hl=en) with a single output unitWe then compile the model using:* [Adam Optimiser](https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adam?hl=en)* [Binary Crossentropy Loss](https://www.tensorflow.org/api_docs/python/tf/keras/losses/BinaryCrossentropy?hl=en)* Metrics as [Accuracy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/Accuracy?hl=en)
###Code
# Model Definition
model = tf.keras.Sequential([
tf.keras.layers.Embedding(encoder.vocab_size, 64), # Embedding Layer using the vocab-size from encoder
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(64, return_sequences=True)), # Create the first Bidirectional layer with 64 GRU units
    tf.keras.layers.Bidirectional(tf.keras.layers.GRU(32)), # Second Bidirectional layer with 32 GRU units
tf.keras.layers.Dense(64, activation='relu'), # A Dense Layer with 64 units
tf.keras.layers.Dropout(0.5), # 50% Dropout
tf.keras.layers.Dense(1) # Final Dense layer with a single unit
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics= ['acc']) # Compiling the Model
###Output
_____no_output_____
###Markdown
Training the Model We train the model for just 2 epochs
###Code
history = model.fit(x,y, epochs = 2)
###Output
Epoch 1/2
1404/1404 [==============================] - 2324s 2s/step - loss: 0.0464 - acc: 0.9817
Epoch 2/2
1404/1404 [==============================] - 2341s 2s/step - loss: 0.0019 - acc: 0.9998
###Markdown
Predicting with the Model Here, we write 2 functions to predict using the model: a pad_to_size function to pad our vectors, and a sample_predict function to encode a string and predict using the model.
###Code
def pad_to_size(vec, size):
    zeros = [0] * (size - len(vec))
vec.extend(zeros)
return vec
def sample_predict(sample_pred_text, pad):
encoded_sample_pred_text = encoder.encode(sample_pred_text)
if pad:
encoded_sample_pred_text = pad_to_size(encoded_sample_pred_text, 64)
encoded_sample_pred_text = tf.cast(encoded_sample_pred_text, tf.float32)
predictions = model.predict(tf.expand_dims(encoded_sample_pred_text, 0))
return (predictions)
sample_pred_text = ('The movie was cool. The animation and the graphics')
predictions = sample_predict(sample_pred_text, pad=False)
print(predictions)
###Output
[[-0.44961074]]
###Markdown
Download the Model Weights for Yourself
###Code
model.save('my_model.h5')
import os
from IPython.display import FileLink
FileLink(r'my_model.h5')
###Output
_____no_output_____
|
notebooks/pipeline/pipeline_04.ipynb
|
###Markdown
pipeline 4
###Code
data_in_shape = (9, 9, 2)
conv_0 = Conv2D(5, 3, 3, activation='relu', border_mode='same', subsample=(2, 2), dim_ordering='tf', bias=True)
bn_0 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_1 = Conv2D(4, 1, 1, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_1 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_2 = Conv2D(3, 3, 3, activation='relu', border_mode='same', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_2 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_3 = Conv2D(2, 3, 3, activation='relu', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_3 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
input_layer = Input(shape=data_in_shape)
x = conv_0(input_layer)
x = bn_0(x)
x = conv_1(x)
x = bn_1(x)
x = conv_2(x)
x = bn_2(x)
x = conv_3(x)
output_layer = bn_3(x)
model = Model(input=input_layer, output=output_layer)
np.random.seed(5000)
data_in = 2 * np.random.random(data_in_shape) - 1
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(5000 + i)
if i % 6 == 5:
# std should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(np.array([data_in]))
print({
'input': {'data': format_decimal(data_in.ravel().tolist()), 'shape': list(data_in_shape)},
'weights': [{'data': format_decimal(weights[i].ravel().tolist()), 'shape': list(weights[i].shape)} for i in range(len(weights))],
'expected': {'data': format_decimal(result[0].ravel().tolist()), 'shape': list(result[0].shape)}
})
###Output
{'weights': [{'data': [-0.54190427, -0.27866048, 0.455306, -0.77466439, 0.2155413, 0.63149892, 0.96253877, -0.87251032, 0.5999195, -0.80610289, -0.1982645, 0.32431534, 0.93117182, -0.03819988, -0.47177543, 0.17483424, -0.88284286, 0.19139394, -0.11495341, 0.06681537, 0.18449563, -0.18105407, 0.40700154, -0.92213003, -0.79312868, -0.43548578, -0.6937702, -0.39989327, -0.36228429, 0.39306052, 0.35325382, 0.88492784, -0.18250706, 0.16155788, 0.41390947, -0.78237669, -0.20556843, -0.31064771, 0.25995609, -0.26086483, -0.68690492, -0.84234127, 0.71760244, 0.82241492, 0.66498028, 0.24531482, -0.42529677, -0.1975344, 0.2370744, 0.56347711, 0.82975085, 0.79694468, 0.2928859, -0.22128013, 0.71509939, -0.51856729, -0.06366519, 0.72865484, 0.19756596, 0.93603065, -0.15084021, -0.1689197, 0.41645923, 0.4026665, 0.80837102, -0.3004439, -0.19871903, -0.21682387, -0.38842743, -0.57839535, -0.49843779, 0.21023487, 0.90348714, -0.75704365, 0.00040865, 0.26400099, -0.23104133, -0.94006091, -0.50783639, 0.54894291, 0.31426992, -0.2139014, 0.78043251, 0.853875, -0.91062654, 0.07838259, -0.02629358, 0.47074804, -0.19907572, -0.59608873], 'shape': [3, 3, 2, 5]}, {'data': [-0.61153601, 0.8694064, 0.28018421, 0.96263283, -0.07187857], 'shape': [5]}, {'data': [0.23551283, -0.39464683, 0.89320993, 0.93499946, 0.84763587], 'shape': [5]}, {'data': [0.70368475, -0.90025953, 0.88006859, 0.19645696, 0.12316286], 'shape': [5]}, {'data': [0.56451316, 0.49527774, 0.83890439, -0.10189393, 0.53392238], 'shape': [5]}, {'data': [0.54476614, 0.43296596, 0.82355662, 0.81937529, 0.95590748], 'shape': [5]}, {'data': [-0.64757194, 0.38294579, 0.15387812, 0.90138681, -0.53161741, 0.35252906, -0.02235672, -0.74986305, -0.04463964, 0.00454036, 0.87915417, -0.60734393, 0.96179323, 0.53666761, 0.38496633, 0.42331201, 0.02650542, 0.23362457, -0.24138609, -0.91613239], 'shape': [1, 1, 5, 4]}, {'data': [-0.51744242, 0.26675251, -0.91537145, 0.3509806], 'shape': [4]}, {'data': [-0.49133238, 0.53946673, 0.32629449, -0.5869313], 'shape': [4]}, {'data': [0.52385359, 0.30660211, 0.31233849, 0.06620905], 'shape': [4]}, {'data': [-0.77285789, -0.8460116, -0.4997778, -0.61713712], 'shape': [4]}, {'data': [0.44486243, 0.62358341, 0.51217101, 0.77369451], 'shape': [4]}, {'data': [-0.26641783, 0.21101274, 0.10673114, -0.26512734, -0.88191077, 0.37535685, -0.97515663, -0.73215051, 0.98281271, 0.99204448, 0.96142256, 0.84381878, 0.02804255, 0.95206406, -0.15328345, 0.81950569, 0.28767033, -0.58071021, 0.49915272, -0.25508646, -0.4838326, -0.2001564, 0.20669987, -0.25822963, 0.90178846, -0.06853458, -0.72876868, -0.00192717, 0.4961056, -0.26408008, -0.88339506, -0.05085536, -0.08630077, 0.27701807, 0.67914649, -0.06848802, -0.81702191, 0.20299124, -0.43500192, 0.8438674, 0.93241573, 0.95279356, -0.65085876, -0.96303719, -0.65858238, -0.21449723, 0.98544923, 0.10489501, -0.46444878, 0.28525886, -0.28180049, 0.40566621, -0.09303628, 0.14394578, 0.46452957, -0.12513119, -0.49020586, 0.54100835, 0.98308434, 0.38479304, -0.61824068, -0.20460531, 0.6388524, 0.98037162, -0.9818702, 0.38908975, 0.56118427, 0.88646173, 0.24810736, 0.35984305, 0.10004167, 0.09153771, -0.37469135, 0.32099458, -0.54337686, -0.03246755, 0.16232401, 0.265073, 0.33472883, -0.50945459, -0.34869639, 0.48172934, 0.50818247, 0.65720596, 0.83050092, -0.10554667, 0.46860173, 0.29619646, 0.17816559, 0.38350462, -0.26129366, -0.93324284, 0.76302869, 0.08332493, -0.54487301, -0.34188816, -0.50811034, -0.05639039, 0.50213215, -0.04448456, -0.07471556, 0.27643016, -0.15145411, 0.22111294, 
0.49173953, -0.19818168, 0.27799311, 0.27739911], 'shape': [3, 3, 4, 3]}, {'data': [-0.11340936, -0.91676683, -0.5651004], 'shape': [3]}, {'data': [-0.65488319, 0.4099804, 0.32291475], 'shape': [3]}, {'data': [-0.93498039, 0.68023768, -0.62056578], 'shape': [3]}, {'data': [0.86320517, -0.79710709, 0.30719735], 'shape': [3]}, {'data': [0.78552591, 0.98972743, 0.06610293], 'shape': [3]}, {'data': [-0.90788009, -0.65871158, 0.98369049, 0.29383902, -0.08742277, 0.69663703, 0.82887138, 0.70554946, -0.14470764, 0.13519366, 0.04637206, -0.24907638, 0.19448248, 0.37161779, 0.56028265, 0.49605271, 0.32952396, 0.50606391, -0.94529562, -0.32078199, 0.3111684, 0.98133456, 0.04259265, 0.25723684, 0.08302491, 0.35536265, 0.42758731, -0.67743478, 0.53619969, 0.46189744, -0.03201824, -0.27080139, -0.49775568, 0.29504415, -0.43338293, -0.85852925, -0.57121818, 0.15370162, 0.88746426, -0.82947518, -0.29624711, 0.13686893, 0.05752348, 0.2162744, -0.82797366, -0.61618495, 0.06020317, -0.23374197, 0.13961779, -0.0900274, -0.3206224, 0.87718281, -0.32669526, -0.4710945], 'shape': [3, 3, 3, 2]}, {'data': [0.0231515, -0.51293283], 'shape': [2]}, {'data': [0.13848836, -0.35128712], 'shape': [2]}, {'data': [0.37373003, 0.90556202], 'shape': [2]}, {'data': [0.28104076, -0.95338109], 'shape': [2]}, {'data': [0.13453168, 0.10767889], 'shape': [2]}], 'expected': {'data': [0.81196368, -0.11035025, 0.62276578, -0.11035025, 2.22645187, -0.11035025, 2.66768837, -1.83787632, 0.26800883, -0.11035025, 1.67517114, -0.11035025, 2.20183444, -0.8188796, 0.26800883, -1.61873186, 1.85180569, -1.5101192], 'shape': [3, 3, 2]}, 'input': {'data': [-0.54190427, -0.27866048, 0.455306, -0.77466439, 0.2155413, 0.63149892, 0.96253877, -0.87251032, 0.5999195, -0.80610289, -0.1982645, 0.32431534, 0.93117182, -0.03819988, -0.47177543, 0.17483424, -0.88284286, 0.19139394, -0.11495341, 0.06681537, 0.18449563, -0.18105407, 0.40700154, -0.92213003, -0.79312868, -0.43548578, -0.6937702, -0.39989327, -0.36228429, 0.39306052, 0.35325382, 0.88492784, -0.18250706, 0.16155788, 0.41390947, -0.78237669, -0.20556843, -0.31064771, 0.25995609, -0.26086483, -0.68690492, -0.84234127, 0.71760244, 0.82241492, 0.66498028, 0.24531482, -0.42529677, -0.1975344, 0.2370744, 0.56347711, 0.82975085, 0.79694468, 0.2928859, -0.22128013, 0.71509939, -0.51856729, -0.06366519, 0.72865484, 0.19756596, 0.93603065, -0.15084021, -0.1689197, 0.41645923, 0.4026665, 0.80837102, -0.3004439, -0.19871903, -0.21682387, -0.38842743, -0.57839535, -0.49843779, 0.21023487, 0.90348714, -0.75704365, 0.00040865, 0.26400099, -0.23104133, -0.94006091, -0.50783639, 0.54894291, 0.31426992, -0.2139014, 0.78043251, 0.853875, -0.91062654, 0.07838259, -0.02629358, 0.47074804, -0.19907572, -0.59608873, 0.77239477, 0.54773798, 0.00922646, -0.44019973, 0.81720055, -0.0615295, 0.04580207, -0.76165178, -0.25095654, -0.24994101, 0.45502047, -0.75264239, -0.69142981, 0.02687807, 0.32093283, 0.88250988, 0.61121992, -0.50937295, 0.77718591, 0.40262635, -0.62736296, -0.29367364, -0.36348673, 0.63311157, 0.83600435, -0.90951031, -0.32951743, 0.54277901, 0.24301942, 0.03862923, 0.16270639, 0.48954823, -0.57044853, -0.33256914, -0.78071628, -0.07926009, 0.23073969, -0.51236684, 0.48137712, 0.76199354, 0.07620622, 0.34468054, 0.88032903, 0.85625296, 0.42121203, 0.04009794, 0.79783, 0.7082213, 0.1576071, -0.00959212, 0.61794887, 0.22218222, -0.95200956, -0.83814455, -0.97645341, -0.79525945, 0.23180734, -0.39176507, -0.00617481, -0.35796406, -0.94958437, 0.49854253, 0.35452684, 0.83471916, 0.35123934, 0.6688845, 
0.69015915, 0.68934495, -0.24558832, 0.85902393, 0.88134197, -0.47357725], 'shape': [9, 9, 2]}}
###Markdown
pipeline 4
###Code
random_seed = 1004
data_in_shape = (9, 9, 2)
layers = [
Conv2D(5, (3,3), activation='relu', padding='same', strides=(2,2), data_format='channels_last', use_bias=True),
BatchNormalization(epsilon=1e-03, axis=-1, center=True, scale=True),
Conv2D(4, (1,1), activation='linear', padding='valid', strides=(1,1), data_format='channels_last', use_bias=True),
BatchNormalization(epsilon=1e-03, axis=-1, center=True, scale=True),
Conv2D(3, (3,3), activation='relu', padding='same', strides=(1,1), data_format='channels_last', use_bias=True),
BatchNormalization(epsilon=1e-03, axis=-1, center=True, scale=True),
Conv2D(2, (3,3), activation='relu', padding='valid', strides=(1,1), data_format='channels_last', use_bias=True),
BatchNormalization(epsilon=1e-03, axis=-1, center=True, scale=True)
]
input_layer = Input(shape=data_in_shape)
x = layers[0](input_layer)
for layer in layers[1:-1]:
x = layer(x)
output_layer = layers[-1](x)
model = Model(inputs=input_layer, outputs=output_layer)
np.random.seed(random_seed)
data_in = 2 * np.random.random(data_in_shape) - 1
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(random_seed + i)
if i % 6 == 5:
# std should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(np.array([data_in]))
data_out_shape = result[0].shape
data_in_formatted = format_decimal(data_in.ravel().tolist())
data_out_formatted = format_decimal(result[0].ravel().tolist())
DATA['pipeline_04'] = {
'input': {'data': data_in_formatted, 'shape': data_in_shape},
'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],
'expected': {'data': data_out_formatted, 'shape': data_out_shape}
}
###Output
_____no_output_____
###Markdown
export for Keras.js tests
###Code
import os
filename = '../../test/data/pipeline/04.json'
if not os.path.exists(os.path.dirname(filename)):
os.makedirs(os.path.dirname(filename))
with open(filename, 'w') as f:
json.dump(DATA, f)
print(json.dumps(DATA))
###Output
{"pipeline_04": {"input": {"data": [-0.922097, 0.712992, 0.493001, 0.727856, 0.119969, -0.839034, -0.536727, -0.515472, 0.231, 0.214218, -0.791636, -0.148304, 0.309846, 0.742779, -0.123022, 0.427583, -0.882276, 0.818571, 0.043634, 0.454859, -0.007311, -0.744895, -0.368229, 0.324805, -0.388758, -0.556215, -0.542859, 0.685655, 0.350785, -0.312753, 0.591401, 0.95999, 0.136369, -0.58844, -0.506667, -0.208736, 0.548969, 0.653173, 0.128943, 0.180094, -0.16098, 0.208798, 0.666245, 0.347307, -0.384733, -0.88354, -0.328468, -0.515324, 0.479247, -0.360647, 0.09069, -0.221424, 0.091284, 0.202631, 0.208087, 0.582248, -0.164064, -0.925036, -0.678806, -0.212846, 0.960861, 0.536089, -0.038634, -0.473456, -0.409408, 0.620315, -0.873085, -0.695405, -0.024465, 0.762843, -0.928228, 0.557106, -0.65499, -0.918356, 0.815491, 0.996431, 0.115769, -0.751652, 0.075229, 0.969983, -0.80409, -0.080661, -0.644088, 0.160702, -0.486518, -0.09818, -0.191651, -0.961566, -0.238209, 0.260427, 0.085307, -0.664437, 0.458517, -0.824692, 0.312768, -0.253698, 0.761718, 0.551215, 0.566009, -0.85706, 0.687904, -0.283819, 0.5816, 0.820087, -0.028474, 0.588153, -0.221145, 0.049173, 0.529328, -0.359074, -0.463161, 0.493967, -0.852793, -0.552675, -0.695748, -0.178157, 0.477995, 0.858725, 0.120384, -0.515209, 0.204484, -0.025025, -0.654961, 0.239585, -0.654691, -0.651696, -0.699951, -0.054626, -0.232999, 0.464974, 0.285499, -0.311165, 0.18009, -0.100505, 0.303943, 0.265535, -0.960747, -0.542418, 0.195178, -0.848394, 0.0774, 0.250615, -0.690541, -0.106589, -0.587335, 0.52418, -0.750735, 0.906333, -0.185252, 0.091099, -0.516456, -0.314899, -0.398607, 0.555608, 0.741523, -0.454881, 0.5701, 0.205032, -0.772784, 0.733803, -0.669988, -0.872516], "shape": [9, 9, 2]}, "weights": [{"data": [-0.922097, 0.712992, 0.493001, 0.727856, 0.119969, -0.839034, -0.536727, -0.515472, 0.231, 0.214218, -0.791636, -0.148304, 0.309846, 0.742779, -0.123022, 0.427583, -0.882276, 0.818571, 0.043634, 0.454859, -0.007311, -0.744895, -0.368229, 0.324805, -0.388758, -0.556215, -0.542859, 0.685655, 0.350785, -0.312753, 0.591401, 0.95999, 0.136369, -0.58844, -0.506667, -0.208736, 0.548969, 0.653173, 0.128943, 0.180094, -0.16098, 0.208798, 0.666245, 0.347307, -0.384733, -0.88354, -0.328468, -0.515324, 0.479247, -0.360647, 0.09069, -0.221424, 0.091284, 0.202631, 0.208087, 0.582248, -0.164064, -0.925036, -0.678806, -0.212846, 0.960861, 0.536089, -0.038634, -0.473456, -0.409408, 0.620315, -0.873085, -0.695405, -0.024465, 0.762843, -0.928228, 0.557106, -0.65499, -0.918356, 0.815491, 0.996431, 0.115769, -0.751652, 0.075229, 0.969983, -0.80409, -0.080661, -0.644088, 0.160702, -0.486518, -0.09818, -0.191651, -0.961566, -0.238209, 0.260427], "shape": [3, 3, 2, 5]}, {"data": [0.318429, -0.858397, -0.059042, 0.68597, -0.649837], "shape": [5]}, {"data": [0.486255, -0.547151, 0.285068, 0.764711, 0.481398], "shape": [5]}, {"data": [0.0965, 0.594443, -0.987782, 0.431322, 0.067427], "shape": [5]}, {"data": [0.228005, 0.859479, -0.49018, 0.232871, -0.303968], "shape": [5]}, {"data": [0.61488, 0.164575, 0.300991, 0.273449, 0.795127], "shape": [5]}, {"data": [-0.211487, -0.648815, -0.854588, -0.616238, -0.200391, -0.163753, 0.525164, 0.04282, -0.178234, 0.074889, -0.458875, -0.133347, 0.654533, -0.456294, 0.454776, -0.799519, -0.004428, 0.160632, 0.153349, -0.585922], "shape": [1, 1, 5, 4]}, {"data": [0.311362, -0.228519, 0.253024, -0.775634], "shape": [4]}, {"data": [-0.946541, 0.585593, -0.49527, 0.594532], "shape": [4]}, {"data": [0.114077, -0.889658, -0.472025, 0.718808], "shape": 
[4]}, {"data": [-0.536401, 0.404425, -0.338344, -0.818131], "shape": [4]}, {"data": [0.627511, 0.139377, 0.617668, 0.64835], "shape": [4]}, {"data": [0.677272, 0.414379, 0.565623, 0.358783, 0.401478, -0.335229, 0.52212, 0.822073, -0.215588, 0.496382, -0.508638, 0.597443, -0.380315, 0.375492, -0.491294, 0.342738, -0.671459, -0.345669, -0.372166, -0.957736, -0.46656, 0.423581, -0.318022, -0.031754, 0.556192, 0.398047, 0.601527, 0.534403, -0.299813, -0.25944, 0.698572, 0.547387, 0.558354, -0.993255, 0.26764, 0.312868, -0.885509, 0.19899, 0.252089, 0.711535, 0.607876, 0.709799, -0.17861, -0.532773, 0.123214, -0.712066, -0.366047, 0.062262, -0.236428, -0.783974, 0.824743, -0.404413, -0.963884, -0.160779, -0.363059, -0.981766, 0.580054, -0.175377, -0.475068, 0.316555, 0.04183, 0.633324, 0.822504, 0.850124, 0.583421, 0.858015, -0.295104, 0.354136, 0.055057, -0.430902, 0.190068, -0.076502, -0.836756, -0.68403, 0.024855, 0.217349, -0.392298, -0.872757, -0.58541, -0.440277, -0.168518, 0.712577, -0.736955, -0.593383, 0.543158, 0.622866, -0.667897, 0.120557, 0.018086, -0.216754, -0.573618, 0.625166, -0.630118, 0.338595, -0.761033, -0.399112, -0.437671, 0.763201, -0.854733, -0.211708, -0.562277, 0.28775, 0.749327, 0.77106, 0.689207, -0.145819, 0.476842, 0.742817], "shape": [3, 3, 4, 3]}, {"data": [-0.774929, 0.84091, -0.053971], "shape": [3]}, {"data": [-0.838065, 0.889805, 0.503326], "shape": [3]}, {"data": [-0.352161, -0.764655, -0.988392], "shape": [3]}, {"data": [0.517906, -0.666537, 0.378665], "shape": [3]}, {"data": [0.700279, 0.871936, 0.718567], "shape": [3]}, {"data": [-0.726393, 0.961405, -0.352651, -0.616831, -0.957985, 0.738251, -0.229442, -0.301669, -0.401448, -0.176988, 0.03531, -0.248273, 0.731235, -0.751996, -0.52024, 0.141734, 0.190872, 0.423504, 0.517459, 0.477292, -0.645496, -0.356895, -0.798014, -0.273988, -0.060309, 0.722704, 0.059648, -0.822663, -0.145044, 0.934283, -0.382613, -0.34684, -0.74607, -0.41484, 0.286901, 0.345101, 0.270742, 0.974401, 0.372597, 0.258112, 0.364092, -0.666525, -0.683073, 0.372326, 0.836413, -0.22059, -0.104618, 0.158763, -0.30314, -0.782504, -0.857413, 0.02191, -0.565599, 0.680123], "shape": [3, 3, 3, 2]}, {"data": [0.82814, 0.260142], "shape": [2]}, {"data": [0.295382, 0.993827], "shape": [2]}, {"data": [0.204497, 0.230931], "shape": [2]}, {"data": [-0.296706, 0.681466], "shape": [2]}, {"data": [0.109503, 0.602486], "shape": [2]}], "expected": {"data": [0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879, 0.468145, -0.640879], "shape": [3, 3, 2]}}}
###Markdown
pipeline 4
###Code
data_in_shape = (9, 9, 2)
conv_0 = Convolution2D(5, 3, 3, activation='relu', border_mode='same', subsample=(2, 2), dim_ordering='tf', bias=True)
bn_0 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_1 = Convolution2D(4, 1, 1, activation='linear', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_1 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_2 = Convolution2D(3, 3, 3, activation='relu', border_mode='same', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_2 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
conv_3 = Convolution2D(2, 3, 3, activation='relu', border_mode='valid', subsample=(1, 1), dim_ordering='tf', bias=True)
bn_3 = BatchNormalization(mode=0, axis=-1, epsilon=1e-3)
input_layer = Input(shape=data_in_shape)
x = conv_0(input_layer)
x = bn_0(x)
x = conv_1(x)
x = bn_1(x)
x = conv_2(x)
x = bn_2(x)
x = conv_3(x)
output_layer = bn_3(x)
model = Model(input=input_layer, output=output_layer)
np.random.seed(5000)
data_in = 2 * np.random.random(data_in_shape) - 1
# set weights to random (use seed for reproducibility)
weights = []
for i, w in enumerate(model.get_weights()):
np.random.seed(5000 + i)
if i % 6 == 5:
# std should be positive
weights.append(np.random.random(w.shape))
else:
weights.append(2 * np.random.random(w.shape) - 1)
model.set_weights(weights)
result = model.predict(np.array([data_in]))
print({
'input': {'data': format_decimal(data_in.ravel().tolist()), 'shape': list(data_in_shape)},
'weights': [{'data': format_decimal(weights[i].ravel().tolist()), 'shape': list(weights[i].shape)} for i in range(len(weights))],
'expected': {'data': format_decimal(result[0].ravel().tolist()), 'shape': list(result[0].shape)}
})
###Output
{'weights': [{'data': [-0.54190427, -0.27866048, 0.455306, -0.77466439, 0.2155413, 0.63149892, 0.96253877, -0.87251032, 0.5999195, -0.80610289, -0.1982645, 0.32431534, 0.93117182, -0.03819988, -0.47177543, 0.17483424, -0.88284286, 0.19139394, -0.11495341, 0.06681537, 0.18449563, -0.18105407, 0.40700154, -0.92213003, -0.79312868, -0.43548578, -0.6937702, -0.39989327, -0.36228429, 0.39306052, 0.35325382, 0.88492784, -0.18250706, 0.16155788, 0.41390947, -0.78237669, -0.20556843, -0.31064771, 0.25995609, -0.26086483, -0.68690492, -0.84234127, 0.71760244, 0.82241492, 0.66498028, 0.24531482, -0.42529677, -0.1975344, 0.2370744, 0.56347711, 0.82975085, 0.79694468, 0.2928859, -0.22128013, 0.71509939, -0.51856729, -0.06366519, 0.72865484, 0.19756596, 0.93603065, -0.15084021, -0.1689197, 0.41645923, 0.4026665, 0.80837102, -0.3004439, -0.19871903, -0.21682387, -0.38842743, -0.57839535, -0.49843779, 0.21023487, 0.90348714, -0.75704365, 0.00040865, 0.26400099, -0.23104133, -0.94006091, -0.50783639, 0.54894291, 0.31426992, -0.2139014, 0.78043251, 0.853875, -0.91062654, 0.07838259, -0.02629358, 0.47074804, -0.19907572, -0.59608873], 'shape': [3, 3, 2, 5]}, {'data': [-0.61153601, 0.8694064, 0.28018421, 0.96263283, -0.07187857], 'shape': [5]}, {'data': [0.23551283, -0.39464683, 0.89320993, 0.93499946, 0.84763587], 'shape': [5]}, {'data': [0.70368475, -0.90025953, 0.88006859, 0.19645696, 0.12316286], 'shape': [5]}, {'data': [0.56451316, 0.49527774, 0.83890439, -0.10189393, 0.53392238], 'shape': [5]}, {'data': [0.54476614, 0.43296596, 0.82355662, 0.81937529, 0.95590748], 'shape': [5]}, {'data': [-0.64757194, 0.38294579, 0.15387812, 0.90138681, -0.53161741, 0.35252906, -0.02235672, -0.74986305, -0.04463964, 0.00454036, 0.87915417, -0.60734393, 0.96179323, 0.53666761, 0.38496633, 0.42331201, 0.02650542, 0.23362457, -0.24138609, -0.91613239], 'shape': [1, 1, 5, 4]}, {'data': [-0.51744242, 0.26675251, -0.91537145, 0.3509806], 'shape': [4]}, {'data': [-0.49133238, 0.53946673, 0.32629449, -0.5869313], 'shape': [4]}, {'data': [0.52385359, 0.30660211, 0.31233849, 0.06620905], 'shape': [4]}, {'data': [-0.77285789, -0.8460116, -0.4997778, -0.61713712], 'shape': [4]}, {'data': [0.44486243, 0.62358341, 0.51217101, 0.77369451], 'shape': [4]}, {'data': [-0.26641783, 0.21101274, 0.10673114, -0.26512734, -0.88191077, 0.37535685, -0.97515663, -0.73215051, 0.98281271, 0.99204448, 0.96142256, 0.84381878, 0.02804255, 0.95206406, -0.15328345, 0.81950569, 0.28767033, -0.58071021, 0.49915272, -0.25508646, -0.4838326, -0.2001564, 0.20669987, -0.25822963, 0.90178846, -0.06853458, -0.72876868, -0.00192717, 0.4961056, -0.26408008, -0.88339506, -0.05085536, -0.08630077, 0.27701807, 0.67914649, -0.06848802, -0.81702191, 0.20299124, -0.43500192, 0.8438674, 0.93241573, 0.95279356, -0.65085876, -0.96303719, -0.65858238, -0.21449723, 0.98544923, 0.10489501, -0.46444878, 0.28525886, -0.28180049, 0.40566621, -0.09303628, 0.14394578, 0.46452957, -0.12513119, -0.49020586, 0.54100835, 0.98308434, 0.38479304, -0.61824068, -0.20460531, 0.6388524, 0.98037162, -0.9818702, 0.38908975, 0.56118427, 0.88646173, 0.24810736, 0.35984305, 0.10004167, 0.09153771, -0.37469135, 0.32099458, -0.54337686, -0.03246755, 0.16232401, 0.265073, 0.33472883, -0.50945459, -0.34869639, 0.48172934, 0.50818247, 0.65720596, 0.83050092, -0.10554667, 0.46860173, 0.29619646, 0.17816559, 0.38350462, -0.26129366, -0.93324284, 0.76302869, 0.08332493, -0.54487301, -0.34188816, -0.50811034, -0.05639039, 0.50213215, -0.04448456, -0.07471556, 0.27643016, -0.15145411, 0.22111294, 
0.49173953, -0.19818168, 0.27799311, 0.27739911], 'shape': [3, 3, 4, 3]}, {'data': [-0.11340936, -0.91676683, -0.5651004], 'shape': [3]}, {'data': [-0.65488319, 0.4099804, 0.32291475], 'shape': [3]}, {'data': [-0.93498039, 0.68023768, -0.62056578], 'shape': [3]}, {'data': [0.86320517, -0.79710709, 0.30719735], 'shape': [3]}, {'data': [0.78552591, 0.98972743, 0.06610293], 'shape': [3]}, {'data': [-0.90788009, -0.65871158, 0.98369049, 0.29383902, -0.08742277, 0.69663703, 0.82887138, 0.70554946, -0.14470764, 0.13519366, 0.04637206, -0.24907638, 0.19448248, 0.37161779, 0.56028265, 0.49605271, 0.32952396, 0.50606391, -0.94529562, -0.32078199, 0.3111684, 0.98133456, 0.04259265, 0.25723684, 0.08302491, 0.35536265, 0.42758731, -0.67743478, 0.53619969, 0.46189744, -0.03201824, -0.27080139, -0.49775568, 0.29504415, -0.43338293, -0.85852925, -0.57121818, 0.15370162, 0.88746426, -0.82947518, -0.29624711, 0.13686893, 0.05752348, 0.2162744, -0.82797366, -0.61618495, 0.06020317, -0.23374197, 0.13961779, -0.0900274, -0.3206224, 0.87718281, -0.32669526, -0.4710945], 'shape': [3, 3, 3, 2]}, {'data': [0.0231515, -0.51293283], 'shape': [2]}, {'data': [0.13848836, -0.35128712], 'shape': [2]}, {'data': [0.37373003, 0.90556202], 'shape': [2]}, {'data': [0.28104076, -0.95338109], 'shape': [2]}, {'data': [0.13453168, 0.10767889], 'shape': [2]}], 'expected': {'data': [0.81196368, -0.11035025, 0.62276578, -0.11035025, 2.22645187, -0.11035025, 2.66768837, -1.83787632, 0.26800883, -0.11035025, 1.67517114, -0.11035025, 2.20183444, -0.8188796, 0.26800883, -1.61873186, 1.85180569, -1.5101192], 'shape': [3, 3, 2]}, 'input': {'data': [-0.54190427, -0.27866048, 0.455306, -0.77466439, 0.2155413, 0.63149892, 0.96253877, -0.87251032, 0.5999195, -0.80610289, -0.1982645, 0.32431534, 0.93117182, -0.03819988, -0.47177543, 0.17483424, -0.88284286, 0.19139394, -0.11495341, 0.06681537, 0.18449563, -0.18105407, 0.40700154, -0.92213003, -0.79312868, -0.43548578, -0.6937702, -0.39989327, -0.36228429, 0.39306052, 0.35325382, 0.88492784, -0.18250706, 0.16155788, 0.41390947, -0.78237669, -0.20556843, -0.31064771, 0.25995609, -0.26086483, -0.68690492, -0.84234127, 0.71760244, 0.82241492, 0.66498028, 0.24531482, -0.42529677, -0.1975344, 0.2370744, 0.56347711, 0.82975085, 0.79694468, 0.2928859, -0.22128013, 0.71509939, -0.51856729, -0.06366519, 0.72865484, 0.19756596, 0.93603065, -0.15084021, -0.1689197, 0.41645923, 0.4026665, 0.80837102, -0.3004439, -0.19871903, -0.21682387, -0.38842743, -0.57839535, -0.49843779, 0.21023487, 0.90348714, -0.75704365, 0.00040865, 0.26400099, -0.23104133, -0.94006091, -0.50783639, 0.54894291, 0.31426992, -0.2139014, 0.78043251, 0.853875, -0.91062654, 0.07838259, -0.02629358, 0.47074804, -0.19907572, -0.59608873, 0.77239477, 0.54773798, 0.00922646, -0.44019973, 0.81720055, -0.0615295, 0.04580207, -0.76165178, -0.25095654, -0.24994101, 0.45502047, -0.75264239, -0.69142981, 0.02687807, 0.32093283, 0.88250988, 0.61121992, -0.50937295, 0.77718591, 0.40262635, -0.62736296, -0.29367364, -0.36348673, 0.63311157, 0.83600435, -0.90951031, -0.32951743, 0.54277901, 0.24301942, 0.03862923, 0.16270639, 0.48954823, -0.57044853, -0.33256914, -0.78071628, -0.07926009, 0.23073969, -0.51236684, 0.48137712, 0.76199354, 0.07620622, 0.34468054, 0.88032903, 0.85625296, 0.42121203, 0.04009794, 0.79783, 0.7082213, 0.1576071, -0.00959212, 0.61794887, 0.22218222, -0.95200956, -0.83814455, -0.97645341, -0.79525945, 0.23180734, -0.39176507, -0.00617481, -0.35796406, -0.94958437, 0.49854253, 0.35452684, 0.83471916, 0.35123934, 0.6688845, 
0.69015915, 0.68934495, -0.24558832, 0.85902393, 0.88134197, -0.47357725], 'shape': [9, 9, 2]}}
|
docs/examples/example_stochastic_ross.ipynb
|
###Markdown
STOCHASTIC ROSS - Tutorial============================ [Go to the Download page to download this notebook](https://ross-rotordynamics.github.io/ross-website/download.html) This is a basic tutorial on how to use STOCHASTIC ROSS - a ROSS module for stochastic rotordynamics analysis. Before starting this tutorial, be sure you're already familiar with the ROSS library.If you've already used ROSS, you've noticed the graphs present deterministic results for a given set of parameters. In other words, the model always produces the same output from a given starting condition or initial state [@...]. In STOCHASTIC ROSS, the concept is different: we'll work with stochastic processes.A stochastic process is defined as an indexed collection of random variables defined on a common probability space $(\Omega, \mathcal{F}, P)$, where $\Omega$ is a sample space, $\mathcal{F}$ is a $\sigma$-algebra, and $P$ is a probability measure. The index is often assumed to be time [@...].This new module allows you to work with random variables applied to ROSS' functions. Basically, any element or material can receive a parameter considered random. Moreover, some methods are also able to receive a random variable (random force, random unbalance...). It means that a parameter, once assumed deterministic (an int or float in Python), now follows a distribution (a list or array), such as a uniform distribution, normal distribution, etc.As a consequence, plots no longer display deterministic results. Instead, they show the expectation $E(X_t(t))$ (or mean) of a stochastic process and confidence intervals (chosen by the user).Where:- $X_t$ is the stochastic process;- $t$ is the time index
###Code
import ross as rs
import ross.stochastic as srs
from bokeh.io import output_notebook, show
import numpy as np
output_notebook()
###Output
_____no_output_____
###Markdown
Random SamplingArrays of random numbers can be created using the [`numpy.random`](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html) package.`numpy.random` has a large set of distributions that cover most of our needs to run STOCHASTIC ROSS.In this [LINK](https://docs.scipy.org/doc/numpy-1.14.0/reference/routines.random.html) you can find a list of numpy random number generators.When using STOCHASTIC ROSS, **all the random variables must have the same size**. Classes NameIt's important to highlight that in STOCHASTIC ROSS, the class names are the same as in ROSS, but with a "**ST_**" prefix to differentiate them. ST_MaterialThere is a class called ST_Material to hold material's properties, where:`ST_Material` allows you to create a material with random properties. It creates an object containing a generator with random instances of [`rs.Material`](https://ross-rotordynamics.github.io/ross-website/generated/material/ross.Material.htmlross.Material).The instantiation is the same as the `rs.Material` class. It has the same parameters and assumptions. The only difference is that you are able to select some parameters to consider as random and instantiate them as lists.The parameters which can be passed as random are:- `rho` - Density- `E` - Young's modulus- `G_s` - Shear modulus- `Poisson` - Poisson ratio ```textname : str Material name.rho : float, list, pint.Quantity Density (kg/m**3). Input a list to make it random.E : float, list, pint.Quantity Young's modulus (N/m**2). Input a list to make it random.G_s : float, list Shear modulus (N/m**2). Input a list to make it random.Poisson : float, list Poisson ratio (dimensionless). Input a list to make it random.color : str Can be used on plots.``` Note that, to instantiate an ST_Material class, you only need to give 2 out of the following parameters: `E`, `G_s`, `Poisson`.Let's consider that the Young's Modulus is a random variable that follows a uniform distribution from $208e9$ to $211e9$ $N/m^2$.
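As a quick illustration of the size rule above (this snippet is an editorial addition, using only `numpy.random`), the sketch below draws two parameter arrays from different distributions with the same `var_size`, which is what STOCHASTIC ROSS expects of every random variable:
```
# illustrative sketch: all random parameter arrays must share the same size
import numpy as np

var_size = 5
E = np.random.uniform(208e9, 211e9, var_size)   # uniform distribution (N/m**2)
rho = np.random.normal(7810, 10, var_size)      # normal distribution (kg/m**3), assumed values
assert E.size == rho.size == var_size           # same size for every random variable
```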
###Code
var_size = 5
E = np.random.uniform(208e9, 211e9, var_size)
rand_mat = srs.ST_Material(name="Steel", rho=7810, E=E, G_s=81.2e9)
# Random values for Young's Modulus
print(rand_mat.E)
###Output
[2.10607544e+11 2.10187885e+11 2.10878153e+11 2.10901796e+11
2.09693088e+11]
###Markdown
You can return the random Materials created using the following command:`.__iter__()`It returns a generator with the random objects. It consumes less computational memory and runs loops faster.
###Code
list(rand_mat.__iter__())
###Output
_____no_output_____
###Markdown
You can pass one or all parameters as random (but remember the rule of giving only 2 out of `E`, `G_s`, `Poisson`).Let's see another example considering all parameters as random.
###Code
var_size = 5
E = np.random.uniform(208e9, 211e9, var_size)
rho = np.random.uniform(7780, 7850, var_size)
G_s = np.random.uniform(79.8e9, 81.5e9, var_size)
rand_mat = srs.ST_Material(name="Steel", rho=rho, E=E, G_s=G_s)
list(rand_mat.__iter__())
###Output
_____no_output_____
###Markdown
ST_ShaftElement`ST_ShaftElement` allows you to create a random shaft element. It creates an object containing a generator with random instances of `ShaftElement`.The instantiation is the same as the [`rs.ShaftElement`](https://ross-rotordynamics.github.io/ross-website/generated/elements/ross.ShaftElement.htmlross.ShaftElement) class. It has the same parameters and the same beam model and assumptions. The only difference is that you are able to select some parameters to consider as random and instantiate them as lists.The parameters which can be passed as random are:- `L` - Length- `idl` - Inner diameter of the element at the left position- `odl` - Outer diameter of the element at the left position- `idr` - Inner diameter of the element at the right position- `odr` - Outer diameter of the element at the right position.- `material` - Shaft materialThe selected parameters must be appended to the `is_random` list as strings.You can return the random shaft element created using the following command:`.__iter__()`. ```textL : float, pint.Quantity, list Element length. Input a list to make it random.idl : float, pint.Quantity, list Inner diameter of the element at the left position. Input a list to make it random.odl : float, pint.Quantity, list Outer diameter of the element at the left position. Input a list to make it random.idr : float, pint.Quantity, list, optional Inner diameter of the element at the right position. Default is equal to idl value (cylindrical element). Input a list to make it random.odr : float, pint.Quantity, list, optional Outer diameter of the element at the right position. Default is equal to odl value (cylindrical element). Input a list to make it random.material : ross.material, list of ross.material Shaft material. Input a list to make it random.n : int, optional Element number (coincident with its first node). If not given, it will be set when the rotor is assembled according to the element's position in the list supplied to the Rotor constructor.shear_effects : bool, optional Determine if shear effects are taken into account. Default is True.rotary_inertia : bool, optional Determine if rotary_inertia effects are taken into account. Default is True.gyroscopic : bool, optional Determine if gyroscopic effects are taken into account. Default is True.shear_method_calc : str, optional Determines which shear calculation method the user will adopt. Default is 'cowper'.is_random : list List of the object attributes to become random. Possibilities: ["L", "idl", "odl", "idr", "odr", "material"]``` Cylindrical shaft element with random outer diameterIf you want to create a cylindrical element with a random outer diameter, making sure both `odl` and `odr` are the same, input only the `odl` parameter.The same logic applies to the inner diameter.
###Code
# Creating a cylindrical shaft element with random outer diameter and material.
var_size = 5
L = 0.25
i_d = 0.0
o_d = np.random.uniform(0.04, 0.06, var_size)
is_random = ["odl", "material"]
r_s0 = srs.ST_ShaftElement(
L=L,
idl=i_d,
odl=o_d,
material=rand_mat,
shear_effects=True,
rotary_inertia=True,
gyroscopic=True,
is_random=is_random,
)
list(r_s0.__iter__())
###Output
_____no_output_____
###Markdown
Conical shaft element with random outer diameterIf you want to create a conical element with a random outer diameter, input lists for the `odl` and `odr` parameters.
###Code
# Creating a conical shaft element with random outer diameter and material.
var_size = 5
L = 0.25
idl = 0.0
idr = 0.0
odl = np.random.uniform(0.04, 0.06, var_size)
odr = np.random.uniform(0.06, 0.07, var_size)
is_random = ["odl", "odr", "material"]
r_s1 = srs.ST_ShaftElement(
L=L,
idl=idl,
odl=odl,
idr=idr,
odr=odr,
material=rand_mat,
shear_effects=True,
rotary_inertia=True,
gyroscopic=True,
is_random=is_random,
)
list(r_s1.__iter__())
###Output
_____no_output_____
###Markdown
Creating a list of shaft elementsLet's see 2 examples of building rotor shafts:- a shaft with 5 shaft elements considered random```shaft_elements = [ ST_ShaftElement, ST_ShaftElement, ST_ShaftElement, ST_ShaftElement, ST_ShaftElement,]```- a shaft with 5 elements, with only the 3rd element considered random. So we want:```shaft_elements = [ ShaftElement, ShaftElement, ST_ShaftElement, ShaftElement, ShaftElement,]```First we create the deterministic shaft elements.
###Code
################ EXAMPLE 1 #################
# Creating 5 random shaft elements
from ross.materials import steel
L = 0.25
N = 5 # Number of elements
l_list = [L for _ in range(N)]
shaft_elements = [
srs.ST_ShaftElement(
L=l,
idl=0.0,
odl=np.random.uniform(0.04, 0.06, var_size),
material=steel,
shear_effects=True,
rotary_inertia=True,
gyroscopic=True,
is_random=["odl"],
)
for l in l_list
]
# printing
for i in range(N):
print("Element", i)
print(list(shaft_elements[i].__iter__()))
################ EXAMPLE 2 #################
# Creating shaft elements
from ross.materials import steel
L = 0.25
i_d = 0.0
o_d = 0.05
N = 4 # Number of elements
l_list = [L for _ in range(N)]
shaft_elements = [
rs.ShaftElement(
L=l,
idl=i_d,
odl=o_d,
material=steel,
shear_effects=True,
rotary_inertia=True,
gyroscopic=True,
)
for l in l_list
]
shaft_elements
# Adding the random shaft element instance to the list
shaft_elements.insert(2, r_s0)
shaft_elements
###Output
_____no_output_____
###Markdown
ST_DiskElementThis class represents a random disk element.`ST_DiskElement` allows you to create a random disk element. It creates an object containing a generator with random instances of [`rs.DiskElement`](https://ross-rotordynamics.github.io/ross-website/generated/elements/ross.DiskElement.htmlross.DiskElement).The instantiation is the same as the `DiskElement` class. It has the same parameters and assumptions. The only difference is that you are able to select some parameters to consider as random and instantiate them as lists.The parameters which can be passed as random are:- `m` - mass- `Id` - Diametral moment of inertia.- `Ip` - Polar moment of inertiaThe selected parameters must be appended to the `is_random` list as strings.You can return the random disk element created using the following command:`.__iter__()`. ```textn: int Node in which the disk will be inserted.m : float, list Mass of the disk element. Input a list to make it random.Id : float, list Diametral moment of inertia. Input a list to make it random.Ip : float, list Polar moment of inertia. Input a list to make it random.tag : str, optional A tag to name the element. Default is None.color : str, optional A color to be used when the element is represented. Default is 'b2182b' (Cardinal).is_random : list List of the object attributes to become random. Possibilities: ["m", "Id", "Ip"]``` All the values follow the S.I. convention for the units.
###Code
m = np.random.uniform(32.0, 33.0, var_size)
Id = np.random.uniform(0.17, 0.18, var_size)
Ip = np.random.uniform(0.32, 0.33, var_size)
is_random = ["m", "Id", "Ip"]
disk0 = srs.ST_DiskElement(n=2, m=m, Id=Id, Ip=Ip, is_random=is_random)
list(disk0.__iter__())
###Output
_____no_output_____
###Markdown
From geometry DiskElement instantiationBesides the instantiation previously explained, there is a way to instantiate an ST_DiskElement with only geometrical parameters (for cylindrical disks) and the disk's material, as we can see in the following code.Use the classmethod `ST_DiskElement.from_geometry`. ```textn: int Node in which the disk will be inserted.material: ross.Material, list of ross.Material Disk material. Input a list to make it random.width: float, list The disk width. Input a list to make it random.i_d: float, list Inner diameter. Input a list to make it random.o_d: float, list Outer diameter. Input a list to make it random.tag : str, optional A tag to name the element. Default is None.is_random : list List of the object attributes to become random. Possibilities: ["material", "width", "i_d", "o_d"]```
###Code
i_d = np.random.uniform(0.05, 0.06, var_size)
o_d = np.random.uniform(0.35, 0.39, var_size)
disk1 = srs.ST_DiskElement.from_geometry(n=3,
material=steel,
width=0.07,
i_d=i_d,
o_d=o_d,
is_random=["i_d", "o_d"],
)
list(disk1.__iter__())
###Output
_____no_output_____
###Markdown
ST_BearingElementThis class represents a random bearing element.`ST_BearingElement` allows you to create a random bearing element. It creates an object containing a generator with random instances of [`rs.BearingElement`](https://ross-rotordynamics.github.io/ross-website/generated/elements/ross.BearingElement.htmlross.BearingElement).The instantiation is the same as the `BearingElement` class. It has the same parameters and assumptions. The only difference is that you are able to select some parameters to consider as random and instantiate them as lists.If you're considering constant coefficients, use a 1-D array to make it random.If you're considering coefficients that vary with frequency, use a 2-D array to make it random.The parameters which can be passed as random are:- `kxx` - Direct stiffness in the x direction.- `cxx` - Direct damping in the x direction.- `kyy` - Direct stiffness in the y direction.- `cyy` - Direct damping in the y direction.- `kxy` - Cross coupled stiffness in the x direction.- `cxy` - Cross coupled damping in the x direction.- `kyx` - Cross coupled stiffness in the y direction.- `cyx` - Cross coupled damping in the y direction.The selected parameters must be appended to the `is_random` list as strings.You can return the random bearing element created using the following command:`.__iter__()`. ```textn: int Node in which the bearing will be located.kxx: float, 1-D array, 2-D array Direct stiffness in the x direction.cxx: float, 1-D array, 2-D array Direct damping in the x direction.kyy: float, 1-D array, 2-D array, optional Direct stiffness in the y direction. (defaults to kxx)kxy: float, 1-D array, 2-D array, optional Cross coupled stiffness in the x direction. (defaults to 0)kyx: float, 1-D array, 2-D array, optional Cross coupled stiffness in the y direction. (defaults to 0)cyy: float, 1-D array, 2-D array, optional Direct damping in the y direction. (defaults to cxx)cxy: float, 1-D array, 2-D array, optional Cross coupled damping in the x direction. (defaults to 0)cyx: float, 1-D array, 2-D array, optional Cross coupled damping in the y direction. (defaults to 0)frequency: array, optional Array with the frequencies (rad/s).tag: str, optional A tag to name the element. Default is None.n_link: int, optional Node to which the bearing will connect. If None the bearing is connected to ground. Default is None.scale_factor: float, optional The scale factor is used to scale the bearing drawing. Default is 1.is_random : list List of the object attributes to become stochastic. Possibilities: ["kxx", "kxy", "kyx", "kyy", "cxx", "cxy", "cyx", "cyy"]``` Bearing with random constant values for each coefficient:
###Code
# Building bearing elements and matching their coefficients.
var_size = 5
kxx = np.random.uniform(1e6, 2e6, var_size)
cxx = np.random.uniform(1e3, 2e3, var_size)
brg0 = srs.ST_BearingElement(n=0,
kxx=kxx,
cxx=cxx,
is_random=["kxx", "cxx"],
)
# set kxx and cxx again, if you want different coefficients for the next bearing
# it will get new random values.
# kxx = np.random.uniform(1e6, 2e6, var_size)
# cxx = np.random.uniform(1e6, 2e6, var_size)
brg1 = srs.ST_BearingElement(n=5,
kxx=kxx,
cxx=cxx,
is_random=["kxx", "cxx"],
)
list(brg0.__iter__())
###Output
_____no_output_____
###Markdown
The coefficients could be an array with different values for different rotation speeds. In that case, you only have to give a parameter `frequency`, which is an array with the same size as the coefficients array.To make it random, check the example below:
###Code
kxx = [np.random.uniform(1e6, 2e6, var_size),
np.random.uniform(2.3e6, 3.3e6, var_size)]
cxx = [np.random.uniform(1e3, 2e3, var_size),
np.random.uniform(2.1e3, 3.1e3, var_size)]
frequency = np.linspace(500, 800, len(kxx))
brg2 = srs.ST_BearingElement(n=1,
kxx=kxx,
cxx=cxx,
frequency=frequency,
is_random=["kxx", "cxx"],
)
list(brg2.__iter__())
###Output
_____no_output_____
###Markdown
ST_RotorThis class will create several instances of the [`rs.Rotor`](https://ross-rotordynamics.github.io/ross-website/generated/results/ross.Rotor.htmlross.Rotor) class. The number of rotors to be created depends on the number of random elements instantiated and their respective sizes.To use this class, you only have to give all the already instantiated elements in a list format, as follows. ```text shaft_elements : list List with the shaft elements disk_elements : list List with the disk elements bearing_seal_elements : list List with the bearing elements point_mass_elements: list List with the point mass elements sparse : bool, optional If sparse, eigenvalues will be calculated with arpack. Default is True. n_eigen : int, optional Number of eigenvalues calculated by arpack. Default is 12. tag : str A tag for the rotor```It's important to notice the `n_eigen` parameter, which determines how many eigenvalues will be calculated by the other functions, and thus how many natural frequencies and mode shapes (always half the value of `n_eigen`) will be available to retrieve.
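For instance, based on the parameter list above (this is a hedged sketch, not part of the original tutorial, and it assumes `n_eigen` is accepted as a keyword argument), a rotor that computes 16 eigenvalues, and therefore exposes 8 natural frequencies and mode shapes, could be requested as:
```
# hedged sketch: request 16 eigenvalues (8 natural frequencies / mode shapes)
rotor_16 = srs.ST_Rotor(
    shaft_elements,
    [disk0, disk1],
    [brg0, brg1],
    n_eigen=16,
)
```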
###Code
rotor1 = srs.ST_Rotor(
shaft_elements,
[disk0, disk1],
[brg0, brg1],
)
###Output
[ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.05, odr=0.05, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.05, odr=0.05, material='Steel', n=None), [ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.052823, odr=0.052823, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.040779, odr=0.040779, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.043602, odr=0.043602, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.059894, odr=0.059894, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.052015, odr=0.052015, material='Steel', n=None)], ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.05, odr=0.05, material='Steel', n=None), ShaftElement(L=0.25, idl=0.0, idr=0.0, odl=0.05, odr=0.05, material='Steel', n=None)]
###Markdown
Visualizing the RotorIt is interesting to plot the rotor to check if the geometry matches what you wanted to model. Differently from the ROSS `Rotor` class, the object here holds several instances of rotors, so an index is needed to indicate which rotor to plot. You can plot it with the following code.Note: You can choose to plot the rotor with `plot_type='matplotlib'` or `plot_type='bokeh'`. The default is the bokeh output.
###Code
rotor_list = list(rotor1.__iter__())
show(rotor_list[0].plot_rotor(plot_type='bokeh'))
###Output
_____no_output_____
###Markdown
Running the simulationAfter you verify that everything is fine with the rotor, you should run the simulation and obtain results.To do that you only need to use one of the `.run_()` methods available.For now, STOCHASTIC ROSS has only a few stochastic analyses, as shown below. Obtaining resultsThese are the stochastic analyses you can do with the program:- `.run_campbell()` - Campbell Diagram- `.run_freq_response()` - Frequency response- `.run_unbalance_response()` - Unbalance response- `.run_time_response()` - Time response Plotting resultsAs mentioned before, STOCHASTIC ROSS presents results not deterministically, as ROSS does, but in the form of expectations (mean values) and percentiles (or confidence intervals). When plotting these analyses, it will always display the expectation and you are able to choose which percentile to plot. To return a plot, you need to append the command `.plot()` right after the command that runs an analysis:`.run_something().plot()``.plot()` methods have two main arguments:```textpercentile : list, optional Sequence of percentiles to compute, which must be between 0 and 100 inclusive.conf_interval : list, optional Sequence of confidence intervals to compute, which must be between 0 and 100 inclusive.``` Plot interactionYou can click on the legend label to omit an object from the graph. Campbell DiagramThis function will calculate the damped natural frequencies for a speed range.```textspeed_range : array Array with the desired range of frequencies.frequencies : int, optional Number of frequencies that will be calculated. Default is 6.frequency_type : str, optional Choose between displaying results related to the undamped natural frequencies ("wn") or damped natural frequencies ("wd"). The default is "wd".```To run the Campbell Diagram, use the command `.run_campbell()`To plot the figure, use `.run_campbell().plot()`Notice that there are two plots. You can plot both or one of them:- damped natural frequency vs frequency - use `.run_campbell().plot_nat_freq()`- log dec vs frequency - use `.run_campbell().plot_log_dec()`
###Code
samples = 31
speed_range = np.linspace(0, 500, samples)
results = rotor1.run_campbell(speed_range)
show(results.plot_nat_freq(conf_interval=[90]))
###Output
_____no_output_____
###Markdown
Frequency Response```textspeed_range : array Array with the desired range of frequencies.inp : int Degree of freedom to be excited.out : int Degree of freedom to be observed.modes : list, optional Modes that will be used to calculate the frequency response (all modes will be used if a list is not given).```We can plot the frequency response by selecting the input and output degrees of freedom.- Input is the degree of freedom to be excited;- Output is the degree of freedom to be observed.Each shaft node has 4 local degrees of freedom (dof) $[x, y, \alpha, \beta]$, and each degree of freedom has its own index:- $x$ -> index 0- $y$ -> index 1- $\alpha$ -> index 2- $\beta$ -> index 3Taking the rotor built as an example, let's excite node 3 (in the $y$ direction) and observe the response on node 2 (also in the $y$ direction):$global\_dof = dof\_per\_node * node\_number + dof\_index$node 2, local dof $y$:$out = 4 * 2 + 1 = 9$node 3, local dof $y$:$inp = 4 * 3 + 1 = 13$To run the Frequency Response, use the command `.run_freq_response()`To plot the figure, use the command `.run_freq_response().plot()`
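The index arithmetic above can be wrapped in a small helper function. The sketch below is illustrative only (`global_dof` is not part of ROSS), assuming 4 degrees of freedom per node as described above:
```
# illustrative helper (not part of ROSS): global dof index from node and local dof
def global_dof(node, dof_index, dof_per_node=4):
    return dof_per_node * node + dof_index

inp = global_dof(3, 1)  # node 3, local dof y -> 13
out = global_dof(2, 1)  # node 2, local dof y -> 9
```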
###Code
speed_range = np.linspace(0, 500, 31)
inp = 13
out = 9
results = rotor1.run_freq_response(speed_range, inp, out)
show(results.plot(conf_interval=[90]))
###Output
_____no_output_____
###Markdown
Unbalance ResponseThis method returns the unbalanced response for an mdof system given the magnitude and phase of the unbalance, the node where it's applied and a frequency range.```textnode : list, int Node where the unbalance is applied.magnitude : list, float Unbalance magnitude. If node is int, input a list to make it random. If node is list, input a list of lists to make it random.phase : list, float Unbalance phase. If node is int, input a list to make it random. If node is list, input a list of lists to make it random.frequency_range : list, float Array with the desired range of frequencies.```In this analysis, you can enter **magnitude** and **phase** as random variables.To run the Unbalance Response, use the command `.run_unbalance_response()`To plot the figure, use the command `.run_unbalance_response().plot(dof)`Where `dof` is the degree of freedom for which you want to plot the response, which follows the same logic applied to the Frequency Response.In the following example, we obtain the response for a random unbalance (kg.m) with a uniform distribution and its respective phase at a selected node. Notice that it's possible to add multiple unbalances by instantiating node, magnitude and phase as lists.```textUnbalance: node = 3 magnitude = np.random.uniform(0.001, 0.002, 10) phase = 0```
###Code
freq_range = np.linspace(0, 500, 31)
n = 3
m = np.random.uniform(0.001, 0.002, 10)
p = 0.0
dof = 13
results = rotor1.run_unbalance_response(n, m, p, freq_range)
show(results.plot(dof, conf_interval=[90]))
###Output
_____no_output_____
###Markdown
Time ResponseThis function will take a rotor object and plot its time response given a force and a time.The **force** and **ic** parameters can be passed as random.This function takes the following parameters:```textspeed: float Rotor speedforce : 2-dimensional array, 3-dimensional array Force array (needs to have the same number of rows as the time array). Each column corresponds to a dof and each row to a time step. Inputting a 3-dimensional array, the method considers the force as a random variable. The 3rd dimension must have the same size as ST_Rotor.rotor_listtime_range : 1-dimensional array Time array.dof : int Degree of freedom that will be observed.ic : 1-dimensional array, 2-dimensional array, optional The initial conditions on the state vector (zero by default). Inputting a 2-dimensional array, the method considers the initial condition as a random variable.```To run the Time Response, use the command `.run_time_response()`To plot the figure, use the command `.run_time_response().plot()`In the following example, let's apply harmonic forces to node 3 in both the $x$ and $y$ directions. Also, let's analyze the first 10 seconds of the response for a speed of 100.0 rad/s (~955.0 RPM).
###Code
size = 1000
ndof = rotor1.ndof
node = 3 # node where the force is applied
dof = 9
speed = 250.0
t = np.linspace(0, 10, size)
F = np.zeros((size, ndof))
F[:, 4 * node] = 10 * np.cos(2 * t)
F[:, 4 * node + 1] = 10 * np.sin(2 * t)
results = rotor1.run_time_response(speed, F, t, dof)
show(results.plot(conf_interval=[90]))
###Output
_____no_output_____
|
rl_dynamic_programming/Dynamic_Programming_Solution.ipynb
|
###Markdown
Mini Project: Dynamic ProgrammingIn this notebook, you will write your own implementations of many classical dynamic programming algorithms. While we have provided some starter code, you are welcome to erase these hints and write your code from scratch. Part 0: Explore FrozenLakeEnvUse the code cell below to create an instance of the [FrozenLake](https://github.com/openai/gym/blob/master/gym/envs/toy_text/frozen_lake.py) environment.
###Code
!pip install -q matplotlib==2.2.2
from frozenlake import FrozenLakeEnv
env = FrozenLakeEnv()
###Output
_____no_output_____
###Markdown
The agent moves through a $4 \times 4$ gridworld, with states numbered as follows:```[[ 0 1 2 3] [ 4 5 6 7] [ 8 9 10 11] [12 13 14 15]]```and the agent has 4 potential actions:```LEFT = 0DOWN = 1RIGHT = 2UP = 3```Thus, $\mathcal{S}^+ = \{0, 1, \ldots, 15\}$, and $\mathcal{A} = \{0, 1, 2, 3\}$. Verify this by running the code cell below.
###Code
# print the state space and action space
print(env.observation_space)
print(env.action_space)
# print the total number of states and actions
print(env.nS)
print(env.nA)
###Output
Discrete(16)
Discrete(4)
16
4
###Markdown
Dynamic programming assumes that the agent has full knowledge of the MDP. We have already amended the `frozenlake.py` file to make the one-step dynamics accessible to the agent. Execute the code cell below to return the one-step dynamics corresponding to a particular state and action. In particular, `env.P[1][0]` returns the probability of each possible reward and next state, if the agent is in state 1 of the gridworld and decides to go left.
###Code
env.P[1][0]
###Output
_____no_output_____
###Markdown
Each entry takes the form ```prob, next_state, reward, done```where: - `prob` details the conditional probability of the corresponding (`next_state`, `reward`) pair, and- `done` is `True` if the `next_state` is a terminal state, and otherwise `False`.Thus, we can interpret `env.P[1][0]` as follows:$$\mathbb{P}(S_{t+1}=s',R_{t+1}=r|S_t=1,A_t=0) = \begin{cases} \frac{1}{3} \text{ if } s'=1, r=0\\ \frac{1}{3} \text{ if } s'=0, r=0\\ \frac{1}{3} \text{ if } s'=5, r=0\\ 0 \text{ else} \end{cases}$$To understand the value of `env.P[1][0]`, note that when you create a FrozenLake environment, it takes as an (optional) argument `is_slippery`, which defaults to `True`. To see this, change the first line in the notebook from `env = FrozenLakeEnv()` to `env = FrozenLakeEnv(is_slippery=False)`. Then, when you check `env.P[1][0]`, it should look like what you expect (i.e., `env.P[1][0] = [(1.0, 0, 0.0, False)]`).The default value for the `is_slippery` argument is `True`, and so `env = FrozenLakeEnv()` is equivalent to `env = FrozenLakeEnv(is_slippery=True)`. In the event that `is_slippery=True`, you see that this can result in the agent moving in a direction that it did not intend (where the idea is that the ground is *slippery*, and so the agent can slide to a location other than the one it wanted).Feel free to change the code cell above to explore how the environment behaves in response to other (state, action) pairs. Before proceeding to the next part, make sure that you set `is_slippery=True`, so that your implementations below will work with the slippery environment! Part 1: Iterative Policy EvaluationIn this section, you will write your own implementation of iterative policy evaluation.Your algorithm should accept four arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used to decide if the estimate has sufficiently converged to the true value function (default value: `1e-8`).The algorithm returns as **output**:- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s` under the input policy.Please complete the function in the code cell below.
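For reference, each sweep of iterative policy evaluation applies the Bellman expectation backup to every state (this is the update the function below implements):$$V(s) \leftarrow \sum_{a \in \mathcal{A}(s)} \pi(a|s) \sum_{s', r} p(s',r|s,a)\big[r + \gamma V(s')\big]$$and the loop stops once $\Delta = \max_s |V_{\text{new}}(s) - V(s)|$ falls below `theta`.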
###Code
import numpy as np
def policy_evaluation(env, policy, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
while True:
delta = 0
for s in range(env.nS):
Vs = 0
for a, action_prob in enumerate(policy[s]):
for prob, next_state, reward, done in env.P[s][a]:
Vs += action_prob * prob * (reward + gamma * V[next_state])
delta = max(delta, np.abs(V[s]-Vs))
V[s] = Vs
if delta < theta:
break
return V
###Output
_____no_output_____
###Markdown
We will evaluate the equiprobable random policy $\pi$, where $\pi(a|s) = \frac{1}{|\mathcal{A}(s)|}$ for all $s\in\mathcal{S}$ and $a\in\mathcal{A}(s)$. Use the code cell below to specify this policy in the variable `random_policy`.
###Code
random_policy = np.ones([env.nS, env.nA]) / env.nA
###Output
_____no_output_____
###Markdown
Run the next code cell to evaluate the equiprobable random policy and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.
###Code
from plot_utils import plot_values
# evaluate the policy
V = policy_evaluation(env, random_policy)
plot_values(V)
###Output
_____no_output_____
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that your `policy_evaluation` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).
###Code
import check_test
check_test.run_check('policy_evaluation_check', policy_evaluation)
###Output
_____no_output_____
###Markdown
Part 2: Obtain $q_\pi$ from $v_\pi$In this section, you will write a function that takes the state-value function estimate as input, along with some state $s\in\mathcal{S}$. It returns the **row in the action-value function** corresponding to the input state $s\in\mathcal{S}$. That is, your function should accept as input both $v_\pi$ and $s$, and return $q_\pi(s,a)$ for all $a\in\mathcal{A}(s)$.Your algorithm should accept four arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `s`: This is an integer corresponding to a state in the environment. It should be a value between `0` and `(env.nS)-1`, inclusive.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `q`: This is a 1D numpy array with `q.shape[0]` equal to the number of actions (`env.nA`). `q[a]` contains the (estimated) value of state `s` and action `a`.Please complete the function in the code cell below.
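In other words, the function evaluates the standard one-step lookahead (this is the relation the code below implements):$$q_\pi(s,a) = \sum_{s', r} p(s',r|s,a)\big[r + \gamma v_\pi(s')\big]$$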
###Code
def q_from_v(env, V, s, gamma=1):
q = np.zeros(env.nA)
for a in range(env.nA):
for prob, next_state, reward, done in env.P[s][a]:
q[a] += prob * (reward + gamma * V[next_state])
return q
###Output
_____no_output_____
###Markdown
Run the code cell below to print the action-value function corresponding to the above state-value function.
###Code
Q = np.zeros([env.nS, env.nA])
for s in range(env.nS):
Q[s] = q_from_v(env, V, s)
print("Action-Value Function:")
print(Q)
###Output
Action-Value Function:
[[ 0.0147094 0.01393978 0.01393978 0.01317015]
[ 0.00852356 0.01163091 0.0108613 0.01550788]
[ 0.02444514 0.02095298 0.02406033 0.01435346]
[ 0.01047649 0.01047649 0.00698432 0.01396865]
[ 0.02166487 0.01701828 0.01624865 0.01006281]
[ 0. 0. 0. 0. ]
[ 0.05433538 0.04735105 0.05433538 0.00698432]
[ 0. 0. 0. 0. ]
[ 0.01701828 0.04099204 0.03480619 0.04640826]
[ 0.07020885 0.11755991 0.10595784 0.05895312]
[ 0.18940421 0.17582037 0.16001424 0.04297382]
[ 0. 0. 0. 0. ]
[ 0. 0. 0. 0. ]
[ 0.08799677 0.20503718 0.23442716 0.17582037]
[ 0.25238823 0.53837051 0.52711478 0.43929118]
[ 0. 0. 0. 0. ]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `q_from_v` function satisfies the requirements outlined above (with four inputs, a single output, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('q_from_v_check', q_from_v)
###Output
_____no_output_____
###Markdown
Part 3: Policy ImprovementIn this section, you will write your own implementation of policy improvement. Your algorithm should accept three arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.Please complete the function in the code cell below. You are encouraged to use the `q_from_v` function you implemented above.
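Recall that policy improvement constructs a policy that is greedy with respect to the current value estimate,$$\pi'(s) = \arg\max_{a \in \mathcal{A}(s)} q_\pi(s,a),$$where ties may be broken arbitrarily (or, as in the stochastic option below, split evenly among the maximizing actions).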
###Code
def policy_improvement(env, V, gamma=1):
policy = np.zeros([env.nS, env.nA]) / env.nA
for s in range(env.nS):
q = q_from_v(env, V, s, gamma)
# OPTION 1: construct a deterministic policy
# policy[s][np.argmax(q)] = 1
# OPTION 2: construct a stochastic policy that puts equal probability on maximizing actions
best_a = np.argwhere(q==np.max(q)).flatten()
policy[s] = np.sum([np.eye(env.nA)[i] for i in best_a], axis=0)/len(best_a)
return policy
###Output
_____no_output_____
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `policy_improvement` function satisfies the requirements outlined above (with three inputs, a single output, and with the default values of the input arguments unchanged).Before moving on to the next part of the notebook, you are strongly encouraged to check out the solution in **Dynamic_Programming_Solution.ipynb**. There are many correct ways to approach this function!
###Code
check_test.run_check('policy_improvement_check', policy_improvement)
###Output
_____no_output_____
###Markdown
Part 4: Policy IterationIn this section, you will write your own implementation of policy iteration. The algorithm returns the optimal policy, along with its corresponding state-value function.Your algorithm should accept three arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used to decide if the policy evaluation step has sufficiently converged to the true value function (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below. You are strongly encouraged to use the `policy_evaluation` and `policy_improvement` functions you implemented above.
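As a reminder, policy iteration alternates full policy evaluation (E) and policy improvement (I) until the policy no longer changes:$$\pi_0 \xrightarrow{E} v_{\pi_0} \xrightarrow{I} \pi_1 \xrightarrow{E} v_{\pi_1} \xrightarrow{I} \cdots \xrightarrow{I} \pi_* \xrightarrow{E} v_*$$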
###Code
import copy
def policy_iteration(env, gamma=1, theta=1e-8):
policy = np.ones([env.nS, env.nA]) / env.nA
while True:
V = policy_evaluation(env, policy, gamma, theta)
new_policy = policy_improvement(env, V)
# OPTION 1: stop if the policy is unchanged after an improvement step
if (new_policy == policy).all():
break;
# OPTION 2: stop if the value function estimates for successive policies has converged
# if np.max(abs(policy_evaluation(env, policy) - policy_evaluation(env, new_policy))) < theta*1e2:
# break;
policy = copy.copy(new_policy)
return policy, V
###Output
_____no_output_____
###Markdown
Run the next code cell to solve the MDP and visualize the output. The optimal state-value function has been reshaped to match the shape of the gridworld.**Compare the optimal state-value function to the state-value function from Part 1 of this notebook**. _Is the optimal state-value function consistently greater than or equal to the state-value function for the equiprobable random policy?_
###Code
# obtain the optimal policy and optimal state-value function
policy_pi, V_pi = policy_iteration(env)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_pi,"\n")
plot_values(V_pi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[ 1. 0. 0. 0. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.5 0. 0.5 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 0. 1. ]
[ 0. 1. 0. 0. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 1. 0. ]
[ 0. 1. 0. 0. ]
[ 0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `policy_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('policy_iteration_check', policy_iteration)
###Output
_____no_output_____
###Markdown
Part 5: Truncated Policy IterationIn this section, you will write your own implementation of truncated policy iteration. You will begin by implementing truncated policy evaluation. Your algorithm should accept five arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).The algorithm returns as **output**:- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below.
###Code
def truncated_policy_evaluation(env, policy, V, max_it=1, gamma=1):
num_it=0
while num_it < max_it:
for s in range(env.nS):
v = 0
q = q_from_v(env, V, s, gamma)
for a, action_prob in enumerate(policy[s]):
v += action_prob * q[a]
V[s] = v
num_it += 1
return V
###Output
_____no_output_____
###Markdown
Next, you will implement truncated policy iteration. Your algorithm should accept four arguments as **input**:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `max_it`: This is a positive integer that corresponds to the number of sweeps through the state space (default value: `1`).- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.Please complete the function in the code cell below.
###Code
def truncated_policy_iteration(env, max_it=1, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
policy = np.zeros([env.nS, env.nA]) / env.nA
while True:
policy = policy_improvement(env, V)
old_V = copy.copy(V)
V = truncated_policy_evaluation(env, policy, V, max_it, gamma)
if max(abs(V-old_V)) < theta:
break;
return policy, V
###Output
_____no_output_____
###Markdown
Run the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.Play with the value of the `max_it` argument. Do you always end with the optimal state-value function?
###Code
policy_tpi, V_tpi = truncated_policy_iteration(env, max_it=2)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_tpi,"\n")
# plot the optimal state-value function
plot_values(V_tpi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[ 1. 0. 0. 0. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.5 0. 0.5 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 0. 1. ]
[ 0. 1. 0. 0. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 1. 0. ]
[ 0. 1. 0. 0. ]
[ 0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `truncated_policy_iteration` function satisfies the requirements outlined above (with four inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('truncated_policy_iteration_check', truncated_policy_iteration)
###Output
_____no_output_____
###Markdown
Part 6: Value IterationIn this section, you will write your own implementation of value iteration.Your algorithm should accept three arguments as input:- `env`: This is an instance of an OpenAI Gym environment, where `env.P` returns the one-step dynamics.- `gamma`: This is the discount rate. It must be a value between 0 and 1, inclusive (default value: `1`).- `theta`: This is a very small positive number that is used for the stopping criterion (default value: `1e-8`).The algorithm returns as **output**:- `policy`: This is a 2D numpy array with `policy.shape[0]` equal to the number of states (`env.nS`), and `policy.shape[1]` equal to the number of actions (`env.nA`). `policy[s][a]` returns the probability that the agent takes action `a` while in state `s` under the policy.- `V`: This is a 1D numpy array with `V.shape[0]` equal to the number of states (`env.nS`). `V[s]` contains the estimated value of state `s`.
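Value iteration collapses evaluation and improvement into a single backup; each sweep applies (this is the update the function below implements)$$V(s) \leftarrow \max_{a \in \mathcal{A}(s)} \sum_{s', r} p(s',r|s,a)\big[r + \gamma V(s')\big]$$and a final call to `policy_improvement` extracts a greedy policy from the converged value function.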
###Code
def value_iteration(env, gamma=1, theta=1e-8):
V = np.zeros(env.nS)
while True:
delta = 0
for s in range(env.nS):
v = V[s]
V[s] = max(q_from_v(env, V, s, gamma))
delta = max(delta,abs(V[s]-v))
if delta < theta:
break
policy = policy_improvement(env, V, gamma)
return policy, V
###Output
_____no_output_____
###Markdown
Use the next code cell to solve the MDP and visualize the output. The state-value function has been reshaped to match the shape of the gridworld.
###Code
policy_vi, V_vi = value_iteration(env)
# print the optimal policy
print("\nOptimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):")
print(policy_vi,"\n")
# plot the optimal state-value function
plot_values(V_vi)
###Output
Optimal Policy (LEFT = 0, DOWN = 1, RIGHT = 2, UP = 3):
[[ 1. 0. 0. 0. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 0. 0. 0. 1. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.5 0. 0.5 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 0. 1. ]
[ 0. 1. 0. 0. ]
[ 1. 0. 0. 0. ]
[ 0.25 0.25 0.25 0.25]
[ 0.25 0.25 0.25 0.25]
[ 0. 0. 1. 0. ]
[ 0. 1. 0. 0. ]
[ 0.25 0.25 0.25 0.25]]
###Markdown
Run the code cell below to test your function. If the code cell returns **PASSED**, then you have implemented the function correctly! **Note:** In order to ensure accurate results, make sure that the `value_iteration` function satisfies the requirements outlined above (with three inputs, two outputs, and with the default values of the input arguments unchanged).
###Code
check_test.run_check('value_iteration_check', value_iteration)
###Output
_____no_output_____
|
digit-recognition-with-tensorflow (1).ipynb
|
###Markdown
Digit-Recognition with Artificial Neural Networks using `Tensorflow` import libraries***tensorflow for deep learning modelling.******numpy for numerical computing.******pandas for working with dataframes.******matplotlib for plotting charts*** 
###Code
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Dataset we're using, the famous MNIST 
###Code
from tensorflow.keras.datasets import mnist
###Output
_____no_output_____
###Markdown
According to ***http://yann.lecun.com/exdb/mnist/***> "The MNIST database of handwritten digits, available from this page, has a training set of 60,000 examples, and a test set of 10,000 examples. It is a subset of a larger set available from NIST. The digits have been size-normalized and centered in a fixed-size image.It is a good database for people who want to try learning techniques and pattern recognition methods on real-world data while spending minimal efforts on preprocessing and formatting." ***this dataset contains images of digits, 0-9; what we want to do is build a model that can recognize digits correctly.******first we need to extract features and labels from the dataset, simply with two tuples.*** some coding to understand our dataset better
###Code
(X_train, y_train), (X_test, y_test) = mnist.load_data()
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
* see first item of X_train and y_train *
###Code
X_train[0], y_train[0]
###Output
_____no_output_____
###Markdown
shape of our X_train, X_test, y_train, y_test
###Code
X_train.shape, X_test.shape, y_train.shape, y_test.shape
###Output
_____no_output_____
###Markdown
our labels are from 0 to 9
###Code
np.unique(y_train)
###Output
_____no_output_____
###Markdown
show first image
###Code
fig = plt.figure(figsize=(8, 6))
plt.imshow(X_train[0], cmap=plt.cm.binary)
plt.title('first picture')
###Output
_____no_output_____
###Markdown
randomly show 4 images with labels
###Code
import random
fig = plt.figure(figsize=(8, 6))
for i in range(0, 4):
ax = plt.subplot(2, 2, i+1)
random_num = random.randint(0, 100)
plt.imshow(X_train[random_num], cmap=plt.cm.binary)
plt.title(y_train[random_num])
plt.axis(False)
###Output
_____no_output_____
###Markdown
one of the most important steps, normalizing the data! ***if we don't normalize the data, the model can't find patterns correctly, and the performance will be low and disappointing.*** we are choosing the simplest way: just divide by 255 (every picture's pixel values range from 0 to 255, so dividing by the max value scales them into the 0-1 range, same as MinMaxScaler)
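A tiny sketch (an editorial addition, not part of the original notebook) of what this scaling does to a few sample pixel values; since the minimum is 0 and the maximum is 255, dividing by 255 is equivalent to min-max scaling:
```
# illustrative only: dividing by 255 maps the 0-255 pixel range onto [0, 1]
import numpy as np
pixels = np.array([0, 64, 128, 255], dtype=np.float32)
print(pixels / 255.0)  # approximately [0., 0.251, 0.502, 1.]
```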
###Code
X_train, X_test = X_train / 255, X_test / 255
###Output
_____no_output_____
###Markdown
***correctly in 0-1!***
###Code
X_train.max(), X_test.min()
###Output
_____no_output_____
###Markdown
most important step, building our model!
###Code
tf.random.set_seed(42)
# initial model
model = tf.keras.Sequential()
# creat a flatten layer
model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
# adding layers, first layer 8 units, second one 4 units and relu activation function
model.add(tf.keras.layers.Dense(8, activation='relu'))
model.add(tf.keras.layers.Dense(4, activation='relu'))
# output layer, with shape of 10(output shape) and softmax activation function
model.add(tf.keras.layers.Dense(10, activation='softmax'))
# compile the model, with Adam optimizer, SparaseCategoricalCrossentrpy(because labels aren't in one-hot encoding)
model.compile(
optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=['accuracy']
)
# fit the model, with 20 epochs
history = model.fit(X_train, y_train, epochs=20, validation_data=(X_test, y_test))
###Output
Epoch 1/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.8695 - accuracy: 0.7159 - val_loss: 0.6042 - val_accuracy: 0.8180
Epoch 2/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.5508 - accuracy: 0.8389 - val_loss: 0.5158 - val_accuracy: 0.8572
Epoch 3/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.4725 - accuracy: 0.8659 - val_loss: 0.4454 - val_accuracy: 0.8768
Epoch 4/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.4262 - accuracy: 0.8799 - val_loss: 0.4211 - val_accuracy: 0.8849
Epoch 5/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3939 - accuracy: 0.8889 - val_loss: 0.4077 - val_accuracy: 0.8878
Epoch 6/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3704 - accuracy: 0.8952 - val_loss: 0.3808 - val_accuracy: 0.8929
Epoch 7/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3531 - accuracy: 0.8988 - val_loss: 0.3787 - val_accuracy: 0.8928
Epoch 8/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3401 - accuracy: 0.9029 - val_loss: 0.3620 - val_accuracy: 0.8982
Epoch 9/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3293 - accuracy: 0.9060 - val_loss: 0.3494 - val_accuracy: 0.9029
Epoch 10/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3217 - accuracy: 0.9075 - val_loss: 0.3407 - val_accuracy: 0.9062
Epoch 11/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3153 - accuracy: 0.9089 - val_loss: 0.3444 - val_accuracy: 0.9045
Epoch 12/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3099 - accuracy: 0.9102 - val_loss: 0.3432 - val_accuracy: 0.9051
Epoch 13/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3056 - accuracy: 0.9115 - val_loss: 0.3342 - val_accuracy: 0.9063
Epoch 14/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.3008 - accuracy: 0.9120 - val_loss: 0.3367 - val_accuracy: 0.9064
Epoch 15/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2979 - accuracy: 0.9150 - val_loss: 0.3346 - val_accuracy: 0.9077
Epoch 16/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2942 - accuracy: 0.9146 - val_loss: 0.3367 - val_accuracy: 0.9079
Epoch 17/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2920 - accuracy: 0.9152 - val_loss: 0.3263 - val_accuracy: 0.9115
Epoch 18/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2877 - accuracy: 0.9169 - val_loss: 0.3317 - val_accuracy: 0.9088
Epoch 19/20
1875/1875 [==============================] - 5s 3ms/step - loss: 0.2854 - accuracy: 0.9177 - val_loss: 0.3238 - val_accuracy: 0.9136
Epoch 20/20
1875/1875 [==============================] - 4s 2ms/step - loss: 0.2830 - accuracy: 0.9171 - val_loss: 0.3205 - val_accuracy: 0.9149
###Markdown
91% accuracy, a really good model! Here is a summary of the model.
###Code
model.summary()
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
flatten_2 (Flatten) (None, 784) 0
_________________________________________________________________
dense_17 (Dense) (None, 8) 6280
_________________________________________________________________
dense_18 (Dense) (None, 4) 36
_________________________________________________________________
dense_19 (Dense) (None, 10) 50
=================================================================
Total params: 6,366
Trainable params: 6,366
Non-trainable params: 0
_________________________________________________________________
###Markdown
plot our model
###Code
from tensorflow.keras.utils import plot_model
plot_model(model, show_shapes=True)
###Output
_____no_output_____
###Markdown
***To better understand the training phase, we plot the loss and accuracy curves.***
###Code
history_dataframe = pd.DataFrame(history.history)
history_dataframe
history_dataframe.plot()
plt.plot(history.history['loss'], history.history['val_loss'])
plt.xlabel('train loss')
plt.ylabel('test loss')
###Output
_____no_output_____
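###Markdown
A complementary view (not part of the original notebook): plotting the training and validation loss against the epoch number makes it easier to spot over- or under-fitting than plotting one loss against the other.
###Code
# a minimal sketch: loss curves per epoch, using the history object created above
plt.figure()
plt.plot(history.history['loss'], label='train loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()
###Output
_____no_output_____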
###Markdown
Now we start the evaluation phase. First step: creating y_preds.
###Code
y_preds = model.predict(X_test)
y_preds.shape
y_preds[:3]
###Output
_____no_output_____
###Markdown
What is going on with y_preds? ***The problem is that each row is not a single label: it holds the probability of belonging to each of the 10 classes, and we need the class with the maximum probability.*** How do we get it? ***Just run tf.argmax on each row of y_preds to get the label.***
###Code
y_preds_labels = [tf.argmax(y_preds[i]) for i in range(len(y_preds))]
y_preds_labels[:5]
###Output
_____no_output_____
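###Markdown
A small aside (not from the original notebook): instead of looping over the rows with a list comprehension, the same labels can be obtained with one vectorized call, which is noticeably faster on large prediction arrays.
###Code
# a minimal sketch: vectorized alternative to the list comprehension above
import numpy as np  # already imported earlier in the notebook
y_preds_labels_np = np.argmax(y_preds, axis=1)  # one label per row
y_preds_labels_np[:5]
###Output
_____no_output_____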
###Markdown
Evaluating The Model. The make_confusion_matrix code is directly copied from: https://github.com/mrdbourke/tensorflow-deep-learning/blob/main/docs/02_neural_network_classification_in_tensorflow.ipynb
###Code
# Note: The following confusion matrix code is a remix of Scikit-Learn's
# plot_confusion_matrix function - https://scikit-learn.org/stable/modules/generated/sklearn.metrics.plot_confusion_matrix.html
# and Made with ML's introductory notebook - https://github.com/GokuMohandas/MadeWithML/blob/main/notebooks/08_Neural_Networks.ipynb
import itertools
from sklearn.metrics import confusion_matrix
# Our function needs a different name to sklearn's plot_confusion_matrix
def make_confusion_matrix(y_true, y_pred, classes=None, figsize=(10, 10), text_size=15):
"""Makes a labelled confusion matrix comparing predictions and ground truth labels.
If classes is passed, confusion matrix will be labelled, if not, integer class values
will be used.
Args:
y_true: Array of truth labels (must be same shape as y_pred).
y_pred: Array of predicted labels (must be same shape as y_true).
classes: Array of class labels (e.g. string form). If `None`, integer labels are used.
figsize: Size of output figure (default=(10, 10)).
text_size: Size of output figure text (default=15).
Returns:
A labelled confusion matrix plot comparing y_true and y_pred.
Example usage:
make_confusion_matrix(y_true=test_labels, # ground truth test labels
y_pred=y_preds, # predicted labels
classes=class_names, # array of class label names
figsize=(15, 15),
text_size=10)
"""
# Create the confustion matrix
cm = confusion_matrix(y_true, y_pred)
cm_norm = cm.astype("float") / cm.sum(axis=1)[:, np.newaxis] # normalize it
n_classes = cm.shape[0] # find the number of classes we're dealing with
# Plot the figure and make it pretty
fig, ax = plt.subplots(figsize=figsize)
cax = ax.matshow(cm, cmap=plt.cm.Blues) # colors will represent how 'correct' a class is, darker == better
fig.colorbar(cax)
# Are there a list of classes?
if classes:
labels = classes
else:
labels = np.arange(cm.shape[0])
# Label the axes
ax.set(title="Confusion Matrix",
xlabel="Predicted label",
ylabel="True label",
xticks=np.arange(n_classes), # create enough axis slots for each class
yticks=np.arange(n_classes),
xticklabels=labels, # axes will labeled with class names (if they exist) or ints
yticklabels=labels)
# Make x-axis labels appear on bottom
ax.xaxis.set_label_position("bottom")
ax.xaxis.tick_bottom()
# Set the threshold for different colors
threshold = (cm.max() + cm.min()) / 2.
# Plot the text on each cell
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, f"{cm[i, j]} ({cm_norm[i, j]*100:.1f}%)",
horizontalalignment="center",
color="white" if cm[i, j] > threshold else "black",
size=text_size)
###Output
_____no_output_____
###Markdown
confusion matrix
###Code
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_preds_labels)
###Output
_____no_output_____
###Markdown
classification report contains:* precision* recall* f1-score* accuracy
###Code
from sklearn.metrics import classification_report
print(classification_report(y_test, y_preds_labels))
###Output
precision recall f1-score support
0 0.96 0.95 0.95 980
1 0.93 0.98 0.95 1135
2 0.93 0.91 0.92 1032
3 0.90 0.89 0.89 1010
4 0.92 0.92 0.92 982
5 0.87 0.82 0.85 892
6 0.95 0.93 0.94 958
7 0.93 0.94 0.94 1028
8 0.83 0.89 0.86 974
9 0.91 0.90 0.91 1009
accuracy 0.91 10000
macro avg 0.91 0.91 0.91 10000
weighted avg 0.92 0.91 0.91 10000
###Markdown
really pretty confusion matrix, thanks to [Daniel Bourke](https://github.com/mrdbourke)
###Code
make_confusion_matrix(y_true=y_test,
y_pred=y_preds_labels,
figsize=(15, 15),
text_size=10)
###Output
_____no_output_____
###Markdown
Showing some images with their labels and predictions: if the prediction is correct, the title is shown in green; if it is wrong, it is shown in red.
###Code
plt.figure(figsize=(10, 8))
for i in range(0, 8):
plt.subplot(4, 4, i+1)
random_2 = random.randint(0, 400)
plt.imshow(X_test[random_2], cmap=plt.cm.binary)
plt.axis(False)
if (y_preds_labels[random_2] == y_test[random_2]):
plt.title('prediction is {}, correct.'.format(y_preds_labels[random_2]), color='green')
else:
plt.title('prediction is {}, wrong.\nreal value is {}'.format(y_preds_labels[random_2], y_test[random_2]), color='red')
###Output
_____no_output_____
|
utils/Performance Metrics - Kidney.ipynb
|
###Markdown
HuBMAP - Hacking the Kidney Goal - Mapping the human body at the functional tissue unit level - detect glomeruli FTUs in kidney tissue. Description - Calculate the performance metrics for the test-data predictions on the kidney data. Input - submission.csv (csv file containing the RLE-format predicted masks), test.csv (csv file containing the RLE-format original masks). Output - Performance metric values - Dice coefficient, Jaccard index, pixel accuracy, Hausdorff distance. How to use? Change the base path to where your data lives and you're good to go. How to reproduce on a different dataset? Create train and test folders containing the train images and masks and the test images and masks respectively. Have a train.csv with the RLE for the train images and a sample-submission file with the test image names. Create a test.csv with the RLE for the test images and the prediction csv from the trained network. Step 1 - Import useful libraries
###Code
import numpy as np
import pandas as pd
from pathlib import Path
from sklearn.metrics import jaccard_score
from scipy.spatial.distance import directed_hausdorff
###Output
_____no_output_____
###Markdown
Step 2 - Write utility functions
###Code
def enc2mask(encs, shape):
img = np.zeros(shape[0] * shape[1], dtype=np.uint8)
for m, enc in enumerate(encs):
        if isinstance(enc, float) and np.isnan(enc):
continue
enc_split = enc.split()
for i in range(len(enc_split) // 2):
start = int(enc_split[2 * i]) - 1
length = int(enc_split[2 * i + 1])
img[start: start + length] = 1 + m
return img.reshape(shape).T
def dice_scores_img(pred, truth, eps=1e-8):
pred = pred.reshape(-1) > 0
truth = truth.reshape(-1) > 0
intersect = (pred & truth).sum(-1)
union = pred.sum(-1) + truth.sum(-1)
dice = (2.0 * intersect + eps) / (union + eps)
return dice
def perf_metrics(gt, pred):
n = 0
d = 0
for i in range(gt.shape[0]):
for j in range (gt.shape[1]):
if (gt[i][j]==pred[i][j]):
n = n+1
d = d+1
return n/d, jaccard_score(gt.flatten(order='C'), pred.flatten(order='C')), directed_hausdorff(gt, pred)
###Output
_____no_output_____
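###Markdown
A small sanity check (not part of the original notebook): the run-length encoding used here consists of 1-indexed "start length" pairs and is decoded column-wise (note the final transpose in enc2mask). The sketch below decodes a tiny hand-made RLE string and confirms that a mask compared with itself gives a Dice score of 1.
###Code
# a minimal sketch: decode a toy RLE on a 4x4 image and check the Dice score
toy_mask = enc2mask(['1 2 7 3'], (4, 4))  # pixels 1-2 and 7-9 are foreground
print(toy_mask)
print(dice_scores_img(toy_mask, toy_mask))  # identical masks -> Dice = 1.0
###Output
_____no_output_____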
###Markdown
Step 3 - Calculate mean metrics values for test images
###Code
DATA_PATH = Path(r'C:/Users/soodn/Downloads/Naveksha/Kaggle HuBMAP/')
df_pred = pd.read_csv('output/submission_kidney_pvt_deeplive.csv')
df_truth = pd.read_csv(DATA_PATH/'Data/kidney-data/private_test.csv')
df_info = pd.read_csv(DATA_PATH/'Data/kidney-data/HuBMAP-20-dataset_information_pvt.csv')
scores = []
pa_list = []
ji_list = []
haus_dis_list = []
pvt_test = ['00a67c839', '0749c6ccc', '1eb18739d', '5274ef79a', '5d8b53a68', '9e81e2693', 'a14e495cf', 'bacb03928', 'e464d2f6c',
'ff339c0b2']
for img in pvt_test:
shape = df_info[df_info.image_file == img][['width_pixels', 'height_pixels']].values.astype(int)[0]
truth = df_truth[df_truth['id'] == img]['expected']
mask_truth = enc2mask(truth, shape)
pred = df_pred[df_pred['id'] == img]['predicted']
mask_pred = enc2mask(pred, shape)
score = dice_scores_img(mask_pred, mask_truth)
print (score)
# pa, ji, haus = perf_metrics(mask_pred, mask_truth)
# pa_list.append (pa)
# ji_list.append(ji)
# haus_dis_list.append(haus[0])
scores.append(score)
l = len(scores)
for s in scores:
    print(round(s, 3))
print("Average Dice Score = ", round(sum(scores) / l, 3))
###Output
0.947
0.961
0.954
0.924
0.933
0.948
0.967
0.937
0.966
0.966
0.95
|
I Resolving Python with Data Science/01_Basic Elements of Programming/webinar/01practice_data-tables.ipynb
|
###Markdown
01 | Data Tables & Basic Concepts of Programming - Subscribe to my [Blog ↗](https://blog.pythonassembly.com/)- Let's keep in touch on [LinkedIn ↗](www.linkedin.com/in/jsulopz) 😄 Discipline to Search Solutions in Google > Apply the following steps when **looking for solutions in Google**:>> 1. **Necessity**: How to load an Excel in Python?> 2. **Search in Google**: by keywords> - `load excel python`> - ~~how to load excel in python~~> 3. **Solution**: What's the `function()` that loads an Excel in Python?> - A Function is to Programming what the Atom is to Physics.> - Every time you want to do something in programming> - **You will need a `function()`** to make it> - Therefore, you must **detect parentheses `()`**> - Out of all the words that you see in a website> - Because they indicate the presence of a `function()`. Load the Data
###Code
import pandas as pd
df = pd.read_excel('df_mortality_regions.xlsx')
df.head()
###Output
_____no_output_____
###Markdown
Islands Number of Islands Which region had the most Islands? Filter for Islands Count the number of `Islands` in each `Region` Pick the one with the most Islands
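The boolean `mask` used below is assumed to have been built in an earlier step of the exercise; a minimal sketch of how it might look, assuming a hypothetical column named `Island` with yes/no values:
###Code
# a minimal sketch, assuming a hypothetical 'Island' column with yes/no values
mask = df['Island'] == 'yes'  # boolean Series: True for island countries
###Output
_____no_output_____
###Markdown
With such a mask in place, count the Islands per `Region` and pick the top one: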
###Code
df[mask]['Regional indicator'].value_counts()[:1]
###Output
_____no_output_____
###Markdown
Show all Columns for these Islands Mean Age across the above Islands? Female Heads of State Number of Countries with Female Heads of State Which region had the most Female Heads of State? Filter for Countries with Female Heads of State Count the number of such countries in each `Region` Pick the one with the most
###Code
res[:1]
###Output
_____no_output_____
|
09.17.ipynb
|
###Markdown
Objects and classes - A student, a desk, and a circle are all objects - An object is an instance of a class; you can create many objects, and the process of creating an instance of a class is called instantiation - In Python an object is an instance and an instance is an object. Defining a class: class ClassName: do something - `class` introduces a class just like `def` introduces a function - Class names should preferably use CamelCase - In Python 2 a class had to inherit from the base class `object`; in Python 3 this inheritance is implicit, so writing it is optional - If ordinary code is like skin and functions are like underwear, then classes can be thought of as an outer coat
###Code
#python 2
class ClassName(object):
pass
#python 3
class ClassName:
pass
###Output
_____no_output_____
###Markdown
Defining a simple class without an __init__ initializer: class ClassName: joker = "Home" def func(): print('Worker') - use this form sparingly
###Code
# in fact, the "." calls we used before are just calls to functions or variables inside a class
import pygame
class Music:
def play():
print("播放音乐2")
track1=pygame.mixer.music.load("xx.mp3")
pygame.mixer.music.play()
###Output
_____no_output_____
###Markdown
Defining a standard class - __init__ stands for initialization and can set up any kind of state - calling the class now requires (), where () can be understood as starting the initialization - the elements set up inside the initializer are shared by the other functions of the class
###Code
class Class_name:
    def __init__(self): # the initialization function of the class; it initializes the instance itself
self.Joker = 'hahaha'
    def fun1(self): # self indicates that fun1 is a method of the class
print(self.Joker)
def fun2(self):
self.fun1()
Class_name() # class name + () runs the initialization function
Class_name().fun1() # only members of the same class can call each other
Class_name().fun2()
class Joker:
def __init__(self):
self.haha = 10
self.lala = 10
self.m = None
self.n = None
def pow2(self):
self.m = self.haha ** 2
print(self.m)
def pow3(self):
self.n = self.lala ** 3
print(self.n)
def minus(self):
print(self.n - self.m)
# create an instance
A = Joker() # A is the Joker instance after initialization has finished
A.pow2()
A.pow3()
A.minus()
class Joker:
def __init__(self,num1,num2):
self.haha = num1
self.lala = num2
# self.m = None
# self.n = None
def pow2(self,pow_num):
m = self.haha ** 2 + pow_num
return m
def pow3(self,pow_num):
n = self.lala ** 3
return n
def miuns(self):
m1 = self.pow2()
n1 = self.pow3()
print(n1- m1)
Joker(num1=10,num2=10).pow2(pow_num = 0)
###Output
_____no_output_____
###Markdown
- The first difference between Circle and className_ is the __init__ function - .... The second difference: every function in the class has this "self" parameter. What is self? - self is the parameter that points to the object itself - self is only a naming convention; it could be changed, but we stick to self by convention, which also makes the code easier to understand - with self you can access the members defined in the class Using a class Circle Passing parameters to a class - class ClassName: def __init__(self, para1,para2...): self.para1 = para1 self.para2 = para2 EP: - A: define a class containing two functions: - 1. compute the maximum of some random numbers - 2. compute the minimum of some random numbers - B: define a class (nested use of functions inside a class) - 1. the first function takes a number as input - 2. the second function squares the number obtained from the first function - 3. the third function computes the squared number minus the originally entered number and prints the result Class inheritance - single inheritance - multiple inheritance - inheritance notation > class SonClass(FatherClass): def __init__(self): FatherClass.__init__(self)
###Code
class mayun:
def __init__(self):
self.caichan = 10000000
def showmayun(self):
print(self.caichan)
class huwang(mayun): # tell Python that this class is about to inherit from the parent class
def __init__(self):
        mayun.__init__(self) # actually bind to the parent class by calling its initializer
self.hu = 'wang'
def showhuwang(self):
print(self.hu)
print(self.caichan)
print(self.showmayun())
huwang().showhuwang()
class get_pow2_pow3:
def __init__(self,num1,num2):
self.num1 = num1
self.num2 = num2
self.res1 = None
self.res2 = None
def pow2_pow3(self):
self.res1 = self.num1 ** 2
self.res2 = self.num2 ** 3
class chazhi(get_pow2_pow3):
def __init__(self,num1,num2):
get_pow2_pow3.__init__(self,num1,num2)
def cz(self):
print(self.res2 - self.res1)
B = chazhi(10,10)
B.pow2_pow3()
B.cz()
class get_pow2_pow3:
def __init__(self,num1,num2):
self.num1 = num1
self.num2 = num2
def pow2_pow3(self):
res1 = self.num1 ** 2
res2 = self.num2 ** 3
return res1,res2
class chazhi(get_pow2_pow3):
def __init__(self,num1,num2):
get_pow2_pow3.__init__(self,num1,num2)
def cz(self):
RES1,RES2 = self.pow2_pow3()
print(RES1 -RES2)
class get_pow2:
def __init__(self,num1):
self.num1 = num1
self.res1 = None
def pow2(self):
self.res1 = self.num1 ** 2
class get_pow3:
def __init__(self,num2):
self.num2 = num2
self.res2 = None
def pow3(self):
self.res2 = self.num2 ** 3
class chazhi(get_pow2,get_pow3):
def __init__(self,num1,num2):
get_pow2.__init__(self,num1)
get_pow3.__init__(self,num2)
def cz(self):
print(self.res2 - self.res1)
B = chazhi(10,10)
B.pow2()
B.pow3()
B.cz()
###Output
900
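###Markdown
A short aside (not part of the original lesson): the examples above call the parent initializer explicitly as FatherClass.__init__(self); in Python 3 the same pattern is more commonly written with super(), as in the minimal sketch below.
###Code
# a minimal sketch: the same single-inheritance pattern using super()
class Parent:
    def __init__(self):
        self.wealth = 10000000
class Child(Parent):
    def __init__(self):
        super().__init__()  # equivalent to Parent.__init__(self)
        self.name = 'child'
    def show(self):
        print(self.name, self.wealth)
Child().show()
###Output
_____no_output_____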
###Markdown
Private data fields (private variables or private functions) - In Python a variable or function name starting with a double underscore is private: \__Joker, def \__Joker(): - Private data fields are not inherited - Private data fields can still be reached (forced access) via \__dir__() EP: Other aspects of classes - Encapsulation - this simply means grouping related functionality together so it is easier to manage later - Inheritance (covered above) - Polymorphism - including decorators: these will be taught later together with advanced classes - The advantage of decorators: when functions in many classes need the same piece of functionality, a decorator makes things much more convenient - Decorators have a fixed way of being written - They include plain decorators and decorators with arguments Homework (the UML class diagram does not need to be drawn; UML is essentially just a mind map) - 1
###Code
class Rectangle:
def __init__(self, width = 1, height = 2):
self.width = width
self.height = height
def get_width_heigth(self):
return self.width,self.height
def get_Area(self):
return self.width * self.height
def get_Zhouchang(self):
return 2*(self.width + self.height)
Rect1 = Rectangle(4, 40)
width1, height1 = Rect1.get_width_heigth()
area1 = Rect1.get_Area()
zc1 = Rect1.get_Zhouchang()
Rect2 = Rectangle(3.5, 35.7)
width2, height2 = Rect2.get_width_heigth()
area2 = Rect2.get_Area()
zc2 = Rect2.get_Zhouchang()
print('A rectangle with width', width1, 'and height', height1, 'has area:', area1, 'and perimeter:', zc1)
print('A rectangle with width', width2, 'and height', height2, 'has area:', round(area2,2), 'and perimeter:', zc2)
###Output
A rectangle with width 4 and height 40 has area: 160 and perimeter: 88
A rectangle with width 3.5 and height 35.7 has area: 124.95 and perimeter: 78.4
###Markdown
- 2 - 3
###Code
class Fan:
def __init__(self, speed = 1, on = False, radius = 5, color = 'blue'):
if speed == 1:
self.__speed = 'SLOW'
elif speed == 2:
self.__speed = 'MEDIUM'
else:
self.__speed = 'FAST'
self.__on = on
self.__radius = radius
self.__color = color
def get_Speed(self):
return self.__speed
def get_On(self):
return self.__on
def get_Radius(self):
return self.__radius
def get_Color(self):
return self.__color
def change_Speed(self, speed):
if speed == 1:
self.__speed = 'SLOW'
elif speed == 2:
self.__speed = 'MEDIUM'
else:
self.__speed = 'FAST'
def change_On(self, on):
self.__on = on
def change_Radius(self, radius):
self.__radius = radius
def change_Color(self, color):
self.__color = color
fan1 = Fan(3,True, 10, 'yellow')
print('Speed:', fan1.get_Speed())
print('Radius:', fan1.get_Radius())
print('Color:', fan1.get_Color())
print('Fan on:', fan1.get_On())
fan2 = Fan()
fan2.change_Speed(2)
fan2.change_On(False)
fan2.change_Radius(5)
fan2.change_Color('red')
print('Speed:', fan2.get_Speed())
print('Radius:', fan2.get_Radius())
print('Color:', fan2.get_Color())
print('Fan on:', fan2.get_On())
###Output
Speed: MEDIUM
Radius: 5
Color: red
Fan on: False
###Markdown
- 4
###Code
import math
class RegularPolygon:
def __init__(self, n=3, bianchang=1, x=0, y=0):
self.__n = n
self.__bianchang = bianchang
self.__x = x
self.__y = y
def get_N(self):
return self.__n
def get_Bianchang(self):
return self.__bianchang
def get_X(self):
return self.__x
def get_Y(self):
return self.__y
def set_N(self, n):
self.__n = n
def set_Bianchang(self, bianchang):
self.__bianchang = bianchang
def set_X(self, x):
self.__x = x
def set_Y(self, y):
self.__y = y
def getPerimeter(self):
return self.__n * self.__bianchang
def getArea(self):
area = self.__n * (self.__bianchang ** 2) / (4 * math.tan(math.pi/self.__n))
return area
reg1 = RegularPolygon()
reg2 = RegularPolygon(6, 4)
reg3 = RegularPolygon(10, 4, 5.6, 7.8)
print('A regular', reg1.get_N(), '-gon with side length', reg1.get_Bianchang(), 'has perimeter:', reg1.getPerimeter(), 'and area:', round(reg1.getArea(),2))
print('A regular', reg2.get_N(), '-gon with side length', reg2.get_Bianchang(), 'has perimeter:', reg2.getPerimeter(), 'and area:', round(reg2.getArea(),2))
print('A regular', reg3.get_N(), '-gon with side length', reg3.get_Bianchang(), 'has perimeter:', reg3.getPerimeter(), 'and area:', round(reg3.getArea(),2))
###Output
A regular 3 -gon with side length 1 has perimeter: 3 and area: 0.43
A regular 6 -gon with side length 4 has perimeter: 24 and area: 41.57
A regular 10 -gon with side length 4 has perimeter: 40 and area: 123.11
###Markdown
- 5
###Code
class LinearEquation:
def __init__(self, a, b, c, d, e, f):
self.__a = a
self.__b = b
self.__c = c
self.__d = d
self.__e = e
self.__f = f
def get_A(self):
return self.__a
def get_B(self):
return self.__b
def get_C(self):
return self.__c
def get_D(self):
return self.__d
def get_E(self):
return self.__e
def get_F(self):
return self.__f
def isSolvable(self):
if (self.__a * self.__d) - (self.__b * self.__c) != 0:
return True
else:
return False
def get_X(self):
return (self.__e * self.__d - self.__b * self.__f) / (self.__a * self.__d - self.__b * self.__c)
def get_Y(self):
return (self.__a * self.__f - self.__e * self.__c) / (self.__a * self.__d - self.__b * self.__c)
def num():
    a = eval(input('Enter a:'))
    b = eval(input('Enter b:'))
    c = eval(input('Enter c:'))
    d = eval(input('Enter d:'))
    e = eval(input('Enter e:'))
    f = eval(input('Enter f:'))
equation = LinearEquation(a, b, c, d, e, f)
if equation.isSolvable() == True:
print('X=',equation.get_X(),'\n','Y=',equation.get_Y())
else:
        print('This equation has no solution')
num()
###Output
Enter a:1
Enter b:2
Enter c:3
Enter d:4
Enter e:5
Enter f:4
X= -6.0
Y= 5.5
###Markdown
- 6
###Code
class LinearEquation:
    def __init__(self, x1, y1, x2, y2):
self.x1 = x1
self.x2 = x2
self.y1 = y1
self.y2 = y2
###Output
_____no_output_____
|
7_Batch_Processing.ipynb
|
###Markdown
Batch processing with Azure pipelines. Azure Machine Learning pipelines can be created either in the designer or with the Python azureml API. In this lab we are going to create a simple Azure pipeline for batch processing. The pipeline consists of two steps: preprocessing and scoring. Be aware that we are going to use experimental features of azureml which should not be used in a production environment. Let's first import all needed packages:
###Code
import os
import pandas as pd
from azureml.core.model import Model
from azureml.core import Workspace
from azureml.core import Experiment
from azureml.core.dataset import Dataset
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.pipeline.steps import PythonScriptStep
from azureml.pipeline.core import Pipeline
from azureml.data.dataset_consumption_config import DatasetConsumptionConfig
from azureml.data.output_dataset_config import OutputFileDatasetConfig
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core import RunConfiguration
###Output
_____no_output_____
###Markdown
Connect to workspace, set up dataset and compute. To have a more realistic setting we are not going to use our registered dataset, but the csv file with the raw credit data directly. Be aware that with this setting we are using our training data for prediction; this is acceptable for demonstration purposes only and not something you would want to do in production. We create a DatasetConsumptionConfig for the data input at the beginning of the pipeline. Two OutputFileDatasetConfig objects serve as the intermediate and final locations for the output files. The result_data will be registered as a new dataset (batch-scoring-results), which is accomplished with the register_on_complete call.
###Code
ws = Workspace.from_config()
datastore = ws.get_default_datastore()
dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, 'german_credit_dataset.csv')])
input_data = DatasetConsumptionConfig("input_dataset", dataset)
intermediate_data = OutputFileDatasetConfig(name='intermediate_dataset', destination=(datastore, 'intermediate/{run-id}'))
result_data = OutputFileDatasetConfig(name='result_dataset', destination=(datastore, 'result/{run-id}')).register_on_complete('batch-scoring-results')
###Output
_____no_output_____
###Markdown
If the compute "batch-comp" is not available in your workspace, it will be created.
###Code
compute_name = 'batch-comp'
# checks to see if compute target already exists in workspace, else create it
if compute_name in ws.compute_targets:
compute_target = ComputeTarget(workspace=ws, name=compute_name)
else:
config = AmlCompute.provisioning_configuration(vm_size="STANDARD_DS11_V2",
vm_priority="lowpriority",
min_nodes=1,
max_nodes=2)
compute_target = ComputeTarget.create(workspace=ws, name=compute_name, provisioning_configuration=config)
compute_target.wait_for_completion(show_output=True, min_node_count=None, timeout_in_minutes=20)
###Output
_____no_output_____
###Markdown
A run configuration based on the conda dependencies is automatically created.
###Code
conda_dep = CondaDependencies()
conda_dep.add_pip_package("scikit-learn==0.22")
config = RunConfiguration(conda_dependencies=conda_dep)
config
###Output
_____no_output_____
###Markdown
Prepare the pipeline stepsWe create two PythonScriptStep objects. For each object we need to supply a python script. The scripts are prepared in the batch_script folder and we load them only to have a look at it. You can find different pipeline steps [here](https://docs.microsoft.com/en-us/python/api/azureml-pipeline-steps/azureml.pipeline.steps?view=azure-ml-py).
###Code
with open("batch_scripts/preprocessing_step.py", "r") as f:
print(f.read())
with open("batch_scripts/scoring_step.py", "r") as f:
print(f.read())
###Output
_____no_output_____
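###Markdown
The contents of the two scripts are not reproduced here. As a rough orientation only, the cell below sketches what a preprocessing_step.py along these lines might look like; it assumes the input is consumed under the name "input_dataset" defined above and that the preprocessing is a simple pandas clean-up, so the real scripts in the batch_scripts folder may well differ.
###Code
# a minimal sketch of a possible preprocessing_step.py (not the real script;
# meant to run as a pipeline step, not inside this notebook)
import argparse
import os
from azureml.core import Run
parser = argparse.ArgumentParser()
parser.add_argument('--intermediate-data-path', dest='intermediate_data_path')
args = parser.parse_args()
run = Run.get_context()
# 'input_dataset' is the name given to the DatasetConsumptionConfig above
df = run.input_datasets['input_dataset'].to_pandas_dataframe()
df = df.dropna()  # placeholder preprocessing
os.makedirs(args.intermediate_data_path, exist_ok=True)
df.to_csv(os.path.join(args.intermediate_data_path, 'prepped.csv'), index=False)
###Output
_____no_output_____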
###Markdown
The two scripts, together with the locations and compute are given as inputs to the PythonScriptStep constructors. The allow_reuse flag will allow us to use the intermediate results from earlier runs, if there are any and the pipeline step has not changed since the last run.
###Code
preprocessing_step = PythonScriptStep(
script_name="preprocessing_step.py",
name='preprocessing_step',
arguments=['--intermediate-data-path', intermediate_data],
compute_target=compute_target,
runconfig=config,
inputs=[input_data],
outputs=[intermediate_data],
source_directory='./batch_scripts',
allow_reuse=True
)
scoring_step = PythonScriptStep(
script_name="scoring_step.py",
name='scoring_step',
arguments=['--intermediate-data-path', intermediate_data, '--result-data-path', result_data],
compute_target=compute_target,
runconfig=config,
inputs=[intermediate_data],
outputs=[result_data],
source_directory='./batch_scripts'
)
###Output
_____no_output_____
###Markdown
Run the pipeline. We can combine the steps into a whole pipeline and submit it as a new experiment run. You can find all logs in your workspace. The intermediate and final file locations and data can be found in your Azure Blob storage, which was created automatically.
###Code
scoring_pipeline = Pipeline(workspace=ws, steps=[preprocessing_step, scoring_step])
pipeline_run = Experiment(ws, 'batch-score').submit(scoring_pipeline)
pipeline_run.wait_for_completion(show_output=False)
###Output
_____no_output_____
###Markdown
As you are used to from the designer, you can still monitor the pipeline during training in the experiments section (open the specific run) in your workspace. Results: Let us have a look at the resulting data. We can easily access the results from the registered dataset; the result was automatically registered as batch-scoring-results, as defined when the output location was created above. For comparison we open the original credit risk set that we registered in lab 3. We can see the added column "prediction". Of course, in a real-life scenario you would not have the "Risk" column, i.e. the data would be unlabeled.
###Code
dataset = Dataset.get_by_name(ws, name='batch-scoring-results', version = "latest")
df_path = dataset.download('data/batch_scoring_results', overwrite=True)
pd.read_csv(df_path[0]).head()
dataset = Dataset.get_by_name(ws, name='german_credit_dataset', version = "latest")
ds_df = dataset.to_pandas_dataframe()
ds_df.head()
###Output
_____no_output_____
|
notebooks/community/gapic/automl/showcase_automl_tabular_classification_online.ipynb
|
###Markdown
Vertex client library: AutoML tabular classification model for online prediction Run in Colab View on GitHub OverviewThis tutorial demonstrates how to use the Vertex client library for Python to create tabular classification models and do online prediction using Google Cloud's [AutoML](https://cloud.google.com/vertex-ai/docs/start/automl-users). DatasetThe dataset used for this tutorial is the [Iris dataset](https://www.tensorflow.org/datasets/catalog/iris) from [TensorFlow Datasets](https://www.tensorflow.org/datasets/catalog/overview). This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of Iris flower species from a class of three species: setosa, virginica, or versicolor. ObjectiveIn this tutorial, you create an AutoML tabular classification model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the `gcloud` command-line tool or online using the Google Cloud Console.The steps performed include:- Create a Vertex `Dataset` resource.- Train the model.- View the model evaluation.- Deploy the `Model` resource to a serving `Endpoint` resource.- Make a prediction.- Undeploy the `Model`. CostsThis tutorial uses billable components of Google Cloud (GCP):* Vertex AI* Cloud StorageLearn about [Vertex AIpricing](https://cloud.google.com/vertex-ai/pricing) and [Cloud Storagepricing](https://cloud.google.com/storage/pricing), and use the [PricingCalculator](https://cloud.google.com/products/calculator/)to generate a cost estimate based on your projected usage. InstallationInstall the latest version of Vertex client library.
###Code
import os
import sys
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install -U google-cloud-aiplatform $USER_FLAG
###Output
_____no_output_____
###Markdown
Install the latest GA version of *google-cloud-storage* library as well.
###Code
! pip3 install -U google-cloud-storage $USER_FLAG
###Output
_____no_output_____
###Markdown
Restart the kernelOnce you've installed the Vertex client library and Google *cloud-storage*, you need to restart the notebook kernel so it can find the packages.
###Code
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
###Output
_____no_output_____
###Markdown
Before you begin GPU runtime*Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select* **Runtime > Change Runtime Type > GPU** Set up your Google Cloud project**The following steps are required, regardless of your notebook environment.**1. [Select or create a Google Cloud project](https://console.cloud.google.com/cloud-resource-manager). When you first create an account, you get a $300 free credit towards your compute/storage costs.2. [Make sure that billing is enabled for your project.](https://cloud.google.com/billing/docs/how-to/modify-project)3. [Enable the Vertex APIs and Compute Engine APIs.](https://console.cloud.google.com/flows/enableapi?apiid=ml.googleapis.com,compute_component)4. [The Google Cloud SDK](https://cloud.google.com/sdk) is already installed in Google Cloud Notebook.5. Enter your project ID in the cell below. Then run the cell to make sure theCloud SDK uses the right project for all the commands in this notebook.**Note**: Jupyter runs lines prefixed with `!` as shell commands, and it interpolates Python variables prefixed with `$` into these commands.
###Code
PROJECT_ID = "[your-project-id]" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
###Output
_____no_output_____
###Markdown
RegionYou can also change the `REGION` variable, which is used for operationsthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.- Americas: `us-central1`- Europe: `europe-west4`- Asia Pacific: `asia-east1`You may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the [Vertex locations documentation](https://cloud.google.com/vertex-ai/docs/general/locations)
###Code
REGION = "us-central1" # @param {type: "string"}
###Output
_____no_output_____
###Markdown
TimestampIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.
###Code
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
###Output
_____no_output_____
###Markdown
Authenticate your Google Cloud account**If you are using Google Cloud Notebook**, your environment is already authenticated. Skip this step.**If you are using Colab**, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.**Otherwise**, follow these steps:In the Cloud Console, go to the [Create service account key](https://console.cloud.google.com/apis/credentials/serviceaccountkey) page.**Click Create service account**.In the **Service account name** field, enter a name, and click **Create**.In the **Grant this service account access to project** section, click the Role drop-down list. Type "Vertex" into the filter box, and select **Vertex Administrator**. Type "Storage Object Admin" into the filter box, and select **Storage Object Admin**.Click Create. A JSON file that contains your key downloads to your local environment.Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
###Code
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
###Output
_____no_output_____
###Markdown
Set up variablesNext, set up some variables used throughout the tutorial. Import libraries and define constants Import Vertex client libraryImport the Vertex client library into our Python environment.
###Code
import time
from google.cloud.aiplatform import gapic as aip
from google.protobuf import json_format
from google.protobuf.json_format import MessageToJson, ParseDict
from google.protobuf.struct_pb2 import Struct, Value
###Output
_____no_output_____
###Markdown
Vertex constantsSetup up the following constants for Vertex:- `API_ENDPOINT`: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.- `PARENT`: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.
###Code
# API service endpoint
API_ENDPOINT = "{}-aiplatform.googleapis.com".format(REGION)
# Vertex location root path for your dataset, model and endpoint resources
PARENT = "projects/" + PROJECT_ID + "/locations/" + REGION
###Output
_____no_output_____
###Markdown
AutoML constantsSet constants unique to AutoML datasets and training:- Dataset Schemas: Tells the `Dataset` resource service which type of dataset it is.- Data Labeling (Annotations) Schemas: Tells the `Dataset` resource service how the data is labeled (annotated).- Dataset Training Schemas: Tells the `Pipeline` resource service the task (e.g., classification) to train the model for.
###Code
# Tabular Dataset type
DATA_SCHEMA = "gs://google-cloud-aiplatform/schema/dataset/metadata/tables_1.0.0.yaml"
# Tabular Labeling type
LABEL_SCHEMA = (
"gs://google-cloud-aiplatform/schema/dataset/ioformat/table_io_format_1.0.0.yaml"
)
# Tabular Training task
TRAINING_SCHEMA = "gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_tables_1.0.0.yaml"
###Output
_____no_output_____
###Markdown
Hardware AcceleratorsSet the hardware accelerators (e.g., GPU), if any, for prediction.Set the variable `DEPLOY_GPU/DEPLOY_NGPU` to use a container image supporting a GPU and the number of GPUs allocated to the virtual machine (VM) instance. For example, to use a GPU container image with 4 Nvidia Telsa K80 GPUs allocated to each VM, you would specify: (aip.AcceleratorType.NVIDIA_TESLA_K80, 4)For GPU, available accelerators include: - aip.AcceleratorType.NVIDIA_TESLA_K80 - aip.AcceleratorType.NVIDIA_TESLA_P100 - aip.AcceleratorType.NVIDIA_TESLA_P4 - aip.AcceleratorType.NVIDIA_TESLA_T4 - aip.AcceleratorType.NVIDIA_TESLA_V100Otherwise specify `(None, None)` to use a container image to run on a CPU.
###Code
if os.getenv("IS_TESTING_DEPOLY_GPU"):
DEPLOY_GPU, DEPLOY_NGPU = (
aip.AcceleratorType.NVIDIA_TESLA_K80,
int(os.getenv("IS_TESTING_DEPOLY_GPU")),
)
else:
DEPLOY_GPU, DEPLOY_NGPU = (aip.AcceleratorType.NVIDIA_TESLA_K80, 1)
###Output
_____no_output_____
###Markdown
Container (Docker) imageFor AutoML batch prediction, the container image for the serving binary is pre-determined by the Vertex prediction service. More specifically, the service will pick the appropriate container for the model depending on the hardware accelerator you selected. Machine TypeNext, set the machine type to use for prediction.- Set the variable `DEPLOY_COMPUTE` to configure the compute resources for the VM you will use for prediction. - `machine type` - `n1-standard`: 3.75GB of memory per vCPU. - `n1-highmem`: 6.5GB of memory per vCPU - `n1-highcpu`: 0.9 GB of memory per vCPU - `vCPUs`: number of \[2, 4, 8, 16, 32, 64, 96 \]*Note: You may also use n2 and e2 machine types for training and deployment, but they do not support GPUs*
###Code
if os.getenv("IS_TESTING_DEPLOY_MACHINE"):
MACHINE_TYPE = os.getenv("IS_TESTING_DEPLOY_MACHINE")
else:
MACHINE_TYPE = "n1-standard"
VCPU = "4"
DEPLOY_COMPUTE = MACHINE_TYPE + "-" + VCPU
print("Deploy machine type", DEPLOY_COMPUTE)
###Output
_____no_output_____
###Markdown
TutorialNow you are ready to start creating your own AutoML tabular classification model. Set up clientsThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.You will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.- Dataset Service for `Dataset` resources.- Model Service for `Model` resources.- Pipeline Service for training.- Endpoint Service for deployment.- Prediction Service for serving.
###Code
# client options same for all services
client_options = {"api_endpoint": API_ENDPOINT}
def create_dataset_client():
client = aip.DatasetServiceClient(client_options=client_options)
return client
def create_model_client():
client = aip.ModelServiceClient(client_options=client_options)
return client
def create_pipeline_client():
client = aip.PipelineServiceClient(client_options=client_options)
return client
def create_endpoint_client():
client = aip.EndpointServiceClient(client_options=client_options)
return client
def create_prediction_client():
client = aip.PredictionServiceClient(client_options=client_options)
return client
clients = {}
clients["dataset"] = create_dataset_client()
clients["model"] = create_model_client()
clients["pipeline"] = create_pipeline_client()
clients["endpoint"] = create_endpoint_client()
clients["prediction"] = create_prediction_client()
for client in clients.items():
print(client)
###Output
_____no_output_____
###Markdown
DatasetNow that your clients are ready, your first step is to create a `Dataset` resource instance. This step differs from Vision, Video and Language. For those products, after the `Dataset` resource is created, one then separately imports the data, using the `import_data` method.For tabular, importing of the data is deferred until the training pipeline starts training the model. What do we do different? Well, first you won't be calling the `import_data` method. Instead, when you create the dataset instance you specify the Cloud Storage location of the CSV file or BigQuery location of the data table, which contains your tabular data as part of the `Dataset` resource's metadata. Cloud Storage`metadata = {"input_config": {"gcs_source": {"uri": [gcs_uri]}}}`The format for a Cloud Storage path is: gs://[bucket_name]/[folder(s)/[file] BigQuery`metadata = {"input_config": {"bigquery_source": {"uri": [gcs_uri]}}}`The format for a BigQuery path is: bq://[collection].[dataset].[table]Note that the `uri` field is a list, whereby you can input multiple CSV files or BigQuery tables when your data is split across files. Data preparationThe Vertex `Dataset` resource for tabular has a couple of requirements for your tabular data.- Must be in a CSV file or a BigQuery query. CSVFor tabular classification, the CSV file has a few requirements:- The first row must be the heading -- note how this is different from Vision, Video and Language where the requirement is no heading.- All but one column are features.- One column is the label, which you will specify when you subsequently create the training pipeline. Location of Cloud Storage training data.Now set the variable `IMPORT_FILE` to the location of the CSV index file in Cloud Storage.
###Code
IMPORT_FILE = "gs://cloud-samples-data/tables/iris_1000.csv"
###Output
_____no_output_____
###Markdown
Quick peek at your dataYou will use a version of the Iris dataset that is stored in a public Cloud Storage bucket, using a CSV index file.Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (`wc -l`) and then peek at the first few rows.You also need for training to know the heading name of the label column, which is save as `label_column`. For this dataset, it is the last column in the CSV file.
###Code
count = ! gsutil cat $IMPORT_FILE | wc -l
print("Number of Examples", int(count[0]))
print("First 10 rows")
! gsutil cat $IMPORT_FILE | head
heading = ! gsutil cat $IMPORT_FILE | head -n1
label_column = str(heading).split(",")[-1].split("'")[0]
print("Label Column Name", label_column)
if label_column is None:
raise Exception("label column missing")
###Output
_____no_output_____
###Markdown
DatasetNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it. Create `Dataset` resource instanceUse the helper function `create_dataset` to create the instance of a `Dataset` resource. This function does the following:1. Uses the dataset client service.2. Creates an Vertex `Dataset` resource (`aip.Dataset`), with the following parameters: - `display_name`: The human-readable name you choose to give it. - `metadata_schema_uri`: The schema for the dataset type. - `metadata`: The Cloud Storage or BigQuery location of the tabular data.3. Calls the client dataset service method `create_dataset`, with the following parameters: - `parent`: The Vertex location root path for your `Database`, `Model` and `Endpoint` resources. - `dataset`: The Vertex dataset object instance you created.4. The method returns an `operation` object.An `operation` object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.You can use the `operation` object to get status on the operation (e.g., create `Dataset` resource) or to cancel the operation, by invoking an operation method:| Method | Description || ----------- | ----------- || result() | Waits for the operation to complete and returns a result object in JSON format. || running() | Returns True/False on whether the operation is still running. || done() | Returns True/False on whether the operation is completed. || canceled() | Returns True/False on whether the operation was canceled. || cancel() | Cancels the operation (this may take up to 30 seconds). |
###Code
TIMEOUT = 90
def create_dataset(name, schema, src_uri=None, labels=None, timeout=TIMEOUT):
start_time = time.time()
try:
if src_uri.startswith("gs://"):
metadata = {"input_config": {"gcs_source": {"uri": [src_uri]}}}
elif src_uri.startswith("bq://"):
metadata = {"input_config": {"bigquery_source": {"uri": [src_uri]}}}
dataset = aip.Dataset(
display_name=name,
metadata_schema_uri=schema,
labels=labels,
metadata=json_format.ParseDict(metadata, Value()),
)
operation = clients["dataset"].create_dataset(parent=PARENT, dataset=dataset)
print("Long running operation:", operation.operation.name)
result = operation.result(timeout=TIMEOUT)
print("time:", time.time() - start_time)
print("response")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" metadata_schema_uri:", result.metadata_schema_uri)
print(" metadata:", dict(result.metadata))
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
print(" etag:", result.etag)
print(" labels:", dict(result.labels))
return result
except Exception as e:
print("exception:", e)
return None
result = create_dataset("iris-" + TIMESTAMP, DATA_SCHEMA, src_uri=IMPORT_FILE)
###Output
_____no_output_____
###Markdown
Now save the unique dataset identifier for the `Dataset` resource instance you created.
###Code
# The full unique ID for the dataset
dataset_id = result.name
# The short numeric ID for the dataset
dataset_short_id = dataset_id.split("/")[-1]
print(dataset_id)
###Output
_____no_output_____
###Markdown
Train the modelNow train an AutoML tabular classification model using your Vertex `Dataset` resource. To train the model, do the following steps:1. Create an Vertex training pipeline for the `Dataset` resource.2. Execute the pipeline to start the training. Create a training pipelineYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:1. Being reusable for subsequent training jobs.2. Can be containerized and ran as a batch job.3. Can be distributed.4. All the steps are associated with the same pipeline job for tracking progress.Use this helper function `create_pipeline`, which takes the following parameters:- `pipeline_name`: A human readable name for the pipeline job.- `model_name`: A human readable name for the model.- `dataset`: The Vertex fully qualified dataset identifier.- `schema`: The dataset labeling (annotation) training schema.- `task`: A dictionary describing the requirements for the training job.The helper function calls the `Pipeline` client service'smethod `create_pipeline`, which takes the following parameters:- `parent`: The Vertex location root path for your `Dataset`, `Model` and `Endpoint` resources.- `training_pipeline`: the full specification for the pipeline training job.Let's look now deeper into the *minimal* requirements for constructing a `training_pipeline` specification:- `display_name`: A human readable name for the pipeline job.- `training_task_definition`: The dataset labeling (annotation) training schema.- `training_task_inputs`: A dictionary describing the requirements for the training job.- `model_to_upload`: A human readable name for the model.- `input_data_config`: The dataset specification. - `dataset_id`: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier. - `fraction_split`: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.
###Code
def create_pipeline(pipeline_name, model_name, dataset, schema, task):
dataset_id = dataset.split("/")[-1]
input_config = {
"dataset_id": dataset_id,
"fraction_split": {
"training_fraction": 0.8,
"validation_fraction": 0.1,
"test_fraction": 0.1,
},
}
training_pipeline = {
"display_name": pipeline_name,
"training_task_definition": schema,
"training_task_inputs": task,
"input_data_config": input_config,
"model_to_upload": {"display_name": model_name},
}
try:
pipeline = clients["pipeline"].create_training_pipeline(
parent=PARENT, training_pipeline=training_pipeline
)
print(pipeline)
except Exception as e:
print("exception:", e)
return None
return pipeline
###Output
_____no_output_____
###Markdown
Construct the task requirementsNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the `task` field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the `json_format.ParseDict` method for the conversion.The minimal fields you need to specify are:- `prediction_type`: Whether we are doing "classification" or "regression".- `target_column`: The CSV heading column name for the column we want to predict (i.e., the label).- `train_budget_milli_node_hours`: The maximum time to budget (billed) for training the model, where 1000 = 1 hour.- `disable_early_stopping`: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.- `transformations`: Specifies the feature engineering for each feature column.For `transformations`, the list must have an entry for each column. The outer key field indicates the type of feature engineering for the corresponding column. In this tutorial, you set it to `"auto"` to tell AutoML to automatically determine it.Finally, create the pipeline by calling the helper function `create_pipeline`, which returns an instance of a training pipeline object.
###Code
TRANSFORMATIONS = [
{"auto": {"column_name": "sepal_width"}},
{"auto": {"column_name": "sepal_length"}},
{"auto": {"column_name": "petal_length"}},
{"auto": {"column_name": "petal_width"}},
]
PIPE_NAME = "iris_pipe-" + TIMESTAMP
MODEL_NAME = "iris_model-" + TIMESTAMP
task = Value(
struct_value=Struct(
fields={
"target_column": Value(string_value=label_column),
"prediction_type": Value(string_value="classification"),
"train_budget_milli_node_hours": Value(number_value=1000),
"disable_early_stopping": Value(bool_value=False),
"transformations": json_format.ParseDict(TRANSFORMATIONS, Value()),
}
)
)
response = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)
###Output
_____no_output_____
###Markdown
Now save the unique identifier of the training pipeline you created.
###Code
# The full unique ID for the pipeline
pipeline_id = response.name
# The short numeric ID for the pipeline
pipeline_short_id = pipeline_id.split("/")[-1]
print(pipeline_id)
###Output
_____no_output_____
###Markdown
Get information on a training pipelineNow get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's `get_training_pipeline` method, with the following parameter:- `name`: The Vertex fully qualified pipeline identifier.When the model is done training, the pipeline state will be `PIPELINE_STATE_SUCCEEDED`.
###Code
def get_training_pipeline(name, silent=False):
response = clients["pipeline"].get_training_pipeline(name=name)
if silent:
return response
print("pipeline")
print(" name:", response.name)
print(" display_name:", response.display_name)
print(" state:", response.state)
print(" training_task_definition:", response.training_task_definition)
print(" training_task_inputs:", dict(response.training_task_inputs))
print(" create_time:", response.create_time)
print(" start_time:", response.start_time)
print(" end_time:", response.end_time)
print(" update_time:", response.update_time)
print(" labels:", dict(response.labels))
return response
response = get_training_pipeline(pipeline_id)
###Output
_____no_output_____
###Markdown
Deployment. Training the above model may take upwards of 30 minutes. Once your model is done training, you can calculate the actual time it took to train the model by subtracting `end_time` from `start_time`. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field `model_to_deploy.name`.
###Code
while True:
response = get_training_pipeline(pipeline_id, True)
if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:
print("Training job has not completed:", response.state)
model_to_deploy_id = None
if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:
raise Exception("Training Job Failed")
else:
model_to_deploy = response.model_to_upload
model_to_deploy_id = model_to_deploy.name
print("Training Time:", response.end_time - response.start_time)
break
time.sleep(60)
print("model to deploy:", model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Model informationNow that your model is trained, you can get some information on your model. Evaluate the Model resourceNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model. List evaluations for all slicesUse this helper function `list_model_evaluations`, which takes the following parameter:- `name`: The Vertex fully qualified model identifier for the `Model` resource.This helper function uses the model client service's `list_model_evaluations` method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.For each evaluation (you probably only have one) we then print all the key names for each metric in the evaluation, and for a small set (`logLoss` and `auPrc`) you will print the result.
###Code
def list_model_evaluations(name):
response = clients["model"].list_model_evaluations(parent=name)
for evaluation in response:
print("model_evaluation")
print(" name:", evaluation.name)
print(" metrics_schema_uri:", evaluation.metrics_schema_uri)
metrics = json_format.MessageToDict(evaluation._pb.metrics)
for metric in metrics.keys():
print(metric)
print("logloss", metrics["logLoss"])
print("auPrc", metrics["auPrc"])
return evaluation.name
last_evaluation = list_model_evaluations(model_to_deploy_id)
###Output
_____no_output_____
###Markdown
Deploy the `Model` resourceNow deploy the trained Vertex `Model` resource you created with AutoML. This requires two steps:1. Create an `Endpoint` resource for deploying the `Model` resource to.2. Deploy the `Model` resource to the `Endpoint` resource. Create an `Endpoint` resourceUse this helper function `create_endpoint` to create an endpoint to deploy the model to for serving predictions, with the following parameter:- `display_name`: A human readable name for the `Endpoint` resource.The helper function uses the endpoint client service's `create_endpoint` method, which takes the following parameter:- `display_name`: A human readable name for the `Endpoint` resource.Creating an `Endpoint` resource returns a long running operation, since it may take a few moments to provision the `Endpoint` resource for serving. You call `response.result()`, which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the `Endpoint` resource: `response.name`.
###Code
ENDPOINT_NAME = "iris_endpoint-" + TIMESTAMP
def create_endpoint(display_name):
endpoint = {"display_name": display_name}
response = clients["endpoint"].create_endpoint(parent=PARENT, endpoint=endpoint)
print("Long running operation:", response.operation.name)
result = response.result(timeout=300)
print("result")
print(" name:", result.name)
print(" display_name:", result.display_name)
print(" description:", result.description)
print(" labels:", result.labels)
print(" create_time:", result.create_time)
print(" update_time:", result.update_time)
return result
result = create_endpoint(ENDPOINT_NAME)
###Output
_____no_output_____
###Markdown
Now get the unique identifier for the `Endpoint` resource you created.
###Code
# The full unique ID for the endpoint
endpoint_id = result.name
# The short numeric ID for the endpoint
endpoint_short_id = endpoint_id.split("/")[-1]
print(endpoint_id)
###Output
_____no_output_____
###Markdown
Compute instance scalingYou have several choices on scaling the compute instances for handling your online prediction requests:- Single Instance: The online prediction requests are processed on a single compute instance. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to one.- Manual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified. - Set the minimum (`MIN_NODES`) and maximum (`MAX_NODES`) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.- Auto Scaling: The online prediction requests are split across a scaleable number of compute instances. - Set the minimum (`MIN_NODES`) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.The minimum number of compute instances corresponds to the field `min_replica_count` and the maximum number of compute instances corresponds to the field `max_replica_count`, in your subsequent deployment request.
###Code
MIN_NODES = 1
MAX_NODES = 1
###Output
_____no_output_____
###Markdown
Deploy the `Model` resource to the `Endpoint` resource

Use this helper function `deploy_model` to deploy the `Model` resource to the `Endpoint` resource you created for serving predictions, with the following parameters:

- `model`: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.
- `deployed_model_display_name`: A human readable name for the deployed model.
- `endpoint`: The Vertex fully qualified endpoint identifier to deploy the model to.

The helper function calls the `Endpoint` client service's method `deploy_model`, which takes the following parameters:

- `endpoint`: The Vertex fully qualified `Endpoint` resource identifier to deploy the `Model` resource to.
- `deployed_model`: The requirements specification for deploying the model.
- `traffic_split`: Percent of traffic at the endpoint that goes to this model, specified as a dictionary of one or more key/value pairs. If this is the only model on the endpoint, specify **{ "0": 100 }**, where "0" refers to the model being deployed in this request and 100 means it receives 100% of the traffic. If there are existing models on the endpoint across which the traffic will be split, specify **{ "0": percent, model_id: percent, ... }**, where each `model_id` is the deployed-model ID of an existing model on the endpoint. The percentages must add up to 100.

Let's now dive deeper into the `deployed_model` parameter. It is specified as a Python dictionary with the minimum required fields:

- `model`: The Vertex fully qualified model identifier of the (uploaded) model to deploy.
- `display_name`: A human readable name for the deployed model.
- `disable_container_logging`: Disables logging of container events, such as execution failures (by default, container logging is enabled). Container logging is typically enabled while debugging a deployment and disabled for production.
- `dedicated_resources`: How many compute instances (replicas) are scaled for serving prediction requests.
  - `machine_spec`: The compute instance to provision. The variable you set earlier, `DEPLOY_GPU != None`, determines whether a GPU is used; otherwise only a CPU is allocated.
  - `min_replica_count`: The number of compute instances to initially provision, which you set earlier as the variable `MIN_NODES`.
  - `max_replica_count`: The maximum number of compute instances to scale to, which you set earlier as the variable `MAX_NODES`.

Traffic split

Let's now dive deeper into the `traffic_split` parameter, which is specified as a Python dictionary. You can deploy more than one instance of your model to an endpoint and then set what percentage of traffic goes to each instance. Why would you do that? Perhaps you already have a previous version deployed in production -- call it v1. You got a better model evaluation for v2, but you don't know for certain that it is really better until you deploy it to production. With a traffic split, you might deploy v2 to the same endpoint as v1, but give it only, say, 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision. An illustrative `traffic_split` dictionary is shown below.

Response

The method returns a long running operation `response`. We wait synchronously for the operation to complete by calling `response.result()`, which blocks until the model is deployed.

If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.
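To make the `traffic_split` format concrete, here is a purely illustrative example; the deployed-model ID used for the existing model is invented for the sketch.

```python
# Only one model on the endpoint: the new deployment ("0") receives all traffic.
traffic_split_single = {"0": 100}

# Hypothetical canary rollout: 90% of traffic stays on an existing deployed model
# (its ID is made up here) and 10% goes to the new deployment ("0"). Must sum to 100.
traffic_split_canary = {"1234567890": 90, "0": 10}
```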
###Code
DEPLOYED_NAME = "iris_deployed-" + TIMESTAMP
def deploy_model(
model, deployed_model_display_name, endpoint, traffic_split={"0": 100}
):
if DEPLOY_GPU:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_type": DEPLOY_GPU,
"accelerator_count": DEPLOY_NGPU,
}
else:
machine_spec = {
"machine_type": DEPLOY_COMPUTE,
"accelerator_count": 0,
}
deployed_model = {
"model": model,
"display_name": deployed_model_display_name,
"dedicated_resources": {
"min_replica_count": MIN_NODES,
"max_replica_count": MAX_NODES,
"machine_spec": machine_spec,
},
"disable_container_logging": False,
}
response = clients["endpoint"].deploy_model(
endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split
)
print("Long running operation:", response.operation.name)
result = response.result()
print("result")
deployed_model = result.deployed_model
print(" deployed_model")
print(" id:", deployed_model.id)
print(" model:", deployed_model.model)
print(" display_name:", deployed_model.display_name)
print(" create_time:", deployed_model.create_time)
return deployed_model.id
deployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)
###Output
_____no_output_____
###Markdown
Make an online prediction request

Now make an online prediction request to your deployed model.

Make a test item

You will use synthetic data as the test item. Don't be concerned that we are using synthetic data -- the goal is simply to demonstrate how to make a prediction.
###Code
INSTANCE = {
"petal_length": "1.4",
"petal_width": "1.3",
"sepal_length": "5.1",
"sepal_width": "2.8",
}
###Output
_____no_output_____
###Markdown
Make a prediction

Now that you have a test item, use the helper function `predict_item`, which takes the following parameters:

- `data`: The test item, as a dictionary of feature name/value pairs.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `parameters_dict`: Additional filtering parameters for serving prediction results.

This function calls the prediction client service's `predict` method with the following parameters:

- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource was deployed.
- `instances`: A list of instances (data items) to predict.
- `parameters`: Additional filtering parameters for serving prediction results. *Note*: tabular models do not support additional parameters.

Request

Each instance is formatted as follows, where the values must be specified as strings: { 'feature_1': 'value_1', 'feature_2': 'value_2', ... }. Since the `predict()` method can take multiple items (instances), you send your single test item as a list of one item. As a final step, you package the instances list into Google's protobuf format, which is what you pass to the `predict()` method.

Response

The `response` object contains a list of predictions, where each element corresponds to an instance in the request. In the output for each prediction -- in this case there is just one -- you will see:

- `confidences`: Confidence level in the prediction.
- `displayNames`: The predicted label.
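As a small follow-on, the sketch below shows one way you might pull the highest-confidence label out of a single prediction dictionary. It assumes the response keys listed above (`displayNames` and `confidences`) and uses invented values, so treat it as illustrative only.

```python
def top_prediction(prediction):
    """Return the (label, confidence) pair with the highest confidence."""
    pairs = zip(prediction["displayNames"], prediction["confidences"])
    return max(pairs, key=lambda pair: pair[1])

# Example with made-up values:
example = {"displayNames": ["setosa", "virginica", "versicolor"],
           "confidences": [0.97, 0.02, 0.01]}
print(top_prediction(example))  # -> ('setosa', 0.97)
```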
###Code
def predict_item(data, endpoint, parameters_dict):
parameters = json_format.ParseDict(parameters_dict, Value())
# The format of each instance should conform to the deployed model's prediction input schema.
instances_list = [data]
instances = [json_format.ParseDict(s, Value()) for s in instances_list]
response = clients["prediction"].predict(
endpoint=endpoint, instances=instances, parameters=parameters
)
print("response")
print(" deployed_model_id:", response.deployed_model_id)
predictions = response.predictions
print("predictions")
for prediction in predictions:
print(" prediction:", dict(prediction))
predict_item(INSTANCE, endpoint_id, None)
###Output
_____no_output_____
###Markdown
Undeploy the `Model` resource

Now undeploy your `Model` resource from the serving `Endpoint` resource. Use this helper function `undeploy_model`, which takes the following parameters:

- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.

This function calls the endpoint client service's method `undeploy_model`, with the following parameters:

- `deployed_model_id`: The model deployment identifier returned by the endpoint service when the `Model` resource was deployed.
- `endpoint`: The Vertex fully qualified identifier for the `Endpoint` resource where the `Model` resource is deployed.
- `traffic_split`: How to split traffic among the remaining deployed models on the `Endpoint` resource.

Since this is the only model deployed on the `Endpoint` resource, you can simply leave `traffic_split` empty by setting it to {}. A hypothetical re-split for the multi-model case is sketched below.
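For reference, if other models remained deployed on the endpoint, you would pass a re-split over the remaining deployments instead of an empty dictionary. A hypothetical example, with an invented deployed-model ID:

```python
# Hypothetical: after undeploying this model, send 100% of traffic to the one
# remaining deployed model (its ID is made up for illustration).
remaining_traffic_split = {"9876543210": 100}
```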
###Code
def undeploy_model(deployed_model_id, endpoint):
response = clients["endpoint"].undeploy_model(
endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}
)
print(response)
undeploy_model(deployed_model_id, endpoint_id)
###Output
_____no_output_____
###Markdown
Cleaning up

To clean up all GCP resources used in this project, you can [delete the GCP project](https://cloud.google.com/resource-manager/docs/creating-managing-projects#shutting_down_projects) you used for the tutorial.

Otherwise, you can delete the individual resources you created in this tutorial:

- Dataset
- Pipeline
- Model
- Endpoint
- Batch Job
- Custom Job
- Hyperparameter Tuning Job
- Cloud Storage Bucket
###Code
delete_dataset = True
delete_pipeline = True
delete_model = True
delete_endpoint = True
delete_batchjob = True
delete_customjob = True
delete_hptjob = True
delete_bucket = True
# Delete the dataset using the Vertex fully qualified identifier for the dataset
try:
if delete_dataset and "dataset_id" in globals():
clients["dataset"].delete_dataset(name=dataset_id)
except Exception as e:
print(e)
# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline
try:
if delete_pipeline and "pipeline_id" in globals():
clients["pipeline"].delete_training_pipeline(name=pipeline_id)
except Exception as e:
print(e)
# Delete the model using the Vertex fully qualified identifier for the model
try:
if delete_model and "model_to_deploy_id" in globals():
clients["model"].delete_model(name=model_to_deploy_id)
except Exception as e:
print(e)
# Delete the endpoint using the Vertex fully qualified identifier for the endpoint
try:
if delete_endpoint and "endpoint_id" in globals():
clients["endpoint"].delete_endpoint(name=endpoint_id)
except Exception as e:
print(e)
# Delete the batch job using the Vertex fully qualified identifier for the batch job
try:
if delete_batchjob and "batch_job_id" in globals():
clients["job"].delete_batch_prediction_job(name=batch_job_id)
except Exception as e:
print(e)
# Delete the custom job using the Vertex fully qualified identifier for the custom job
try:
if delete_customjob and "job_id" in globals():
clients["job"].delete_custom_job(name=job_id)
except Exception as e:
print(e)
# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job
try:
if delete_hptjob and "hpt_job_id" in globals():
clients["job"].delete_hyperparameter_tuning_job(name=hpt_job_id)
except Exception as e:
print(e)
if delete_bucket and "BUCKET_NAME" in globals():
! gsutil rm -r $BUCKET_NAME
###Output
_____no_output_____
|
applied-project/Kaggle Credit Card Fraud Detection/pca.ipynb
|
###Markdown
PCA (simple version)

- An unsupervised learning approach.
- After the data has been transformed with PCA, compute an outlier score for each variable using `(variable - mean) / standard deviation`.
- Sum the per-variable scores to obtain the final outlier score.
###Code
import numpy as np


def simple_pca(X_train, X_test, y_train, y_test):
    # Fraction of fraud cases in the full dataset; `data` is assumed to be the
    # credit-card DataFrame loaded earlier in the notebook.
    minority = np.sum(data['Class'] == 1) / len(data)

    # Per-feature z-scores on the training set, summed into one outlier score per row.
    tmp = abs(X_train - np.mean(X_train)) / np.std(X_train)
    outlier_score = np.sum(tmp, axis=1)

    # Flag the highest-scoring rows as outliers, matching the minority-class rate.
    train_outlier_count = int(len(X_train) * minority)
    train_outlier = outlier_score.sort_values(ascending=False)[:train_outlier_count]
    train_pred = y_train.copy()
    train_pred.iloc[:] = 0
    train_pred[train_outlier.index] = 1

    # Score the test set using the training-set mean and standard deviation.
    tmp = abs(X_test - np.mean(X_train)) / np.std(X_train)
    outlier_score = np.sum(tmp, axis=1)
    test_outlier_count = int(len(X_test) * minority)
    test_outlier = outlier_score.sort_values(ascending=False)[:test_outlier_count]
    test_pred = y_test.copy()
    test_pred.iloc[:] = 0
    test_pred[test_outlier.index] = 1

    return train_pred, test_pred
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(data.drop('Class', axis=1),
data['Class'],
test_size=0.2,
random_state=42,
stratify=data['Class'],
shuffle=True)
train_pred, test_pred = simple_pca(X_train, X_test, y_train, y_test)
from sklearn.metrics import classification_report
print('Train')
print(classification_report(y_train, train_pred))
print('Test')
print(classification_report(y_test, test_pred))
###Output
Train
precision recall f1-score support
0 1.00 1.00 1.00 227451
1 0.27 0.27 0.27 394
accuracy 1.00 227845
macro avg 0.63 0.63 0.63 227845
weighted avg 1.00 1.00 1.00 227845
Test
precision recall f1-score support
0 1.00 1.00 1.00 56864
1 0.30 0.30 0.30 98
accuracy 1.00 56962
macro avg 0.65 0.65 0.65 56962
weighted avg 1.00 1.00 1.00 56962
|
HW/HW0.ipynb
|
###Markdown
HW0

In this homework, you'll get set up with Python, `git`, GitHub, and GitHub Pages.

§1. Python

Install Anaconda and set up the PIC16B Python environment as directed [here](https://philchodrow.github.io/PIC16B/installation/). For your convenience, I've included the code required to verify your installation after these instructions.

§2. GitHub

Create an account on [GitHub](https://github.com/).

§3. (Optional, strongly recommended): GitHub Desktop

Download [GitHub Desktop](https://desktop.github.com/), a graphical client for working with `git`. If you do not use GitHub Desktop (or another graphical client), you will need to work with `git` from the command line.

§4. GitHub Pages

Create a professional website. If you already have one on which you can write technical content and code, you are free to use that. Otherwise, you should create your website via [GitHub Pages](https://docs.github.com/en/github/working-with-github-pages/about-github-pages). To do so, you will need to make a repository whose title is `username.github.io`. For example, my repo is `philchodrow.github.io`. You will then need to enable GitHub Pages publishing (under Settings). You can also make choices about the theme and structure of your website. If you are feeling fearless, you can work on getting your website set up with a [custom theme](https://jekyllthemes.io/github-pages-themes). This will require you to learn a lot about the settings and potentially break some things, but can also lead you to a very attractive website. The "safe" (and recommended) approach is to follow the instructions at Barry Clark's [Jekyll Now page](https://github.com/barryclark/jekyll-now). You can fork his GitHub repository and immediately get started customizing your website. There are even more detailed instructions [here](https://www.smashingmagazine.com/2014/08/build-blog-jekyll-github-pages/).

§6. (Optional): Local Website Development

It is possible to do just fine in this course by modifying your blog from the GitHub website. You may find it more convenient to work on your blog locally, from within your favorite text editor. This is possible, and you can even render (view) your website by running a local version of the Jekyll software. This is the software that converts plain text files into the complex HTML pages that you view in your browser. Doing this requires use of the command line. Here's how:

1. [Install Jekyll](https://jekyllrb.com/docs/installation/).
2. Clone your GitHub repository to your local computer.
3. In the main directory of the repository, run the command `jekyll serve` from the command line.
4. Your site is now available in your web browser at `http://127.0.0.1:4000/`. Changes that you make to your site files will be periodically re-rendered -- refresh your browser to see them.
5. After you are done modifying your site, commit and push your changes back to your GitHub repository.
6. After a few minutes, your online website will reflect the changes that you made.
###Code
import tensorflow as tf
print("My name is [name] and I installed Anaconda and TensorFlow")
###Output
_____no_output_____
|
Tarea_Clase1.ipynb
|
###Markdown
**Integer Type**
###Code
a=int(9.395)
print(a)
b=int(13)
print(b, type(b))
###Output
9
13 <class 'int'>
###Markdown
**Float Type**
###Code
a=float(12.395)
print(a)
b=float(9.999)
print(b, type(b))
###Output
12.395
9.999 <class 'float'>
###Markdown
**String Type**
###Code
a="Bienvenido"
b="Carlos"
print(a,b)
val1="Usuario"
print(val1)
val2="Contraseña"
print(val2)
###Output
Bienvenido Carlos
Usuario
Contraseña
###Markdown
**Boolean Type**
###Code
a=True
print("El valor de a es:", a)
b=False
print("El valor de b es:", b,", el cual es de tipo", type(b))
###Output
El valor de a es: True
El valor de b es: False , el cual es de tipo <class 'bool'>
###Markdown
**Set Type**
###Code
pais='Colombia','Argentina','Mexico','Ecuador','Venezuela'
ciud='Bogota','Buenos Aires','Ciudad de Mexico','Quito','Caracas'
print(pais,ciud)
###Output
('Colombia', 'Argentina', 'Mexico', 'Ecuador', 'Venezuela') ('Bogota', 'Buenos Aires', 'Ciudad de Mexico', 'Quito', 'Caracas')
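###Markdown
Note: the comma-separated assignments above actually create tuples, not sets. For comparison, the short sketch below shows a true Python set built with braces or `set()`; the variable names are illustrative only.
###Code
# A set literal uses curly braces and discards duplicates automatically
paises = {'Colombia', 'Argentina', 'Mexico', 'Ecuador', 'Venezuela', 'Colombia'}
print(type(paises)) # <class 'set'>
print(len(paises)) # 5 -- the repeated 'Colombia' is stored only once
# set() builds a set from any iterable, and membership tests are its typical use
ciudades = set(['Bogota', 'Quito', 'Caracas'])
print('Quito' in ciudades) # True
###Output
_____no_output_____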
###Markdown
**List Type**
###Code
parque=['arboles','culumpio','rodadero','tobogan','saltarin','niños']
print(parque)
fe=parque[4]
print(fe)
casa=['alcoba','patio','baño','cocina','balcón','sala','estudio']
print(casa)
fe=casa[0:5]
print(fe)
###Output
['arboles', 'culumpio', 'rodadero', 'tobogan', 'saltarin', 'niños']
saltarin
['alcoba', 'patio', 'baño', 'cocina', 'balcón', 'sala', 'estudio']
['alcoba', 'patio', 'baño', 'cocina', 'balcón']
###Markdown
**Tuple Type**
###Code
tupla1=1,2,3,4,5,6
print(tupla1)
tupla2=tupla1,('primero','segundo','tercero','cuarto','quinto','sexto')
print(tupla2)
###Output
(1, 2, 3, 4, 5, 6)
((1, 2, 3, 4, 5, 6), ('primero', 'segundo', 'tercero', 'cuarto', 'quinto', 'sexto'))
###Markdown
**Dictionary Type**
###Code
estudiante_e={
"nombres":"Johan Sebastian",
"apellidos":"Orjuela Rivera",
"cedula":"1014058500",
"est_civil":"casado",
"celular":"3117813113",
"lugar_nacimiento":"Bogota",
"fecha_nacimiento":"31/10/1995",
}
print("ID del diccionario", estudiante_e.keys())
print("ID del diccionario", estudiante_e.values())
print("ID del diccionario", estudiante_e.items())
print("Numero de celular:", estudiante_e['celular'])
casa_0={}
casa_0['tam']='grande'
casa_0['alcobas']='3'
casa_0['baños']='2'
casa_0['lugar']='Medellin'
casa_0['precio']=140
casa_1={}
casa_1['tam']='pequeña'
casa_1['alcobas']='1'
casa_1['baños']='1'
casa_1['lugar']='Bogota'
casa_1['precio']=120
print(casa_0)
print(casa_1)
compra0=casa_0['precio']
compra1=casa_1['precio']
compratotal=compra0+compra1
print("La compra total fue de:"+ str(compratotal))
###Output
{'tam': 'grande', 'alcobas': '3', 'baños': '2', 'lugar': 'Medellin', 'precio': 140}
{'tam': 'pequeña', 'alcobas': '1', 'baños': '1', 'lugar': 'Bogota', 'precio': 120}
La compra total fue de:260
###Markdown
###Code
()
###Output
_____no_output_____
|
03-Sentiment-Analysis-Assessment.ipynb
|
###Markdown
___ ___ Sentiment Analysis Assessment - Solution Task 1: Perform vector arithmetic on your own wordsWrite code that evaluates vector arithmetic on your own set of related words. The goal is to come as close to an expected word as possible. Please feel free to share success stories in the Q&A Forum for this section!
###Code
# Import spaCy and load the language library. Remember to use a larger model!
# Choose the words you wish to compare, and obtain their vectors
# Import spatial and define a cosine_similarity function
# Write an expression for vector arithmetic
# For example: new_vector = word1 - word2 + word3
# List the top ten closest vectors in the vocabulary to the result of the expression above
###Output
_____no_output_____
###Markdown
CHALLENGE: Write a function that takes in 3 strings, performs a-b+c arithmetic, and returns a top-ten result
###Code
def vector_math(a,b,c):
    pass  # exercise placeholder so the cell runs; one possible implementation is sketched after this cell
# Test the function on known words:
vector_math('king','man','woman')
###Output
_____no_output_____
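###Markdown
One possible way to fill in `vector_math`, shown only as a hedged sketch: it assumes spaCy with a vector-bearing model such as `en_core_web_lg` is installed, and it is not the course's official solution file.
###Code
import spacy
from scipy import spatial

nlp = spacy.load('en_core_web_lg') # assumed large English model with word vectors

def vector_math(a, b, c):
    # Compute a - b + c in vector space and return the ten closest vocabulary words
    cosine_similarity = lambda x, y: 1 - spatial.distance.cosine(x, y)
    new_vector = nlp.vocab[a].vector - nlp.vocab[b].vector + nlp.vocab[c].vector
    similarities = []
    for word in nlp.vocab:
        # Restrict the search to lowercase alphabetic lexemes that have a vector
        if word.has_vector and word.is_lower and word.is_alpha:
            similarities.append((word.text, cosine_similarity(new_vector, word.vector)))
    similarities.sort(key=lambda item: -item[1])
    return [w for w, _ in similarities[:10]]

vector_math('king', 'man', 'woman')
###Output
_____no_output_____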
###Markdown
Task 2: Perform VADER Sentiment Analysis on your own reviewWrite code that returns a set of SentimentIntensityAnalyzer polarity scores based on your own written review.
###Code
# Import SentimentIntensityAnalyzer and create an sid object
from nltk.sentiment.vader import SentimentIntensityAnalyzer # assumes nltk and the vader_lexicon resource are available
sid = SentimentIntensityAnalyzer()
# Write a review as one continuous string (multiple sentences are ok)
review = ''
# Obtain the sid scores for your review
sid.polarity_scores(review)
###Output
_____no_output_____
###Markdown
CHALLENGE: Write a function that takes in a review and returns a score of "Positive", "Negative" or "Neutral"
###Code
def review_rating(string):
    pass  # exercise placeholder so the cell runs; one possible implementation is sketched after this cell
# Test the function on your review above:
review_rating(review)
###Output
_____no_output_____
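###Markdown
A hedged sketch of `review_rating` using NLTK's VADER analyzer; it assumes `nltk` is installed and the `vader_lexicon` resource has been downloaded, and it may differ from the course's solution file.
###Code
from nltk.sentiment.vader import SentimentIntensityAnalyzer
# import nltk; nltk.download('vader_lexicon') # uncomment on first run to fetch the lexicon

sid = SentimentIntensityAnalyzer()

def review_rating(string):
    # Bucket the review into three labels based on the compound score
    compound = sid.polarity_scores(string)['compound']
    if compound > 0:
        return 'Positive'
    elif compound < 0:
        return 'Negative'
    return 'Neutral'

review_rating('The plot was predictable, but the acting was absolutely wonderful.')
###Output
_____no_output_____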
|
lowlight_train.ipynb
|
###Markdown
###Code
# from google.colab import github  # note: google.colab has no `github` module; this unused import would fail
import torch
import torch.nn as nn
import torchvision
import torch.backends.cudnn as cudnn
import torch.optim
import os
import sys
import argparse
import time
import dataloader
import model
import Myloss
import numpy as np
from torchvision import transforms
def weights_init(m):
classname = m.__class__.__name__
if classname.find('Conv') != -1:
m.weight.data.normal_(0.0, 0.02)
elif classname.find('BatchNorm') != -1:
m.weight.data.normal_(1.0, 0.02)
m.bias.data.fill_(0)
def train(config):
os.environ['CUDA_VISIBLE_DEVICES']='0'
scale_factor = config.scale_factor
DCE_net = model.enhance_net_nopool(scale_factor).cuda()
# DCE_net.apply(weights_init)
if config.load_pretrain == True:
DCE_net.load_state_dict(torch.load(config.pretrain_dir))
train_dataset = dataloader.lowlight_loader(config.lowlight_images_path)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=config.train_batch_size, shuffle=True, num_workers=config.num_workers, pin_memory=True)
L_color = Myloss.L_color()
L_spa = Myloss.L_spa()
L_exp = Myloss.L_exp(16)
# L_exp = Myloss.L_exp(16,0.6)
L_TV = Myloss.L_TV()
optimizer = torch.optim.Adam(DCE_net.parameters(), lr=config.lr, weight_decay=config.weight_decay)
DCE_net.train()
for epoch in range(config.num_epochs):
for iteration, img_lowlight in enumerate(train_loader):
img_lowlight = img_lowlight.cuda()
E = 0.6
enhanced_image,A = DCE_net(img_lowlight)
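# Zero-DCE style loss terms combined below:
#   Loss_TV  - illumination smoothness (total variation) penalty on the curve parameter map A
#   loss_spa - spatial consistency between the input and the enhanced image
#   loss_col - color constancy across the RGB channels
#   loss_exp - exposure control pulling local brightness towards the level E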
Loss_TV = 1600*L_TV(A)
# Loss_TV = 200*L_TV(A)
loss_spa = torch.mean(L_spa(enhanced_image, img_lowlight))
loss_col = 5*torch.mean(L_color(enhanced_image))
loss_exp = 10*torch.mean(L_exp(enhanced_image,E))
# best_loss
loss = Loss_TV + loss_spa + loss_col + loss_exp
optimizer.zero_grad()
loss.backward()
torch.nn.utils.clip_grad_norm_(DCE_net.parameters(), config.grad_clip_norm)  # clip_grad_norm_ is the current (in-place) PyTorch API
optimizer.step()
if ((iteration+1) % config.display_iter) == 0:
print("Loss at iteration", iteration+1, ":", loss.item())
if ((iteration+1) % config.snapshot_iter) == 0:
torch.save(DCE_net.state_dict(), config.snapshots_folder + "Epoch" + str(epoch) + '.pth')
if __name__ == "__main__":
parser = argparse.ArgumentParser()
# Input Parameters
parser.add_argument('--lowlight_images_path', type=str, default="data/train_data/")
parser.add_argument('--lr', type=float, default=0.0001)
parser.add_argument('--weight_decay', type=float, default=0.0001)
parser.add_argument('--grad_clip_norm', type=float, default=0.1)
parser.add_argument('--num_epochs', type=int, default=100)
parser.add_argument('--train_batch_size', type=int, default=8)
parser.add_argument('--val_batch_size', type=int, default=8)
parser.add_argument('--num_workers', type=int, default=4)
parser.add_argument('--display_iter', type=int, default=10)
parser.add_argument('--snapshot_iter', type=int, default=10)
parser.add_argument('--scale_factor', type=int, default=1)
parser.add_argument('--snapshots_folder', type=str, default="snapshots_Zero_DCE++/")
parser.add_argument('--load_pretrain', type=bool, default= False)
parser.add_argument('--pretrain_dir', type=str, default= "snapshots_Zero_DCE++/Epoch99.pth")
config = parser.parse_args()
if not os.path.exists(config.snapshots_folder):
os.mkdir(config.snapshots_folder)
train(config)
###Output
_____no_output_____
|
notebooks/05.Events/05.02-OPTIONAL-Widget_Events_2_--_bad_password_generator,_version_1.ipynb
|
###Markdown
*OPTIONAL* Password generator: `observe`Consider a super-simple (and super-bad) password generator widget: given a password length, represented by a slider in the interface, it constructs a sequence of random letters of that length and displays it. This notebook illustrates how to connect the function that calculates the password to the length slider using `observe` but mixes together the code to calculate the password and the code to handle the events generated by the interface Construct the interface (widget)The widget should look like this once constructed:Compose the widget out of three basic widgets, one each for the title, the (currently not set) password, and one for the slider. In the cell below construct each of the basic widgets.
###Code
helpful_title = 0 # Replace with something that displays "Generated password is:"
password_text = 0 # Replace with something that displays "No password set"
password_length = 0 # Replace with slider
###Output
_____no_output_____
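###Markdown
A possible construction of the three basic widgets, sketched with `ipywidgets`; the exact widget classes, labels, and slider range are assumptions about what the solution file uses.
###Code
import ipywidgets as widgets

# Two HTML labels and an integer slider, roughly matching the layout described above
helpful_title = widgets.HTML('Generated password is:')
password_text = widgets.HTML('No password set')
password_length = widgets.IntSlider(description='Length of password', min=8, max=20)
###Output
_____no_output_____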
###Markdown
Combine these three into a single widget...the output should look like the image above.
###Code
password_widget = widgets.VBox(children=[helpful_title, password_text, password_length])
password_widget
# %load solutions/bad-pass-pass1-widgets.py
###Output
_____no_output_____
###Markdown
Calculate the password...The function below calculates the password and should set the value of the `password_text` widget. The first part has been done; you just need to add the line that sets the widget value.
###Code
def calculate_password(change):
import string
from secrets import choice
length = change.new
# Generate a list of random letters of the correct length.
password = ''.join(choice(string.ascii_letters) for _ in range(length))
# Add a line below to set the value of the widget password_text
# %load solutions/bad-pass-pass1-passgen.py
###Output
_____no_output_____
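###Markdown
For reference, a sketch of the completed function: the presumed missing line simply assigns the generated password to the widget's `value` trait.
###Code
def calculate_password(change):
    import string
    from secrets import choice
    length = change.new
    # Generate a list of random letters of the correct length.
    password = ''.join(choice(string.ascii_letters) for _ in range(length))
    # Setting .value updates the text shown by the password_text widget
    password_text.value = password
###Output
_____no_output_____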
###Markdown
...and link password to widgetsFill in the line below. You want `calculate_password` to be called when the value of `password_length` changes. Here is a link to [Widget Events](06-Widget_Events.ipynb) in case you need it.
###Code
# call calculate_password whenever the password length changes
# %load solutions/bad-pass-pass1-observe.py
###Output
_____no_output_____
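###Markdown
A sketch of the wiring, assuming the widgets defined earlier: `observe` registers `calculate_password` as a callback on the slider's `value` trait, so every change of the slider generates a new password.
###Code
# call calculate_password whenever the password length changes
password_length.observe(calculate_password, names='value')
###Output
_____no_output_____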
|
Project_2/COVID/Project_1.ipynb
|
###Markdown
Project 1: Covid-19 *TODO: table of contents* Table of ContentsBackground Knowledge: Spread of Disease1. The Data Science Life Cycle a. Formulating a question or problem b. Acquiring and cleaning data c. Conducting exploratory data analysis d. Using prediction and inference to draw conclusions The Data Science Life Cycle *TODO:* Update resources. Update formulating a question or problem Formulating a question or problem It is important to ask questions that will be informative and that will avoid misleading results. There are many different questions we could ask about Covid-19; for example, many researchers use data to predict the outcomes based on intervention techniques such as social distancing. Question: Take some time to formulate questions you have about this pandemic and the data you would need to answer the questions. In addition, add the link to an article you found interesting with a description and why it interested you. You can find [resources](https://docs.google.com/document/d/1yGSQkqlkroF6Efj3mHvP4sbQXyZM9ddO43YV1FQ75uQ/edit?usp=sharing) here to choose from. Your questions: *here*Data you would need: *here*Article: *link* *TODO:* Update data background. Acquiring and cleaning data We'll be looking at the COVID-19 Data Repository from Johns Hopkins University. You can find the raw data [here](https://github.com/CSSEGISandData/COVID-19/tree/master/csse_covid_19_data/csse_covid_19_time_series). We've cleaned up the datasets a bit, but we will be investigating the number of cases, new cases, deaths, and new deaths for counties in states across the US from March 2020 - May 2021.The following table, `covid_statistics`, contains several statistics collected at the start of each month for every county in the United States. Columns dropped: `UID`, `iso2`, `iso3`, `code3`, `FIPS`, `Country_Region`, `Lat`, `Long_`, `Combined_Key`
###Code
covid_statistics = Table().read_table("data/covid_timeseries.csv").drop(0, 1, 2, 3, 4, 7,
'Lat', 'Long_', 'Combined_Key')
covid_statistics
#Here, we are relabeling the columns to have more accurate names
covid_statistics = covid_statistics.relabel(make_array('Admin2', 'Province_State', 'month', 'cases',
'cases_new', 'deaths', 'deaths_new'),
make_array('County', 'State', 'Date', 'Cases',
'New Cases', 'Deaths', 'New Deaths'))
covid_statistics
###Output
_____no_output_____
###Markdown
Question: It's important to evaluate our data source. What do you know about Johns Hopkins University? What motivations do they have for collecting this data? What data is missing? *Insert answer* Question: We want to learn more about the dataset. First, how many total rows are in this table?
###Code
total_rows = ...
###Output
_____no_output_____
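###Markdown
A possible answer, assuming the `datascience` Table API used elsewhere in this notebook: `num_rows` gives the total number of rows.
###Code
total_rows = covid_statistics.num_rows
total_rows
###Output
_____no_output_____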
###Markdown
*Insert answer here* Question: What does each row represent? *Insert answer here* Conducting exploratory data analysis Visualizations help us to understand what the dataset is telling us. Compare the county with the most confirmed cases on April 1st with the next 9 most confirmed cases in a bar chart. Part 1 Question: First, sort the dataset to show the counties with the highest number of new cases for a given month.
###Code
new_cases_sorted = covid_statistics.sort('...', descending=...)
new_cases_sorted
#KEY
new_cases_sorted = covid_statistics.sort('New Cases', descending=True)
new_cases_sorted
###Output
_____no_output_____
###Markdown
Question: Now, cut down the table to only have the top twenty rows from new_cases_sorted above.
###Code
top_twenty = new_cases_sorted...(np.arange(20))
top_twenty
#KEY
top_twenty = new_cases_sorted.take(np.arange(20))
top_twenty
###Output
_____no_output_____
###Markdown
Question: Next, create a bar chart to visualize the comparison between the top_twenty counties for the number of new cases.
###Code
top_twenty...("...", "...")
top_twenty.barh("County", "New Cases")
###Output
_____no_output_____
###Markdown
Question: Let's look at the counties in California. First, return a table that only has the California counties. Then, select the counties from the table you want to compare to each other.
###Code
ca_cases = covid_statistics.where("...", are.equal_to("..."))
ca_cases
#KEY
ca_cases = covid_statistics.where("State", are.equal_to("California"))
ca_cases
select_counties = ["Los Angeles", "Alameda", "Orange", "San Bernandino", "Bakersfield"]
#This will take the counties you choose for the comparison.
my_counties = ca_cases.where("County", are.contained_in(select_counties))
my_counties
###Output
_____no_output_____
###Markdown
Question: Now make another bar chart using your selected counties and the number of new cases in May. First, filter out the data to contain information about May only. **Hint:** Use the number of the month.
###Code
#Filter table to contain only May data
may_my_counties = my_counties.where('...', are.containing('...'))
may_my_counties
#KEY
may_my_counties = my_counties.where('Date', are.containing("5"))
may_my_counties
# Use this cell to make a bar chart of new cases in May
...
#KEY
may_my_counties.barh("County", "New Cases")
###Output
_____no_output_____
###Markdown
Question: What are some possible reasons for the disparities in certain counties? Why do counties appear twice? Hint: Think about the size of the counties. *Insert answer here.* Part 2 A disease will spread more when there are more people in a population to spread to. Let's look at the population of the states to compare the percentages based on the number of people. Here is a table with the states and their populations.
###Code
pop_by_state = Table().read_table("data/pop_by_state.csv")
pop_by_state
###Output
_____no_output_____
###Markdown
Question: First, group the covid statistics to show the number of cases for each state and the sum of the cases.
###Code
#We are grouping all the counties into their states and taking the sum of the cases using this code.
grouped_by_state = covid_statistics.group("State", sum)
grouped_by_state
#Now we will drop the County sum and Date sum columns because they
#do not contain meaningful numeric totals and we do not need them anymore.
grouped_by_state = grouped_by_state.drop(1, 2)
grouped_by_state
###Output
_____no_output_____
###Markdown
Question: Now that we have it grouped by state, let's first look at the number of cases in June so we can compare it to the percentages we will look at later.
###Code
#Run this cell to see the ten states with the highest number of cases in June
grouped_by_state.sort("June sum", descending = True).take(np.arange(10)).barh("State", "June sum")
###Output
_____no_output_____
###Markdown
Question: Now join this table with the pop_by_state table.
###Code
#We are going to join the two tables by providing the column they share which is "State".
with_pop = grouped_by_state.join("State", pop_by_state)
with_pop
###Output
_____no_output_____
###Markdown
Question: Add a column called "Percentage" that has the number of cases collected in June divided by the population.
###Code
#First, we want to find the columns that would make up an array of the percentages.
june_cases = with_pop.column("6/1/2020 sum")
population = ...
percentage = (.../...)*100
percentage
#KEY
june_cases = with_pop.column("6/1/2020 sum")
population = with_pop.column("Population")
percentage = (june_cases/population)*100
percentage
with_pct = with_pop.with_column("...", ...)
with_pct
#KEY
with_pct = with_pop.with_column("Percentage", percentage)
with_pct
###Output
_____no_output_____
###Markdown
Question: Like we did in the previous section, sort with_pct and include the top ten states with the highest percentage of cases. Then, create a bar chart to compare the states with the highest percentages of cases.
###Code
top_ten_pct = ...
#KEY
top_ten_pct = with_pct.sort("Percentage", descending = True).take(np.arange(10))
top_ten_pct
#fill in the code to make the bar chart looking at the States and their Percentages.
...
#KEY
top_ten_pct.barh("State", "Percentage")
###Output
_____no_output_____
###Markdown
Question: What differences do you see from the bar chart of the states when we just saw the number of cases? Give some possible reasons for the differences. *Insert answer here.* Using prediction and inference to draw conclusions Now that we have some experience making these visualizations, let's go back to exponential growth. We know that, without intervention, a disease can behave like a rumor and spread at an alarming rate. From the previous section, we also know that we need to take into account the population of the region when looking at the number of cases. Now we will read in two tables, Covid by State and Population by State, in order to look at the percentage of cases and the growth of cases over time.
###Code
covid_by_state = Table().read_table("data/covid_by_state.csv")
covid_by_state.show(5)
#run this cell to get a line plot!
def plot_states(state_1, state_2):
covid_by_state.select(0, state_1, state_2).plot(0)
interact(plot_states,
state_1=Dropdown(options=covid_by_state.labels[1:]),
state_2=Dropdown(options=covid_by_state.labels[1:]));
###Output
_____no_output_____
|
notebooks/tutorials/landscape_evolution/space/SPACE_large_scale_eroder_user_guide_and_examples.ipynb
|
###Markdown
User guide and example for the Landlab SPACE_large_Scale_eroder componentThis notebook provides a brief introduction and user's guide for the Stream Power And Alluvial Conservation Equation large_Scale_eroder (SPACE_large_Scale_eroder) component for landscape evolution modeling. The SPACE_large_Scale_eroder is based on the SPACE component and is designed to be more robust against large time steps and coded in such a way that mass is explicitly conserved during calculation. This notebook combines two documents, a User's Manual and a notebook-based example, written by Charles M. Shobe to accompany the following publication:Shobe, C. M., Tucker, G. E., & Barnhart, K. R. (2017). The SPACE 1.0 model: a Landlab component for 2-D calculation of sediment transport, bedrock erosion, and landscape evolution. Geoscientific Model Development, 10(12), 4577-4604, [https://doi.org/10.5194/gmd-10-4577-2017](https://doi.org/10.5194/gmd-10-4577-2017).This notebook is adjusted from the SPACE notebook created by Greg Tucker in July 2021 and created to complement the development of the SPACE_large_Scale_eroder. *(User's Manual and example notebook written by C.M. Shobe in July 2017; combined into a notebook, updated for compatibility with Landlab 2.x, and added to the Landlab tutorials collection by Greg Tucker, July 2021. Later adjusted to demonstrate the functionality of the SPACE_large_Scale_eroder by Benjamin Campforts in October 2021.)* Background on SPACE_large_Scale_eroder componentThe Landlab SPACE_large_Scale_eroder (Stream Power with Alluvium Conservation and Entrainment) component computes sediment transport and bedrock erosion across two-dimensional model landscapes. The SPACE model provides advantages relative to many other fluvial erosion models in that it 1) allows simultaneous erosion of sediment and bedrock, 2) explicitly treats sediment fluxes rather than relying on a proxy for bed cover, and 3) is easily coupled with other surface process components in Landlab. The SPACE component enhances Landlab’s functionality by enabling modeling of bedrock-alluvial channels, rather than simply using parameterized sediment-flux-dependent incision models.This user manual teaches users how to use the SPACE component using two examples provided in Shobe et al. (2017).This user manual serves as a supplement to that manuscript.Prerequisites: A working knowledge of the Python programming language (SPACE and Landlab support Python 3.x) as well as the NumPy and MatPlotLib libraries. Basic familiarity with the Landlab modeling toolkit (see Hobley et al., 2017 GMD, and Barnhart et al., 2020 eSurf) is recommended.
Model description Input parameters- **Sediment erodibility** $K_s$: Governs the rate of sediment entrainment; may be specified as a single floating point number, an array of length equal to the number of grid nodes, or a string naming an existing grid field.- **Bedrock erodibility** $K_r$: Governs the rate of bedrock erosion; may be specified as a single floating point number, an array of length equal to the number of grid nodes, or a string naming an existing grid field.- **Fraction of fine sediment** $F_f$: The unitless fraction (0–1) of rock that does not get converted to sediment, but is assumed to exit the model domain as “fine sediment,” or wash load.- **Sediment porosity** $\phi$: The unitless fraction (0–1) of sediment thickness caused by pore space.- **Sediment entrainment length scale** $H_*$: Length scale governing the shape of the exponential sediment entrainment and bedrock erosion functions. $H_*$ may be thought of as reflecting bedrock surface roughness, with larger $H_*$ representing a rougher bedrock surface.- **Effective settling velocity** $V$: Settling velocity of sediment after accounting for the upward effects of turbulence. For details, see discussion by Davy and Lague, 2009.- **Stream power exponent** $m$: Exponent on drainage area or discharge in the stream power framework. Generally $\approx 0.5$.- **Stream power exponent** $n$: Exponent on channel slope in the stream power framework. Generally $\approx 1$.- **Sediment erosion threshold** $\omega_{cs}$: Threshold erosive power required to entrain sediment.- **Bedrock erosion threshold** $\omega_{cr}$: Threshold erosive power required to erode bedrock.- **Discharge field**: The field name or array to use for water discharge. The default is to use the grid field `surface_water__discharge`, which is simply drainage area multiplied by the default rainfall rate (1 m/yr). To use custom spatially/temporally varying rainfall, use `water__unit_flux_in` to specify water input to the `FlowAccumulator`. Model VariablesVariables listed here are updated by the component at the grid locations listed. NOTE: because flow routing, calculation of discharge, and calculation of flow depth (if applicable) are handled by other Landlab components, variables such as water discharge and flow depth are not altered by the SPACE model and are not listed here.Note that the SPACE_large_Scale_eroder does not currently support different numerical solvers. A 'basic' (default) explicit forward-time extrapolation is used. This implies that the solution will become unstable if the time step is too large, so care must be taken when selecting a timestep. - `soil__depth`, node, [m]: Thickness of soil (also called sediment or alluvium) at every node. The name “soil” was used to match existing Landlab components. Soil thickness is calculated at every node incorporating the effects of sediment entrainment and deposition and bedrock erosion.- `sediment__flux`, node, [m$^3$/yr]: The volumetric flux of sediment at each node. Sediment flux is used to calculate sediment deposition rates. Steps of a SPACE_large_Scale_eroder modelNote: these steps are for a SPACE model that is not coupled to any other Landlab components. To see examples of how to couple Landlab components, please refer to the Landlab documentation: [http://landlab.github.io](http://landlab.github.io). Step 1: Import the necessary libraries The `SPACE_large_Scale_eroder` component is required, as are the model grid component and a flow routing component.
Here, we use the `PriorityFloodFlowRouter` component that takes care of routing the flow across flats or pits in the digital elevation model, calculates the flow direction as well as the flow accumulation.
###Code
## Import Numpy and Matplotlib packages
import numpy as np
import matplotlib.pyplot as plt # For plotting results; optional
## Import Landlab components
# Flow routing
from landlab.components import PriorityFloodFlowRouter
# SPACE model
from landlab.components import SpaceLargeScaleEroder # SpaceLargeScaleEroder model
## Import Landlab utilities
from landlab import RasterModelGrid # Grid utility
from landlab import imshow_grid # For plotting results; optional
###Output
_____no_output_____
###Markdown
Two Landlab components are essential to running the SPACE model: the model itself, and the `PriorityFloodFlowRouter`, which calculates drainage pathways, topographic slopes, and surface water discharge across the grid. In addition to the relevant process components, some Landlab utilities are required to generate the model grid (in this example `RasterModelGrid`) and to visualize output (`imshow_grid`). Note that while it is possible to visualize output through functionality in other libraries (e.g., matplotlib), `imshow_grid` provides a simple way to generate 2-D maps of model variables.Most Landlab functionality requires the Numpy package for scientific computing in python. The matplotlib plotting library has also been imported to aid visualization of results. Step 2: Define the model domain and initial conditionsThe SPACE component works on raster grids. For this example we will use a synthetic raster grid. An example and description of the Landlab raster model grid are given in (Shobe et al., 2017), with a more complete explanation offered in Hobley et al. (2017) and Barnhart et al. (2020). In addition to using user-defined, synthetic model grids, it is also possible to import digital elevation models for use as a model domain (see the tutorial *reading_dem_into_landlab*). In this example, we create a synthetic, square model domain by creating an instance of the RasterModelGrid. In this case, the domain will be a plane slightly tilted towards the lower-left (southwest) corner with random micro-scale topographic roughness to force flow convergence and channelization. The grid is composed of 20 rows and 20 columns for a total of 400 nodes, with user-defined spacing.Once the grid has been created, the user defines a grid field to contain values of land surface elevation, and then imposes the desired initial condition topography on the model grid. In the case shown below, the field `topographic__elevation` is added to the model grid and given initial values of all zeros. After that, initial model topography is added to the field. To create a plane tilted to the southwest corner, which is referenced by $(x,y)$ coordinate pair (0,0), topographic elevation is modified to depend on the $x$ and $y$ coordinates of each grid node. Then, randomized micro-scale topographic roughness is added to the model grid. While not strictly necessary for the `SPACE_large_Scale_eroder` model to run, the micro-roughness allows flow convergence, channelization, and the development of realistic landscapes.In this example, we initialize the model domain with 2 meters of sediment thickness at every core (non-boundary) node. The sediment thickness will shrink over time as water mobilizes and removes sediment. To do this, the fields `soil__depth` and `bedrock__elevation` must be added to the model grid. If they are not added, the SPACE model will create them. In that case, however, the default sediment thickness is zero and the default bedrock topography is simply the provided topographic elevation.
###Code
# Set grid parameters
num_rows = 20
num_columns = 20
node_spacing = 100.0
# track sediment flux at the node adjacent to the outlet at lower-left
node_next_to_outlet = num_columns + 1
# Instantiate model grid
mg = RasterModelGrid((num_rows, num_columns), node_spacing)
# add field ’topographic elevation’ to the grid
mg.add_zeros("node", "topographic__elevation")
# set constant random seed for consistent topographic roughness
np.random.seed(seed=5000)
# Create initial model topography:
# plane tilted towards the lower−left corner
topo = mg.node_y / 100000.0 + mg.node_x / 100000.0
# add topographic roughness
random_noise = (
np.random.rand(len(mg.node_y)) / 1000.0
) # impose topography values on model grid
mg["node"]["topographic__elevation"] += topo + random_noise
# add field 'soil__depth' to the grid
mg.add_zeros("node", "soil__depth")
# Set 2 m of initial soil depth at core nodes
mg.at_node["soil__depth"][mg.core_nodes] = 2.0 # meters
# Add field 'bedrock__elevation' to the grid
mg.add_zeros("bedrock__elevation", at="node")
# Sum 'soil__depth' and 'bedrock__elevation'
# to yield 'topographic elevation'
mg.at_node["bedrock__elevation"][:] = mg.at_node["topographic__elevation"]
mg.at_node["topographic__elevation"][:] += mg.at_node["soil__depth"]
###Output
_____no_output_____
###Markdown
Step 3: Set the boundary conditionsThe user must determine the boundary conditions of the model domain (i.e., determine across which boundaries water and sediment may flow). Boundary conditions are controlled by setting the status of individual nodes or grid edges (see Hobley et al., 2017). We will use a single corner node as an “open” boundary and all other boundary nodes will be “closed”. We first use set closed boundaries at grid edges to ensure that no mass (water or sediment) may cross the model boundaries. Then, set watershed boundary condition outlet id is used to open (allow flow through) the lower-left corner of the model domain.
###Code
# Close all model boundary edges
mg.set_closed_boundaries_at_grid_edges(
bottom_is_closed=True, left_is_closed=True, right_is_closed=True, top_is_closed=True
)
# Set lower-left (southwest) corner as an open boundary
mg.set_watershed_boundary_condition_outlet_id(
0, mg["node"]["topographic__elevation"], -9999.0
)
###Output
_____no_output_____
###Markdown
In this configuration, the model domain is set to drain water and sediment out of the only open boundary on the grid, the lower-left corner. There are several options for changing boundary conditions in Landlab. See Hobley et al. (2017) or the Landlab [online documentation](https://landlab.readthedocs.io). Step 4: Initialize the SPACE_large_Scale_eroder component and any other components usedLike most Landlab components, SPACE is written as a Python class. The class was imported at the beginning of the driver script (step 1). In this step, the user declares the instance of the SPACE class and sets any relevant model parameters. The same must be done for any other components used.
###Code
# Instantiate flow router
fr = PriorityFloodFlowRouter(mg, flow_metric="D8")
# Instantiate SPACE model with chosen parameters
sp = SpaceLargeScaleEroder(
mg,
K_sed=0.01,
K_br=0.001,
F_f=0.0,
phi=0.0,
H_star=1.0,
v_s=5.0,
m_sp=0.5,
n_sp=1.0,
sp_crit_sed=0,
sp_crit_br=0,
)
###Output
_____no_output_____
###Markdown
Step 5: Run the time loopThe `SPACE_large_Scale_eroder` component calculates sediment entrainment and deposition, bedrock erosion, and changes in land surface elevation over time. The code shown below is an example of how to run the `SPACE_large_Scale_eroder` model over several model timesteps. In the example below, SPACE is run in a loop that executes until elapsed model time has reached a user-defined run time. The user is also responsible for choosing the model timestep. Within the loop, the following steps occur:1. The flow router (`PriorityFloodFlowRouter`) runs first to determine topographic slopes and water discharge at all nodes on the model domain. Within this component, any nodes located in local topographic minima (i.e., nodes that water cannot drain out of) are mapped to establish flow paths across the surface of these “lakes". Looking for depressions is optional. However, because the SPACE_large_Scale_eroder model may in certain situations create local minima, using the depression finder and router can prevent the development of fatal instabilities.2. The SPACE model runs for the duration of a single timestep, computing sediment transport, bedrock erosion, and topographic surface evolution.3. The elapsed time is updated.
###Code
# Set model timestep
timestep = 1.0 # years
# Set elapsed time to zero
elapsed_time = 0.0 # years
# Set timestep count to zero
count = 0
# Set model run time
run_time = 500.0 # years
# Array to save sediment flux values
sed_flux = np.zeros(int(run_time // timestep))
while elapsed_time < run_time: # time units of years
# Run the flow router
fr.run_one_step()
# Run SPACE for one time step
sp.run_one_step(dt=timestep)
# Save sediment flux value to array
sed_flux[count] = mg.at_node["sediment__flux"][node_next_to_outlet]
# Add to value of elapsed time
elapsed_time += timestep
# Increase timestep count
count += 1
###Output
_____no_output_____
###Markdown
Visualization of results Sediment flux map
###Code
# Instantiate figure
fig = plt.figure()
# Instantiate subplot
plot = plt.subplot()
# Show sediment flux map
imshow_grid(
mg,
"sediment__flux",
plot_name="Sediment flux",
var_name="Sediment flux",
var_units=r"m$^3$/yr",
grid_units=("m", "m"),
cmap="terrain",
)
# Export figure to image
fig.savefig("sediment_flux_map.eps")
###Output
_____no_output_____
###Markdown
SedimentographOnce the data required for the time series has been saved during the time loop, the time series may be plotted using standard matplotlib plotting commands:
###Code
# Instantiate figure
fig = plt.figure()
# Instantiate subplot
sedfluxplot = plt.subplot()
# Plot data
sedfluxplot.plot(np.arange(500), sed_flux, color="k", linewidth=3.0)
# Add axis labels
sedfluxplot.set_xlabel("Time [yr]")
sedfluxplot.set_ylabel(r"Sediment flux [m$^3$/yr]")
###Output
_____no_output_____
|
_notebooks/2020-07-29-saa-c02-chad-smith-pearson-live-lesson.ipynb
|
###Markdown
SAA-C02 AWS solutions architect notes> Chad Smith's video course- toc: true- badges: false- comments: false- categories: ['AWS', 'Revision']- image: http://i.imgur.com/AlR4Rmk.png table {font-size:100%; white-space:inherit}table td {max-width:inherit}I had my exam booked for 19/03 and then Covid hit. I actually turned up to the venue in Crawley only to be greeted by a hastily written note taped on the door telling me all exams are cancelled.Having not had an email, and having studied fairly hard for many months we can say I wasn't overjoyed.It's all good now, I've rescheduled for the remote exam and have restarted my revision.Here is the course outline for Chad Smith's live lessons course from O'Reilly [^1]. | Module | Lesson | Section ||----------------------------------|--------------------------------------------|-------------------------------------------------|| 1. Overview | 1 Exam Strategies | 1. Logistics || | | 2. Exam Guide || | | 3.Well-Architected Framework || | | 4. Exam Question Domains || 2. Resilient Architectures | 2. Multi-Tier | 1. Resilient VPC Architectures || | | 2. Resilient Application Architectures || | | 3. Resilient Serverless Architectures || | | 4. [Question Breakdown](https://jonwhittlestone.github.io/notes/aws/revision/2020/07/29/saa-c02-chad-smith-pearson-live-lesson.htmlChad's-Question-Breakdown) || | 3. Highly Available | 1. Definitions || | | 2. AWS Global Infrastructure || | | 3. [Questions Breakdown](https://jonwhittlestone.github.io/notes/aws/revision/2020/07/29/saa-c02-chad-smith-pearson-live-lesson.htmlChad's-Question-Breakdown) || | 4. Design Decoupling Mechanisms | 1. Decoupling with ELB || | | 2. Decoupling with AWS Lambda and S3 || | | 3. Decoupling with SNS, SQS, Auto Scaling || | | 4. Question Breakdown || | 5. Appropriate Resilient Storage | 1. EBS Resilience || | | 2. EFS Resilience || | | 3. S3 Resilience || | | 4. Question Breakdown || 3. High-Performing Architectures | 6. Identify Elastic/Scalable compute | 1. Elasticity with Unitfied Auto Scaling || | | 2. Elasticity with Managed services || | | 3. Question Breakdown || | 7. Select high-performing/scalable storage | 1. Block-based storage perf || | | 2. File-based storage perf || | | 3. Object-based storage perf || | | 4. Caching perf - Cloudfront || | | 5. Caching perf - Elasticache || | | 6. Question Breakdown || | 8. Select high-performance Networking | 1. VPC perf || | | 2. Single-node perf || | | 3. Hybrid network perf || | | 4. Question Breakdown || | 9. Select high-perf database solutions | 1. RDS perf || | | 2. DynamoDB perf || | | 3. Question Breakdown || 4. Secure Apps and Architectures | 10. Secure access to AWS resources | 1. Account-based access control || | | 2. User-based access || | | 3.Resource-based access || | | 4. Question Breakdown || | 11.Design secure app tiers | 1. Design secure VPC internal net || | | 2. Design secure VPC egress || | | 3.Securing app access || | | 4. Monitoring application activity || | | 5. Question Breakdown || | 12. Appropriate data security options | 1. Secure data at-rest || | | 2. Secure data in-transit with SSL || | | 3. Secure data in-transit with network features || | | 4. Key Management solutions || | | 5. Question Breakdown || 5. Cost-Optimised Architectures | 13. Cost-effective storage | 1. Block and File storage costs || | | 2. Object Storage costs || | | 3. Question Breakdown || | 14. Cost-effective compute & DB | 1. EC2 Cost optimisation || | | 2.ECS and Lambda Cost optimisation || | | 3. Database cost optimisation || | | 4. 
Question Breakdown || | 15. Cost optimised network architectures | 1. VPC cost optimisation || | | 2. Regional & Internet network transfer costs || | | 3. Question Breakdown || | | | QuestionsThis is a good video course because at the end of each lesson, the instructor takes you through a couple of sample questions and explains the reasoning behind the correct/incorrect answers.Questions that I have devised may be applicable. This technique came to me with the [Cornell Method](http://lsc.cornell.edu/notes.html?utm_source=hackernewsletter&utm_medium=email&utm_term=learn) of note-taking. L2: Resilient Architectures > Multi-Tier [L2 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_02_02_04)> An application is currently hosted on an EC2 Instance and consists of static images, Java code and a MySQL database. What steps could be performed to improve the resilience? (pick two.)- Move the database to RDS and enable Multi-AZ.- Resize the EC2 instance to increase memory and CPU.- Move the static images and JavaScript to an EFS volume.- Move the static images and JavaScript to an S3 bucket.- Move the static images/JavaScript/Java to one EBS volume, and the database to a second volume.
###Code
#collapse
answers = '''
✔️ Move the database to RDS and enable Multi-AZ
- Resize the EC2 instance to increase memory and CPU.
- Move the static images and Javascript to an EFS volume.
✔️ Move the static images and JavaScript to an S3 bucket.
- Move the static images/Javascript/Java to one EBS volume, and the database to a second volume.
'''
###Output
_____no_output_____
###Markdown
> As an AWS network architect you are responsible for improving the resilience of an existing VPC network with the following configuration: Two AZ with public and private subnets, Internet Gateway and an EC2 NAT instance deployed in one public subnet for private subnet outbound internet traffic. Which of the following recommendations would most improve the resilience of the network architecture?- Deploy public and private subnets into a third AZ.- Upsize the EC2 NAT instance.- Deploy a second EC2 NAT instance in the second AZ.- Replace the EC2 NAT instance with a NAT Gateway.
###Code
#collapse
answers = '''
- Deploy public and private subnets into a third AZ.
- Upsize the EC2 NAT instance.
- Deploy a second EC2 NAT instance in the second AZ.
✔️ Replace the EC2 NAT instance with a NAT Gateway.
- _Replacing a SPOF_
'''
###Output
_____no_output_____
###Markdown
---

## L3: Resilient Architectures > Design Highly Available and/or Fault-Tolerant Architectures

[L3 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_02_03_03)

> Assuming your application infrastructure has an availability requirement of 99.99%, which of the following resilience strategies would NOT achieve the required uptime?

- Deploying the database back end via RDS with Multi-AZ enabled.
- Deploying infrastructure via CloudFormation templates. In a disaster, re-deploy from scratch.
- Monitoring on all application layer KPIs with sensitive alarms and early notification, automated mitigation wherever possible.
- All web services are hosted behind ALB and use Auto Scaling, both in multiple availability zones.
###Code
#collapse
answers = '''
- Deploying the database back end via RDS with Multi-AZ enabled.
✔️ Deploying infrastructure via CloudFormation templates. In a disaster, re-deploy from scratch.
- Monitoring on all application layer KPIs with sensitive alarms and early notification, automated mitigation wherever possible.
- All web services are hosted behind ALB and use Auto Scaling, both in multiple availability zones.
'''
###Output
_____no_output_____
###Markdown
> As an AWS application architect, you've been asked to design a multi-tier application infrastructure that is highly available AND fault tolerant end-to-end. Which of the following solutions would meet these requirements?

- Elastic Load Balancer, Auto Scaling on EC2, RDS Multi-AZ.
- CloudFront, Elastic Load Balancer, Auto Scaling on EC2, RDS Multi-AZ.
- CloudFront, S3, Elastic Load Balancer, ECS on EC2, RDS Aurora Serverless.
- S3, API Gateway, Lambda, DynamoDB.
###Code
#collapse
answers = '''
- Elastic Load Balancer, Auto Scaling on EC2, RDS Multi-AZ.
- CloudFront, Elastic Load Balancer, Auto Scaling on EC2, RDS Multi-AZ.
- CloudFront, S3, Elastic Load Balancer, ECS on EC2, RDS Aurora Serverless.
✔️ S3, API Gateway, Lambda, DynamoDB.
'''
###Output
_____no_output_____
###Markdown
---

## L4: Resilient Architectures > Design Decoupling Mechanisms

[L4 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_02_04_04)

> An application team supports a service that runs on a single EC2 instance with an EIP attached. The service accepts HTTP requests and performs asynchronous work before placing results in an S3 bucket. There is a new requirement to improve the overall resilience of the application. Which of the following decoupling solutions will best improve the resilience of the infrastructure?

- Create an AMI of the instance. Launch two instances from the AMI and place them behind an Application Load Balancer.
- Create an AMI of the instance. Create an Auto Scaling group using the AMI in a Launch Template, and associate the ASG with an Application Load Balancer.
- Create an SQS Queue. Place requests in the queue, and migrate the app code to a Lambda function that is triggered by messages in the queue.
- Create an SQS Queue. Place requests in the queue and poll the queue from the EC2 instance.
###Code
#collapse
answers = '''
- Create an AMI of the instance. Launch two instances from the AMI and place them behind an Application Load Balancer.
- Create an AMI of the instance. Create an Auto Scaling group using the AMI in a Launch Template, and associate the ASG with an Application Load Balancer.
✔️ Create an SQS Queue. Place requests in the queue, and migrate the app code to a Lambda function that is triggered by messages in the queue.
- Create an SQS Queue. Place requests in the queue and poll the queue from the EC2 instance
'''
###Output
_____no_output_____
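###Markdown
To make the winning pattern above concrete, here is a minimal sketch of the SQS-plus-Lambda decoupling using boto3. The queue name, Lambda function name and message payload are illustrative assumptions rather than values from the course, and error handling is omitted.
```python
import json
import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Create the queue that decouples request acceptance from processing.
queue_url = sqs.create_queue(QueueName="work-queue")["QueueUrl"]
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

# The web tier only has to enqueue the request and return immediately.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"request_id": "123", "payload": "do-some-work"}),
)

# Wire the queue to an existing Lambda function (hypothetical name "worker"),
# so AWS polls the queue and invokes the function in batches.
lambda_client.create_event_source_mapping(
    EventSourceArn=queue_arn,
    FunctionName="worker",
    BatchSize=10,
)
```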
###Markdown
> An application architecture consists of an Auto Scaling Group of EC2 instances that communicates with an RDS database for storing relational data. During the daily peak, database writes overload the RDS instance and impact the customer experience. You've been asked to evaluate a solution that will protect the user experience during peak load. Which of the following architectural changes will best achieve this?

- During peak load, submit database writes to an SQS Queue and process the queue asynchronously after the peak has passed.
- Upsize the RDS database instance to increase CPU and memory available during peak.
- Provision read replicas to separate RDS read requests from the primary write endpoint.
- Migrate the database to DynamoDB and provision the table as On-Demand.
###Code
#collapse
answers = '''
✔️ During peak load, submit database writes to an SQS Queue and process the queue asynchronously after the peak has passed.
- Upsize the RDS database instance to increase CPU and memory available during peak.
- Provision read replicas to separate RDS read requests from the primary write endpoint.
- Migrate the database to DynamoDB and provision the table as On-Demand.
'''
###Output
_____no_output_____
###Markdown
---

## L5: Resilient Architectures > Appropriate storage

Chad covers availability (S3 Intelligent-Tiering is 3 9s, 99.9%) and durability metrics for instance storage, EBS and EFS, as well as whether each service is scoped to the Availability Zone or the Region (S3-Standard).

Durability for EBS is measured as an 'Annual Failure Rate' (AFR) of 0.1% - 0.2%, whereas the durability of EBS snapshots and S3 is 11 9s.

[L5 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_02_05_04)

> Your team has been asked to implement an AWS storage infrastructure that can support multiple AZs within a region. Multiple EC2 instances will require access to the data. High availability is more important than performance. Which of the following solutions meet the requirements with the least operational overhead?

- AWS Storage Gateway in volume cache mode. Data stored in S3.
- Individual EBS volumes attached to instances. Data downloaded from S3.
- GlusterFS installed on all instances with multiple partitions and replicas of data.
- EFS volume deployed in the region. Each EC2 instance mounts the volume via NFS.
###Code
#collapse
answers = '''
- AWS Storage Gateway in volume cache mode. Data stored in S3.
- Individual EBS volumes attached to instances. Data downloaded from S3.
- GlusterFS installed on all instances with multiple partitions and replicas of data.
✔️ EFS volume deployed in the region. Each EC2 instance mounts the volume via NFS.
'''
###Output
_____no_output_____
###Markdown
> An application is running on a singleton EC2 instance with no opportunity for horizontal scaling. The application data is stored and accessed from a single EBS volume. You've been asked to maximize the durability of this data with the ability to recover from accidental deletion of single files. Which of the following steps can be implemented to best meet the requirements? (Choose two.)

- Migrate the data to instance storage to improve IOPS performance.
- Create a second EBS volume and write all files to both volumes.
- Using AWS Backup, schedule a daily snapshot of the data volume.
- Create a single AMI of the instance.
- Migrate the data to an EBS striped raid filesystem.
###Code
#collapse
answers = '''
- Migrate the data to instance storage to improve IOPS performance.
✔️ Create a second EBS volume and write all files to both volumes.
✔️ Using AWS Backup, schedule a daily snapshot of the data volume.
- Create a single AMI of the instance.
- Migrate the data to an EBS striped raid filesystem.
'''
###Output
_____no_output_____
###Markdown
---

## L6: Performant Architectures > Elastic & Scalable compute

- Scalability: the ability to increase resources to accommodate increased demand (vertically/horizontally).
- Elasticity: the ability to **increase** and **decrease** resources. Automation is implied.

[L6 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_03_06_03)

> An application is currently deployed using AWS Auto Scaling on EC2. The application experiences a steep traffic spike twice per week, but not always at the same time. The spike usually starts within the same 60 minute window. What strategy could be used to optimize for both cost and a good user experience, as the current Auto Scaling configuration is not able to scale fast enough at the start of the traffic spike?

- Configure Scheduled scale-out at the beginning of the hour window on the spike days.
- Increase the minimum instance number to more effectively handle the spikes.
- Write a shell script to execute manual scaling out before the hour window on spike days.
- Configure Predictive Scaling on the Auto Scaling group.
###Code
#collapse
answers = '''
- Configure Scheduled scale-out at the beginning of the hour window on the spike days.
- Increase the minimum instance number to more effectively handle the spikes.
- Write a shell script to execute manual scaling out before the hour window on spike days.
✔️ Configure Predictive Scaling on the Auto Scaling group.
'''
###Output
_____no_output_____
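###Markdown
For reference, the scheduled scale-out option discussed above is configured with a scheduled action on the Auto Scaling group. The sketch below is a minimal, hedged example using boto3; the group name, action name, schedule and sizes are illustrative assumptions rather than values from the course.
```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every Monday and Thursday at 08:00 UTC ahead of an expected spike.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",          # hypothetical ASG name
    ScheduledActionName="pre-spike-scale-out",
    Recurrence="0 8 * * 1,4",                # cron expression, UTC
    MinSize=4,
    MaxSize=20,
    DesiredCapacity=8,
)
```
Predictive Scaling (the correct answer for this question) is instead attached as a scaling policy, letting AWS forecast load from historical CloudWatch metrics rather than relying on a fixed schedule.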
###Markdown
> An application is deployed into an Auto Scaling group for EC2, associated with a Target Group and an Application Load Balancer. The database is a DynamoDB table with on-demand scaling enabled. As traffic increases organically over time, which of the following will need to be reviewed periodically to ensure smooth scaling? (Choose two.)

- DynamoDB table maximum read/write ops
- Auto Scaling Group maximum instances
- DynamoDB table minimum read/write ops
- Auto Scaling Group minimum instances
- Regional EC2 maximum vCPU quota
###Code
#collapse
answers = '''
- DynamoDB table maximum read/write ops
✔️ Auto Scaling Group maximum instances
- DynamoDB table minimum read/write ops
- Auto Scaling Group minimum instances
✔️ Regional EC2 maximum vCPU quota
- _As traffic increases over time, the number of EC2 instances launched into the Auto Scaling group will increase. These will count against the regional EC2 vCPU quota along with the other EC2 instances. Watching this value and comparing it against usage can ensure a smooth scaling experience_
'''
###Output
_____no_output_____
###Markdown
---

## L7: Performant Architectures > High-performing, scalable storage for workloads

- Max data rates (approx. as at Mar 2020), but it is important to benchmark yourself:
  - EBS standard HDD: 90 MiB/s, low IOPS
  - SC1 - EBS Cold HDD: 250 MiB/s, IOPS dependent on size
  - ST1 - EBS Throughput optimised: 500 MiB/s, IOPS dependent on size
  - GP2 - EBS general purpose (default): 128 - 250 MiB/s, up to 16,000 IOPS dependent on size
  - EBS Provisioned IOPS SSD: 1000 MiB/s, 64,000 IOPS
  - Striping multiple volumes together: 3500 Mbps, 160k IOPS
- A hedged boto3 sketch for provisioning an IOPS-heavy volume follows the first question breakdown below.

[L7 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_03_07_06)

> When migrating an on-premises legacy database, an AWS architect must recommend an Oracle database hosting option that supports 32Tb of database storage and handles a sustained load higher than 60,000 IOPS. Which of the following choices should the architect recommend? (Choose two.)

- r5.12xlarge EC2 instance with multiple IOPS EBS volumes configured as a striped RAID.
- r4.16xlarge EC2 instance with multiple PIOPS EBS volumes configured as a striped RAID.
- r4.16xlarge EC2 instance with a single GP2 EBS volume.
- db.r5.24xlarge RDS instance with PIOPS storage.
- db.r5.24xlarge RDS instance with GP2 storage.
###Code
#collapse
answers = '''
- r5.12xlarge EC2 instance with multiple IOPS EBS volumes configured as a striped RAID.
✔️ r4.16xlarge EC2 instance with multiple PIOPS EBS volumes configured as a striped RAID.
- Supports a total of 75,000 IOPS across all EBS volumes
- r4.16xlarge EC2 instance with a single GP2 EBS volume.
✔️ db.r5.24xlarge RDS instance with PIOPS storage.
- Supports up to 80,000 IOPS
- RDS gives you option of setting storage as PIOPS
- db.r5.24xlarge RDS instance with GP2 storage.
'''
###Output
_____no_output_____
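###Markdown
A minimal boto3 sketch of provisioning one of the striped PIOPS volumes discussed above. The AZ, size, IOPS figure and tag are illustrative assumptions; in practice several such volumes would be created and assembled into a RAID 0 stripe on the instance.
```python
import boto3

ec2 = boto3.client("ec2")

# One leg of a striped set: an io1 volume with provisioned IOPS.
volume = ec2.create_volume(
    AvailabilityZone="eu-west-2a",   # hypothetical AZ
    Size=2000,                       # GiB per volume in the stripe
    VolumeType="io1",
    Iops=16000,                      # provisioned IOPS for this volume
    TagSpecifications=[{
        "ResourceType": "volume",
        "Tags": [{"Key": "Name", "Value": "oracle-stripe-1"}],
    }],
)
print(volume["VolumeId"])
```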
###Markdown
> During the peak load every weekday, an MSSQL RDS database becomes overloaded due to heavy read traffic, impacting user request latencies. You've been asked to recommend a solution that improves the user experience and enables easier scaling during future anticipated increased load. Which of the following will best meet the requirements?

- Configure an Elasticache cluster to cache database reads. Query the cache from the application before issuing reads to the database.
- Increase either the RDS storage size or PIOPS to maximum value to improve database performance.
- Upsize the RDS database instance to improve database performance.
- Scale the application tier horizontally to accommodate more concurrent requests.
###Code
#collapse
answers = '''
✔️ Configure an Elasticache cluster to cache database reads. Query the cache from the application before issuing reads to the database.
- Increase either the RDS storage size or PIOPS to maximum value to improve database performance.
- Upsize the RDS database instance to improve database performance.
- Scale the application tier horizontally to accommodate more concurrent requests.
'''
###Output
_____no_output_____
###Markdown
---

## L8: Performant Architectures > High-performing network solutions for a workload

- Consolidate resources into a single AZ to minimise latency and ensure they are in the same colocated data centre with sub-1ms latency (the smallest in AWS).
  - Weigh this up against resilience priorities.
- Enable jumbo frames @ 9000 MTU to ensure the efficiency of TCP traffic (especially large payloads), and know whether data egressing at a gateway will cause the packet to fragment.

[L8 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_03_08_04)

> An application is deployed with Apache Kafka onto a fleet of EC2 instances. There are multiple Kafka topics and multiple partitions per topic. The application requires high performance and low latency. Which of the following recommendations would achieve this? (Choose two.)

- Use an EC2 Spread Placement Group during instance launch.
- Use an EC2 Cluster Placement Group during instance launch.
- Use an EC2 Partition Placement Group during instance launch.
- Configure jumbo frames on the EC2 instances.
- Use EC2 instance types that support Enhanced Networking.
###Code
#collapse
answers = '''
- Use an EC2 Spread Placement Group during instance launch.
- _Designed more for resilience than performance_
- Use an EC2 Cluster Placement Group during instance launch.
- _Not good for resilience or outside communication_
✔️ Use an EC2 Partition Placement Group during instance launch.
- _Good for spreading instances across hardware so instances in one partition don't share underlying hardware with instances in other partitions, and ideal for distributed workloads_
- Configure jumbo frames on the EC2 instances.
✔️ Use EC2 instance types that support Enhanced Networking.
'''
###Output
_____no_output_____
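###Markdown
A minimal boto3 sketch of the partition placement group recommended above. The group name, partition count and launch parameters are illustrative assumptions.
```python
import boto3

ec2 = boto3.client("ec2")

# A partition placement group spreads instances across isolated hardware partitions.
ec2.create_placement_group(
    GroupName="kafka-brokers",   # hypothetical name
    Strategy="partition",
    PartitionCount=3,
)

# Launch a broker into the group (AMI ID and instance type are placeholders).
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="r5.xlarge",
    MinCount=1,
    MaxCount=1,
    Placement={"GroupName": "kafka-brokers"},
)
```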
###Markdown
> You've been asked to design a network and application infrastructure for a three-tier app consisting of the following: load balancer, application servers and database. The application servers must communicate with S3 regularly. What would be your design recommendation, assuming that performance is the highest priority?

- Deploy separate ALB and EC2 Auto Scaling into each AZ. Deploy Multi-AZ RDS, with read replica in the second AZ. S3 communication through a VPC Gateway Endpoint.
- Deploy separate ALB and EC2 Auto Scaling into each AZ. Deploy Aurora multi-master into same two AZ. S3 communication through a VPC Gateway Endpoint.
- Deploy ALB, EC2 and RDS using multi-AZ configuration of each. S3 communication through the Internet Gateway.
- Deploy multi-AZ ALB. Deploy separate EC2 Auto Scaling into each AZ. Deploy multi-AZ RDS with read replica in the second AZ. S3 communication through the Internet Gateway.
###Code
#collapse
answers = '''
- Deploy separate ALB and EC2 Auto Scaling into each AZ. Deploy Multi-AZ RDS, with read replica in the second AZ. S3 communication through a VPC Gateway Endpoint.
✔️ Deploy separate ALB and EC2 Auto Scaling into each AZ. Deploy Aurora multi-master into same two AZ. S3 communication through a VPC Gateway Endpoint.
- Deploy ALB, EC2 and RDS using multi-AZ configuration of each. S3 communication through the Internet Gateway.
- Deploy multi-AZ ALB. Deploy separate EC2 Auto Scaling into each AZ. Deploy multi-AZ RDS with read replica in the second AZ. S3 communication through the Internet Gateway.
'''
###Output
_____no_output_____
###Markdown
---

## L9: Performant Architectures > High-performing databases

[L9 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_03_09_03)

> An HR application back end is running Postgres on an m5.xlarge EC2 instance with a single 100Gb IOPS EBS volume with 2000 provisioned IOPS. During company holidays and other slow times, the database experiences almost zero load. During mid-year and end-of-year reviews, the database gets overloaded during the days. What change would you propose to improve performance during the peaks while optimizing for cost during the slow times?

- Provision the maximum (based on 100Gb) of 5000 IOPS on the EBS volume.
- Migrate the database to RDS Aurora Serverless and provision appropriate min/max ACUs (Aurora Capacity Units) to match the peaks and slow times.
- Migrate the database to RDS and provision read replicas to handle the peak load.
- Resize the instance to m5.4xlarge to increase resources available to Postgres.
###Code
#collapse
answers = '''
- Provision the maximum (based on 100Gb) of 5000 IOPS on the EBS volume.
✔️ Migrate the database to RDS Aurora Serverless and provision appropriate min/max ACUs (Aurora Capacity Units) to match the peaks and slow times.
- _Will address peak and slow periods although min and max may need to be adjusted periodically_
- Migrate the database to RDS and provision read replicas to handle the peak load.
- _This migration may address the peak times but there is no guarantee_
- Resize the instance to m5.4xlarge to increase resources available to Postgres.
'''
###Output
_____no_output_____
###Markdown
> For a new application, a database architect has been asked to design a DynamoDB table that must store persistent session data. The table must be designed for high performance, and a TTL will be configured to expire items when they reach 30 days of age. What partition key choice would lead to the highest performing table that will scale with the size of the table?

- Username
- Session Creation Date
- User Region
- Last Name
###Code
#collapse
answers = '''
✔️ Username
- _A reasonable choice for a partition key as it tends to have an even distribution across a wide range of alphanumeric characters_
- Session Creation Date
- User Region
- Last Name
'''
###Output
_____no_output_____
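###Markdown
A minimal boto3 sketch of the session table discussed above, with `username` as the partition key and a TTL attribute. The table name, attribute names and billing mode are illustrative assumptions.
```python
import boto3

dynamodb = boto3.client("dynamodb")

# Partition key on username gives a wide, evenly distributed key space.
dynamodb.create_table(
    TableName="sessions",
    AttributeDefinitions=[{"AttributeName": "username", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "username", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)

# Once the table is ACTIVE, expire items automatically when the epoch
# timestamp stored in `expires_at` passes.
dynamodb.update_time_to_live(
    TableName="sessions",
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
```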
###Markdown
---

## L10: Secure Architectures > Design Secure access to AWS resources

- Account and user-based access control.
- A Service Control Policy (SCP) specifies boundaries of what can and cannot be done in AWS account(s).
  - Can only be used to deny.
- At the user level, permissions are defined to ALLOW, whereas permission boundaries can be set as the maximum set of permissions allowed regardless of what has been granted in permission policy documents.
- IAM roles are like `sudo` and the most efficient way to grant access and allow temporary permissions.
  - Good for cross-account access, e.g. for consulting.
  - Good for cross-service access.
- Resource-based permissions:
  - e.g. S3 - only applies to a single bucket - _be super aware of the S3 Block Public Access override_
  - e.g. Lambda function access policy
  - e.g. API Gateway resource policy - can force the user to be authenticated before the request is granted.
  - e.g. SNS access policy (for AWS Budgets, say) to individual topics.
- A hedged sketch of an SCP statement follows the first question breakdown below.

[L10 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_04_10_04)

> Which of the following would be an appropriate least-privilege policy addition for an SCP to be applied to all member accounts in an AWS Organization?

- Deny EC2 Termination actions to all users.
- Deny S3 Bucket delete actions to all users.
- Allow administrative permissions to all IT admin.
- Deny Cloudtrail delete actions to all users.
###Code
#collapse
answers = '''
- Deny EC2 Termination actions to all users.
- _a functionality breaker rather than least-privilege_
- Deny S3 Bucket delete actions to all users.
- _a functionality breaker rather than least-privilege_
- Allow administrative permissions to all IT admin.
- _any time a question asks about least-privilege, the answer will not be related to **ALLOW**ing permissions_
✔️ Deny Cloudtrail delete actions to all users.
'''
###Output
_____no_output_____
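###Markdown
The winning SCP above can be expressed as a deny-only policy document. Below is a minimal sketch, built as a Python dict and attached with boto3; the policy name, description and the exact CloudTrail actions chosen are my assumptions, not taken from the course.
```python
import json
import boto3

organizations = boto3.client("organizations")

# Deny anyone in member accounts the ability to delete or stop CloudTrail trails.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:DeleteTrail", "cloudtrail:StopLogging"],
        "Resource": "*",
    }],
}

organizations.create_policy(
    Name="deny-cloudtrail-tampering",           # hypothetical policy name
    Description="Prevent disabling of audit logging",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
```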
###Markdown
> In an AWS account, the following permissions have been configured:
> 1. IAM Policy granting full access to objects in a single S3 bucket
> 2. IAM Permission boundary granting administrative access to EC2
> 3. S3 Bucket policy that denies delete actions on the bucket
>
> Which of the following actions is possible with all of the above permissions in place for a single IAM User?

- Upload a new object to the S3 Bucket.
- Launch an EC2 instance.
- Delete the S3 bucket.
- Resize an EC2 instance.
- None of these are possible.
###Code
#collapse
answers = '''
- Upload a new object to the S3 Bucket.
- _Overridden by the IAM permission boundary which does not permit this action, so it will be denied_
- Launch an EC2 instance.
- Delete the S3 bucket.
- Resize an EC2 instance.
✔️ None of these are possible.
'''
###Output
_____no_output_____
###Markdown
---

## L11: Secure Architectures > Design Secure Application Tiers

- Use NACLs as a block-list between subnets, because acting as an allow-list in both directions can make for overly complex network configuration.
  - e.g. block outbound from a public subnet to a database subnet
  - e.g. block inbound from a database subnet to a public subnet
- Security groups will block all by default, and because security groups are stateful (attached to resources), the rules only need to be created in one direction.
  - _'security groups whitelist application traffic'_
- Gateway endpoints can be used to provide private network access to either S3 or DynamoDB.
- A Virtual Private Gateway can be used as the way into your VPC for outside networks - VPN connections as well as (non-encrypted) Direct Connect.
- Unauthorized requests containing a SQL injection attack (or missing authorisation headers) can be rejected by using a web application firewall.
- Tools for monitoring network & application activity:
  - CloudTrail > CloudWatch Logs > apply alarm - _audit trail of actions taken in the AWS account_
  - EC2 running the CloudWatch Agent
  - AWS Config - _specify rules to monitor resource changes to get notified if resources no longer comply with your security controls_
  - GuardDuty - start a workflow with this tool, which monitors API key usage by generating an ML model of your account's normal behaviour
  - Amazon Macie - monitors sensitive S3 objects
  - CloudWatch Event Bus - a transaction log of important happenings on the account
    - Event Rules watch for happenings
    - Target through to SNS
    - Target through to Lambda if more complex logic is needed

[L11 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_04_11_05)

> Your team supports a Java-based application that uses a JDBC connection to an RDS database running MySQL. The connection string contains hard-coded credentials. You've been asked to improve the security of the database credentials, and must account for a new 30-day password rotation policy on RDS. Which of the following meet the requirements with the least ongoing overhead?

- Move the database credentials to a text file on each instance. Read the text file upon application start. Update the text file on each instance when the password is rotated.
- Move the database credentials to SSM Parameter Store. Read the Parameter upon application start. Update the Parameter when the password is rotated.
- Move the database credentials to AWS Secrets Manager. Read the Secret upon application start. Configure the Secret to rotate automatically.
- Move the database credentials to S3. Download the object upon application start. Update the S3 object when the password is rotated.
###Code
#collapse
answers = '''
- Move the database credentials to a text file on each instance. Read the text file upon application start. Update the text file on each instance when password is rotated.
- Move the database credentials to SSM Parameter Store. Read the Parameter upon application start. Update the Parameter when password is rotated.
✔️ Move the database credentials to AWS Secrets Manager. Read the Secret upon application start. Configure the Secret to rotate automatically.
- Move the database credentials to S3. Download the object upon application start. Update the S3 object when password is rotated.
'''
###Output
_____no_output_____
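###Markdown
A minimal sketch of how the application could read the rotated credentials from Secrets Manager at start-up, assuming a hypothetical secret named `prod/hr/mysql` that stores a JSON username/password pair; rotation itself would be configured on the secret (for RDS, via a managed rotation Lambda).
```python
import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# Fetch the current version of the secret at application start.
response = secretsmanager.get_secret_value(SecretId="prod/hr/mysql")
credentials = json.loads(response["SecretString"])

db_user = credentials["username"]
db_password = credentials["password"]
# Build the JDBC/DB connection from these values instead of hard-coding them.
```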
###Markdown
> As an AWS network architect, you've been asked to design a VPC that must host the following:
> 1. ALB front end
> 2. Docker containers managed by ECS
> 3. RDS Aurora database
>
> Which of the following VPC security strategies would ensure the greatest security control over each of the application tiers?

- All applications in the same public subnets. Isolate workloads via Security Groups.
- Each application in dedicated subnets (ALB - public, ECS - private, RDS - private). Isolate workloads via Security Groups and NACLs.
- ALB and ECS containers in the same public subnets, RDS in dedicated private subnets. Isolate workloads via Security Groups and NACLs.
- ALB in dedicated public subnets, ECS and RDS colocated in the same private subnets. Isolate workloads via Security Groups and NACLs.
###Code
#collapse
answers = '''
- All applications in the same public subnets. Isolate workloads via Security Groups.
✔️ Each application in dedicated subnets (ALB - public, ECS - private, RDS - private). Isolate workloads via Security Groups and NACLs.
- ALB and ECS containers in the same public subnets, RDS in dedicated private subnets. Isolate workloads via Security Groups and NACLs.
- ALB in dedicated public subnets, ECS and RDS colocated in the same private subnets. Isolate workloads via Security Groups and NACLs.
'''
###Output
_____no_output_____
###Markdown
---

## L12: Secure Architectures > Select secure storage

- Securing data at-rest:
  - EBS: only a 1-2% impact on latency
  - EFS: only a 1-2% impact on latency
  - RDS: an option on Aurora
  - Redshift: 20-40% effect on performance
  - S3: enforce SSE with a bucket policy (a hedged boto3 sketch follows the first question breakdown below)
- All use KMS:
  - a shared-tenancy service for sharing master keys
  - region-scoped
- Can always fall back to client-side encryption.
- Securing data in-transit:
  - SSL cert installed on CloudFront
  - SSL cert installed on API Gateway (no HTTP listener)
  - SSL cert installed on ELB
- Fully securing data in-transit end-to-end:
  - VPN Gateway - no guarantees on performance as traffic traverses the internet
  - VPC peering - guaranteed private because it doesn't touch the public internet; uses Amazon's internal links
  - Direct Connect - secure by running a fibre link from your datacentre to a partner's data centre
  - Do It Yourself - VPC -> non-AWS cloud using OpenVPN if instances on either end run the software, but you introduce a SPOF
- Key Management Solutions:
  - KMS - largest integrations
  - CloudHSM - hardware backed
  - AWS Certificate Manager
  - Secrets Manager - secure credential storage

[L12 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_04_12_05)

> A company is building a data lake containing healthcare data that must be properly secured. The data will be stored in S3 using SSE-KMS and accessed by users who will be separated into two groups:
> 1. those that can view PHI (protected health information), and
> 2. those that cannot.
>
> Which of the following strategies will meet the requirements using least privilege techniques and low operational overhead? (Choose two.)

- Tag all S3 buckets and objects to indicate the presence of PHI. Create IAM Policies and S3 bucket policies using conditions based on the tags.
- Create an S3 full-access IAM policy and associate with users requiring PHI access. Create a more restrictive IAM policy for the non-PHI users.
- Tag all IAM users based on PHI access. Test for those tags using IAM Policy and S3 bucket policy conditions for object access and KMS CMK usage.
- Write an application to interface with S3 and implement access using custom code. Create IAM policies and S3 bucket policies to allow access only through the application
###Code
#collapse
answers = '''
✔️ Tag all S3 buckets and objects to indicate the presence of PHI. Create IAM Policies and S3 bucket policies using conditions based on the tags.
- _Tagging alone isn't a security control but may be used as a building block towards least privilege_
- Create an S3 full-access IAM policy and associate with users requiring PHI access. Create a more restrictive IAM policy for the non-PHI users.
- _Any strategy that involves "full" access to any service will struggle to meet a least privilege requirement_
✔️ Tag all IAM users based on PHI access. Test for those tags using IAM Policy and S3 bucket policy conditions for object access and KMS CMK usage.
- _Along with tagging, provides a mechanism for testing users with access rights_
- Write an application to interface with S3 and implement access using custom code. Create IAM policies and S3 bucket policies to allow access only through the application
- _Meets functional requirement but introduces operational overhead and SPOF_
'''
###Output
_____no_output_____
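###Markdown
A minimal sketch of enforcing SSE-KMS as the default encryption on the data-lake bucket, as mentioned in the at-rest notes above. The bucket name and key alias are illustrative assumptions; a bucket policy denying unencrypted `PutObject` calls would typically complement this.
```python
import boto3

s3 = boto3.client("s3")

# Make SSE-KMS the default server-side encryption for every new object.
s3.put_bucket_encryption(
    Bucket="healthcare-data-lake",                        # hypothetical bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/data-lake-key",  # hypothetical CMK alias
            },
        }],
    },
)
```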
###Markdown
> An application has a requirement for end-to-end, in-transit encryption for all web traffic. The architecture will require a load balancer, and the Elastic Load Balancer service is being considered. Which of the load balancer options would meet the application encryption requirement? (Choose two.)

- Classic Load Balancer, SSL listener
- Classic Load Balancer, TCP listener
- Classic Load Balancer, HTTPS listener
- Application Load Balancer
- Network Load Balancer
###Code
#collapse
answers = '''
✔️ Classic Load Balancer, SSL listener
- _The SSL listener does not terminate the connection (because it operates at layer 4) and will preserve the encryption from the client to the backend resource_
- Classic Load Balancer, TCP listener
- _Does not terminate connection but not encrypted so does not meet requirement_
- Classic Load Balancer, HTTPS listener
- _HTTPS listener operates at layer 7 (Application layer) and will terminate the connection before reencrypting so does not meet requirement_
- Application Load Balancer
- _Can only create an HTTPS listener and will terminate and re-encrypt_
✔️ Network Load Balancer
- _Only implements layer 4 listeners so will be sufficient if SSL listener is used_
'''
###Output
_____no_output_____
###Markdown
---

## L13: Cost-Optimised Architectures > Cost-effective storage

| | Cost optimised | Resilience | Performance |
|---|---|---|---|
| EBS Standard | Charged for IOPS | Lower limit on size (1 TB) | Lower IOPS capacity (low x00 IOPS); IOPS not dependent on size |
| EBS SC1 Cold HDD | Appropriate for cold storage data sets | Easy to upsize as data increases (16 TB) | Throughput dependent on size |
| EBS ST1 Throughput Optimised | Appropriate for high throughput datasets | Easy to upsize as data increases (16 TB) | Throughput dependent on size |
| EBS GP2 General Purpose SSD | Appropriate for medium to high IOPS-bound workloads | Easy to upsize as data increases | Throughput dependent on size |
| EBS PIOPS | Charged for provisioned IOPS and throughput | Easy to upsize as data and throughput increases | |
| EFS | Only charged for data used; appropriate for larger data sets and file sizes | File system is elastic, so no need to provision | IOPS/throughput dependent on amount of data |

### Object storage costs

Object access cost increases as you move down the rows:

| | Cost optimise | Resilience | Performance |
|---|---|---|---|
| S3 Standard | Highest storage cost of the S3 storage classes | Highest availability (4 9s) | Appropriate for static website objects |
| S3-IA | Lower storage cost | Lower availability | Appropriate for backups requiring low latency access |
| S3 Intelligent Tiering | Dynamic moving between storage classes, so cost is variable; monitoring & automation charges | Availability according to current storage class | Appropriate for objects with changing access patterns |
| S3 One Zone-IA | Same as S3-IA | Lowest availability | Appropriate for backups with infrequent access and lower availability needs |
| Glacier | | Regular 4 9s of availability | Appropriate for archival with minutes-to-hours latency needs |

A hedged lifecycle-configuration sketch follows the first question breakdown below.

[L13 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_05_13_03)

> After an audit of your company's AWS bill, there is an initiative to reduce costs, and you've been asked to focus on S3 usage. There are tens of millions of large objects spread across many buckets. The usage patterns are varied by bucket and prefix, and are not always predictable. Which of the following cost optimization strategies would be the most appropriate?

- Provision CloudFront distributions using the S3 buckets as origins to reduce the cost of accessing the objects by caching.
- Manually migrate all objects to S3 Infrequent Access to reduce storage costs.
- Create lifecycle policies on the S3 buckets that migrate objects to cheaper storage classes as they age, regardless of usage patterns.
- Migrate objects to the S3 Intelligent-Tiering storage class to automate the optimization of storage costs based on access frequency.
###Code
#collapse
answers = '''
- Provision CloudFront distributions using the S3 buckets as origins to reduce the cost of accessing the objects by caching.
- _CloudFront won't impact actual S3 storage costs_
- Manually migrate all objects to S3 Infrequent Access to reduce storage costs.
- _May make a difference, but if we don't know the access patterns it may balloon access costs_
- Create lifecycle policies on the S3 buckets that migrate objects to cheaper storage classes as they age, regardless of usage patterns.
✔️ Migrate objects to the S3 Intelligent-Tiering storage class to automate the optimization of storage costs based on access frequency
- _Solution that will allow you to account for variability_
'''
###Output
_____no_output_____
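###Markdown
For contrast with the winning answer, a lifecycle rule (the third option above) can be applied with a few lines of boto3. This is a minimal sketch; the bucket name, rule ID and transition timing are illustrative assumptions.
```python
import boto3

s3 = boto3.client("s3")

# Move every object into Intelligent-Tiering 30 days after creation.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-archive-bucket",                  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-all-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},            # apply to the whole bucket
            "Transitions": [{
                "Days": 30,
                "StorageClass": "INTELLIGENT_TIERING",
            }],
        }],
    },
)
```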
###Markdown
> An application has a storage requirement of several terabytes on a single volume. The application owner would like to optimize for cost, and performance is not a priority. The application owner cannot predict the number of IOPS that will be required, but is ok with the drive being throttled as long as cost is top priority. Which EBS volume type would best meet the requirements?- Standard- SC1- ST1- GP2- PIOPS
###Code
#collapse
answers = '''
- Standard
✔️ SC1
- _will only charge you based on volume size_
- ST1
- GP2
- PIOPS
'''
###Output
_____no_output_____
###Markdown
---

## L14: Cost-Optimised Architectures > Cost-effective compute & database

- EC2 pricing (cost ascending):
  - Spot: paying for unused capacity
  - Reserved instances: guaranteed pricing for up to 3 years
  - On Demand instances: pay as you go
  - Dedicated instances: dedicated hardware
  - Dedicated hosts: dedicated host with a single instance type
- On Demand ==> Dedicated Instances = ++PRICE INCREASE++
- On Demand ==> Reserved/Spot mix = --PRICE DECREASE--
- Dedicated Host ==> Dedicated Instance = ?? IT DEPENDS ON UTILISATION ??
- Managed services to reduce operational overhead:
  - Auto Scaling (a hedged mixed-instances sketch follows the first question breakdown below)
  - Elastic Beanstalk
  - ECS on Fargate
  - Lambda

| | Cost optimise | Resilience | Performance |
|---|---|---|---|
| RDS | Pay for provisioned compute resources; pay for provisioned storage resources | Resilience dependent on single node limits | Dependent on single node limits |
| Aurora | Pay for provisioned compute or actual compute; pay for actual storage | Better than RDS/EC2 | Serverless capability enables horizontal scaling |
| Redshift | Only pay for provisioned resources; storage charged according to compute | Much higher than RDS/Aurora | Scales according to number of cluster nodes |
| DynamoDB | Pay for provisioned or actual read/write ops; pay for actual storage | Higher resilience than RDS, Aurora, Redshift | Perf only limited by account quotas; perf limited by partition key choice |
| Elasticache | Pay for provisioned compute resources (in memory) | Memcached: SPOF; Redis: depends on 1 node | Depends on no. of nodes |

[L14 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_05_14_04)

> A new application is being deployed onto EC2 instances, with a requirement for horizontal scaling. The EC2 instance type doesn't need to be static, as long as the instances meet minimum CPU and memory requirements. What would be the lowest cost deployment strategy for the application as well as lowest operational overhead?

- Deploy one Auto Scaling group using single launch template with multiple instance types defined. Specify an appropriate percentage of On Demand instances to maintain resilience.
- Deploy two Auto Scaling groups for On Demand and Spot pricing. Specify baseline maximum instances for On Demand and everything else will be Spot instances, with multiple instance types defined.
- Deploy one steady-state Auto Scaling group with reserved instances for baseline traffic. Deploy a second Auto Scaling group with On Demand instances for variable traffic.
- Deploy one Auto Scaling group using only Spot instances in two AZ to minimize chances of spot price spikes having a cost impact.
###Code
#collapse
answers = '''
✔️ Deploy one Auto Scaling group using single launch template with multiple instance types defined. Specify an appropriate percentage of On Demand instances to maintain resilience.
- _Fewer moving parts; launch templates have the ability to select multiple AZs to maximise resilience_
- Deploy two Auto Scaling groups for On Demand and Spot pricing. Specify baseline maximum instances for On Demand and everything else will be Spot instances, with multiple instance types defined.
- _Functionally correct, but needing to manage both Auto Scaling groups will increase operational overhead_
- Deploy one steady-state Auto Scaling group with reserved instances for baseline traffic. Deploy a second Auto Scaling group with On Demand instances for variable traffic.
- _Reserved instances have to be one instance type, so you lose flexibility_
- Deploy one Auto Scaling group using only Spot instances in two AZ to minimize chances of spot price spikes having a cost impact.
- _Risk that the spot price will go up, and the risk of AWS having to reclaim the machines_
'''
###Output
_____no_output_____
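###Markdown
A minimal boto3 sketch of the winning pattern: one Auto Scaling group, one launch template, several interchangeable instance types and a Spot/On-Demand mix. The group name, template name, instance types, subnets and percentages are illustrative assumptions.
```python
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="app-asg",                       # hypothetical name
    MinSize=2,
    MaxSize=20,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",      # hypothetical subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "app-template",     # hypothetical template
                "Version": "$Latest",
            },
            # Any of these types satisfies the CPU/memory minimum.
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m4.large"},
            ],
        },
        "InstancesDistribution": {
            # Keep a baseline of On Demand capacity, fill the rest with Spot.
            "OnDemandPercentageAboveBaseCapacity": 30,
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```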
###Markdown
> Your company's analytics team has been tasked with processing a large amount of historical data in the shortest time possible, using EC2 instances running custom code. Which EC2 pricing model would be optimal for this job?

- Dedicated Instances
- On Demand
- Spot
- Reserved
###Code
#collapse
answers = '''
- Dedicated Instances
- _will cost more due to region specific surcharge_
- On Demand
✔️ Spot
- _will allow for a much larger cluster and larger instance size for same price as on demand_
- Reserved
- _will require a minimum time obligation_
'''
###Output
_____no_output_____
###Markdown
---

## L15: Cost-Optimised Architectures > Cost-effective network design

- Free resources:
  - VPC (but useless without anything in it)
  - subnets
  - route tables
  - NACLs
  - internet gateway
  - inbound traffic from the internet
  - gateway endpoints (to allow connectivity to S3/DynamoDB) - a hedged boto3 sketch follows the first question breakdown below
  - Elastic Network Interface/ENA/EFA - _but you will be charged for traffic depending on destination_
  - Security groups - _but having many will impact perf_
  - Same-AZ network traffic - unless a public IP is used, in which case traffic will use the public internet and incur costs
  - Less expensive/free: S3 origin => CloudFront => end user. More expensive: S3 => end user
- Charged VPC network resources (_charged per hour_ and _charged based on throughput_):
  - NAT Gateway
  - VPC peering
  - Interface endpoints (services that are not S3)
  - VPC Flow Logs
- Cross-region traffic: you pay just for the traffic itself, including built-in features like S3 cross-region replication.
- All outbound traffic is charged from a region.
- To get data out to users, optimise with CloudFront instead of S3 or an ALB.

[L15 Chad's Question Breakdown](https://learning.oreilly.com/videos/aws-certified-solutions/9780136721246/9780136721246-ACS2_05_15_03)

> Your production network consists of a VPC with public and private subnets. The private subnets (in three Availability Zones) use a single NAT Gateway in the first AZ for outbound access to S3 and the Internet. Network traffic charges have increased and you've been asked to propose network architecture changes that can reduce cost. Which of the following solutions will meet the requirement without compromising network security? (Choose two.)

- Migrate all VPC resources into public subnets and remove the NAT Gateway.
- Deploy an Auto Scaled EC2-based Squid proxy behind an ALB that will replace the NAT Gateway.
- Deploy NAT Gateways into the other two AZs and update route tables accordingly.
- Route all traffic through a Virtual Private Gateway back to the corporate network and use the corporate Internet connection for all outbound traffic.
- Deploy a Gateway VPC Endpoint for S3 and route all private subnet S3 traffic through it.
###Code
#collapse
answers = '''
- Migrate all VPC resources into public subnets and remove the NAT Gateway.
- _will compromise security_
- Deploy an Auto Scaled EC2-based Squid proxy behind an ALB that will replace the NAT Gateway.
- _Replaces NAT Gateway costs with ALB costs; reducing cross-AZ traffic will reduce some costs but not the overall cost_
✔️ Deploy NAT Gateways into the other two AZs and update route tables accordingly.
- Route all traffic through a Virtual Private Gateway back to the corporate network and use corporate Internet connection for all outbound traffic,
- _Traffic charges for a VPG will be higher than those incurred by a NAT Gateway, and it will harm performance by forcing traffic across the VPN_
✔️ Deploy a Gateway VPC Endpoint for S3 and route all private subnet S3 traffic through it.
- _no charge for resource or traffic_
'''
###Output
_____no_output_____
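###Markdown
A minimal boto3 sketch of the Gateway VPC Endpoint answer above, which removes S3 traffic from the NAT Gateway path at no extra charge. The VPC ID, the region in the service name and the route table IDs are illustrative assumptions.
```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoints are free; associated route tables get an S3 prefix-list route.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0abc1234",                            # hypothetical VPC
    ServiceName="com.amazonaws.eu-west-2.s3",        # region-specific S3 service name
    RouteTableIds=["rtb-0aaa1111", "rtb-0bbb2222"],  # private subnet route tables
)
```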
###Markdown
> Your company has deployed a high-bandwidth website that is entirely static content and served directly from S3. The monthly charges are significant and you've been asked to reduce cost if possible. Which of the following strategies would result in lower charges for the site?

- Deploy an ALB with EC2 instances and migrate the content to an EFS volume shared to EC2.
- Deploy a CloudFront distribution which uses the S3 bucket as an origin and migrate DNS to the CloudFront distribution endpoint.
- Replicate the S3 content to multiple regions and configure Route 53 latency-based routing entries to direct traffic to the appropriate region.
- Write a script to migrate all of the static S3 objects to S3-IA storage class.
###Code
#collapse
answers = '''
- Deploy an ALB with EC2 instances and migrate the content to an EFS volume shared to EC2.
✔️ Deploy a CloudFront distribution which uses the S3 bucket as an origin and migrate DNS to the CloudFront distribution endpoint.
- Replicate the S3 content to multiple regions and configure Route 53 latency-based routing entries to direct traffic to the appropriate region.
- Write a script to migrate all of the static S3 objects to S3-IA storage class.
'''
###Output
_____no_output_____
|
Analysis/House Price/Untitled.ipynb
|
###Markdown
Correlation analysis
###Code
train_df.corr()['price'].sort_values()
###Output
_____no_output_____
###Markdown
We can see that `sqft_living`, `grade` and `sqft_above` have strong correlations with price.
###Code
train_df.drop(['id', 'price'], axis=1, inplace=True)
test_df.drop(['id', 'price'], axis=1, inplace=True)
train_df.shape
train_df.dtypes
###Output
_____no_output_____
###Markdown
Feature transformation

`date` is a string type, so we convert it to numerical by:

1. Keeping the first 6 characters (year and month)
2. Using LabelEncoder to convert the categorical values to numerical
###Code
def convert_to_date(date_string: str):
"""
Only keep year and month
"""
# date = date_string[:8]
# return pd.to_datetime(date, format='%Y%m%d', errors='ignore')
return date_string[:6]
train_df.date = train_df.date.apply(convert_to_date)
test_df.date = test_df.date.apply(convert_to_date)
from sklearn.preprocessing import LabelEncoder
label_encoder = LabelEncoder()
train_df.date = label_encoder.fit_transform(train_df.date)
# Note: refitting the encoder on the test set can assign different codes to the same
# year-month than in the training set; using transform() would keep them consistent.
test_df.date = label_encoder.fit_transform(test_df.date)
###Output
_____no_output_____
###Markdown
Normalize columns

Because the value ranges of some columns are very big, they could dominate the training process. We normalize them with MinMaxScaler.
###Code
from sklearn import preprocessing
# Create x: the training feature values as floats
x = train_df.values.astype(float)
# Create a minimum and maximum processor object
min_max_scaler = preprocessing.MinMaxScaler()
# Create an object to transform the data to fit minmax processor
x_scaled = min_max_scaler.fit_transform(x)
x_scaled.shape
# Run the normalizer on the dataframe
train_df_norm = pd.DataFrame(x_scaled, columns=train_df.columns)
train_df_norm.head()
###Output
_____no_output_____
###Markdown
Distribution of price

Let's see the distribution of house prices.
###Code
y_train.hist(xlabelsize=30, ylabelsize=30, bins=120,figsize=(28,15))
y_train.describe()
(y_train>640000).sum()
###Output
_____no_output_____
###Markdown
75% of the house prices are in the range between 0 and 640000.

From the table, we can see the descriptive statistics of the training data.

Skewness

[skewness](https://whatis.techtarget.com/definition/skewness)

The skewness should be about zero for a normal distribution. A skewness value greater than zero means that there is more weight in the right tail of the distribution.
###Code
plt.figure()
qq = stats.probplot(y_train, plot=plt)
plt.show()
print("Skewness: {:.3f}".format(y_train.skew()))
###Output
_____no_output_____
###Markdown
Our data has a positive skewness: there is more weight in the right tail of the price distribution.

Next, we take the log of the price column and see what happens!
###Code
y_train = np.log1p(y_train)
y_test = np.log1p(y_test)
y_train.hist(xlabelsize=30, ylabelsize=30, bins=120,figsize=(28,15))
print("Skewness: {:.3f}".format(y_train.skew()))
###Output
Skewness: 0.419
###Markdown
The distribution is more like a normal distribution than before! Q-Q Plot
###Code
plt.figure()
qq = stats.probplot(y_train, plot=plt)
plt.show()
###Output
_____no_output_____
###Markdown
By taking the log of the price column, the distribution is close to a normal distribution.
###Code
train_df.isnull().sum()
###Output
_____no_output_____
###Markdown
GBM
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn import ensemble
from sklearn import datasets
from sklearn.utils import shuffle
from sklearn.metrics import mean_squared_error
X_train, y_train = train_df.values, y_train.values
X_test, y_test = test_df.values, y_test.values
params = {'n_estimators': 500, 'max_depth': 5, 'min_samples_split': 2,
'learning_rate': 0.01, 'loss': 'ls'}
gb_reg = ensemble.GradientBoostingRegressor(**params)
gb_reg.fit(X_train, y_train)
mse = mean_squared_error(y_test, gb_reg.predict(X_test))
print("MSE: %.4f" % mse)
# #############################################################################
# Plot training deviance
# compute test set deviance
test_score = np.zeros((params['n_estimators'],), dtype=np.float64)
for i, y_pred in enumerate(gb_reg.staged_predict(X_test)):
test_score[i] = gb_reg.loss_(y_test, y_pred)
plt.figure(figsize=(12, 6))
plt.subplot(1, 2, 1)
plt.title('Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, gb_reg.train_score_, 'b-',
label='Training Set Deviance')
plt.plot(np.arange(params['n_estimators']) + 1, test_score, 'r-',
label='Test Set Deviance')
plt.legend(loc='upper right')
plt.xlabel('Boosting Iterations')
plt.ylabel('Deviance')
# #############################################################################
# Plot feature importance
feature_importance = gb_reg.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, train_df.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
###Output
MSE: 0.0242
###Markdown
Random Forest
###Code
from sklearn.ensemble import RandomForestRegressor
params = {'n_estimators': 500, 'max_depth': 8, 'min_samples_split': 2}
rf_reg = RandomForestRegressor(**params)
rf_reg.fit(X_train, y_train)
mse = mean_squared_error(y_test, rf_reg.predict(X_test))
print("MSE: %.4f" % mse)
# #############################################################################
# Plot feature importance
feature_importance = rf_reg.feature_importances_
# make importances relative to max importance
feature_importance = 100.0 * (feature_importance / feature_importance.max())
sorted_idx = np.argsort(feature_importance)
pos = np.arange(sorted_idx.shape[0]) + .5
plt.subplot(1, 2, 2)
plt.barh(pos, feature_importance[sorted_idx], align='center')
plt.yticks(pos, train_df.columns[sorted_idx])
plt.xlabel('Relative Importance')
plt.title('Variable Importance')
plt.show()
###Output
MSE: 0.0308
###Markdown
XGBoost
###Code
import xgboost as xgb
dtrain = xgb.DMatrix(X_train, label=y_train)
dtest = xgb.DMatrix(X_test, label=y_test)
# Use the xgb alias from the import above; xgboost.XGBRegressor would raise a NameError.
best_xgb_model = xgb.XGBRegressor(colsample_bytree=0.4,
                                  gamma=0,
                                  learning_rate=0.07,
                                  max_depth=3,
                                  min_child_weight=1.5,
                                  n_estimators=10000,
                                  reg_alpha=0.75,
                                  reg_lambda=0.45,
                                  subsample=0.6,
                                  seed=42)
# Fit on the same training arrays used for the other models.
best_xgb_model.fit(X_train, y_train)
###Output
_____no_output_____
|
notebooks/trees_ex_02-mw.ipynb
|
###Markdown
📝 Exercise M5.02

The aim of this exercise is to find out whether a decision tree model is able to extrapolate.

By extrapolation, we refer to values predicted by a model outside of the range of feature values seen during the training.

We will first load the regression data.
###Code
import pandas as pd
penguins = pd.read_csv("../datasets/penguins_regression.csv")
data_columns = ["Flipper Length (mm)"]
target_column = "Body Mass (g)"
data_train, target_train = penguins[data_columns], penguins[target_column]
###Output
_____no_output_____
###Markdown
Note: if you want a deeper overview regarding this dataset, you can refer to the Appendix - Datasets description section at the end of this MOOC.

First, create two models, a linear regression model and a decision tree regression model, and fit them on the training data. Limit the depth at 3 levels for the decision tree.
###Code
# Write your code here.
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
linear_regression = LinearRegression()
linear_regression.fit(data_train, target_train)
tree = DecisionTreeRegressor(max_depth=3)
tree.fit(data_train, target_train)
###Output
_____no_output_____
###Markdown
Create a testing dataset, ranging from the minimum to the maximum of the flipper length of the training dataset. Get the predictions of each model using this test dataset.
###Code
# Write your code here.
import numpy as np
data_test = pd.DataFrame(np.arange(data_train[data_columns[0]].min(),
data_train[data_columns[0]].max()),
columns=data_columns)
linear_regression_predicted = linear_regression.predict(data_test)
tree_predicted = tree.predict(data_test)
###Output
_____no_output_____
###Markdown
Create a scatter plot containing the training samples and superimpose the predictions of both models on top.
###Code
# Write your code here.
import seaborn as sns
import matplotlib.pyplot as plt
sns.scatterplot(data=penguins, x="Flipper Length (mm)", y="Body Mass (g)",
color="black", alpha=0.5)
plt.plot(data_test, linear_regression_predicted, label="Linear regression")
plt.plot(data_test, tree_predicted, label="Decision tree")
plt.legend()
_ = plt.title("Prediction of LinearRegression and decision tree")
###Output
_____no_output_____
###Markdown
Now, we will check the extrapolation capabilities of each model. Create a dataset containing the values of your previous dataset. Besides, add values below and above the minimum and the maximum of the flipper length seen during training.
###Code
# Write your code here.
offset = 30
data_extra = pd.DataFrame(np.arange(data_train[data_columns[0]].min() - offset,
data_train[data_columns[0]].max() + offset),
columns=data_columns)
###Output
_____no_output_____
###Markdown
Finally, make predictions with both models on this new testing set. Repeat the plotting of the previous exercise.
###Code
# Write your code here.
linear_regression_predicted_extra = linear_regression.predict(data_extra)
tree_predicted_extra = tree.predict(data_extra)
sns.scatterplot(data=penguins, x="Flipper Length (mm)", y="Body Mass (g)",
color="black", alpha=0.5)
plt.plot(data_extra, linear_regression_predicted_extra, label="Linear regression")
plt.plot(data_extra, tree_predicted_extra, label="Decision tree")
plt.legend()
_ = plt.title("Prediction of LinearRegression and Decision tree")
###Output
_____no_output_____
|
tutorials/flow_1.ipynb
|
###Markdown
###Code
from flows.flows import Flows
flow = Flows(1)
path = './data/flow_1'
files_list = ['train.csv','test.csv']
dataframe_dict, columns_set = flow.load_data(path, files_list)
dataframe_dict, columns_set = flow.encode_categorical_feature(dataframe_dict)
ignore_columns = ['id', 'SalePrice']
dataframe_dict, columns_set = flow.features_encoding("one-hot",
dataframe_dict,
"train",
ignore_columns,
class_number_range=[3, 50])
dataframe_dict, columns_set = flow.scale_data(dataframe_dict, ignore_columns)
import numpy as np
ignore_columns = ["id", "SalePrice"]
columns = columns_set["train"]["categorical_integer"] + columns_set["train"]['continuous']
train_dataframe = dataframe_dict["train"][[x for x in columns if x not in ignore_columns]]
test_dataframe = dataframe_dict["test"][[x for x in columns if x not in ignore_columns]]
train_target = np.log1p(dataframe_dict["train"]["SalePrice"])
parameters = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr":5, # foldnr:5 , "split_ratios": 0.2 # "split_ratios":(0.3,0.2)
},
"model": {"type": "Ridge linear regression",
"hyperparameters": {"alpha": "optimize", # alpha:optimize
},
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters)
parameters_lighgbm = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr": 5, # foldnr:5 , "split_ratios": 0.2 # "split_ratios":(0.3,0.2)
},
"model": {"type": "lightgbm",
"hyperparameters": dict(objective='regression', metric='root_mean_squared_error', num_leaves=5,
boost_from_average=True,
learning_rate=0.05, bagging_fraction=0.99, feature_fraction=0.99, max_depth=-1,
num_rounds=10000, min_data_in_leaf=10, boosting='dart')
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters_lightgbm)
parameters_xgboost = {
"data": {
"train": {"features": train_dataframe, "target": train_target.to_numpy()},
},
"split": {
"method": "kfold", # "method":"kfold"
"fold_nr": 5, # fold_nr:5 , "split_ratios": 0.3 # "split_ratios":(0.3,0.2)
},
"model": {"type": "xgboost",
"hyperparameters": {'max_depth': 5, 'eta': 1, 'eval_metric': "rmse"}
},
"metrics": ["r2_score", "mean_squared_error"],
"predict": {
"test": {"features": test_dataframe}
}
}
model_index_list, save_models_dir, y_test = flow.training(parameters_xgboost)
###Output
_____no_output_____
|
notebooks/tg/mera/general/real/mnist_gt_4.ipynb
|
###Markdown
Imports
###Code
import math
import pandas as pd
import pennylane as qml
import time
from keras.datasets import mnist
from matplotlib import pyplot as plt
from pennylane import numpy as np
from pennylane.templates import AmplitudeEmbedding, AngleEmbedding
from pennylane.templates.subroutines import ArbitraryUnitary
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
###Output
_____no_output_____
###Markdown
Model Params
###Code
np.random.seed(131)
initial_params = np.random.random([66])
INITIALIZATION_METHOD = 'Angle'
BATCH_SIZE = 20
EPOCHS = 400
STEP_SIZE = 0.01
BETA_1 = 0.9
BETA_2 = 0.99
EPSILON = 0.00000001
TRAINING_SIZE = 0.78
VALIDATION_SIZE = 0.07
TEST_SIZE = 1-TRAINING_SIZE-VALIDATION_SIZE
initial_time = time.time()
###Output
_____no_output_____
###Markdown
Import dataset
###Code
(train_X, train_y), (test_X, test_y) = mnist.load_data()
examples = np.append(train_X, test_X, axis=0)
examples = examples.reshape(70000, 28*28)
classes = np.append(train_y, test_y)
x = []
y = []
for (example, label) in zip(examples, classes):
if label in [0, 1, 2, 3]:
x.append(example)
y.append(-1)
else:
x.append(example)
y.append(1)
x = np.array(x)
y = np.array(y)
# Normalize pixels values
x = x / 255
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=TEST_SIZE, shuffle=True)
validation_indexes = np.random.randint(len(X_train), size=(math.floor(len(X_train)*VALIDATION_SIZE),))
X_validation = [X_train[n] for n in validation_indexes]
y_validation = [y_train[n] for n in validation_indexes]
pca = PCA(n_components=8)
pca.fit(X_train)
X_train = pca.transform(X_train)
X_validation = pca.transform(X_validation)
X_test = pca.transform(X_test)
preprocessing_time = time.time()
###Output
_____no_output_____
###Markdown
Circuit creation
###Code
device = qml.device("default.qubit", wires=8)
def unitary(params, wire1, wire2):
# qml.RZ(0, wires=wire1)
qml.RY(params[0], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[1], wires=wire2)
# qml.RZ(0, wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[2], wires=wire2)
qml.CNOT(wires=[wire1, wire2])
qml.RY(params[3], wires=wire2)
qml.CNOT(wires=[wire2, wire1])
# qml.RZ(0, wires=wire1)
qml.RY(params[4], wires=wire1)
# qml.RZ(0, wires=wire1)
# qml.RZ(0, wires=wire2)
qml.RY(params[5], wires=wire2)
# qml.RZ(0, wires=wire2)
@qml.qnode(device)
def circuit(features, params):
# Load state
if INITIALIZATION_METHOD == 'Amplitude':
AmplitudeEmbedding(features=features, wires=range(8), normalize=True, pad_with=0.)
else:
AngleEmbedding(features=features, wires=range(8), rotation='Y')
# First layer
unitary(params[0:6], 1, 2)
unitary(params[6:12], 3, 4)
unitary(params[12:18], 5, 6)
# Second layer
unitary(params[18:24], 0, 1)
unitary(params[24:30], 2, 3)
unitary(params[30:36], 4, 5)
unitary(params[36:42], 6, 7)
# Third layer
unitary(params[42:48], 2, 5)
# Fourth layer
unitary(params[48:54], 1, 2)
unitary(params[54:60], 5, 6)
# Fifth layer
unitary(params[60:66], 2, 5)
# Measurement
return qml.expval(qml.PauliZ(5))
###Output
_____no_output_____
###Markdown
Circuit example
###Code
features = X_train[0]
print(f"Inital parameters: {initial_params}\n")
print(f"Example features: {features}\n")
print(f"Expectation value: {circuit(features, initial_params)}\n")
print(circuit.draw())
###Output
Initial parameters: [0.65015361 0.94810917 0.38802889 0.64129616 0.69051205 0.12660931
0.23946678 0.25415707 0.42644165 0.83900255 0.74503365 0.38067928
0.26169292 0.05333379 0.43689638 0.20897912 0.59441102 0.09890353
0.22409353 0.5842624 0.95908107 0.20988382 0.66133746 0.50261295
0.32029143 0.12506485 0.80688893 0.98696002 0.54304141 0.23132314
0.60351254 0.17669598 0.88653747 0.58902228 0.72117264 0.27567029
0.78811469 0.1326223 0.39971595 0.62982409 0.42404345 0.16187284
0.52034418 0.6070413 0.5808057 0.82111597 0.98499188 0.93449492
0.90305486 0.3380262 0.78324429 0.74373474 0.58058546 0.43266356
0.66792795 0.23668741 0.45173663 0.91999741 0.96687301 0.76905057
0.32671177 0.62283984 0.19160224 0.24832171 0.11683869 0.01032549]
Example features: [ 3.23006689 -3.38480243 1.9747935 0.71663767 -0.18818383 -0.31244166
0.66383923 3.90955675]
Expectation value: 0.07567050612275517
0: ──RY(3.23)────RY(0.224)────────────────────────────────────────────────────────────╭X─────────────╭C─────────────╭X──RY(0.661)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
1: ──RY(-3.38)───RY(0.65)────╭X─────────────╭C─────────────╭X──RY(0.691)───RY(0.584)──╰C──RY(0.959)──╰X──RY(0.21)───╰C──RY(0.503)──RY(0.903)──────────────────────────────────────────────────────────╭X─────────────╭C─────────────╭X──RY(0.581)───────────────────────────────────────────────────────────┤
2: ──RY(1.97)────RY(0.948)───╰C──RY(0.388)──╰X──RY(0.641)──╰C──RY(0.127)───RY(0.32)───╭X─────────────╭C─────────────╭X──RY(0.543)──RY(0.52)───╭X─────────────╭C─────────────╭X──RY(0.985)──RY(0.338)──╰C──RY(0.783)──╰X──RY(0.744)──╰C──RY(0.433)──RY(0.327)──╭X─────────────╭C─────────────╭X──RY(0.117)───┤
3: ──RY(0.717)───RY(0.239)───╭X─────────────╭C─────────────╭X──RY(0.745)───RY(0.125)──╰C──RY(0.807)──╰X──RY(0.987)──╰C──RY(0.231)─────────────│──────────────│──────────────│─────────────────────────────────────────────────────────────────────────────────│──────────────│──────────────│───────────────┤
4: ──RY(-0.188)──RY(0.254)───╰C──RY(0.426)──╰X──RY(0.839)──╰C──RY(0.381)───RY(0.604)──╭X─────────────╭C─────────────╭X──RY(0.721)─────────────│──────────────│──────────────│─────────────────────────────────────────────────────────────────────────────────│──────────────│──────────────│───────────────┤
5: ──RY(-0.312)──RY(0.262)───╭X─────────────╭C─────────────╭X──RY(0.594)───RY(0.177)──╰C──RY(0.887)──╰X──RY(0.589)──╰C──RY(0.276)──RY(0.607)──╰C──RY(0.581)──╰X──RY(0.821)──╰C──RY(0.934)──RY(0.668)──╭X─────────────╭C─────────────╭X──RY(0.967)──RY(0.623)──╰C──RY(0.192)──╰X──RY(0.248)──╰C──RY(0.0103)──┤ ⟨Z⟩
6: ──RY(0.664)───RY(0.0533)──╰C──RY(0.437)──╰X──RY(0.209)──╰C──RY(0.0989)──RY(0.788)──╭X─────────────╭C─────────────╭X──RY(0.424)──RY(0.237)──────────────────────────────────────────────────────────╰C──RY(0.452)──╰X──RY(0.92)───╰C──RY(0.769)───────────────────────────────────────────────────────────┤
7: ──RY(3.91)────RY(0.133)────────────────────────────────────────────────────────────╰C──RY(0.4)────╰X──RY(0.63)───╰C──RY(0.162)───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┤
###Markdown
Accuracy test definition
###Code
def measure_accuracy(x, y, circuit_params):
class_errors = 0
for example, example_class in zip(x, y):
predicted_value = circuit(example, circuit_params)
if (example_class > 0 and predicted_value <= 0) or (example_class <= 0 and predicted_value > 0):
class_errors += 1
return 1 - (class_errors/len(y))
###Output
_____no_output_____
###Markdown
Training
###Code
params = initial_params
opt = qml.AdamOptimizer(stepsize=STEP_SIZE, beta1=BETA_1, beta2=BETA_2, eps=EPSILON)
test_accuracies = []
best_validation_accuracy = 0.0
best_params = []
for i in range(len(X_train)):
features = X_train[i]
expected_value = y_train[i]
def cost(circuit_params):
value = circuit(features, circuit_params)
return ((expected_value - value) ** 2)/len(X_train)
params = opt.step(cost, params)
if i % BATCH_SIZE == 0:
print(f"epoch {i//BATCH_SIZE}")
if i % (10*BATCH_SIZE) == 0:
current_accuracy = measure_accuracy(X_validation, y_validation, params)
test_accuracies.append(current_accuracy)
print(f"accuracy: {current_accuracy}")
if current_accuracy > best_validation_accuracy:
print("best accuracy so far!")
best_validation_accuracy = current_accuracy
best_params = params
if len(test_accuracies) == 30:
print(f"test_accuracies: {test_accuracies}")
if np.allclose(best_validation_accuracy, test_accuracies[0]):
params = best_params
break
del test_accuracies[0]
print("Optimized rotation angles: {}".format(params))
training_time = time.time()
###Output
Optimized rotation angles: [ 0.86408956 -0.09955498 -0.03664271 0.36046791 0.39166864 0.1540102
-0.06655431 -0.06608116 -0.02885667 0.2122525 0.16261558 0.14057704
-0.8162092 1.3430153 1.87622921 1.74410378 0.91654476 0.01226037
-1.38566751 0.28541899 0.69887156 0.5122742 0.66133746 0.38324226
0.34769232 -0.45735322 1.18551382 1.11102504 1.06530332 0.23132314
0.3634103 0.49882973 1.07278223 0.70662337 0.72117264 0.64068389
0.70147153 -0.26709295 0.10022589 0.65922285 0.04805418 0.16187284
1.04260608 0.9720549 0.4052931 0.07287276 0.70212312 2.0895636
0.78368417 0.05515744 -0.75726573 0.81283846 0.58058546 0.38557182
1.82299663 -0.13930186 0.9981214 1.8455721 -0.22671516 0.76905057
0.27962003 -0.57074833 0.67374046 0.92092029 0.11683869 -0.05983815]
###Markdown
Testing
###Code
accuracy = measure_accuracy(X_test, y_test, params)
print(accuracy)
test_time = time.time()
print(f"pre-processing time: {preprocessing_time-initial_time}")
print(f"training time: {training_time - preprocessing_time}")
print(f"test time: {test_time - training_time}")
print(f"total time: {test_time - initial_time}")
###Output
pre-processing time: 10.642351150512695
training time: 7622.2338008880615
test time: 226.1484990119934
total time: 7859.024651050568
|
code/algorithms/course_udemy_1/Algorithm Analysis and Big O/Big O Notation.ipynb
|
###Markdown
Big O Notation In this lecture we will go over how the syntax of Big-O Notation works and how we can describe algorithms using Big-O Notation! We previously discussed the functions below:
###Code
# First function (sums 0 to n with an explicit loop)
def sum1(n):
'''
Take an input of n and return the sum of the numbers from 0 to n
'''
final_sum = 0
for x in range(n+1):
final_sum += x
return final_sum
def sum2(n):
"""
Take an input of n and return the sum of the numbers from 0 to n
"""
return (n*(n+1))/2
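# A quick empirical sanity check of the two approaches (a rough sketch;
# exact timings depend on your machine):
import time

start = time.time()
sum1(10**6)
print('sum1 took {:.5f} seconds'.format(time.time() - start))

start = time.time()
sum2(10**6)
print('sum2 took {:.5f} seconds'.format(time.time() - start))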
###Output
_____no_output_____
###Markdown
Now we want to develop a notation to objectively compare the efficiency of these two algorithms. A good place to start would be to compare the number of assignments each algorithm makes. The original **sum1** function will create an assignment **n+1** times; we can see this from the range-based loop. This means it will assign the final_sum variable n+1 times. We can then say that for a problem of size n (in this case just a number n) this function will take 1+n steps. This **n** notation allows us to compare solutions and algorithms relative to the size of the problem, since sum1(10) and sum1(100000) would take very different times to run but be using the same algorithm. We can also note that as n grows very large, the **+1** won't have much effect. So let's begin discussing how to build a syntax for this notation.
________
Now we will discuss how we can formalize this notation and idea. Big-O notation describes *how quickly runtime will grow relative to the input as the input gets arbitrarily large*. Let's examine some of these points more closely:
* Remember, we want to compare how quickly runtime grows, not compare exact runtimes, since those can vary depending on hardware.
* Since we want to compare for a variety of input sizes, we are only concerned with runtime growth *relative* to the input. This is why we use **n** for notation.
* As n gets arbitrarily large we only worry about the terms that grow the fastest as n gets large; for this reason, Big-O analysis is also known as **asymptotic analysis**.

As for syntax, sum1() can be said to be **O(n)** since its runtime grows linearly with the input size. In the next lecture we will go over more specific examples of various O() types. To conclude this lecture we will show the potential for vast differences in runtimes of Big-O functions.

Runtimes of Common Big-O Functions

Here is a table of common Big-O functions:

| Big-O | Name |
| --- | --- |
| 1 | Constant |
| log(n) | Logarithmic |
| n | Linear |
| n log(n) | Log Linear |
| n^2 | Quadratic |
| n^3 | Cubic |
| 2^n | Exponential |

Now let's plot the runtime versus the Big-O to compare the runtimes. We'll use a simple [matplotlib](http://matplotlib.org/) plot below. (Don't be concerned with how to use matplotlib, that is irrelevant for this part).
###Code
from math import log
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('bmh')
# Set up runtime comparisons
n = np.linspace(1,10,1000)
labels = ['Constant','Logarithmic','Linear','Log Linear','Quadratic','Cubic','Exponential']
big_o = [np.ones(n.shape),np.log(n),n,n*np.log(n),n**2,n**3,2**n]
# Plot setup
plt.figure(figsize=(12,10))
plt.ylim(0,50)
for i in range(len(big_o)):
plt.plot(n,big_o[i],label = labels[i])
plt.legend(loc=0)
plt.ylabel('Relative Runtime')
plt.xlabel('n')
###Output
_____no_output_____
|
chapter 2.ipynb
|
###Markdown
Iris Classification
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
###Output
_____no_output_____
###Markdown
Loading Data
###Code
df = pd.read_csv('iris.data', names = ['sepal length', 'sepal width', 'petal length', 'petal width', 'class'], header = None)
df.head()
df.info()
X = df.iloc[:100,[0,2]].values
y = df.iloc[:100,-1].values
y = np.where(y=='Iris-setosa', 1 , -1)
###Output
_____no_output_____
###Markdown
Model
###Code
class Perceptron:
def __init__(self, learning_rate = 0.02, epoch = 50, random_state = 42):
self.learning_rate = learning_rate
self.epoch = epoch
self.random_state = random_state
def net_input(self,X):
return np.dot(X,self.w_[1:]) + self.w_[0]
def predict(self,X):
return np.where(self.net_input(X) >= 0.0 , 1, -1)
def fit(self, X, y):
random = np.random.RandomState(self.random_state)
self.w_ = random.normal(loc = 0, scale = 0.01, size = 1 + X.shape[1])
self.errors_ = []
for _ in range(self.epoch):
errors = 0
for xi, yi in zip(X,y):
update = self.learning_rate * (yi - self.predict(xi))
self.w_[1:] += update * xi
self.w_[0] += update
errors += int(update!=0.0)
self.errors_.append(errors)
return self
model = Perceptron(epoch = 10)
model.fit(X,y)
plt.plot(range(1,len(model.errors_) + 1), model.errors_, marker = 'o', color = 'green')
plt.xlabel('Epoch')
plt.ylabel('Number of updates')
plt.show()
###Output
_____no_output_____
###Markdown
Visualising our data
###Code
_ = plt.scatter(X[:50,0], X[:50,1], marker = '^', color = 'red', label = 'setosa')
_ = plt.scatter(X[50:,0], X[50:,1], marker = '*', color = 'green', label = 'versicolor')
plt.xlabel('sepal length')
plt.ylabel('petal length')
plt.legend(loc = 'upper left')
plt.show()
from matplotlib.colors import ListedColormap
markers = ('s','x','o','^','v')
colors = ('red','blue','lightgreen', 'gray', 'cyan')
cmap = ListedColormap(colors[:len(np.unique(y))])
markers
cmap
X1_min, X1_max = X[:,0].min() - 1, X[:,0].max() + 1
print(X1_min, X1_max)
X2_min, X2_max = X[:,1].min() - 1, X[:,1].max() + 1
print(X2_min, X2_max)
xx1, xx2 = np.meshgrid(
np.arange(X1_min, X1_max, 0.02),
np.arange(X2_min, X2_max, 0.02)
)
plt.scatter(xx1, xx2, marker = 'o', color = 'green')
xx1
xx2
a = [1,2,3]
b = [4,5,6]
a1, a2 = np.meshgrid(a,b)
a1
a2
Z = model.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
print(Z)
temp = np.array([xx1.ravel(), xx2.ravel()]).T
temp.T
temp
Z = Z.reshape(xx1.shape)
xx1.shape
xx2.shape
Z
plt.contourf(xx1,xx2, Z, cmap = cmap)
plt.xlim(xx1.min(), xx1.max())
plt.ylim(xx2.min(), xx2.max())
plt.scatter(X[:,0], X[:,1], c=y, cmap=cmap)  # color the points by class
X[0,0]
np.unique(y)
for idx, cl in enumerate(np.unique(y)):
print(idx,cl)
for idx, cl in enumerate(np.unique(y)):
plt.scatter(x = X[y == cl, 0],
y = X[y == cl, 1],
alpha = 0.8,
c = colors[idx],
marker = markers[idx],
label = cl,
edgecolor = 'black'
)
X[y==1, 0]
X[y==1,1]
a = np.array([[1,2],[3,4]])
print(a.shape)
b = a.ravel()
print(b.shape)
###Output
(2, 2)
(4,)
###Markdown
Decision plot region
###Code
def decision_plot_region(X, y, classifier , resolution = 0.02):
color = ('red','green','blue', 'yellow','orange', 'pink')
markers = ('o','*','^','v','s')
cmap = ListedColormap(color[:len(np.unique(y))])
x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx1, xx2 = np.meshgrid(
np.arange(x1_min, x1_max, resolution),
np.arange(x2_min, x2_max, resolution)
)
Z = classifier.predict(np.array([xx1.ravel(),xx2.ravel()]).T)
Z = Z.reshape(xx1.shape)
plt.contourf(xx1, xx2, Z, alpha = 0.8, cmap = cmap)
for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x = X[y==cl,0], y = X[y==cl,1], marker = markers[idx], color = color[idx], label = cl, edgecolor = 'black')
return Z
decision_plot_region(X,y, model)
###Output
[[ 1 1 1 ... 1 1 1]
[ 1 1 1 ... 1 1 1]
[ 1 1 1 ... 1 1 1]
...
[-1 -1 -1 ... -1 -1 -1]
[-1 -1 -1 ... -1 -1 -1]
[-1 -1 -1 ... -1 -1 -1]]
###Markdown
Higher Dimensionality Data
###Code
column = ['sepal length', 'sepal width', 'petal length', 'petal width','class']
df = pd.read_csv('iris.data', names = column, header = None)
df.info()
X, y = df.iloc[:,:-1].values, df.iloc[:,-1].values
np.unique(y)
y = np.where(y=='Iris-setosa', -1 , np.where(y=='Iris-versicolor', 0, 1))
from sklearn.preprocessing import StandardScaler
x_ = StandardScaler().fit_transform(X)
from sklearn.decomposition import PCA
pca = PCA(n_components=2)
new_df = pca.fit_transform(x_)
new_df.shape
temp = decision_plot_region(new_df, y, model)
np.unique(temp)
np.unique(y)
model = Perceptron()
model.fit(new_df,y)
temp = decision_plot_region(new_df,y, model)
np.unique(temp)
from sklearn.linear_model import LogisticRegression
model = LogisticRegression()
model.fit(new_df,y)
decision_plot_region(new_df, y, model)
###Output
_____no_output_____
###Markdown
Implementing Adaline in Python The Adaline neural network uses an activation function to update its weights, and the update happens in batches: every sample is considered when making a weight update, unlike the Perceptron, where the weights are updated after each individual sample.
###Code
class AdalineGD(object):
def __init__(self, eta = 0.01, n_iter = 50, random_state = 1):
self.eta = eta
self.n_iter = n_iter
self.random_state = random_state
def fit(self,X,y):
rgen = np.random.RandomState(self.random_state)
self.w_ = rgen.normal(loc = 0.0, scale = 0.01, size = 1 + X.shape[1])
self.cost_ = []
for i in range(self.n_iter):
net_input = self.net_input(X)
output = self.activation(net_input)
errors = (y-output)
self.w_[1:] += self.eta * X.T.dot(errors)
self.w_[0] += self.eta * errors.sum()
cost = (errors**2).sum() / 2.0
self.cost_.append(cost)
return self
def net_input(self,X):
return np.dot(X,self.w_[1:]) + self.w_[0]
def activation(self,X):
return X
def predict(self,X):
        return np.where(self.activation(self.net_input(X)) >= 0.0, 1, -1)
fig, ax = plt.subplots(nrows = 1, ncols = 2, figsize = (10,4))
ada1 = AdalineGD(n_iter = 10, eta = 0.01).fit(X,y)
ax[0].plot(range(1,len(ada1.cost_) + 1), np.log10(ada1.cost_), marker = 'o')
ax[0].set_xlabel('Epochs')
ax[0].set_ylabel('log(sum-squared-error)')
ax[0].set_title('Adaline - Learning rate 0.01')
ada2 = AdalineGD(n_iter = 10, eta = 0.0001).fit(X,y)
ax[1].plot(range(1,len(ada2.cost_) + 1), np.log10(ada2.cost_), marker = 'o')
ax[1].set_xlabel('Epochs')
ax[1].set_ylabel('log(sum-squared-error)')
ax[1].set_title('Adaline - Learning rate 0.0001')
plt.show()
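# A quick sketch: with eta=0.01, Adaline typically converges once the features
# are standardized (mean 0, standard deviation 1), reusing the class defined above.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)
ada_std = AdalineGD(n_iter=15, eta=0.01).fit(X_std, y)
plt.plot(range(1, len(ada_std.cost_) + 1), ada_std.cost_, marker='o')
plt.xlabel('Epochs')
plt.ylabel('Sum-squared-error')
plt.title('Adaline - standardized features, learning rate 0.01')
plt.show()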
a = np.array([[1,2,3],[4,5,6]])
a
a.shape
b = np.array([7,8,9])
b = b.reshape(1,3)
a.dot(b.T)
a
b
a.shape
b.shape
np.matmul(a,b.T)
np.dot(a,b.T)
a = np.array([1,2,3])
b = np.array([[1,2,3],[4,5,6]])
np.dot(b,a)
np.matmul(b,a)
np.dot(a,b)
np.dot(a,2)
###Output
_____no_output_____
|
Codes/UdemyCourseCodes/UPDATED_NLP_COURSE/06-Deep-Learning/01-Text-Generation-with-Neural-Networks.ipynb
|
###Markdown
___ ___ Text Generation with Neural Networks Functions for Processing Text Reading in files as a string text
###Code
def read_file(filepath):
with open(filepath) as f:
str_text = f.read()
return str_text
read_file('moby_dick_four_chapters.txt')
###Output
_____no_output_____
###Markdown
Tokenize and Clean Text
###Code
import spacy
nlp = spacy.load('en',disable=['parser', 'tagger','ner'])
nlp.max_length = 1198623
def separate_punc(doc_text):
return [token.text.lower() for token in nlp(doc_text) if token.text not in '\n\n \n\n\n!"-#$%&()--.*+,-/:;<=>?@[\\]^_`{|}~\t\n ']
d = read_file('melville-moby_dick.txt')
tokens = separate_punc(d)
tokens
len(tokens)
4431/25
###Output
_____no_output_____
###Markdown
Create Sequences of Tokens
###Code
# organize into sequences of tokens
train_len = 25+1 # 25 training words, then one target word
# Empty list of sequences
text_sequences = []
for i in range(train_len, len(tokens)):
# Grab train_len# amount of characters
seq = tokens[i-train_len:i]
# Add to list of sequences
text_sequences.append(seq)
' '.join(text_sequences[0])
' '.join(text_sequences[1])
' '.join(text_sequences[2])
len(text_sequences)
###Output
_____no_output_____
###Markdown
Keras Keras Tokenization
###Code
from keras.preprocessing.text import Tokenizer
# integer encode sequences of words
tokenizer = Tokenizer()
tokenizer.fit_on_texts(text_sequences)
sequences = tokenizer.texts_to_sequences(text_sequences)
sequences[0]
tokenizer.index_word
for i in sequences[0]:
print(f'{i} : {tokenizer.index_word[i]}')
tokenizer.word_counts
vocabulary_size = len(tokenizer.word_counts)
###Output
_____no_output_____
###Markdown
Convert to Numpy Matrix
###Code
import numpy as np
sequences = np.array(sequences)
sequences
###Output
_____no_output_____
###Markdown
Creating an LSTM based model
###Code
import keras
from keras.models import Sequential
from keras.layers import Dense,LSTM,Embedding
def create_model(vocabulary_size, seq_len):
model = Sequential()
model.add(Embedding(vocabulary_size, 25, input_length=seq_len))
model.add(LSTM(150, return_sequences=True))
model.add(LSTM(150))
model.add(Dense(150, activation='relu'))
model.add(Dense(vocabulary_size, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
return model
###Output
_____no_output_____
###Markdown
Train / Test Split
###Code
from keras.utils import to_categorical
sequences
# First 49 words
sequences[:,:-1]
# last Word
sequences[:,-1]
X = sequences[:,:-1]
y = sequences[:,-1]
y = to_categorical(y, num_classes=vocabulary_size+1)
seq_len = X.shape[1]
seq_len
###Output
_____no_output_____
###Markdown
Training the Model
###Code
# define model
model = create_model(vocabulary_size+1, seq_len)
###Output
_____no_output_____
###Markdown
-------
###Code
from pickle import dump,load
# fit model
model.fit(X, y, batch_size=128, epochs=300,verbose=1)
# save the model to file
model.save('epochBIG.h5')
# save the tokenizer
dump(tokenizer, open('epochBIG', 'wb'))
###Output
_____no_output_____
###Markdown
Generating New Text
###Code
from random import randint
from pickle import load
from keras.models import load_model
from keras.preprocessing.sequence import pad_sequences
def generate_text(model, tokenizer, seq_len, seed_text, num_gen_words):
'''
INPUTS:
model : model that was trained on text data
tokenizer : tokenizer that was fit on text data
seq_len : length of training sequence
seed_text : raw string text to serve as the seed
num_gen_words : number of words to be generated by model
'''
# Final Output
output_text = []
# Intial Seed Sequence
input_text = seed_text
# Create num_gen_words
for i in range(num_gen_words):
# Take the input text string and encode it to a sequence
encoded_text = tokenizer.texts_to_sequences([input_text])[0]
# Pad sequences to our trained rate (50 words in the video)
pad_encoded = pad_sequences([encoded_text], maxlen=seq_len, truncating='pre')
# Predict Class Probabilities for each word
pred_word_ind = model.predict_classes(pad_encoded, verbose=0)[0]
# Grab word
pred_word = tokenizer.index_word[pred_word_ind]
# Update the sequence of input text (shifting one over with the new word)
input_text += ' ' + pred_word
output_text.append(pred_word)
# Make it look like a sentence.
return ' '.join(output_text)
###Output
_____no_output_____
###Markdown
Grab a random seed sequence
###Code
text_sequences[0]
import random
random.seed(101)
random_pick = random.randint(0,len(text_sequences))
random_seed_text = text_sequences[random_pick]
random_seed_text
seed_text = ' '.join(random_seed_text)
seed_text
generate_text(model,tokenizer,seq_len,seed_text=seed_text,num_gen_words=50)
###Output
_____no_output_____
###Markdown
Exploring Generated Sequence
###Code
full_text = read_file('moby_dick_four_chapters.txt')
for i,word in enumerate(full_text.split()):
if word == 'inkling':
print(' '.join(full_text.split()[i-20:i+20]))
print('\n')
###Output
_____no_output_____
|
DecisionTreeClassification/Decision_Trees_Classification.ipynb
|
###Markdown
Decision Tree Algorithm 👨🏻💻--- SKlearn implementation--- `Imports`
###Code
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
###Output
_____no_output_____
###Markdown
`Importing Dataset` Next, we import the dataset from the CSV file to the Pandas dataframes.
###Code
col = [ 'Class Name','Left weight','Left distance','Right weight','Right distance']
df = pd.read_csv('/content/balance-scale.data',names=col,sep=',')
df.head()
###Output
_____no_output_____
###Markdown
`Information About Dataset` We can get the overall information of our data set by using the df.info function. From the output, we can see that it has 625 records with 5 fields.
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 625 entries, 0 to 624
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Class Name 625 non-null object
1 Left weight 625 non-null int64
2 Left distance 625 non-null int64
3 Right weight 625 non-null int64
4 Right distance 625 non-null int64
dtypes: int64(4), object(1)
memory usage: 24.5+ KB
###Markdown
`Exploratory Data Analysis (EDA)` Let us do a bit of exploratory data analysis to understand our dataset better. We have plotted the classes by using countplot function. We can see in the figure given below that most of the classes names fall under the labels R and L which means Right and Left respectively. Very few data fall under B, which stands for balanced.
###Code
sns.countplot(df['Class Name'])
sns.countplot(df['Left weight'],hue=df['Class Name'])
sns.countplot(df['Right weight'],hue=df['Class Name'])
###Output
/usr/local/lib/python3.7/dist-packages/seaborn/_decorators.py:43: FutureWarning: Pass the following variable as a keyword arg: x. From version 0.12, the only valid positional argument will be `data`, and passing other arguments without an explicit keyword will result in an error or misinterpretation.
FutureWarning
###Markdown
`Splitting the Dataset in Train-Test` Before feeding the data into the model we first split it into train and test data using the train_test_split function.
###Code
from sklearn.model_selection import train_test_split
X = df.drop('Class Name',axis=1)
y = df[['Class Name']]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3,random_state=42)
###Output
_____no_output_____
###Markdown
`Training the Decision Tree Classifier` We have used the Gini index as our attribute selection method for the training of decision tree classifier with Sklearn function DecisionTreeClassifier().We have created the decision tree classifier by passing other parameters such as random state, max_depth, and min_sample_leaf to DecisionTreeClassifier().Finally, we do the training process by using the model.fit() method.
###Code
from sklearn.tree import DecisionTreeClassifier
# defult gini
clf_model = DecisionTreeClassifier(criterion="gini", random_state=42,max_depth=3, min_samples_leaf=5)
clf_model.fit(X_train,y_train)
###Output
_____no_output_____
###Markdown
`Test Accuracy` We will now test accuracy by using the classifier on test data. For this we first use the model.predict function and pass X_test as attributes.
###Code
y_predict = clf_model.predict(X_test)
###Output
_____no_output_____
###Markdown
Next, we use the accuracy_score function of Sklearn to calculate the accuracy. We can see that we are getting a pretty good accuracy of 78.6% on our test data.
###Code
from sklearn.metrics import accuracy_score,classification_report,confusion_matrix
accuracy_score(y_test,y_predict)
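# confusion_matrix and classification_report were imported above but not used;
# a quick sketch of the per-class breakdown for the same predictions:
print(confusion_matrix(y_test.values.ravel(), y_predict))
print(classification_report(y_test.values.ravel(), y_predict))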
###Output
_____no_output_____
###Markdown
`Plotting Decision Tree` We can plot our decision tree with the help of the Graphviz library and passing after a bunch of parameters such as classifier model, target values, and the features name of our data.
###Code
target = list(df['Class Name'].unique())
feature_names = list(X.columns)
from sklearn import tree
import graphviz
dot_data = tree.export_graphviz(clf_model,
out_file=None,
feature_names=feature_names,
class_names=target,
filled=True, rounded=True,
special_characters=True)
graph = graphviz.Source(dot_data)
graph
###Output
_____no_output_____
|
Optimization_attempt_3.ipynb
|
###Markdown
Preprocessing
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df.drop(['EIN', 'NAME'], axis=1, inplace=True)
application_df.head()
# Determine the number of unique values in each column.
application_df.apply(lambda col: len(col.unique()))
# Look at ASK_AMT value counts for binning
ask_amounts = application_df['ASK_AMT'].value_counts()
ask_amounts
# Choose a cutoff value and create a list of ASK_AMT values to be replaced
# use the variable name `class_replace`
class_replace=list(ask_amounts[ask_amounts < 20000].index)
# Replace in dataframe
for cls in class_replace:
application_df['ASK_AMT'] = application_df['ASK_AMT'].replace(cls,"Not Standard")
# Check to make sure binning was successful
application_df['ASK_AMT'].value_counts()
# Look at APPLICATION_TYPE value counts for binning
application_counts = application_df['APPLICATION_TYPE'].value_counts()
application_counts
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = list(application_counts[application_counts < 100].index)
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
application_df.head()
# Look at CLASSIFICATION value counts for binning
classification_count = application_df['CLASSIFICATION'].value_counts()
classification_count
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
class_replace=list(classification_count[classification_count < 10].index)
# Replace in dataframe
for cls in class_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
# Convert categorical data to numeric with `pd.get_dummies`
dummy_df = pd.get_dummies(application_df)
dummy_df.head()
# Split our preprocessed data into our features and target arrays
y = dummy_df['IS_SUCCESSFUL']
X = dummy_df.drop('IS_SUCCESSFUL', axis=1)
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
X_train_scaled.shape
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Define the model - deep neural net, i.e., the number of input features and hidden nodes for each layer.
# YOUR CODE GOES HERE
number_input_features = len(X_train_scaled[0])
hidden_nodes_layer1 = 8
hidden_nodes_layer2 = 5
nn = tf.keras.models.Sequential()
# First hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer1, input_dim=number_input_features, activation="relu"))
# Second hidden layer
nn.add(tf.keras.layers.Dense(units=hidden_nodes_layer2, activation="relu"))
# Output layer
nn.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Check the structure of the model
nn.summary()
# Compile the model
nn.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
X_train_scaled.shape
# Train the model
fit_model = nn.fit(X_train_scaled, y_train, epochs=100)
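# A quick sketch of how the training accuracy evolved over the epochs
# (assumes the "accuracy" metric name set in compile above).
import matplotlib.pyplot as plt
history_df = pd.DataFrame(fit_model.history)
history_df.plot(y="accuracy", title="Training accuracy per epoch")
plt.xlabel("Epoch")
plt.show()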
# Evaluate the model using the test data
model_loss, model_accuracy = nn.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
nn.save("charity_attempt_3.h5")
###Output
_____no_output_____
|
others/Data_Challenge1/Caitlin_Monaghan_Employee_Retention.ipynb
|
###Markdown
Salary differences between those who quit vs stay exist within certain departments: Data Science and Engineering
###Code
ax = df.groupby(['dept', 'quit'])['salary'].mean().plot(kind='bar', figsize=(8,5), title='Salary by employment status across departments')
###Output
_____no_output_____
###Markdown
Not due to more senior members within those departments:
###Code
c = df['seniority'][df['quit']==0]
d = df['seniority'][df['quit']==1]
print('Seniority differences: \n')
print('mean of non-quitters: \n' + str(round(c.mean(), 3)))
print('mean of quitters: \n' + str(round(d.mean(), 3)))
###Output
Seniority differences:
mean of non-quitters:
14.123
mean of quitters:
14.119
###Markdown
Noticeable discrepancy between salaries for those who quit vs those who don't, at the senior level
###Code
#fig, ax = plt.subplots(figsize=(8,6))
ax = df[df['quit']==0].groupby(['seniority'])['salary'].mean().plot.line(label='Employed', legend=True, title='Salary across seniority between employment statuses')
ax = df[df['quit']==1].groupby(['seniority'])['salary'].mean().plot.line(label='Quit', legend=True, ax=ax)
###Output
_____no_output_____
###Markdown
Paying employees based on average salary of individuals who have not left could save money in the long run
###Code
# calculate numbers for salaries of those who quit vs not
cols4 = ['salary','dept','senior_cat']
df_employed = df[cols4][df['quit']==0].groupby(by=['dept','senior_cat']).mean()
df_employed = df_employed.rename({'salary': 'salary_employed'}, axis='columns')
df_quit = df[cols4][df['quit']==1].groupby(by=['dept','senior_cat']).mean()
df_quit = df_quit.rename({'salary': 'salary_quit'}, axis='columns')
# add counts for each group
df_employed['n_employed'] = df[cols4][df['quit']==0].groupby(by=['dept','senior_cat']).count()
df_quit['n_quit'] = df[cols4][df['quit']==1].groupby(by=['dept','senior_cat']).count()
df_info = pd.concat([df_employed, df_quit], sort=True, axis=1)
df_info['salary_diff'] = df_info['salary_employed'] - df_info['salary_quit']
df_info['payroll_change'] = df_info['n_quit'] * df_info['salary_diff']
# calculate what changing salaries would cost overall
payroll_cost = df_info['payroll_change'].sum()
replace_cost = (df_info['n_quit'].sum())*100000
print('New payroll cost: \n' + '$' + str('{:,}'.format(int(payroll_cost))))
print('\n')
print('Current replacement cost (conservatively at $100k/person): \n' + '$' + str('{:,}'.format(round(replace_cost,2))))
print('\n')
print('Savings: \n' + '$' + str('{:,}'.format(int(replace_cost - payroll_cost))))
# one hot encode categorical variables and combine into a dataframe
from sklearn.preprocessing import LabelEncoder
from sklearn.preprocessing import OneHotEncoder
from numpy import argmax
label_encoder = LabelEncoder()
integer_encoded = label_encoder.fit_transform(df['dept'])
onehot_encoder = OneHotEncoder(sparse=False, categories='auto')
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])])
temp_dept = pd.DataFrame(onehot_encoded, columns = label_encoder.classes_)
integer_encoded = label_encoder.fit_transform(df['senior_cat'])
onehot_encoder = OneHotEncoder(sparse=False, categories='auto')
integer_encoded = integer_encoded.reshape(len(integer_encoded), 1)
onehot_encoded = onehot_encoder.fit_transform(integer_encoded)
inverted = label_encoder.inverse_transform([argmax(onehot_encoded[0, :])])
temp2 = pd.DataFrame(onehot_encoded, columns = label_encoder.classes_)
df_onehot = pd.concat([df, temp2],sort=True, axis=1)
df_onehot2 = pd.concat([df_onehot, temp_dept], sort=True, axis=1)
# focus on certain variables for modeling
model_cols = ['entry', 'mid', 'senior', 'quit', 'days_employed',
'salary', 'customer_service', 'data_science', 'design',
'engineer', 'marketing', 'sales']
df_model = df_onehot2[model_cols]
xcols = ['salary', 'customer_service', 'data_science', 'design',
'engineer', 'marketing', 'sales', 'entry', 'mid', 'senior']
#xcols = ['dept_num','senior_ord','salary']
ycol = ['quit']
y = np.ravel(df_model[ycol])
X = df_model[xcols]
# split into test/train groups
# normalize values for logistic regression coefficient interpretability
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(
StandardScaler().fit_transform(X), y, test_size=0.33)
# X, y, test_size=0.33, random_state=0)
from sklearn.linear_model import LogisticRegression
clfLR = LogisticRegression(solver='lbfgs').fit(X_train,y_train)
print('Accuracy: ' + str(round(clfLR.score(X_test,y_test),4)))
print('Training accuracy: ' + str(round(clfLR.score(X_train,y_train), 4)))
from sklearn.metrics import r2_score
y_pred = clfLR.predict(X_test)
print('R-squared: ' + str(round(r2_score(y_test, y_pred),4)))
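# Because the features were standardized, the logistic regression coefficients are
# roughly comparable in magnitude; a quick sketch to inspect them:
coef_df = pd.DataFrame({'feature': xcols, 'coefficient': clfLR.coef_[0]})
print(coef_df.sort_values('coefficient', ascending=False))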
from sklearn.ensemble import RandomForestClassifier
clfRF = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
print('Accuracy: ' + str(round(clfRF.score(X_test,y_test),4)))
print('Training Accuracy: ' + str(round(clfRF.score(X_train,y_train),4)))
###Output
Accuracy: 0.5363
Training Accuracy: 0.6306
|
08_apples_and_bananas/apples.ipynb
|
###Markdown
Apples and Bananas Write a program that will substitute all the vowels in a given text with a single vowel (default "a")
###Code
.\apples.ps1 'The quick brown fox jumps over the lazy dog.'
###Output
_____no_output_____
###Markdown
The argument may name a file in which case you should read the contents of that file. In addition the -vowel command line argument can be passed to override the default character (a)
###Code
.\apples.ps1 ..\inputFiles\fox.txt -vowel u
###Output
Thu quuck bruwn fux jumps uvur thu luzy dug.
|
youtube_EAS12_schedutil_iowaitboost_off_bigsoff.ipynb
|
###Markdown
YouTube energy comparison for turning off iowait boost. Test: Run YouTube video for 30 seconds, and collect energy 15 times (total test time 7.5 minutes). Wifi was turned off and video played back with YouTube Red.
###Code
%pylab inline
import pandas as pd
import sqlite3
import matplotlib.cm as cm
import os, json
from collections import namedtuple
# Provide the root path where your test folders are stored
results_dir = '/home/joelaf/repo/lisa/results/wifi-off/'
# Provide the names of the results folders you want compared
all_test_dirs = [
"yt_schedutil_energy_1.2_30s_run1",
"yt_schedutil_energy_1.2_30s_noiowaitboost_run2",
"yt_schedutil_energy_1.2_bigsoff"
]
###Output
Populating the interactive namespace from numpy and matplotlib
###Markdown
Plot histograms of energy consumed for tests
###Code
# Plot a box plot
fig, axes = plt.subplots()
df_all = []
for test in all_test_dirs:
test_dir = results_dir + "/" + test
with open(test_dir + "/energy_all_runs.json") as f:
samples = json.load(f)['energy_samples']
df = pd.DataFrame(samples, columns=[test[3:]])
print df.describe()
df_all.append(df)
df_box = pd.concat(df_all, axis=1)
axes = df_box.plot.box(figsize=(10, 6), ax=axes, ylim=(7.4,8.2), title="Box plot comparing energy samples")
# Plot a histogram of energy values collected
def plot_energy(test):
test_dir = results_dir + "/" + test
with open(test_dir + "/energy_all_runs.json") as f:
samples = json.load(f)['energy_samples']
df = pd.DataFrame(samples, columns=['energy'])
fig, axes = plt.subplots()
# print axes
df.plot(kind='hist', bins=32, xlim=(6,10), title=test, figsize=(16,5), ax=axes)
for t in all_test_dirs:
plot_energy(t)
###Output
schedutil_energy_1.2_30s_run1
count 15.000000
mean 8.042533
std 0.039394
min 7.966584
25% 8.014455
50% 8.051294
75% 8.058296
max 8.110917
schedutil_energy_1.2_30s_noiowaitboost_run2
count 15.000000
mean 7.948377
std 0.039497
min 7.897061
25% 7.910292
50% 7.957902
75% 7.977662
max 8.005220
schedutil_energy_1.2_bigsoff
count 15.000000
mean 7.580664
std 0.037787
min 7.532140
25% 7.547024
50% 7.573458
75% 7.611389
max 7.641877
|
cs229/Naive Bayes.ipynb
|
###Markdown
CS229: Naive Bayes In this notebook we implement the Naive Bayes algorithm described in Lecture 5 for text classification and test it on a public dataset of SMS messages.
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn import cross_validation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics import accuracy_score, precision_score, recall_score, precision_recall_curve, roc_curve
messages = pd.read_csv('SMSSpamCollection.tsv', sep='\t', header=None, names=['label', 'text'])
messages.iloc[0].text
cv = CountVectorizer()
X = cv.fit_transform(messages[['text']].as_matrix().ravel()).todense()
y = (messages[['label']] == 'spam').as_matrix().ravel().astype(int)
X_example = cv.transform(['crazy crazy how']).todense()
X_example[0].max()
X_train, X_test, y_train, y_test = cross_validation.train_test_split(X, y, test_size=0.3)
p_spam = np.sum(y_train) / y_train.shape[0]
p_ham = 1 - p_spam
# From X_train, choose only those rows (messages) that are labeled as spam.
spam_messages = X_train[y_train.astype(bool)]
# For each word (column), sum over all rows.
spam_counts = np.sum(spam_messages, axis=0)
p_words_spam = np.ravel((spam_counts + 1) / (spam_counts.sum() + 2))
spam_counts.shape
ham_messages = X_train[np.logical_not(y_train.astype(bool))]
ham_counts = np.sum(ham_messages, axis=0)
p_words_ham = np.ravel((ham_counts + 1) / (ham_counts.sum() + 2))
def predict(msg):
msg = np.ravel((msg != 0))
    p_x_spam = np.prod(p_words_spam[msg])  # likelihood P(x | spam)
    p_x_ham = np.prod(p_words_ham[msg])    # likelihood P(x | ham)
    p_x = p_x_spam * p_spam + p_x_ham * p_ham
    p_is_spam = p_x_spam * p_spam / p_x
    return p_is_spam
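# For long messages the product of many small probabilities can underflow to 0;
# an equivalent log-space version of the same rule (a sketch, not used below):
def predict_log(msg):
    msg = np.ravel((msg != 0))
    log_spam = np.sum(np.log(p_words_spam[msg])) + np.log(p_spam)
    log_ham = np.sum(np.log(p_words_ham[msg])) + np.log(p_ham)
    # normalise in log space to recover P(spam | x)
    m = max(log_spam, log_ham)
    return np.exp(log_spam - m) / (np.exp(log_spam - m) + np.exp(log_ham - m))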
y_pred = np.apply_along_axis(predict, 1, X_test)
precision, recall, thresholds = precision_recall_curve(y_test, y_pred)
plt.figure()
ax = plt.subplot(111)
plt.xlabel('threshold')
plt.plot(thresholds, precision[:-1], label='precision')
plt.plot(thresholds, recall[:-1], label='recall')
ax.legend(bbox_to_anchor=(1.0, 0.8))
plt.show()
fpr, tpr, thresholds = roc_curve(y_test, y_pred)
plt.figure()
ax = plt.subplot(111)
plt.xlabel('False positive rate')
plt.ylabel('True positive rate')
plt.plot(fpr, tpr)
plt.show()
def is_spam(msg):
if predict(msg) > 0.2:
return 1
else:
return 0
y_pred = np.apply_along_axis(is_spam, 1, X_test)
print("Spam precision: {0:.1f}%".format(precision_score(y_pred, y_test) * 100))
print("Spam recall: {0:.1f}%".format(recall_score(y_pred, y_test) * 100))
from sklearn.metrics import matthews_corrcoef
matthews_corrcoef(y_pred, y_test)
def is_spam_text(text):
x = np.ravel(cv.transform([text]).todense())
return predict(x)
###Output
_____no_output_____
|
doc/source/examples/15DynamicNuclearPolarisation.ipynb
|
###Markdown
Dynamic Nuclear Polarisation/Changing repetition count during runtime This example demonstrates how to change the repetition count of pulses during runtime. One possible application of changing parameters during runtime is dynamic nuclear polarisation. We will call parameters which are able to change after program creation volatile. Since this example is meant to illustrate how the concept of changing the values of volatile parameters works, we will use simple example pulses. First we have to connect to the AWG (if you want to run this cell, set `awg_name` and possibly `awg_address` according to the AWG you are using).
###Code
from qupulse.hardware.setup import HardwareSetup
from doc.source.examples.hardware.zhinst import add_to_hardware_setup
from doc.source.examples.hardware.tabor import add_tabor_to_hardware_setup
awg_name = 'TABOR'
awg_address = None
hardware_setup = HardwareSetup()
if awg_name == 'ZI':
hdawg, channel_pairs = add_to_hardware_setup(hardware_setup, awg_address, name=awg_name)
used_awg = hdawg.channel_pair_AB
elif awg_name == 'TABOR':
teawg, channel_pairs = add_tabor_to_hardware_setup(hardware_setup, tabor_address=awg_address, name=awg_name)
used_awg = channel_pairs[0]
else:
ValueError('Unknown AWG')
###Output
_____no_output_____
###Markdown
As a next step we create our dnp pulse template, with three different pumping schemes: 'minus', 'zero' and 'plus'. In reality these could for example be t-, s- and cs-pumping pulses.
###Code
from qupulse.pulses import PointPT, RepetitionPT
zero = PointPT([(0, 0), ('t_quant', 0)], ('X', 'Y'))
minus = PointPT([(0, '-x'), ('t_quant', '-x')], ('X', 'Y'))
plus = PointPT([(0, 'x'), ('t_quant', 'x')], ('X', 'Y'))
dnp = RepetitionPT(minus, 'n_minus') @ RepetitionPT(zero, 'n_zero') @ RepetitionPT(plus, 'n_plus')
###Output
_____no_output_____
###Markdown
On program creation, we set the parameters and channel mappings of the program as usual. However, we want to be able to change how often we repeat each of the pulses dynamically. For that we have to say at program creation which of the parameters are supposed to change during runtime, using the keyword `volatile`.
###Code
sample_rate = used_awg.sample_rate / 10**9
n_quant = 192
t_quant = n_quant / sample_rate
dnp_prog = dnp.create_program(parameters=dict(t_quant=float(t_quant), n_minus=3, n_zero=3, n_plus=3, x=0.25),
channel_mapping={'X': '{}_A'.format(awg_name), 'Y': '{}_B'.format(awg_name)},
volatile={'n_minus', 'n_zero', 'n_plus'})
dnp_prog.cleanup()
###Output
_____no_output_____
###Markdown
Now we can upload our program to the AWG and use it as usual.
###Code
hardware_setup.register_program('dnp', dnp_prog)
hardware_setup.arm_program('dnp')
used_awg.run_current_program()
print(used_awg._known_programs['dnp'].program.program)
###Output
LOOP 1 times:
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x00000000093D6948> 3 times
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x0000000005174888> 3 times
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x00000000093E3708> 3 times
###Markdown
As expected our pumping pulses are executed 3 times each. We can now adjust the repetitions of the pulses by simply using the function `update_parameters`. We need to give `update_parameters` the name of the program we want to change and the values to which we want to set certain parameters. Say, next time we run the program we only want to do one zero pulse but 5 plus pulses instead of 3. Then we can simply do:
###Code
hardware_setup.update_parameters('dnp', dict(n_zero=1, n_plus=5))
###Output
_____no_output_____
###Markdown
This changes the program in the AWG and the program memory accordingly such that next time we run the program the AWG will output 3 minus, 1 zero and 5 plus pulses.
###Code
used_awg.run_current_program()
print(used_awg._known_programs['dnp'].program.program)
###Output
LOOP 1 times:
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x00000000093D6948> 3 times
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x0000000005174888> 1 times
->EXEC <qupulse._program.waveforms.MultiChannelWaveform object at 0x00000000093E3708> 5 times
|
tutorials/drug_target_interaction_tutorial.ipynb
|
###Markdown
Predicting drug-target interaction In this tutorial, we will go through how to run a GraphDTA model for compound-protein affinity prediction. In particular, we will demonstrate how to train, evaluate and run inference with the GraphDTA model using the scripts in the folder `apps/drug_target_interaction/graph_dta/`.
###Code
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), "..")))
os.chdir('../apps/drug_target_interaction/graph_dta/')
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
Prepare dataset Download the Davis dataset using `wget`. If you do not have `wget` on your machine, you could also copy the url below into your web browser to download the data. But remember to copy the data manually to the path "../apps/drug_target_interaction/graph_dta/".
###Code
# download and decompress the data
!wget "https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz" --no-check-certificate
!tar -zxf "PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz"
!ls "./davis/processed"
###Output
--2020-12-17 19:27:53-- https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz
Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 10.70.0.165
Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|10.70.0.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23301615 (22M) [application/gzip]
Saving to: 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz'
PaddleHelix%2Fdatas 100%[===================>]  22.22M  6.47MB/s    in 4.7s
2020-12-17 19:27:58 (4.72 MB/s) - 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz' saved [23301615/23301615]
test  train
###Markdown
Suppose you have downloaded the processed Davis dataset; please refer to the script `data_gen.py` for the implementation of the `DTADataset` class, which is a stream dataset wrapper for [PGL](https://github.com/PaddlePaddle/PGL).
###Code
from data_gen import DTADataset
###Output
[INFO] 2020-12-17 19:28:04,139 [mp_reader.py: 23]: ujson not install, fail back to use json instead
###Markdown
For the protein sequences, there are two ways to process them and get the inputs:
* cut or add padding to get protein sequences with a fixed length, i.e. setting a `max_protein_len` > 0.
* use the full protein sequence, i.e. setting a `max_protein_len` < 0.
###Code
train_data = './davis/processed/train'
test_data = './davis/processed/test'
max_protein_len = 1000 # set -1 to use full sequence
train_dataset = DTADataset(train_data, max_protein_len=max_protein_len)
test_dataset = DTADataset(test_data, max_protein_len=max_protein_len)
print(len(train_dataset), len(test_dataset))
###Output
25046 5010
###Markdown
Create the model In this tutorial, we take the GIN network as an example.
###Code
import paddle
import paddle.fluid as fluid
from model import DTAModel
paddle.enable_static()
###Output
_____no_output_____
###Markdown
`model_config` shows the hyperparameters for the whole network architecture. In particular, `model_config['compound']` is the configuration for the GNN model of compounds, and `model_config['protein']` is the configuration for the sequence convolution-based protein representation module.
###Code
lr = 0.0005 # learning rate
model_config = {
"compound": {
"gnn_type": "gin", # type of the GNN
"dropout_rate": 0.2,# dropout rate for the GNN
"embed_dim": 32, # embedding size of atom type
"layer_num": 5, # number of GNN layers
"hidden_size": 32, # hidden size of GNN layers
"output_dim": 128 # the dimension of representation of compound graph
},
"protein": {
"max_protein_len": max_protein_len, # set -1 to use full sequence
"embed_dim": 128, # embedding size of amino acid
"num_filters": 32, # num of filters of the sequence convolution
"output_dim": 128 # the the dimension of representation of target protein
},
"dropout_rate": 0.2 # dropout rate for the affinity predictor
}
###Output
_____no_output_____
###Markdown
Create the main program, startup program, and test program with the static-graph model `DTAModel` and the Adam optimizer. For the details of `DTAModel`, please check `model.py`. Basically, it implements the network architecture shown in the above figure.
###Code
train_program, train_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(train_program, train_startup):
with fluid.unique_name.guard():
model = DTAModel(model_config=model_config)
model.train()
test_program = train_program.clone(for_test=True)
optimizer = fluid.optimizer.Adam(learning_rate=lr)
optimizer.minimize(model.loss)
###Output
_____no_output_____
###Markdown
Train and evaluate
###Code
import shutil
import numpy as np
from pgl.utils.data.dataloader import Dataloader
from data_gen import DTACollateFunc
from utils import concordance_index
max_epoch = 2 # we use a small epoch number as demonstration
batch_size = 512 # batch size for training
num_workers = 4 # number of workers for the PGL dataloader
best_model = 'gin_best_model' # directory to save the best model, i.e. with the minimum MSE
eval_txt = 'eval.txt' # the text file to record the evaluation metric
###Output
_____no_output_____
###Markdown
Create a Paddle Executor. Note that if you want to run on GPU, use `place = fluid.cuda_places()[0]` instead.
###Code
# place = fluid.cuda_places()[0]
place = fluid.CPUPlace()
exe = fluid.Executor(place)
###Output
_____no_output_____
###Markdown
In the `train()` function, we create a `DTACollateFunc`, which wraps a batch of processed compound data into a batch of graph data `pgl.graph.MultiGraph` in PGL; together with the protein input data, it helps to organize the full feed dictionary. You can check the data preparation in the inference section to understand how it works.
###Code
def train(exe, train_program, model, train_dataset):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
train_dataset,
batch_size=batch_size,
num_workers=num_workers,
stream_shuffle_size=1000,
collate_fn=collate_fn)
list_loss = []
for feed_dict in data_loader:
train_loss, = exe.run(
train_program, feed=feed_dict, fetch_list=[model.loss], return_numpy=False)
list_loss.append(np.array(train_loss).mean())
return np.mean(list_loss)
###Output
_____no_output_____
###Markdown
In the `evaluate()` function, we utilize MSE and the Concordance Index (CI) to evaluate the model. However, since computing the ranking-based metric CI is time-consuming, we pass in the smallest MSE so far (`best_mse`) to compare with the current MSE, so we can avoid some unnecessary computation of CI.
###Code
def evaluate(exe, test_program, model, test_dataset, best_mse):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
test_dataset,
batch_size=batch_size,
num_workers=1,
collate_fn=collate_fn)
total_n, processed = len(test_dataset), 0
total_pred, total_label = [], []
for idx, feed_dict in enumerate(data_loader):
print('Evaluated {}/{}'.format(processed, total_n))
pred, = exe.run(
test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=False)
total_pred.append(np.array(pred))
total_label.append(feed_dict['label'])
processed += total_pred[-1].shape[0]
print('Evaluated {}/{}'.format(processed, total_n))
total_pred = np.concatenate(total_pred, 0).flatten()
total_label = np.concatenate(total_label, 0).flatten()
mse = ((total_label - total_pred) ** 2).mean(axis=0)
ci = None
if mse < best_mse:
ci = concordance_index(total_label, total_pred)
return mse, ci
###Output
_____no_output_____
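###Markdown
The `concordance_index` helper comes from `utils.py`. Conceptually, CI is the fraction of label-comparable pairs whose predictions are ordered the same way as their labels. A naive O(n²) sketch, for intuition only (not the actual implementation in `utils.py`):
###Code
def naive_concordance_index(y_true, y_pred):
    """Fraction of pairs with different labels whose predictions agree in order (ties count 0.5)."""
    num, den = 0.0, 0.0
    n = len(y_true)
    for i in range(n):
        for j in range(i + 1, n):
            if y_true[i] == y_true[j]:
                continue  # pairs with equal labels are not comparable
            den += 1
            d_true = y_true[i] - y_true[j]
            d_pred = y_pred[i] - y_pred[j]
            if d_true * d_pred > 0:
                num += 1        # prediction order matches label order
            elif d_pred == 0:
                num += 0.5      # tied predictions count as half
    return num / den if den > 0 else 0.0
###Output
_____no_output_____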
###Markdown
The training and evaluation pipeline: for each epoch, train and then evaluate the model; if it achieves a smaller MSE on the test dataset, save the best model and update the evaluation metrics.
###Code
exe.run(train_startup)
best_mse, best_ci, best_ep = np.inf, 0, 0
for epoch_id in range(1, max_epoch + 1):
print('========== Epoch {} =========='.format(epoch_id))
train_loss = train(exe, train_program, model, train_dataset)
print('#Epoch: {}, Train loss: {}'.format(epoch_id, train_loss))
mse, ci = evaluate(exe, test_program, model, test_dataset, best_mse)
if mse < best_mse:
best_mse, best_ci, best_ep = mse, ci, epoch_id
if os.path.exists(best_model):
shutil.rmtree(best_model)
fluid.io.save_params(exe, best_model, train_program)
metric = 'Epoch: {}, Best MSE: {}, Best CI: {}'.format(epoch_id, best_mse, best_ci)
print(metric)
with open(eval_txt, 'w') as f:
f.write(metric)
else:
print('No improvement in epoch {}'.format(epoch_id))
metric = open(os.path.join(eval_txt), 'r').read()
print('===== Current best:\n{}'.format(metric))
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
`eval.txt` and folder `gin_best_model` are saved after training. Inference
###Code
import pgl
from rdkit import Chem
from pahelix.utils.compound_tools import smiles_to_graph_data
from pahelix.utils.protein_tools import ProteinTokenizer
protein_example = 'MENKKKDKDKSDDRMARPSGRSGHNTRGTGSSSSGVLMVGPNFRVGKKIGCGNFGELRLGKNLYTNEYVAIKLEPMKSRAPQLHLEYRFYKQLGSGDGIPQVYYFGPCGKYNAMVLELLGPSLEDLFDLCDRTFSLKTVLMIAIQLISRMEYVHSKNLIYRDVKPENFLIGRPGNKTQQVIHIIDFGLAKEYIDPETKKHIPYREHKSLTGTARYMSINTHLGKEQSRRDDLEALGHMFMYFLRGSLPWQGLKADTLKERYQKIGDTKRATPIEVLCENFPEMATYLRYVRRLDFFEKPDYDYLRKLFTDLFDRKGYMFDYEYDWIGKQLPTPVGAVQQDPALSSNREAHQHRDKMQQSKNQSADHRAAWDSQQANPHHLRAHLAADRHGGSVQVVSSTNGELNTDDPTAGRSNAPITAPTEVEVMDETKCCCFFKRRKRKTIQRHK'
drug_example = 'CCN1C2=C(C=CC(=C2)OC)SC1=CC(=O)C'
len(protein_example)
isomeric_smiles = Chem.MolToSmiles(Chem.MolFromSmiles(drug_example), isomericSmiles=True)
compound_graph = smiles_to_graph_data(isomeric_smiles)
isomeric_smiles
###Output
_____no_output_____
###Markdown
Create a protein tokenizer which converts an amino acid sequence into token IDs, ready for the embedding layer.
###Code
tokenizer = ProteinTokenizer()
protein_seq = tokenizer.gen_token_ids(protein_example)
len(protein_seq)
###Output
_____no_output_____
###Markdown
Add padding to, or truncate, the protein sequence when using a fixed maximum protein length.
###Code
protein_seq = np.array(protein_seq, dtype=np.int64)
if max_protein_len > 0:
protein_token_ids = np.zeros(max_protein_len) + ProteinTokenizer.padding_token_id
n = min(max_protein_len, len(protein_seq))
protein_token_ids[:n] = np.array(protein_seq)[:n]
protein_seq = protein_token_ids
len(protein_seq)
###Output
_____no_output_____
###Markdown
Create the `feed_dict` for the compound graph. Note that GraphDTA takes atom characteristics such as the number of directly-bonded neighbors (**degrees**), the number of sigma electrons excluding electrons bonded to hydrogens (**Hs**), the number of hydrogens implicitly bonded to an atom (**implicit valence**), and whether the atom is **aromatic**. These four characteristics are treated as numeric features. Together with the other features used by Pretrain GNNs, we can represent the input graph using the PGL APIs `pgl.graph.Graph` and `pgl.graph.MultiGraph`.
###Code
atom_numeric_feat = np.concatenate([
compound_graph['atom_degrees'],
compound_graph['atom_Hs'],
compound_graph['atom_implicit_valence'],
compound_graph['atom_is_aromatic'].reshape([-1, 1])
], axis=1).astype(np.float32)
g = pgl.graph.Graph(
num_nodes = len(compound_graph['atom_type']),
edges = compound_graph['edges'],
node_feat = {
'atom_type': compound_graph['atom_type'].reshape([-1, 1]),
'chirality_tag': compound_graph['chirality_tag'].reshape([-1, 1]),
'atom_numeric_feat': atom_numeric_feat
},
edge_feat = {
'bond_type': compound_graph['bond_type'].reshape([-1, 1]),
'bond_direction': compound_graph['bond_direction'].reshape([-1, 1])
})
join_graph = pgl.graph.MultiGraph([g])
feed_dict = model.compound_graph_wrapper.to_feed(join_graph)
###Output
_____no_output_____
###Markdown
Update the `feed_dict` for the protein sequence. Notice that the `label` input is just a placeholder; without it, the static graph would not run.
###Code
protein_token = [protein_seq]
protein_length = [0, protein_seq.size]
feed_dict['protein_token'] = np.concatenate(protein_token).reshape([-1, 1]).astype('int64')
feed_dict['protein_token_lod'] = np.add.accumulate(protein_length).reshape([1, -1]).astype('int32')
feed_dict['label'] = np.array([[1.0]]).astype(np.float32) # just a placeholder
pred, = exe.run(test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=True)
###Output
_____no_output_____
###Markdown
Predicted Kd value:
###Code
pred[0][0]
###Output
_____no_output_____
###Markdown
Predicting drug-target interaction In this tutorial, we will go through how to run a GraphDTA model for compound-protein affinity prediction. In particular, we will demonstrate how to train, evaluate and run inference with the GraphDTA model using the scripts in the folder `apps/drug_target_interaction/graph_dta/`. GraphDTA **GraphDTA** represents compound drugs as graphs and uses graph neural networks to predict drug-target affinity. Specifically, the graph is converted from SMILES using RDKit and passed through variants of graph neural networks to extract its representation. For the protein, the amino acid sequence is first embedded into an array of vectors, then a sequence convolution is applied to get the protein representation. Finally, the combined representations of the compound drug and the protein are fed into a feedforward network to regress the affinity measurement, such as Kd, Ki, KIBA, etc.  The code for GraphDTA is in `../apps/drug_target_interaction/graph_dta/`; we will change to this folder for the later steps.
###Code
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), "..")))
os.chdir('../apps/drug_target_interaction/graph_dta/')
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
Prepare dataset Download the Davis dataset using `wget`.
###Code
# download and decompress the data
!wget "https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz"
!tar -zxf "PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz"
!ls "./davis/processed"
###Output
--2020-12-16 16:24:35-- https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz
Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 10.70.0.165
Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|10.70.0.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23301615 (22M) [application/gzip]
Saving to: 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz.1'
PaddleHelix%2Fdatas 100%[===================>]  22.22M  4.65MB/s    in 5.7s
2020-12-16 16:24:41 (3.87 MB/s) - 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz.1' saved [23301615/23301615]
test   train
###Markdown
Once you have downloaded the processed Davis dataset, please refer to the script `data_gen.py` for the implementation of the `DTADataset` class, which is a streaming dataset wrapper for [PGL](https://github.com/PaddlePaddle/PGL).
###Code
from data_gen import DTADataset
###Output
[INFO] 2020-12-16 16:24:46,122 [mp_reader.py: 23]: ujson not install, fail back to use json instead
###Markdown
For the protein sequences, there are two ways to process them and get the inputs:* truncate or pad to get protein sequences with a fixed length, i.e. set `max_protein_len` > 0.* use the full protein sequence, i.e. set `max_protein_len` < 0.
###Code
train_data = './davis/processed/train'
test_data = './davis/processed/test'
max_protein_len = 1000 # set -1 to use full sequence
train_dataset = DTADataset(train_data, max_protein_len=max_protein_len)
test_dataset = DTADataset(test_data, max_protein_len=max_protein_len)
print(len(train_dataset), len(test_dataset))
###Output
25046 5010
###Markdown
Create the model In this tutorial, we take the GIN network as an example.
###Code
import paddle.fluid as fluid
from model import DTAModel
###Output
_____no_output_____
###Markdown
`model_config` holds the hyperparameters for the whole network architecture. In particular, `model_config['compound']` is the configuration for the GNN model of compounds, and `model_config['protein']` is the configuration for the sequence convolution-based protein representation module. A toy sketch of how the top-level affinity predictor could combine the two representations follows the configuration cell below.
###Code
lr = 0.0005 # learning rate
model_config = {
"compound": {
"gnn_type": "gin", # type of the GNN
"dropout_rate": 0.2,# dropout rate for the GNN
"embed_dim": 32, # embedding size of atom type
"layer_num": 5, # number of GNN layers
"hidden_size": 32, # hidden size of GNN layers
"output_dim": 128 # the dimension of representation of compound graph
},
"protein": {
"max_protein_len": max_protein_len, # set -1 to use full sequence
"embed_dim": 128, # embedding size of amino acid
"num_filters": 32, # num of filters of the sequence convolution
"output_dim": 128 # the the dimension of representation of target protein
},
"dropout_rate": 0.2 # dropout rate for the affinity predictor
}
###Output
_____no_output_____
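###Markdown
The actual layers live in `model.py`; purely for intuition, here is a hypothetical sketch of how an affinity predictor with this top-level `dropout_rate` could combine the two fixed-size representations. The layer sizes and names below are illustrative assumptions, not the real implementation.
###Code
def toy_affinity_head(compound_repr, protein_repr, dropout_rate=0.2):
    """Concatenate the compound and protein representations and regress the affinity (sketch only)."""
    joint = fluid.layers.concat([compound_repr, protein_repr], axis=1)
    h = fluid.layers.fc(joint, size=512, act='relu')
    h = fluid.layers.dropout(h, dropout_prob=dropout_rate)
    h = fluid.layers.fc(h, size=256, act='relu')
    h = fluid.layers.dropout(h, dropout_prob=dropout_rate)
    return fluid.layers.fc(h, size=1, act=None)   # predicted affinity value
###Output
_____no_output_____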
###Markdown
Create the main program, startup program, and test program with the static model `DTAModel` and the Adam optimizer. For the details of `DTAModel`, please check `model.py`. Basically, it implements the network architecture shown in the figure above.
###Code
train_program, train_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(train_program, train_startup):
with fluid.unique_name.guard():
model = DTAModel(model_config=model_config)
model.train()
test_program = train_program.clone(for_test=True)
optimizer = fluid.optimizer.Adam(learning_rate=lr)
optimizer.minimize(model.loss)
###Output
_____no_output_____
###Markdown
Train and evaluate
###Code
import shutil
import numpy as np
from pgl.utils.data.dataloader import Dataloader
from data_gen import DTACollateFunc
from utils import concordance_index
max_epoch = 2 # we use a small epoch number as demonstration
batch_size = 512 # batch size for training
num_workers = 4 # number of workers for the PGL dataloader
best_model = 'gin_best_model' # directory to save the best model, i.e. with the minimum MSE
eval_txt = 'eval.txt' # the text file to record the evaluation metric
###Output
_____no_output_____
###Markdown
Create a Paddle Executor. Note that we use GPU if there is any GPU card available.
###Code
has_cuda = len(fluid.cuda_places()) > 0
place = fluid.cuda_places()[0] if has_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)
place
###Output
_____no_output_____
###Markdown
In the `train()` function, we create a `DTACollateFunc` which wraps a batch of processed samples into a batch of graph data (`pgl.graph.MultiGraph` in PGL); together with the protein input data, it helps organize the full feed dictionary. You can check the data preparation in the inference section to understand how it works.
###Code
def train(exe, train_program, model, train_dataset):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
train_dataset,
batch_size=batch_size,
num_workers=num_workers,
stream_shuffle_size=1000,
collate_fn=collate_fn)
list_loss = []
for feed_dict in data_loader:
train_loss, = exe.run(
train_program, feed=feed_dict, fetch_list=[model.loss], return_numpy=False)
list_loss.append(np.array(train_loss).mean())
return np.mean(list_loss)
###Output
_____no_output_____
###Markdown
In the `evaluate()` function, we use MSE and the Concordance Index (CI) to evaluate the model. However, computing the ranking-based metric CI is time-consuming, so we pass in the previously smallest MSE (`best_mse`) and compare it with the current MSE, which lets us skip unnecessary CI computations.
###Code
def evaluate(exe, test_program, model, test_dataset, best_mse):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
test_dataset,
batch_size=batch_size,
num_workers=1,
collate_fn=collate_fn)
total_n, processed = len(test_dataset), 0
total_pred, total_label = [], []
for idx, feed_dict in enumerate(data_loader):
print('Evaluated {}/{}'.format(processed, total_n))
pred, = exe.run(
test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=False)
total_pred.append(np.array(pred))
total_label.append(feed_dict['label'])
processed += total_pred[-1].shape[0]
print('Evaluated {}/{}'.format(processed, total_n))
total_pred = np.concatenate(total_pred, 0).flatten()
total_label = np.concatenate(total_label, 0).flatten()
mse = ((total_label - total_pred) ** 2).mean(axis=0)
ci = None
if mse < best_mse:
ci = concordance_index(total_label, total_pred)
return mse, ci
###Output
_____no_output_____
###Markdown
The training and evaluation pipeline: for each epoch, train and then evaluate the model; if it achieves a smaller MSE on the test dataset, save the best model and update the evaluation metrics.
###Code
exe.run(train_startup)
best_mse, best_ci, best_ep = np.inf, 0, 0
for epoch_id in range(1, max_epoch + 1):
print('========== Epoch {} =========='.format(epoch_id))
train_loss = train(exe, train_program, model, train_dataset)
print('#Epoch: {}, Train loss: {}'.format(epoch_id, train_loss))
mse, ci = evaluate(exe, test_program, model, test_dataset, best_mse)
if mse < best_mse:
best_mse, best_ci, best_ep = mse, ci, epoch_id
if os.path.exists(best_model):
shutil.rmtree(best_model)
fluid.io.save_params(exe, best_model, train_program)
metric = 'Epoch: {}, Best MSE: {}, Best CI: {}'.format(epoch_id, best_mse, best_ci)
print(metric)
with open(eval_txt, 'w') as f:
f.write(metric)
else:
print('No improvement in epoch {}'.format(epoch_id))
metric = open(os.path.join(eval_txt), 'r').read()
print('===== Current best:\n{}'.format(metric))
os.listdir(os.getcwd())
###Output
_____no_output_____
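###Markdown
If you later need to run inference in a fresh session, the parameters saved by `fluid.io.save_params` above can be restored into the rebuilt `test_program`; a minimal sketch, assuming the programs are constructed exactly as before:
###Code
if os.path.exists(best_model):
    # load the best parameters back into the (cloned) test program
    fluid.io.load_params(exe, best_model, main_program=test_program)
###Output
_____no_output_____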
###Markdown
`eval.txt` and folder `gin_best_model` are saved after training. Inference
###Code
import pgl
from rdkit import Chem
from pahelix.utils.compound_tools import smiles_to_graph_data
from pahelix.utils.protein_tools import ProteinTokenizer
protein_example = 'MENKKKDKDKSDDRMARPSGRSGHNTRGTGSSSSGVLMVGPNFRVGKKIGCGNFGELRLGKNLYTNEYVAIKLEPMKSRAPQLHLEYRFYKQLGSGDGIPQVYYFGPCGKYNAMVLELLGPSLEDLFDLCDRTFSLKTVLMIAIQLISRMEYVHSKNLIYRDVKPENFLIGRPGNKTQQVIHIIDFGLAKEYIDPETKKHIPYREHKSLTGTARYMSINTHLGKEQSRRDDLEALGHMFMYFLRGSLPWQGLKADTLKERYQKIGDTKRATPIEVLCENFPEMATYLRYVRRLDFFEKPDYDYLRKLFTDLFDRKGYMFDYEYDWIGKQLPTPVGAVQQDPALSSNREAHQHRDKMQQSKNQSADHRAAWDSQQANPHHLRAHLAADRHGGSVQVVSSTNGELNTDDPTAGRSNAPITAPTEVEVMDETKCCCFFKRRKRKTIQRHK'
drug_example = 'CCN1C2=C(C=CC(=C2)OC)SC1=CC(=O)C'
len(protein_example)
isomeric_smiles = Chem.MolToSmiles(Chem.MolFromSmiles(drug_example), isomericSmiles=True)
compound_graph = smiles_to_graph_data(isomeric_smiles)
isomeric_smiles
###Output
_____no_output_____
###Markdown
Create a protein tokenizer which converts an amino acid sequence into token IDs, ready for the embedding layer.
###Code
tokenizer = ProteinTokenizer()
protein_seq = tokenizer.gen_token_ids(protein_example)
len(protein_seq)
###Output
_____no_output_____
###Markdown
Add padding to, or truncate, the protein sequence when using a fixed maximum protein length.
###Code
protein_seq = np.array(protein_seq, dtype=np.int64)
if max_protein_len > 0:
protein_token_ids = np.zeros(max_protein_len) + ProteinTokenizer.padding_token_ID
n = min(max_protein_len, len(protein_seq))
protein_token_ids[:n] = np.array(protein_seq)[:n]
protein_seq = protein_token_ids
len(protein_seq)
###Output
_____no_output_____
###Markdown
Create the `feed_dict` for the compound graph. Note that GraphDTA takes atom characteristics such as the number of directly-bonded neighbors (**degrees**), the number of sigma electrons excluding electrons bonded to hydrogens (**Hs**), the number of hydrogens implicitly bonded to an atom (**implicit valence**), and whether the atom is **aromatic**. These four characteristics are treated as numeric features. Together with the other features used by Pretrain GNNs, we can represent the input graph using the PGL APIs `pgl.graph.Graph` and `pgl.graph.MultiGraph`.
###Code
atom_numeric_feat = np.concatenate([
compound_graph['atom_degrees'],
compound_graph['atom_Hs'],
compound_graph['atom_implicit_valence'],
compound_graph['atom_is_aromatic'].reshape([-1, 1])
], axis=1).astype(np.float32)
g = pgl.graph.Graph(
num_nodes = len(compound_graph['atom_type']),
edges = compound_graph['edges'],
node_feat = {
'atom_type': compound_graph['atom_type'].reshape([-1, 1]),
'chirality_tag': compound_graph['chirality_tag'].reshape([-1, 1]),
'atom_numeric_feat': atom_numeric_feat
},
edge_feat = {
'bond_type': compound_graph['bond_type'].reshape([-1, 1]),
'bond_direction': compound_graph['bond_direction'].reshape([-1, 1])
})
join_graph = pgl.graph.MultiGraph([g])
feed_dict = model.compound_graph_wrapper.to_feed(join_graph)
###Output
_____no_output_____
###Markdown
Update the `feed_dict` for the protein sequence. Notice that the `label` input is just a placeholder; without it, the static graph would not run.
###Code
protein_token = [protein_seq]
protein_length = [0, protein_seq.size]
feed_dict['protein_token'] = np.concatenate(protein_token).reshape([-1, 1]).astype('int64')
feed_dict['protein_token_lod'] = np.add.accumulate(protein_length).reshape([1, -1]).astype('int32')
feed_dict['label'] = np.array([[1.0]]).astype(np.float32) # just a placeholder
pred, = exe.run(test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=True)
###Output
_____no_output_____
###Markdown
Predicted Kd value:
###Code
pred[0][0]
###Output
_____no_output_____
###Markdown
Predicting drug-target interaction In this tutorial, we will go through how to run a GraphDTA model for compound-protein affinity prediction. In particular, we will demonstrate how to train, evaluate and run inference with the GraphDTA model using the scripts in the folder `apps/drug_target_interaction/graph_dta/`. GraphDTA **GraphDTA** represents compound drugs as graphs and uses graph neural networks to predict drug-target affinity. Specifically, the graph is converted from SMILES using RDKit and passed through variants of graph neural networks to extract its representation. For the protein, the amino acid sequence is first embedded into an array of vectors, then a sequence convolution is applied to get the protein representation. Finally, the combined representations of the compound drug and the protein are fed into a feedforward network to regress the affinity measurement, such as Kd, Ki, KIBA, etc.  The code for GraphDTA is in `../apps/drug_target_interaction/graph_dta/`; we will change to this folder for the later steps.
###Code
import os
import sys
sys.path.insert(0, os.path.abspath(os.path.join(os.getcwd(), "..")))
os.chdir('../apps/drug_target_interaction/graph_dta/')
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
Prepare dataset Download the Davis dataset using `wget`.
###Code
# download and decompress the data
!wget "https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz" --no-check-certificate
!tar -zxf "PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz"
!ls "./davis/processed"
###Output
--2020-12-17 19:27:53-- https://baidu-nlp.bj.bcebos.com/PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz
Resolving baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)... 10.70.0.165
Connecting to baidu-nlp.bj.bcebos.com (baidu-nlp.bj.bcebos.com)|10.70.0.165|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 23301615 (22M) [application/gzip]
Saving to: 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz'
PaddleHelix%2Fdatas 100%[===================>]  22.22M  6.47MB/s    in 4.7s
2020-12-17 19:27:58 (4.72 MB/s) - 'PaddleHelix%2Fdatasets%2Fdti_datasets%2Fdavis.tgz' saved [23301615/23301615]
test   train
###Markdown
Once you have downloaded the processed Davis dataset, please refer to the script `data_gen.py` for the implementation of the `DTADataset` class, which is a streaming dataset wrapper for [PGL](https://github.com/PaddlePaddle/PGL).
###Code
from data_gen import DTADataset
###Output
[INFO] 2020-12-17 19:28:04,139 [mp_reader.py: 23]: ujson not install, fail back to use json instead
###Markdown
For the protein sequences, there are two ways to process them and get the inputs:* truncate or pad to get protein sequences with a fixed length, i.e. set `max_protein_len` > 0.* use the full protein sequence, i.e. set `max_protein_len` < 0.
###Code
train_data = './davis/processed/train'
test_data = './davis/processed/test'
max_protein_len = 1000 # set -1 to use full sequence
train_dataset = DTADataset(train_data, max_protein_len=max_protein_len)
test_dataset = DTADataset(test_data, max_protein_len=max_protein_len)
print(len(train_dataset), len(test_dataset))
###Output
25046 5010
###Markdown
Create the model In this tutorial, we take the GIN network as an example.
###Code
import paddle
import paddle.fluid as fluid
from model import DTAModel
paddle.enable_static()
###Output
_____no_output_____
###Markdown
`model_config` holds the hyperparameters for the whole network architecture. In particular, `model_config['compound']` is the configuration for the GNN model of compounds, and `model_config['protein']` is the configuration for the sequence convolution-based protein representation module.
###Code
lr = 0.0005 # learning rate
model_config = {
"compound": {
"gnn_type": "gin", # type of the GNN
"dropout_rate": 0.2,# dropout rate for the GNN
"embed_dim": 32, # embedding size of atom type
"layer_num": 5, # number of GNN layers
"hidden_size": 32, # hidden size of GNN layers
"output_dim": 128 # the dimension of representation of compound graph
},
"protein": {
"max_protein_len": max_protein_len, # set -1 to use full sequence
"embed_dim": 128, # embedding size of amino acid
"num_filters": 32, # num of filters of the sequence convolution
"output_dim": 128 # the the dimension of representation of target protein
},
"dropout_rate": 0.2 # dropout rate for the affinity predictor
}
###Output
_____no_output_____
###Markdown
Create the main program, startup program, and test program with the static model `DTAModel` and the Adam optimizer. For the details of `DTAModel`, please check `model.py`. Basically, it implements the network architecture shown in the figure above.
###Code
train_program, train_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(train_program, train_startup):
with fluid.unique_name.guard():
model = DTAModel(model_config=model_config)
model.train()
test_program = train_program.clone(for_test=True)
optimizer = fluid.optimizer.Adam(learning_rate=lr)
optimizer.minimize(model.loss)
###Output
_____no_output_____
###Markdown
Train and evaluate
###Code
import shutil
import numpy as np
from pgl.utils.data.dataloader import Dataloader
from data_gen import DTACollateFunc
from utils import concordance_index
max_epoch = 2 # we use a small epoch number as demonstration
batch_size = 512 # batch size for training
num_workers = 4 # number of workers for the PGL dataloader
best_model = 'gin_best_model' # directory to save the best model, i.e. with the minimum MSE
eval_txt = 'eval.txt' # the text file to record the evaluation metric
###Output
_____no_output_____
###Markdown
Create a Paddle Executor. Note that if you want to run on GPU, use `place = fluid.cuda_places()[0]` instead.
###Code
# place = fluid.cuda_places()[0]
place = fluid.CPUPlace()
exe = fluid.Executor(place)
###Output
_____no_output_____
###Markdown
In the `train()` function, we create a `DTACollateFunc` which wraps a batch of processed samples into a batch of graph data (`pgl.graph.MultiGraph` in PGL); together with the protein input data, it helps organize the full feed dictionary. You can check the data preparation in the inference section to understand how it works.
###Code
def train(exe, train_program, model, train_dataset):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
train_dataset,
batch_size=batch_size,
num_workers=num_workers,
stream_shuffle_size=1000,
collate_fn=collate_fn)
list_loss = []
for feed_dict in data_loader:
train_loss, = exe.run(
train_program, feed=feed_dict, fetch_list=[model.loss], return_numpy=False)
list_loss.append(np.array(train_loss).mean())
return np.mean(list_loss)
###Output
_____no_output_____
###Markdown
In the `evaluate()` function, we use MSE and the Concordance Index (CI) to evaluate the model. However, computing the ranking-based metric CI is time-consuming, so we pass in the previously smallest MSE (`best_mse`) and compare it with the current MSE, which lets us skip unnecessary CI computations.
###Code
def evaluate(exe, test_program, model, test_dataset, best_mse):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
test_dataset,
batch_size=batch_size,
num_workers=1,
collate_fn=collate_fn)
total_n, processed = len(test_dataset), 0
total_pred, total_label = [], []
for idx, feed_dict in enumerate(data_loader):
print('Evaluated {}/{}'.format(processed, total_n))
pred, = exe.run(
test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=False)
total_pred.append(np.array(pred))
total_label.append(feed_dict['label'])
processed += total_pred[-1].shape[0]
print('Evaluated {}/{}'.format(processed, total_n))
total_pred = np.concatenate(total_pred, 0).flatten()
total_label = np.concatenate(total_label, 0).flatten()
mse = ((total_label - total_pred) ** 2).mean(axis=0)
ci = None
if mse < best_mse:
ci = concordance_index(total_label, total_pred)
return mse, ci
###Output
_____no_output_____
###Markdown
The training and evaluation pipeline: for each epoch, train and then evaluate the model; if it achieves a smaller MSE on the test dataset, save the best model and update the evaluation metrics.
###Code
exe.run(train_startup)
best_mse, best_ci, best_ep = np.inf, 0, 0
for epoch_id in range(1, max_epoch + 1):
print('========== Epoch {} =========='.format(epoch_id))
train_loss = train(exe, train_program, model, train_dataset)
print('#Epoch: {}, Train loss: {}'.format(epoch_id, train_loss))
mse, ci = evaluate(exe, test_program, model, test_dataset, best_mse)
if mse < best_mse:
best_mse, best_ci, best_ep = mse, ci, epoch_id
if os.path.exists(best_model):
shutil.rmtree(best_model)
fluid.io.save_params(exe, best_model, train_program)
metric = 'Epoch: {}, Best MSE: {}, Best CI: {}'.format(epoch_id, best_mse, best_ci)
print(metric)
with open(eval_txt, 'w') as f:
f.write(metric)
else:
print('No improvement in epoch {}'.format(epoch_id))
metric = open(os.path.join(eval_txt), 'r').read()
print('===== Current best:\n{}'.format(metric))
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
`eval.txt` and folder `gin_best_model` are saved after training. Inference
###Code
import pgl
from rdkit import Chem
from pahelix.utils.compound_tools import smiles_to_graph_data
from pahelix.utils.protein_tools import ProteinTokenizer
protein_example = 'MENKKKDKDKSDDRMARPSGRSGHNTRGTGSSSSGVLMVGPNFRVGKKIGCGNFGELRLGKNLYTNEYVAIKLEPMKSRAPQLHLEYRFYKQLGSGDGIPQVYYFGPCGKYNAMVLELLGPSLEDLFDLCDRTFSLKTVLMIAIQLISRMEYVHSKNLIYRDVKPENFLIGRPGNKTQQVIHIIDFGLAKEYIDPETKKHIPYREHKSLTGTARYMSINTHLGKEQSRRDDLEALGHMFMYFLRGSLPWQGLKADTLKERYQKIGDTKRATPIEVLCENFPEMATYLRYVRRLDFFEKPDYDYLRKLFTDLFDRKGYMFDYEYDWIGKQLPTPVGAVQQDPALSSNREAHQHRDKMQQSKNQSADHRAAWDSQQANPHHLRAHLAADRHGGSVQVVSSTNGELNTDDPTAGRSNAPITAPTEVEVMDETKCCCFFKRRKRKTIQRHK'
drug_example = 'CCN1C2=C(C=CC(=C2)OC)SC1=CC(=O)C'
len(protein_example)
isomeric_smiles = Chem.MolToSmiles(Chem.MolFromSmiles(drug_example), isomericSmiles=True)
compound_graph = smiles_to_graph_data(isomeric_smiles)
isomeric_smiles
###Output
_____no_output_____
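###Markdown
Before building the feed dictionary, it can be useful to inspect which per-atom and per-bond arrays `smiles_to_graph_data` returned; the keys relied on later in this tutorial are listed in the comment below.
###Code
# Keys used later in this tutorial: 'edges', 'atom_type', 'chirality_tag',
# 'bond_type', 'bond_direction', 'atom_degrees', 'atom_Hs',
# 'atom_implicit_valence', 'atom_is_aromatic'
sorted(compound_graph.keys())
###Output
_____no_output_____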
###Markdown
Create a protein tokenizer which converts an amino acid sequence into token IDs, ready for the embedding layer.
###Code
tokenizer = ProteinTokenizer()
protein_seq = tokenizer.gen_token_ids(protein_example)
len(protein_seq)
###Output
_____no_output_____
###Markdown
Add padding to, or truncate, the protein sequence when using a fixed maximum protein length.
###Code
protein_seq = np.array(protein_seq, dtype=np.int64)
if max_protein_len > 0:
protein_token_ids = np.zeros(max_protein_len) + ProteinTokenizer.padding_token_ID
n = min(max_protein_len, len(protein_seq))
protein_token_ids[:n] = np.array(protein_seq)[:n]
protein_seq = protein_token_ids
len(protein_seq)
###Output
_____no_output_____
###Markdown
Create the `feed_dict` for the compound graph. Note that GraphDTA takes atom characteristics such as the number of directly-bonded neighbors (**degrees**), the number of sigma electrons excluding electrons bonded to hydrogens (**Hs**), the number of hydrogens implicitly bonded to an atom (**implicit valence**), and whether the atom is **aromatic**. These four characteristics are treated as numeric features. Together with the other features used by Pretrain GNNs, we can represent the input graph using the PGL APIs `pgl.graph.Graph` and `pgl.graph.MultiGraph`.
###Code
atom_numeric_feat = np.concatenate([
compound_graph['atom_degrees'],
compound_graph['atom_Hs'],
compound_graph['atom_implicit_valence'],
compound_graph['atom_is_aromatic'].reshape([-1, 1])
], axis=1).astype(np.float32)
g = pgl.graph.Graph(
num_nodes = len(compound_graph['atom_type']),
edges = compound_graph['edges'],
node_feat = {
'atom_type': compound_graph['atom_type'].reshape([-1, 1]),
'chirality_tag': compound_graph['chirality_tag'].reshape([-1, 1]),
'atom_numeric_feat': atom_numeric_feat
},
edge_feat = {
'bond_type': compound_graph['bond_type'].reshape([-1, 1]),
'bond_direction': compound_graph['bond_direction'].reshape([-1, 1])
})
join_graph = pgl.graph.MultiGraph([g])
feed_dict = model.compound_graph_wrapper.to_feed(join_graph)
###Output
_____no_output_____
###Markdown
Update the `feed_dict` for the protein sequence. Notice that the `label` input is just a placeholder; without it, the static graph would not run.
###Code
protein_token = [protein_seq]
protein_length = [0, protein_seq.size]
feed_dict['protein_token'] = np.concatenate(protein_token).reshape([-1, 1]).astype('int64')
feed_dict['protein_token_lod'] = np.add.accumulate(protein_length).reshape([1, -1]).astype('int32')
feed_dict['label'] = np.array([[1.0]]).astype(np.float32) # just a placeholder
pred, = exe.run(test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=True)
###Output
_____no_output_____
###Markdown
Predicted Kd value:
###Code
pred[0][0]
###Output
_____no_output_____
###Markdown
Drug Target Interaction Tutorial Introduction GraphDTA **GraphDTA** represents compound drugs as graphs and uses graph neural networks to predict drug-target affinity. Specifically, the graph is converted from SMILES using RDKit and passed through variants of graph neural networks to extract its representation. For the protein, the amino acid sequence is first embedded into an array of vectors, then a sequence convolution is applied to get the protein representation. Finally, the combined representations of the compound drug and the protein are fed into a feedforward network to regress the affinity measurement, such as Kd, Ki, KIBA, etc.  The code for GraphDTA is in `../apps/drug_target_interaction/graph_dta/`; we will change to this folder for the later steps.
###Code
import os
#os.chdir('../apps/drug_target_interaction/graph_dta/')
os.chdir('../apps/graph_dta/')
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
Prepare Dataset **TODO**: add downloader and preprocessing steps
###Code
from data_gen import DTADataset
train_data = '/mnt/xueyang/Datasets/PaddleHelix/davis/processed/train'
test_data = '/mnt/xueyang/Datasets/PaddleHelix/davis/processed/test'
max_protein_len = 1000 # set -1 to use full sequence
train_dataset = DTADataset(train_data, max_protein_len=max_protein_len)
test_dataset = DTADataset(test_data, max_protein_len=max_protein_len)
print(len(train_dataset), len(test_dataset))
###Output
25046 5010
###Markdown
Create Model Taking the GIN network as an example, we have:
###Code
import paddle.fluid as fluid
from model import DTAModel
lr = 0.0005 # learning rate
model_config = {
"compound": {
"gnn_type": "gin",
"dropout_rate": 0.2,
"embed_dim": 32, # embedding size of atom type
"layer_num": 5,
"hidden_size": 32,
"output_dim": 128 # the dimension of representation of compound graph
},
"protein": {
"max_protein_len": max_protein_len, # set -1 to use full sequence
"embed_dim": 128, # embedding size of amino acid
"num_filters": 32, # num of filters of the sequence convolution
"output_dim": 128 # the the dimension of representation of target protein
},
"dropout_rate": 0.2
}
train_program, train_startup = fluid.Program(), fluid.Program()
with fluid.program_guard(train_program, train_startup):
with fluid.unique_name.guard():
model = DTAModel(
model_config=model_config,
use_pretrained_compound_gnns=False)
model.train()
test_program = train_program.clone(for_test=True)
optimizer = fluid.optimizer.Adam(learning_rate=lr)
optimizer.minimize(model.loss)
###Output
_____no_output_____
###Markdown
Train and Evaluate
###Code
import shutil
import numpy as np
from pgl.utils.data.dataloader import Dataloader
from data_gen import DTACollateFunc
from utils import concordance_index
max_epoch = 2 # we use a small epoch number as demonstration
batch_size = 512
num_workers = 4
best_model = 'gin_best_model'
eval_txt = 'eval.txt'
has_cuda = len(fluid.cuda_places()) > 0
place = fluid.cuda_places()[0] if has_cuda else fluid.CPUPlace()
exe = fluid.Executor(place)
place
def train(exe, train_program, model, train_dataset):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
train_dataset,
batch_size=batch_size,
num_workers=num_workers,
stream_shuffle_size=1000,
collate_fn=collate_fn)
list_loss = []
for feed_dict in data_loader:
train_loss, = exe.run(
train_program, feed=feed_dict, fetch_list=[model.loss], return_numpy=False)
list_loss.append(np.array(train_loss).mean())
return np.mean(list_loss)
def evaluate(exe, test_program, model, test_dataset, best_mse):
collate_fn = DTACollateFunc(
model.compound_graph_wrapper,
is_inference=False,
label_name='Log10_Kd')
data_loader = Dataloader(
test_dataset,
batch_size=batch_size,
num_workers=1,
collate_fn=collate_fn)
total_n, processed = len(test_dataset), 0
total_pred, total_label = [], []
for idx, feed_dict in enumerate(data_loader):
print('Evaluated {}/{}'.format(processed, total_n))
pred, = exe.run(
test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=False)
total_pred.append(np.array(pred))
total_label.append(feed_dict['label'])
processed += total_pred[-1].shape[0]
print('Evaluated {}/{}'.format(processed, total_n))
total_pred = np.concatenate(total_pred, 0).flatten()
total_label = np.concatenate(total_label, 0).flatten()
mse = ((total_label - total_pred) ** 2).mean(axis=0)
ci = None
if mse < best_mse:
ci = concordance_index(total_label, total_pred)
return mse, ci
exe.run(train_startup)
best_mse, best_ci, best_ep = np.inf, 0, 0
for epoch_id in range(1, max_epoch + 1):
print('========== Epoch {} =========='.format(epoch_id))
train_loss = train(exe, train_program, model, train_dataset)
print('#Epoch: {}, Train loss: {}'.format(epoch_id, train_loss))
mse, ci = evaluate(exe, test_program, model, test_dataset, best_mse)
if mse < best_mse:
best_mse, best_ci, best_ep = mse, ci, epoch_id
if os.path.exists(best_model):
shutil.rmtree(best_model)
fluid.io.save_params(exe, best_model, train_program)
metric = 'Epoch: {}, Best MSE: {}, Best CI: {}'.format(epoch_id, best_mse, best_ci)
print(metric)
with open(eval_txt, 'w') as f:
f.write(metric)
else:
print('No improvement in epoch {}'.format(epoch_id))
metric = open(os.path.join(eval_txt), 'r').read()
print('===== Current best:\n{}'.format(metric))
os.listdir(os.getcwd())
###Output
_____no_output_____
###Markdown
`eval.txt` and folder `gin_best_model` are saved after training. Inference
###Code
import pgl
from rdkit import Chem
from pahelix.utils.compound_tools import smiles_to_graph_data
from pahelix.utils.protein_tools import ProteinTokenizer
protein_example = 'MENKKKDKDKSDDRMARPSGRSGHNTRGTGSSSSGVLMVGPNFRVGKKIGCGNFGELRLGKNLYTNEYVAIKLEPMKSRAPQLHLEYRFYKQLGSGDGIPQVYYFGPCGKYNAMVLELLGPSLEDLFDLCDRTFSLKTVLMIAIQLISRMEYVHSKNLIYRDVKPENFLIGRPGNKTQQVIHIIDFGLAKEYIDPETKKHIPYREHKSLTGTARYMSINTHLGKEQSRRDDLEALGHMFMYFLRGSLPWQGLKADTLKERYQKIGDTKRATPIEVLCENFPEMATYLRYVRRLDFFEKPDYDYLRKLFTDLFDRKGYMFDYEYDWIGKQLPTPVGAVQQDPALSSNREAHQHRDKMQQSKNQSADHRAAWDSQQANPHHLRAHLAADRHGGSVQVVSSTNGELNTDDPTAGRSNAPITAPTEVEVMDETKCCCFFKRRKRKTIQRHK'
drug_example = 'CCN1C2=C(C=CC(=C2)OC)SC1=CC(=O)C'
len(protein_example)
isomeric_smiles = Chem.MolToSmiles(Chem.MolFromSmiles(drug_example), isomericSmiles=True)
compound_graph = smiles_to_graph_data(isomeric_smiles)
isomeric_smiles
tokenizer = ProteinTokenizer()
protein_seq = tokenizer.gen_token_ids(protein_example)
len(protein_seq)
###Output
_____no_output_____
###Markdown
Add padding to, or truncate, the protein sequence when using a fixed maximum protein length.
###Code
protein_seq = np.array(protein_seq, dtype=np.int64)
if max_protein_len > 0:
protein_token_ids = np.zeros(max_protein_len) + ProteinTokenizer.padding_token_ID
n = min(max_protein_len, len(protein_seq))
protein_token_ids[:n] = np.array(protein_seq)[:n]
protein_seq = protein_token_ids
len(protein_seq)
###Output
_____no_output_____
###Markdown
Create the `feed_dict` for compound graph.
###Code
atom_numeric_feat = np.concatenate([
compound_graph['atom_degrees'],
compound_graph['atom_Hs'],
compound_graph['atom_implicit_valence'],
compound_graph['atom_is_aromatic'].reshape([-1, 1])
], axis=1).astype(np.float32)
g = pgl.graph.Graph(
num_nodes = len(compound_graph['atom_type']),
edges = compound_graph['edges'],
node_feat = {
'atom_type': compound_graph['atom_type'].reshape([-1, 1]),
'chirality_tag': compound_graph['chirality_tag'].reshape([-1, 1]),
'atom_numeric_feat': atom_numeric_feat
},
edge_feat = {
'bond_type': compound_graph['bond_type'].reshape([-1, 1]),
'bond_direction': compound_graph['bond_direction'].reshape([-1, 1])
})
join_graph = pgl.graph.MultiGraph([g])
feed_dict = model.compound_graph_wrapper.to_feed(join_graph)
###Output
_____no_output_____
###Markdown
Update the `feed_dict` for protein sequence.
###Code
protein_token = [protein_seq]
protein_length = [0, protein_seq.size]
feed_dict['protein_token'] = np.concatenate(protein_token).reshape([-1, 1]).astype('int64')
feed_dict['protein_token_lod'] = np.add.accumulate(protein_length).reshape([1, -1]).astype('int32')
feed_dict['label'] = np.array([[1.0]]).astype(np.float32) # just a placeholder
pred, = exe.run(test_program, feed=feed_dict, fetch_list=[model.pred], return_numpy=True)
###Output
_____no_output_____
###Markdown
Predicted Kd value:
###Code
pred[0][0]
###Output
_____no_output_____
|
machinelearning_kaggle.ipynb
|
###Markdown
The predicted values are constant at 1... it seems new x features need to be introduced... also, the accuracy is quite low, so more variation in the x values is needed.
###Code
import pickle
pickle.dump(lr, open('./saves/kaggle_lr.pkl','wb'))
y2_predict = lr.predict(x2_train)
y2_predict.shape, y2_train.shape
y2_result = y2_train - y2_predict
y2_result
lr.score(x2_train, y2_train)
###Output
_____no_output_____
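###Markdown
A hedged diagnostic sketch for the constant-prediction problem noted above, assuming `lr` is a scikit-learn linear model and `x2_train` holds numeric features: near-zero coefficients or zero-variance features would explain predictions that never move.
###Code
import numpy as np

print('distinct predictions:', np.unique(np.round(y2_predict, 3))[:10])
print('coefficients:', getattr(lr, 'coef_', 'n/a'))
print('per-feature std:', np.std(np.asarray(x2_train, dtype=float), axis=0))
###Output
_____no_output_____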
|
python_hw/Day2.ipynb
|
###Markdown
8/4/2021---Kura Labs---**Python** 1. Strings
###Code
name = input('What is your name? ')
color = input('What is your favourite color? ')
print(name + ' likes ' + color)
###Output
_____no_output_____
###Markdown
2. Script that converts weight
###Code
weight_lbs = input('What is your weight in pounds (lbs)? ')
weight_kg = float(weight_lbs)*0.45
print('Your weight in kilogramme (kg) is ', round(weight_kg,3),'.')
###Output
What is your weight in pounds (lbs)? 190
Your weight in kilogramme (kg) is 85.5 .
###Markdown
3. Home buying.
###Code
credit = input('What is your credit score? ')
Price = input('What is the price of the house? ')
if int(credit) > 699:
print('The buyer has good credit.')
down_payment = 0.1*float(Price)
else:
print('The buyer does not have good credit.')
down_payment = 0.3 * float(Price)
print(f"They need to pay down: ${down_payment}")
###Output
_____no_output_____
|
notebooks/Part VI CNN example.ipynb
|
###Markdown
Read the data: morphological labels Labels, assigned visually by astronomers in the GAMA collaboration:
###Code
morph = pd.read_csv(os.path.join("data","morphology.txt"), sep=" ")
###Output
_____no_output_____
###Markdown
There are two distinct labels, with no info on self-consistency: HubbleType and isElliptical
###Code
morph.head()
###Output
_____no_output_____
###Markdown
2451 galaxies do not have a HubbleType:
###Code
morph.HubbleType.value_counts()
morph.isElliptical.value_counts()
###Output
_____no_output_____
###Markdown
Process the labels Our goal will be to develop a model which can predict a correct label given a galaxy image.Let's focus on predicting the `isElliptical` label, and take a random sample of 2500 galaxies with the label "Elliptical" and 2500 with the label "NotElliptical". We will also need to select the corresponding images.
###Code
mask = morph.isElliptical == "NotElliptical"
df0 = morph[mask].sample(2500, random_state=0)
df0.head()
mask = morph.isElliptical == "Elliptical"
df1 = morph[mask].sample(2500, random_state=0)
df1.head()
###Output
_____no_output_____
###Markdown
Merge the data frames and check it is sensible:
###Code
data = pd.concat( (df0,df1) )
data.isElliptical.value_counts()
###Output
_____no_output_____
###Markdown
Create an array of integer labels, i.e. convert the string labels 'Elliptical' and 'NotElliptical' to integers
###Code
labdict = { 'NotElliptical':0, 'Elliptical':1 }
labels = np.array( [ labdict[s] for s in data.isElliptical ] )
###Output
_____no_output_____
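###Markdown
A quick sanity check that the integer labels are balanced as intended (2500 galaxies per class):
###Code
# counts of label 0 (NotElliptical) and label 1 (Elliptical)
np.bincount(labels)
###Output
_____no_output_____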
###Markdown
Read the data: galaxy images Read the images associated with our subset of the label data (with IDs lining up row by row)
###Code
loa = [ np.array( Image.open(os.path.join("data","images","{}_giH.png").format(i)), dtype=np.uint8 ) for i in data.id ]
images = np.array( loa )
###Output
_____no_output_____
###Markdown
There are 5000 total images, and each one has size 28x28x3 pixels:
###Code
images.shape
###Output
_____no_output_____
###Markdown
Currently, the image data is stored as integer values in the range of 0 to 255. For machine learning applications, we need to rescale this data to the range 0 to 1 and convert to float.
###Code
print( images.min(), images.max() )
images = np.float32(images)/255.
print( images.min(), images.max() )
###Output
_____no_output_____
###Markdown
Inspect the data To recap, our data has been processed into two numpy arrays: `images` and `labels`.Let's look at some random galaxies in the dataset along with their label (0=NotElliptical, 1=Elliptical)
###Code
show_random(images, labels )
###Output
_____no_output_____
###Markdown
Build the CNN
###Code
images.shape[1:]
def build( input_shape=images.shape[1:], num_classes=len(np.unique(labels)) ):
    # note the input shape is simply the shape of 'x' without the first dimension = (28, 28, 3)
# i.e. the number of datapoints in the training set does not matter
model = Sequential()
# Layers:
model.add(Conv2D(3, input_shape=input_shape, kernel_size=(3, 3), activation='relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(3, (3, 3), activation='relu'))
#model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(4, (2, 2), activation='relu'))
#model.add(MaxPooling2D(pool_size=(3, 3)))
model.add(Dropout(0.25))
model.add(Flatten())
#model.add(Dense(128, activation='relu'))
#model.add(Dropout(0.5))
# Final layer (fully connected)
if num_classes == 2:
model.add( Dense(1, activation='sigmoid') )
model.compile( optimizer=Adadelta(), loss=binary_crossentropy, metrics=['accuracy'] )
elif num_classes > 2:
model.add(Dense(num_classes, activation='softmax'))
model.compile(optimizer=Adadelta(), loss=categorical_crossentropy, metrics=['accuracy'])
return model
model = build()
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 26, 26, 3) 84
_________________________________________________________________
dropout_1 (Dropout) (None, 26, 26, 3) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 24, 24, 3) 84
_________________________________________________________________
dropout_2 (Dropout) (None, 24, 24, 3) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 23, 23, 4) 52
_________________________________________________________________
dropout_3 (Dropout) (None, 23, 23, 4) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 2116) 0
_________________________________________________________________
dense_1 (Dense) (None, 1) 2117
=================================================================
Total params: 2,337
Trainable params: 2,337
Non-trainable params: 0
_________________________________________________________________
###Markdown
Train the model Be sure to reserve some of the data for validation
###Code
model = build()
history = model.fit( images, labels, batch_size=128, epochs=30, verbose=1, validation_split=0.2 )
# Watch as the training accuracy starts around 60% and slowly climbs to roughly 89%; the validation accuracy fluctuates considerably from epoch to epoch.
###Output
Train on 4000 samples, validate on 1000 samples
Epoch 1/30
4000/4000 [==============================] - 5s 1ms/step - loss: 2.5123 - acc: 0.6022 - val_loss: 0.8835 - val_acc: 0.3760
Epoch 2/30
4000/4000 [==============================] - 4s 1ms/step - loss: 0.7096 - acc: 0.6150 - val_loss: 0.8354 - val_acc: 0.3350
Epoch 3/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.6174 - acc: 0.6345 - val_loss: 0.8291 - val_acc: 0.3430
Epoch 4/30
4000/4000 [==============================] - 4s 1ms/step - loss: 0.5950 - acc: 0.6465 - val_loss: 0.8644 - val_acc: 0.3540
Epoch 5/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.5734 - acc: 0.6660 - val_loss: 0.7430 - val_acc: 0.5020
Epoch 6/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.5509 - acc: 0.6873 - val_loss: 0.8023 - val_acc: 0.4660
Epoch 7/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.5183 - acc: 0.7570 - val_loss: 0.7453 - val_acc: 0.7660
Epoch 8/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.4860 - acc: 0.7800 - val_loss: 0.7488 - val_acc: 0.7620
Epoch 9/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.4640 - acc: 0.8005 - val_loss: 0.8382 - val_acc: 0.7120
Epoch 10/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.4447 - acc: 0.8043 - val_loss: 0.5000 - val_acc: 0.9210
Epoch 11/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.4203 - acc: 0.8220 - val_loss: 0.5871 - val_acc: 0.8470
Epoch 12/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.4076 - acc: 0.8267 - val_loss: 1.0732 - val_acc: 0.5890
Epoch 13/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3791 - acc: 0.8458 - val_loss: 0.4426 - val_acc: 0.9290
Epoch 14/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3702 - acc: 0.8485 - val_loss: 0.5002 - val_acc: 0.8760
Epoch 15/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3463 - acc: 0.8542 - val_loss: 1.1183 - val_acc: 0.5290
Epoch 16/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3502 - acc: 0.8552 - val_loss: 0.5496 - val_acc: 0.8360
Epoch 17/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3312 - acc: 0.8700 - val_loss: 0.6226 - val_acc: 0.7790
Epoch 18/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3366 - acc: 0.8640 - val_loss: 0.3229 - val_acc: 0.9740
Epoch 19/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3319 - acc: 0.8650 - val_loss: 0.3318 - val_acc: 0.9660
Epoch 20/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3199 - acc: 0.8708 - val_loss: 0.5939 - val_acc: 0.7780
Epoch 21/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3032 - acc: 0.8798 - val_loss: 0.5468 - val_acc: 0.8100
Epoch 22/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.3021 - acc: 0.8775 - val_loss: 0.8226 - val_acc: 0.6410
Epoch 23/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2772 - acc: 0.8915 - val_loss: 0.4830 - val_acc: 0.8520
Epoch 24/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2935 - acc: 0.8820 - val_loss: 0.5019 - val_acc: 0.8370
Epoch 25/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2750 - acc: 0.8930 - val_loss: 0.3348 - val_acc: 0.9420
Epoch 26/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2923 - acc: 0.8832 - val_loss: 0.7129 - val_acc: 0.6990
Epoch 27/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2950 - acc: 0.8760 - val_loss: 0.7135 - val_acc: 0.6840
Epoch 28/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2826 - acc: 0.8868 - val_loss: 0.4592 - val_acc: 0.8500
Epoch 29/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2740 - acc: 0.8885 - val_loss: 0.9955 - val_acc: 0.5450
Epoch 30/30
4000/4000 [==============================] - 5s 1ms/step - loss: 0.2637 - acc: 0.8920 - val_loss: 0.5314 - val_acc: 0.8000
###Markdown
Plot the training history
###Code
import matplotlib.pyplot as plt
import pylab
history_dict = history.history
f, (ax1, ax2) = plt.subplots(2, 1, figsize=(12, 12), dpi= 80)
ax1.plot(history_dict['loss'], 'o--', label='Training')
ax1.plot(history_dict['val_loss'], 'o--', label='Validation')
ax1.set_xlabel('Number of Epochs')
ax1.set_ylabel('Loss')
ax1.legend()
ax2.plot(history_dict['acc'], 'o--', label='Training')
ax2.plot(history_dict['val_acc'], 'o--', label='Validation')
ax2.set_xlabel('Number of Epochs')
ax2.set_ylabel('Accuracy')
ax2.legend()
###Output
_____no_output_____
###Markdown
Inspect the predictions- The predictions are probabilities between 0 and 1 that the given galaxy is an Elliptical.
###Code
predictions = model.predict( images )[:,0] # need to subset to get the correct shape
predictions
show_random(images, labels, predictions)
###Output
_____no_output_____
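###Markdown
To turn these probabilities into hard class predictions, a common (though arbitrary) choice is to threshold at 0.5; note that the accuracy computed below mixes training and validation galaxies, so treat it only as a rough check.
###Code
predicted_classes = (predictions > 0.5).astype(int)
accuracy = (predicted_classes == labels).mean()
accuracy
###Output
_____no_output_____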
|
Complete-Python-3-Bootcamp-master/01-Python Comparison Operators/02-Chained Comparison Operators.ipynb
|
###Markdown
Chained Comparison OperatorsAn interesting feature of Python is the ability to *chain* multiple comparisons to perform a more complex test. You can use these chained comparisons as shorthand for larger Boolean Expressions.In this lecture we will learn how to chain comparison operators and we will also introduce two other important statements in Python: **and** and **or**.Let's look at a few examples of using chains:
###Code
1 < 2 < 3
###Output
_____no_output_____
###Markdown
The above statement checks if 1 was less than 2 **and** if 2 was less than 3. We could have written this using an **and** statement in Python:
###Code
1<2 and 2<3
###Output
_____no_output_____
###Markdown
The **and** is used to make sure two checks have to be true in order for the total check to be true. Let's see another example:
###Code
1 < 3 > 2
###Output
_____no_output_____
###Markdown
The above checks if 3 is larger than both of the other numbers, so you could use **and** to rewrite it as:
###Code
1<3 and 3>2
###Output
_____no_output_____
###Markdown
It's important to note that Python is checking both instances of the comparisons. We can also use **or** to write comparisons in Python. For example:
###Code
1==2 or 2<3
###Output
_____no_output_____
###Markdown
Note how it was true; this is because with the **or** operator, we only need one *or* the other to be true. Let's see one more example to drive this home:
###Code
1==1 or 100==1
###Output
_____no_output_____
|
7-spark-streaming/7-spark-streaming.ipynb
|
###Markdown
Spark streaming In the previous tutorials we always worked with static data in the form of imports/exports. Here we look at a first example of data that evolves over time. To do so, we propose streaming the tweets mentioning Insee with Spark streaming. The idea behind streaming is a continuous flow of data for which batch analysis methods cannot meet the velocity requirements. In official statistics the need is currently limited. To be precise, there are three notions:* batch processing * micro-batch processing (the flow is handled as batches of very short duration, e.g. one second)* stream processing (each row is an event and triggers a reaction in the information system; event-driven development) In our case with Spark we are closer to micro-batching.
###Code
!hadoop fs -ls -t "s3a://projet-spark-lab/diffusion/tweets/input" | grep "tweets"| head -n2
###Output
2022-03-29 07:18:58,600 INFO impl.MetricsConfig: Loaded properties from hadoop-metrics2.properties
2022-03-29 07:18:58,713 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2022-03-29 07:18:58,713 INFO impl.MetricsSystemImpl: s3a-file-system metrics system started
drwxrwxrwx - jovyan jovyan 0 2022-03-29 07:21 s3a://projet-spark-lab/diffusion/tweets/input/2021
-rw-rw-rw- 1 jovyan jovyan 7904 2022-02-10 21:51 s3a://projet-spark-lab/diffusion/tweets/input/2021-09-10-22-33-56
grep: write error: Broken pipe
2022-03-29 07:21:14,110 INFO impl.MetricsSystemImpl: Stopping s3a-file-system metrics system...
2022-03-29 07:21:14,112 INFO impl.MetricsSystemImpl: s3a-file-system metrics system stopped.
2022-03-29 07:21:14,112 INFO impl.MetricsSystemImpl: s3a-file-system metrics system shutdown complete.
###Markdown
You can see the content of a file by running this command, left here as an example (the output is a bit long).```!hadoop fs -ls -r -t "s3a://projet-spark-lab/diffusion/tweets/input" | head -n2 |awk '{print $8}' | xargs -I{} hadoop fs -cat {} | head -n1```The files contain tweets in the JSON format of the Twitter API. The data schema To keep things simple, the data schema has been saved next to the notebook in pickle format, in 7-streaming/schema.p. Spark streaming Historically, Spark offered Spark Streaming; today it offers both Spark Streaming and Spark Structured Streaming, which covers the most standard use cases with structured data (i.e. with a schema). The main source types are the following:* Mostly for testing, the **rate** type generates fake data (timestamp, long) and the **tcp** type receives data sent over a TCP socket* More realistically, the **file** type watches a directory and reads its files, and the **kafka** type reads topics from that message-broker solution. Although the advantages of Kafka could be discussed in a future tutorial, to keep this one easy to distribute we will rely on the data available in the bucket **"s3a://projet-spark-lab/diffusion/tweets/input"**. We will try to build the following stream:* Stream the data and get the hashtags most frequent in the tweets mentioning Insee over the last 3 hours (sliding window). For simplicity, and to avoid persisting written data in this tutorial, we will store the result of the stream in an in-memory table. Other manipulations could be done on the tweets; feel free to be inventive:* Compute statistics on retweets, on the most frequent @User mentions, on the accounts, or on the media mentioning Insee.... Declaring the Spark context (it is almost always the same song)
###Code
from pyspark.sql import SparkSession
from pyspark import SparkConf, SparkContext
import os
conf = SparkConf()
#url par défaut d'une api kubernetes accédé depuis l'intérieur du cluster (ici le notebook tourne lui même dans kubernetes)
conf.setMaster("k8s://https://kubernetes.default.svc:443")
#image des executors spark: pour des raisons de simplicité on réutilise l'image du notebook
conf.set("spark.kubernetes.container.image", os.environ['IMAGE_NAME'])
# Name of the service account used to contact the Kubernetes API: note that the datalab package creates this environment variable itself.
# Inside a pod of the Kubernetes cluster you would read the file /var/run/secrets/kubernetes.io/serviceaccount/token
# However this parameter is unnecessary here because the local Kubernetes context of this notebook is preconfigured
# conf.set("spark.kubernetes.authenticate.driver.serviceAccountName", os.environ['KUBERNETES_SERVICE_ACCOUNT'])
# Name of the Kubernetes namespace
conf.set("spark.kubernetes.namespace", os.environ['KUBERNETES_NAMESPACE'])
# Number of Spark executors; as many Kubernetes pods as the number indicated will be launched.
conf.set("spark.executor.instances", "5")
# Memory allocated to the JVM
# Note that by default the Kubernetes pod will have an upper limit that depends on other parameters.
# We will experiment further down to check the total memory limit of an executor
conf.set("spark.executor.memory", "4g")
conf.set("spark.kubernetes.driver.pod.name", os.environ['KUBERNETES_POD_NAME'])
# Settings for saving the Spark application logs
# Note that these parameters require creating a spark-history folder; Spark does not create it itself, for obscure reasons
# import s3fs
# endpoint = "https://"+os.environ['AWS_S3_ENDPOINT']
# fs = s3fs.S3FileSystem(client_kwargs={'endpoint_url': endpoint})
# fs.touch('s3://tm8enk/spark-history/.keep')
# sparkconf.set("spark.eventLog.enabled","true")
# sparkconf.set("spark.eventLog.dir","s3a://tm8enk/spark-history")
#here to handle the dateTimeFormatter, which depends on the Java version...
conf.set("spark.sql.legacy.timeParserPolicy","LEGACY")
#conf.set("spark.sql.session.timeZone", "UTC")
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName("streaming").config(conf = conf).getOrCreate()
###Output
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/opt/spark/jars/spark-unsafe_2.12-3.2.0.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
2022-03-29 07:21:51,323 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
2022-03-29 07:21:52,474 WARN util.Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
2022-03-29 07:21:53,993 WARN spark.ExecutorAllocationManager: Dynamic allocation without a shuffle service is an experimental feature.
###Markdown
Let's define what we want to streamSeveral options are available; we leave them at their default values (a hedged sketch setting them explicitly follows this list):* .option("latestFirst","false") whether or not to read the file with the most recent modification date first* .option("maxFileAge","1 week") the maximum age of a file* .option("maxFilesPerTrigger","no max") the maximum number of files per stream execution* .option("cleanSource","off") the action to take on files that have been read (off: nothing, archive: move, delete: remove)
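As a hedged aside (not part of the original tutorial), setting all four options explicitly could look like the sketch below; the values are placeholders rather than recommendations, and it assumes the `schema` object loaded in the next cell and an active `spark` session:

```python
# Hedged sketch: the same file source with all four options set explicitly (placeholder values)
df_files = (spark.readStream.format("json")
            .schema(schema)
            .option("latestFirst", "false")       # read oldest files first
            .option("maxFileAge", "1 week")       # ignore files older than this
            .option("maxFilesPerTrigger", 100)    # cap the number of files per micro-batch
            .option("cleanSource", "off")         # off / archive / delete after a file is read
            .load("s3a://projet-spark-lab/diffusion/tweets/input"))
```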
###Code
import pickle
schema = pickle.load( open( "schema.p", "rb" ) )
df = spark.readStream.format("json") \
.schema(schema) \
.option("latestFirst","true") \
.load("s3a://projet-spark-lab/diffusion/tweets/input")
###Output
2022-03-29 07:31:12,390 WARN util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
###Markdown
When we define what to stream, nothing happens yet — this is Spark's usual lazy behaviour: as long as no action has been defined on this stream, and as long as the start() method has not been executed on it, nothing runs. Creating the in-memory table of hashtagsHere we define the transformation to apply. Printing the schema of the tweet object is left commented out (it is a bit long), but the content of the tweet lives in the text column, and we rely on Spark SQL to manipulate the df object, which is a DataFrame.We ask it to add a word column by splitting the tweet content on spaces, to turn that list of words into one word per row, to filter the words containing '#', and then to do a group by.For that we use the pyspark.sql.functions
###Code
#df.printSchema()
from pyspark.sql.functions import explode,split
from pyspark.sql.functions import col
tweets_tab = df.withColumn('word', explode(split(col('text'), ' '))) \
.filter(col('word').contains('#')) \
.groupBy('word') \
.count() \
.sort('count', ascending=False)
###Output
_____no_output_____
###Markdown
Now we must tell Spark in what form to maintain this transformationSeveral outputs exist; let's keep these in mind:* a parquet table, which you know by now (and which can therefore be declared in Hive and then queried from Redash!)* console (write the result to the console)* memory (maintain an SQL table in the cluster's memory)There are also several output modes: https://spark.apache.org/docs/latest/structured-streaming-programming-guide.htmloutput-modes* complete (at each stream iteration, re-emit the whole result)* append (at each stream iteration, emit only the new rows)* update (only the rows that have changed)Below we ask it to stream, picking up new files every 10 seconds and keeping the complete result in memory; a hedged console-sink variant is sketched right after this cell's text.
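As a hedged aside (not part of the original tutorial), writing the same aggregation to the console sink instead of memory could look roughly like this; the query is hypothetical and assumes `tweets_tab` from the previous cell:

```python
# Hedged sketch: console-sink variant of the same aggregation (assumes `tweets_tab` exists)
console_query = (tweets_tab.writeStream
                 .outputMode("complete")              # re-emit the full aggregation at each trigger
                 .format("console")                   # print each micro-batch to the driver console
                 .option("truncate", "false")
                 .trigger(processingTime="10 seconds")
                 .start())
# console_query.stop()  # remember to stop it before starting another long-running query
```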
###Code
tweets_tab.writeStream. \
outputMode("complete"). \
format("memory"). \
queryName("tweetquery_group_hashtag"). \
trigger(processingTime='10 seconds'). \
start()
###Output
2022-03-29 07:31:17,105 WARN streaming.ResolveWriteToStream: Temporary checkpoint location created which is deleted normally when the query didn't fail: /tmp/temporary-71bb797c-5e30-4f6b-8dd9-0fc05cc204f4. If it's required to delete it under any circumstances, please set spark.sql.streaming.forceDeleteTempCheckpointLocation to true. Important to know deleting temp checkpoint folder is best effort.
2022-03-29 07:31:17,132 WARN streaming.ResolveWriteToStream: spark.sql.adaptive.enabled is not supported in streaming DataFrames/Datasets and will be disabled.
2022-03-29 07:31:17,448 WARN streaming.FileStreamSource: 'latestFirst' is true. New files will be processed first, which may affect the watermark
value. In addition, 'maxFileAge' will be ignored.
###Markdown
Here we go — the stream has started. Let's look at it in the Spark UI: there is a new Streaming tab with the list of active streamsYou can click on a stream to see its throughput, execution time and other metrics.Through the API you can retrieve the running streams and stop them; if you call start on a stream that is already running it will not appreciate it — you must stop it first.
###Code
#for stream in spark.streams.active:
# print("streaming", stream.name, "with id", stream.id, "running")
# spark.streams.get(stream.id).stop()
###Output
2022-03-29 07:31:21,022 WARN streaming.FileStreamSource: Listed 30594 file(s) in 3552 ms
[Stage 0:====================================> (7139 + 10) / 10000]
###Markdown
Let's query this tableHaving declared it in memory under the name "tweetquery_group_hashtag" lets us query it, or expose it via the Spark Thrift Server (see the other tutorial).It is best to wait a few seconds before executing this cell, to give the stream time to start, list the S3 files and run the processing.
###Code
spark.sql("select * from tweetquery_group_hashtag order by count desc limit 10").show()
###Output
+--------------------+-----+
| word|count|
+--------------------+-----+
| #Français| 892|
| #France| 466|
| #croissance| 409|
| #Paris\n\n(2017| 331|
| #immigrés,| 300|
|#retouralavienormale| 285|
| #FakeNews.\nQuand| 239|
| #Darmanin| 192|
| #profs| 189|
| #décès]| 184|
+--------------------+-----+
###Markdown
Watermark**OK, but didn't we say we wanted these hashtags over the last 3 hours of tweets?**Here, Spark streams and keeps all of the streamed data, and the complete output mode makes it re-emit everything. We would rather have the data over the last 3 hours, for example, sliding in 5-minute steps.The tweets contain a created_at date column. **{"created_at":"Thu Apr 08 15:43:36 +0000 2021"**We could import the pyspark.sql.functions, but with selectExpr we could also use them directly. Here:* we convert the date into a timestamp in the Paris time zone * we ask Spark to manage a watermark relative to the timestamp column (so it is up to Spark to drop tweets that arrive beyond that threshold)* we ask it to do a group by word count, keeping a 3-hour window that slides every 5 minutes
###Code
from pyspark.sql.functions import window, col,from_utc_timestamp,to_timestamp,explode, split
tweets_tab_24=df \
.withColumn("timestamp",to_timestamp('created_at', 'EEE MMM d HH:mm:ss Z yyyy')) \
.withColumn("word",explode(split("text",' '))) \
.filter(col("word").contains('#')) \
.withWatermark("timestamp", "1 minute") \
.groupBy(
window("timestamp", "3 hours","5 minutes"),
"word") \
.count()
###Output
_____no_output_____
###Markdown
Test code that was used to find the right date pattern```from pyspark.sql import Rowfrom pyspark.sql.functions import from_unixtime, unix_timestamp, from_utc_timestamp, min, maxspark.sql("set spark.sql.legacy.timeParserPolicy=LEGACY")rdd = spark.sparkContext.parallelize([u'Thu Apr 08 15:43:36 +0000 2021'])row = Row("ts")df = rdd.map(row).toDF()df.show()df.withColumn("ts", from_utc_timestamp(to_timestamp("ts", "EEE MMM d HH:mm:ss Z yyyy"),"Europe/Paris")).show()```We do the same as before to put the result in memory, but the output mode cannot be complete when a watermark is used, since we want rows to be deleted as we go.
###Code
tweets_tab_24.writeStream.outputMode("append").trigger(processingTime='1 minute').format("memory").queryName("data").start()
from IPython.display import display, clear_output
from datetime import datetime
spark.sql('select * from data').show(10,False)
###Output
+------+----+-----+
|window|word|count|
+------+----+-----+
+------+----+-----+
###Markdown
Here Spark maintains in memory a dataframe with 3 columns:* the time interval window.start-window.end* the word* the number of occurrences of the wordThis dataframe only keeps the time intervals covering the last 3 hours, advancing every 5 minutes; older intervals are dropped.If a tweet happens to arrive with a timestamp older than the threshold, the watermark also excludes it from the aggregates.In append mode, each batch execution yields the new rows, which can then be persisted via a DataFrameWriter or sink — for example a file sink to write to S3, a JDBC sink or a Cassandra sink; a hedged file-sink sketch follows this paragraph.For the tutorial we use the memory sink, which feeds an in-memory table that grows with each append over time.
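As a hedged aside (not part of the original tutorial), persisting the appended rows to parquet files instead of memory could look like the sketch below; the S3 paths are placeholders, and a checkpoint location is required for file sinks:

```python
# Hedged sketch: file-sink variant (assumes `tweets_tab_24` exists); paths are placeholders
file_query = (tweets_tab_24.writeStream
              .outputMode("append")                  # only finalized windows are written
              .format("parquet")
              .option("path", "s3a://my-bucket/tweets/hashtags/")
              .option("checkpointLocation", "s3a://my-bucket/tweets/_checkpoints/hashtags/")
              .trigger(processingTime="1 minute")
              .start())
```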
###Code
from IPython.display import display, clear_output
from datetime import datetime
import time
for i in range(6):
clear_output(wait=True)
    print("At", datetime.now(), "the top 20 hashtags in tweets mentioning Insee over the last 3 hours is:")
display(spark.sql("select * from data where window.start > current_timestamp()-INTERVAL 200 minutes order by word desc" ).show())
time.sleep(30)
spark.stop()
###Output
_____no_output_____
|
notebooks/alpha_vantage.ipynb
|
###Markdown
This notebook takes care of pulling the raw data from the Alpha Vantage API and writing it to csv files. 1) The original plan was to pull all available daily and hourly data for the two big index ETFs, SPY and QQQ. However, no luck with the hourly data: I thought technical-indicator data went back further than the first "slice", but it does not. So for now, I'm going to expand the number of ETFs to grow the dataset. In the future, I could calculate the technical indicators for hourly data myself (or find a source that likely isn't free); a hedged sketch of such a local calculation is given below.
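As a hedged aside (not part of the original workflow), the daily indicators could also be recomputed locally with pandas instead of extra API calls; this is only a rough sketch that assumes a `df_price` frame with a numeric '4. close' column and uses a simple (non-Wilder) RSI approximation:

```python
# Rough local recomputation of the indicators (assumes df_price from the loop below)
close = df_price['4. close'].astype(float).sort_index()  # oldest-to-newest closes

sma_50 = close.rolling(window=50).mean()           # 50-period simple moving average
ema_21 = close.ewm(span=21, adjust=False).mean()   # 21-period exponential moving average

delta = close.diff()
gain = delta.clip(lower=0).rolling(window=14).mean()
loss = (-delta.clip(upper=0)).rolling(window=14).mean()
rsi_14 = 100 - 100 / (1 + gain / loss)             # simple 14-period RSI approximation
```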
###Code
import requests
import pandas as pd
import time
# symbols and technical indicators [code, interval, name]
# https://www.alphavantage.co/documentation/#technical-indicators
#
# got rid of JNK (weirdly high open z-score mean), HYG (weirdly low low z-score mean), and EWZ/IEF (infinite end values)
#symbol_list = ['SPY','QQQ','XLF','EEM','XLE','SLV','FXI','GDX','EFA','TLT','LQD','XLU','XLV','XLI','IEMG','VWO','XLK','IEF','XLB','JETS','BND']
symbol_list = ['SPY']
tech_list = [['SMA',50,'Technical Analysis: SMA'],
['EMA',21,'Technical Analysis: EMA'],
['RSI',14,'Technical Analysis: RSI']]
for symbol in symbol_list:
url = f"https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol={symbol}&outputsize=full&apikey=PDS8Y8E8KULJVDET"
r = requests.get(url)
data = r.json()
df_price = pd.DataFrame(data['Time Series (Daily)']).T
print(df_price.head())
time.sleep(15)
for tech in tech_list:
url = f"https://www.alphavantage.co/query?function={tech[0]}&symbol={symbol}&interval=daily&time_period={tech[1]}&series_type=close&apikey=PDS8Y8E8KULJVDET"
r = requests.get(url)
data = r.json()
df_tech = pd.DataFrame(data[tech[2]]).T
df_price = df_price.merge(df_tech, how='inner', left_index=True, right_index=True)
time.sleep(15)
df_price.to_csv(f"../data/raw/{symbol}_daily.csv")
print(f"{symbol} saved")
###Output
1. open 2. high 3. low 4. close 5. adjusted close 6. volume \
2021-12-07 464.41 468.88 458.6546 468.28 468.28 92791114
2021-12-06 456.13 460.79 453.56 458.79 458.79 98977532
2021-12-03 459.17 460.3 448.92 453.42 453.42 137331647
2021-12-02 450.73 459.07 450.31 457.4 457.4 127637758
2021-12-01 461.64 464.67 450.29 450.5 450.5 132485835
7. dividend amount 8. split coefficient
2021-12-07 0.0000 1.0
2021-12-06 0.0000 1.0
2021-12-03 0.0000 1.0
2021-12-02 0.0000 1.0
2021-12-01 0.0000 1.0
SPY saved
|
docs/running/jupyter-widgets.ipynb
|
###Markdown
Jupyter WidgetsSimpler GUI for running TARDIS - a collection of widgets provided by TARDIS to explore simulation data easily within Jupyter Notebook.
###Code
# Import the tardis widgets module
import tardis.widgets as tw
###Output
/home/jals/miniconda3/envs/tardis/lib/python3.6/importlib/_bootstrap.py:219: QAWarning: pyne.data is not yet QA compliant.
return f(*args, **kwds)
###Markdown
Shell InfoThis widget allows you to get fractional abundances of each shell - all the way from elements to ions to levels - by just clicking on the rows you want to explore!There are two ways in which you can generate the widget: Using Simulation object
###Code
# Create a Simulation object by running tardis
from tardis import run_tardis
sim = run_tardis('tardis_example.yml')
# Now use it to create a shell info widget
shell_info = tw.shell_info_from_simulation(sim)
# Call display method of shell_info
shell_info.display()
###Output
_____no_output_____
###Markdown
You can interact with the widget produced in output above (which may not be visible) like this: Using saved simulations (HDF files)
###Code
# Use a tardis simulation saved as HDF file to create shell info widget
shell_info = tw.shell_info_from_hdf('/tmp/sim_example.hdf')
# Display it
shell_info.display()
###Output
_____no_output_____
|
Laboratorios/C1_data_analysis/02_numpy/laboratorio_02.ipynb
|
###Markdown
MAT281 - Laboratorio N°02 Class objectives* Reinforce the basic concepts of numpy. Contents* [Problem 01](p1)* [Problem 02](p2)* [Problem 03](p3) Problem 01A **simple moving average** (SMA) is the average of the last $k$ previous data points; that is, let $a_1$,$a_2$,...,$a_n$ be an $n$-dimensional array, then the SMA is defined by:$$sma(k) =\dfrac{1}{k}(a_{n}+a_{n-1}+...+a_{n-(k-1)}) = \dfrac{1}{k}\sum_{i=0}^{k-1}a_{n-i} $$ On the other hand, we can define the SMA with a moving window that slides over the array, returning the running averages as follows:* $a = [1,2,3,4,5]$, the SMA with a window of $n=2$ would be: * sma(2): [mean(1,2),mean(2,3),mean(3,4),mean(4,5)] = [1.5, 2.5, 3.5, 4.5] * sma(3): [mean(1,2,3),mean(2,3,4),mean(3,4,5)] = [2.,3.,4.]Implement a function called `sma` whose input is a one-dimensional array $a$ and an integer $n$, and whose output returns the value of the simple moving average over the array, as follows:* **Example**: *sma([5,3,8,10,2,1,5,1,0,2], 2)* = $[4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]$In this case, the SMA is computed for an array with a window of $n=2$.**Hint**: use the `numpy.cumsum` function (a hedged sketch of that approach appears right after this statement)
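As a hedged aside (not part of the original lab), the `numpy.cumsum` hint can be used roughly as follows; the name `sma_cumsum` is only illustrative:

```python
import numpy as np

def sma_cumsum(a, n):
    # simple moving average via the cumulative-sum trick from the hint
    a = np.asarray(a, dtype=float)
    c = np.cumsum(a)
    c[n:] = c[n:] - c[:-n]       # windowed sums of length n
    return c[n - 1:] / n

# sma_cumsum([5, 3, 8, 10, 2, 1, 5, 1, 0, 2], 2) -> [4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ]
```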
###Code
import numpy as np
def sma(a, n):  # compute the simple moving average with a window of size n
    a = np.asarray(a, dtype=float)
    sma = np.zeros(len(a) - (n - 1))  # one value per window position
    for i in range(len(sma)):
        sma[i] = np.sum(a[i: i + n]) / n  # average of the values inside the window
    return sma

# quick checks with the examples from the statement
print(sma([1, 2, 3, 4, 5], 2))
print(sma([5, 3, 8, 10, 2, 1, 5, 1, 0, 2], 2))
# ejemplo 01
a = [1,2,3,4,5]
np.testing.assert_array_equal(
sma(a, n=2),
np.array([1.5, 2.5, 3.5, 4.5])
)
# ejemplo 02
a = [5,3,8,10,2,1,5,1,0,2]
np.testing.assert_array_equal(
sma(a, n=2),
np.array([4. , 5.5, 9. , 6. , 1.5, 3. , 3. , 0.5, 1. ])
)
###Output
_____no_output_____
###Markdown
Problem 02The function **strides($a,n,p$)** transforms a one-dimensional array $a$ into a matrix with $n$ columns, in which the rows are built by shifting the position in the array $p$ steps forward.* For the one-dimensional array $a$ = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10], the function strides($a,4,2$) creates a matrix with $4$ columns whose rows are shifted forward two at a time. The result should look like this:$$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$$Implement a function called `strides(a,4,2)` whose input is a one-dimensional array and which returns the matrix with $4$ columns whose rows are shifted forward two at a time. A hedged NumPy-based illustration follows the example.* **Example**: *strides($a$,4,2)* =$\begin{pmatrix} 1& 2 &3 &4 \\ 3& 4&5&6 \\ 5& 6 &7 &8 \\ 7& 8 &9 &10 \\ \end{pmatrix}$
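As a hedged aside (not part of the original lab), NumPy 1.20+ provides `sliding_window_view`, which reproduces the example by keeping every second window; this is only an illustration, not the requested solution:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
windows = sliding_window_view(a, 4)[::2]  # all length-4 windows, then every 2nd one (shift of 2)
print(windows)
# [[ 1  2  3  4]
#  [ 3  4  5  6]
#  [ 5  6  7  8]
#  [ 7  8  9 10]]
```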
###Code
def strides(a, n, p):
    a = np.asarray(a)
    filas = (len(a) - n) // p + 1  # number of rows: each row starts p positions after the previous one
    stride_1 = np.zeros((filas, n))  # matrix of zeros with the required dimensions
    j = 0
    for i in range(filas):
        stride_1[i] = a[j: j + n]  # copy the next window of n values into row i
        j += p  # advance the starting position by p steps
    return stride_1
# ejemplo 01
a = np.array([ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
n=4
p=2
np.testing.assert_array_equal(
strides(a,n,p),
np.array([
[ 1, 2, 3, 4],
[ 3, 4, 5, 6],
[ 5, 6, 7, 8],
[ 7, 8, 9, 10]])
)
###Output
_____no_output_____
###Markdown
Problem 03A **magic square** is an $n \times n$ matrix of positive integers such that the sum of the numbers along every column, every row and the main diagonals is the same. Usually, the numbers used to fill the cells are consecutive, from 1 to $n^2$, where $n$ is the number of columns and rows of the magic square.If the numbers are consecutive from 1 to $n^2$, the sum of the numbers along columns, rows and main diagonals equals: $$M_{n} = \dfrac{n(n^2+1)}{2}$$For example, * $A= \begin{pmatrix} 4& 9 &2 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is a magic square.* $B= \begin{pmatrix} 4& 2 &9 \\ 3& 5&7 \\ 8& 1 &6 \end{pmatrix}$ is not a magic square.Implement a function called `es_cuadrado_magico` whose input is a square matrix of size $n$ with consecutive numbers from $1$ to $n^2$ and whose output returns *True* if it is a magic square, or *False* otherwise* **Example**: *es_cuadrado_magico($A$)* = True, *es_cuadrado_magico($B$)* = False**Hint**: Create a function that validates that the matrix is square and that its numbers are consecutive from 1 to $n^2$ (a hedged NumPy illustration follows this statement).
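As a hedged aside (not part of the original lab), the same check can be written compactly with NumPy axis sums and traces; `is_magic` is only an illustrative name and assumes the consecutive-numbers validation is done separately:

```python
import numpy as np

def is_magic(A):
    # all row, column and diagonal sums must equal the magic constant M_n
    A = np.asarray(A)
    n = A.shape[0]
    M = n * (n**2 + 1) / 2
    sums = np.concatenate([A.sum(axis=0), A.sum(axis=1),
                           [np.trace(A), np.trace(np.fliplr(A))]])
    return bool(np.all(sums == M))

# is_magic([[4, 9, 2], [3, 5, 7], [8, 1, 6]]) -> True
```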
###Code
def es_cuadrada(A):  # check that the matrix is square
    return A.shape[0] == A.shape[1]  # number of rows equals number of columns

def son_consecutivos(A):  # check whether the matrix entries are the consecutive numbers 1..n^2
    Flag = False
    list_comp = [i for i in range(1, (A.shape[0])**2 + 1)]  # expected values: 1, 2, ..., n^2
    list_A = []
    for i in range(0, A.shape[0]):
        for j in range(0, A.shape[1]):
            list_A.append(A[i][j])  # collect all matrix entries in a list
    list_A.sort()  # sort the collected entries
    if list_A == list_comp:  # compare against the expected consecutive values
        Flag = True
    return Flag

def es_cuadrado_magico(A):  # check whether A is a magic square
    Flag = True
    if es_cuadrada(A) and son_consecutivos(A):  # the necessary conditions must hold first
        M_n = A.shape[0] * (A.shape[0]**2 + 1) / 2  # magic constant for consecutive numbers
        if np.sum(np.diag(A)) != M_n:  # check the main diagonal
            Flag = False
            return Flag
        for i in range(0, A.shape[0]):
            if np.sum(A[i, 0: A.shape[1]]) != M_n:  # check the sum of row i
                Flag = False
                return Flag
            if np.sum(A[0: A.shape[1], i]) != M_n:  # check the sum of column i
                Flag = False
                return Flag
        return Flag
    if not es_cuadrada(A) and not son_consecutivos(A):
        return 'The matrix is neither square nor filled with consecutive entries'
    if not es_cuadrada(A):
        return 'The matrix is not square'
    if not son_consecutivos(A):
        return 'The matrix entries are not consecutive'
# ejemplo 01
A = np.array([[4,9,2],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(A) == True, "ejemplo 01 incorrecto"
# ejemplo 02
B = np.array([[4,2,9],[3,5,7],[8,1,6]])
assert es_cuadrado_magico(B) == False, "ejemplo 02 incorrecto"
###Output
_____no_output_____
|
notebooks/MHD_1d.ipynb
|
###Markdown
MHD Equation with CentPy in 1D Import packages
###Code
# Install the centpy package
!pip install centpy
# Import numpy and centpy for the solution
import numpy as np
import centpy
# Imports functions from matplotlib and setup for the animation
import matplotlib.pyplot as plt
from matplotlib import animation
from IPython.display import HTML
###Output
_____no_output_____
###Markdown
Equation We solve the equations of ideal magnetohydrodynamics in 1D \begin{equation} \partial_t \begin{bmatrix} \rho \\ \rho v_x \\ \rho v_y \\ \rho v_z \\ B_y \\ B_z \\ E \end{bmatrix} + \partial_x \begin{bmatrix} \rho v_x \\ \rho v_x^2 + p^* - B_x^2 \\ \rho v_x v_y - B_x B_y \\\rho v_x v_z - B_x B_z \\ B_y v_x - B_x v_y \\ B_z v_x - B_x v_z \\(E+p^*) v_x - B_x (B_x v_x + B_y v_y + B_z v_Z) \end{bmatrix} = 0 \end{equation}where the total pressure is given by \begin{equation}p^* = p + \frac{1}{2} (B_x^2 + B_y^2 + B_z^2)\end{equation}with the equation of state\begin{equation}p = (\gamma-1) \left(E-\frac{1}{2} \rho (v_x^2+v_y^2+v_z^2) - \frac{1}{2}(B_x^2 + B_y^2 + B_z^2)\right), \qquad \gamma=2.0\end{equation}The solution is computed on the domain $(x,t)\in([-1,1]\times[0,0.2])$ with initial data for a *Brio-Wu shock tube*:\begin{equation}(\rho, v_x, v_y, v_z, B_y, B_z, p)_{t=0} = \begin{cases}(1,0,0,0,1,0,1) & \text{if} & -1<x\leq 0 \\(0.125, 0, 0, 0, -1, 0, 0.1) & \text{if} & \ \ 0<x<1\end{cases}\end{equation}and Dirichlet boundary data set by initial data on each boundary. The solution is computed using 400 cells and CFL number 0.475.
###Code
pars = centpy.Pars1d(x_init=-1., x_final=1., t_final=0.2, dt_out=0.002, J=400, cfl=0.475, scheme="fd2")
pars.B1 = 0.75
# MHD equation
class MHD1d(centpy.Equation1d):
    def pressure(self, u):
        return (u[:, 6] - 0.5*((u[:, 1] ** 2 + u[:, 2] ** 2 + u[:, 3] ** 2)/u[:, 0])
                - 0.5 * (self.B1 ** 2 + u[:, 4] ** 2 + u[:, 5] ** 2))
def initial_data(self):
u = np.zeros((self.J + 4, 7))
midpoint = int(self.J / 2) + 2
# Left side
u[:midpoint, 0] = 1.0
u[:midpoint, 1] = 0.0
u[:midpoint, 2] = 0.0
u[:midpoint, 3] = 0.0
u[:midpoint, 4] = 1.0
u[:midpoint, 5] = 0.0
u[:midpoint, 6] = 1.0 + 25.0 / 32.0
# Right side
u[midpoint:, 0] = 0.125
u[midpoint:, 1] = 0.0
u[midpoint:, 2] = 0.0
u[midpoint:, 3] = 0.0
u[midpoint:, 4] = -1.0
u[midpoint:, 5] = 0.0
u[midpoint:, 6] = 0.1 + 25.0 / 32.0
return u
def boundary_conditions(self, u):
left_v = [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 1.0 + 25.0 / 32.0]
right_v = [0.125, 0.0, 0.0, 0.0, -1.0, 0.0, 0.1 + 25.0 / 32]
if self.odd:
u[0] = left_v
u[-1] = right_v
u[-2] = right_v
else:
u[0] = left_v
u[1] = left_v
u[-1] = right_v
def flux_x(self, u):
f = np.zeros_like(u)
B1 = self.B1
p_star = self.pressure(u) + 0.5 * (B1 ** 2 + u[:, 4] ** 2 + u[:, 5] ** 2)
f[:, 0] = u[:, 1]
f[:, 1] = u[:, 1] ** 2 / u[:, 0] + p_star
f[:, 2] = u[:, 1] * u[:, 2] / u[:, 0] - B1 * u[:, 4]
f[:, 3] = u[:, 1] * u[:, 3] / u[:, 0] - B1 * u[:, 5]
f[:, 4] = u[:, 1] * u[:, 4] / u[:, 0] - B1 * u[:, 2] / u[:, 0]
f[:, 5] = u[:, 1] * u[:, 5] / u[:, 0] - B1 * u[:, 3] / u[:, 0]
f[:, 6] = (u[:, 6] + p_star) * (u[:, 1] / u[:, 0]) - B1 * (
B1 * u[:, 1] + u[:, 2] * u[:, 4] + u[:, 3] * u[:, 5]
) / u[:, 0]
return f
def spectral_radius_x(self, u):
rho = u[:, 0]
u_x = u[:, 1] / rho
p = self.pressure(u)
A = 2.0 * p / rho
B = (self.B1 ** 2 + u[:, 4] ** 2 + u[:, 5] ** 2) / rho
cf = np.sqrt(
0.5 * (A + B + np.sqrt((A + B) ** 2 - 4.0 * A * self.B1 ** 2 / rho))
)
return np.abs(u_x) + cf
###Output
_____no_output_____
###Markdown
Solution
###Code
eqn = MHD1d(pars)
soln = centpy.Solver1d(eqn)
soln.solve()
###Output
_____no_output_____
###Markdown
Animation
###Code
# Animation
j0 = slice(2,-2)
fig = plt.figure(figsize=(12,6))
ax1=fig.add_subplot(1,3,1)
ax2=fig.add_subplot(2,3,2)
ax3=fig.add_subplot(2,3,3)
ax4=fig.add_subplot(2,3,5)
ax5=fig.add_subplot(2,3,6)
line_u=[]
for ax in [ax1,ax2,ax3,ax4,ax5]:
ax.set_xlim(-1, 1)
line_u.append(ax.plot([], [], linewidth=1, marker='o', markersize=2)[0])
ax1.set_xlabel('x')
ax4.set_xlabel('x')
ax5.set_xlabel('x')
ax2.set_xticks([])
ax3.set_xticks([])
ax2.set_yticks([])
ax3.set_yticks([])
ax4.set_yticks([])
ax5.set_yticks([])
ax1.set_ylabel(r'$\rho$', fontsize=12)
ax2.set_ylabel(r'$v_x$', fontsize=12)
ax3.set_ylabel(r'$v_y$', fontsize=12)
ax4.set_ylabel(r'$B_y$', fontsize=12)
ax5.set_ylabel(r'$p$', fontsize=12)
# Primitive variables
rho=soln.u_n[:,j0,0]
v_x = soln.u_n[:,j0,1]/soln.u_n[:,j0,0]
v_y = soln.u_n[:,j0,2]/soln.u_n[:,j0,0]
B_y = soln.u_n[:,j0,4]
ax1.set_ylim(np.min(rho), np.max(rho))
ax2.set_ylim(np.min(v_x), np.max(v_x))
ax3.set_ylim(np.min(v_y), np.max(v_y))
ax4.set_ylim(np.min(B_y), np.max(B_y))
ax5.set_ylim(0.5, 2.)
def animate(i):
p = eqn.pressure(soln.u_n[i,j0,:])
line_u[0].set_data(soln.x[j0], rho[i])
line_u[1].set_data(soln.x[j0], v_x[i])
line_u[2].set_data(soln.x[j0], v_y[i])
line_u[3].set_data(soln.x[j0], B_y[i])
line_u[4].set_data(soln.x[j0], p)
plt.close()
anim = animation.FuncAnimation(fig, animate, frames=soln.Nt, interval=100, blit=False);
HTML(anim.to_html5_video())
###Output
_____no_output_____
|
notebooks/figures/publish/TestPtAtkinsonTideModule.ipynb
|
###Markdown
Test `pt_atkinson_tide` ModuleRender the figure object produced by the `nowcast.figures.publish.pt_atkinson_tide` module.Provides data for visual testing to confirm that refactoring has not adversely changed the figure for the web page.The set-up and function call replicate as nearly as possible what is done in the `nowcast.workers.make_plots` worker. Notebooks like this should be developed in a [Nowcast Figures Development Environment](https://salishsea-nowcast.readthedocs.io/en/latest/figures/fig_dev_env.html) so that all of the necessary dependency packages are installed.The development has to be done on a workstation that has the Salish Sea Nowcast system `/results/` partition mounted.
###Code
import io
from pathlib import Path
import arrow
import netCDF4 as nc
import yaml
from nowcast.figures.publish import pt_atkinson_tide
%matplotlib inline
config = '''
timezone: Canada/Pacific
ssh:
tidal_predictions: ../../../tidal_predictions/
run:
results_archive:
forecast: /results/SalishSea/forecast.201812/
forecast2: /results/SalishSea/forecast2.201812/
'''
config = yaml.safe_load(io.StringIO(config))
run_date = arrow.get('2020-02-09')
run_type = 'forecast'
dmy = run_date.format('DDMMMYY').lower()
start_day = {
'nowcast': run_date.format('YYYYMMDD'),
'forecast': run_date.shift(days=+1).format('YYYYMMDD'),
}
end_day = {
'nowcast': run_date.format('YYYYMMDD'),
'forecast': run_date.shift(days=+2).format('YYYYMMDD'),
}
results_home = Path(config['run']['results_archive'][run_type])
results_dir = results_home/dmy
grid_T_hr = nc.Dataset(
str(results_dir/'SalishSea_1h_{0}_{1}_grid_T.nc'
.format(start_day[run_type], end_day[run_type])))
tidal_predictions = config['ssh']['tidal_predictions']
%%timeit -n1 -r1
# Refactored rendering of figure
from importlib import reload
from nowcast.figures import website_theme
reload(pt_atkinson_tide)
reload(website_theme)
fig = pt_atkinson_tide.make_figure(
grid_T_hr, tidal_predictions, config['timezone'], theme=website_theme)
###Output
_____no_output_____
|
01 Machine Learning/scikit_examples_jupyter/text/plot_document_classification_20newsgroups.ipynb
|
###Markdown
Classification of text documents using sparse featuresThis is an example showing how scikit-learn can be used to classify documentsby topics using a bag-of-words approach. This example uses a scipy.sparsematrix to store the features and demonstrates various classifiers that canefficiently handle sparse matrices.The dataset used in this example is the 20 newsgroups dataset. It will beautomatically downloaded, then cached.The bar plot indicates the accuracy, training time (normalized) and test time(normalized) of each classifier.
###Code
# Author: Peter Prettenhofer <[email protected]>
# Olivier Grisel <[email protected]>
# Mathieu Blondel <[email protected]>
# Lars Buitinck
# License: BSD 3 clause
import logging
import numpy as np
from optparse import OptionParser
import sys
from time import time
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.feature_selection import SelectFromModel
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import RidgeClassifier
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC
from sklearn.linear_model import SGDClassifier
from sklearn.linear_model import Perceptron
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.naive_bayes import BernoulliNB, ComplementNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neighbors import NearestCentroid
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils.extmath import density
from sklearn import metrics
# Display progress logs on stdout
logging.basicConfig(level=logging.INFO,
format='%(asctime)s %(levelname)s %(message)s')
# parse commandline arguments
op = OptionParser()
op.add_option("--report",
action="store_true", dest="print_report",
help="Print a detailed classification report.")
op.add_option("--chi2_select",
action="store", type="int", dest="select_chi2",
help="Select some number of features using a chi-squared test")
op.add_option("--confusion_matrix",
action="store_true", dest="print_cm",
help="Print the confusion matrix.")
op.add_option("--top10",
action="store_true", dest="print_top10",
help="Print ten most discriminative terms per class"
" for every classifier.")
op.add_option("--all_categories",
action="store_true", dest="all_categories",
help="Whether to use all categories or not.")
op.add_option("--use_hashing",
action="store_true",
help="Use a hashing vectorizer.")
op.add_option("--n_features",
action="store", type=int, default=2 ** 16,
help="n_features when using the hashing vectorizer.")
op.add_option("--filtered",
action="store_true",
help="Remove newsgroup information that is easily overfit: "
"headers, signatures, and quoting.")
def is_interactive():
return not hasattr(sys.modules['__main__'], '__file__')
# work-around for Jupyter notebook and IPython console
argv = [] if is_interactive() else sys.argv[1:]
(opts, args) = op.parse_args(argv)
if len(args) > 0:
op.error("this script takes no arguments.")
sys.exit(1)
print(__doc__)
op.print_help()
print()
# #############################################################################
# Load some categories from the training set
if opts.all_categories:
categories = None
else:
categories = [
'alt.atheism',
'talk.religion.misc',
'comp.graphics',
'sci.space',
]
if opts.filtered:
remove = ('headers', 'footers', 'quotes')
else:
remove = ()
print("Loading 20 newsgroups dataset for categories:")
print(categories if categories else "all")
data_train = fetch_20newsgroups(subset='train', categories=categories,
shuffle=True, random_state=42,
remove=remove)
data_test = fetch_20newsgroups(subset='test', categories=categories,
shuffle=True, random_state=42,
remove=remove)
print('data loaded')
# order of labels in `target_names` can be different from `categories`
target_names = data_train.target_names
def size_mb(docs):
return sum(len(s.encode('utf-8')) for s in docs) / 1e6
data_train_size_mb = size_mb(data_train.data)
data_test_size_mb = size_mb(data_test.data)
print("%d documents - %0.3fMB (training set)" % (
len(data_train.data), data_train_size_mb))
print("%d documents - %0.3fMB (test set)" % (
len(data_test.data), data_test_size_mb))
print("%d categories" % len(target_names))
print()
# split a training set and a test set
y_train, y_test = data_train.target, data_test.target
print("Extracting features from the training data using a sparse vectorizer")
t0 = time()
if opts.use_hashing:
vectorizer = HashingVectorizer(stop_words='english', alternate_sign=False,
n_features=opts.n_features)
X_train = vectorizer.transform(data_train.data)
else:
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,
stop_words='english')
X_train = vectorizer.fit_transform(data_train.data)
duration = time() - t0
print("done in %fs at %0.3fMB/s" % (duration, data_train_size_mb / duration))
print("n_samples: %d, n_features: %d" % X_train.shape)
print()
print("Extracting features from the test data using the same vectorizer")
t0 = time()
X_test = vectorizer.transform(data_test.data)
duration = time() - t0
print("done in %fs at %0.3fMB/s" % (duration, data_test_size_mb / duration))
print("n_samples: %d, n_features: %d" % X_test.shape)
print()
# mapping from integer feature name to original token string
if opts.use_hashing:
feature_names = None
else:
feature_names = vectorizer.get_feature_names()
if opts.select_chi2:
print("Extracting %d best features by a chi-squared test" %
opts.select_chi2)
t0 = time()
ch2 = SelectKBest(chi2, k=opts.select_chi2)
X_train = ch2.fit_transform(X_train, y_train)
X_test = ch2.transform(X_test)
if feature_names:
# keep selected feature names
feature_names = [feature_names[i] for i
in ch2.get_support(indices=True)]
print("done in %fs" % (time() - t0))
print()
if feature_names:
feature_names = np.asarray(feature_names)
def trim(s):
"""Trim string to fit on terminal (assuming 80-column display)"""
return s if len(s) <= 80 else s[:77] + "..."
# #############################################################################
# Benchmark classifiers
def benchmark(clf):
print('_' * 80)
print("Training: ")
print(clf)
t0 = time()
clf.fit(X_train, y_train)
train_time = time() - t0
print("train time: %0.3fs" % train_time)
t0 = time()
pred = clf.predict(X_test)
test_time = time() - t0
print("test time: %0.3fs" % test_time)
score = metrics.accuracy_score(y_test, pred)
print("accuracy: %0.3f" % score)
if hasattr(clf, 'coef_'):
print("dimensionality: %d" % clf.coef_.shape[1])
print("density: %f" % density(clf.coef_))
if opts.print_top10 and feature_names is not None:
print("top 10 keywords per class:")
for i, label in enumerate(target_names):
top10 = np.argsort(clf.coef_[i])[-10:]
print(trim("%s: %s" % (label, " ".join(feature_names[top10]))))
print()
if opts.print_report:
print("classification report:")
print(metrics.classification_report(y_test, pred,
target_names=target_names))
if opts.print_cm:
print("confusion matrix:")
print(metrics.confusion_matrix(y_test, pred))
print()
clf_descr = str(clf).split('(')[0]
return clf_descr, score, train_time, test_time
results = []
for clf, name in (
(RidgeClassifier(tol=1e-2, solver="sag"), "Ridge Classifier"),
(Perceptron(max_iter=50, tol=1e-3), "Perceptron"),
(PassiveAggressiveClassifier(max_iter=50, tol=1e-3),
"Passive-Aggressive"),
(KNeighborsClassifier(n_neighbors=10), "kNN"),
(RandomForestClassifier(n_estimators=100), "Random forest")):
print('=' * 80)
print(name)
results.append(benchmark(clf))
for penalty in ["l2", "l1"]:
print('=' * 80)
print("%s penalty" % penalty.upper())
# Train Liblinear model
results.append(benchmark(LinearSVC(penalty=penalty, dual=False,
tol=1e-3)))
# Train SGD model
results.append(benchmark(SGDClassifier(alpha=.0001, max_iter=50,
penalty=penalty)))
# Train SGD with Elastic Net penalty
print('=' * 80)
print("Elastic-Net penalty")
results.append(benchmark(SGDClassifier(alpha=.0001, max_iter=50,
penalty="elasticnet")))
# Train NearestCentroid without threshold
print('=' * 80)
print("NearestCentroid (aka Rocchio classifier)")
results.append(benchmark(NearestCentroid()))
# Train sparse Naive Bayes classifiers
print('=' * 80)
print("Naive Bayes")
results.append(benchmark(MultinomialNB(alpha=.01)))
results.append(benchmark(BernoulliNB(alpha=.01)))
results.append(benchmark(ComplementNB(alpha=.1)))
print('=' * 80)
print("LinearSVC with L1-based feature selection")
# The smaller C, the stronger the regularization.
# The more regularization, the more sparsity.
results.append(benchmark(Pipeline([
('feature_selection', SelectFromModel(LinearSVC(penalty="l1", dual=False,
tol=1e-3))),
('classification', LinearSVC(penalty="l2"))])))
# make some plots
indices = np.arange(len(results))
results = [[x[i] for x in results] for i in range(4)]
clf_names, score, training_time, test_time = results
training_time = np.array(training_time) / np.max(training_time)
test_time = np.array(test_time) / np.max(test_time)
plt.figure(figsize=(12, 8))
plt.title("Score")
plt.barh(indices, score, .2, label="score", color='navy')
plt.barh(indices + .3, training_time, .2, label="training time",
color='c')
plt.barh(indices + .6, test_time, .2, label="test time", color='darkorange')
plt.yticks(())
plt.legend(loc='best')
plt.subplots_adjust(left=.25)
plt.subplots_adjust(top=.95)
plt.subplots_adjust(bottom=.05)
for i, c in zip(indices, clf_names):
plt.text(-.3, i, c)
plt.show()
###Output
_____no_output_____
|
docs/guide/parsing.ipynb
|
###Markdown
Parsing STIX Content Parsing STIX content is as easy as calling the [parse()](../api/stix2.parsing.rststix2.parsing.parse) function on a JSON string, dictionary, or file-like object. It will automatically determine the type of the object. The STIX objects within `bundle` objects, and any cyber observables contained within `observed-data` objects will be parsed as well.**Parsing a string**
###Code
from stix2 import parse
input_string = """{
"type": "observed-data",
"id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf",
"spec_version": "2.1",
"created": "2016-04-06T19:58:16.000Z",
"modified": "2016-04-06T19:58:16.000Z",
"first_observed": "2015-12-21T19:00:00Z",
"last_observed": "2015-12-21T19:00:00Z",
"number_observed": 50,
"objects": {
"0": {
"type": "file",
"hashes": {
"SHA-256": "0969de02ecf8a5f003e3f6d063d848c8a193aada092623f8ce408c15bcb5f038"
}
}
}
}"""
obj = parse(input_string)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
**Parsing a dictionary**
###Code
input_dict = {
"type": "identity",
"id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
"spec_version": "2.1",
"created": "2015-12-21T19:59:11Z",
"modified": "2015-12-21T19:59:11Z",
"name": "Cole Powers",
"identity_class": "individual"
}
obj = parse(input_dict)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
Parsing STIX Content Parsing STIX content is as easy as calling the [parse()](../api/stix2.core.rststix2.core.parse) function on a JSON string, dictionary, or file-like object. It will automatically determine the type of the object. The STIX objects within `bundle` objects, and the cyber observables contained within `observed-data` objects will be parsed as well.**Parsing a string**
###Code
from stix2 import parse
input_string = """{
"type": "observed-data",
"id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf",
"created": "2016-04-06T19:58:16.000Z",
"modified": "2016-04-06T19:58:16.000Z",
"first_observed": "2015-12-21T19:00:00Z",
"last_observed": "2015-12-21T19:00:00Z",
"number_observed": 50,
"objects": {
"0": {
"type": "file",
"hashes": {
"SHA-256": "0969de02ecf8a5f003e3f6d063d848c8a193aada092623f8ce408c15bcb5f038"
}
}
}
}"""
obj = parse(input_string)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
**Parsing a dictionary**
###Code
input_dict = {
"type": "identity",
"id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
"created": "2015-12-21T19:59:11Z",
"modified": "2015-12-21T19:59:11Z",
"name": "Cole Powers",
"identity_class": "individual"
}
obj = parse(input_dict)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
**Parsing a file-like object**
###Code
file_handle = open("/tmp/stix2_store/course-of-action/course-of-action--d9727aee-48b8-4fdb-89e2-4c49746ba4dd.json")
obj = parse(file_handle)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
Parsing Custom STIX Content Parsing custom STIX objects and/or STIX objects with custom properties is also completed easily with [parse()](../api/stix2.core.rststix2.core.parse). Just supply the keyword argument ``allow_custom=True``. When ``allow_custom`` is specified, [parse()](../api/stix2.core.rststix2.core.parse) will attempt to convert the supplied STIX content to known STIX 2 domain objects and/or previously defined [custom STIX 2 objects](custom.ipynb). If the conversion cannot be completed (and ``allow_custom`` is specified), [parse()](../api/stix2.core.rststix2.core.parse) will treat the supplied STIX 2 content as valid STIX 2 objects and return them. **Warning: Specifying allow_custom may lead to critical errors if further processing (searching, filtering, modifying etc...) of the custom content occurs where the custom content supplied is not valid STIX 2**. This is an axiomatic possibility as the ``stix2`` library cannot guarantee proper processing of unknown custom STIX 2 objects that were explicitly flagged to be allowed, and thus may not be valid.For examples of parsing STIX 2 objects with custom STIX properties, see [Custom STIX Content: Custom Properties](custom.ipynbCustom-Properties)For examples of parsing defined custom STIX 2 objects, see [Custom STIX Content: Custom STIX Object Types](custom.ipynbCustom-STIX-Object-Types)For retrieving STIX 2 content from a source (e.g. file system, TAXII) that may possibly have custom STIX 2 content unknown to the user, the user can create a STIX 2 DataStore/Source with the flag ``allow_custom=True``. As mentioned, this will configure the DataStore/Source to allow for unknown STIX 2 content to be returned (albeit not converted to full STIX 2 domain objects and properties); the ``stix2`` library may preclude processing the unknown content, if the content is not valid or actual STIX 2 domain objects and properties.
###Code
from taxii2client import Collection
from stix2 import CompositeDataSource, FileSystemSource, TAXIICollectionSource
# to allow for the retrieval of unknown custom STIX2 content,
# just create *Stores/*Sources with the 'allow_custom' flag
# create FileSystemStore
fs = FileSystemSource("/path/to/stix2_data/", allow_custom=True)
# create TAXIICollectionSource
colxn = Collection('http://taxii_url')
ts = TAXIICollectionSource(colxn, allow_custom=True)
###Output
_____no_output_____
###Markdown
Parsing STIX Content Parsing STIX content is as easy as calling the [parse()](../api/stix2.parsing.rststix2.parsing.parse) function on a JSON string, dictionary, or file-like object. It will automatically determine the type of the object. The STIX objects within `bundle` objects will be parsed as well.**Parsing a string**
###Code
from stix2 import parse
input_string = """{
"type": "observed-data",
"id": "observed-data--b67d30ff-02ac-498a-92f9-32f845f448cf",
"spec_version": "2.1",
"created": "2016-04-06T19:58:16.000Z",
"modified": "2016-04-06T19:58:16.000Z",
"first_observed": "2015-12-21T19:00:00Z",
"last_observed": "2015-12-21T19:00:00Z",
"number_observed": 50,
"object_refs": [
"file--5d2dc832-b137-4e8c-97b2-5b00c18be611"
]
}"""
obj = parse(input_string)
print(type(obj))
print(obj.serialize(pretty=True))
###Output
_____no_output_____
###Markdown
**Parsing a dictionary**
###Code
input_dict = {
"type": "identity",
"id": "identity--311b2d2d-f010-4473-83ec-1edf84858f4c",
"spec_version": "2.1",
"created": "2015-12-21T19:59:11Z",
"modified": "2015-12-21T19:59:11Z",
"name": "Cole Powers",
"identity_class": "individual"
}
obj = parse(input_dict)
print(type(obj))
print(obj.serialize(pretty=True))
###Output
_____no_output_____
###Markdown
**Parsing a file-like object**
###Code
file_handle = open("/tmp/stix2_store/course-of-action/course-of-action--d9727aee-48b8-4fdb-89e2-4c49746ba4dd/20170531213041022744.json")
obj = parse(file_handle)
print(type(obj))
print(obj.serialize(pretty=True))
###Output
_____no_output_____
###Markdown
Parsing Custom STIX Content Parsing custom STIX objects and/or STIX objects with custom properties is also completed easily with [parse()](../api/stix2.parsing.rststix2.parsing.parse). Just supply the keyword argument ``allow_custom=True``. When ``allow_custom`` is specified, [parse()](../api/stix2.parsing.rststix2.parsing.parse) will attempt to convert the supplied STIX content to known STIX 2 domain objects and/or previously defined [custom STIX 2 objects](custom.ipynb). If the conversion cannot be completed (and ``allow_custom`` is specified), [parse()](../api/stix2.parsing.rststix2.parsing.parse) will treat the supplied STIX 2 content as valid STIX 2 objects and return them. This is an axiomatic possibility as the ``stix2`` library cannot guarantee proper processing of unknown custom STIX 2 objects that were explicitly flagged to be allowed, and thus may not be valid.**Warning**Specifying allow_custom may lead to critical errors if further processing (searching, filtering, modifying etc...) of the custom content occurs where the custom content supplied is not valid STIX 2For examples of parsing STIX 2 objects with custom STIX properties, see [Custom STIX Content: Custom Properties](custom.ipynbCustom-Properties)For examples of parsing defined custom STIX 2 objects, see [Custom STIX Content: Custom STIX Object Types](custom.ipynbCustom-STIX-Object-Types)For retrieving STIX 2 content from a source (e.g. file system, TAXII) that may possibly have custom STIX 2 content unknown to the user, the user can create a STIX 2 DataStore/Source with the flag ``allow_custom=True``. As mentioned, this will configure the DataStore/Source to allow for unknown STIX 2 content to be returned (albeit not converted to full STIX 2 domain objects and properties); the ``stix2`` library may preclude processing the unknown content, if the content is not valid or actual STIX 2 domain objects and properties.
###Code
from taxii2client import Collection
from stix2 import CompositeDataSource, FileSystemSource, TAXIICollectionSource
# to allow for the retrieval of unknown custom STIX2 content,
# just create *Stores/*Sources with the 'allow_custom' flag
# create FileSystemStore
fs = FileSystemSource("/path/to/stix2_data/", allow_custom=True)
# create TAXIICollectionSource
colxn = Collection('http://taxii_url')
ts = TAXIICollectionSource(colxn, allow_custom=True)
###Output
_____no_output_____
###Markdown
**Parsing a file-like object**
###Code
file_handle = open("/tmp/stix2_store/course-of-action/course-of-action--d9727aee-48b8-4fdb-89e2-4c49746ba4dd/20170531213041022744.json")
obj = parse(file_handle)
print(type(obj))
print(obj)
###Output
_____no_output_____
###Markdown
Parsing Custom STIX Content Parsing custom STIX objects and/or STIX objects with custom properties is also completed easily with [parse()](../api/stix2.parsing.rststix2.parsing.parse). Just supply the keyword argument ``allow_custom=True``. When ``allow_custom`` is specified, [parse()](../api/stix2.parsing.rststix2.parsing.parse) will attempt to convert the supplied STIX content to known STIX 2 domain objects and/or previously defined [custom STIX 2 objects](custom.ipynb). If the conversion cannot be completed (and ``allow_custom`` is specified), [parse()](../api/stix2.parsing.rststix2.parsing.parse) will treat the supplied STIX 2 content as valid STIX 2 objects and return them. This is an axiomatic possibility as the ``stix2`` library cannot guarantee proper processing of unknown custom STIX 2 objects that were explicitly flagged to be allowed, and thus may not be valid.**Warning**Specifying allow_custom may lead to critical errors if further processing (searching, filtering, modifying etc...) of the custom content occurs where the custom content supplied is not valid STIX 2For examples of parsing STIX 2 objects with custom STIX properties, see [Custom STIX Content: Custom Properties](custom.ipynbCustom-Properties)For examples of parsing defined custom STIX 2 objects, see [Custom STIX Content: Custom STIX Object Types](custom.ipynbCustom-STIX-Object-Types)For retrieving STIX 2 content from a source (e.g. file system, TAXII) that may possibly have custom STIX 2 content unknown to the user, the user can create a STIX 2 DataStore/Source with the flag ``allow_custom=True``. As mentioned, this will configure the DataStore/Source to allow for unknown STIX 2 content to be returned (albeit not converted to full STIX 2 domain objects and properties); the ``stix2`` library may preclude processing the unknown content, if the content is not valid or actual STIX 2 domain objects and properties.
###Code
from taxii2client import Collection
from stix2 import CompositeDataSource, FileSystemSource, TAXIICollectionSource
# to allow for the retrieval of unknown custom STIX2 content,
# just create *Stores/*Sources with the 'allow_custom' flag
# create FileSystemStore
fs = FileSystemSource("/path/to/stix2_data/", allow_custom=True)
# create TAXIICollectionSource
colxn = Collection('http://taxii_url')
ts = TAXIICollectionSource(colxn, allow_custom=True)
###Output
_____no_output_____
|
Time_Series_Project.ipynb
|
###Markdown
Import Library
###Code
import numpy as np
import pandas as pd
from keras.layers import Dense, LSTM, Bidirectional
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Import the DatasetFirst we import the dataset; the file we will use is named datatraining.txt. We also pass a **delimiter** because the data uses a comma separator.
###Code
df = pd.read_csv('datatraining.txt', delimiter=',', quoting = 3)
df.head()
###Output
_____no_output_____
###Markdown
Then we check whether the dataset has any missing values. With the **isnull().sum()** function we can count the missing values in each column. Since nothing is missing, we don't need to fill anything in.
###Code
df.isnull().sum()
###Output
_____no_output_____
###Markdown
Also we will check the data with **info()**
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
Index: 8143 entries, "1" to "8143"
Data columns (total 7 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 "date" 8143 non-null object
1 "Temperature" 8143 non-null float64
2 "Humidity" 8143 non-null float64
3 "Light" 8143 non-null float64
4 "CO2" 8143 non-null float64
5 "HumidityRatio" 8143 non-null float64
6 "Occupancy" 8143 non-null int64
dtypes: float64(5), int64(1), object(1)
memory usage: 508.9+ KB
###Markdown
Split the Data into a Training Set and a Test SetSince we only need the date and Temperature columns, we take just those and leave the rest. Then we plot the data to see what it looks like.
###Code
X = df.iloc[:, 0].values
y = df.iloc[:, 1].values
plt.figure(figsize=(15,9))
plt.plot(X, y)
plt.title('Temperature average',
fontsize=20);
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, shuffle=False)
###Output
_____no_output_____
###Markdown
Then we create a function that receives a series (the attribute we converted to a numpy array) and returns the attributes and labels of the dataset in batches.
###Code
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
series = tf.expand_dims(series, axis=-1)
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift = 1, drop_remainder=True)
ds = ds.flat_map(lambda w: w.batch(window_size + 1))
ds = ds.shuffle(shuffle_buffer)
ds = ds.map(lambda w: (w[:-1], w[-1:]))
return ds.batch(batch_size).prefetch(1)
###Output
_____no_output_____
###Markdown
Make an ArchitectureNext we create the architecture for our Time Series model.
###Code
train_set = windowed_dataset(y_train, window_size=32, batch_size=50, shuffle_buffer=1000)
val_set = windowed_dataset(y_test, window_size=32, batch_size=50, shuffle_buffer=1000)
model = tf.keras.models.Sequential([
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.LSTM(60),
tf.keras.layers.Dense(512, activation="relu"),
tf.keras.layers.Dropout(0.25),
tf.keras.layers.Dense(128, activation="relu"),
tf.keras.layers.Dense(64, activation="relu"),
tf.keras.layers.Dense(1),
])
###Output
_____no_output_____
###Markdown
Make a Callback ClassSince we are aiming for a model MAE < 10% of the data scale, we first compute the threshold we need to reach.
###Code
Mae = (df['"Temperature"'].max() - df['"Temperature"'].min()) * 10/100
print(Mae)
###Output
0.418
###Markdown
We create a Callback Class
###Code
class myCallback(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs={}):
if(logs.get('mae')<0.4 and logs.get('val_mae')<0.4):
            print("\nThe model's MAE is < 10% of the data scale")
self.model.stop_training = True
callbacks = myCallback()
###Output
_____no_output_____
###Markdown
Optimizer and Train the Dataset
###Code
optimizer = tf.keras.optimizers.SGD(lr=1.0000e-04, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=100,validation_data = val_set,callbacks=[callbacks])
###Output
Epoch 1/100
###Markdown
Visualize the Results
###Code
plt.plot(history.history['mae'])
plt.plot(history.history['val_mae'])
plt.title('accuracy Model')
plt.ylabel('Mae')
plt.xlabel('epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss Model')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
###Output
_____no_output_____
|
eval/eval-species.ipynb
|
###Markdown
Build SPECIES docker image
###Code
import docker
DOCKERFILE_PATH = "../images/SPECIES"
client = docker.from_env()
client.images.build(path=DOCKERFILE_PATH, tag="species:latest")
###Output
_____no_output_____
###Markdown
Run SPECIES and parse results
###Code
import os
import shutil
def run_species(input_dir, output_dir):
if os.path.isdir(output_dir):
shutil.rmtree(output_dir)
os.makedirs(output_dir)
volume = {os.path.abspath(input_dir): {'bind': '/home/species/corpus', 'mode': 'ro'}}
client = docker.from_env()
image = client.images.get("species:latest")
response = client.containers.run(image, "species /home/species/corpus", volumes=volume, remove=True)
with open(os.path.join(output_dir, "species.tags"), "w+") as f:
f.write(response.decode("utf-8"))
return os.path.join(output_dir, "species.tags")
import pandas as pd
from glob import glob
def parse_species(input_dir, tags_filename, output_dir):
tags = pd.read_csv(tags_filename, sep="\t", header=None)
tags.columns = ["document", "start", "end", "text", "#species id"]
for document in glob(os.path.join(input_dir, "*.txt")):
document = os.path.basename(document)
doc_tags = tags[tags["document"] == document]
doc_ann = doc_tags.drop(columns=["#species id", "document"])
doc_ann = doc_ann.astype({'start': 'int32', 'end': 'int32'})
doc_ann = doc_ann.drop_duplicates()
doc_ann.reset_index(inplace=True, drop=True)
doc_ann = doc_ann.rename('T{}'.format)
doc_ann["end"] = doc_ann["end"].apply(lambda x: int(x)+1) # To align with LINNAEUS and COPIOUS char offsets
doc_ann.insert(0, "type", ["LIVB"]*doc_ann.shape[0])
ann_filename = document.split(".")[0]+".ann"
doc_ann.to_csv(os.path.join(output_dir, ann_filename), sep="\t", header=False)
os.remove(tags_filename)
###Output
_____no_output_____
###Markdown
Eval SPECIES on test corpora
###Code
from eval_utils import *
###Output
_____no_output_____
###Markdown
Eval on LINNAEUS GSC
###Code
PATH_TO_LINNAEUS_GT = '../corpora/LINNAEUS_GSC_brat/linnaeus_ascii/test'
PATH_TO_LINNAEUS_PRED = './output/SPECIES/LINNAEUS_pred'
tags_filename = run_species(PATH_TO_LINNAEUS_GT, PATH_TO_LINNAEUS_PRED)
parse_species(PATH_TO_LINNAEUS_GT, tags_filename, PATH_TO_LINNAEUS_PRED)
get_precision_recall_f1_single_corpus(PATH_TO_LINNAEUS_PRED, PATH_TO_LINNAEUS_GT, criterion=exact)
get_precision_recall_f1_single_corpus(PATH_TO_LINNAEUS_PRED, PATH_TO_LINNAEUS_GT, criterion=approximate)
FN, FP, TP = get_FN_FP_TP_single_corpus(PATH_TO_LINNAEUS_PRED, PATH_TO_LINNAEUS_GT, criterion=exact)
FP
###Output
_____no_output_____
###Markdown
Eval on S800 GSC
###Code
PATH_TO_S800_GT = '../corpora/S800_GSC_brat/s800/test'
PATH_TO_S800_PRED = "./output/SPECIES/S800_pred"
tags_filename = run_species(PATH_TO_S800_GT, PATH_TO_S800_PRED)
parse_species(PATH_TO_S800_GT, tags_filename, PATH_TO_S800_PRED)
get_precision_recall_f1_single_corpus(PATH_TO_S800_PRED, PATH_TO_S800_GT, criterion=exact)
get_precision_recall_f1_single_corpus(PATH_TO_S800_PRED, PATH_TO_S800_GT, criterion=approximate)
FN, FP, TP = get_FN_FP_TP_single_corpus(PATH_TO_S800_PRED, PATH_TO_S800_GT, criterion=exact)
FP
###Output
_____no_output_____
###Markdown
Eval on COPIOUS GSC
###Code
PATH_TO_COPIOUS_GT = '../corpora/COPIOUS_GSC_brat/copious_ascii/test'
PATH_TO_COPIOUS_PRED = "./output/SPECIES/COPIOUS_pred"
tags_filename = run_species(PATH_TO_COPIOUS_GT, PATH_TO_COPIOUS_PRED)
parse_species(PATH_TO_COPIOUS_GT, tags_filename, PATH_TO_COPIOUS_PRED)
get_precision_recall_f1_single_corpus(PATH_TO_COPIOUS_PRED, PATH_TO_COPIOUS_GT, criterion=exact)
get_precision_recall_f1_single_corpus(PATH_TO_COPIOUS_PRED, PATH_TO_COPIOUS_GT, criterion=approximate)
FN, FP, TP = get_FN_FP_TP_single_corpus(PATH_TO_COPIOUS_PRED, PATH_TO_COPIOUS_GT, criterion=exact)
FP
###Output
_____no_output_____
###Markdown
Eval on BB task corpus
###Code
PATH_TO_BB_GT = '../corpora/BB_GSC_brat/bb_ascii/test'
PATH_TO_BB_PRED = "./output/SPECIES/BB_pred"
tags_filename = run_species(PATH_TO_BB_GT, PATH_TO_BB_PRED)
parse_species(PATH_TO_BB_GT, tags_filename, PATH_TO_BB_PRED)
get_precision_recall_f1_single_corpus(PATH_TO_BB_PRED, PATH_TO_BB_GT, criterion=exact)
get_precision_recall_f1_single_corpus(PATH_TO_BB_PRED, PATH_TO_BB_GT, criterion=approximate)
FN, FP, TP = get_FN_FP_TP_single_corpus(PATH_TO_BB_PRED, PATH_TO_BB_GT, criterion=exact)
FN
###Output
_____no_output_____
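###Markdown
The four evaluation blocks above repeat the same run/parse/score pattern. A small helper could wrap it; this is only a sketch and assumes `run_species`, `parse_species` and the metric functions imported from `eval_utils` are in scope.
###Code
# Sketch: wrap the repeated run/parse/score pattern used above
def eval_species_on_corpus(gt_dir, pred_dir, criterion=exact):
    tags_filename = run_species(gt_dir, pred_dir)
    parse_species(gt_dir, tags_filename, pred_dir)
    return get_precision_recall_f1_single_corpus(pred_dir, gt_dir, criterion=criterion)
# Example usage (same corpora as above):
# eval_species_on_corpus(PATH_TO_LINNAEUS_GT, PATH_TO_LINNAEUS_PRED)
###Output
_____no_output_____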
|
fundamentals/pandas_intro.ipynb
|
###Markdown
A Quick Overview of Pandas for CheminformaticsThis notebook provides an overview of the Pandas library for data handling and manipulation in Python scripts. Install the necessary Python libraries
###Code
!pip install pandas numpy seaborn
###Output
Requirement already satisfied: pandas in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (1.3.2)
Requirement already satisfied: numpy in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (1.22.2)
Requirement already satisfied: seaborn in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (0.11.2)
Requirement already satisfied: python-dateutil>=2.7.3 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from pandas) (2.8.2)
Requirement already satisfied: pytz>=2017.3 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from pandas) (2021.1)
Requirement already satisfied: scipy>=1.0 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from seaborn) (1.7.3)
Requirement already satisfied: matplotlib>=2.2 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from seaborn) (3.4.3)
Requirement already satisfied: pillow>=6.2.0 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from matplotlib>=2.2->seaborn) (8.3.1)
Requirement already satisfied: pyparsing>=2.2.1 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from matplotlib>=2.2->seaborn) (2.4.7)
Requirement already satisfied: kiwisolver>=1.0.1 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from matplotlib>=2.2->seaborn) (1.3.2)
Requirement already satisfied: cycler>=0.10 in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from matplotlib>=2.2->seaborn) (0.10.0)
Requirement already satisfied: six in /opt/anaconda3/envs/rdkit_2021_08/lib/python3.9/site-packages (from cycler>=0.10->matplotlib>=2.2->seaborn) (1.16.0)
###Markdown
Import the necessary Python libraries
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Enable the display of plots from Pandas in a Jupyter notebook
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
Reading Data From a CSV FileRead in a file containing data from hERG assays in the ChEMBL database
###Code
df = pd.read_csv("https://raw.githubusercontent.com/PatWalters/practical_cheminformatics_tutorials/main/data/ChEMBL_hERG.csv")
###Output
_____no_output_____
###Markdown
Examine the number of rows and columns in the dataframe
###Code
df.shape
###Output
_____no_output_____
###Markdown
Getting an Overview of the DataWe can also look at the datatype of each of the columns
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We can also use the "columns" method to look at the column names.
###Code
df.columns
###Output
_____no_output_____
###Markdown
The "describe" method provides summary statistics for numeric columns.
###Code
df.describe()
###Output
_____no_output_____
###Markdown
Converting DatatypesNote that Pandas thinks that the molregno column is an integer. This is not what we want; we want this column to be a string. Let's fix it.
###Code
df.molregno = df.molregno.apply(str)
df.dtypes
###Output
_____no_output_____
###Markdown
Finding Duplicate MoleculesRecall that our dataframe contains 8989 rows. Let's see how many unique molregno values are in the dataframe. Duplicate molregno values will be same molecule, so we'll average the values for those molecules.
###Code
len(df.molregno.unique())
###Output
_____no_output_____
###Markdown
Examining Assay TypesThe dataframe contains two types of assays, binding assays (B), and functional assays (F). Let's make a bar chart to see how many of each are in the dataframe.
###Code
df.assay_type.value_counts().plot(kind='bar')
###Output
_____no_output_____
###Markdown
We will limit our analysis to only binding assays.
###Code
df = df.query("assay_type == 'B'")
df.shape
###Output
_____no_output_____
###Markdown
Aggregating DataIn order to combine rows that contain the same molecule, we will use the "groupby" function.
###Code
gb = df.groupby("molregno")
###Output
_____no_output_____
###Markdown
We will iterate over the groups and append the name and the mean of the multiple replicates to a temporary list. Once this is finished, we will create a new dataframe with the molecule name and the average IC50.
###Code
row_list = []
for k,v in gb:
row_list.append([k,v.standard_value.mean()])
row_df = pd.DataFrame(row_list,columns=["name","standard_value"])
###Output
_____no_output_____
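###Markdown
As an aside, the same aggregation can be done without an explicit loop; a minimal sketch using groupby/mean directly is shown below.
###Code
# Sketch: loop-free equivalent of the aggregation above
row_df_alt = (df.groupby("molregno", as_index=False)["standard_value"]
                .mean()
                .rename(columns={"molregno": "name"}))
row_df_alt.head()
###Output
_____no_output_____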
###Markdown
Let's see how many rows and columns are in our new dataframe. Note that this is the same as the number of unique values of molregno.
###Code
row_df.shape
###Output
_____no_output_____
###Markdown
Examining the Data DistributionNow we will make a plot of the distribution of IC50 values. To do this, we will use the Seaborn Python library.
###Code
import seaborn as sns
###Output
_____no_output_____
###Markdown
First we will set a few variables to make the plots look better.
###Code
sns.set(rc={'figure.figsize': (15, 12)})
sns.set(font_scale=1.5)
sns.set_style('whitegrid')
###Output
_____no_output_____
###Markdown
Now we can make the plot.
###Code
ax = sns.displot(row_df.standard_value,kind="kde")
###Output
_____no_output_____
###Markdown
Note that the plot above isn't very informative. Most of the values are small, but there are some large values that are skewing the scale on the x-axis. Let's plot the pIC50, which is the negative log of the IC50. To do this, we'll first create a column containing the pIC50.
###Code
row_df["pIC50"] = -np.log10(row_df.standard_value * 1e-9)
row_df.head()
###Output
_____no_output_____
###Markdown
Let's make another plot, this time we'll plot the pIC50 distribution.
###Code
ax = sns.displot(row_df.pIC50,kind="kde")
###Output
_____no_output_____
###Markdown
Checking For Null ValuesCheck the dataframe to see if we have any null values.
###Code
row_df.dropna().shape
row_df.shape
###Output
_____no_output_____
###Markdown
The shapes of the dataframe and the dataframe without null values are the same, so we're good. Sorting the DataSort the data by pIC50. Note that the values with pIC50 approximately equal to zero (the first few rows) are almost certainly data input errors: these compounds are reported to have IC50s of 10^9 nM, which is 1 M, and I seriously doubt that the compounds would even be soluble at that concentration.
###Code
row_df.sort_values("pIC50",ascending=True).head()
###Output
_____no_output_____
###Markdown
Selecting High Confidence DataLet's look at the distribution of confidence scores associated with our original dataset.
###Code
df.confidence_score.value_counts()
###Output
_____no_output_____
###Markdown
We will create a new dataframe with only the molecules have a confidence score of 9.
###Code
score_9 = df.query("confidence_score == 9")
score_9.shape
###Output
_____no_output_____
###Markdown
Let's try to add a column to the new dataframe that we created. Note that this raises a SettingWithCopyWarning because the new dataframe is just a slice (a view) of the original dataframe, not an independent copy.
###Code
score_9["extra"] = 3
###Output
/var/folders/jh/f_7r7rqn3yvgbxg68_d95_p80000gq/T/ipykernel_61595/2616250860.py:1: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
score_9["extra"] = 3
###Markdown
What we really want to do is create a new dataframe, we can do this with the "copy" method.
###Code
score_9 = df.query("confidence_score == 9").copy()
###Output
_____no_output_____
###Markdown
Now adding a new column works.
###Code
score_9['extra'] = 3
score_9.head()
###Output
_____no_output_____
###Markdown
Let's say that we want to create a new column with more descriptive names for the data quality. We can do this using the "map" function.
###Code
level_map = {8: 'fair', 9: 'good'}
df['confidence_level'] = df.confidence_score.map(level_map)
df.head()
###Output
_____no_output_____
###Markdown
We can make a bar plot of the data quality with the descriptions on the x-axis.
###Code
ax = df.confidence_level.value_counts().plot(kind="bar")
ax.tick_params(axis='x', rotation=0)
###Output
_____no_output_____
###Markdown
We can also make a boxplot to compare the IC50 distributions for the good quality data and the fair quality data.
###Code
ax = sns.boxplot(data=df,x="confidence_level",y="standard_value")
ax.set(yscale="log",xlabel="Confidence Level",ylabel="IC50 (nM)")
###Output
_____no_output_____
|
notebooks/demo_yields.ipynb
|
###Markdown
Example for yield data
###Code
import pandas as pd
import geopandas as gpd
import matplotlib.pyplot as plt

#spatial field yield data from a combine harvester
yields = pd.read_csv('../data/cropdata/Bavaria/yields/yields2018.csv', sep=",",encoding = "ISO-8859-1", engine='python')
yields = yields[['Name','Latitude', 'Longitude', 'Elevation(m)','Ertr.masse (Nass)(tonne/ha)','Ertr.masse (Tr.)(tonne/ha)','Ertr.vol (Tr.)(L/ha)', 'ErtragNass', 'ErtragTr', 'Feuchtigkeit(%)', 'Jahr','TAG' ]]
# linear interpolated on a weekly basis for winter wheat
training = pd.read_excel("../data/cropdata/Bavaria/yields/result_split_S2A_linear_W_WW_2018.xlsx")
#not interpolated daily data/ weather is daily
daily_training = pd.read_excel("../data/cropdata/Bavaria/yields/satellite_data_orginal.xlsx")
# summary with nitrogen, yield and polygon ..."field-level yield in dt/ha !"
summary = pd.read_excel("../data/cropdata/Bavaria/yields/fields_summary.xlsx")
###Output
_____no_output_____
###Markdown
1D timeseries
###Code
# plot one field and the corresponding 1D timeseries
# The dataset includes daily water needs, raw bands, indices, weather etc.
fig, (ax1, ax2) = plt.subplots(1, 2,figsize=(12, 5))
daily_training[daily_training.Name == 'Baumacker'][['ETC_NDWI']].plot(ax=ax2)
field = summary[summary.Name == 'Baumacker']
field['Polygon'] = gpd.GeoSeries.from_wkt(field['Polygon'])
gdf = gpd.GeoDataFrame(field, geometry='Polygon')
gdf.plot(ax=ax1)
###Output
/tmp/ipykernel_1550/2091479612.py:5: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
field['Polygon'] = gpd.GeoSeries.from_wkt(field['Polygon'])
###Markdown
Pixel-based yield and satellite data
###Code
#plot the corresponding combine harvester data for the same field
from shapely.geometry import Point
f, (ax1,ax2) = plt.subplots(1, 2,figsize=(10, 5))
size=40
field = yields[yields.Name == 'Baumacker']
geometry = [Point(xy) for xy in zip(field.Longitude, field.Latitude)]
crs = "EPSG:4326"  # same CRS as before; the {'init': 'epsg:4326'} dict style is deprecated
gdf = gpd.GeoDataFrame(field, crs=crs, geometry=geometry)
minx, miny, maxx, maxy = gdf.total_bounds
ax2.axes.get_xaxis().set_visible(False)
ax2.axes.get_yaxis().set_visible(False)
ax2.scatter(y=field.Latitude, x=field.Longitude, alpha=1,cmap=plt.get_cmap("jet_r"), c=field['Ertr.masse (Nass)(tonne/ha)'],s=2.2)
gdf.plot(ax=ax1)
f.tight_layout()
plt.show()
# and now lets explore the corresponding sentinel-2 timeseries (L2A) for the combine harvester data
!pip install rasterio
import rasterio as rio
from tqdm.auto import tqdm
tqdm.pandas()
# paths to three example Sentinel-2 response images for the field 'Baumacker'
fp1 = '../data/cropdata/Bavaria/yields/sat_images_10m/Baumacker/01a0c00ccb65ec1618c82ec40cd78ce1/response.tiff'
fp2 = '../data/cropdata/Bavaria/yields/sat_images_10m/Baumacker/28f87c250b090e2436505b0db2931e90/response.tiff'
fp3 = '../data/cropdata/Bavaria/yields/sat_images_10m/Baumacker/f99a4ff29ac6917833a1b427344d00a6/response.tiff'
raster1 = rio.open(fp1)
raster2 = rio.open(fp2)
raster3 = rio.open(fp3)
from rasterio.plot import show
# data description:
# ["CLM", "dataMask", "B01", "B02", "B03", "B04","B05", "B06","B07", "B08","B8A", "B09", "B11","B12"]
fig, (axr, axg, axb) = plt.subplots(1,3, figsize=(21,7))
show((raster1, 6), ax=axr, cmap='Reds', title='red channel')
show((raster1, 5), ax=axg, cmap='Greens', title='green channel')
show((raster1, 4), ax=axb, cmap='Blues', title='blue channel')
def plot(image, factor=1, _min=0, _max=1):
"""
visualize satellite images
"""
fig = plt.subplots(nrows=1, ncols=1, figsize=(15, 7))
if np.issubdtype(image.dtype, np.floating):
plt.imshow(np.minimum(image * factor, 1), vmin=_min, vmax=_max)
else:
plt.imshow(image, vmin=_min, vmax=_max)
with rio.open(fp1, 'r') as ds:
arr3 = ds.read()
# every image has 19 bands with 66 x 31 pixels
# time_interval:'2018-03-01' - '2018-07-30'
# Level L2A
# Winter Wheat: 'Baumacker', 'D8', 'Dichtlacker', 'Heindlacker', 'Heng', 'Holzacker', 'Neulandsiedlung', 'Itzling2', 'Itzling5',
# 'Itzling6', 'Schluetterfabrik','Thalhausen138', 'Thalhausen141', 'Voettingerfeld'
#
# Image bands:
# ["CLM", "dataMask", "B01", "B02", "B03", "B04","B05", "B06","B07", "B08","B8A", "B09", "B11","B12", sunAzimuthAngles, sunZenithAngles, viewAzimuthMean, viewZenithMean, NDWI]
# CLM stands for clouds 1 / no clouds 0
# there are also meta information and an index
arr3.shape
import numpy as np
arr3 = np.moveaxis(arr3, 0, -1)
arr3.shape
plot(arr3[:, :, [6, 5, 4]],4.5)
###Output
_____no_output_____
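###Markdown
As a hedged aside (not part of the original workflow): given the band order listed in the comments above, an NDVI image can be derived from the red (B04) and near-infrared (B08) channels. The channel indices below assume that ordering holds for this export.
###Code
# Sketch: NDVI from the band order listed above
# (assumes B04 sits at channel index 5 and B08 at index 9 after the moveaxis call)
red = arr3[:, :, 5].astype(float)
nir = arr3[:, :, 9].astype(float)
ndvi = (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero
plt.figure(figsize=(8, 4))
plt.imshow(ndvi, cmap='RdYlGn', vmin=-1, vmax=1)
plt.colorbar(label='NDVI')
plt.title('NDVI (sketch)')
plt.show()
###Output
_____no_output_____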
|
DataSimilarity_Tika.ipynb
|
###Markdown
Analysis of Media and Semantic Forensics in Scientific Literature Calculating Data Similarity using Tika
###Code
#Install all the requirements
!pip install -r requirements.txt
import random
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
random.seed(1996)
#preprocessing
raw_bik=pd.read_csv('New_Bik.csv')
#columns change to string and fill missing values
string_columns = ['Authors', 'Title', 'Citation', 'DOI', 'FINDINGS', 'Correction Date',
                  'URL', 'Home Site', 'First Author Affiliation', 'First Author Degree',
                  'First Author Degree Area', 'university_name', 'country_x', 'world_rank_y',
                  'country_y', 'num_students', 'international_students', 'female_male_ratio',
                  'city_ascii', 'state_id', 'state_name', 'county_name', 'source', 'military',
                  'incorporated', 'timezone', 'zips', 'county', 'labor_force', 'employed',
                  'unemployed',
                  #List columns, also converted to string
                  'Lab Size', 'Pub Rate', 'Other Journals']
for col in string_columns:
    raw_bik[col] = raw_bik[col].astype('string')
    raw_bik[col] = raw_bik[col].fillna('')
raw_bik
#Cosine Similarity
from cosine_similarity import *
computeScores2('New_Bik.csv', raw_bik,'cosine_similarity_bik.csv')
#Jaro Winkler Similarity
from jaro_winkler import *
computeScores2_JW('New_Bik.csv', 'jaro_winkler_bik.csv')
#Bell Curve fitting/Gaussian overlap
from gaussian_overlap import *
computeScores2_GO('New_Bik.csv', raw_bik,'gaussian_overlap_bik.csv')
#Levenshtein Similarity
from levenshtein import *
computeScores2_LS('New_Bik.csv', 'levenshtein_bik.csv')
###Output
_____no_output_____
###Markdown
Clustering and Visualization
###Code
#Visualization
#Heatmap for the combination datasets
cs=pd.read_csv('cosine_similarity_bik.csv')
cs=cs.pivot(index="x-coordinate", columns='y-coordinate', values='Similarity_score')
jw=pd.read_csv('jaro_winkler_bik.csv')
jw=jw.pivot(index="x-coordinate", columns='y-coordinate', values='Similarity_score')
go=pd.read_csv('gaussian_overlap_bik.csv')
go=go.pivot(index="x-coordinate", columns='y-coordinate', values='Similarity_score')
ls=pd.read_csv('levenshtein_bik.csv')
ls=ls.pivot(index="x-coordinate", columns='y-coordinate', values='Similarity_score')
#Cosine Similarity
plt.figure(figsize=(30,30))
sns.heatmap(cs)
plt.show()
#Jaro Winkler Similarity
plt.figure(figsize=(30,30))
sns.heatmap(jw)
plt.show()
#Bell Curve fitting/Gaussian overlap
plt.figure(figsize=(30,30))
sns.heatmap(go)
plt.show()
#Levenshtein Similarity
plt.figure(figsize=(30,30))
sns.heatmap(ls)
plt.show()
#Data types of the file #Make this list for your own file and preprocess accordingly
#config_bik = ['str','str','str','str','int','float','float','float','float','float','str','int',
# 'str','float','float','float','int','str','str', 'str','str','str','str', 'float','str','str','str',
# 'float','str','float','float','float', 'float','float','float','float','float','float','float','float',
# 'str','str','float', 'float', 'float', 'float','float','float','str','float','str','str', 'float',
# 'str', 'str', 'str','float','str','float', 'float','float','float','str', 'str','str','str','float',
# 'str', 'str','str','str', 'str','float']
###Output
_____no_output_____
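###Markdown
The section heading above mentions clustering; one way to add it (a sketch, not part of the original analysis) is seaborn's `clustermap`, which hierarchically clusters the rows and columns of a similarity matrix.
###Code
# Sketch: hierarchical clustering of the cosine-similarity matrix computed above
sns.clustermap(cs.fillna(0), figsize=(20, 20), cmap="viridis")
plt.show()
###Output
_____no_output_____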
|
CrowdsourcingML_Draft_Notebook.ipynb
|
###Markdown
CrowdsourcingML on Amazon Data
###Code
pip install scikit-multilearn
###Output
Requirement already satisfied: scikit-multilearn in /usr/local/lib/python3.7/dist-packages (0.2.0)
###Markdown
Import Necessary Libraries
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
%matplotlib inline
from sklearn import model_selection
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
# Machine Learning Algorithms
from skmultilearn.problem_transform import ClassifierChain
from sklearn.naive_bayes import GaussianNB
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import BernoulliNB
# Metric Libraries
from sklearn.metrics import accuracy_score
from sklearn import metrics
import warnings
warnings.filterwarnings("ignore")
###Output
_____no_output_____
###Markdown
Loading the Dataset
###Code
df = pd.read_csv("amazon.csv")
df.head()
###Output
_____no_output_____
###Markdown
**Rename the Columns**
###Code
df.columns = ['worker_id', 'task_id', 'worker_reviews', 'expert_reviews', 'time_taken']
df.head(2)
###Output
_____no_output_____
###Markdown
**Check Data Types**
###Code
df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 9999 entries, 0 to 9998
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 worker_id 9999 non-null object
1 task_id 9999 non-null int64
2 worker_reviews 9999 non-null int64
3 expert_reviews 9999 non-null int64
4 time_taken 9999 non-null int64
dtypes: int64(4), object(1)
memory usage: 390.7+ KB
###Markdown
**Get a Description of float and integer variables**
###Code
df.describe().T
###Output
_____no_output_____
###Markdown
**Check the size of the dataset**
###Code
df.shape
###Output
_____no_output_____
###Markdown
**Print dataset column names**
###Code
columns = df.columns
columns
###Output
_____no_output_____
###Markdown
**Get the count of unique values in the columns**
###Code
for col in columns:
print(f'Length of Unique values in {col} is: {len(df[col].unique())}')
###Output
Length of Unique values in worker_id is: 143
Length of Unique values in task_id is: 500
Length of Unique values in worker_reviews is: 2
Length of Unique values in expert_reviews is: 2
Length of Unique values in time_taken is: 527
###Markdown
Data Cleaning **Check for null values**
###Code
df.isna().sum()
###Output
_____no_output_____
###Markdown
**Check for Duplicated Values**
###Code
df.duplicated().sum()
###Output
_____no_output_____
###Markdown
**Drop the worker_id column**
###Code
df.drop(['worker_id'], axis=1, inplace=True)
df.shape
# OUTLIERS : Checking for outliers by plotting a boxplot of the numeric columns.
#
plt.style.use('bmh')
out_taken = df[['task_id', 'worker_reviews', 'expert_reviews', 'time_taken']]
# Plotting the boxplot and collecting the flier (outlier) points per column
#
_t, taken = out_taken.boxplot(return_type='both', widths = 0.2)
outliers = [flier.get_ydata() for flier in taken["fliers"]]
out_liers = [i.tolist() for i in outliers]
print("Number of columns checked for outliers:\n", len(outliers))
print("Outlier values per column:\n", out_liers)
# Function for counting the number of outliers in our data columns and checking the percentage for each
# ----
#
def detect_outlier(data):
outliers=[]
threshold=3
mean_1 = np.mean(data)
std_1 =np.std(data)
for y in data:
z_score= (y - mean_1)/std_1
if np.abs(z_score) > threshold:
outliers.append(y)
return outliers
# Counting the number of outliers in each data column and checking the percentage for each column using the z-score
#
#
for col in df:
rows, columns = df.shape
percent_coefficient = float(100 / rows)
outliers = detect_outlier(df[col])
outliers_count = len(outliers)
outliers_percentage = outliers_count * percent_coefficient
print(f"{col} has {outliers_count} outliers in total, which is {outliers_percentage:.2}% of data")
# Getting outliers from our dataframe using z-scores
#
from scipy import stats
z = np.abs(stats.zscore(df))
print(z)
# Dropping and Confirming that our outliers have been dropped from the dataset.
#
df_o = df[(z < 3).all(axis=1)]
print(f"Previous dataframe size : {df.shape[0]}")
print(f"New dataframe size: {df_o.shape[0]}")
df = df_o.copy()
df.shape
###Output
_____no_output_____
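###Markdown
A common alternative to the z-score filter above is an IQR-based rule; the sketch below is illustrative only and is not applied to the data used in the rest of the notebook.
###Code
# Sketch: IQR-based outlier filter (alternative to the z-score rule above; not applied here)
q1 = df.quantile(0.25)
q3 = df.quantile(0.75)
iqr = q3 - q1
iqr_mask = ~((df < (q1 - 1.5 * iqr)) | (df > (q3 + 1.5 * iqr))).any(axis=1)
print(f"Rows kept by the IQR rule: {iqr_mask.sum()} of {len(df)}")
###Output
_____no_output_____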
###Markdown
Exploratory Data Analysis Scatter Plots Worker Reviews
###Code
sns.lmplot(data=df, x="time_taken", y="task_id", col="worker_reviews", hue="worker_reviews")
###Output
_____no_output_____
###Markdown
Expert Reviews
###Code
sns.lmplot(data=df, x="time_taken", y="task_id", col="expert_reviews", hue="expert_reviews")
x = df[(df['task_id']>5200) & (df['task_id']<5700)]
x['task_id'].unique()
x.head()
###Output
_____no_output_____
###Markdown
Joint Plots Worker Reviews
###Code
sns.jointplot(data=df, x="time_taken", y="task_id", hue="worker_reviews")
plt.title('Time Taken and Task ID Joint Plot for Worker Reviews')
###Output
_____no_output_____
###Markdown
Expert Reviews
###Code
sns.jointplot(data=df, x="time_taken", y="task_id", hue="expert_reviews")
plt.title('Time Taken and Task ID Joint Plot for Expert Reviews')
###Output
_____no_output_____
###Markdown
Implementing the solution **Split data into x(features) and y(labels)**
###Code
x = df[['task_id', 'time_taken']]
y = df[['worker_reviews', 'expert_reviews']]
y.head(2)
###Output
_____no_output_____
###Markdown
**Split data into train(80%)and test(20%)**
###Code
X_train, X_test, y_train, y_test = train_test_split(x, y, test_size=0.20, random_state=12)
###Output
_____no_output_____
###Markdown
Classifier Chains Classifier chains is a machine learning method for problem transformation in multi-label classification. It combines the computational efficiency of the Binary Relevance method while still being able to take the label dependencies into account for classification.>>Each model makes a prediction in the order specified by the chain using all of the available features provided to the model plus the predictions of models that are earlier in the chain.>>When predicting, the true labels will not be available. Instead the predictions of each model are passed on to the subsequent models in the chain to be used as features.>>Clearly the order of the chain is important. The first model in the chain has no information about the other labels while the last model in the chain has features indicating the presence of all of the other labels. In general one does not know the optimal ordering of the models in the chain so typically many randomly ordered chains are fit and their predictions are averaged together. GaussianNB Classifier
###Code
# using classifier chains
# initialize classifier chains multi-label classifier
# with a gaussian naive bayes base classifier
gaussian = GaussianNB()
gaussian_clf = ClassifierChain(gaussian)
sc = StandardScaler()
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# train
gaussian_clf.fit(X_train, y_train)
# predict
gaussian_preds = gaussian_clf.predict(X_test)
print(f'Gaussian accuracy score: {accuracy_score(y_test,gaussian_preds)*100}%')
gaussian_preds = pd.DataFrame.sparse.from_spmatrix(gaussian_preds)
gaussian_preds.columns=['worker', 'expert']
gaussian_preds.head()
gaussian_pred_w = gaussian_preds['worker']
y_test_gw = y_test['worker_reviews']
gaussian_pred_e = gaussian_preds['expert']
y_test_ge = y_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_test_gw, gaussian_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_test_gw, gaussian_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_test_ge, gaussian_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_test_ge, gaussian_pred_e)*100}")
###Output
Worker Precision:, 88.9917695473251
Worker Recall:, 88.26530612244898
Expert Precision:, 100.0
Expert Recall:, 99.08256880733946
###Markdown
**Confusion Matrix for Workers**
###Code
cm = pd.crosstab(y_test_gw, gaussian_pred_w, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Workers.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 71 83
1 100 108
###Markdown
**Confusion Matrix for Experts**
###Code
cm = pd.crosstab(y_test_ge, gaussian_pred_e, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Experts.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 77 82
1 94 109
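###Markdown
The classifier-chain description above notes that many randomly ordered chains are usually fit and their predictions averaged. The sketch below illustrates that idea with scikit-learn's `ClassifierChain` (a different implementation from the skmultilearn one used elsewhere in this notebook), using logistic regression as the base estimator; it is an aside, not part of the comparison that follows.
###Code
# Sketch: ensemble of randomly ordered chains, averaging per-label probabilities
from sklearn.multioutput import ClassifierChain as SkClassifierChain
chains = [SkClassifierChain(LogisticRegression(), order='random', random_state=i)
          for i in range(10)]
for chain in chains:
    chain.fit(X_train, y_train)
# Average P(label=1) across chains, then threshold at 0.5
y_pred_ensemble = np.mean([chain.predict_proba(X_test) for chain in chains], axis=0) >= 0.5
print(f'Chain-ensemble accuracy: {accuracy_score(y_test, y_pred_ensemble)*100:.2f}%')
###Output
_____no_output_____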
###Markdown
Logistic Regression
###Code
#Logistic Regression
log_reg = LogisticRegression()
lr_clf = ClassifierChain(log_reg)
# Note: X_train and X_test were already standardized above; re-fitting the scaler on
# standardized data is effectively a no-op, so this step could be skipped.
X_train = sc.fit_transform(X_train)
X_test = sc.transform(X_test)
# train
lr_clf.fit(X_train, y_train)
# predict
log_reg_preds = lr_clf.predict(X_test)
from sklearn import metrics
print(f'Accuracy: {accuracy_score(y_test,log_reg_preds)*100}%')
log_reg_preds = pd.DataFrame.sparse.from_spmatrix(log_reg_preds)
log_reg_preds.columns=['worker_reviews', 'expert_reviews']
log_reg_preds.head(2)
log_reg_pred_w = log_reg_preds['worker_reviews']
y_test_lw = y_test['worker_reviews']
log_reg_pred_e = log_reg_preds['expert_reviews']
y_test_le = y_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_test_lw, log_reg_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_test_lw, log_reg_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_test_le, log_reg_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_test_le, log_reg_pred_e)*100}")
###Output
Worker Precision:, 88.90030832476874
Worker Recall:, 88.26530612244898
Expert Precision:, 100.0
Expert Recall:, 99.69418960244649
###Markdown
**Confusion Matrix for Workers**
###Code
cm = pd.crosstab(y_test_lw, log_reg_pred_w, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Workers.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 71 83
1 99 109
###Markdown
**Confusion Matrix for Experts**
###Code
cm = pd.crosstab(y_test_le, log_reg_pred_e, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Experts.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 76 83
1 93 110
###Markdown
KMeans Classifier
###Code
#KMeans
k_means = KMeans(n_clusters=2, random_state=2, n_init=2)
kmeans_clf = ClassifierChain(k_means)
# train
kmeans_clf.fit(X_train, y_train)
# predict
kmeans_preds = kmeans_clf.predict(X_test)
print(f'KMeans accuracy score: {accuracy_score(y_test,kmeans_preds)*100}%')
kmeans_preds = pd.DataFrame.sparse.from_spmatrix(kmeans_preds)
kmeans_preds.columns=['worker_reviews', 'expert_reviews']
kmeans_preds.head()
kmeans_pred_w = kmeans_preds['worker_reviews']
y_test_kmw = y_test['worker_reviews']
kmeans_pred_e = kmeans_preds['expert_reviews']
y_test_kme = y_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_test_kmw, kmeans_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_test_kmw, kmeans_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_test_kme, kmeans_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_test_kme, kmeans_pred_e)*100}")
cm = pd.crosstab(y_test_kmw, kmeans_pred_w, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Worker.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
cm = pd.crosstab(y_test_kme, kmeans_pred_e, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Experts.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 89 70
1 107 96
###Markdown
Naive Bayes Classifier
###Code
#Naive Bayes(Bernouli)
bernNB = BernoulliNB()
bernNB_clf = ClassifierChain(bernNB)
# train
bernNB_clf.fit(X_train, y_train)
# predict
bernNB_preds = bernNB_clf.predict(X_test)
print(f'Naive Bayes accuracy score: {accuracy_score(y_test,bernNB_preds)*100}%')
bernNB_preds = pd.DataFrame.sparse.from_spmatrix(bernNB_preds)
bernNB_preds.columns=['worker_reviews', 'expert_reviews']
bernNB_preds.head()
bernNB_preds_w = bernNB_preds['worker_reviews']
y_test_nbw = y_test['worker_reviews']
bernNB_preds_e = bernNB_preds['expert_reviews']
y_test_nbe = y_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_test_nbw, bernNB_preds_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_test_nbw, bernNB_preds_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_test_nbe, bernNB_preds_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_test_nbe, bernNB_preds_e)*100}")
cm = pd.crosstab(y_test_nbw, bernNB_preds_w, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Worker.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
cm = pd.crosstab(y_test_nbe, bernNB_preds_e, rownames=['Actual'], colnames=['Predicted'])
print(cm)
fig, (ax1) = plt.subplots(ncols=1, figsize=(5,5))
sns.heatmap(cm,
xticklabels=['negative', 'positive'],
yticklabels=['negative', 'positive'],
annot=True,ax=ax1,
linewidths=.2,linecolor="Darkblue", cmap="Blues")
plt.title('Confusion Matrix for Experts.', fontsize=14)
plt.show()
# 1= default
# 0 = No default
###Output
Predicted 0 1.0
Actual
0 75 84
1 93 110
###Markdown
5-fold cross validation
###Code
print('5-fold cross validation: \n')
labels = ['Gaussian', 'Logistic Regression', 'K Means', 'Naive Bayes']
for clf, label in zip([gaussian_clf, lr_clf, kmeans_clf, bernNB_clf], labels):
scores = model_selection.cross_val_score(clf, x, y, cv=5, scoring='accuracy')
print('Accuracy: %0.2f (+/- %0.2f) [%s]' %(scores.mean()*100, scores.std(), label))
###Output
5-fold cross validation:
Accuracy: 87.34 (+/- 0.02) [Gaussian]
Accuracy: 88.14 (+/- 0.02) [Logistic Regression]
Accuracy: 70.97 (+/- 0.35) [K Means]
Accuracy: 24.69 (+/- 0.14) [Naive Bayes]
###Markdown
Now we can proceed to identify bias using our algorithms. Bias detection and mitigation Install aif360
###Code
pip install aif360[all]
###Output
Requirement already satisfied: aif360[all] in /usr/local/lib/python3.7/dist-packages (0.4.0)
###Markdown
a) Identifying Bias in the Actual Data
###Code
positive_df = df[df['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = df[df['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 4939
Num unprivileged: 4920
Unprivileged ratio: 0.11483739837398374
Privileged ratio: 0.9024093946142944
Disparate Impact: 0.12725643046199364
###Markdown
The industry standard is the four-fifths rule: if the unprivileged group receives the positive outcome at less than 80% of the rate of the privileged group, this is a disparate impact violation. However, you may decide to raise this threshold for your business. A disparate impact ratio of 1 indicates complete equality. In this scenario, the ratio of roughly 0.13 is well below the 0.8 threshold, so the actual data is biased against the unprivileged group. b) Identifying Bias in the Predicted Data Before Mitigation Disparate Impact in GaussianNB>**(Before Bias Mitigation)**
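As a minimal sketch (not part of the original notebook), the calculation that the cells below repeat by hand can be wrapped in a small helper. The column names mirror the ones used in this notebook, and the 0.8 threshold is the four-fifths rule described above.

```python
# Hypothetical helper, assuming a pandas DataFrame with binary group and outcome columns.
def disparate_impact(frame, group_col='worker_reviews', outcome_col='expert_reviews', threshold=0.8):
    """Return the disparate impact ratio and whether it violates the four-fifths rule."""
    privileged = frame[frame[group_col] == 1]
    unprivileged = frame[frame[group_col] == 0]
    privileged_ratio = (privileged[outcome_col] == 1).mean()      # positive-outcome rate, privileged group
    unprivileged_ratio = (unprivileged[outcome_col] == 1).mean()  # positive-outcome rate, unprivileged group
    ratio = unprivileged_ratio / privileged_ratio
    return ratio, ratio < threshold

# Usage: disparate_impact(df) returns (~0.127, True) for the actual data above,
# matching the hand-computed value and flagging a four-fifths violation.
```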
###Code
positive_df = gaussian_preds[gaussian_preds['worker'] == 1]
num_of_privileged = len(positive_df)
negative_df = gaussian_preds[gaussian_preds['worker'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 972
Num unprivileged: 1000
Unprivileged ratio: 0.0
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.0
###Markdown
Disparate Impact in Logistic Regression>**(Before Bias Mitigation)**
###Code
positive_df = log_reg_preds[log_reg_preds['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = log_reg_preds[log_reg_preds['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 973
Num unprivileged: 999
Unprivileged ratio: 0.005005005005005005
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.005005005005005005
###Markdown
Disparate Impact in Kmeans>**(Before Bias Mitigation)**
###Code
positive_df = kmeans_preds[kmeans_preds['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = kmeans_preds[kmeans_preds['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 173
Num unprivileged: 1799
Unprivileged ratio: 0.4263479710950528
Privileged ratio: 0.5549132947976878
___________________________________________________
Disparate Impact: 0.7683145729108765
###Markdown
Disparate Impact in Naive Bayes>**(Before Bias Mitigation)**
###Code
positive_df = bernNB_preds[bernNB_preds['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = bernNB_preds[bernNB_preds['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 981
Num unprivileged: 991
Unprivileged ratio: 0.0
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.0
###Markdown
Mitigating Bias with AI Fairness 360
###Code
import aif360
from aif360.algorithms.preprocessing import DisparateImpactRemover
binaryLabelDataset = aif360.datasets.BinaryLabelDataset(
favorable_label=1,
unfavorable_label=0,
df=df,
label_names=['expert_reviews'],
protected_attribute_names=['worker_reviews'])
###Output
_____no_output_____
###Markdown
Transforming the Data
###Code
di = DisparateImpactRemover(repair_level = 1.0)
dataset_transf_train = di.fit_transform(binaryLabelDataset)
transformed = dataset_transf_train.convert_to_dataframe()[0]
transformed.describe().T
x_trans = transformed[['task_id', 'time_taken']]
y = transformed[['worker_reviews', 'expert_reviews']]
scaler = StandardScaler()
x_trans = scaler.fit_transform(x_trans)
x_trans_train,x_trans_test,y_trans_train,y_trans_test = train_test_split(x_trans, y, test_size=0.2, random_state = 0)
###Output
_____no_output_____
###Markdown
Models GaussianNB Classifier
###Code
gaussian_clf.fit(x_trans_train, y_trans_train)
y_trans_preds_g = gaussian_clf.predict(x_trans_test)
print(f'Gaussian accuracy score: {accuracy_score(y_trans_test, y_trans_preds_g)*100}%\n')
# Convert predictions from sparse matrix to dataframe.
y_trans_preds_g = pd.DataFrame.sparse.from_spmatrix(y_trans_preds_g)
y_trans_preds_g.columns=['worker_reviews', 'expert_reviews']
# Split the labels into two. (workers and experts)
gaussian_trans_pred_w = y_trans_preds_g['worker_reviews']
y_trans_test_gw = y_trans_test['worker_reviews']
gaussian_trans_pred_e = y_trans_preds_g['expert_reviews']
y_trans_test_ge = y_trans_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_trans_test_gw, gaussian_trans_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_trans_test_gw, gaussian_trans_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_trans_test_ge, gaussian_trans_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_trans_test_ge, gaussian_trans_pred_e)*100}")
###Output
Gaussian accuracy score: 68.86409736308316%
Worker Precision:, 65.44293695131684
Worker Recall:, 82.16432865731463
Expert Precision:, 65.44293695131684
Expert Recall:, 82.16432865731463
###Markdown
Logistic Regression
###Code
lr_clf.fit(x_trans_train, y_trans_train)
y_trans_preds_lr = lr_clf.predict(x_trans_test)
print(f'Logistic accuracy score: {accuracy_score(y_trans_test, y_trans_preds_lr)*100}%\n')
# Convert predictions from sparse matrix to dataframe.
y_trans_preds_lr = pd.DataFrame.sparse.from_spmatrix(y_trans_preds_lr)
y_trans_preds_lr.columns=['worker_reviews', 'expert_reviews']
# Split the labels into two. (workers and experts)
lr_trans_pred_w = y_trans_preds_lr['worker_reviews']
y_trans_test_lw = y_trans_test['worker_reviews']
lr_trans_pred_e = y_trans_preds_lr['expert_reviews']
y_trans_test_le = y_trans_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_trans_test_lw, lr_trans_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_trans_test_lw, lr_trans_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_trans_test_le, lr_trans_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_trans_test_le, lr_trans_pred_e)*100}")
###Output
Logistic accuracy score: 69.26977687626776%
Worker Precision:, 67.85063752276868
Worker Recall:, 74.64929859719439
Expert Precision:, 67.85063752276868
Expert Recall:, 74.64929859719439
###Markdown
KMeans Classifier
###Code
# train
kmeans_clf.fit(x_trans_train, y_trans_train)
# predict
kmeans_trans_preds = kmeans_clf.predict(x_trans_test)
print(f'KMeans accuracy score: {accuracy_score(y_trans_test,kmeans_trans_preds)*100}%\n')
# Convert predictions from sparse matrix to dataframe.
kmeans_trans_preds = pd.DataFrame.sparse.from_spmatrix(kmeans_trans_preds)
kmeans_trans_preds.columns=['worker_reviews', 'expert_reviews']
# Split the labels into two. (workers and experts)
kmeans_trans_pred_w = kmeans_trans_preds['worker_reviews']
y_trans_test_kw = y_trans_test['worker_reviews']
kmeans_trans_pred_e = kmeans_trans_preds['expert_reviews']
y_trans_test_ke = y_trans_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_trans_test_kw, kmeans_trans_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_trans_test_kw, kmeans_trans_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_trans_test_ke, kmeans_trans_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_trans_test_ke, kmeans_trans_pred_e)*100}")
###Output
KMeans accuracy score: 18.255578093306287%
Worker Precision:, 43.87755102040816
Worker Recall:, 8.617234468937877
Expert Precision:, 17.72151898734177
Expert Recall:, 9.819639278557114
###Markdown
Naive Bayes Classifier
###Code
# train
bernNB_clf.fit(x_trans_train, y_trans_train)
# predict
bernNB_trans_preds = bernNB_clf.predict(x_trans_test)
print(f'BernoulliNB accuracy score: {accuracy_score(y_trans_test,bernNB_trans_preds)*100}%\n')
# Convert predictions from sparse matrix to dataframe.
bernNB_trans_preds = pd.DataFrame.sparse.from_spmatrix(bernNB_trans_preds)
bernNB_trans_preds.columns=['worker_reviews', 'expert_reviews']
# Split the labels into two. (workers and experts)
bernNB_trans_pred_w = bernNB_trans_preds['worker_reviews']
y_trans_test_bern_w = y_trans_test['worker_reviews']
bernNB_trans_pred_e = bernNB_trans_preds['expert_reviews']
y_trans_test_bern_e = y_trans_test['expert_reviews']
print(f"Worker Precision:, {metrics.precision_score(y_trans_test_bern_w, bernNB_trans_pred_w)*100}")
print(f"Worker Recall:, {metrics.recall_score(y_trans_test_bern_w, bernNB_trans_pred_w)*100}\n")
print(f"Expert Precision:, {metrics.precision_score(y_trans_test_bern_e, bernNB_trans_pred_e)*100}")
print(f"Expert Recall:, {metrics.recall_score(y_trans_test_bern_e, bernNB_trans_pred_e)*100}")
###Output
BernoulliNB accuracy score: 69.32048681541582%
Worker Precision:, 67.40478299379983
Worker Recall:, 76.25250501002004
Expert Precision:, 67.40478299379983
Expert Recall:, 76.25250501002004
###Markdown
c) Identifying Bias in the Transformed Data
###Code
positive_df = transformed[transformed['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = transformed[transformed['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 4939
Num unprivileged: 4920
Unprivileged ratio: 0.11483739837398374
Privileged ratio: 0.9024093946142944
___________________________________________________
Disparate Impact: 0.12725643046199364
###Markdown
d) Identifying Bias in the Data After Using Machine Learning Models. Disparate Impact in GaussianNB> **After Bias Mitigation**
###Code
positive_df = y_trans_preds_g[y_trans_preds_g['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = y_trans_preds_g[y_trans_preds_g['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact_a_Gaussian = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact_a_Gaussian}')
###Output
Num privileged: 1253
Num unprivileged: 719
Unprivileged ratio: 0.0
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.0
###Markdown
Disparate Impact in Logistic Regression> **After Bias Mitigation**
###Code
positive_df = y_trans_preds_lr[y_trans_preds_lr['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = y_trans_preds_lr[y_trans_preds_lr['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 1098
Num unprivileged: 874
Unprivileged ratio: 0.0
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.0
###Markdown
Disparate Impact in Kmeans> **After Bias Mitigation**
###Code
positive_df = kmeans_trans_preds[kmeans_trans_preds['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = kmeans_trans_preds[kmeans_trans_preds['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 196
Num unprivileged: 1776
Unprivileged ratio: 0.27871621621621623
Privileged ratio: 0.29591836734693877
___________________________________________________
Disparate Impact: 0.9418685927306617
###Markdown
Disparate Impact in Naive Bayes> **After Bias Mitigation**
###Code
positive_df = bernNB_trans_preds[bernNB_trans_preds['worker_reviews'] == 1]
num_of_privileged = len(positive_df)
negative_df = bernNB_trans_preds[bernNB_trans_preds['worker_reviews'] == 0]
num_of_unprivileged = len(negative_df)
print(f'Num privileged: {num_of_privileged}')
print(f'Num unprivileged: {num_of_unprivileged}\n')
unprivileged_outcomes = negative_df[negative_df['expert_reviews'] == 1].shape[0]
unprivileged_ratio = unprivileged_outcomes/num_of_unprivileged
print(f'Unprivileged ratio: {unprivileged_ratio}')
privileged_outcomes = positive_df[positive_df['expert_reviews'] == 1].shape[0]
privileged_ratio = privileged_outcomes/num_of_privileged
print(f'Privileged ratio: {privileged_ratio}\n')
print('___________________________________________________')
# Calculating disparate impact
disparate_impact = unprivileged_ratio / privileged_ratio
print(f'Disparate Impact: {disparate_impact}')
###Output
Num privileged: 1129
Num unprivileged: 843
Unprivileged ratio: 0.0
Privileged ratio: 1.0
___________________________________________________
Disparate Impact: 0.0
|
titanic-machine-learning-from-disaster/20210503-submission-v1.ipynb
|
###Markdown
Data definitions
- survival: Survival (0 = No, 1 = Yes)
- pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd)
- sex: Sex
- age: Age in years
- sibsp: # of siblings / spouses aboard the Titanic
- parch: # of parents / children aboard the Titanic
- ticket: Ticket number
- fare: Passenger fare
- cabin: Cabin number
- embarked: Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)

Sample Notebooks
- https://www.kaggle.com/sinakhorami/titanic-best-working-classifier
- https://www.kaggle.com/startupsci/titanic-data-science-solutions
###Code
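# Setup (not in the original notebook): the cells below assume pandas, numpy and matplotlib
# are imported and that the Kaggle Titanic CSVs are already loaded as train_df / test_df.
# The file paths below are assumptions -- adjust them to wherever the data actually lives.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
train_df = pd.read_csv('./inputdata/train.csv')
test_df = pd.read_csv('./inputdata/test.csv')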
display(train_df.describe())
display(train_df.describe(include=['O']))
display(train_df.info())
display(test_df.describe())
display(test_df.describe(include=['O']))
display(test_df.info())
all_data = [train_df, test_df]
train_df[['Survived', 'Pclass']].groupby('Pclass').mean().sort_values(by='Pclass', ascending=False).style.bar(color=["blue"], axis=0, align='mid')
train_df[['Survived', 'Sex']].groupby('Sex').mean().sort_values(by='Sex', ascending=False)
def calc_family_size(x):
return x['SibSp'] + x['Parch'] + 1
train_df['FamilySize'] = train_df.apply(lambda x: calc_family_size(x), axis=1)
test_df['FamilySize'] = test_df.apply(lambda x: calc_family_size(x), axis=1)
train_df[['Survived', 'FamilySize']].groupby('FamilySize').mean().sort_values(by='FamilySize', ascending=False).style.bar(color=["blue"], axis=0, align='mid')
def calc_is_alone(x):
if x['FamilySize'] == 1:
return 1
return 0
train_df['IsAlone'] = train_df.apply(lambda x: calc_is_alone(x), axis=1)
test_df['IsAlone'] = test_df.apply(lambda x: calc_is_alone(x), axis=1)
train_df[['Survived', 'IsAlone']].groupby('IsAlone').mean().sort_values(by='IsAlone', ascending=False).style.bar(color=["blue"], axis=0, align='mid')
train_df['Embarked'] = train_df['Embarked'].fillna('S')
test_df['Embarked'] = test_df['Embarked'].fillna('S')
train_df[['Survived', 'Embarked']].groupby('Embarked').mean().sort_values(by='Embarked', ascending=False).style.bar(color=["blue"], axis=0, align='mid')
train_df['Fare'] = train_df['Fare'].fillna(train_df['Fare'].median())
test_df['Fare'] = test_df['Fare'].fillna(train_df['Fare'].median())
train_df['CategoricalFare'] = pd.qcut(train_df['Fare'], 4, labels=[1, 2, 3, 4])
test_df['CategoricalFare'] = pd.qcut(test_df['Fare'], 4, labels=[1, 2, 3, 4])
train_df[['Survived', 'CategoricalFare']].groupby('CategoricalFare').mean().style.bar(color=['blue'], axis=0, align='mid')
print(train_df['Age'].isnull().sum(), test_df['Age'].isnull().sum())
def fill_age(dataset):
age_avg = dataset['Age'].mean()
age_std = dataset['Age'].std()
age_null_count = dataset['Age'].isnull().sum()
age_null_random_list = np.random.randint(age_avg - age_std, age_avg + age_std, size=age_null_count)
dataset.loc[np.isnan(dataset['Age']), 'Age'] = age_null_random_list
dataset['Age'] = dataset['Age'].astype(int)
fill_age(train_df)
fill_age(test_df)
print(train_df['Age'].isnull().sum(), test_df['Age'].isnull().sum())
_, age_bins = pd.cut(train_df['Age'], 5, retbins=True)
print(age_bins)
train_df['CategoricalAge'] = pd.cut(train_df['Age'], 5, labels=[1, 2, 3, 4, 5])
test_df['CategoricalAge'] = pd.cut(test_df['Age'], 5, labels=[1, 2, 3, 4, 5])
train_df[['Survived', 'CategoricalAge']].groupby('CategoricalAge').mean().style.bar(color=['blue'], axis=0, align='mid')
import re
rare_title = ['Lady', 'Countess','Capt', 'Col', 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona']
def get_title(name):
title_search = re.search(' ([A-Za-z]+)\.', name)
if title_search:
t = title_search.group(1)
if t in rare_title:
return 'Rare'
        elif t in ['Mlle', 'Ms']:
return 'Miss'
        elif t in ['Mme']:
            return 'Mrs'
else:
return t
return ''
train_df['Title'] = train_df['Name'].apply(get_title)
test_df['Title'] = test_df['Name'].apply(get_title)
train_df[['Survived', 'Title']].groupby('Title').mean().style.bar(color=['blue'], axis=0, align='mid')
print(train_df['Title'].isnull().sum(), test_df['Title'].isnull().sum())
train_df['SexCategory'] = train_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
test_df['SexCategory'] = test_df['Sex'].map({'female': 0, 'male': 1}).astype(int)
title_mapping = {'Mr': 1, 'Miss': 2, 'Mrs': 3, 'Master': 4, 'Rare': 5}
train_df['Title'] = train_df['Title'].map(title_mapping)
train_df['Title'] = train_df['Title'].fillna(0)
test_df['Title'] = test_df['Title'].map(title_mapping)
test_df['Title'] = test_df['Title'].fillna(0)
train_df['Embarked'] = train_df['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
test_df['Embarked'] = test_df['Embarked'].map( {'S': 0, 'C': 1, 'Q': 2} ).astype(int)
train_df['CategoricalFare'] = train_df['CategoricalFare'].astype(int)
test_df['CategoricalFare'] = test_df['CategoricalFare'].astype(int)
train_df['CategoricalAge'] = train_df['CategoricalAge'].astype(int)
test_df['CategoricalAge'] = test_df['CategoricalAge'].astype(int)
train_df_ready = train_df.copy()
test_df_ready = test_df.copy()
drop_elements = ['PassengerId', 'Name', 'Ticket', 'Cabin', 'SibSp', 'Parch', 'Sex']
train_df_ready.drop(drop_elements, axis=1, inplace=True)
test_df_ready.drop(drop_elements, axis=1, inplace=True)
print(train_df_ready.info())
import seaborn as sns
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.metrics import accuracy_score, log_loss
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
classifiers = [
KNeighborsClassifier(3),
SVC(probability=True),
DecisionTreeClassifier(),
RandomForestClassifier(),
AdaBoostClassifier(),
GradientBoostingClassifier(),
GaussianNB(),
LinearDiscriminantAnalysis(),
QuadraticDiscriminantAnalysis(),
LogisticRegression(max_iter=1000)
]
log_cols = ['Classifier', 'Accuracy']
log = pd.DataFrame(columns=log_cols)
sss = StratifiedShuffleSplit(n_splits=10, test_size=0.1, random_state=0)
features = [
'Pclass',
'Embarked',
'FamilySize',
'IsAlone',
'CategoricalFare',
'CategoricalAge',
'Title',
'SexCategory'
]
X = train_df_ready[features].values
y = train_df_ready['Survived'].values
acc_dict = {}
for train_index, test_index in sss.split(X, y):
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
for clf in classifiers:
name = clf.__class__.__name__
clf.fit(X_train, y_train)
train_predictions = clf.predict(X_test)
acc = accuracy_score(y_test, train_predictions)
if name in acc_dict:
acc_dict[name] += acc
else:
acc_dict[name] = acc
for clf in acc_dict:
acc_dict[clf] = acc_dict[clf] / 10.0
log_entry = pd.DataFrame([[clf, acc_dict[clf]]], columns=log_cols)
log = log.append(log_entry)
plt.xlabel('Accuracy')
plt.title('Classifier Accuracy')
sns.set_color_codes("muted")
sns.barplot(x='Accuracy', y='Classifier', data=log, color="b")
display(log)
# 0.840000
candidate_classifier = RandomForestClassifier()
candidate_classifier.fit(X, y)
result = candidate_classifier.predict(test_df_ready[features])
result_df = pd.DataFrame(result)
print(result_df.shape, test_df_ready.shape)
submission = pd.DataFrame({
'PassengerId': test_df['PassengerId'],
'Survived': result
})
submission.to_csv('./inputdata/submission.csv', index=False)
###Output
_____no_output_____
|
notebooks/basic_ml/07_Data_and_Models.ipynb
|
###Markdown
Data and Models In the subsequent lessons, we will continue to learn deep learning. But we've ignored a fundamental concept about data and modeling: quality and quantity. Set up In a nutshell, a machine learning model consumes input data and produces predictions. The quality of the predictions directly corresponds to the quality and quantity of data you train the model with; **garbage in, garbage out**. Check out this [VentureBeat article](https://venturebeat.com/2018/06/30/understanding-the-practical-applications-of-business-ai/) on where it makes sense to use AI and how to properly apply it. We're going to go through all the concepts with concrete code examples and some synthesized data to train our models on. The task is to determine whether a tumor will be benign (harmless) or malignant (harmful) based on leukocyte (white blood cells) count and blood pressure. This is a synthetic dataset that we created and has no clinical relevance.
###Code
# Use TensorFlow 2.x
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf
# Arguments
SEED = 1234
DATA_FILE = 'tumors.csv'
REDUCED_DATA_FILE = 'tumors_reduced.csv'
SHUFFLE = True
TRAIN_SIZE = 0.70
VAL_SIZE = 0.15
TEST_SIZE = 0.15
NUM_EPOCHS = 5
BATCH_SIZE = 32
HIDDEN_DIM = 100
LEARNING_RATE = 1e-3
# Set seed for reproducibility
np.random.seed(SEED)
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Data
###Code
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import scatter_matrix
import urllib
###Output
_____no_output_____
###Markdown
Operations
###Code
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/tumors.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw data
df = pd.read_csv(DATA_FILE, header=0)
df.head()
# Define X and y
X = df[['leukocyte_count', 'blood_pressure']].values
y = df['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], s=25, edgecolors='k')
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(['malignant ', 'benign'], loc="upper right")
plt.show()
###Output
_____no_output_____
###Markdown
We want to choose features that have strong predictive signal for our task. If you want to improve performance, you need to continuously do feature engineering by collecting and adding new signals. So you may run into a new feature that has high correlation with your existing features but still possesses some unique (orthogonal) signal that boosts your predictive performance.
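To make that point concrete, here is a small illustrative sketch (not part of this lesson's data): two features that are highly correlated with each other, where the second one still improves a simple classifier because it carries extra information about the label. All names here are for illustration only.

```python
# Illustrative only: synthetic features, not the tumor dataset used in this notebook.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
n = 2000
x1 = rng.normal(size=n)
y = (x1 + rng.normal(size=n) > 0).astype(int)                 # label driven by x1 plus noise
x2 = x1 + 0.5 * (2 * y - 1) + rng.normal(scale=0.3, size=n)   # correlated with x1, but also carries label info

print(f"corr(x1, x2) = {np.corrcoef(x1, x2)[0, 1]:.2f}")
print(f"x1 only: {cross_val_score(LogisticRegression(), x1.reshape(-1, 1), y, cv=5).mean():.2f}")
print(f"x1 + x2: {cross_val_score(LogisticRegression(), np.c_[x1, x2], y, cv=5).mean():.2f}")
```

Even though the two features are strongly correlated, the cross-validated accuracy with both features is noticeably higher than with the first feature alone.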
###Code
# Correlation matrix
scatter_matrix(df, figsize=(5, 5));
df.corr()
###Output
_____no_output_____
###Markdown
Split data
###Code
import collections
import json
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Components
###Code
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, stratify=y, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
Operations
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
print (f"Classes: {class_counts}")
###Output
X_train: (722, 2), y_train: (722,)
X_val: (128, 2), y_val: (128,)
X_test: (150, 2), y_test: (150,)
X_train[0]: [18.01865938 15.48133647]
y_train[0]: benign
Classes: {'malignant': 611, 'benign': 389}
###Markdown
Label encoder
###Code
import json
from sklearn.preprocessing import LabelEncoder
# Output vectorizer
y_tokenizer = LabelEncoder()
# Fit on train data
y_tokenizer = y_tokenizer.fit(y_train)
print (f"classes: {y_tokenizer.classes_}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
print (f"y_train[0]: {y_train[0]}")
# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
###Output
class counts: Counter({1: 441, 0: 281}),
class weights: {0: 0.0035587188612099642, 1: 0.0022675736961451248}
###Markdown
Standardize data
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
# Apply scaler on training and test data (don't standardize outputs for classification)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)
# Check
print (f"standardized_X_train: mean: {np.mean(standardized_X_train, axis=0)[0]}, std: {np.std(standardized_X_train, axis=0)[0]}")
print (f"standardized_X_val: mean: {np.mean(standardized_X_val, axis=0)[0]}, std: {np.std(standardized_X_val, axis=0)[0]}")
print (f"standardized_X_test: mean: {np.mean(standardized_X_test, axis=0)[0]}, std: {np.std(standardized_X_test, axis=0)[0]}")
###Output
standardized_X_train: mean: 3.938600753633857e-15, std: 0.9999999999999998
standardized_X_val: mean: 0.06571155649025341, std: 0.9625041074006321
standardized_X_test: mean: -0.09679265967370689, std: 0.9864056087200104
###Markdown
Model Let's fit a model on this synthetic data.
###Code
import itertools
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.models import Model
###Output
_____no_output_____
###Markdown
Components
###Code
# MLP
class MLP(Model):
def __init__(self, hidden_dim, num_classes):
super(MLP, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='relu')
self.fc2 = Dense(units=num_classes, activation='softmax')
def call(self, x_in, training=False):
"""Forward pass."""
z = self.fc1(x_in)
y_pred = self.fc2(z)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
def plot_multiclass_decision_boundary(model, X, y, savefig_fp=None):
"""Plot the multiclass decision boundary for a model that accepts 2D inputs.
Arguments:
model {function} -- trained model with function model.predict(x_in).
X {numpy.ndarray} -- 2D inputs with shape (N, 2).
y {numpy.ndarray} -- 1D outputs with shape (N,).
"""
# Axis boundaries
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101),
np.linspace(y_min, y_max, 101))
# Create predictions
x_in = np.c_[xx.ravel(), yy.ravel()]
y_pred = model.predict(x_in)
y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)
# Plot decision boundary
plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Plot
if savefig_fp:
plt.savefig(savefig_fp, format='png')
###Output
_____no_output_____
###Markdown
Operations
###Code
# Model Arguments
INPUT_DIM = X_train.shape[-1]
NUM_CLASSES = len(df.tumor_class.unique())
# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM,
num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Training
model.fit(x=standardized_X_train,
y=y_train,
validation_data=(standardized_X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
class_weight=class_weights,
shuffle=False,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
###Output
train acc: 0.96, test acc: 0.93
###Markdown
We're going to plot a white point, which we know belongs to the malignant tumor class. Our well trained model here would accurately predict that it is indeed a malignant tumor!
###Code
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
# Sample point near the decision boundary
mean_leukocyte_count, mean_blood_pressure = X_scaler.transform(
[[np.mean(df.leukocyte_count), np.mean(df.blood_pressure)]])[0]
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: malignant',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.4, 0.65),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
###Markdown
Great! We received great performances on both our train and test data splits. We're going to use this dataset to show the importance of data quality and quantity. Data quality and quantity Let's remove some training data near the decision boundary and see how robust the model is now.
###Code
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/tumors_reduced.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(REDUCED_DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw reduced data
df_reduced = pd.read_csv(REDUCED_DATA_FILE, header=0)
df_reduced.head()
# Define X and y
X = df_reduced[['leukocyte_count', 'blood_pressure']].values
y = df_reduced['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], s=25, edgecolors='k')
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(['malignant ', 'benign'], loc="upper right")
plt.show()
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y_train))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
print (f"Classes: {class_counts}")
# Encode class labels
y_tokenizer = LabelEncoder()
y_tokenizer = y_tokenizer.fit(y_train)
num_classes = len(y_tokenizer.classes_)
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
# Standardize inputs using training data
X_scaler = StandardScaler().fit(X_train)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)
# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM,
num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Training
model.fit(x=standardized_X_train,
y=y_train,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(standardized_X_val, y_val),
shuffle=False,
class_weight=class_weights,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
# Sample point near the decision boundary (same point as before)
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: benign',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.45, 0.60),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
###Markdown
Data and Models In the subsequent lessons, we will continue to learn deep learning. But we've ignored a fundamental concept about data and modeling: quality and quantity. In a nutshell, a machine learning model consumes input data and produces predictions. The quality of the predictions directly corresponds to the quality and quantity of data you train the model with; **garbage in, garbage out**. Check out this [article](https://venturebeat.com/2018/06/30/understanding-the-practical-applications-of-business-ai/) on where it makes sense to use AI and how to properly apply it. We're going to go through all the concepts with concrete code examples and some synthesized data to train our models on. The task is to determine whether a tumor will be benign (harmless) or malignant (harmful) based on leukocyte (white blood cells) count and blood pressure. This is a synthetic dataset that we created and has no clinical relevance. Full dataset We'll first train a model with the entire dataset. Later we'll remove a subset of the dataset and see the effect it has on our model. Data Load data
###Code
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from pandas.plotting import scatter_matrix
import urllib
SEED = 1234
DATA_FILE = 'tumors.csv'
# Set seed for reproducibility
np.random.seed(SEED)
# Load data from GitHub to this notebook's local drive
url = "https://raw.githubusercontent.com/madewithml/practicalAI/master/data/tumors.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw data
df = pd.read_csv(DATA_FILE, header=0)
df.head()
# Define X and y
X = df[['leukocyte_count', 'blood_pressure']].values
y = df['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], s=25, edgecolors='k')
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(['malignant ', 'benign'], loc="upper right")
plt.show()
###Output
_____no_output_____
###Markdown
We want to choose features that have strong predictive signal for our task. If you want to improve performance, you need to continuously do feature engineering by collecting and adding new signals. So you may run into a new feature that has high correlation with your existing features but still possesses some unique (orthogonal) signal that boosts your predictive performance.
###Code
# Correlation matrix
scatter_matrix(df, figsize=(5, 5));
df.corr()
###Output
_____no_output_____
###Markdown
Split data
###Code
import collections
from sklearn.model_selection import train_test_split
TRAIN_SIZE = 0.70
VAL_SIZE = 0.15
TEST_SIZE = 0.15
SHUFFLE = True
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, stratify=y, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")
print (f"Classes: {class_counts}")
###Output
X_train: (722, 2), y_train: (722,)
X_val: (128, 2), y_val: (128,)
X_test: (150, 2), y_test: (150,)
Sample point: [18.01865938 15.48133647] → benign
Classes: {'malignant': 611, 'benign': 389}
###Markdown
Label encoder
###Code
from sklearn.preprocessing import LabelEncoder
# Output vectorizer
y_tokenizer = LabelEncoder()
# Fit on train data
y_tokenizer = y_tokenizer.fit(y_train)
classes = list(y_tokenizer.classes_)
print (f"classes: {classes}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
print (f"y_train[0]: {y_train[0]}")
# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
###Output
class counts: Counter({1: 441, 0: 281}),
class weights: {0: 0.0035587188612099642, 1: 0.0022675736961451248}
###Markdown
Standardize data
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
# Apply scaler on training and test data (don't standardize outputs for classification)
X_train = X_scaler.transform(X_train)
X_val = X_scaler.transform(X_val)
X_test = X_scaler.transform(X_test)
# Check (means should be ~0 and std should be ~1)
print (f"X_train[0]: mean: {np.mean(X_train[:, 0], axis=0):.1f}, std: {np.std(X_train[:, 0], axis=0):.1f}")
print (f"X_train[1]: mean: {np.mean(X_train[:, 1], axis=0):.1f}, std: {np.std(X_train[:, 1], axis=0):.1f}")
print (f"X_val[0]: mean: {np.mean(X_val[:, 0], axis=0):.1f}, std: {np.std(X_val[:, 0], axis=0):.1f}")
print (f"X_val[1]: mean: {np.mean(X_val[:, 1], axis=0):.1f}, std: {np.std(X_val[:, 1], axis=0):.1f}")
print (f"X_test[0]: mean: {np.mean(X_test[:, 0], axis=0):.1f}, std: {np.std(X_test[:, 0], axis=0):.1f}")
print (f"X_test[1]: mean: {np.mean(X_test[:, 1], axis=0):.1f}, std: {np.std(X_test[:, 1], axis=0):.1f}")
###Output
X_train[0]: mean: 0.0, std: 1.0
X_train[1]: mean: -0.0, std: 1.0
X_val[0]: mean: 0.1, std: 1.0
X_val[1]: mean: 0.1, std: 1.0
X_test[0]: mean: -0.1, std: 1.0
X_test[1]: mean: -0.1, std: 1.0
###Markdown
Modeling Model
###Code
# Use TensorFlow 2.x
%tensorflow_version 2.x
import tensorflow as tf
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model
# Set seed for reproducibility
tf.random.set_seed(SEED)
INPUT_DIM = 2 # X is 2-dimensional
HIDDEN_DIM = 100
NUM_CLASSES = 2
class MLP(Model):
def __init__(self, hidden_dim, num_classes):
super(MLP, self).__init__(name='mlp')
self.fc1 = Dense(units=hidden_dim, activation='relu', name='W1')
self.fc2 = Dense(units=num_classes, activation='softmax', name='W2')
def call(self, x_in, training=False):
z = self.fc1(x_in)
y_pred = self.fc2(z)
return y_pred
def summary(self, input_shape):
x_in = Input(shape=input_shape, name='X')
summary = Model(inputs=x_in, outputs=self.call(x_in), name=self.name)
summary.summary() # parameter summary
print ("\n\nWEIGHTS:") # weights summary
for layer in self.layers:
print ("_"*50)
print (layer.name)
for w in layer.weights:
print (f"\t{w.name} → {w.shape}")
print ("\n\nFORWARD PASS:")
return plot_model(summary, show_shapes=True) # forward pass
# Initialize model
model = MLP(hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)
# Summary
model.summary(input_shape=(INPUT_DIM,))
###Output
Model: "mlp"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
X (InputLayer) [(None, 2)] 0
_________________________________________________________________
W1 (Dense) (None, 100) 300
_________________________________________________________________
W2 (Dense) (None, 2) 202
=================================================================
Total params: 502
Trainable params: 502
Non-trainable params: 0
_________________________________________________________________
WEIGHTS:
__________________________________________________
W1
W1_11/kernel:0 → (2, 100)
W1_11/bias:0 → (100,)
__________________________________________________
W2
W2_11/kernel:0 → (100, 2)
W2_11/bias:0 → (2,)
FORWARD PASS:
###Markdown
Training
###Code
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.metrics import SparseCategoricalAccuracy
from tensorflow.keras.optimizers import Adam
LEARNING_RATE = 1e-3
NUM_EPOCHS = 5
BATCH_SIZE = 32
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=[SparseCategoricalAccuracy()])
# Training
model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
class_weight=class_weights,
shuffle=False,
verbose=1)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 722 samples, validate on 128 samples
Epoch 1/5
722/722 [==============================] - 2s 3ms/sample - loss: 0.0016 - sparse_categorical_accuracy: 0.7008 - val_loss: 0.0014 - val_sparse_categorical_accuracy: 0.7812
Epoch 2/5
722/722 [==============================] - 0s 122us/sample - loss: 0.0012 - sparse_categorical_accuracy: 0.8864 - val_loss: 0.0011 - val_sparse_categorical_accuracy: 0.8359
Epoch 3/5
722/722 [==============================] - 0s 118us/sample - loss: 8.6852e-04 - sparse_categorical_accuracy: 0.9072 - val_loss: 8.7229e-04 - val_sparse_categorical_accuracy: 0.8672
Epoch 4/5
722/722 [==============================] - 0s 112us/sample - loss: 6.7633e-04 - sparse_categorical_accuracy: 0.9321 - val_loss: 7.1961e-04 - val_sparse_categorical_accuracy: 0.8828
Epoch 5/5
722/722 [==============================] - 0s 120us/sample - loss: 5.4061e-04 - sparse_categorical_accuracy: 0.9571 - val_loss: 6.0240e-04 - val_sparse_categorical_accuracy: 0.9219
###Markdown
Evaluation
###Code
import itertools
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
def plot_confusion_matrix(y_true, y_pred, classes, cmap=plt.cm.Blues):
"""Plot a confusion matrix using ground truth and predictions."""
# Confusion matrix
cm = confusion_matrix(y_true, y_pred)
cm_norm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
# Figure
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm, cmap=plt.cm.Blues)
fig.colorbar(cax)
# Axis
plt.title("Confusion matrix")
plt.ylabel("True label")
plt.xlabel("Predicted label")
ax.set_xticklabels([''] + classes)
ax.set_yticklabels([''] + classes)
ax.xaxis.set_label_position('bottom')
ax.xaxis.tick_bottom()
# Values
thresh = cm.max() / 2.
for i, j in itertools.product(range(cm.shape[0]), range(cm.shape[1])):
plt.text(j, i, f"{cm[i, j]:d} ({cm_norm[i, j]*100:.1f}%)",
horizontalalignment="center",
color="white" if cm[i, j] > thresh else "black")
# Display
plt.show()
def plot_multiclass_decision_boundary(model, X, y, savefig_fp=None):
"""Plot the multiclass decision boundary for a model that accepts 2D inputs.
Arguments:
model {function} -- trained model with function model.predict(x_in).
X {numpy.ndarray} -- 2D inputs with shape (N, 2).
y {numpy.ndarray} -- 1D outputs with shape (N,).
"""
# Axis boundaries
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101),
np.linspace(y_min, y_max, 101))
# Create predictions
x_in = np.c_[xx.ravel(), yy.ravel()]
y_pred = model.predict(x_in)
y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)
# Plot decision boundary
plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Plot
if savefig_fp:
plt.savefig(savefig_fp, format='png')
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
###Output
train acc: 0.96, test acc: 0.93
###Markdown
We're going to plot a white point, which we know belongs to the malignant tumor class. Our well trained model here would accurately predict that it is indeed a malignant tumor!
###Code
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=X_test, y=y_test)
# Sample point near the decision boundary
mean_leukocyte_count, mean_blood_pressure = X_scaler.transform(
[[np.mean(df.leukocyte_count), np.mean(df.blood_pressure)]])[0]
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: malignant',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.4, 0.65),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
###Markdown
Great! We received great performances on both our train and test data splits. We're going to use this dataset to show the importance of data quality and quantity. Reduced dataset Let's remove some training data near the decision boundary and see how robust the model is now. Data Load data
###Code
REDUCED_DATA_FILE = 'tumors_reduced.csv'
# Load data from GitHub to this notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/tumors_reduced.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(REDUCED_DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw reduced data
df_reduced = pd.read_csv(REDUCED_DATA_FILE, header=0)
df_reduced.head()
# Define X and y
X = df_reduced[['leukocyte_count', 'blood_pressure']].values
y = df_reduced['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
plt.scatter(X[:, 0], X[:, 1], c=[colors[_y] for _y in y], s=25, edgecolors='k')
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(['malignant ', 'benign'], loc="upper right")
plt.show()
###Output
_____no_output_____
###Markdown
Split data
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y_train))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"Sample point: {X_train[0]} → {y_train[0]}")
print (f"Classes: {class_counts}")
###Output
X_train: (520, 2), y_train: (520,)
X_val: (92, 2), y_val: (92,)
X_test: (108, 2), y_test: (108,)
Sample point: [14.4110029 13.14842457] → benign
Classes: {'benign': 281, 'malignant': 239}
###Markdown
Label encoder
###Code
# Encode class labels
y_tokenizer = LabelEncoder()
y_tokenizer = y_tokenizer.fit(y_train)
num_classes = len(y_tokenizer.classes_)
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
###Output
class counts: Counter({0: 281, 1: 239}),
class weights: {0: 0.0035587188612099642, 1: 0.0041841004184100415}
###Markdown
Standardize data
###Code
# Standardize inputs using training data
X_scaler = StandardScaler().fit(X_train)
X_train = X_scaler.transform(X_train)
X_val = X_scaler.transform(X_val)
X_test = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Modeling Model
###Code
# Initialize model
model = MLP(hidden_dim=HIDDEN_DIM, num_classes=NUM_CLASSES)
###Output
_____no_output_____
###Markdown
Training
###Code
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=[SparseCategoricalAccuracy()])
# Training
model.fit(x=X_train,
y=y_train,
validation_data=(X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
class_weight=class_weights,
shuffle=False,
verbose=1)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train on 520 samples, validate on 92 samples
Epoch 1/5
520/520 [==============================] - 0s 823us/sample - loss: 0.0024 - sparse_categorical_accuracy: 0.8615 - val_loss: 0.0020 - val_sparse_categorical_accuracy: 0.9239
Epoch 2/5
520/520 [==============================] - 0s 122us/sample - loss: 0.0017 - sparse_categorical_accuracy: 0.9942 - val_loss: 0.0016 - val_sparse_categorical_accuracy: 0.9457
Epoch 3/5
520/520 [==============================] - 0s 123us/sample - loss: 0.0012 - sparse_categorical_accuracy: 0.9981 - val_loss: 0.0012 - val_sparse_categorical_accuracy: 0.9674
Epoch 4/5
520/520 [==============================] - 0s 118us/sample - loss: 8.8105e-04 - sparse_categorical_accuracy: 0.9981 - val_loss: 9.2301e-04 - val_sparse_categorical_accuracy: 0.9674
Epoch 5/5
520/520 [==============================] - 0s 113us/sample - loss: 6.4092e-04 - sparse_categorical_accuracy: 0.9981 - val_loss: 7.2682e-04 - val_sparse_categorical_accuracy: 0.9674
###Markdown
Evaluation
###Code
# Predictions
pred_train = model.predict(X_train)
pred_test = model.predict(X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=X_test, y=y_test)
# Sample point near the decision boundary (same point as before)
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: benign',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.45, 0.60),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
###Markdown
Data and Models In the subsequent lessons, we will continue to learn deep learning. But we've ignored a fundamental concept about data and modeling: quality and quantity. Set up In a nutshell, a machine learning model consumes input data and produces predictions. The quality of the predictions directly corresponds to the quality and quantity of data you train the model with; **garbage in, garbage out**. Check out this [VentureBeat article](https://venturebeat.com/2018/06/30/understanding-the-practical-applications-of-business-ai/) on where it makes sense to use AI and how to properly apply it. We're going to go through all the concepts with concrete code examples and some synthesized data to train our models on. The task is to determine whether a tumor will be benign (harmless) or malignant (harmful) based on leukocyte (white blood cell) count and blood pressure. This is a synthetic dataset that we created and it has no clinical relevance.
###Code
# Use TensorFlow 2.x
%tensorflow_version 2.x
import os
import numpy as np
import tensorflow as tf
# Arguments
SEED = 1234
DATA_FILE = 'tumors.csv'
REDUCED_DATA_FILE = 'tumors_reduced.csv'
SHUFFLE = True
TRAIN_SIZE = 0.70
VAL_SIZE = 0.15
TEST_SIZE = 0.15
NUM_EPOCHS = 5
BATCH_SIZE = 32
HIDDEN_DIM = 100
LEARNING_RATE = 1e-3
# Set seed for reproducibility
np.random.seed(SEED)
tf.random.set_seed(SEED)
###Output
_____no_output_____
###Markdown
Data
###Code
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import scatter_matrix
import urllib
###Output
_____no_output_____
###Markdown
Operations
###Code
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/tumors.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw data
df = pd.read_csv(DATA_FILE, header=0)
df.head()
# Define X and y
X = df[['leukocyte_count', 'blood_pressure']].values
y = df['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
for label, color in colors.items():
    plt.scatter(X[y == label, 0], X[y == label, 1], c=color,
                s=25, edgecolors='k', label=label)
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(loc="upper right")
plt.show()
###Output
_____no_output_____
###Markdown
We want to choose features that have strong predictive signal for our task. If you want to improve performance, you need to continuously do feature engineering by collecting and adding new signals. You may run into a new feature that correlates highly with your existing features, yet it may still carry some unique (orthogonal) signal that boosts your predictive performance; a small sketch for spotting strongly correlated pairs follows below.
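If you want to check this programmatically (an optional sketch, assuming the `df` DataFrame defined above; the 0.9 threshold is arbitrary):
```
# Flag pairs of numeric features whose absolute correlation exceeds a threshold
corr = df.corr().abs()
high_corr_pairs = [(a, b, round(corr.loc[a, b], 3))
                   for i, a in enumerate(corr.columns)
                   for b in corr.columns[i + 1:]
                   if corr.loc[a, b] > 0.9]
print(high_corr_pairs)
```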
###Code
# Correlation matrix
scatter_matrix(df, figsize=(5, 5));
df.corr()
###Output
_____no_output_____
###Markdown
Split data
###Code
import collections
import json
from sklearn.model_selection import train_test_split
###Output
_____no_output_____
###Markdown
Components
###Code
def train_val_test_split(X, y, val_size, test_size, shuffle):
"""Split data into train/val/test datasets.
"""
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=test_size, stratify=y, shuffle=shuffle)
X_train, X_val, y_train, y_val = train_test_split(
X_train, y_train, test_size=val_size, stratify=y_train, shuffle=shuffle)
return X_train, X_val, X_test, y_train, y_val, y_test
###Output
_____no_output_____
###Markdown
Operations
###Code
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X=X, y=y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
print (f"Classes: {class_counts}")
###Output
X_train: (722, 2), y_train: (722,)
X_val: (128, 2), y_val: (128,)
X_test: (150, 2), y_test: (150,)
X_train[0]: [18.01865938 15.48133647]
y_train[0]: benign
Classes: {'malignant': 611, 'benign': 389}
###Markdown
Label encoder
###Code
import json
from sklearn.preprocessing import LabelEncoder
# Output vectorizer
y_tokenizer = LabelEncoder()
# Fit on train data
y_tokenizer = y_tokenizer.fit(y_train)
print (f"classes: {y_tokenizer.classes_}")
# Convert labels to tokens
print (f"y_train[0]: {y_train[0]}")
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
print (f"y_train[0]: {y_train[0]}")
# Class weights
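# (Inverse-frequency weighting: each class gets weight 1/count, so rarer classes
# contribute proportionally more to the loss and the majority class does not dominate.)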
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
###Output
class counts: Counter({1: 441, 0: 281}),
class weights: {0: 0.0035587188612099642, 1: 0.0022675736961451248}
###Markdown
Standardize data
###Code
from sklearn.preprocessing import StandardScaler
# Standardize the data (mean=0, std=1) using training data
X_scaler = StandardScaler().fit(X_train)
# Apply scaler on training and test data (don't standardize outputs for classification)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)
# Check
print (f"standardized_X_train: mean: {np.mean(standardized_X_train, axis=0)[0]}, std: {np.std(standardized_X_train, axis=0)[0]}")
print (f"standardized_X_val: mean: {np.mean(standardized_X_val, axis=0)[0]}, std: {np.std(standardized_X_val, axis=0)[0]}")
print (f"standardized_X_test: mean: {np.mean(standardized_X_test, axis=0)[0]}, std: {np.std(standardized_X_test, axis=0)[0]}")
###Output
standardized_X_train: mean: 3.938600753633857e-15, std: 0.9999999999999998
standardized_X_val: mean: 0.06571155649025341, std: 0.9625041074006321
standardized_X_test: mean: -0.09679265967370689, std: 0.9864056087200104
###Markdown
Model Let's fit a model on this synthetic data.
###Code
import itertools
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import confusion_matrix
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Input
from tensorflow.keras.losses import SparseCategoricalCrossentropy
from tensorflow.keras.models import Model
###Output
_____no_output_____
###Markdown
Components
###Code
# MLP
class MLP(Model):
def __init__(self, hidden_dim, num_classes):
super(MLP, self).__init__()
self.fc1 = Dense(units=hidden_dim, activation='relu')
self.fc2 = Dense(units=num_classes, activation='softmax')
def call(self, x_in, training=False):
"""Forward pass."""
z = self.fc1(x_in)
y_pred = self.fc2(z)
return y_pred
def sample(self, input_shape):
x_in = Input(shape=input_shape)
return Model(inputs=x_in, outputs=self.call(x_in)).summary()
def plot_multiclass_decision_boundary(model, X, y, savefig_fp=None):
"""Plot the multiclass decision boundary for a model that accepts 2D inputs.
Arguments:
model {function} -- trained model with function model.predict(x_in).
X {numpy.ndarray} -- 2D inputs with shape (N, 2).
y {numpy.ndarray} -- 1D outputs with shape (N,).
"""
# Axis boundaries
x_min, x_max = X[:, 0].min() - 0.1, X[:, 0].max() + 0.1
y_min, y_max = X[:, 1].min() - 0.1, X[:, 1].max() + 0.1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 101),
np.linspace(y_min, y_max, 101))
# Create predictions
x_in = np.c_[xx.ravel(), yy.ravel()]
y_pred = model.predict(x_in)
y_pred = np.argmax(y_pred, axis=1).reshape(xx.shape)
# Plot decision boundary
plt.contourf(xx, yy, y_pred, cmap=plt.cm.Spectral, alpha=0.8)
plt.scatter(X[:, 0], X[:, 1], c=y, s=40, cmap=plt.cm.RdYlBu)
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Plot
if savefig_fp:
plt.savefig(savefig_fp, format='png')
###Output
_____no_output_____
###Markdown
Operations
###Code
# Model Arguments
INPUT_DIM = X_train.shape[-1]
NUM_CLASSES = len(df.tumor_class.unique())
# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM,
num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Training
model.fit(x=standardized_X_train,
y=y_train,
validation_data=(standardized_X_val, y_val),
epochs=NUM_EPOCHS,
batch_size=BATCH_SIZE,
class_weight=class_weights,
shuffle=False,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
###Output
train acc: 0.96, test acc: 0.93
###Markdown
We're going to plot a point near the decision boundary (drawn in blue with a white outline) that we know belongs to the malignant tumor class. Our well-trained model here accurately predicts that it is indeed a malignant tumor!
###Code
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
# Sample point near the decision boundary
mean_leukocyte_count, mean_blood_pressure = X_scaler.transform(
[[np.mean(df.leukocyte_count), np.mean(df.blood_pressure)]])[0]
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: malignant',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.4, 0.65),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
###Markdown
Great! We achieved strong performance on both our train and test data splits. We're going to use this dataset to show the importance of data quality and quantity. Data quality and quantity Let's remove some training data near the decision boundary and see how robust the model is now.
###Code
# Upload data from GitHub to notebook's local drive
url = "https://raw.githubusercontent.com/practicalAI/practicalAI/master/data/tumors_reduced.csv"
response = urllib.request.urlopen(url)
html = response.read()
with open(REDUCED_DATA_FILE, 'wb') as fp:
fp.write(html)
# Raw reduced data
df_reduced = pd.read_csv(REDUCED_DATA_FILE, header=0)
df_reduced.head()
# Define X and y
X = df_reduced[['leukocyte_count', 'blood_pressure']].values
y = df_reduced['tumor_class'].values
# Plot data
colors = {'benign': 'red', 'malignant': 'blue'}
for label, color in colors.items():
    plt.scatter(X[y == label, 0], X[y == label, 1], c=color,
                s=25, edgecolors='k', label=label)
plt.xlabel('leukocyte count')
plt.ylabel('blood pressure')
plt.legend(loc="upper right")
plt.show()
# Create data splits
X_train, X_val, X_test, y_train, y_val, y_test = train_val_test_split(
X, y, val_size=VAL_SIZE, test_size=TEST_SIZE, shuffle=SHUFFLE)
class_counts = dict(collections.Counter(y_train))
print (f"X_train: {X_train.shape}, y_train: {y_train.shape}")
print (f"X_val: {X_val.shape}, y_val: {y_val.shape}")
print (f"X_test: {X_test.shape}, y_test: {y_test.shape}")
print (f"X_train[0]: {X_train[0]}")
print (f"y_train[0]: {y_train[0]}")
print (f"Classes: {class_counts}")
# Encode class labels
y_tokenizer = LabelEncoder()
y_tokenizer = y_tokenizer.fit(y_train)
num_classes = len(y_tokenizer.classes_)
y_train = y_tokenizer.transform(y_train)
y_val = y_tokenizer.transform(y_val)
y_test = y_tokenizer.transform(y_test)
# Class weights
counts = collections.Counter(y_train)
class_weights = {_class: 1.0/count for _class, count in counts.items()}
print (f"class counts: {counts},\nclass weights: {class_weights}")
# Standardize inputs using training data
X_scaler = StandardScaler().fit(X_train)
standardized_X_train = X_scaler.transform(X_train)
standardized_X_val = X_scaler.transform(X_val)
standardized_X_test = X_scaler.transform(X_test)
# Initialize the model
model = MLP(hidden_dim=HIDDEN_DIM,
num_classes=NUM_CLASSES)
model.sample(input_shape=(INPUT_DIM,))
# Compile
model.compile(optimizer=Adam(lr=LEARNING_RATE),
loss=SparseCategoricalCrossentropy(),
metrics=['accuracy'])
# Training
model.fit(x=standardized_X_train,
y=y_train,
batch_size=BATCH_SIZE,
epochs=NUM_EPOCHS,
validation_data=(standardized_X_val, y_val),
shuffle=False,
class_weight=class_weights,
verbose=1)
# Predictions
pred_train = model.predict(standardized_X_train)
pred_test = model.predict(standardized_X_test)
print (f"sample probability: {pred_test[0]}")
pred_train = np.argmax(pred_train, axis=1)
pred_test = np.argmax(pred_test, axis=1)
print (f"sample class: {pred_test[0]}")
# Accuracy
train_acc = accuracy_score(y_train, pred_train)
test_acc = accuracy_score(y_test, pred_test)
print (f"train acc: {train_acc:.2f}, test acc: {test_acc:.2f}")
# Visualize the decision boundary
plt.figure(figsize=(8,5))
plt.title("Test")
plot_multiclass_decision_boundary(model=model, X=standardized_X_test, y=y_test)
# Sample point near the decision boundary (same point as before)
plt.scatter(mean_leukocyte_count+0.05, mean_blood_pressure-0.05, s=200,
c='b', edgecolor='w', linewidth=2)
# Annotate
plt.annotate('true: malignant,\npred: benign',
color='white',
xy=(mean_leukocyte_count, mean_blood_pressure),
xytext=(0.45, 0.60),
textcoords='figure fraction',
fontsize=16,
arrowprops=dict(facecolor='white', shrink=0.1)
)
plt.show()
###Output
_____no_output_____
|
notebooks/Christensenellaceae/.ipynb_checkpoints/01_genomes-checkpoint.ipynb
|
###Markdown
Goal Create a genome collection of Christensenellaceae MAGs and isolate genomes in order to produce Christensenellales-specific primers. Var
###Code
work_dir = '/ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/'
clade = 'Christensenellaceae'
taxid = 990719
threads = 8
###Output
_____no_output_____
###Markdown
Init
###Code
library(dplyr)
library(tidyr)
library(data.table)
library(tidytable)
library(ggplot2)
library(LeyLabRMisc)
library(curl)
df.dims()
setDTthreads(threads)
make_dir(work_dir)
###Output
Directory already exists: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/
###Markdown
Genomes From genbank
```
OUTDIR=/ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellales/genomes/NCBI/Christensenellaceae
mkdir -p $OUTDIR
ncbi-genome-download -p 12 -s genbank -F fasta -t 990719 -o $OUTDIR bacteria
```
From UHGG
###Code
F = file.path('/ebio/abt3_projects/databases_no-backup/UHGG/2019_09', 'genomes-nr_metadata.tsv')
genomes = Fread(F) %>%
filter.(grepl('o__Christensenellales', Lineage))
genomes
genomes_f = genomes %>%
filter.(grepl('f__Christensenellaceae', Lineage))
genomes_f
# downloading
get_file = function(url, base_dir){
outfile = file.path(base_dir, 'genomes', 'UHGG', gsub('.+/', '', url))
message('Downloading: ', url)
curl_download(url, outfile, mode = "wb")
}
ret = genomes_f$FTP_download %>%
lapply(get_file, work_dir)
ret %>% length
###Output
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME067772.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME076687.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME076875.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME078695.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME091014.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME091497.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME092036.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME092709.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME092769.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME093349.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME094667.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-015/MGYG-HGUT-01550/genomes1/GUT_GENOME096561.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-015/MGYG-HGUT-01593/genomes1/GUT_GENOME097725.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME105882.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME106152.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME107011.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME107393.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-018/MGYG-HGUT-01822/genomes1/GUT_GENOME111314.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME125784.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME125867.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME125876.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-020/MGYG-HGUT-02096/genomes1/GUT_GENOME127701.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-021/MGYG-HGUT-02193/genomes1/GUT_GENOME131740.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-021/MGYG-HGUT-02193/genomes1/GUT_GENOME137430.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-024/MGYG-HGUT-02411/genomes1/GUT_GENOME142595.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-025/MGYG-HGUT-02523/genomes1/GUT_GENOME147109.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-027/MGYG-HGUT-02701/genomes1/GUT_GENOME158644.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME163028.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-021/MGYG-HGUT-02193/genomes1/GUT_GENOME173449.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME185233.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME185690.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME188736.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME189660.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME189879.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME190234.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME190484.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME191624.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME192586.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME194019.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME194470.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-032/MGYG-HGUT-03227/genomes1/GUT_GENOME215504.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-033/MGYG-HGUT-03304/genomes1/GUT_GENOME222296.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME252750.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-021/MGYG-HGUT-02193/genomes1/GUT_GENOME253521.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME253845.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME255745.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME255791.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-017/MGYG-HGUT-01747/genomes1/GUT_GENOME256796.gff.gz
Downloading: ftp://ftp.ebi.ac.uk/pub/databases/metagenomics/mgnify_genomes/human-gut/v1.0/all_genomes/MGYG-HGUT-042/MGYG-HGUT-04235/genomes1/GUT_GENOME260354.gff.gz
###Markdown
Parsing gff files
```
(genome) @ rick:/ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/genomes/UHGG$ find . -name "*.gff.gz" | xargs -I % /ebio/abt3_projects/databases_no-backup/UHGG/2019_09/prokka_gff2fasta.py %
```
TUK MAGs
###Code
F = '/ebio/abt3_projects/Anxiety_Twins_Metagenomes/data/metagenome/TUK-5projects/LLMGA/v0.12/LLG/rnd1/final_MAGs.tsv'
TUK = Fread(F)
TUK
TUK = TUK %>%
filter.(Family == 'Christensenellaceae')
TUK
copy_file = function(F, base_dir){
outfile = file.path(base_dir, basename(F))
stopifnot(F != outfile)
file.copy(F, outfile)
}
TUK$Fasta %>%
lapply(copy_file, base_dir=file.path(work_dir, 'TUK'))
###Output
_____no_output_____
###Markdown
List of all genomes
###Code
files = list_files(file.path(work_dir, 'genomes'), '.fna')
samps = data.frame(Name = files %>% as.character %>% basename,
Fasta = files,
Domain = 'Bacteria',
Taxid = taxid) %>%
mutate(Fasta = gsub('/+', '/', Fasta))
samps
# writing file
outfile = file.path(work_dir, 'genomes_raw.txt')
write_table(samps, outfile)
###Output
File written: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/genomes//genomes_raw.txt
###Markdown
LLG Config
###Code
cat_file(file.path(work_dir, '../config_llg.yaml'))
###Output
# table with genome --> fasta_file information
samples_file: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/genomes/genomes_raw.txt
# output location
output_dir: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/LLG_output/
# temporary file directory (your username will be added automatically)
tmp_dir: /ebio/abt3_scratch/
# batch processing of genomes for certain steps
## increase to better parallelize
batches: 5
# Domain of genomes ('Archaea' or 'Bacteria)
## Use "Skip" if provided as a "Domain" column in the genome table
Domain: Skip
# software parameters
# Use "Skip" to skip any of these steps. If no params for rule, use ""
# dRep MAGs are not further analyzed, but you can de-rep & then use the de-rep genome table as input.
params:
ionice: -c 3
# assembly assessment
seqkit: ""
quast: Skip #""
multiqc_on_quast: ""
checkm: ""
# de-replication (CheckM recommended)
drep:
algorithm: auto # will select fastANI if >1000 genomes, else accurate mode
params: -comp 50 -con 5 -sa 0.999
# taxonomy
sourmash:
compute: Skip #--scaled 10000 -k 31
gather: -k 31
gtdbtk:
classify_wf: --min_perc_aa 10
# genome pairwise ANI
fastani: Skip #--fragLen 3000 --minFraction 0.2 -k 16
dashing: Skip # -k 31 --full-tsv
comparem_aai: Skip # --evalue 0.001
# gene annotation
gene_call:
prokka: Skip #""
multiqc_on_prokka: ""
prodigal: Skip #""
eggnog_mapper: Skip #""
eggnog_mapper_annot: ""
# rRNA (16S alignment & phylogeny)
barrnap: Skip #--lencutoff 0.8
vsearch_per_genome_drep: --id 0.95 # Skip to prevent drep of 16S copies within each genome
qiime2_fasttree: ""
qiime2_iqtree: --p-alrt 1000 --p-abayes --p-lbp 1000 --p-substitution-model 'GTR+I+G'
# genome phylogeny
phylophlan_config: Skip #--map_dna diamond --db_aa diamond --map_aa diamond --msa mafft --trim trimal --tree1 fasttree --tree2 raxml
phylophlan:
accuracy: --auto # --auto will select --fast if >2000 genomes, otherwise --accurate
other_params: --diversity high --min_num_markers 50
# phenotype
traitar: Skip #""
# biosynthetic gene clusters (BGCs)
antismash: Skip #--cb-knownclusters --cb-subclusters --asf
DeepBGC: Skip #--score 0.5 --classifier-score 0.5 --prodigal-meta-mode
# antimicrobial resistance (AMR)
abricate: Skip #--minid 75 --mincov 80
# CRISPRs
cctyper: Skip #--prodigal meta
# databases
databases:
checkM_data: /ebio/abt3_projects/databases_no-backup/checkM/
sourmash: /ebio/abt3_projects/databases_no-backup/sourmash/genbank-k31.sbt.json
sourmash_lca: /ebio/abt3_projects/databases_no-backup/sourmash/genbank-k31.lca.json.gz
gtdbtk: /ebio/abt3_projects/databases_no-backup/GTDB/release95/gtdbtk/db_info.md
phylophlan: /ebio/abt3_projects/databases_no-backup/phylophlan/PhyloPhlan/phylophlan.faa.bz2
eggnog: /ebio/abt3_projects/databases_no-backup/Eggnog/v2/eggnog.db
eggnog_diamond: /ebio/abt3_projects/databases_no-backup/Eggnog/v2/eggnog_proteins.dmnd
antismash: /ebio/abt3_projects/databases_no-backup/antismash/v5/
deepbgc: /ebio/abt3_projects/databases_no-backup/DeepBGC/
traitar: /ebio/abt3_projects/databases_no-backup/pfam/traitar/
taxdump: # used for adding taxids to GTDB-Tk classifications
names: /ebio/abt3_projects/databases_no-backup/GTDB/release95/taxdump/names.dmp
nodes: /ebio/abt3_projects/databases_no-backup/GTDB/release95/taxdump/nodes.dmp
abricate:
ncbi: /ebio/abt3_projects/databases_no-backup/abricate/ncbi/sequences
card: /ebio/abt3_projects/databases_no-backup/abricate/card/sequences
resfinder: /ebio/abt3_projects/databases_no-backup/abricate/resfinder/sequences
argannot: /ebio/abt3_projects/databases_no-backup/abricate/argannot/sequences
bacmet2: /ebio/abt3_projects/databases_no-backup/abricate/bacmet2/sequences
vfdb: /ebio/abt3_projects/databases_no-backup/abricate/vfdb/sequences
megares: /ebio/abt3_projects/databases_no-backup/abricate/megares/sequences
plasmidfinder: /ebio/abt3_projects/databases_no-backup/abricate/plasmidfinder/sequences
# snakemake pipeline
pipeline:
snakemake_folder: ./
script_folder: ./bin/scripts/
use_shared_mem: True
name: LLG
###Markdown
Run
```
(snakemake) @ rick:/ebio/abt3_projects/software/dev/ll_pipelines/llg$ screen -L -s llg-christ ./snakemake_sge.sh /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae/config_llg.yaml 30 -F
```
Samples table of high quality genomes
###Code
# checkM summary
checkm = file.path(work_dir, 'LLG_output', 'checkM', 'checkm_qa_summary.tsv') %>%
read.delim(sep='\t')
checkm
# dRep summary
drep = file.path(work_dir, 'LLG_output', 'drep', 'checkm_markers_qa_summary.tsv') %>%
read.delim(sep='\t') %>%
mutate(Bin.Id = gsub('.+/', '', genome),
Bin.Id = gsub('\\.fna$', '', Bin.Id))
drep
# de-replicated genomes
drep_gen = file.path(work_dir, 'LLG_output', 'drep', 'dereplicated_genomes.tsv') %>%
read.delim(sep='\t')
drep_gen
# GTDBTk summary
tax = file.path(work_dir, 'LLG_output', 'gtdbtk', 'gtdbtk_summary_wTaxid.tsv') %>%
read.delim(sep='\t') %>%
separate(classification,
c('Domain', 'Phylum', 'Class', 'Order', 'Family', 'Genus', 'Species'),
sep=';') %>%
select(-note, -classification_method, -pplacer_taxonomy,
-other_related_references.genome_id.species_name.radius.ANI.AF.)
tax
# checking overlap
cat('-- drep --\n')
overlap(basename(as.character(drep_gen$Fasta)),
basename(as.character(drep$genome)))
cat('-- checkm --\n')
overlap(drep$Bin.Id, checkm$Bin.Id)
cat('-- gtdbtk --\n')
overlap(drep$Bin.Id, tax$user_genome)
# joining based on Bin.Id
drep = drep %>%
inner_join(checkm, c('Bin.Id')) %>%
mutate(GEN = genome %>% as.character %>% basename) %>%
inner_join(drep_gen %>% mutate(GEN = Fasta %>% as.character %>% basename),
by=c('GEN')) %>%
inner_join(tax, c('Bin.Id'='user_genome')) #%>%
drep
# summarizing the taxonomy
df.dims(20)
drep %>%
group_by(Order, Family, Genus) %>%
summarize(n_genomes = n(), .groups='drop')
df.dims()
# filtering by quality
hq_genomes = drep %>%
filter(completeness >= 90,
contamination < 5,
Strain.heterogeneity < 50)
hq_genomes
# filtering by taxonomy
hq_genomes = hq_genomes %>%
filter(Family == 'f__Christensenellaceae')
hq_genomes
# summarizing the taxonomy
df.dims(20)
hq_genomes %>%
group_by(Order, Family, Genus, Species) %>%
summarize(n_genomes = n(), .groups='drop')
df.dims()
# summarizing
hq_genomes$Completeness %>% summary_x('Completeness')
hq_genomes$X..contigs %>% summary_x('No. of contigs')
hq_genomes$Mean.contig.length..bp. %>% summary_x('Mean contig length')
hq_genomes$X..predicted.genes %>% summary_x('No. of genes')
hq_genomes$N50..contigs. %>% summary_x('N50')
# writing samples table for LLPRIMER
outfile = file.path(work_dir, 'LLG_output', 'samples_genomes_hq.txt')
hq_genomes %>%
select(Bin.Id, Fasta) %>%
rename('Taxon' = Bin.Id) %>%
mutate(Taxon = gsub('_chromosome.+', '', Taxon),
Taxon = gsub('_bin_.+', '', Taxon),
Taxon = gsub('_genomic', '', Taxon),
Taxon = gsub('_annotated_assembly', '', Taxon),
Taxid = taxid) %>%
write_table(outfile)
###Output
File written: /ebio/abt3_projects/software/dev/ll_pipelines/llprimer/experiments/christensenellaceae//LLG_output/samples_genomes_hq.txt
###Markdown
sessionInfo
###Code
sessionInfo()
###Output
_____no_output_____
|
tutorials/usecases/UC01 - Sentiment Classifier - Private Datasets - (Secure Training).ipynb
|
###Markdown
Sentiment Classification - Private Datasets - (Training)------ **Author:**- Alan Aboudib: [Twitter](https://twitter.com/alan_aboudib) | [LinkedIn](https://www.linkedin.com/in/ala-aboudib/) | [Slack](https://app.slack.com/client/T6963A864/DDKH3SXKL/user_profile/UDKH3SH8S) ----- Problem Statement Suppose you run a deep learning company that provides NLP expertise. You have two clients: Bob and Alice. Each of them runs a website where users can write reviews about movies they have watched. Bob and Alice have heard of the excellent services you provide and ask you to create a sentiment classifier to help them automatically assign a sentiment (positive or negative) to each user's review. Now you think that this is a really good opportunity. If you pool data from both Bob's and Alice's datasets, you would be able to create a bigger dataset that you can use to train a better classifier. But... It turns out you are not allowed to do this; both datasets are private. You are informed that privacy regulations in both Bob's and Alice's countries prevent them from revealing their data to any third party. You cannot move Bob's data to your company's machines. Same for Alice's. Each dataset is constrained to live on its owner's machine, and they cannot be mixed to create a bigger dataset. Now you think about OpenMined and their great library called PySyft, which makes it possible to perform Federated Learning and Encrypted Computations. With that, you will be able to train a single model on both datasets at the same time. And YOU ARE RIGHT! However... As you know, text datasets cannot be consumed directly for training a neural network. You need to create numerical representations of each text review before the network written with PySyft can consume it. Reviews should first be tokenized and preprocessed, and vector embeddings should be used instead of plaintext to train the network. But how can you do such preprocessing if you are not allowed to access the plaintext data? **SyferText** can help you! With SyferText, you can define preprocessing components that you can send over a network to Bob's and Alice's machines to perform preprocessing remotely, blindly, and in a completely secure fashion. SyferText components do all the work, from processing plaintext to obtaining its vector representation and encrypting it to hand it over to PySyft models for training. All without you accessing the data, and without the data ever leaving its owner's machine. If you are wondering how that works, keep on following this tutorial.**Let's summarize:**1. You need to create a bigger dataset out of Bob's and Alice's smaller datasets. *(PySyft has the arsenal for that)*2. You need to prepare and preprocess the text data on Bob's and Alice's machines without revealing it, without moving any datasets to your machine, and without the need to work directly on Bob's or Alice's machines. *(SyferText to the rescue)* For this tutorial, we are going to work with the IMDB movie review dataset, which is publicly available, but we are going to break it into two parts and send each part to a different PySyft worker. We consider that each part is a private dataset owned by its PySyft worker. -4. Importing libraries Let's first install and import some libraries that we are going to use throughout this tutorial:
###Code
!pip install -r requirements.txt
# SyferText imports
import syfertext
from syfertext.pipeline import SimpleTagger
# Import useful utility functions for this tutorial
from utils import download_dataset
# PySyft and PyTorch import
import syft as sy
from syft.generic.string import String
import torch
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torch.optim as optim
# Useful imports
import numpy as np
from tqdm import tqdm
import csv
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sb
import os
from pprint import pprint
sb.set()
###Output
_____no_output_____
###Markdown
-3. Download the Dataset (IGNORE THIS STEP IF YOU HAVE ALREADY DONE IT) The dataset will be downloaded into a folder called `./imdb` in the same directory as the current notebook. Four files are going to be downloaded:- `imdb.csv`: This is the dataset file containing 50K labeled reviews. It is a csv file composed of two columns: `review` and `sentiment`. The `review` column holds the review's text, and the `sentiment` column has one of two values, 'positive' or 'negative', describing the overall sentiment of the review.- `stop_word_en.txt`: This is just a text file with a list of stop words, according to NLTK.- `imdb_vocab.txt`: a list of all the vocabulary of the dataset, one word per line.- `imdb_polarity.txt`: It holds the polarity value of each word in `imdb_vocab.txt`. A word that appears more often in positive reviews will have a higher polarity value than one that is more frequently encountered in negative reviews. It is important to note that, for this use case, only the dataset `imdb.csv` is considered private. All other files in the above list are not under any privacy constraints. Please run the below cell in order to download the dataset.
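Once the cell below has finished, a quick (optional) sanity check that all four files actually landed in `./imdb` could look like this:
```
import os

for fname in ['imdb.csv', 'imdb_vocab.txt', 'imdb_polarity.txt', 'stop_word_en.txt']:
    path = os.path.join('./imdb', fname)
    print(f"{path}: {'found' if os.path.exists(path) else 'missing'}")
```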
###Code
# The URL template to all dataset files
url_template = 'https://raw.githubusercontent.com/AlanAboudib/dataset_imdb/master/%s'
# File names to be downloaded from the using the URL template above
files = ['imdb.csv', 'imdb_vocab.txt', 'imdb_polarity.txt', 'stop_word_en.txt']
# Construct the list of urls
urls = [url_template % file for file in files]
# The dataset name and its root folder
dataset_name = 'imdb'
root_path = './imdb'
# Create the dataset folder if it is not already there
if not os.path.exists('./imdb'):
os.mkdir('./imdb')
# Start downloading
download_dataset(dataset_name = dataset_name,
urls = urls,
root_path = root_path
)
###Output
Preparing to download dataset: `imdb` ...
###Markdown
-2. Preparing the work environment As I explained in the introduction, we will simulate a work environment with three main actors: a company (me) and two clients owning two private datasets (Bob and Alice). In PySyft terminology, this translates to creating a worker to represent each actor. We will also need a fourth worker, the crypto provider, which provides the primitives for Secure Multi-Party Computation (SMPC) that we will use to encrypt word embeddings and the model itself before training. Let's create the workers with PySyft:
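As a rough intuition for what "holding shares" will mean throughout this tutorial, the toy example below shows plain additive secret sharing with made-up numbers. PySyft's actual SMPC protocol works on fixed-precision integers and relies on the crypto provider for operations such as multiplication, so treat this only as a sketch:
```
import random

# Toy additive secret sharing of the value 42 between two parties
Q = 2**31 - 1                       # a public modulus (illustrative)
secret = 42
share_bob = random.randrange(Q)     # looks like random noise on its own
share_alice = (secret - share_bob) % Q

# Neither share reveals the secret by itself; together they reconstruct it
assert (share_bob + share_alice) % Q == secret
```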
###Code
# Create a torch hook for PySyft
hook = sy.TorchHook(torch)
# Create some PySyft workers
me = hook.local_worker # This is the worker representing the deep learning company
bob = sy.VirtualWorker(hook, id = 'bob') # Bob owns the first dataset
alice = sy.VirtualWorker(hook, id = 'alice') # Alice owns the second dataset
crypto_provider = sy.VirtualWorker(hook, id = 'crypto_provider') # provides encryption primitive for SMPC
# Create a summary writer for logging performance with Tensorboard
writer = SummaryWriter()
###Output
_____no_output_____
###Markdown
-1. Simulating Private Datasets To simulate two private datasets owned by two different clients, Bob and Alice, we will do the following:1. Load the whole dataset in `imdb.csv` locally (the `me` worker). This dataset will be loaded as a list of dictionaries that has the following format: `[ {'review': , 'label': }, {...}, {...}]`2. Split the dataset into two parts, one for Bob and the other for Alice. Each part will also be split into a training set and a validation set. This will create four lists: `train_bob`, `val_bob`, `train_alice`, `val_alice`. Each list has the same format I mentioned above.3. Each element in the four lists will be sent to the corresponding worker. This will change the content of the lists, as depicted in **Figure(1)**. Each list will hold PySyft pointers to the texts and labels instead of the objects themselves. Figure(1): The reviews and their labels are located on Bob's and Alice's remote machines; only pointers to them are kept by the local worker (the company's machine). Let's load the dataset locally:
###Code
# Set the path to the dataset file
dataset_path = './imdb/imdb.csv'
# store the dataset as a list of dictionaries
# each dictionary has two keys, 'review' and 'label'
# the 'review' element is a PySyft String
# the 'label' element is an integer with 1 for 'positive'
# and 0 for 'negative' review
dataset_local = []
with open(dataset_path, 'r') as dataset_file:
# Create a csv reader object
reader = csv.DictReader(dataset_file)
for elem in reader:
# Create one entry
example = dict(review = String(elem['review']),
label = 1 if elem['sentiment'] == 'positive' else 0
)
# add to the local dataset
dataset_local.append(example)
###Output
_____no_output_____
###Markdown
Here is what an element in the list looks like:
###Code
example = dataset_local[10]
pprint(example)
###Output
{'label': 0,
'review': 'Phil the Alien is one of those quirky films where the humour is based around the oddness of everything rather than actual punchlines.<br /><br />At first it was very odd and pretty funny but as the movie progressed I didn\'t find the jokes or oddness funny anymore.<br /><br />Its a low budget film (thats never a problem in itself), there were some pretty interesting characters, but eventually I just lost interest.<br /><br />I imagine this film would appeal to a stoner who is currently partaking.<br /><br />For something similar but better try "Brother from another planet"'}
###Markdown
Let's check out the data types:
###Code
print(type(example['review']))
print(type(example['label']))
###Output
<class 'syft.generic.string.String'>
<class 'int'>
###Markdown
This review text is a PySyft `String` object. The label is an integer. Let's split the dataset into two equal parts and send each part to a different worker simulating two remote datasets as I mentioned above:
###Code
# Create two datasets, one for Bob, and the other for Alice
dataset_bob, dataset_alice = train_test_split(dataset_local[:25000], train_size = 0.5)
# Now create a validation set for Bob, and another for Alice
train_bob, val_bob = train_test_split(dataset_bob, train_size = 0.7)
train_alice, val_alice = train_test_split(dataset_alice, train_size = 0.7)
###Output
_____no_output_____
###Markdown
And now I will make the dataset remote:
###Code
# A function that sends the content of each split to a remote worker
def make_remote_dataset(dataset, worker):
# Go through each example in the dataset
for example in dataset:
# Send each review text
example['review'] = example['review'].send(worker)
# Encode the label as a one-hot vector
one_hot_label = torch.zeros(2).scatter(0, torch.Tensor([example['label']]).long(), 1)
# Send the review label
example['label'] = one_hot_label.send(worker)
###Output
_____no_output_____
###Markdown
Notice that the above function transforms the label into a one-hot-encoded format before sending it to a remote worker. So if the sentiment is negative, the corresponding tensor will hold `[1,0]`, and if it is positive, the label will be `[0,1]`. I can finally create the remote datasets:
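As a quick local sanity check of that encoding (plain PyTorch, nothing is sent anywhere):
```
# 0 -> 'negative' -> [1., 0.]   1 -> 'positive' -> [0., 1.]
for label in (0, 1):
    one_hot = torch.zeros(2).scatter(0, torch.Tensor([label]).long(), 1)
    print(label, one_hot)
```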
###Code
# Bob's remote dataset
make_remote_dataset(train_bob, bob)
make_remote_dataset(val_bob, bob)
# Alice's remote dataset
make_remote_dataset(train_alice, alice)
make_remote_dataset(val_alice, alice)
###Output
_____no_output_____
###Markdown
Let me show you what an element of Bob's dataset looks like:
###Code
# Take an element from the dataset
example = train_bob[10]
print(type(example['review']))
print(example['label'])
###Output
<class 'syft.generic.pointers.string_pointer.StringPointer'>
(Wrapper)>[PointerTensor | me:43565217098 -> bob:62978770308]
###Markdown
Wow, the text type is now a PySyft `StringPointer` that points to the real `String` object located on Bob's machine. The label type is a PySyft `PointerTensor`. Let's check out the location of the real text and label:
###Code
print(example['review'].location)
print(example['label'].location)
###Output
<VirtualWorker id:bob #objects:25000>
<VirtualWorker id:bob #objects:25000>
###Markdown
Well, you can see it for yourself: they are located on Bob's machine. This confirms **Figure(1)**. The datasets are now ready, and so is the work environment. Let's start the fun with SyferText :) 0. Create a `SyferText` Language object The Language object in SyferText is the master object. It orchestrates all the work done by SyferText. Let's create one:
###Code
# Create a Language object with SyferText
nlp = syfertext.load('en_core_web_lg', owner = me)
###Output
_____no_output_____
###Markdown
Whenever you create a Language object as we did above, a pipeline will be created. At initialization, a pipeline only contains a tokenizer. You can see this for yourself using the `pipeline_template` property:
###Code
nlp.pipeline_template
###Output
_____no_output_____
###Markdown
Notice that the tokenizer entry has a property called `remote` set to `True`. This means that we allow the tokenizer to be sent to a remote worker in case the string to be tokenized lives there. We can add more components to the pipeline by using the `add_pipe` method of the Language class. One component we can add is a `SimpleTagger` object. This is a SyferText object that we can use to set custom attributes on individual tokens. In this tutorial, I will create two such taggers: one that tags tokens that are stop words, and another that tags each token as polar or not. By tagging a token, I mean setting a custom attribute on that token and assigning it a given value that we call a `tag`. For example, I set an attribute called `is_stop` to the value `True` for a stop word, and to `False` otherwise. You can refer to **Figure(2)** to see how a pipeline is distributed over multiple workers when the dataset to preprocess is remote. 0.1 Create a tagger for stop words We will start by creating the stop-word tagger. Let's first load the stop-word file into a list of words:
###Code
# Load the list of stop words
with open('./imdb/stop_word_en.txt', 'r') as f:
stop_words = set(f.read().splitlines())
###Output
_____no_output_____
###Markdown
Now we create the tagger which is an object of the `SimpleTagger` class:
###Code
# Create a simple tagger object to tag stop words
stop_tagger = SimpleTagger(attribute = 'is_stop',
lookups = stop_words,
tag = True,
default_tag = False,
case_sensitive = False
)
###Output
_____no_output_____
###Markdown
Notice that I pass the list of words as the `lookups` arguments. Every token in the `Doc` object will be given a custom attribute called `is_stop`. Every time a stop word is found, this attribute will be given the value `True` specified by the `tag` argument of the `SimpleTagger` class initializer, otherwise, the `default_tag` will be used, which I set to `False`. 0.2 Create a tagger for most polar words In the same way, we created a tagger for stop words. We are now going to create another tagger for polar words, i.e., words that are more biased toward a positive or negative sentiment. Let's load the corresponding files `imdb_vocab.txt` and `imdb_polarity.txt`:
###Code
# Load the polarity info
with open('./imdb/imdb_vocab.txt', 'r') as f:
imdb_words = f.read().splitlines()
with open('./imdb/imdb_polarity.txt', 'r') as f:
polarity = [float(line) for line in f.read().splitlines()]
###Output
_____no_output_____
###Markdown
Let me show you the distribution of polarity values:
###Code
# Create the histogram of polarity values
fig, ax = plt.subplots(figsize = (10,5))
sb.distplot(polarity, kde = False, ax = ax)
ax.set_xlabel('Sentiment Polarity Value')
ax.set_ylabel('Frequency')
ax.set_title("Distribution of Polarity Values in the IMDB dataset");
###Output
_____no_output_____
###Markdown
Notice that the grand majority of words seem to be unbiased toward a specific sentiment. So let's create a tagger that tags only tokens that are most polar by setting a custom attribute we will call `is_polar` to `True` and `False` otherwise:
###Code
# Choose low/high polarity cutoff values
low_cutoff = -0.5
high_cutoff = 0.5
# Create a list of polar tokens
polar_tokens = [token for i, token in enumerate(imdb_words)
if polarity[i] > high_cutoff or
polarity[i] < low_cutoff]
###Output
_____no_output_____
###Markdown
Using the list of polar words above, we can now create the tagger:
###Code
polarity_tagger = SimpleTagger(attribute = 'is_polar',
lookups = polar_tokens,
tag = True,
default_tag = False,
case_sensitive = False
)
###Output
_____no_output_____
###Markdown
0.3 Adding the taggers to the pipeline We can now add each tagger we created above to the pipeline by using the `add_pipe()` method of the `Language` class. However, in the next cell, I give you the possibility to decide for yourself which components you wish to add.Here is what I recommend you do:1. First, run this tutorial without adding a tagger.2. Restart the notebook and rerun the tutorial with `use_stop_tagger = True`.3. Restart the notebook and run the tutorial again with both `use_stop_tagger = True` and `use_polarity_tagger = True`.I will show you the results of each such run at the end of this notebook.
###Code
use_stop_tagger = False
use_polarity_tagger = False
# Tokens with these custom tags
# will be excluded from creating
# the Doc vector
excluded_tokens = {}
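# Note: after the cells below run with both taggers enabled, this dictionary
# ends up as {'is_stop': {True}, 'is_polar': {False}}, i.e. tokens tagged as
# stop words or as non-polar are skipped when building each document vector.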
###Output
_____no_output_____
###Markdown
Notice that in the above cell, I create a dictionary called `excluded_tokens`. It will be used later in this tutorial when we create embedding vectors for reviews. It enables us to exclude some tokens when we create a document embedding. Such exclusion will be based on the value of the custom attributes we set with the taggers. Now let's add the stop word tagger to the pipeline (if `use_stop_tagger = True`). Notice that I set the argument `remote = True`. This tells the `Language` object that it is allowed to send the pipe component to the remote worker.
###Code
if use_stop_tagger:
# Add the stop word to the pipeline
nlp.add_pipe(name = 'stop tagger',
component = stop_tagger,
remote = True
)
# Tokens with 'is_stop' = True are
# not going to be used when creating the
# Doc vector
excluded_tokens['is_stop'] = {True}
###Output
_____no_output_____
###Markdown
Same for adding the polar word tagger:
###Code
if use_polarity_tagger:
# Add the polarity tagger to the pipeline
nlp.add_pipe(name = 'polarity tagger',
component = polarity_tagger,
remote = True
)
# Tokens with 'is_polar' = False are
# not going to be used when creating the
# Doc vector
excluded_tokens['is_polar'] = {False}
###Output
_____no_output_____
###Markdown
Let's check out what pipe components are included in the pipeline:
###Code
nlp.pipeline_template
###Output
_____no_output_____
###Markdown
1. Create a Dataset class Now that the remote datasets are ready for use and SyferText's `Language` object is set up with the appropriate pipeline, it's time to create data loaders that will take over the task of creating batches for training and validation. We will be using regular PyTorch data loaders to accomplish that. Each batch will be composed of a mix of training examples coming from both Bob's and Alice's datasets. Actually, for the data loader, there is only one big dataset; it is entirely ignorant of the fact that the data is distributed over different workers. Each example in the batch contains an encrypted version of one review's embedding vector and its encrypted label. For this tutorial, I compute such a vector as an average of the review's individual token vectors taken from the `en_core_web_lg` language model. Of course, all tokens with custom tags indicated in `excluded_tokens` won't be taken into account in computing a review's vector. If you look at **Figure(2)** you can see the big picture of how SyferText remotely preprocesses a single review text: 1. First, the `Language` object `nlp` is used to preprocess one review on Bob's or Alice's machine.2. The object `nlp` determines that the real review text is actually remote, so it sends a subpipeline containing the required pipeline components we defined to the corresponding worker.3. The subpipeline is run, and a `Doc` object is created on the remote worker containing the review's individual tokens, appropriately tokenized and tagged.4. On the local worker, a `DocPointer` object is created, pointing to that `Doc` object.5. By calling `get_encrypted_vector()` on the `DocPointer`, the call is forwarded to `Doc`, which, in turn, computes the `Doc` vector, encrypts it with SMPC using PySyft, and returns it to the caller at the local worker.6. The PyTorch dataloader takes this encrypted vector and appends it to the training or validation batch. Notice that at no moment in the process is the plaintext data of the remote datasets revealed to the local worker. *Privacy is preserved thanks to SyferText and PySyft!* Figure(2): A pipeline on the local worker only contains pointers to subpipelines carrying out the actual preprocessing on remote workers. All of the steps described above, except for *step 6*, are carried out in the `__getitem__()` method of the custom PyTorch `Dataset` object that I define below. Please take a few minutes to check it out:
###Code
class DatasetIMDB(Dataset):
def __init__(self, sets, share_workers, crypto_provider, nlp):
"""Initialize the Dataset object
Args:
sets (list): A list containing all training OR
all validation sets to be used.
share_workers (list): A list of workers that will
be used to hold the SMPC shares.
crypto_provider (worker): A worker that will
provide SMPC primitives for encryption.
nlp: This is SyferText's Language object containing
the preprocessing pipeline.
"""
self.sets = sets
self.crypto_provider = crypto_provider
self.workers = share_workers
# Create a single dataset unifying all datasets.
# A property called `self.dataset` is created
# as a result of this call.
self._create_dataset()
# The language model
self.nlp = nlp
def __getitem__(self, index):
"""In this function, preprocessing with SyferText
of one review will be triggered. Encryption will also
be performed and the encrypted vector will be obtained.
The encrypted label will be computed too.
Args:
index (int): This is an integer received by the
PyTorch DataLoader. It specifies the index of
the example to be fetched. This actually indexes
one example in `self.dataset` which pools over
examples of all the remote datasets.
"""
# get the example
example = self.dataset[index]
# Run the preprocessing pipeline on
# the review text and get a DocPointer object
doc_ptr = self.nlp(example['review'])
# Get the encrypted vector embedding for the document
vector_enc = doc_ptr.get_encrypted_vector(bob,
alice,
crypto_provider = self.crypto_provider,
requires_grad = True,
excluded_tokens = excluded_tokens
)
# Encrypt the target label
label_enc = example['label'].fix_precision().share(bob,
alice,
crypto_provider = self.crypto_provider,
requires_grad = True
).get()
return vector_enc, label_enc
def __len__(self):
"""Returns the combined size of all of the
remote training/validation sets.
"""
# The size of the combined datasets
return len(self.dataset)
def _create_dataset(self):
"""Create a single list unifying examples from all remote datasets
"""
# Initialize the dataset
self.dataset = []
# populate the dataset list
for dataset in self.sets:
for example in dataset:
self.dataset.append(example)
@staticmethod
def collate_fn(batch):
"""The collat_fn method to be used by the
PyTorch data loader.
"""
# Unzip the batch
vectors, targets = list(zip(*batch))
# concatenate the vectors
vectors = torch.stack(vectors)
#concatenate the labels
targets = torch.stack(targets)
return vectors, targets
###Output
_____no_output_____
###Markdown
Let's now create two such `DatasetIMDB` objects, one for training and the other for validation:
###Code
# Instantiate a training Dataset object
trainset = DatasetIMDB(sets = [train_bob,
train_alice],
share_workers = [bob, alice],
crypto_provider = crypto_provider,
nlp = nlp
)
# Instantiate a validation Dataset object
valset = DatasetIMDB(sets = [val_bob,
val_alice],
share_workers = [bob, alice],
crypto_provider = crypto_provider,
nlp = nlp
)
###Output
_____no_output_____
###Markdown
2. Create a DataLoader Let's now choose some hyper parameters for training and validation, and create the PyTorch data loaders:
###Code
# Set some hyper parameters
learning_rate = 0.001
batch_size = 32
epochs = 1
# Instantiate the DataLoader object for the training set
trainloader = DataLoader(trainset, shuffle = True,
batch_size = batch_size, num_workers = 0,
collate_fn = trainset.collate_fn)
# Instantiate the DataLoader object for the validation set
valloader = DataLoader(valset, shuffle = True,
batch_size = batch_size, num_workers = 0,
collate_fn = valset.collate_fn)
###Output
_____no_output_____
###Markdown
3. Create an Encrypted Classifier The sentiment classifier I use here is simply a linear layer with `300` input features, which is the size of the embedding vectors computed by SyferText. A ReLU activation is then applied. The network has two outputs, one for negative sentiments and the other for positive ones.
###Code
class Classifier(torch.nn.Module):
def __init__(self, in_features, out_features):
super(Classifier, self).__init__()
self.fc = torch.nn.Linear(in_features, out_features)
def forward(self, x):
logits = self.fc(x)
probs = F.relu(logits)
return probs, logits
###Output
_____no_output_____
###Markdown
I should now initialize and encrypt the classifier. Encryption here should, of course, use the same workers to hold the shares and the same primitives used to encrypt the document vectors.
###Code
# Create the classifer
classifier = Classifier(in_features = 300, out_features = 2)
# Apply SMPC encryption
classifier = classifier.fix_precision().share(bob, alice,
crypto_provider = crypto_provider,
requires_grad = True
)
print(classifier)
###Output
Classifier(
(fc): Linear(in_features=300, out_features=2, bias=True)
)
###Markdown
And finally, I create an optimizer. Notice that the optimizer does not need to be encrypted since it operates separately within each worker holding the classifier's and embeddings' shares. We need to make it operate on fixed precision numbers that are used to encode shares.
###Code
optim = optim.SGD(params = classifier.parameters(),
lr = learning_rate)
optim = optim.fix_precision()
###Output
_____no_output_____
###Markdown
4. Start training Woohoo!!! You are now ready to launch training. Notice that we use MSE as a training loss, which is not the best choice for a classification task. I chose to use it since the `NLLLoss()` is not yet implemented in PySyft for SMPC mode. But it is an issue that is currently being worked on.To view the training and validation curves for loss and accuracy, you need to run `Tensorboard`. Just open a terminal, navigate to the folder containing this notebook, and run:```$ tensorboard --logdir runs/```Then open your favorite web browser and go to `localhost:6006`.The below cell will produce no outputs. But you will be able to see performance curves on Tensorboard.
###Code
for epoch in range(epochs):
for iter, (vectors, targets) in enumerate(trainloader):
# Set train mode
classifier.train()
# Zero out previous gradients
optim.zero_grad()
# Predict sentiment probabilities
probs, logits = classifier(vectors)
# Compute loss and accuracy
loss = ((probs - targets)**2).sum()
# Get the predicted labels
preds = probs.argmax(dim=1)
targets = targets.argmax(dim=1)
# Compute the prediction accuracy
accuracy = (preds == targets).sum()
accuracy = accuracy.get().float_precision()
accuracy = 100 * (accuracy / batch_size)
# Backpropagate the loss
loss.backward()
# Update weights
optim.step()
# Decrypt the loss for logging
loss = loss.get().float_precision()
# Log to Tensorboard
writer.add_scalar('train/loss', loss, epoch * len(trainloader) + iter )
writer.add_scalar('train/acc', accuracy, epoch * len(trainloader) + iter )
""" Perform validation on exactly one batch """
# Set validation mode
classifier.eval()
for vectors, targets in valloader:
probs, logits = classifier(vectors)
loss = ((probs - targets)**2).sum()
preds = probs.argmax(dim=1)
targets = targets.argmax(dim=1)
accuracy = preds.eq(targets).sum()
accuracy = accuracy.get().float_precision()
accuracy = 100 * (accuracy / batch_size)
loss = loss.get().float_precision()
# Log to tensorboard
writer.add_scalar('val/loss', loss, epoch * len(trainloader) + iter )
writer.add_scalar('val/acc', accuracy, epoch * len(trainloader) + iter )
break
writer.close()
###Output
_____no_output_____
###Markdown
Now that training is finished, let me prove to you that as I explained in **Figure(2)**, both Bob and Alice have `SubPipeline` objects on their machines sent by SyferText that contain the pipeline components I defined above. Just run the following cells.
###Code
# On bob's machine
[bob._objects[id] for id in bob._objects if isinstance(bob._objects[id], syfertext.SubPipeline)]
# On Alices's machine
[alice._objects[id] for id in alice._objects if isinstance(alice._objects[id], syfertext.SubPipeline)]
###Output
_____no_output_____
###Markdown
Sentiment Classification - Private Datasets - (Training)------ **Author:**- Alan Aboudib: [Twitter](https://twitter.com/alan_aboudib) | [LinkedIn](https://www.linkedin.com/in/ala-aboudib/) | [Slack](https://app.slack.com/client/T6963A864/DDKH3SXKL/user_profile/UDKH3SH8S) ----- Problem Statement Suppose you run a deep learning company that provides NLP expertise. You have two clients: Bob and Alice. Each of them runs their own website where users can write reviews about movies they had watched.Bob and Alice have heard of the great services you provide and asked you to create a sentiment classifier to help them automatically assign a sentiment (positive or negative) to each user's review.Now you think that this is a really good opportunity. If you pool data from both Bob's and Alice's datasets, you would be able to create a bigger dataset that you can use to train a better classifier.But... It turns out you are not allowed to do this; both datasets are private.You are informed that privacy regulations in both Bob's and Alice's countries prevent them from revealing their data to any third party. You cannot move Bob's data to your company's machines. Same for Alice's. Each dataset is constrained to live on its owner's machine, and they cannot be mixed together to create a bigger dataset.Now you think about OpenMined, and their great library called PySyft that provides the possibility to perform Federated Learning and Encrypted Computations. With that, you will be able to train a single model on both datasets at the same time. And YOU ARE RIGHT!However, ...As you know, text datasets cannot be consumed directly for training a neural network. You need to create numerical representations of each text review before a network written with PySyft can consume it. Reviews should first be tokenized and preprocessed, and vector embeddings should be used instead of plaintext to train the network. But how can you do such preprocessing if you are not allowed to access the plaintext data? **SyferText** can help you! With SyferText, you can define preprocessing components that you can send over a network to Bob's and Alice's machines to perform preprocessing remotely, blindly and in a completely secure fashion. SyferText components do all the work from processing plaintext to obtaining its vector representation and encrypting it to hand it over to PySyft models for training. All without you accessing the data, and without the data leaving its owner's machine.If you are wondering how that works, keep on following this tutorial.**Let's summarize:**1. You need to create a bigger dataset out of Bob's and Alice's smaller datasets. *(PySyft has the arsenal for that)*2. You need to prepare and preprocess the text data on Bob's and Alice's machines without revealing it, without moving any datasets to your machine, and without the need to work directly on Bob's or Alice's machines. *(SyferText to the rescue)*For this tutorial, we are going to work with the IMDB movie review dataset. This is a public dataset. But we are going to break it into two parts and send each part to a different PySyft worker. We consider that each part is a private dataset owned by its PySyft worker. -4. Importing libraries Let's first install and import some libraries that we are going to use throughout this tutorial:
###Code
!pip install -r requirements.txt
# SyferText imports
import syfertext
from syfertext.pipeline import SimpleTagger
# Import useful utility functions for this tutorial
from utils import download_dataset
# PySyft and PyTorch import
import syft as sy
from syft.generic.string import String
import torch
import torch.nn.functional as F
from torch.utils.tensorboard import SummaryWriter
from torch.utils.data import DataLoader
from torch.utils.data import Dataset
import torch.optim as optim
# Useful imports
import numpy as np
from tqdm import tqdm
import csv
from sklearn.model_selection import train_test_split
import matplotlib.pyplot as plt
import seaborn as sb
import os
from pprint import pprint
sb.set()
###Output
_____no_output_____
###Markdown
-3. Download the Dataset (IGNORE THIS STEP IF YOU HAVE ALREADY DONE IT) The dataset will be downloaded into a folder called `./imdb` in the same directory as the current notebook. Four files are going to be downloaded:- `imdb.csv`: This is the dataset file containing 50K labeled reviews. It is a csv file composed of two columns: `review` and `sentiment`. The `review` column holds the review's text, and the `sentiment` column has one of two values: 'positive' or 'negative' to describe the overall sentiment of the review.- `stop_word_en.txt`: This is just a text file with a list of stop words according to NLTK.- `imdb_vocab.txt`: a list of all the vocabulary in the dataset. One word per line.- `imdb_polarity.txt`: It holds the polarity value of each word in `imdb_vocab.txt`. A word that appears more often in positive reviews will have a higher polarity value than one that is more frequently encountered in negative reviews.It is important to note that, for this use case, only the dataset `imdb.csv` is considered private. All other files in the above list are not under any privacy constraints.Please run the below cell in order to download the dataset.
###Code
# The URL template to all dataset files
url_template = 'https://raw.githubusercontent.com/AlanAboudib/dataset_imdb/master/%s'
# File names to be downloaded using the URL template above
files = ['imdb.csv', 'imdb_vocab.txt', 'imdb_polarity.txt', 'stop_word_en.txt']
# Construct the list of urls
urls = [url_template % file for file in files]
# The dataset name and its root folder
dataset_name = 'imdb'
root_path = './imdb'
# Create the dataset folder if it is not already there
if not os.path.exists('./imdb'):
os.mkdir('./imdb')
# Start downloading
download_dataset(dataset_name = dataset_name,
urls = urls,
root_path = root_path
)
###Output
Preparing to download dataset: `imdb` ...
###Markdown
-2. Preparing the work environment As I explained in the introduction, we will simulate a work environment with three main actors: a company (me) and two clients owning two private datasets (Bob and Alice). In PySyft terminology, this translates to creating a worker to represent each actor. We will also need a fourth worker, the crypto provider, which provides the primitives for using Secure Multi-Party Computation (SMPC) that we will apply to encrypt word embeddings and the model itself before training. Let's create the workers with PySyft:
###Code
# Create a torch hook for PySyft
hook = sy.TorchHook(torch)
# Create some PySyft workers
me = hook.local_worker # This is the worker representing the deep learning company
bob = sy.VirtualWorker(hook, id = 'bob') # Bob owns the first dataset
alice = sy.VirtualWorker(hook, id = 'alice') # Alice owns the second dataset
crypto_provider = sy.VirtualWorker(hook, id = 'crypto_provider') # provides encryption primitive for SMPC
# Create a summary writer for logging performance with Tensorboard
writer = SummaryWriter()
###Output
_____no_output_____
###Markdown
-1. Simulating Private Datasets In order to simulate two private datasets owned by two different clients, Bob and Alice, we will do the following:1. Load the whole dataset in `imdb.csv` locally (the `me` worker). This dataset will be loaded as a list of dictionaries that has the following format: `[ {'review': , 'label': }, {...}, {...}]`2. Split the dataset into two parts, one for Bob and the other for Alice. Each part will also be split into a training set and a validation set. This will create four lists: `train_bob`, `val_bob`, `train_alice`, `val_alice`. Each list has the same format I mentioned above.3. Each element in the four lists will be sent to the corresponding worker. This will change the content of the lists as depicted in **Figure(1)**. Each list will hold PySyft pointers to the texts and labels instead of the objects themselves. Figure(1): The reviews and their labels are located on Bob's and Alice's remote machines; only pointers to them are kept by the local worker (the company's machine). Let's load the dataset locally:
###Code
# Set the path to the dataset file
dataset_path = './imdb/imdb.csv'
# store the dataset as a list of dictionaries
# each dictionary has two keys, 'review' and 'label'
# the 'review' element is a PySyft String
# the 'label' element is an integer with 1 for 'positive'
# and 0 for 'negative' review
dataset_local = []
with open(dataset_path, 'r') as dataset_file:
# Create a csv reader object
reader = csv.DictReader(dataset_file)
for elem in reader:
# Create one entry
example = dict(review = String(elem['review']),
label = 1 if elem['sentiment'] == 'positive' else 0
)
# add to the local dataset
dataset_local.append(example)
###Output
_____no_output_____
###Markdown
Here is what an element in the list looks like:
###Code
example = dataset_local[10]
pprint(example)
###Output
{'label': 0,
'review': 'Phil the Alien is one of those quirky films where the humour is based around the oddness of everything rather than actual punchlines.<br /><br />At first it was very odd and pretty funny but as the movie progressed I didn\'t find the jokes or oddness funny anymore.<br /><br />Its a low budget film (thats never a problem in itself), there were some pretty interesting characters, but eventually I just lost interest.<br /><br />I imagine this film would appeal to a stoner who is currently partaking.<br /><br />For something similar but better try "Brother from another planet"'}
###Markdown
Let's check out the data types:
###Code
print(type(example['review']))
print(type(example['label']))
###Output
<class 'syft.generic.string.String'>
<class 'int'>
###Markdown
This review text is a PySyft `String` object. The label is an integer. Let's split the dataset into two equal parts and send each part to a different worker simulating two remote datasets as I mentioned above:
###Code
# Create two datasets, one for Bob, and the other for Alice
dataset_bob, dataset_alice = train_test_split(dataset_local[:25000], train_size = 0.5)
# Now create a validation set for Bob, and another for Alice
train_bob, val_bob = train_test_split(dataset_bob, train_size = 0.7)
train_alice, val_alice = train_test_split(dataset_alice, train_size = 0.7)
###Output
_____no_output_____
###Markdown
And now I will make the dataset remote:
###Code
# A function that sends the content of each split to a remote worker
def make_remote_dataset(dataset, worker):
# Go through each example in the dataset
for example in dataset:
# Send each review text
example['review'] = example['review'].send(worker)
# Create a one-hot-encoded vector for the label
one_hot_label = torch.zeros(2).scatter(0, torch.Tensor([example['label']]).long(), 1)
# Send the review label
example['label'] = one_hot_label.send(worker)
###Output
_____no_output_____
###Markdown
Notice that the above function transforms the label to a one-hot-encoded format before sending it to a remote worker. So if the sentiment is negative, the corresponding tensor will hold `[1,0]`, and if it is positive, the label will be `[0,1]`. I can finally create the remote datasets:
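Just to make that encoding concrete, here is a minimal sketch (it assumes nothing beyond the `torch` import from the setup cell) of what the `scatter` call in the function above produces for the two possible labels:
```
torch.zeros(2).scatter(0, torch.Tensor([0]).long(), 1)  # tensor([1., 0.])  <- negative
torch.zeros(2).scatter(0, torch.Tensor([1]).long(), 1)  # tensor([0., 1.])  <- positive
```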
###Code
# Bob's remote dataset
make_remote_dataset(train_bob, bob)
make_remote_dataset(val_bob, bob)
# Alice's remote dataset
make_remote_dataset(train_alice, alice)
make_remote_dataset(val_alice, alice)
###Output
_____no_output_____
###Markdown
Let me show you what an element of Bob's dataset looks like:
###Code
# Take an element from the dataset
example = train_bob[10]
print(type(example['review']))
print(example['label'])
###Output
<class 'syft.generic.pointers.string_pointer.StringPointer'>
(Wrapper)>[PointerTensor | me:43565217098 -> bob:62978770308]
###Markdown
Wow, the text type is now a PySyft `StringPointer` that points to the real `String` object located in Bob's machine. The label type is a PySyft `PointerTensor`. Let's check out the location of the real text and label:
###Code
print(example['review'].location)
print(example['label'].location)
###Output
<VirtualWorker id:bob #objects:25000>
<VirtualWorker id:bob #objects:25000>
###Markdown
Well, you can see it for yourself: they are located on Bob's machine. This confirms **Figure(1)**. The datasets are now ready, and so is the work environment. Let's start the fun with SyferText :) 0. Create a `SyferText` Language object The Language object in SyferText is the master object. It orchestrates all the work done by SyferText. Let's create one:
###Code
# Create a Language object with SyferText
nlp = syfertext.load('en_core_web_lg', owner = me)
###Output
_____no_output_____
###Markdown
Whenever you create a Language object as we did above, a pipeline will be created. At initialization, a pipeline only contains a tokenizer. You can see this for yourself using the `pipeline_template` property:
###Code
nlp.pipeline_template
###Output
_____no_output_____
###Markdown
Notice that the tokenizer entry has a property called `remote` set to `True`. This means that we allow the tokenizer to be sent to a remote worker in case the string to be tokenized lives there.We can add more components to the pipeline by using the `add_pipe` method of the Language class. One component we can add is a `SimpleTagger` object. This is a SyferText object that we can use to set custom attributes on individual tokens. In this tutorial, I will create two such taggers: one that tags tokens that are stop words, and another that tags each token as polar or not. By tagging a token, I mean setting a custom attribute to that token and assigning it a given value that we call a `tag`. For example, I set an attribute called `is_stop` with a value `True` for a stop word, and `False` otherwise.You can refer to **Figure(2)** to see how a pipeline is distributed on multiple workers when the dataset to preprocess is remote. 0.1 Create a tagger for stop words We will start by creating the stop-word tagger. Let's first load the stop word file into a list of words:
###Code
# Load the list of stop words
with open('./imdb/stop_word_en.txt', 'r') as f:
stop_words = set(f.read().splitlines())
###Output
_____no_output_____
###Markdown
Now we create the tagger which is an object of the `SimpleTagger` class:
###Code
# Create a simple tagger object to tag stop words
stop_tagger = SimpleTagger(attribute = 'is_stop',
lookups = stop_words,
tag = True,
default_tag = False,
case_sensitive = False
)
###Output
_____no_output_____
###Markdown
Notice that I pass the list of words as the `lookups` argument. Every token in the `Doc` object will be given a custom attribute called `is_stop`. Every time a stop word is found, this attribute will be given the value `True` specified by the `tag` argument of the `SimpleTagger` class initialiser; otherwise, the `default_tag` will be used, which I set to `False`. 0.2 Create a tagger for most polar words In the same way we created a tagger for stop words, we are now going to create another tagger for polar words, i.e., words that are more biased toward a positive or a negative sentiment. Let's load the corresponding files `imdb_vocab.txt` and `imdb_polarity.txt`:
###Code
# Load the polarity info
with open('./imdb/imdb_vocab.txt', 'r') as f:
imdb_words = f.read().splitlines()
with open('./imdb/imdb_polarity.txt', 'r') as f:
polarity = [float(line) for line in f.read().splitlines()]
###Output
_____no_output_____
###Markdown
Let me show you the distribution of polarity values:
###Code
# Create the histogram of polarity values
fig, ax = plt.subplots(figsize = (10,5))
sb.distplot(polarity, kde = False, ax = ax)
ax.set_xlabel('Sentiment Polarity Value')
ax.set_ylabel('Frequency')
ax.set_title("Distribution of Polarity Values in the IMDB dataset");
###Output
_____no_output_____
###Markdown
Notice that the vast majority of words seem to be unbiased toward a specific sentiment. So let's create a tagger that tags only the most polar tokens by setting a custom attribute we will call `is_polar` to `True`, and to `False` otherwise:
###Code
# Choose low/high polarity cutoff values
low_cutoff = -0.5
high_cutoff = 0.5
# Create a list of polar tokens
polar_tokens = [token for i, token in enumerate(imdb_words)
if polarity[i] > high_cutoff or
polarity[i] < low_cutoff]
###Output
_____no_output_____
###Markdown
Using the list of polar words above, we can now create the tagger:
###Code
polarity_tagger = SimpleTagger(attribute = 'is_polar',
lookups = polar_tokens,
tag = True,
default_tag = False,
case_sensitive = False
)
###Output
_____no_output_____
###Markdown
0.3 Adding the taggers to the pipeline We can now add each tagger we created above to the pipeline by using the `add_pipe()` method of the `Language` class. However, in the following cell, I give you the possibility to decide for yourself which components you wish to add.Here is what I recommend you do:1. First run this tutorial without adding any tagger.2. Restart the notebook and run the tutorial again with `use_stop_tagger = True`.3. Restart the notebook and run the tutorial again with both `use_stop_tagger = True` and `use_polarity_tagger = True`.I will actually show you the results of each such run at the end of this notebook.
###Code
use_stop_tagger = False
use_polarity_tagger = False
# Tokens with these custom tags
# will be excluded from creating
# the Doc vector
excluded_tokens = {}
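# Note: if both flags above are set to True, the cells further down will
# populate this dictionary as {'is_stop': {True}, 'is_polar': {False}}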
###Output
_____no_output_____
###Markdown
Notice that in the above cell I create a dictionary called `excluded_tokens`. It will be used later in this tutorial when we create embedding vectors for reviews. It enables us to exclude some tokens when we create a document embedding. Such exclusion will be based on the value of the custom attributes we set with the taggers.Now let's add the stop word tagger to the pipeline (if `use_stop_tagger = True`). Notice that I set the argument `remote = True`. This tells the `Language` object that it is allowed to send the pipe component to the remote worker.
###Code
if use_stop_tagger:
# Add the stop word to the pipeline
nlp.add_pipe(name = 'stop tagger',
component = stop_tagger,
remote = True
)
# Tokens with 'is_stop' = True are
# not going to be used when creating the
# Doc vector
excluded_tokens['is_stop'] = {True}
###Output
_____no_output_____
###Markdown
Same for adding the polar word tagger:
###Code
if use_polarity_tagger:
# Add the polarity tagger to the pipeline
nlp.add_pipe(name = 'polarity tagger',
component = polarity_tagger,
remote = True
)
# Tokens with 'is_polar' = False are
# not going to be used when creating the
# Doc vector
excluded_tokens['is_polar'] = {False}
###Output
_____no_output_____
###Markdown
Let's check out what pipe components are included in the pipeline:
###Code
nlp.pipeline_template
###Output
_____no_output_____
###Markdown
1. Create a Dataset class Now that we have the remote datasets ready for use, and SyferText's `Language` object is set up with the appropriate pipeline, it's time to create data loaders that will take over the task of creating batches for training and validation.We will be using regular PyTorch data loaders to accomplish that. Each batch will be composed of a mix of training examples coming from both Bob's and Alice's datasets. Actually, as far as the data loader is concerned, there is only one big dataset; it is completely ignorant of the fact that data is distributed over different workers. Each example in the batch contains an encrypted version of one review's embedding vector and its encrypted label. For this tutorial, I compute such a vector as an average of the review's individual token vectors taken from the `en_core_web_lg` language model. Of course, all tokens with custom tags indicated in `excluded_tokens` won't be taken into account in computing a review's vector.If you look at **Figure(2)** you can see the big picture of how a single review text is remotely preprocessed by SyferText: 1. First, the `Language` object `nlp` is used to preprocess one review on Bob's or Alice's machine.2. The object `nlp` determines that the real review text is actually remote, so it sends a subpipeline containing the required pipeline components we defined to the corresponding worker.3. The subpipeline is run and a `Doc` object is created on the remote worker containing the review's individual tokens appropriately tokenized and tagged.4. On the local worker, a `DocPointer` object is created pointing to that `Doc` object.5. By calling `get_encrypted_vector()` on the `DocPointer`, the call is forwarded to `Doc`, which, in turn, computes the `Doc` vector, encrypts it with SMPC using PySyft and returns it to the caller at the local worker.6. The PyTorch dataloader takes this encrypted vector and appends it to the training or validation batch.Notice that at no point in the process is the plaintext data of the remote datasets revealed to the local worker. *Privacy is preserved thanks to SyferText and PySyft!* Figure(2): A pipeline on the local worker only contains pointers to subpipelines carrying out the actual preprocessing on remote workers. All of the steps described above, except for *step 6*, are carried out in the `__getitem__()` method of the custom PyTorch `Dataset` object that I define below. Please take a few minutes to check it out:
###Code
class DatasetIMDB(Dataset):
def __init__(self, sets, share_workers, crypto_provider, nlp):
"""Initialize the Dataset object
Args:
sets (list): A list containing all training OR
all validation sets to be used.
share_workers (list): A list of workers that will
be used to hold the SMPC shares.
crypto_provider (worker): A worker that will
provide SMPC primitives for encryption.
nlp: This is SyferText's Language object containing
the preprocessing pipeline.
"""
self.sets = sets
self.crypto_provider = crypto_provider
self.workers = share_workers
# Create a single dataset unifying all datasets.
# A property called `self.dataset` is created
# as a result of this call.
self._create_dataset()
# The language model
self.nlp = nlp
def __getitem__(self, index):
"""In this function, preprocessing with SyferText
of one review will be triggered. Encryption will also
be performed and the encrypted vector will be obtained.
The encrypted label will be computed too.
Args:
index (int): This is an integer received by the
PyTorch DataLoader. It specifies the index of
the example to be fetched. This actually indexes
one example in `self.dataset` which pools over
examples of all the remote datasets.
"""
# get the example
example = self.dataset[index]
# Run the preprocessing pipeline on
# the review text and get a DocPointer object
doc_ptr = self.nlp(example['review'])
# Get the encrypted vector embedding for the document
vector_enc = doc_ptr.get_encrypted_vector(bob,
alice,
crypto_provider = self.crypto_provider,
requires_grad = True,
excluded_tokens = excluded_tokens
)
# Encrypt the target label
label_enc = example['label'].fix_precision().share(bob,
alice,
crypto_provider = self.crypto_provider,
requires_grad = True
).get()
return vector_enc, label_enc
def __len__(self):
"""Returns the combined size of all of the
remote training/validation sets.
"""
# The size of the combined datasets
return len(self.dataset)
def _create_dataset(self):
"""Create a single list unifying examples from all remote datasets
"""
# Initialize the dataset
self.dataset = []
# populate the dataset list
for dataset in self.sets:
for example in dataset:
self.dataset.append(example)
@staticmethod
def collate_fn(batch):
"""The collat_fn method to be used by the
PyTorch data loader.
"""
# Unzip the batch
vectors, targets = list(zip(*batch))
# concatenate the vectors
vectors = torch.stack(vectors)
#concatenate the labels
targets = torch.stack(targets)
return vectors, targets
###Output
_____no_output_____
###Markdown
Let's now create two such `DatasetIMDB` objects, one for training and the other for validation:
###Code
# Instantiate a training Dataset object
trainset = DatasetIMDB(sets = [train_bob,
train_alice],
share_workers = [bob, alice],
crypto_provider = crypto_provider,
nlp = nlp
)
# Instantiate a validation Dataset object
valset = DatasetIMDB(sets = [val_bob,
val_alice],
share_workers = [bob, alice],
crypto_provider = crypto_provider,
nlp = nlp
)
###Output
_____no_output_____
###Markdown
2. Create a DataLoader Let's now choose some hyper parameters for training and validation, and create the PyTorch data loaders:
###Code
# Set some hyper parameters
learning_rate = 0.001
batch_size = 32
epochs = 1
# Instantiate the DataLoader object for the training set
trainloader = DataLoader(trainset, shuffle = True,
batch_size = batch_size, num_workers = 0,
collate_fn = trainset.collate_fn)
# Instantiate the DataLoader object for the validation set
valloader = DataLoader(valset, shuffle = True,
batch_size = batch_size, num_workers = 0,
collate_fn = valset.collate_fn)
###Output
_____no_output_____
###Markdown
3. Create an Encrypted Classifier The sentiment classifier I use here is simply a linear layer with `300` input features, which is the size of the embedding vectors computed by SyferText. A ReLU activation is then applied. The network has two outputs, one for negative sentiments and the other for positive ones.
###Code
class Classifier(torch.nn.Module):
def __init__(self, in_features, out_features):
super(Classifier, self).__init__()
self.fc = torch.nn.Linear(in_features, out_features)
def forward(self, x):
logits = self.fc(x)
probs = F.relu(logits)
return probs, logits
###Output
_____no_output_____
###Markdown
I should now initialize and encrypt the classifier. Encryption here should of course use the same workers to hold the shares and the same primitives used to encrypt the document vectors.
###Code
# Create the classifer
classifier = Classifier(in_features = 300, out_features = 2)
# Apply SMPC encryption
classifier = classifier.fix_precision().share(bob, alice,
crypto_provider = crypto_provider,
requires_grad = True
)
print(classifier)
###Output
Classifier(
(fc): Linear(in_features=300, out_features=2, bias=True)
)
###Markdown
And finally I create an optimizer. Notice that the optimizer does not need to be encrypted, since it operates separately within each worker holding the classifier's and embeddings' shares. We just need to make it operate on fixed precision numbers that are used to encode shares.
###Code
optim = optim.SGD(params = classifier.parameters(),
lr = learning_rate)
optim = optim.fix_precision()
###Output
_____no_output_____
###Markdown
4. Start training Woohoo!!! You are now ready to launch training. Notice that we use MSE as a training loss, which is not the best choice for a classification task. I chose to use it since the `NLLLoss()` is not yet implemented in PySyft for SMPC mode. But it is an issue that is currently being worked on.In order to view the training and validation curves for loss and accuracy, you need to run `Tensorboard`. Just open a terminal, navigate to the folder containing this notebook, and run:```$ tensorboard --logdir runs/```Then open your favorite web browser and go to `localhost:6006`.The below cell will produce no outputs. But you will be able to see performance curves on Tensorboard.
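To make the arithmetic of that loss concrete, here is a tiny plaintext sketch (purely illustrative: made-up numbers, no encryption, nothing beyond the `torch` import from the setup cell) of the `((probs - targets)**2).sum()` expression used in the loop below:
```
probs = torch.tensor([[0.2, 0.8], [0.6, 0.4]])  # made-up outputs for a batch of two reviews
targets = torch.tensor([[0., 1.], [1., 0.]])    # their one-hot labels
loss = ((probs - targets) ** 2).sum()           # tensor(0.4000)
```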
###Code
for epoch in range(epochs):
for iter, (vectors, targets) in enumerate(trainloader):
# Set train mode
classifier.train()
# Zero out previous gradients
optim.zero_grad()
# Predict sentiment probabilities
probs, logits = classifier(vectors)
# Compute loss and accuracy
loss = ((probs - targets)**2).sum()
# Get the predicted labels
preds = probs.argmax(dim=1)
targets = targets.argmax(dim=1)
# Compute the prediction accuracy
accuracy = (preds == targets).sum()
accuracy = accuracy.get().float_precision()
accuracy = 100 * (accuracy / batch_size)
# Backpropagate the loss
loss.backward()
# Update weights
optim.step()
# Decrypt the loss for logging
loss = loss.get().float_precision()
# Log to Tensorboard
writer.add_scalar('train/loss', loss, epoch * len(trainloader) + iter )
writer.add_scalar('train/acc', accuracy, epoch * len(trainloader) + iter )
""" Perform validation on exactly one batch """
# Set validation mode
classifier.eval()
for vectors, targets in valloader:
probs, logits = classifier(vectors)
loss = ((probs - targets)**2).sum()
preds = probs.argmax(dim=1)
targets = targets.argmax(dim=1)
accuracy = preds.eq(targets).sum()
accuracy = accuracy.get().float_precision()
accuracy = 100 * (accuracy / batch_size)
loss = loss.get().float_precision()
# Log to tensorboard
writer.add_scalar('val/loss', loss, epoch * len(trainloader) + iter )
writer.add_scalar('val/acc', accuracy, epoch * len(trainloader) + iter )
break
writer.close()
###Output
_____no_output_____
###Markdown
Now that training is finished, let me prove to you that, as I explained in **Figure(2)**, both Bob and Alice have `SubPipeline` objects on their machines, sent by SyferText, that contain the pipeline components I defined above. Just run the following cells.
###Code
# On bob's machine
[bob._objects[id] for id in bob._objects if isinstance(bob._objects[id], syfertext.SubPipeline)]
# On Alices's machine
[alice._objects[id] for id in alice._objects if isinstance(alice._objects[id], syfertext.SubPipeline)]
###Output
_____no_output_____
|
tests/notebooks/simtool/test_simtool.ipynb
|
###Markdown
SimTool TestTest of a simulation tool that accepts a bunch of different input types and writes different outputs.
###Code
DESCRIPTION = "Sample notebook testing and documentation"
%load_ext yamlmagic
import numpy as np
from simtool import DB
EXTRA_FILES = ["nanoHUB_logo_color.png"]
%%yaml INPUTS
some_text:
desc: Text to Write in Output Image
type: Text
maxlen: 20
value: 'Default Text'
volts:
desc: Value to Write in Output Image
type: Number
units: mV
value: 200
max: 1000
width:
desc: Width of Output Image in pixels
type: Integer
value: 400
min: 100
max: 2000
height:
desc: Height of Output Image in pixels
type: Integer
value: 200
min: 50
max: 1000
position:
desc: Position of text in image [x, y] in pixels
type: List
value: [20, 20]
options:
desc: Color and Font Size Options.
type: Dict
value: {'FontSize': 28, 'FontColor': 'red', 'Background': 'black'}
myarray:
type: Array
dim: 1
value: [ 0. , 0.2, 0.4, 0.6, 0.8, 1. , 1.2, 1.4, 1.6, 1.8, 2. ,
2.2, 2.4, 2.6, 2.8, 3. , 3.2, 3.4, 3.6, 3.8, 4. , 4.2,
4.4, 4.6, 4.8, 5. , 5.2, 5.4, 5.6, 5.8, 6. , 6.2, 6.4,
6.6, 6.8, 7. , 7.2, 7.4, 7.6, 7.8, 8. , 8.2, 8.4, 8.6,
8.8, 9. , 9.2, 9.4, 9.6, 9.8]
%%yaml OUTPUTS
volts:
desc: Input 'volts' returned from SimTool
type: Number
units: mV
myarray:
desc: The array that was input, doubled.
type: Array
PNG:
desc: Image as a PNG
type: Image
JPG:
desc: Image as a JPG
type: Image
GIF:
desc: Image as a GIF
type: Image
nanohub:
desc: Our logo!
type: Image
from simtool import getValidatedInputs
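# Validate the YAML-declared INPUTS and, if validation succeeds, load their
# default values into the notebook's global namespace (done by the two lines below)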
defaultInputs = getValidatedInputs(INPUTS)
if defaultInputs:
globals().update(defaultInputs)
###Output
_____no_output_____
###Markdown
**** Computation is Done Below ****
###Code
db = DB(OUTPUTS)
db.save('volts', volts)
db.save('volts', volts, display=True)
myarray = np.array(myarray)
db.save('myarray', myarray * 2)
db.save('myarray', myarray * 4.1, display=True)
# Generate output images for our SimTool based on input parameters
import PIL.Image
import PIL.ImageDraw
import PIL.ImageFont
img = PIL.Image.new('RGB', (width, height), color=options['Background'])
d = PIL.ImageDraw.Draw(img)
try:
font = PIL.ImageFont.truetype("/usr/share/fonts/truetype/inconsolata/Inconsolata.otf", options['FontSize'], encoding="unic")
except:
font = PIL.ImageFont.load_default()
d.text(position, '%s : %smV' % (some_text, volts), font=font, fill=options['FontColor'])
img.save('foo.png')
db.save('PNG', file='foo.png', display=True)
img = PIL.Image.new('RGB', (width, height), color=options['Background'])
d = PIL.ImageDraw.Draw(img)
d.text(position, '%s : %smV (JPG)' % (some_text, volts), font=font, fill=options['FontColor'])
# img.save('foo.jpg')
db.save('JPG', img, display=True)
img = PIL.Image.new('RGB', (width, height), color=options['Background'])
d = PIL.ImageDraw.Draw(img)
d.text(position, '%s : %smV (GIF)' % (some_text, volts), font=font, fill=options['FontColor'])
img.save('foo.gif')
db.save('GIF', file='foo.gif')
db.save('nanohub', file='nanoHUB_logo_color.png', display=True)
###Output
_____no_output_____
|
notebooks/Machine_Learning_with_Scikit_Learn.ipynb
|
###Markdown
Learn with us: www.zerotodeeplearning.comCopyright © 2021: Zero to Deep Learning ® Catalit LLC.
###Code
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Machine Learning with Scikit Learn
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
url = "https://raw.githubusercontent.com/zerotodeeplearning/ztdl-masterclasses/master/data/"
###Output
_____no_output_____
###Markdown
Regression
###Code
df = pd.read_csv(url + 'weight-height.csv')
df.head()
sns.scatterplot(data=df, x='Height', y='Weight', hue='Gender');
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
X = df[['Height']].values
y = df['Weight'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LinearRegression()
model.fit(X_train, y_train)
model.score(X_train, y_train)
model.score(X_test, y_test)
###Output
_____no_output_____
###Markdown
Exercise 1More features: `sqft`, `bdrms`, `age`, `price`- replace the dataset above with `housing-data.csv`- adapt the code so that there are no errors: - plot it using `sns.pairplot` - add more columns in the feature definition `X = ...`- train and evaluate the model- bonus points if you try with a different model like `Ridge` or `Lasso` Classification
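One possible sketch for Exercise 1, before moving on to classification (it assumes `housing-data.csv` lives at the same base `url` and uses the column names listed above; adjust them if the file differs):
```
df_housing = pd.read_csv(url + 'housing-data.csv')
sns.pairplot(df_housing)

X = df_housing[['sqft', 'bdrms', 'age']].values
y = df_housing['price'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

from sklearn.linear_model import Ridge  # or Lasso, for the bonus point
model = Ridge()
model.fit(X_train, y_train)
print(model.score(X_train, y_train), model.score(X_test, y_test))
```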
###Code
df = pd.read_csv(url + 'isp_data.csv')
df.head()
sns.scatterplot(data=df, x='download', y='upload', hue='label');
X = df[['download', 'upload']].values
y = df['label'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
from sklearn.tree import DecisionTreeClassifier, plot_tree
from sklearn.metrics import confusion_matrix
model = DecisionTreeClassifier(max_depth=3)
model.fit(X_train, y_train)
model.score(X_train, y_train)
model.score(X_test, y_test)
y_pred = model.predict(X)
confusion_matrix(y, y_pred)
wrong_pred = X[y != y_pred]
fig, ax = plt.subplots(figsize=(20, 10))
plot_tree(model, fontsize=14, ax=ax, rounded=True, feature_names=['download', 'upload']);
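# Plot the model's decision regions: build a dense grid over the feature space,
# predict a class for every grid point, and shade the background by predicted class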
def plot_decision_boundary(model, X, ax):
x_min = X[:, 0].min() - 0.1
x_max = X[:, 0].max() + 0.1
y_min = X[:, 1].min() - 0.1
y_max = X[:, 1].max() + 0.1
hticks = np.linspace(x_min, x_max, 101)
vticks = np.linspace(y_min, y_max, 101)
aa, bb = np.meshgrid(hticks, vticks)
ab = np.c_[aa.ravel(), bb.ravel()]
c = model.predict(ab)
cc = c.reshape(aa.shape)
ax.contourf(aa, bb, cc, cmap='bwr', alpha=0.2)
ax = sns.scatterplot(data=df, x='download', y='upload', hue='label');
ax.plot(wrong_pred[:, 0], wrong_pred[:, 1], 'or', markersize=10, alpha=0.4);
plot_decision_boundary(model, X, ax)
###Output
_____no_output_____
###Markdown
Exercise 2Use a different classifier. Replace the `DecisionTreeClassifier` with another classifier, e.g.:- `LogisticRegression`- `SVC`- `RandomForestClassifier`or any other model you can find here: https://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.htmland compare their behavior with the decision tree. Clustering
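One possible sketch for Exercise 2, before moving on to clustering (it simply swaps in `LogisticRegression` and reuses the split and plotting helper defined above):
```
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)
print(model.score(X_train, y_train), model.score(X_test, y_test))

ax = sns.scatterplot(data=df, x='download', y='upload', hue='label')
plot_decision_boundary(model, X, ax)
```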
###Code
df = pd.read_csv(url + '/iris.csv')
df.head()
df.plot.scatter(x='sepal_length', y='petal_length', title='Iris Flowers');
X = df.drop('species', axis=1).values
from sklearn.cluster import KMeans
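# Fit K-Means with 2 clusters on the four iris measurements;
# the fitted cluster centers come back in the original feature space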
model = KMeans(2)
model.fit(X)
centers = model.cluster_centers_
centers
plt.scatter(df.sepal_length, df.petal_length, c=model.labels_)
plt.scatter(centers[:,0], centers[:,2], marker='o', c='r', s=100)
plt.xlabel('sepal_length')
plt.ylabel('petal_length');
###Output
_____no_output_____
|
American_Universities/New_York_University/New_York_University.ipynb
|
###Markdown
Full Code
###Code
lis=['Course','URL','University']
info=[]
info.append(lis)
from selenium import webdriver
import csv
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--no-sandbox')
driver = webdriver.Chrome("C:\\Users\MOHAN KUMAR SAH\Documents\My Work\PakkaIndia\chromedriver",chrome_options=chrome_options)
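# Loop over the first 12 result pages (page=0..11) of the NYU Wagner course
# search and collect each course title and URL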
for i in range(12):
driver.get('https://wagner.nyu.edu/education/courses/search?search_api_fulltext=&field_course_semesters_offered=All&page='+str(i))
data1=driver.find_elements_by_css_selector('div.views-field.views-field-title')
for j in range(len(data1)):
c=data1[j].find_element_by_tag_name('a').text
url=data1[j].find_element_by_tag_name('a').get_attribute('href')
uni="New_York_University"
info.append([c,url,uni])
print(c,url,uni)
len(info)
with open('New_York_University.csv','w',encoding="utf-8",newline="") as file:
write=csv.writer(file)
for row in info:
write.writerow(row)
###Output
_____no_output_____
|
CarND-Term3-P2-Semantic-Segmentation.ipynb
|
###Markdown
Import libraries
###Code
import os.path
import tensorflow as tf
import helper
import warnings
from distutils.version import LooseVersion
import project_tests as tests
import time
###Output
_____no_output_____
###Markdown
Check TensorFlow Version
###Code
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
###Output
TensorFlow Version: 1.2.1
###Markdown
Check for a GPU
###Code
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
###Output
Default GPU Device: /gpu:0
###Markdown
Define load_vgg()
###Code
def load_vgg(sess, vgg_path):
"""
Load Pretrained VGG Model into TensorFlow.
:param sess: TensorFlow Session
:param vgg_path: Path to vgg folder, containing "variables/" and "saved_model.pb"
:return: Tuple of Tensors from VGG model (image_input, keep_prob, layer3_out, layer4_out, layer7_out)
"""
# TODO: Implement function
# Use tf.saved_model.loader.load to load the model and weights
vgg_tag = 'vgg16'
vgg_input_tensor_name = 'image_input:0'
vgg_keep_prob_tensor_name = 'keep_prob:0'
vgg_layer3_out_tensor_name = 'layer3_out:0'
vgg_layer4_out_tensor_name = 'layer4_out:0'
vgg_layer7_out_tensor_name = 'layer7_out:0'
# Load the saved model
tf.saved_model.loader.load(sess, [vgg_tag], vgg_path)
# Get the tensor layers by name
graph = tf.get_default_graph()
image_input = graph.get_tensor_by_name(vgg_input_tensor_name)
keep_prob = graph.get_tensor_by_name(vgg_keep_prob_tensor_name)
layer3_out = graph.get_tensor_by_name(vgg_layer3_out_tensor_name)
layer4_out = graph.get_tensor_by_name(vgg_layer4_out_tensor_name)
layer7_out = graph.get_tensor_by_name(vgg_layer7_out_tensor_name)
return image_input, keep_prob, layer3_out, layer4_out, layer7_out
###Output
_____no_output_____
###Markdown
Run test
###Code
tests.test_load_vgg(load_vgg, tf)
###Output
Tests Passed
###Markdown
Define layers()
###Code
def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
"""
Create the layers for a fully convolutional network. Build skip-layers using the vgg layers.
:param vgg_layer3_out: TF Tensor for VGG Layer 3 output
:param vgg_layer4_out: TF Tensor for VGG Layer 4 output
:param vgg_layer7_out: TF Tensor for VGG Layer 7 output
:param num_classes: Number of classes to classify
:return: The Tensor for the last layer of output
"""
# TODO: Implement function
# Here we will use FCN-8 architecture developed at Berkeley. (https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf)
# Here is the decoder (upsampling + skip connection) architecture
# conv7 = Do convolution on layer 7
# upsampled_conv7 = Upsample conv7
# conv4 = Do convolution on layer 4
# skip4 = Connect upsampled_conv7 to conv4
# upsampled_skip4 = Upsample skip4
# conv3 = Do convolution on layer 3
# skip3 = Connect upsampled_skip4 to conv3
# upsampled_skip3 = Upsample skip3
# output = upsampled_skip3
# Set standard deviation of weights
weights_stddev = 0.01
# Set L2 regularizer of weights
weights_l2_regularizer = 1e-3
# Do 1x1 convolution on vgg16 layer 7
conv7 = tf.layers.conv2d(vgg_layer7_out, filters = num_classes, kernel_size = 1, strides = (1,1), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Upsample conv7
upsampled_conv7 = tf.layers.conv2d_transpose(conv7, filters = num_classes, kernel_size = 4, strides = (2, 2), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Do 1x1 convolution on vgg16 layer 4
conv4 = tf.layers.conv2d(vgg_layer4_out, filters = num_classes, kernel_size = 1, strides = (1,1), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Do skip connection between upsampled_conv7 and conv4
skip4 = tf.add(upsampled_conv7, conv4)
# Upsample skip4
upsampled_skip4 = tf.layers.conv2d_transpose(skip4, filters = num_classes, kernel_size = 4, strides = (2, 2), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Do 1x1 convolution on vgg16 layer 3
conv3 = tf.layers.conv2d(vgg_layer3_out, filters = num_classes, kernel_size = 1, strides = (1,1), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Do skip connection between upsampled_skip4 and conv3
skip3 = tf.add(upsampled_skip4, conv3)
# Upsample skip3
upsampled_skip3 = tf.layers.conv2d_transpose(skip3, filters = num_classes, kernel_size = 16, strides = (8, 8), padding = 'same',
kernel_initializer = tf.random_normal_initializer(stddev = weights_stddev),
kernel_regularizer = tf.contrib.layers.l2_regularizer(weights_l2_regularizer)
)
# Output is the upsampled_skip3
output = upsampled_skip3
return output
###Output
_____no_output_____
###Markdown
Run test
###Code
tests.test_layers(layers)
###Output
Tests Passed
###Markdown
Define optimize()
###Code
def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
"""
Build the TensorFLow loss and optimizer operations.
:param nn_last_layer: TF Tensor of the last layer in the neural network
:param correct_label: TF Placeholder for the correct label image
:param learning_rate: TF Placeholder for the learning rate
:param num_classes: Number of classes to classify
:return: Tuple of (logits, train_op, cross_entropy_loss)
"""
# TODO: Implement function
# Remember the output tensor is 4D so we have to reshape it to 2D
# logits is now a 2D tensor where each row represents a pixel and each column a class.
logits = tf.reshape(nn_last_layer, (-1, num_classes))
# Reshape correct_label tensor to 2D
labels = tf.reshape(correct_label, (-1, num_classes))
# We can just use standard cross entropy loss function
cross_entropy_loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels = labels, logits = logits))
# Use Adam optimizer for training
optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate)
train_op = optimizer.minimize(cross_entropy_loss)
return logits, train_op, cross_entropy_loss
###Output
_____no_output_____
###Markdown
Run test
###Code
tests.test_optimize(optimize)
###Output
Tests Passed
###Markdown
Define train_nn()
###Code
def train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image,
correct_label, keep_prob, learning_rate):
"""
Train neural network and print out the loss during training.
:param sess: TF Session
:param epochs: Number of epochs
:param batch_size: Batch size
:param get_batches_fn: Function to get batches of training data. Call using get_batches_fn(batch_size)
:param train_op: TF Operation to train the neural network
:param cross_entropy_loss: TF Tensor for the amount of loss
:param input_image: TF Placeholder for input images
:param correct_label: TF Placeholder for label images
:param keep_prob: TF Placeholder for dropout keep probability
:param learning_rate: TF Placeholder for learning rate
"""
# TODO: Implement function
# Run global variables initializer
sess.run(tf.global_variables_initializer())
# Start training
print("Training...")
print()
for epoch in range(epochs):
# Print result for record
print("EPOCH {} ...".format(epoch+1))
start_time = time.time()
for image, label in get_batches_fn(batch_size):
# Training
_, loss = sess.run([train_op, cross_entropy_loss],
feed_dict = {input_image: image, correct_label: label,
keep_prob: 0.5, learning_rate: 0.00001
}
)
print("Loss = {:.3f}".format(loss))
elapsed_time = time.time() - start_time
print("Elapsed time = {:.3f}".format(elapsed_time))
print()
# Finish training
print("Training finished.")
###Output
_____no_output_____
###Markdown
Run test
###Code
tests.test_train_nn(train_nn)
###Output
INFO:tensorflow:Restoring parameters from b'./data/vgg/variables/variables'
###Markdown
Define run()
###Code
def run():
num_classes = 2
image_shape = (160, 576)
data_dir = './data'
runs_dir = './runs'
tests.test_for_kitti_dataset(data_dir)
# Download pretrained vgg model
helper.maybe_download_pretrained_vgg(data_dir)
# OPTIONAL: Train and Inference on the cityscapes dataset instead of the Kitti dataset.
# You'll need a GPU with at least 10 teraFLOPS to train on.
# https://www.cityscapes-dataset.com/
with tf.Session() as sess:
# Path to vgg model
vgg_path = os.path.join(data_dir, 'vgg')
# Create function to get batches
get_batches_fn = helper.gen_batch_function(os.path.join(data_dir, 'data_road/training'), image_shape)
# OPTIONAL: Augment Images for better results
# https://datascience.stackexchange.com/questions/5224/how-to-prepare-augment-images-for-neural-network
# TODO: Build NN using load_vgg, layers, and optimize function
# Create placeholders
correct_label = tf.placeholder(tf.int32, [None, None, None, num_classes], name = 'correct_label')
learning_rate = tf.placeholder(tf.float32, name = 'learning_rate')
# Load the layers from the VGG16
input_image, keep_prob, layer3_out, layer4_out, layer7_out = load_vgg(sess, vgg_path)
# Construct new layers
output_layer = layers(layer3_out, layer4_out, layer7_out, num_classes)
# TODO: Train NN using the train_nn function
# Define optimizer
logits, train_op, cross_entropy_loss = optimize(output_layer, correct_label, learning_rate, num_classes)
# Define training epochs and batch size
epochs = 60
batch_size = 5
# print('Before training')
# Start training
train_nn(sess, epochs, batch_size, get_batches_fn, train_op, cross_entropy_loss, input_image, correct_label, keep_prob, learning_rate)
# print('After training')
print('Before saving inference data')
# TODO: Save inference data using helper.save_inference_samples
helper.save_inference_samples(runs_dir, data_dir, sess, image_shape, logits, keep_prob, input_image)
print('After saving inference data')
# OPTIONAL: Apply the trained model to a video
###Output
_____no_output_____
###Markdown
Run training
###Code
if __name__ == '__main__':
run()
###Output
_____no_output_____
|
tutorials/Tutorial_dataset_with_DEBIAI/Tutorial_dataset_with_DEBIAI.ipynb
|
###Markdown
DEBIAI Getting started: 1- Data importing from a CSV file, creation of a DEBIAI project, insertion of the data into the project, statistical analysis; 2- Simple model training, insertion of two model results into DEBIAI, statistical model comparison, creation of a new data selection; 3- Training of two new models, results comparison, conclusion.
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
from debiai import debiai
###Output
2021-09-01 17:19:33.205437: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory
2021-09-01 17:19:33.205460: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
###Markdown
Download the CSV file containing a simple wine quality dataset. P. Cortez, A. Cerdeira, F. Almeida, T. Matos and J. Reis. Modeling wine preferences by data mining from physicochemical properties. In Decision Support Systems, Elsevier, 47(4):547-553, 2009. https://archive.ics.uci.edu/ml/datasets/Wine+Quality
###Code
csv_file = tf.keras.utils.get_file('winequality.csv', 'https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-white.csv')
###Output
_____no_output_____
###Markdown
Read the csv file using pandas.
###Code
df = pd.read_csv(csv_file, delimiter=';')
df
###Output
_____no_output_____
###Markdown
Insert data into DEBIAI for a first-pass statistical analysis
###Code
# Creation of the DEBIAI wine quality project block structure
DEBIAI_block_structure = [
{
"name": "sampleId",
"inputs": [
{ "name": "fixed acidity", "type": "number"},
{ "name": "volatile acidity", "type": "number"},
{ "name": "citric acid", "type": "number"},
{ "name": "residual sugar", "type": "number"},
{ "name": "chlorides", "type": "number"},
{ "name": "free sulfur dioxide", "type": "number"},
{ "name": "total sulfur dioxide", "type": "number"},
{ "name": "density", "type": "number"},
{ "name": "pH", "type": "number"},
{ "name": "sulphates", "type": "number"},
{ "name": "alcohol", "type": "number"},
],
"groundTruth": [
{ "name": "quality", "type": "number"},
]
}
]
# Add an unique value column to the dataframe
df.insert(0, "sampleId", range(len(df.index)), True)
df.dtypes
###Output
_____no_output_____
###Markdown
Insert the dataframe into DEBIAI
###Code
DEBIAI_BACKEND_URL = 'http://localhost:3000/'
DEBIAI_PROJECT_NAME = 'winequality demo'
my_debiai = debiai.Debiai(DEBIAI_BACKEND_URL)
# Create or recreate the project
debiai_project = my_debiai.get_project(DEBIAI_PROJECT_NAME)
if debiai_project:
# Deleting the project if already existing
my_debiai.delete_project_byId(DEBIAI_PROJECT_NAME)
debiai_project = my_debiai.create_project(DEBIAI_PROJECT_NAME)
debiai_project.set_blockstructure(DEBIAI_block_structure)
# Add the dataframe
print("Adding the dataframe ~ sec")
debiai_project.add_samples_pd(df, get_hash=False)
###Output
Adding the dataframe ~ sec
Adding samples : [========================================] 100% 4898/4898 1s
###Markdown
The input data and the project are now ready to be analysed in the dashboard. Statistical analysis: model training. Load data using `tf.data.Dataset`.
###Code
trainingDf = df.copy()
trainingDf.pop('sampleId')
target = trainingDf.pop('quality')
dataset = tf.data.Dataset.from_tensor_slices((trainingDf.to_numpy(), target.values))
###Output
2021-09-01 17:19:41.245673: I tensorflow/compiler/jit/xla_cpu_device.cc:41] Not creating XLA devices, tf_xla_enable_xla_devices not set
2021-09-01 17:19:41.251452: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
2021-09-01 17:19:41.254283: W tensorflow/stream_executor/cuda/cuda_driver.cc:326] failed call to cuInit: UNKNOWN ERROR (303)
2021-09-01 17:19:41.258155: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (tomansion-HP-EliteBook-840-G4): /proc/driver/nvidia/version does not exist
2021-09-01 17:19:41.283075: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-09-01 17:19:41.293863: I tensorflow/compiler/jit/xla_gpu_device.cc:99] Not creating XLA devices, tf_xla_enable_xla_devices not set
###Markdown
Shuffle and batch the dataset.
###Code
train_dataset = dataset.shuffle(len(trainingDf)).batch(1)
###Output
_____no_output_____
###Markdown
Create and train two models
###Code
def get_compiled_model():
model = tf.keras.Sequential([
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10, activation='relu'),
tf.keras.layers.Dense(10)
])
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
return model
model1 = get_compiled_model()
model1.fit(train_dataset, epochs=2)
model2 = get_compiled_model()
model2.fit(train_dataset, epochs=1)
from scipy.special import softmax
def predict_from_pd(trainingDf, model):
inp = trainingDf.to_numpy()
preds = model.predict(inp)
return pd.concat([pd.DataFrame(
[
[str(i), str(np.argmax(pred)), str(
round(np.max(softmax(pred)) * 100, 2))]
], columns=["sampleId", "prediction", "percent"])
for (i, pred) in enumerate(preds)], ignore_index=True)
results1 = predict_from_pd(trainingDf, model1)
results1
results2 = predict_from_pd(trainingDf, model2)
results2
###Output
_____no_output_____
###Markdown
Insert the model results into DEBIAI for a statistical analysis of the results
###Code
# debiai_project.delete_model("Model 2e")
# debiai_project.delete_model("Model 4e")
# Creating the two DEBIAI models
DEBIAI_model_name1 = "Model 2e"
DEBIAI_model_name2 = "Model 4e"
debiai_model1 = debiai_project.create_model(DEBIAI_model_name1)
debiai_model2 = debiai_project.create_model(DEBIAI_model_name2)
# Set the DEBIAI expected_results structure.
DEBIAI_result_struct = [
{ "name": "prediction", "type": "number" },
{ "name": "percent", "type": "number" }
]
debiai_project.set_expected_results(DEBIAI_result_struct)
# Add the model results
debiai_model1.add_results_df(results1)
debiai_model2.add_results_df(results2)
###Output
Adding results : [========================================] 100% 4898/4898 Model 2e 6s
Adding results : [========================================] 100% 4898/4898 Model 4e 1s
###Markdown
The model results should now appear on the dashboard. Model performance analysis. DEBIAI dataset generation: generation of a smaller, less biased dataset based on the last models' errors, selected with the dashboard.
###Code
debiai_project = my_debiai.get_project('winequality demo')
debiai_project.get_selections()
selection = debiai_project.get_selection('less biased')
selection
# Loading the selection as a dataframe
selection_df = selection.get_dataframe()
print(selection_df)
print(selection_df.dtypes)
selection_df.pop('sampleId')
target = selection_df.pop('quality')
dataset2 = tf.data.Dataset.from_tensor_slices((selection_df.to_numpy(), target.values))
train_dataset2 = dataset2.shuffle(len(selection_df)).batch(1)
model3 = get_compiled_model()
model3.fit(train_dataset2, epochs=2)
model4 = get_compiled_model()
model4.fit(train_dataset2, epochs=4)
results3 = predict_from_pd(trainingDf, model3)
results3
results4 = predict_from_pd(trainingDf, model4)
results4
# Creating the two DEBIAI models
DEBIAI_model_name3 = "Model LB 2e"
DEBIAI_model_name4 = "Model LB 4e"
debiai_model3 = debiai_project.create_model(DEBIAI_model_name3)
debiai_model4 = debiai_project.create_model(DEBIAI_model_name4)
# Add the model results
debiai_model3.add_results_df(results3)
debiai_model4.add_results_df(results4)
###Output
Adding results : [========================================] 100% 4898/4898 Model LB 2e 0s
Adding results : [========================================] 100% 4898/4898 Model LB 4e 1s
###Markdown
The new model results should now appear on the dashboard. Second model performance analysis. Training on a dataset loaded directly from the DEBIAI selection.
###Code
train_dataset_imported = selection.get_tf_dataset()
train_dataset_imported = train_dataset_imported.shuffle(selection.nbSamples).batch(1)
model5 = get_compiled_model()
model5.fit(train_dataset_imported, epochs=15)
results5 = predict_from_pd(trainingDf, model5)
results5
# Creating the last DEBIAI model
debiai_model5 = debiai_project.create_model("Model LB 15e")
# Add the model results
debiai_model5.add_results_df(results5)
###Output
Adding results : [========================================] 100% 4898/4898 Model LB 15e 3s
|
docs/notebooks/simulation/example_PAT_simulations.ipynb
|
###Markdown
One and two electron Hamiltonian. This model is valid for a double-dot system tuned to the transition from (1,0) to (0,1), or, with two electrons, from (1,1) to (2,0). Authors: Pieter Eendebak ([email protected]), Bruno Buijtendorp ([email protected])
###Code
import numpy as np
import matplotlib.pyplot as plt
import sympy as sp
%matplotlib inline
sp.init_printing(use_latex='latex')
###Output
_____no_output_____
###Markdown
One electron Hamiltonian. Define the 1-electron double-dot Hamiltonian: $e$ is the detuning, $t$ is the tunnel coupling. The basis we work in is (1,0) and (0,1).
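For reference, a short worked derivation (added here, not part of the original notebook): the matrix defined in the next cell is $$H = \begin{pmatrix} e/2 & t \\ t & -e/2 \end{pmatrix},$$ whose eigenvalues are $E_{\pm} = \pm\sqrt{e^2/4 + t^2}$, so the gap at zero detuning ($e = 0$) is $2t$, which is the avoided crossing plotted below.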
###Code
e, t = sp.symbols('e t')
H = sp.Matrix([[e/2, t],[t, -e/2]])
sp.pprint(H)
#%% Get normalized eigenvectors and eigenvalues
eigvec_min = H.eigenvects()[0][2][0].normalized()
eigval_min = H.eigenvects()[0][0]
eigvec_plus = H.eigenvects()[1][2][0].normalized()
eigval_plus = H.eigenvects()[1][0]
#%% Lambdify eigenvalues to make them numerical functions of e and t (nicer plotting)
eigval_min_func = sp.lambdify((e,t), eigval_min , 'numpy')
eigval_plus_func = sp.lambdify((e,t), eigval_plus, 'numpy')
#%% Plot energy levels
t_value = 1
plot_x_limit = 5
Npoints_x = 1000
erange = np.linspace(-plot_x_limit, plot_x_limit, Npoints_x)
levelfig, levelax = plt.subplots()
levelax.plot(erange, eigval_min_func(erange , t_value), label='$S-$')
levelax.plot(erange, eigval_plus_func(erange, t_value), label ='$S+$')
levelax.set_title('Energy levels for double-dot in one-electron regime, t = %.1f' % t_value)
plt.plot(erange, erange/2, ':c', label='avoided crossing')
plt.plot(erange, -erange/2, ':c')
plt.legend()
levelax.set_xlabel('detuning ($\mu eV$)')
levelax.set_ylabel('energy ($\mu eV$)')
_=plt.axis('tight')
#%% Plot energy level differences
SminS = eigval_plus_func(erange , t_value) - eigval_min_func(erange, t_value)
plt.figure()
plt.plot(erange, SminS, label='$E_{S_+} - E_{S_-}$')
plt.title('Energy transitions for double-dot in one-electron regime, t = %.1f $\mu eV$' % (t_value))
plt.legend()
plt.ylabel('$\Delta E$ $ (\mu eV)$')
plt.xlabel('$\epsilon$ $ (\mu eV)$')
#%% Get S(1,0) component of eigenvectors
eigcomp_min = eigvec_min[0]
eigcomp_plus = eigvec_plus[0]
#%% Plot S(1,0) components squared (probabilities) of eigenvectors as function of detuning
t_value = 1
erange = np.linspace(-20,20,500)
plot_x_limit = 20
# Lambdify eigenvector components to make them functions of e and t
eigcompmin_func = sp.lambdify((e,t), eigcomp_min , 'numpy')
eigcompplus_func = sp.lambdify((e,t), eigcomp_plus, 'numpy')
fig2, ax2 = plt.subplots()
ax2.plot(erange,eigcompmin_func(erange, t_value)**2, label='$S_-$')
ax2.plot(erange,eigcompplus_func(erange, t_value)**2, label='$S_+$')
ax2.set_xlabel('detuning, ($\mu$eV)')
ax2.set_ylabel('(1,0) coefficient squared')
_=plt.legend()
###Output
_____no_output_____
###Markdown
Two-electron Hamiltonian. Define the 2-electron double-dot Hamiltonian: $e$ is the detuning, $t$ is the tunnel coupling. The basis we work in is {S(2,0), S(1,1), T(1,1)}.
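For reference, a short worked derivation (added here, not part of the original notebook): the matrix defined in the next cell is $$H = \begin{pmatrix} e & \sqrt{2}\,t & 0 \\ \sqrt{2}\,t & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix},$$ with eigenvalues $E_T = 0$ for the uncoupled triplet and $E_{\pm} = e/2 \pm \sqrt{e^2/4 + 2t^2}$ for the two singlet branches.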
###Code
e, t = sp.symbols('e t')
# Basis: {S(2,0), S(1,1), T(1,1)}
H = sp.Matrix([[e, sp.sqrt(2)*t, 0],[sp.sqrt(2)*t, 0, 0],[0, 0, 0]])
#%% Get normalized eigenvectors and eigenvalues
eigvec_min = H.eigenvects()[1][2][0].normalized()
eigval_min = H.eigenvects()[1][0]
eigvec_plus = H.eigenvects()[2][2][0].normalized()
eigval_plus = H.eigenvects()[2][0]
eigvec_T = H.eigenvects()[0][2][0].normalized()
eigval_T = H.eigenvects()[0][0]
#%% Lambdify eigenvalues to make them numerical functions of e and t (nicer plotting)
eigval_min_func = sp.lambdify((e,t), eigval_min , 'numpy')
eigval_plus_func = sp.lambdify((e,t), eigval_plus, 'numpy')
#%% Plot energy levels
t_value = 1
plot_x_limit = 5
Npoints_x = 1000
erange = np.linspace(-plot_x_limit, plot_x_limit, Npoints_x)
levelfig, levelax = plt.subplots()
levelax.plot(erange, [eigval_T]*len(erange), label='T(1,1)')
levelax.plot(erange, eigval_min_func(erange , t_value), label='$S_-$')
levelax.plot(erange, eigval_plus_func(erange, t_value), label ='$S_+$')
levelax.set_title('Energy levels for double-dot in two-electron regime, t = %.1f' % t_value)
plt.legend()
levelax.set_xlabel('detuning ($\mu eV$)')
levelax.set_ylabel('energy ($\mu eV$)')
plt.axis('tight')
#%% Plot energy level differences
SminS = eigval_plus_func(erange , t_value) - eigval_min_func(erange, t_value)
S20minT = eigval_plus_func(erange, t_value)
TminS11 = -eigval_min_func(erange, t_value)
plt.figure()
plt.plot(erange, SminS, label='$E_{S_+} - E_{S_-}$')
plt.plot(erange, S20minT, label = '$E_{S_+} - E_T$')
plt.plot(erange, TminS11, label = '$E_T - E_{S_-}$')
plt.title('Energy transitions for double-dot in two-electron regime, t = %.1f $\mu eV$' % (t_value))
plt.legend()
plt.ylabel('$\Delta E$ $ (\mu eV)$')
plt.xlabel('$\epsilon$ $ (\mu eV)$')
#%% Get S(2,0) component of eigenvectors
eigcomp_min = eigvec_min[0]
eigcomp_plus = eigvec_plus[0]
eigcomp_T = eigvec_T[0]
#%% Plot S(2,0) components squared (probabilities) of eigenvectors as function of detuning
t_value = 1
erange = np.linspace(-20,20,500)
plot_x_limit = 20
# Lambdify eigenvector components to make them functions of e and t
eigcompmin_func = sp.lambdify((e,t), eigcomp_min , 'numpy')
eigcompplus_func = sp.lambdify((e,t), eigcomp_plus, 'numpy')
fig2, ax2 = plt.subplots()
ax2.plot(erange,eigcompmin_func(erange, t_value)**2, label='$S_-$')
ax2.plot(erange,eigcompplus_func(erange, t_value)**2, label='$S_+$')
ax2.plot(erange,[eigcomp_T]*len(erange), label='$T$')
ax2.set_xlabel('Detuning ($\mu$eV)')
ax2.set_ylabel('S(2,0) coefficient squared')
_=plt.legend()
###Output
_____no_output_____
|
SIR Models.ipynb
|
###Markdown
The SIR epidemic model. A simple mathematical description of the spread of a disease in a population is the so-called SIR model, which divides the (fixed) population of N individuals into three "compartments" which may vary as a function of time, t: S(t) are those susceptible but not yet infected with the disease; I(t) is the number of infectious individuals; R(t) are those individuals who have recovered from the disease and now have immunity to it. The SIR model describes the change in the population of each of these compartments in terms of two parameters, β and γ. β describes the effective contact rate of the disease: an infected individual comes into contact with βN other individuals per unit time (of which the fraction that are susceptible to contracting the disease is S/N). γ is the mean recovery rate: that is, 1/γ is the mean period of time during which an infected individual can pass it on. The following Python code integrates these equations for a disease characterised by parameters β=0.2, 1/γ=10 days in a population of N=1000 (perhaps 'flu in a school). The model is started with a single infected individual on day 0: I(0)=1. The plotted curves of S(t), I(t) and R(t) are styled to look a bit nicer than Matplotlib's defaults.
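For concreteness, the coupled ordinary differential equations integrated by the `deriv` function below are $$\frac{dS}{dt} = -\frac{\beta S I}{N}, \qquad \frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I, \qquad \frac{dR}{dt} = \gamma I,$$ so that $S + I + R = N$ stays constant over time.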
###Code
import numpy as np
from scipy.integrate import odeint
import matplotlib.pyplot as plt
# Total population, N.
N = 1000
# Initial number of infected and recovered individuals, I0 and R0.
I0, R0 = 1, 0
# Everyone else, S0, is susceptible to infection initially.
S0 = N - I0 - R0
# Contact rate, beta, and mean recovery rate, gamma, (in 1/days).
beta, gamma = 0.2, 1./10
# A grid of time points (in days)
t = np.linspace(0, 160, 160)
# The SIR model differential equations.
def deriv(y, t, N, beta, gamma):
S, I, R = y
dSdt = -beta * S * I / N
dIdt = beta * S * I / N - gamma * I
dRdt = gamma * I
return dSdt, dIdt, dRdt
# Initial conditions vector
y0 = S0, I0, R0
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma))
S, I, R = ret.T
# Plot the data on three separate curves for S(t), I(t) and R(t)
fig = plt.figure(facecolor='w')
ax = fig.add_subplot(111, axisbelow=True)
ax.plot(t, S/1000, 'b', alpha=0.5, lw=2, label='Susceptible')
ax.plot(t, I/1000, 'r', alpha=0.5, lw=2, label='Infected')
ax.plot(t, R/1000, 'g', alpha=0.5, lw=2, label='Recovered with immunity')
ax.set_xlabel('Time /days')
ax.set_ylabel('Number (1000s)')
ax.set_ylim(0,1.2)
ax.yaxis.set_tick_params(length=0)
ax.xaxis.set_tick_params(length=0)
ax.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
plt.show()
###Output
_____no_output_____
|
python/d2l-en/tensorflow/chapter_attention-mechanisms/transformer.ipynb
|
###Markdown
Transformer :label:`sec_transformer` We have compared CNNs, RNNs, and self-attention in :numref:`subsec_cnn-rnn-self-attention`. Notably, self-attention enjoys both parallel computation and the shortest maximum path length. Therefore, naturally, it is appealing to design deep architectures by using self-attention. Unlike earlier self-attention models that still rely on RNNs for input representations :cite:`Cheng.Dong.Lapata.2016,Lin.Feng.Santos.ea.2017,Paulus.Xiong.Socher.2017`, the transformer model is solely based on attention mechanisms without any convolutional or recurrent layer :cite:`Vaswani.Shazeer.Parmar.ea.2017`. Though originally proposed for sequence to sequence learning on text data, transformers have been pervasive in a wide range of modern deep learning applications, such as in areas of language, vision, speech, and reinforcement learning. Model: as an instance of the encoder-decoder architecture, the overall architecture of the transformer is presented in :numref:`fig_transformer`. As we can see, the transformer is composed of an encoder and a decoder. Different from Bahdanau attention for sequence to sequence learning in :numref:`fig_s2s_attention_details`, the input (source) and output (target) sequence embeddings are added with positional encoding before being fed into the encoder and the decoder that stack modules based on self-attention. :width:`500px` :label:`fig_transformer` Now we provide an overview of the transformer architecture in :numref:`fig_transformer`. On a high level, the transformer encoder is a stack of multiple identical layers, where each layer has two sublayers (either is denoted as $\mathrm{sublayer}$). The first is a multi-head self-attention pooling and the second is a positionwise feed-forward network. Specifically, in the encoder self-attention, queries, keys, and values are all from the outputs of the previous encoder layer. Inspired by the ResNet design in :numref:`sec_resnet`, a residual connection is employed around both sublayers. In the transformer, for any input $\mathbf{x} \in \mathbb{R}^d$ at any position of the sequence, we require that $\mathrm{sublayer}(\mathbf{x}) \in \mathbb{R}^d$ so that the residual connection $\mathbf{x} + \mathrm{sublayer}(\mathbf{x}) \in \mathbb{R}^d$ is feasible. This addition from the residual connection is immediately followed by layer normalization :cite:`Ba.Kiros.Hinton.2016`. As a result, the transformer encoder outputs a $d$-dimensional vector representation for each position of the input sequence. The transformer decoder is also a stack of multiple identical layers with residual connections and layer normalizations. Besides the two sublayers described in the encoder, the decoder inserts a third sublayer, known as the encoder-decoder attention, between these two. In the encoder-decoder attention, queries are from the outputs of the previous decoder layer, and the keys and values are from the transformer encoder outputs. In the decoder self-attention, queries, keys, and values are all from the outputs of the previous decoder layer. However, each position in the decoder is allowed to only attend to all positions in the decoder up to that position. This *masked* attention preserves the auto-regressive property, ensuring that the prediction only depends on those output tokens that have been generated. We have already described and implemented multi-head attention based on scaled dot-products in :numref:`sec_multihead-attention` and positional encoding in :numref:`subsec_positional-encoding`. In the following, we will implement the rest of the transformer model.
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
from d2l import tensorflow as d2l
###Output
_____no_output_____
###Markdown
[**Positionwise Feed-Forward Networks**] The positionwise feed-forward network transforms the representation at all the sequence positions using the same MLP. This is why we call it *positionwise*. In the implementation below, the input `X` with shape (batch size, number of time steps or sequence length in tokens, number of hidden units or feature dimension) will be transformed by a two-layer MLP into an output tensor of shape (batch size, number of time steps, `ffn_num_outputs`).
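In equation form, what the `PositionWiseFFN` class below computes at every position is (a standard formulation, consistent with its dense-ReLU-dense structure) $\mathrm{FFN}(\mathbf{x}) = \max(0,\, \mathbf{x}\mathbf{W}_1 + \mathbf{b}_1)\,\mathbf{W}_2 + \mathbf{b}_2$, with the same weights shared across all positions.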
###Code
#@save
class PositionWiseFFN(tf.keras.layers.Layer):
"""Positionwise feed-forward network."""
def __init__(self, ffn_num_hiddens, ffn_num_outputs, **kwargs):
super().__init__(*kwargs)
self.dense1 = tf.keras.layers.Dense(ffn_num_hiddens)
self.relu = tf.keras.layers.ReLU()
self.dense2 = tf.keras.layers.Dense(ffn_num_outputs)
def call(self, X):
return self.dense2(self.relu(self.dense1(X)))
###Output
_____no_output_____
###Markdown
The following example shows that [**the innermost dimension of a tensor changes**] to the number of outputs in the positionwise feed-forward network. Since the same MLP transforms at all the positions, when the inputs at all these positions are the same, their outputs are also identical.
###Code
ffn = PositionWiseFFN(4, 8)
ffn(tf.ones((2, 3, 4)))[0]
###Output
_____no_output_____
###Markdown
Residual Connection and Layer Normalization. Now let us focus on the "add & norm" component in :numref:`fig_transformer`. As we described at the beginning of this section, this is a residual connection immediately followed by layer normalization. Both are key to effective deep architectures. In :numref:`sec_batch_norm`, we explained how batch normalization recenters and rescales across the examples within a minibatch. Layer normalization is the same as batch normalization except that the former normalizes across the feature dimension. Despite its pervasive applications in computer vision, batch normalization is usually empirically less effective than layer normalization in natural language processing tasks, whose inputs are often variable-length sequences. The following code snippet [**compares the normalization across different dimensions by layer normalization and batch normalization**].
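As a reminder (standard definitions, not taken from this notebook): for a single example $\mathbf{x}$, layer normalization computes $\mathrm{LN}(\mathbf{x}) = \boldsymbol{\gamma} \odot \dfrac{\mathbf{x} - \hat{\mu}}{\hat{\sigma}} + \boldsymbol{\beta}$, where the mean $\hat{\mu}$ and standard deviation $\hat{\sigma}$ are computed over the feature dimension of that example, whereas batch normalization computes them over the minibatch dimension for each feature.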
###Code
ln = tf.keras.layers.LayerNormalization()
bn = tf.keras.layers.BatchNormalization()
X = tf.constant([[1, 2], [2, 3]], dtype=tf.float32)
print('layer norm:', ln(X), '\nbatch norm:', bn(X, training=True))
###Output
layer norm: tf.Tensor(
[[-0.998006 0.9980061]
[-0.9980061 0.998006 ]], shape=(2, 2), dtype=float32)
batch norm: tf.Tensor(
[[-0.998006 -0.9980061 ]
[ 0.9980061 0.99800587]], shape=(2, 2), dtype=float32)
###Markdown
Now we can implement the `AddNorm` class [**using a residual connection followed by layer normalization**]. Dropout is also applied for regularization.
###Code
#@save
class AddNorm(tf.keras.layers.Layer):
"""Residual connection followed by layer normalization."""
def __init__(self, normalized_shape, dropout, **kwargs):
super().__init__(**kwargs)
self.dropout = tf.keras.layers.Dropout(dropout)
self.ln = tf.keras.layers.LayerNormalization(normalized_shape)
def call(self, X, Y, **kwargs):
return self.ln(self.dropout(Y, **kwargs) + X)
###Output
_____no_output_____
###Markdown
The residual connection requires that the two inputs are of the same shape so that [**the output tensor also has the same shape after the addition operation**].
###Code
add_norm = AddNorm([1, 2], 0.5) # Normalized_shape is: [i for i in range(len(input.shape))][1:]
add_norm(tf.ones((2, 3, 4)), tf.ones((2, 3, 4)), training=False).shape
###Output
_____no_output_____
###Markdown
Encoder. With all the essential components to assemble the transformer encoder, let us start by implementing [**a single layer within the encoder**]. The following `EncoderBlock` class contains two sublayers: multi-head self-attention and positionwise feed-forward networks, where a residual connection followed by layer normalization is employed around both sublayers.
###Code
#@save
class EncoderBlock(tf.keras.layers.Layer):
"""Transformer encoder block."""
def __init__(self, key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_hiddens, num_heads, dropout, bias=False, **kwargs):
super().__init__(**kwargs)
self.attention = d2l.MultiHeadAttention(key_size, query_size, value_size, num_hiddens,
num_heads, dropout, bias)
self.addnorm1 = AddNorm(norm_shape, dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm2 = AddNorm(norm_shape, dropout)
def call(self, X, valid_lens, **kwargs):
Y = self.addnorm1(X, self.attention(X, X, X, valid_lens, **kwargs), **kwargs)
return self.addnorm2(Y, self.ffn(Y), **kwargs)
###Output
_____no_output_____
###Markdown
As we can see, [**any layer in the transformer encoder does not change the shape of its input.**]
###Code
X = tf.ones((2, 100, 24))
valid_lens = tf.constant([3, 2])
norm_shape = [i for i in range(len(X.shape))][1:]
encoder_blk = EncoderBlock(24, 24, 24, 24, norm_shape, 48, 8, 0.5)
encoder_blk(X, valid_lens, training=False).shape
###Output
_____no_output_____
###Markdown
In the following [**transformer encoder**] implementation, we stack `num_layers` instances of the above `EncoderBlock` classes. Since we use the fixed positional encoding whose values are always between -1 and 1, we multiply values of the learnable input embeddings by the square root of the embedding dimension to rescale before summing up the input embedding and the positional encoding.
###Code
#@save
class TransformerEncoder(d2l.Encoder):
"""Transformer encoder."""
def __init__(self, vocab_size, key_size, query_size, value_size,
num_hiddens, norm_shape, ffn_num_hiddens, num_heads,
num_layers, dropout, bias=False, **kwargs):
super().__init__(**kwargs)
self.num_hiddens = num_hiddens
self.embedding = tf.keras.layers.Embedding(vocab_size, num_hiddens)
self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
self.blks = [EncoderBlock(
key_size, query_size, value_size, num_hiddens, norm_shape,
ffn_num_hiddens, num_heads, dropout, bias) for _ in range(
num_layers)]
def call(self, X, valid_lens, **kwargs):
# Since positional encoding values are between -1 and 1, the embedding
# values are multiplied by the square root of the embedding dimension
# to rescale before they are summed up
X = self.pos_encoding(self.embedding(X) * tf.math.sqrt(
tf.cast(self.num_hiddens, dtype=tf.float32)), **kwargs)
self.attention_weights = [None] * len(self.blks)
for i, blk in enumerate(self.blks):
X = blk(X, valid_lens, **kwargs)
self.attention_weights[
i] = blk.attention.attention.attention_weights
return X
###Output
_____no_output_____
###Markdown
Below we specify hyperparameters to [**create a two-layer transformer encoder**]. The shape of the transformer encoder output is (batch size, number of time steps, `num_hiddens`).
###Code
encoder = TransformerEncoder(200, 24, 24, 24, 24, [1, 2], 48, 8, 2, 0.5)
encoder(tf.ones((2, 100)), valid_lens, training=False).shape
###Output
_____no_output_____
###Markdown
Decoder. As shown in :numref:`fig_transformer`, [**the transformer decoder is composed of multiple identical layers**]. Each layer is implemented in the following `DecoderBlock` class, which contains three sublayers: decoder self-attention, encoder-decoder attention, and positionwise feed-forward networks. These sublayers employ a residual connection around them followed by layer normalization. As we described earlier in this section, in the masked multi-head decoder self-attention (the first sublayer), queries, keys, and values all come from the outputs of the previous decoder layer. When training sequence-to-sequence models, tokens at all the positions (time steps) of the output sequence are known. However, during prediction the output sequence is generated token by token; thus, at any decoder time step only the generated tokens can be used in the decoder self-attention. To preserve auto-regression in the decoder, its masked self-attention specifies `dec_valid_lens` so that any query only attends to all positions in the decoder up to the query position.
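As a small worked example of the masking tensor built below (assuming `batch_size` = 2 and `num_steps` = 4): `dec_valid_lens` is [[1, 2, 3, 4], [1, 2, 3, 4]], so during training the query at decoding step $j$ may only attend to key-value pairs at positions $1, \ldots, j$.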
###Code
class DecoderBlock(tf.keras.layers.Layer):
# The `i`-th block in the decoder
def __init__(self, key_size, query_size, value_size, num_hiddens,
norm_shape, ffn_num_hiddens, num_heads, dropout, i, **kwargs):
super().__init__(**kwargs)
self.i = i
self.attention1 = d2l.MultiHeadAttention(key_size, query_size, value_size, num_hiddens, num_heads, dropout)
self.addnorm1 = AddNorm(norm_shape, dropout)
self.attention2 = d2l.MultiHeadAttention(key_size, query_size, value_size, num_hiddens, num_heads, dropout)
self.addnorm2 = AddNorm(norm_shape, dropout)
self.ffn = PositionWiseFFN(ffn_num_hiddens, num_hiddens)
self.addnorm3 = AddNorm(norm_shape, dropout)
def call(self, X, state, **kwargs):
enc_outputs, enc_valid_lens = state[0], state[1]
# During training, all the tokens of any output sequence are processed
# at the same time, so `state[2][self.i]` is `None` as initialized.
# When decoding any output sequence token by token during prediction,
# `state[2][self.i]` contains representations of the decoded output at
# the `i`-th block up to the current time step
if state[2][self.i] is None:
key_values = X
else:
key_values = tf.concat((state[2][self.i], X), axis=1)
state[2][self.i] = key_values
if kwargs["training"]:
batch_size, num_steps, _ = X.shape
# Shape of `dec_valid_lens`: (`batch_size`, `num_steps`), where
# every row is [1, 2, ..., `num_steps`]
dec_valid_lens = tf.repeat(tf.reshape(tf.range(1, num_steps + 1),
shape=(-1, num_steps)), repeats=batch_size, axis=0)
else:
dec_valid_lens = None
# Self-attention
X2 = self.attention1(X, key_values, key_values, dec_valid_lens, **kwargs)
Y = self.addnorm1(X, X2, **kwargs)
# Encoder-decoder attention. Shape of `enc_outputs`: (`batch_size`, `num_steps`, `num_hiddens`)
Y2 = self.attention2(Y, enc_outputs, enc_outputs, enc_valid_lens, **kwargs)
Z = self.addnorm2(Y, Y2, **kwargs)
return self.addnorm3(Z, self.ffn(Z), **kwargs), state
###Output
_____no_output_____
###Markdown
To facilitate scaled dot-product operations in the encoder-decoder attention and addition operations in the residual connections, [**the feature dimension (`num_hiddens`) of the decoder is the same as that of the encoder.**]
###Code
decoder_blk = DecoderBlock(24, 24, 24, 24, [1, 2], 48, 8, 0.5, 0)
X = tf.ones((2, 100, 24))
state = [encoder_blk(X, valid_lens), valid_lens, [None]]
decoder_blk(X, state, training=False)[0].shape
###Output
_____no_output_____
###Markdown
Now we [**construct the entire transformer decoder**] composed of `num_layers` instances of `DecoderBlock`. In the end, a fully-connected layer computes the prediction for all the `vocab_size` possible output tokens. Both of the decoder self-attention weights and the encoder-decoder attention weights are stored for later visualization.
###Code
class TransformerDecoder(d2l.AttentionDecoder):
def __init__(self, vocab_size, key_size, query_size, value_size,
num_hiddens, norm_shape, ffn_num_hiddens, num_heads, num_layers, dropout, **kwargs):
super().__init__(**kwargs)
self.num_hiddens = num_hiddens
self.num_layers = num_layers
self.embedding = tf.keras.layers.Embedding(vocab_size, num_hiddens)
self.pos_encoding = d2l.PositionalEncoding(num_hiddens, dropout)
self.blks = [DecoderBlock(key_size, query_size, value_size, num_hiddens, norm_shape,
ffn_num_hiddens, num_heads, dropout, i) for i in range(num_layers)]
self.dense = tf.keras.layers.Dense(vocab_size)
def init_state(self, enc_outputs, enc_valid_lens, *args):
return [enc_outputs, enc_valid_lens, [None] * self.num_layers]
def call(self, X, state, **kwargs):
X = self.pos_encoding(self.embedding(X) * tf.math.sqrt(tf.cast(self.num_hiddens, dtype=tf.float32)), **kwargs)
self._attention_weights = [[None] * len(self.blks) for _ in range(2)] # 2 Attention layers in decoder
for i, blk in enumerate(self.blks):
X, state = blk(X, state, **kwargs)
# Decoder self-attention weights
self._attention_weights[0][i] = blk.attention1.attention.attention_weights
# Encoder-decoder attention weights
self._attention_weights[1][i] = blk.attention2.attention.attention_weights
return self.dense(X), state
@property
def attention_weights(self):
return self._attention_weights
###Output
_____no_output_____
###Markdown
[**Training**] Let us instantiate an encoder-decoder model by following the transformer architecture. Here we specify that both the transformer encoder and the transformer decoder have 2 layers using 4-head attention. Similar to :numref:`sec_seq2seq_training`, we train the transformer model for sequence to sequence learning on the English-French machine translation dataset.
###Code
num_hiddens, num_layers, dropout, batch_size, num_steps = 32, 2, 0.1, 64, 10
lr, num_epochs, device = 0.005, 200, d2l.try_gpu()
ffn_num_hiddens, num_heads = 64, 4
key_size, query_size, value_size = 32, 32, 32
norm_shape = [2]
train_iter, src_vocab, tgt_vocab = d2l.load_data_nmt(batch_size, num_steps)
encoder = TransformerEncoder(
len(src_vocab), key_size, query_size, value_size, num_hiddens, norm_shape,
ffn_num_hiddens, num_heads, num_layers, dropout)
decoder = TransformerDecoder(
len(tgt_vocab), key_size, query_size, value_size, num_hiddens, norm_shape,
ffn_num_hiddens, num_heads, num_layers, dropout)
net = d2l.EncoderDecoder(encoder, decoder)
d2l.train_seq2seq(net, train_iter, lr, num_epochs, tgt_vocab, device)
###Output
loss 0.029, 1353.6 tokens/sec on <tensorflow.python.eager.context._EagerDeviceContext object at 0x7f2390471e50>
###Markdown
After training, we use the transformer model to [**translate a few English sentences**] into French and compute their BLEU scores.
###Code
engs = ['go .', "i lost .", 'he\'s calm .', 'i\'m home .']
fras = ['va !', 'j\'ai perdu .', 'il est calme .', 'je suis chez moi .']
for eng, fra in zip(engs, fras):
translation, dec_attention_weight_seq = d2l.predict_seq2seq(
net, eng, src_vocab, tgt_vocab, num_steps, True)
print(f'{eng} => {translation}, ',
f'bleu {d2l.bleu(translation, fra, k=2):.3f}')
###Output
go . => va !, bleu 1.000
i lost . => j'ai perdu ., bleu 1.000
###Markdown
Let us [**visualize the transformer attention weights**] when translating the last English sentence into French. The shape of the encoder self-attention weights is (number of encoder layers, number of attention heads, `num_steps` or number of queries, `num_steps` or number of key-value pairs).
###Code
enc_attention_weights = tf.reshape(
tf.concat(net.encoder.attention_weights, 0),
(num_layers, num_heads, -1, num_steps))
enc_attention_weights.shape
###Output
_____no_output_____
###Markdown
In the encoder self-attention, both queries and keys come from the same input sequence. Since padding tokens do not carry meaning, with a specified valid length of the input sequence, no query attends to positions of padding tokens. In the following, two layers of multi-head attention weights are presented row by row. Each head independently attends based on a separate representation subspace of queries, keys, and values.
###Code
d2l.show_heatmaps(
enc_attention_weights, xlabel='Key positions', ylabel='Query positions',
titles=['Head %d' % i for i in range(1, 5)], figsize=(7, 3.5))
###Output
_____no_output_____
###Markdown
[**To visualize both the decoder self-attention weights and the encoder-decoder attention weights, we need more data manipulations.**] For example, we fill the masked attention weights with zero. Note that the decoder self-attention weights and the encoder-decoder attention weights both have the same queries: the beginning-of-sequence token followed by the output tokens.
###Code
dec_attention_weights_2d = [head[0] for step in dec_attention_weight_seq
for attn in step
for blk in attn for head in blk]
dec_attention_weights_filled = tf.convert_to_tensor(
np.asarray(pd.DataFrame(dec_attention_weights_2d).fillna(
0.0).values).astype(np.float32))
dec_attention_weights = tf.reshape(dec_attention_weights_filled, shape=(
-1, 2, num_layers, num_heads, num_steps))
dec_self_attention_weights, dec_inter_attention_weights = tf.transpose(
dec_attention_weights, perm=(1, 2, 3, 0, 4))
print(dec_self_attention_weights.shape, dec_inter_attention_weights.shape)
###Output
(2, 4, 10, 10) (2, 4, 10, 10)
###Markdown
Due to the auto-regressive property of the decoder self-attention, no query attends to key-value pairs after the query position.
###Code
# Plus one to include the beginning-of-sequence token
d2l.show_heatmaps(
dec_self_attention_weights[:, :, :, :len(translation.split()) + 1],
xlabel='Key positions', ylabel='Query positions',
titles=['Head %d' % i for i in range(1, 5)], figsize=(7, 3.5))
###Output
_____no_output_____
###Markdown
Similar to the case in the encoder self-attention, via the specified valid length of the input sequence, [**no query from the output sequence attends to those padding tokens from the input sequence.**]
###Code
d2l.show_heatmaps(
dec_inter_attention_weights, xlabel='Key positions',
ylabel='Query positions', titles=['Head %d' % i for i in range(1, 5)],
figsize=(7, 3.5))
###Output
_____no_output_____
|
_notebooks/2021-06-24-sentence-embeddings.ipynb
|
###Markdown
Applications of Sentence Embeddings > for Persian Language - toc: true - branch: master - badges: true - image: images/sentence_embedding.png - comments: true - author: Sajjad Ayoubi - categories: [implementation]
###Code
!pip install -q sentence_transformers
!pip install -q mtranslate
###Output
[K |████████████████████████████████| 81kB 9.5MB/s
[K |████████████████████████████████| 2.5MB 29.7MB/s
[K |████████████████████████████████| 1.2MB 42.9MB/s
[K |████████████████████████████████| 3.3MB 39.7MB/s
[K |████████████████████████████████| 901kB 52.8MB/s
[?25h Building wheel for sentence-transformers (setup.py) ... [?25l[?25hdone
Building wheel for mtranslate (setup.py) ... [?25l[?25hdone
###Markdown
How I use text embeddings (for data augmentation): filtered translation with a multilingual sentence embedding
###Code
from sentence_transformers import SentenceTransformer
from transformers import AutoModel, AutoTokenizer
import torch
from tqdm.autonotebook import tqdm
from mtranslate import translate
class SentenceSimilarityMultiLang():
def __init__(self, model_name='stsb-xlm-r-multilingual'):
# add device
self.model = SentenceTransformer(model_name)
def __call__(self, text):
# tokenization step
sentence_embeddings = self.model.encode(text, convert_to_tensor=True)
return sentence_embeddings.unsqueeze(1)
def cosine_similarity(self, a, b):
a, b = self([a, b])
return torch.cosine_similarity(a, b).item()
ssml = SentenceSimilarityMultiLang()
in_persian = 'چگونه می توانم به شما کمک کنم؟'
in_english = 'How can I help you?'
ssml.cosine_similarity(in_persian, in_english)
in_persian = 'میتونم به شما کمک کنم؟'
in_english = 'How can I help you?'
ssml.cosine_similarity(in_persian, in_english)
in_persian = 'نحوه ای کمک به دیگران را بیان کنید؟'
in_english = 'How can I help you?'
ssml.cosine_similarity(in_persian, in_english)
class TransWithSimilarityCheck():
def __init__(self, languages=None, min_score=.9, similar_model_name='stsb-xlm-r-multilingual'):
self.languages = languages
self.min_score = min_score
self.sentence_similar = SentenceSimilarityMultiLang(
model_name=similar_model_name)
def _translator(self, sentence):
return translate(sentence, from_language='en', to_language='fa')
def __call__(self, sentences):
augmented = []
for i, s in tqdm(enumerate(sentences), total=len(sentences)):
aug = self._translator(s)
score = self.sentence_similar.cosine_similarity(s, aug)
if score >= self.min_score:
augmented.append({'id': i, 'aug': aug, 'score': score})
return augmented
augmenter = TransWithSimilarityCheck(languages=['en', 'fa'], min_score=.9)
augmented = augmenter(['How can I help you?'])
print(augmented)
augmenter = TransWithSimilarityCheck(languages=['en', 'fa'], min_score=.5)
augmented = augmenter(['easy peasy let me squeezy'])
print(augmented)
###Output
_____no_output_____
###Markdown
Filtered back translation with a sentence-similarity embedding
###Code
class GoogleBackTranslator():
def __init__(self, n_diff=1):
self.n_diff = n_diff
def __call__(self, sentence, languages):
# any languages from fa ....
for i, lang in enumerate(languages[:-1]):
sentence = translate(sentence, from_language=lang, to_language=languages[i+1])
# last back
back_translated = translate(sentence, from_language=languages[i+1], to_language=languages[0])
tokens = set(back_translated.split(' '))
if len(tokens.intersection(sentence.split(' '))) >= len(tokens)-self.n_diff:
return '[||]' # return the SAME token
return back_translated
###Output
_____no_output_____
###Markdown
- good back translation
###Code
bk = GoogleBackTranslator(n_diff=2)
bk('امروز چند شنبس؟', ['fa', 'en'])
###Output
_____no_output_____
###Markdown
- bad back translation
###Code
bk = GoogleBackTranslator(n_diff=2)
bk('چجوری میشه از سایت شما خرید کرد؟', ['fa', 'ru'])
class SentenceSimilarity():
def __init__(self, model_name='m3hrdadfi/bert-fa-base-uncased-wikitriplet-mean-tokens', max_len=16, device='cpu'):
self.model_name = model_name
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModel.from_pretrained(model_name).eval()
self.max_len = max_len
self.device = device
def __call__(self, text):
# tokenization step
tokens = self.tokenizer(text, truncation=True, padding='max_length',
max_length=self.max_len, return_tensors='pt')
# model.forward step
with torch.no_grad():
embeddings = self.model(**tokens).last_hidden_state
# Create masked embeddings (just expend size)
mask = tokens['attention_mask'].unsqueeze(-1).expand(embeddings.shape).float()
# create sentence embedding (sum embs / sum mask)
sentence_embeddings = torch.sum(embeddings * mask, dim=1) / torch.clamp(mask.sum(1), min=1e-9)
# expand dim for each embedding (helpful for cosine similarity)
return sentence_embeddings.unsqueeze(1)
def cosine_similarity(self, a, b):
a, b = self([a, b])
return torch.cosine_similarity(a, b).item()
ss = SentenceSimilarity(max_len=32)
###Output
_____no_output_____
###Markdown
- positive example
###Code
ss.cosine_similarity(a='برای ترک کامل سیگار چه باید کرد؟', b='برای ترک کامل سیگار چه کاری باید انجام دهید؟')
###Output
_____no_output_____
###Markdown
- negative example
###Code
ss.cosine_similarity(a='برای ترک کامل ورزش چه باید کرد؟', b='برای ترک کامل سیگار چه کاری باید انجام دهید؟')
class FilteredBackTranslation():
# TODO: Parrallel BackTranslator
def __init__(self, min_score=.8, n_diff=1, similar_model_name='m3hrdadfi/bert-fa-base-uncased-wikitriplet-mean-tokens'):
self.min_score = min_score
self.back_translator = GoogleBackTranslator(n_diff=n_diff)
self.sentence_similar = SentenceSimilarity(model_name=similar_model_name)
# best languages I find work well for Persian BackTranslation
self.languages = [['fa', 'en'], ['fa', 'ru'], ['fa', 'ar'], ['fa', 'fr']]
def __call__(self, sentences, top_chain=2):
augmented = []
for i, s in tqdm(enumerate(sentences), total=len(sentences)):
paraphrazes = []
scores = []
# 1:57~30ms 2:85~1m, 3:101~1.4m 4:114~2.1m
for langs in self.languages[:top_chain]:
aug = self.back_translator(s, languages=langs)
if aug not in paraphrazes:
score = self.sentence_similar.cosine_similarity(s, aug)
if score >= self.min_score:
scores.append(score)
paraphrazes.append(aug)
if len(scores)>0:
augmented.append({'id': i, 'org': s, 'aug': paraphrazes, 'score': scores})
return augmented
augmenter = FilteredBackTranslation(min_score=.9)
sentences = ['برای ترک کامل سیگار باید چی کار کرد؟']
augmenter(sentences, top_chain=4)
sentences = ['چه جوری میتونم وزنم رو کم کنم؟']
augmenter(sentences, top_chain=4)
sentences = ['راه های درمان خودشیفتگی را بیان کنید؟']
augmenter(sentences, top_chain=4)
###Output
_____no_output_____
|
python/session 4 - matplotllib/matplotlib.ipynb
|
###Markdown
data-hub: * [Website](https://data-hub.ir/) * [Youtube](https://www.youtube.com/channel/UCrBcbQWcD0ortWqHAlP94ug) * [Github](https://github.com/datahub-ir) * Telegram Channel: @**data_hub_ir** * Telegram Group: **@data_jobs** Introduction. Data visualization is an important part of data analysis. Visualization helps us recognise the relationships between variables and also identify which variables are important or can influence the value of another variable. The matplotlib library is one of the plotting libraries of the Python programming language. The matplotlib.pyplot module contains functions that help us draw a plot by specifying its basic components. In this tutorial we use the mpg dataset, which contains information collected from 234 cars. The columns of this dataset are: > * model: model name > * displ: engine size > * year: year of manufacture > * cyl: number of cylinders > * hwy: distance travelled per gallon of fuel on the highway > * cty: distance travelled per gallon of fuel in the city > * fl: fuel type > * class: car class. Installing matplotlib: to install or update the matplotlib library, you can enter the following command in the command line: `pip install matplotlib`
###Code
!python --version
!pip install matplotlib
pip show matplotlib
###Output
Name: matplotlib
Version: 3.5.0
Summary: Python plotting package
Home-page: https://matplotlib.org
Author: John D. Hunter, Michael Droettboom
Author-email: [email protected]
License: PSF
Location: c:\users\mohammad\appdata\local\programs\python\python39\lib\site-packages
Requires: cycler, fonttools, kiwisolver, numpy, packaging, pillow, pyparsing, python-dateutil, setuptools-scm
Required-by: descartes, mizani, plotnine, seaborn
Note: you may need to restart the kernel to use updated packages.
###Markdown
Once the installation has finished, you need to import the library into your code in order to use it; note that, for convenience, we define the abbreviation plt for calling matplotlib.pyplot.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
The plot function
###Code
x = np.linspace(0, 25, 250)
plt.plot(x, np.sin(x))
plt.plot(x, np.cos(x), '--')
plt.plot(x, np.cos(x)+.1)
plt.show()
plt.scatter(x[:10], x[:10]+2)
plt.show()
plt.scatter(x, np.sin(x))
plt.show()
###Output
_____no_output_____
###Markdown
The plot function can be used to draw any arbitrary function. It takes a set of x and y values as input and draws the function through them. In the example below the sin function is drawn at several points in the interval $[-\pi, \; \pi]$.
###Code
pi = np.pi
x = np.linspace(start=-pi, stop=pi, num=10)
y = np.sin(x)
plt.plot(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
We use the show function to display the plot. In the plot below, draw different variants of the previous plot by changing the parameters markersize, marker, linewidth, linestyle and color. The parameters markersize and linewidth are natural numbers. The table below lists some of the possible values for the marker and linestyle parameters. | parameter |values ||:----------|:------------------:|| `marker` | '.' '*' '+' || `linestyle` | '-' '--' '-.' ':' |
###Code
plt.plot(x, y, color='red', linewidth=1, linestyle=':', marker='+', markersize=10)
plt.show()
###Output
_____no_output_____
###Markdown
As you have probably noticed, by setting the marker parameter you can show the exact location of the data points on the plot.
###Code
age = [21,12,32,45,37,18,28,52,5,40,48,15]
height = [160,135,170,165,173,168,175,159,105,171,155,158]
# Set figure size
plt.figure(figsize=(12,6))
# Add a main title
plt.title("Plot of Age vs. Height (in cms)\n", fontsize=20, fontstyle='italic')
# X- and Y-label with fontsize
plt.xlabel("Age (years)", fontsize=16)
plt.ylabel("Height (cms)", fontsize=16)
# Turn on grid
plt.grid(True)
# Set Y-axis value limit
plt.ylim(100,200)
# X- and Y-axis ticks customization with fontsize and placement
plt.xticks([i*5 for i in range(12)], fontsize=15)
plt.yticks(fontsize=15)
# Main plotting function with choice of color, marker size, and marker edge color
plt.scatter(x=age, y=height, c='orange', s=150, edgecolors='k')
# Adding bit of text to the plot
plt.text(x=15, y=105, s="Height increases up to around \n20 years and then tapers off", fontsize=15,
rotation=30, linespacing=2)
plt.text(x=22, y=185, s="Nobody has a height beyond 180 cm", fontsize=15)
# Adding a vertical line
plt.vlines(x=20, ymin=100, ymax=180, linestyles='dashed', color='blue', lw=3)
# Adding a horizontal line
plt.hlines(y=180, xmin=0, xmax=55, linestyles='dashed', color='red', lw=3)
# Adding a legend
plt.legend(['Height in cms'], loc=2, fontsize=14)
# Final show method
plt.show()
###Output
_____no_output_____
###Markdown
Scatter plot In a scatter plot we can draw the data in a coordinate system using two or three numerical features. We use the scatter function for this. First we read the mpg dataset.
###Code
mpg_csv = "mpg.csv"
df = pd.read_csv(mpg_csv)
df.head()
###Output
_____no_output_____
###Markdown
As an example, we draw the scatter plot of the distance traveled per gallon of fuel on the highway (hwy) against the engine size (displ).
###Code
plt.scatter(x=df['displ'], y=df['hwy'], c='blue', alpha=0.4)
plt.title('hwy - displ')
plt.xlabel('displ')
plt.ylabel('hwy')
plt.show()
###Output
_____no_output_____
###Markdown
To use the scatter function we need to specify the values for each dimension. With the alpha parameter we set the transparency of each marker; therefore, where the markers look darker, more data points are stacked on top of each other. We set the title of the plot with the title function and name the axes with the xlabel and ylabel functions.
###Code
rng = np.random.RandomState(0)
colors = rng.rand(len(df))
sizes = 1000 * rng.rand(len(df))
# marker size proportional to the magnitude of the numbers
plt.scatter(x=df['displ'], y=df['hwy'], c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar()
###Output
_____no_output_____
###Markdown
Now we want to colour the data points using the number-of-cylinders feature.
###Code
df.head()
plt.scatter(x=df['displ'], y=df['hwy'], c=df['cyl'],alpha=0.5)
plt.title('hwy - displ')
plt.xlabel('displ')
plt.ylabel('hwy')
plt.colorbar()
plt.show()
###Output
_____no_output_____
###Markdown
With the colorbar function we can display the colour scale. As we expected, cars with more cylinders consume more fuel. In Matplotlib, the figure (an instance of the class plt.Figure) can be thought of as a single container that contains all the objects representing axes, graphics, text, and labels. The axes (an instance of the class plt.Axes) is what we see above: a bounding box with ticks and labels, which will eventually contain the plot elements that make up our visualization. Throughout this book, we'll commonly use the variable name fig to refer to a figure instance, and ax to refer to an axes instance or group of axes instances.
###Code
fig = plt.figure()
ax = plt.axes()
x = np.linspace(0, 10, 1000)
ax.plot(x, np.sin(x))
###Output
_____no_output_____
###Markdown
Alternatively, we can use the pylab interface and let the figure and axes be created for us in the background(see [Two Interfaces for the Price of One](04.00-Introduction-To-Matplotlib.ipynbTwo-Interfaces-for-the-Price-of-One) for a discussion of these two interfaces):
###Code
plt.plot(x, np.sin(x))
x = np.linspace(0, 10, 1000)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '-b', label='Sine')
ax.plot(x, np.cos(x), '--r', label='Cosine')
ax.axis('equal')
leg = ax.legend()
ax.legend(loc='upper left', frameon=False)
fig
ax.legend(frameon=False, loc='lower center', ncol=2)
fig
###Output
_____no_output_____
###Markdown
3D plot In this section we plot the mpg dataset in a coordinate system using three numerical features. For this purpose we use the mplot3d toolkit alongside matplotlib.
###Code
from mpl_toolkits import mplot3d
fig = plt.figure()
ax = plt.axes(projection='3d')
ax.scatter3D(df['displ'], df['hwy'], df['cyl'])
plt.show()
ax = plt.axes(projection='3d')
ax.scatter3D(df['displ'], df['hwy'], df['cyl'], c=df['cyl'])
plt.show()
###Output
_____no_output_____
###Markdown
As can be seen, by adding the number-of-cylinders feature the data are clustered nicely.
###Code
fig.savefig('my_figure.png')
###Output
_____no_output_____
###Markdown
Bar plot With a bar plot we can display and compare a numerical quantity for the different values of a categorical quantity.
###Code
groups = ['G1', 'G2', 'G3', 'G4', 'G5']
scores = [20, 34, 30, 32, 27]
bar_width = 0.3
plt.bar(groups, scores, width=bar_width, color='black')
plt.xlabel('Groups')
plt.ylabel('Scores')
plt.show()
###Output
_____no_output_____
###Markdown
In the next bar plot we show the mean hwy and cty for the different car classes at the same time.
###Code
df['class'].unique()
classes = df['class'].unique()
barwidth = 0.3
cty_mean = []
hwy_mean = []
for x in classes:
cty_mean.append(df[df['class'] == x]['cty'].mean())
hwy_mean.append(df[df['class'] == x]['hwy'].mean())
index = pd.factorize(classes)[0] + 1
index
plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon')
plt.legend()
plt.show()
plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon')
plt.legend()
plt.xticks(index, classes)
plt.show()
plt.bar(index, hwy_mean, barwidth, color='red', label='mean highway miles per gallon')
plt.legend()
plt.xticks(index, classes)
plt.show()
plt.bar(index, cty_mean, barwidth, color='black', label='mean city miles per gallon')
plt.bar(index + barwidth, hwy_mean, barwidth, color='purple', label='mean highway miles per gallon')
plt.xticks(index + barwidth / 2, classes)
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
The xticks function takes the positions of the categories and their names and displays them on the x axis. When we draw several plots in one figure, we can set the label parameter for each plot and call the legend function to show the label assigned to each plot in the figure. Note that in this figure we drew two plots at the same time; in general we can draw any number of plots of any desired type in one figure. Histogram
###Code
weight = [55,35,77,68,70,60,72,69,18,65,82,48]
import numpy as np
plt.figure(figsize=(5,5))
# Main plot function 'hist'
plt.hist(weight, color='red', edgecolor='k', alpha=0.75, bins=7)
plt.title("Histogram of patient weight", fontsize=18)
plt.xlabel("Weight in kgs", fontsize=15)
plt.xticks(fontsize=15)
plt.yticks(fontsize=15)
plt.show()
###Output
_____no_output_____
###Markdown
Box plot A box plot shows a summary of the data, including the minimum, first quartile, median, third quartile and maximum. The code below uses the functions defined on the dataframe to draw the plot. For further reading you can refer to this link.
###Code
df.describe()
plt.style.use('ggplot')
df.boxplot(column=['cty', 'hwy'],showmeans=True)
plt.show()
###Output
_____no_output_____
###Markdown
In the plot below, the data are first grouped by the class feature and then a box plot is drawn for each group.
###Code
df.boxplot(column=['hwy'], by='class')
plt.show()
###Output
_____no_output_____
###Markdown
Pandas DataFrames support some visualizations directly!
###Code
df.plot.scatter('displ', 'hwy')
plt.show()
df['hwy'].plot.hist(bins=5,figsize=(5,5),edgecolor='k')
plt.xlabel('hwy percentage')
plt.show()
###Output
_____no_output_____
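###Markdown
A couple more plots drawn directly from the DataFrame, as a sketch of the same idea (any numeric or categorical column of the mpg data could be used here).
###Code
# box plot of two numeric columns, straight from the DataFrame
df[['cty', 'hwy']].plot.box(figsize=(5, 5))
plt.ylabel('miles per gallon')
plt.show()
# bar chart of the number of cars in each class
df['class'].value_counts().plot.bar(color='gray', edgecolor='k')
plt.xlabel('class')
plt.ylabel('count')
plt.show()
###Output
_____no_output_____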
|
c2_improving_dnn/w1_Gradient+Checking+v1.ipynb
|
###Markdown
Gradient CheckingWelcome to the final assignment for this week! In this assignment you will learn to implement and use gradient checking. You are part of a team working to make mobile payments available globally, and are asked to build a deep learning model to detect fraud--whenever someone makes a payment, you want to see if the payment might be fraudulent, such as if the user's account has been taken over by a hacker. But backpropagation is quite challenging to implement, and sometimes has bugs. Because this is a mission-critical application, your company's CEO wants to be really certain that your implementation of backpropagation is correct. Your CEO says, "Give me a proof that your backpropagation is actually working!" To give this reassurance, you are going to use "gradient checking".Let's do it!
###Code
# Packages
import numpy as np
from testCases import *
from gc_utils import sigmoid, relu, dictionary_to_vector, vector_to_dictionary, gradients_to_vector
###Output
_____no_output_____
###Markdown
1) How does gradient checking work?Backpropagation computes the gradients $\frac{\partial J}{\partial \theta}$, where $\theta$ denotes the parameters of the model. $J$ is computed using forward propagation and your loss function.Because forward propagation is relatively easy to implement, you're confident you got that right, and so you're almost 100% sure that you're computing the cost $J$ correctly. Thus, you can use your code for computing $J$ to verify the code for computing $\frac{\partial J}{\partial \theta}$. Let's look back at the definition of a derivative (or gradient):$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$If you're not familiar with the "$\displaystyle \lim_{\varepsilon \to 0}$" notation, it's just a way of saying "when $\varepsilon$ is really really small."We know the following:- $\frac{\partial J}{\partial \theta}$ is what you want to make sure you're computing correctly. - You can compute $J(\theta + \varepsilon)$ and $J(\theta - \varepsilon)$ (in the case that $\theta$ is a real number), since you're confident your implementation for $J$ is correct. Lets use equation (1) and a small value for $\varepsilon$ to convince your CEO that your code for computing $\frac{\partial J}{\partial \theta}$ is correct! 2) 1-dimensional gradient checkingConsider a 1D linear function $J(\theta) = \theta x$. The model contains only a single real-valued parameter $\theta$, and takes $x$ as input.You will implement code to compute $J(.)$ and its derivative $\frac{\partial J}{\partial \theta}$. You will then use gradient checking to make sure your derivative computation for $J$ is correct. **Figure 1** : **1D linear model** The diagram above shows the key computation steps: First start with $x$, then evaluate the function $J(x)$ ("forward propagation"). Then compute the derivative $\frac{\partial J}{\partial \theta}$ ("backward propagation"). **Exercise**: implement "forward propagation" and "backward propagation" for this simple function. I.e., compute both $J(.)$ ("forward propagation") and its derivative with respect to $\theta$ ("backward propagation"), in two separate functions.
###Code
# GRADED FUNCTION: forward_propagation
def forward_propagation(x, theta):
"""
Implement the linear forward propagation (compute J) presented in Figure 1 (J(theta) = theta * x)
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
J -- the value of function J, computed using the formula J(theta) = theta * x
"""
### START CODE HERE ### (approx. 1 line)
J = x * theta
### END CODE HERE ###
return J
x, theta = 2, 4
J = forward_propagation(x, theta)
print ("J = " + str(J))
###Output
J = 8
###Markdown
**Expected Output**: J = 8. **Exercise**: Now, implement the backward propagation step (derivative computation) of Figure 1. That is, compute the derivative of $J(\theta) = \theta x$ with respect to $\theta$. To save you from doing the calculus, you should get $dtheta = \frac { \partial J }{ \partial \theta} = x$.
###Code
# GRADED FUNCTION: backward_propagation
def backward_propagation(x, theta):
"""
Computes the derivative of J with respect to theta (see Figure 1).
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
Returns:
dtheta -- the gradient of the cost with respect to theta
"""
### START CODE HERE ### (approx. 1 line)
dtheta = x
### END CODE HERE ###
return dtheta
x, theta = 2, 4
dtheta = backward_propagation(x, theta)
print ("dtheta = " + str(dtheta))
###Output
dtheta = 2
###Markdown
**Expected Output**: ** dtheta ** 2 **Exercise**: To show that the `backward_propagation()` function is correctly computing the gradient $\frac{\partial J}{\partial \theta}$, let's implement gradient checking.**Instructions**:- First compute "gradapprox" using the formula above (1) and a small value of $\varepsilon$. Here are the Steps to follow: 1. $\theta^{+} = \theta + \varepsilon$ 2. $\theta^{-} = \theta - \varepsilon$ 3. $J^{+} = J(\theta^{+})$ 4. $J^{-} = J(\theta^{-})$ 5. $gradapprox = \frac{J^{+} - J^{-}}{2 \varepsilon}$- Then compute the gradient using backward propagation, and store the result in a variable "grad"- Finally, compute the relative difference between "gradapprox" and the "grad" using the following formula:$$ difference = \frac {\mid\mid grad - gradapprox \mid\mid_2}{\mid\mid grad \mid\mid_2 + \mid\mid gradapprox \mid\mid_2} \tag{2}$$You will need 3 Steps to compute this formula: - 1'. compute the numerator using np.linalg.norm(...) - 2'. compute the denominator. You will need to call np.linalg.norm(...) twice. - 3'. divide them.- If this difference is small (say less than $10^{-7}$), you can be quite confident that you have computed your gradient correctly. Otherwise, there may be a mistake in the gradient computation.
###Code
# GRADED FUNCTION: gradient_check
def gradient_check(x, theta, epsilon = 1e-7):
"""
Implement the backward propagation presented in Figure 1.
Arguments:
x -- a real-valued input
theta -- our parameter, a real number as well
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Compute gradapprox using left side of formula (1). epsilon is small enough, you don't need to worry about the limit.
### START CODE HERE ### (approx. 5 lines)
thetaplus = theta + epsilon # Step 1
thetaminus = theta - epsilon # Step 2
J_plus = forward_propagation(x, thetaplus) # Step 3
J_minus = forward_propagation(x, thetaminus) # Step 4
gradapprox = (J_plus - J_minus) / (2*epsilon) # Step 5
### END CODE HERE ###
# Check if gradapprox is close enough to the output of backward_propagation()
### START CODE HERE ### (approx. 1 line)
grad = backward_propagation(x, theta)
### END CODE HERE ###
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference < 1e-7:
print ("The gradient is correct!")
else:
print ("The gradient is wrong!")
return difference
x, theta = 2, 4
difference = gradient_check(x, theta)
print("difference = " + str(difference))
###Output
The gradient is correct!
difference = 2.91933588329e-10
###Markdown
**Expected Output**:The gradient is correct! ** difference ** 2.9193358103083e-10 Congrats, the difference is smaller than the $10^{-7}$ threshold. So you can have high confidence that you've correctly computed the gradient in `backward_propagation()`. Now, in the more general case, your cost function $J$ has more than a single 1D input. When you are training a neural network, $\theta$ actually consists of multiple matrices $W^{[l]}$ and biases $b^{[l]}$! It is important to know how to do a gradient check with higher-dimensional inputs. Let's do it! 3) N-dimensional gradient checking The following figure describes the forward and backward propagation of your fraud detection model. **Figure 2** : **deep neural network***LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID*Let's look at your implementations for forward propagation and backward propagation.
###Code
def forward_propagation_n(X, Y, parameters):
"""
Implements the forward propagation (and computes the cost) presented in Figure 3.
Arguments:
X -- training set for m examples
Y -- labels for m examples
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
W1 -- weight matrix of shape (5, 4)
b1 -- bias vector of shape (5, 1)
W2 -- weight matrix of shape (3, 5)
b2 -- bias vector of shape (3, 1)
W3 -- weight matrix of shape (1, 3)
b3 -- bias vector of shape (1, 1)
Returns:
cost -- the cost function (logistic cost for one example)
"""
# retrieve parameters
m = X.shape[1]
W1 = parameters["W1"]
b1 = parameters["b1"]
W2 = parameters["W2"]
b2 = parameters["b2"]
W3 = parameters["W3"]
b3 = parameters["b3"]
# LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SIGMOID
Z1 = np.dot(W1, X) + b1
A1 = relu(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = relu(Z2)
Z3 = np.dot(W3, A2) + b3
A3 = sigmoid(Z3)
# Cost
logprobs = np.multiply(-np.log(A3),Y) + np.multiply(-np.log(1 - A3), 1 - Y)
cost = 1./m * np.sum(logprobs)
cache = (Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3)
return cost, cache
###Output
_____no_output_____
###Markdown
Now, run backward propagation.
###Code
def backward_propagation_n(X, Y, cache):
"""
Implement the backward propagation presented in figure 2.
Arguments:
X -- input datapoint, of shape (input size, 1)
Y -- true "label"
cache -- cache output from forward_propagation_n()
Returns:
gradients -- A dictionary with the gradients of the cost with respect to each parameter, activation and pre-activation variables.
"""
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2, Z3, A3, W3, b3) = cache
dZ3 = A3 - Y
dW3 = 1./m * np.dot(dZ3, A2.T)
db3 = 1./m * np.sum(dZ3, axis=1, keepdims = True)
dA2 = np.dot(W3.T, dZ3)
dZ2 = np.multiply(dA2, np.int64(A2 > 0))
dW2 = 1./m * np.dot(dZ2, A1.T)
db2 = 1./m * np.sum(dZ2, axis=1, keepdims = True)
dA1 = np.dot(W2.T, dZ2)
dZ1 = np.multiply(dA1, np.int64(A1 > 0))
dW1 = 1./m * np.dot(dZ1, X.T)
db1 = 1./m * np.sum(dZ1, axis=1, keepdims = True)
gradients = {"dZ3": dZ3, "dW3": dW3, "db3": db3,
"dA2": dA2, "dZ2": dZ2, "dW2": dW2, "db2": db2,
"dA1": dA1, "dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients
###Output
_____no_output_____
###Markdown
You obtained some results on the fraud detection test set but you are not 100% sure of your model. Nobody's perfect! Let's implement gradient checking to verify if your gradients are correct. **How does gradient checking work?**.As in 1) and 2), you want to compare "gradapprox" to the gradient computed by backpropagation. The formula is still:$$ \frac{\partial J}{\partial \theta} = \lim_{\varepsilon \to 0} \frac{J(\theta + \varepsilon) - J(\theta - \varepsilon)}{2 \varepsilon} \tag{1}$$However, $\theta$ is not a scalar anymore. It is a dictionary called "parameters". We implemented a function "`dictionary_to_vector()`" for you. It converts the "parameters" dictionary into a vector called "values", obtained by reshaping all parameters (W1, b1, W2, b2, W3, b3) into vectors and concatenating them.The inverse function is "`vector_to_dictionary`" which outputs back the "parameters" dictionary. **Figure 2** : **dictionary_to_vector() and vector_to_dictionary()** You will need these functions in gradient_check_n()We have also converted the "gradients" dictionary into a vector "grad" using gradients_to_vector(). You don't need to worry about that.**Exercise**: Implement gradient_check_n().**Instructions**: Here is pseudo-code that will help you implement the gradient check.For each i in num_parameters:- To compute `J_plus[i]`: 1. Set $\theta^{+}$ to `np.copy(parameters_values)` 2. Set $\theta^{+}_i$ to $\theta^{+}_i + \varepsilon$ 3. Calculate $J^{+}_i$ using to `forward_propagation_n(x, y, vector_to_dictionary(`$\theta^{+}$ `))`. - To compute `J_minus[i]`: do the same thing with $\theta^{-}$- Compute $gradapprox[i] = \frac{J^{+}_i - J^{-}_i}{2 \varepsilon}$Thus, you get a vector gradapprox, where gradapprox[i] is an approximation of the gradient with respect to `parameter_values[i]`. You can now compare this gradapprox vector to the gradients vector from backpropagation. Just like for the 1D case (Steps 1', 2', 3'), compute: $$ difference = \frac {\| grad - gradapprox \|_2}{\| grad \|_2 + \| gradapprox \|_2 } \tag{3}$$
###Code
# GRADED FUNCTION: gradient_check_n
def gradient_check_n(parameters, gradients, X, Y, epsilon = 1e-7):
"""
Checks if backward_propagation_n computes correctly the gradient of the cost output by forward_propagation_n
Arguments:
parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3":
grad -- output of backward_propagation_n, contains gradients of the cost with respect to the parameters.
x -- input datapoint, of shape (input size, 1)
y -- true "label"
epsilon -- tiny shift to the input to compute approximated gradient with formula(1)
Returns:
difference -- difference (2) between the approximated gradient and the backward propagation gradient
"""
# Set-up variables
parameters_values, _ = dictionary_to_vector(parameters)
grad = gradients_to_vector(gradients)
num_parameters = parameters_values.shape[0]
J_plus = np.zeros((num_parameters, 1))
J_minus = np.zeros((num_parameters, 1))
gradapprox = np.zeros((num_parameters, 1))
# Compute gradapprox
for i in range(num_parameters):
# Compute J_plus[i]. Inputs: "parameters_values, epsilon". Output = "J_plus[i]".
# "_" is used because the function you have to outputs two parameters but we only care about the first one
### START CODE HERE ### (approx. 3 lines)
thetaplus = np.copy(parameters_values) # Step 1
thetaplus[i][0] = thetaplus[i][0] + epsilon # Step 2
J_plus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaplus)) # Step 3
### END CODE HERE ###
# Compute J_minus[i]. Inputs: "parameters_values, epsilon". Output = "J_minus[i]".
### START CODE HERE ### (approx. 3 lines)
thetaminus = np.copy(parameters_values) # Step 1
thetaminus[i][0] = thetaminus[i][0] - epsilon # Step 2
J_minus[i], _ = forward_propagation_n(X, Y, vector_to_dictionary(thetaminus)) # Step 3
### END CODE HERE ###
# Compute gradapprox[i]
### START CODE HERE ### (approx. 1 line)
gradapprox[i] = (J_plus[i] - J_minus[i]) / (2*epsilon)
### END CODE HERE ###
# Compare gradapprox to backward propagation gradients by computing difference.
### START CODE HERE ### (approx. 1 line)
numerator = np.linalg.norm(grad - gradapprox) # Step 1'
denominator = np.linalg.norm(grad) + np.linalg.norm(gradapprox) # Step 2'
difference = numerator / denominator # Step 3'
### END CODE HERE ###
if difference > 2e-7:
print ("\033[93m" + "There is a mistake in the backward propagation! difference = " + str(difference) + "\033[0m")
else:
print ("\033[92m" + "Your backward propagation works perfectly fine! difference = " + str(difference) + "\033[0m")
return difference
X, Y, parameters = gradient_check_n_test_case()
cost, cache = forward_propagation_n(X, Y, parameters)
gradients = backward_propagation_n(X, Y, cache)
difference = gradient_check_n(parameters, gradients, X, Y)
###Output
[92mYour backward propagation works perfectly fine! difference = 1.18855520355e-07[0m
|
Examples/ModelFlow features/ModelFlow, extend DataFrame.ipynb
|
###Markdown
ModelFlow, a toolkit Python is an incredible and versatile language embedded in a powerful ecosystem. For data science the Pandas library is a powerful "Swiss Army Knife". In economics and for modeling banks we need **lagged variables** and **simultaneous formulas** (circular references in Excel speak). ModelFlow is a toolkit that enables lagged variables and simultaneous formulas. This notebook uses ModelFlow to extend dataframes. Other notebooks show ModelFlow as a class. Jupyter This is a Jupyter notebook. Jupyter is a Python shell. You will notice **input cells** (marked: In\[\]) and **output cells** (marked: Out\[\]). It is live, so you can try it out yourself if you have access to the ModelFlow toolkit; otherwise you just have to watch. This Jupyter notebook shows how ModelFlow can extend pandas dataframes to run models. The notebook focuses on a simple example and does not explore all the features and options. Also, the models are toy models created to be small but still illustrative. Import stuff
###Code
import pandas as pd # Python data science library
import sys
from IPython.display import SVG, display
sys.path.append('modelflow/')
import modelmf # This will extend pandas dataframes with ModelFlow
###Output
_____no_output_____
###Markdown
Create a Pandas Dataframe We make up some data. Pandas dataframes are tables with **row** and **column** names. Columns are variables, and rows are the time dimension.
###Code
df = pd.DataFrame({'LOAN': [100,100,100,100],'SECURITIES': [10,11,12,13],
'CASH': [4,4,4,4], 'DEPOSIT' : [100,100,100,100],
'BONDS':[1,2,3,10], 'NEW_LOAN' : [1,20,30,40] },
index=[2018,2019,2020,2021])
df
###Output
_____no_output_____
###Markdown
A model where Pandas doesn't work out of the box A very small, stylized dynamic model of the balance sheet of a bank is created.
###Code
fmodel = '''\
£ Stock
ASSETS = LOAN + SECURITIES + CASH
FUNDING = DEPOSIT + BONDS
EQUITY = ASSETS - FUNDING
LIABILITIES = FUNDING + EQUITY
£ stock flow
DEPOSIT = DEPOSIT(-1) + NEW_DEPOSIT
LOAN = LOAN(-1)+ NEW_LOAN
NEW_BONDS = (NEW_LOAN - NEW_DEPOSIT)
BONDS = BONDS(-1) + NEW_BONDS'''
###Output
_____no_output_____
###Markdown
Apply the model to the dataframe. To do this we use dataframe.mfcalc.
###Code
df.mfcalc(fmodel)
###Output
Will start calculating: testmodel
2019 solved
2020 solved
2021 solved
testmodel calculated
###Markdown
Notice:* The model is run from 2019. It can't run 2018 as there are no values for lagged variables in 2018. * The model is calculated even when the formulas were not in logical order. * Variables in the model missing from the dataframe are set to 0. There is more The result from a model run can be used straight in Python programs. But a model instance ```.mf``` contains:* The first and last solution of the model* The directed graph of which variable contributes to which variable* All formulas in the model This makes it a powerful tool for model and result analysis. Make another experiment First we update some exogenous variables (variables which appear only on the right-hand side of the model). Then we run the model again.
###Code
df['NEW_LOAN']= [1,40,50,80]
df['NEW_DEPOSIT']= [1,30,25,50]
df.mfcalc(fmodel)
###Output
Will start calculating: testmodel
2019 solved
2020 solved
2021 solved
testmodel calculated
###Markdown
Visualizing The results can be compared and visualized. Wildcards can be used to select the variables to visualize. If this is not sufficient, the whole suite of Python visualization libraries (such as Matplotlib, Seaborn, Plotly) can be used on top of the resulting dataframes. Plot the last result
###Code
_ = df.mf['*'].plot()
###Output
_____no_output_____
###Markdown
Plot the difference between the first and last run
###Code
_ = df.mf['*'].dif.plot()
###Output
_____no_output_____
###Markdown
Or as heatmap
###Code
_ = df.mf[['*']].dif.heat(title='All',annot=True)
###Output
_____no_output_____
###Markdown
The structure of the model (dependency graph)
###Code
df.mf.drawmodel()
df.mf.drawmodel(all =1,svg=1)
###Output
_____no_output_____
###Markdown
What explains the difference for a variable Which of the input variables explains the difference in the results of a formula between two runs? If we have $y = f(a,b)$ and two solutions where the variables differ by $\Delta y, \Delta a, \Delta b$, how much of $\Delta y$ can be explained by $\Delta a$ and $\Delta b$? Analytically, the attributions $\Omega a$ and $\Omega b$ can be calculated like this: $\Delta y = \underbrace{\Delta a \frac{\partial {f}}{\partial{a}}(a,b)}_{\Omega a} + \underbrace{\Delta b \frac{\partial {f}}{\partial{b}}(a,b)}_{\Omega b}+Residual$ If we have two experiments:\begin{eqnarray} y_0&=&f(a_{0},b_{0}) \\ y_1&=&f(a_0+\Delta a,b_{0}+ \Delta b)\end{eqnarray} ModelFlow will do a numerical approximation of $\Omega a$ and $\Omega b$:\begin{eqnarray} \Omega f_a&=&f(a_1,b_1 )-f(a_1-\Delta a,b_1) \\ \Omega f_b&=&f(a_1,b_1 )-f(a_1,b_1-\Delta b)\end{eqnarray} If the model is fairly linear, the residual will be small: \begin{eqnarray}residual = \Omega f_a + \Omega f_b -(y_1 - y_0) \end{eqnarray} Now look at generations of attributions
###Code
_= df.mf.bonds.explain(up=2,HR=0,pdf=0)
###Output
_____no_output_____
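###Markdown
The attribution scheme above can be illustrated outside ModelFlow with a few lines of plain Python. The toy function below is made up for this example and is not part of ModelFlow; it only mimics the numerical approximation described in the formulas.
###Code
def f(a, b):
    # toy formula standing in for one model equation
    return a * b + b

# a baseline run and an updated run
a0, b0 = 2.0, 3.0
a1, b1 = 2.5, 3.5
y0, y1 = f(a0, b0), f(a1, b1)
da, db = a1 - a0, b1 - b0

# numerical attributions: remove one delta at a time from the last experiment
omega_a = f(a1, b1) - f(a1 - da, b1)
omega_b = f(a1, b1) - f(a1, b1 - db)
residual = omega_a + omega_b - (y1 - y0)

print(f"delta y = {y1 - y0:.3f}")
print(f"attribution of a = {omega_a:.3f}, attribution of b = {omega_b:.3f}")
print(f"residual (from the non-linear a*b term) = {residual:.3f}")
###Output
_____no_output_____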
###Markdown
ModelFlow, a toolkit Python is an incredible and versatile language embedded in a powerful ecosystem. For data science the Pandas library is a powerful "Swiss Army Knife". In economics and for modeling banks we need **lagged variables** and **simultaneous formulas** (circular references in Excel speak). ModelFlow is a toolkit that enables lagged variables and simultaneous formulas. This notebook uses ModelFlow to extend dataframes. Other notebooks show ModelFlow as a class. Jupyter This is a Jupyter notebook. Jupyter is a Python shell. You will notice **input cells** (marked: In\[\]) and **output cells** (marked: Out\[\]). It is live, so you can try it out yourself if you have access to the ModelFlow toolkit; otherwise you just have to watch. This Jupyter notebook shows how ModelFlow can extend pandas dataframes to run models. The notebook focuses on a simple example and does not explore all the features and options. Also, the models are toy models created to be small but still illustrative. Import stuff
###Code
import pandas as pd # Python data science library
import sys
from IPython.display import SVG, display
import modelmf # This will extend pandas dataframes with ModelFlow
###Output
_____no_output_____
###Markdown
Create a Pandas Dataframe We make up some data. Pandas dataframes are tables with **row** and **column** names. Columns are variables, and rows are the time dimension.
###Code
df = pd.DataFrame({'LOAN': [100,0,0,0],'SECURITIES': [10,11,12,13],
'CASH': [4,4,4,4], 'DEPOSIT' : [100,100,100,100],
'BONDS':[1,2,3,10], 'NEW_LOAN' : [1,20,30,40] },
index=[2018,2019,2020,2021])
df
###Output
_____no_output_____
###Markdown
A model where Pandas doesn't work out of the box A very small, stylized dynamic model of the balance sheet of a bank is created.
###Code
fmodel = '''\
£ Stock
ASSETS = LOAN + SECURITIES + CASH
FUNDING = DEPOSIT + BONDS
EQUITY = ASSETS - FUNDING
LIABILITIES = FUNDING + EQUITY
£ stock flow
DEPOSIT = DEPOSIT(-1) + NEW_DEPOSIT
LOAN = LOAN(-1)+ NEW_LOAN
NEW_BONDS = (NEW_LOAN - NEW_DEPOSIT)
BONDS = BONDS(-1) + NEW_BONDS'''
###Output
_____no_output_____
###Markdown
Apply the model to the dataframe. To do this we use dataframe.mfcalc.
###Code
df.mfcalc(fmodel)
###Output
_____no_output_____
###Markdown
Notice:* The model is run from 2019. It can't run 2018 as there are no values for lagged variables in 2018. * The model is calculated even when the formulas were not in logical order. * Variables in the model missing from the dataframe are set to 0. There is more The result from a model run can be used straight in Python programs. But a model instance ```.mf``` contains:* The first and last solution of the model* The directed graph of which variable contributes to which variable* All formulas in the model This makes it a powerful tool for model and result analysis. Make another experiment First we update some exogenous variables (variables which appear only on the right-hand side of the model). Then we run the model again.
###Code
df['NEW_LOAN']= [1,40,50,80]
df['NEW_DEPOSIT']= [1,30,25,50]
df.mfcalc(fmodel)
###Output
_____no_output_____
###Markdown
Visualizing The results can be compared and visualized. Wildcards can be used to select the variables to visualize. If this is not sufficient, the whole suite of Python visualization libraries (such as Matplotlib, Seaborn, Plotly) can be used on top of the resulting dataframes. Plot the last result
###Code
_ = df.mf['*'].plot()
###Output
_____no_output_____
###Markdown
Plot the difference between the first and last run
###Code
_ = df.mf['*'].dif.plot()
###Output
_____no_output_____
###Markdown
Or as heatmap
###Code
_ = df.mf[['*']].dif.heat(title='All',annot=True)
###Output
_____no_output_____
###Markdown
The structure of the model (dependency graph)
###Code
df.mf.drawmodel()
df.mf.drawmodel(all =1,svg=1)
###Output
_____no_output_____
###Markdown
What explains the difference for a variable Which of the input variables explains the difference in the results of a formula between two runs? If we have $y = f(a,b)$ and two solutions where the variables differ by $\Delta y, \Delta a, \Delta b$, how much of $\Delta y$ can be explained by $\Delta a$ and $\Delta b$? Analytically, the attributions $\Omega a$ and $\Omega b$ can be calculated like this: $\Delta y = \underbrace{\Delta a \frac{\partial {f}}{\partial{a}}(a,b)}_{\Omega a} + \underbrace{\Delta b \frac{\partial {f}}{\partial{b}}(a,b)}_{\Omega b}+Residual$ If we have two experiments:\begin{eqnarray} y_0&=&f(a_{0},b_{0}) \\ y_1&=&f(a_0+\Delta a,b_{0}+ \Delta b)\end{eqnarray} ModelFlow will do a numerical approximation of $\Omega a$ and $\Omega b$:\begin{eqnarray} \Omega f_a&=&f(a_1,b_1 )-f(a_1-\Delta a,b_1) \\ \Omega f_b&=&f(a_1,b_1 )-f(a_1,b_1-\Delta b)\end{eqnarray} If the model is fairly linear, the residual will be small: \begin{eqnarray}residual = \Omega f_a + \Omega f_b -(y_1 - y_0) \end{eqnarray} Now look at generations of attributions
###Code
_= df.mf.bonds.explain(up=2,HR=0,pdf=0)
###Output
_____no_output_____
|
Amazon Sentiment Analysis/Surface Pro 7/.ipynb_checkpoints/Data Preparation-checkpoint.ipynb
|
###Markdown
Send request to Amazon
###Code
def scrape(url):
headers = {
'authority': 'www.amazon.com',
'pragma': 'no-cache',
'cache-control': 'no-cache',
'dnt': '1',
'upgrade-insecure-requests': '1',
'user-agent': 'Mozilla/5.0 (X11; CrOS x86_64 8172.45.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.64 Safari/537.36',
'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'sec-fetch-site': 'none',
'sec-fetch-mode': 'navigate',
'sec-fetch-dest': 'document',
'accept-language': 'en-GB,en-US;q=0.9,en;q=0.8',
}
# Download the page using requests
print("Downloading %s"%url)
r = requests.get(url, headers=headers)
# print("my r", r.text)
# Simple check to check if page was blocked (Usually 503)
if r.status_code > 500:
if "To discuss automated access to Amazon data please contact" in r.text:
print("Page %s was blocked by Amazon. Please try using better proxies\n"%url)
else:
print("Page %s must have been blocked by Amazon as the status code was %d"%(url,r.status_code))
return None
# Return the raw HTML of the page
return r.text
myBaseUrl = "https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber="
#myBaseUrl = "https://www.amazon.in/Apple-MacBook-Air-13-3-inch-MQD32HN/product-reviews/B073Q5R6VR/ref=cm_cr_dp_d_show_all_btm?ie=UTF8&amp;reviewerType=all_reviews"
full_urls = []
for i in range(1,29):
full_urls.append(myBaseUrl+str(i))
###Output
_____no_output_____
###Markdown
Random delay
###Code
def sleep(alpha, beta):
rand = random.Random()
time.sleep(rand.uniform(alpha, beta))
###Output
_____no_output_____
###Markdown
Store the stars and comments in two arrays
###Code
comments = []
starts = []
for url in full_urls:
data = scrape(url)
soup = BeautifulSoup(data, 'lxml')
full_content = soup.find_all('div',id="cm_cr-review_list")
text = full_content[0].find_all('span', {'class':"review-text"})
rating = full_content[0].find_all('i', {'class':"review-rating"})
for t, r in zip(text, rating):
comments.append(t.text)
starts.append(r.text)
sleep(5, 10)
###Output
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=1
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=2
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=3
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=4
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=5
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=6
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=7
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=8
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=9
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=10
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=11
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=12
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=13
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=14
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=15
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=16
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=17
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=18
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=19
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=20
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=21
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=22
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=23
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=24
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=25
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=26
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=27
Downloading https://www.amazon.com/product-reviews/B07YNJ6BQL/ref=cm_cr_arp_d_viewopt_sr?ie=UTF8&filterByStar=all_stars&reviewerType=all_reviews&pageNumber=28
###Markdown
Convert to Pandas DataFrame
###Code
import pandas as pd
df = pd.DataFrame({'stars': starts, 'comments':comments})
df
df['stars'] = df['stars'].str.replace('out of 5 ','')
df['comments'] = df['comments'].str.replace('\n','')
df
df.to_csv('Surface Pro 7.csv')
summarised_results = df["stars"].value_counts()
plt.bar(summarised_results.keys(), summarised_results.values)
plt.show()
# extract the review title and helpful-vote elements as well
title = full_content[0].find_all('a', {'class':"review-title"})
helpful_num = full_content[0].find_all('span', {'class':"helpful-vote-statement"})
len(title)
###Output
_____no_output_____
###Markdown
Drop the title and the helpful-vote count because not all reviews have them.
###Code
len(helpful_num)
###Output
_____no_output_____
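###Markdown
If the title and helpful-vote count were needed later, one way to keep all fields aligned is to loop over the individual review blocks instead of building separate lists. The sketch below reuses the soup of the last downloaded page; the `{'data-hook': 'review'}` selector is an assumption and may have to be adapted to the actual page structure.
###Code
# sketch: extract the fields per review card so missing titles/votes stay aligned
rows = []
for review in full_content[0].find_all('div', {'data-hook': 'review'}):  # assumed container selector
    text = review.find('span', {'class': 'review-text'})
    rating = review.find('i', {'class': 'review-rating'})
    r_title = review.find('a', {'class': 'review-title'})
    votes = review.find('span', {'class': 'helpful-vote-statement'})
    rows.append({
        'stars': rating.text if rating else None,
        'comment': text.text if text else None,
        'title': r_title.text if r_title else None,
        'helpful': votes.text if votes else None,  # None when a review has no votes
    })
pd.DataFrame(rows).head()
###Output
_____no_output_____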
|
tutorial/graph_converters.ipynb
|
###Markdown
Graph Converters As neural networks become complex and act as one component of a larger system, we sometimes want to convert a network into a different form. A typical use case is inference: we want to merge or change some layers in a network as a high-level optimization for inference speed. There are also other use cases: adding new layers to keep track of some statistics, adding quantize/dequantize layers for quantized inference, decomposing a layer into a combination of low-rank ones, changing a network architecture for neural architecture search based on an original network architecture, changing the tensor format from channel-first to channel-last and back, and so on. Let's look at two simple cases: 1. batch normalization folding 2. channel-last conversion. As a reference network, we use the following.
###Code
# ResNet-50 for inference
import nnabla as nn
import nnabla.functions as F
import nnabla.parametric_functions as PF
import numpy as np
from nnabla.utils.inspection import pprint
from nnabla.models.imagenet import ResNet50
model = ResNet50()
batch_size = 1
x = nn.Variable((batch_size,) + model.input_shape)
y = model(x, training=False)
###Output
_____no_output_____
###Markdown
Batch Normalization Folding See the resnet architecture.
###Code
pprint(y)
###Output
_____no_output_____
###Markdown
Now, we can see the batch normalization layers. For inference, we do not need to compute the batch normalization explicitly: its parameters can be folded into a preceding layer if there is, e.g., a convolution before the batch normalization. To fold the batch normalization, use BatchNormalizationFoldingModifier as follows.
###Code
import nnabla.experimental.graph_converters as GC
modifiers = [GC.BatchNormalizationFoldingModifier()]
gc = GC.GraphConverter(modifiers)
yy = gc.convert(y)
###Output
_____no_output_____
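###Markdown
What the modifier does can be sketched with plain NumPy: for a convolution followed by batch normalization in inference mode, the scale, bias, running mean and running variance can be absorbed into the convolution weights and bias. The snippet below is only an illustrative sketch of that arithmetic on random numbers, not the nnabla implementation, and all names in it are made up for the example.
###Code
import numpy as np

rng = np.random.RandomState(0)
out_ch, patch = 8, 27  # e.g. a 3x3x3 receptive field, flattened
eps = 1e-5

# the "convolution" at one spatial position is a linear map on the flattened patch
W = rng.randn(out_ch, patch)
b = rng.randn(out_ch)

# batch-norm parameters and running statistics
gamma = rng.rand(out_ch) + 0.5
beta = rng.randn(out_ch)
mean = rng.randn(out_ch)
var = rng.rand(out_ch) + 0.1

x_patch = rng.randn(patch)

# original computation: convolution followed by batch normalization
z = W @ x_patch + b
y_ref = gamma * (z - mean) / np.sqrt(var + eps) + beta

# folded computation: absorb the batch norm into the weights and bias
scale = gamma / np.sqrt(var + eps)
W_folded = W * scale[:, None]
b_folded = (b - mean) * scale + beta
y_folded = W_folded @ x_patch + b_folded

print(np.allclose(y_ref, y_folded))  # True: the separate BN layer is no longer needed
###Output
_____no_output_____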
###Markdown
Again, see the resnet architecture converted.
###Code
pprint(yy)
###Output
_____no_output_____
###Markdown
You can see that the converted network does not contain the batch normalization any more! In some cases we cannot fold the batch normalization, but the batch normalization can also be self-folded, i.e., the four parameters (scale, bias, running mean, running variance) can be reduced to another scale and bias. For doing this, use BatchNormalizationSelfFoldingModifier. Channel Last Conversion NVIDIA GPU architectures since Volta support TensorCore to accelerate computational performance. To boost performance as much as possible, we need the channel-last tensor format, aka NHWC. In NNabla, the default tensor format is channel-first, aka NCHW, so to utilize TensorCore we need to change the tensor format to NHWC. ChannelLastModifier converts a network with the NCHW tensor format into another network with the NHWC tensor format.
###Code
import nnabla.experimental.graph_converters as GC
modifiers = [GC.ChannelLastModifier([x])]
gc = GC.GraphConverter(modifiers)
yy = gc.convert(y)
###Output
_____no_output_____
###Markdown
Let's see the resnet architecture converted.
###Code
pprint(yy)
###Output
_____no_output_____
###Markdown
We can see that the channel dimension has been moved to the last position! If we want to access the inputs whose tensor format has been converted:
###Code
x_cl = modifiers[0].inputs_cl[0]
print(x_cl)
###Output
_____no_output_____
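###Markdown
The data fed to the converted input also has to be laid out channel-last. Below is a minimal sketch of that reordering with NumPy; the commented-out assignment to `x_cl.d` is an assumption about how the data would then be passed to the network.
###Code
import numpy as np

# a dummy NCHW batch with the same shape as the original model input
batch_nchw = np.random.rand(batch_size, *model.input_shape).astype(np.float32)

# move the channel axis to the end: (N, C, H, W) -> (N, H, W, C)
batch_nhwc = np.transpose(batch_nchw, (0, 2, 3, 1))
print(batch_nchw.shape, '->', batch_nhwc.shape)

# x_cl.d = batch_nhwc  # assumed way to feed the channel-last input variable
###Output
_____no_output_____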
|
Programmeerelementen/Datatypes/0410_DatastructuurNumPy.ipynb
|
###Markdown
EXTRA DATA STRUCTURE WITH NUMPY: MATRIX The NumPy module allows you to do more scientific computations and, for example, to work with matrices. First import the module
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
1. Byte A bit is a unit of information. The term comes from binary digit. It is a unit that can only take the values 0 and 1. Eight bits together form a byte, so there are 256 possible combinations of 0 and 1 that together form a byte. Natural numbers are integers that are positive, so it is not necessary to write the sign explicitly (you do not have to write the +). The natural numbers from 0 up to and including 255 can be represented with one byte; 77, for example, corresponds to 01001101. In the NumPy module you can store a natural number from 0 to 255 in a single byte. In that case the type `uint8` *(8-bit unsigned integer)* is used. You will find examples of this in the next section, 2. NumPy ndarray: matrix. 2. NumPy ndarray: matrix *In mathematics a table of numbers is called a matrix.* Example: the matrix $\begin{bmatrix} 1 & 2 & 0 \\ 3 & 4 & 5 \end{bmatrix} $ is a $2x3$ matrix, that is, a matrix with $2$ rows and $3$ columns. In mathematics one says that $2x3$ is the *dimension* of the matrix. A matrix is an example of a *2D array*. Run the following code cells.
###Code
matrix1 = np.array([[1, 2, 0], [3, 4, 5]]) # matrix with 2 rows and 3 columns, i.e. a 2x3 matrix
print(matrix1)
type(matrix1)
matrix1.ndim
###Output
_____no_output_____
###Markdown
`matrix1` is an object of type *ndarray* (ndarray stands for nD array). Here it is a 2D array: this matrix has rows and columns, hence n = 2. Careful: do not confuse this with the mathematical dimension of a matrix (see the notebook 'Tensoren'). You enter the matrix row by row in the code cell. Besides the attribute *ndim*, an object of type *ndarray* has other attributes, such as *shape*, *dtype* and *size*. You query those attributes as follows:
###Code
matrix1.shape # gives the number of rows and columns, i.e. the mathematical dimension
matrix1.dtype # gives the type of the elements of the matrix
matrix1.size # gives the number of elements of the matrix
###Output
_____no_output_____
###Markdown
You can also pass such objects as arguments to NumPy functions such as `sum()` and `mean()`. For example, by running the following code cell you compute the sum of all elements of `matrix1`.
###Code
np.sum(matrix1)
###Output
_____no_output_____
###Markdown
As mentioned in 1. Byte, the NumPy module makes it possible to store a natural number from 0 to 255 in a single byte. To do that you must use the type `uint8` *(8-bit unsigned integer)*. Run the following code cell to see how that works.
###Code
matrix2 = np.array([[7, 0, 1], [5, 1, 2]], dtype="uint8") # choose the element type yourself
print(matrix2)
matrix2.dtype
matrix3 = matrix1.astype("uint8") # change the element type
print(matrix3)
matrix3.dtype
###Output
_____no_output_____
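###Markdown
As a small check of the statements above (a sketch): 77 written as one byte is indeed 01001101, and every element of a uint8 matrix occupies exactly one byte.
###Code
print(np.binary_repr(77, width=8))  # '01001101': 77 written as one byte
print(matrix2.itemsize)             # 1: each uint8 element takes one byte
print(matrix2.nbytes)               # total number of bytes used by matrix2
###Output
_____no_output_____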
###Markdown
A matrix is an example of an ndarray. By default NumPy uses the type int64 for integers. The range of int64 runs from -9223372036854775808 to 9223372036854775807. You can also choose the type of the elements yourself with dtype, or change the type with astype(). For an ndarray of natural numbers from 0 to 255 you can choose the type uint8. You query attributes as follows: the number of rows and columns of a matrix with shape, the type of the elements of the matrix with dtype, and the number of elements of the matrix with size. You can also ask for the value of $n$ of an ndarray; you do that with ndim. More examples of ndarray can be found in the notebook 'Tensoren'. Exercise 2.1 Consider the matrix $\begin{bmatrix} -1 & 0 & 0 \\ 2 & -5 & 12 \\ 0 & 4 & -2\end{bmatrix} $. This is called a *square matrix*, because it has as many rows as columns. Enter this square matrix with NumPy and query its number of elements and its (mathematical) dimension. Consider the *column matrix* $\begin{bmatrix} -10 \\ 2 \\ 0 \end{bmatrix} $. Enter this column matrix with NumPy and query its number of elements and its (mathematical) dimension. Can you now also guess what a *row matrix* is? Use NumPy to enter a row matrix with 6 elements of type uint8. Query its (mathematical) dimension and the type of its elements. Exercise 2.2 With the NumPy functions `sum()` and `mean()` you can compute, respectively, the sum and the mean of all elements of a NumPy *ndarray*. - Compute the sum of all elements of the given square matrix. - Compute the mean of all elements of the given column matrix. You add up all elements of a NumPy array with the NumPy function sum(); you compute their mean with the NumPy function mean(). 3. NumPy ndarray: NumPy list You already know that you can also work with *lists of numbers* in NumPy. Such a NumPy list is a *1D array*. Run the following code cells.
###Code
lijst = np.array([1, 2, 3, 4, 5, 6])
print(lijst)
lijst.ndim
lijst.dtype
###Output
_____no_output_____
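###Markdown
A possible sketch of a solution to Exercise 2.1 and Exercise 2.2 above (one of several ways to enter these matrices):
###Code
# Exercise 2.1: the square matrix, its number of elements and its (mathematical) dimension
square_matrix = np.array([[-1, 0, 0], [2, -5, 12], [0, 4, -2]])
print(square_matrix.size, square_matrix.shape)

# the column matrix
column_matrix = np.array([[-10], [2], [0]])
print(column_matrix.size, column_matrix.shape)

# a row matrix with 6 elements of type uint8
row_matrix = np.array([[1, 2, 3, 4, 5, 6]], dtype="uint8")
print(row_matrix.shape, row_matrix.dtype)

# Exercise 2.2: the sum of the square matrix and the mean of the column matrix
print(np.sum(square_matrix))
print(np.mean(column_matrix))
###Output
_____no_output_____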
|
Rethinking_2/Chp_02.ipynb
|
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Mon Jan 03 2022
Python implementation: CPython
Python version : 3.9.7
IPython version : 7.29.0
arviz : 0.11.4
numpy : 1.21.2
matplotlib: 3.5.1
pymc3 : 3.11.4
scipy : 1.6.3
Watermark: 2.2.0
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(' Mean, Standard deviation\np {:.2}, {:.2}'.format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print('5.5%, 94.5% \n{:.2}, {:.2}'.format(pi[0], pi[1]))
###Output
_____no_output_____
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1,3, figsize=(21,7))
for idx, ps in enumerate(zip(w,n)):
data = np.repeat((0, 1), (ps[1]-ps[0], ps[0]))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc='upper left')
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
UsageError: Line magic function `%watermark` not found.
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses—under a value of p=0.5
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"success = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation
###Code
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1)
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
mean_q["p"], std_q
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
pi
###Output
_____no_output_____
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
json 2.0.9
pymc3 3.8
numpy 1.17.4
arviz 0.6.1
autopep8 1.4.4
last updated: Mon Jan 13 2020
CPython 3.7.3
IPython 7.11.1
watermark 2.0.2
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses—under a value of p=0.5
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(prior_gen, grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = prior_gen(p_grid)
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
def uniform_prior(p_grid):
return np.repeat(5, len(p_grid)) # uniform
def truncated_prior(p_grid):
return (p_grid >= 0.5).astype(int) # truncated
def double_exp_prior(p_grid):
return np.exp(- 5 * abs(p_grid - 0.5)) # double exp
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
points = (5, 20, 100, 1000)
def do_plotting(prior_func, points, w, n):
_, ax = plt.subplots(1, len(points), figsize=(20, 5))
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(prior_func, ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"success = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
do_plotting(uniform_prior, points, w, n)
do_plotting(truncated_prior, points, w, n)
do_plotting(double_exp_prior, points, w, n)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation
###Code
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1)
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())
mean_q = pm.find_MAP(maxeval=1e6)
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
mean_q["p"], std_q
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
pi
###Output
_____no_output_____
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
###Output
_____no_output_____
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(' Mean, Standard deviation\np {:.2}, {:.2}'.format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print('5.5%, 94.5% \n{:.2}, {:.2}'.format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1,3, figsize=(21,7))
for idx, ps in enumerate(zip(w,n)):
data = np.repeat((0, 1), (ps[1]-ps[0], ps[0]))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc='upper left')
###Output
logp = -1.8075, ||grad|| = 1.5: 100%|████████████████████████████████████████████████████████████| 7/7 [00:00<?, ?it/s]
logp = -2.6477, ||grad|| = 3: 100%|████████████████████████████████████████████████████| 7/7 [00:00<00:00, 3512.82it/s]
logp = -4.0055, ||grad|| = 6: 100%|██████████████████████████████████████████████████████████████| 7/7 [00:00<?, ?it/s]
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
numpy 1.16.2
pymc3 3.8
arviz 0.5.1
last updated: Tue Jun 16 2020
CPython 3.7.3
IPython 7.12.0
watermark 2.0.2
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
fig, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses—under a value of p=0.5
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
"""
"""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"success = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation
###Code
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1)
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum())
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
mean_q["p"], std_q
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
pi
###Output
_____no_output_____
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
json 2.0.9
pymc3 3.8
numpy 1.17.4
arviz 0.6.1
autopep8 1.4.4
last updated: Mon Jan 13 2020
CPython 3.7.3
IPython 7.11.1
watermark 2.0.2
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_aproximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc3 : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
###Markdown
Code 2.1
###Code
ways = np.array([0, 3, 8, 9, 0])
ways / ways.sum()
###Output
_____no_output_____
###Markdown
Code 2.2$$Pr(w \mid n, p) = \frac{n!}{w!(n − w)!} p^w (1 − p)^{n−w}$$The probability of observing six W’s in nine tosses — under a value of $p=0.5$.
###Code
stats.binom.pmf(6, n=9, p=0.5)
###Output
_____no_output_____
###Markdown
Code 2.3 and 2.5Computing the posterior using a grid approximation.In the book, the following code is not inside a function, but this way it is easier to play with different parameters.
###Code
def posterior_grid_approx(grid_points=5, success=6, tosses=9):
""""""
# define grid
p_grid = np.linspace(0, 1, grid_points)
# define prior
prior = np.repeat(5, grid_points) # uniform
# prior = (p_grid >= 0.5).astype(int) # truncated
# prior = np.exp(- 5 * abs(p_grid - 0.5)) # double exp
# compute likelihood at each point in the grid
likelihood = stats.binom.pmf(success, tosses, p_grid)
# compute product of likelihood and prior
unstd_posterior = likelihood * prior
# standardize the posterior, so it sums to 1
posterior = unstd_posterior / unstd_posterior.sum()
return p_grid, posterior
###Output
_____no_output_____
###Markdown
Code 2.3
###Code
w, n = 6, 9
_, ax = plt.subplots(1, 2, figsize=(12, 5))
points = (5, 20)
for idx, ps in enumerate(points):
p_grid, posterior = posterior_grid_approx(ps, w, n)
ax[idx].plot(p_grid, posterior, "o-", label=f"successes = {w}\ntosses = {n}")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("posterior probability")
ax[idx].set_title(f"{ps} points")
ax[idx].legend(loc=0)
###Output
_____no_output_____
###Markdown
Code 2.6Computing the posterior using the quadratic approximation (quad).
###Code
np.repeat((0, 1), (3, 6))
data = np.repeat((0, 1), (3, 6))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
# display summary of quadratic approximation
print(" Mean, Standard deviation\np {:.2}, {:.2}".format(mean_q["p"], std_q[0]))
# Compute the 89% percentile interval
norm = stats.norm(mean_q, std_q)
prob = 0.89
z = stats.norm.ppf([(1 - prob) / 2, (1 + prob) / 2])
pi = mean_q["p"] + std_q * z
print("5.5%, 94.5% \n{:.2}, {:.2}".format(pi[0], pi[1]))
###Output
5.5%, 94.5%
0.42, 0.92
###Markdown
Code 2.7
###Code
# analytical calculation
w, n = 6, 9
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, w + 1, n - w + 1), label="True posterior")
# quadratic approximation
plt.plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
plt.legend(loc=0)
plt.title(f"n = {n}")
plt.xlabel("Proportion water");
# Figure 2.8
x = np.linspace(0, 1, 100)
w, n = [6, 12, 24], [9, 18, 36]
fig, ax = plt.subplots(1, 3, figsize=(21, 7))
for idx, ps in enumerate(zip(w, n)):
data = np.repeat((0, 1), (ps[1] - ps[0], ps[0]))
with pm.Model() as normal_approximation:
p = pm.Uniform("p", 0, 1) # uniform priors
w = pm.Binomial("w", n=len(data), p=p, observed=data.sum()) # binomial likelihood
mean_q = pm.find_MAP()
std_q = ((1 / pm.find_hessian(mean_q, vars=[p])) ** 0.5)[0]
ax[idx].plot(x, stats.beta.pdf(x, ps[0] + 1, ps[1] - ps[0] + 1), label="True posterior")
ax[idx].plot(x, stats.norm.pdf(x, mean_q["p"], std_q), label="Quadratic approximation")
ax[idx].set_xlabel("probability of water")
ax[idx].set_ylabel("density")
ax[idx].set_title(r"$n={}$".format(ps[1]))
ax[idx].legend(loc="upper left")
###Output
_____no_output_____
###Markdown
Code 2.8
###Code
n_samples = 1000
p = np.zeros(n_samples)
p[0] = 0.5
W = 6
L = 3
for i in range(1, n_samples):
p_new = stats.norm(p[i - 1], 0.1).rvs(1)
if p_new < 0:
p_new = -p_new
if p_new > 1:
p_new = 2 - p_new
q0 = stats.binom.pmf(W, n=W + L, p=p[i - 1])
q1 = stats.binom.pmf(W, n=W + L, p=p_new)
if stats.uniform.rvs(0, 1) < q1 / q0:
p[i] = p_new
else:
p[i] = p[i - 1]
az.plot_kde(p, label="Metropolis approximation")
x = np.linspace(0, 1, 100)
plt.plot(x, stats.beta.pdf(x, W + 1, L + 1), "C1", label="True posterior")
plt.legend();
%watermark -n -u -v -iv -w
###Output
Last updated: Sun Dec 20 2020
Python implementation: CPython
Python version : 3.8.5
IPython version : 7.19.0
arviz : 0.10.0
pymc : 3.9.3
numpy : 1.19.4
matplotlib: 3.3.3
scipy : 1.5.4
Watermark: 2.1.0
|
notebooks/ch-labs/Lab01_QuantumCircuits.ipynb
|
###Markdown
Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command `provider.backends()` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangle topology), no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
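###Markdown
As a sanity check (a minimal sketch, not one of the lab's graded cells, reusing the same `execute`/`qasm_simulator` pattern as Part 1), you can confirm on the simulator that the Toffoli gate really does write the AND of qubits 0 and 1 into qubit 2 for every basis-state input:
```python
# Sketch: exhaustively test ccx as a reversible AND on computational-basis inputs.
for a in ['0', '1']:
    for b in ['0', '1']:
        qc = QuantumCircuit(3, 1)
        if a == '1':
            qc.x(0)
        if b == '1':
            qc.x(1)
        qc.ccx(0, 1, 2)          # flips qubit 2 only when qubits 0 and 1 are both |1>
        qc.measure(2, 0)
        sim = Aer.get_backend('qasm_simulator')
        out = execute(qc, sim, shots=1, memory=True).result().get_memory()[0]
        print(a, 'AND', b, '=', out)
```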
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with a triangle connection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
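###Markdown
Since `plot_histogram` was imported at the top of the lab but has not been used yet, here is a small optional sketch (assuming the loop above completed so that `output1_all` holds the four count dictionaries) of how you might inspect the raw counts for one input case:
```python
# Sketch: show the measured counts for the inputs '1 1' (last entry of output1_all).
plot_histogram(output1_all[3], title="AND on ibmqx2, inputs 1 1")
```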
###Markdown
Once your job has finished running, you can then easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()```Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_athens`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
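###Markdown
Before answering the question below, it can help to collect the numbers printed above into one compact summary. A minimal sketch, assuming the four `ibmq_athens` cells above have been run so that `qc_trans2_all` and `prob2_all` are populated:
```python
# Sketch: depth, CNOT count and success probability side by side for the four runs.
inputs = ['0 0', '0 1', '1 0', '1 1']
for inp, qc_t, prob in zip(inputs, qc_trans2_all, prob2_all):
    print(f"inputs {inp}: depth = {qc_t.depth():3d}, "
          f"cx gates = {qc_t.num_nonlocal_gates():2d}, P(correct) = {prob:.2f}")
```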
###Markdown
&128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](/course/ch-states/the-atoms-of-computation)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits**Goal**Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit).An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw()
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('aer_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = backend.run(qc,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer**Goal**Execute AND gate on a real quantum system and learn how the noise properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access, you can get it [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, `provider.backends( )` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under the `configuration` tab, whereas the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_lima')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. In this exercise, we select one of the IBM Quantum systems: `ibmq_quito`.
###Code
# run this cell
backend = provider.get_backend('ibmq_quito')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration). Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
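###Markdown
As an optional aside (a hedged sketch, assuming a Qiskit version that provides `qiskit.quantum_info.Operator`), you can verify that the decomposed circuit above still implements a reversible AND by checking how its unitary permutes the computational-basis states:
```python
# Sketch: the columns of the unitary show where each basis state |q2 q1 q0> is sent.
from qiskit.quantum_info import Operator
import numpy as np

u = Operator(qc_and.decompose()).data
for i in range(8):
    j = int(np.argmax(np.abs(u[:, i])))   # the decomposed Toffoli maps basis states to basis states
    print(f"|{i:03b}> -> |{j:03b}>")      # only |011> and |111> are exchanged, i.e. qubit 2 picks up q0 AND q1
```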
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = backend.run(qc_trans, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. First, examine `ibmq_quito` through the widget by running the cell below.
###Code
backend
###Output
_____no_output_____
###Markdown
&128211; Determine a three-qubit initial layout considering the error map and assign it to the list variable layout.
###Code
layout =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout.**your answer:** Execute `AND` gate on `ibmq_quito` by running the cell below.
###Code
output_all = []
qc_trans_all = []
prob_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans, output = AND(input1, input2, backend, layout)
output_all.append(output)
qc_trans_all.append(qc_trans)
prob = output[str(int( input1=='1' and input2=='1' ))]/8192
prob_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that were executed on `ibmq_quito`, together with their circuit depths and the success probability of producing the correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_quito with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[0]) )
qc_trans_all[0].draw()
print('Transpiled AND gate circuit for ibmq_quito with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[1]) )
qc_trans_all[1].draw()
print('Transpiled AND gate circuit for ibmq_quito with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[2]) )
qc_trans_all[2].draw()
print('Transpiled AND gate circuit for ibmq_quito with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob_all[3]) )
qc_trans_all[3].draw()
###Output
_____no_output_____
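###Markdown
To close the loop on Step 3, here is an optional sketch (assuming the four runs above finished and that matplotlib is available in the environment) that plots success probability against transpiled depth for the `ibmq_quito` runs:
```python
# Sketch: success probability versus transpiled circuit depth for the four input cases.
import matplotlib.pyplot as plt

depths = [qc_t.depth() for qc_t in qc_trans_all]
plt.scatter(depths, prob_all)
plt.xlabel('transpiled circuit depth')
plt.ylabel('probability of correct answer')
plt.title('AND gate on ibmq_quito')
plt.show()
```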
###Markdown
Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit) .An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
###Markdown
&128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on Quantum Computer<div style="background: E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: 800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute AND gate on two quantum systems and learn how the different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates, and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effects of all the things that should not happen, but nevertheless do. Noise results in outputs are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits. `cx` gates are typically more noisy than any single qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have acess, you can do so [here](https://qiskit.org/documentation/install.htmlaccess-ibm-quantum-systems).Now that you are ready to use the real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider `ibm-q/open/main`:.
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command, provider.backends( ) shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` all are real quantum computers that you can use. The differences among these systems resides in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your choice of the backend. You can obtain information that you need by clicking on the tabs. For example, backend status, number of qubits and the connectivity are under `configuration` tab, where as the `Error Map` tab will reveal the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convienent way to choose a system is using the `least_busy` function to get the backend with the lowest number of jobs in queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest error rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system would produce results with less error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, that will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related informations is well explained [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration) Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, cicruits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map to those pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate represented as a Toffoli gate decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits, a triangle topology, no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the reqiured connectiviy')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus total length of the input circuits. Note that the addition of swaps to match the device topology, and optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout` that allows us to pick the qubits on a device used for the computation and `optimization_level`, an argument that allows selecting from internal defaults for circuit swap mapping and optimization methods to perform.You can learn more about transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify AND function in Part1 properly for the real system with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle conntection and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 =
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for your choice of initial layout. Execute `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job is finished by running, you can then easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result().```Your job_ids will be printed out through the `AND` function defined above. You can also find the job_ids from the results under your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
###Code
backend2
###Output
_____no_output_____
###Markdown
&128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&128211; Describe the reason for choice of initial layout. Execute `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (See the Supplementray Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosly corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy to compute metric that can be used to estimate the fidelity of an executed circuit.A second important value is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams with the corresponding inputs that executed on `ibmq_athens` and their circuit depths with the success probability for producing correct answer.
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
###Markdown
Lab 1 Quantum Circuits Prerequisite- [Qiskit basics](https://qiskit.org/documentation/tutorials/circuits/1_getting_started_with_qiskit.html)- [Ch.1.2 The Atoms of Computation](https://qiskit.org/textbook/ch-states/atoms-computation.html)Other relevant materials- [Access IBM Quantum Systems](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems)- [IBM Quantum Systems Configuration](https://quantum-computing.ibm.com/docs/manage/backends/configuration)- [Transpile](https://qiskit.org/documentation/apidoc/transpiler.html)- [IBM Quantum account](https://quantum-computing.ibm.com/docs/manage/account/ibmq)- [Quantum Circuits](https://qiskit.org/documentation/apidoc/circuit.html)
###Code
from qiskit import *
from qiskit.visualization import plot_histogram
import numpy as np
###Output
_____no_output_____
###Markdown
Part 1: Classical logic gates with quantum circuits<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Create quantum circuit functions that can compute the XOR, AND, NAND and OR gates using the NOT gate (expressed as x in Qiskit), the CNOT gate (expressed as cx in Qiskit) and the Toffoli gate (expressed as ccx in Qiskit). An implementation of the `NOT` gate is provided as an example.
###Code
def NOT(inp):
"""An NOT gate.
Parameters:
inp (str): Input, encoded in qubit 0.
Returns:
QuantumCircuit: Output NOT circuit.
str: Output value measured from qubit 0.
"""
qc = QuantumCircuit(1, 1) # A quantum circuit with a single qubit and a single classical bit
qc.reset(0)
# We encode '0' as the qubit state |0⟩, and '1' as |1⟩
# Since the qubit is initially |0⟩, we don't need to do anything for an input of '0'
# For an input of '1', we do an x to rotate the |0⟩ to |1⟩
if inp=='1':
qc.x(0)
# barrier between input state and gate operation
qc.barrier()
# Now we've encoded the input, we can do a NOT on it using x
qc.x(0)
#barrier between gate operation and measurement
qc.barrier()
# Finally, we extract the |0⟩/|1⟩ output of the qubit and encode it in the bit c[0]
qc.measure(0,0)
qc.draw('mpl')
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp in ['0', '1']:
qc, out = NOT(inp)
print('NOT with input',inp,'gives output',out)
display(qc.draw())
print('\n')
###Output
NOT with input 0 gives output 1
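Before the gate exercises, here is a quick reference for how the three building-block gates are applied to a `QuantumCircuit`. This is a minimal sketch for orientation only (the qubit indices are arbitrary), not part of the graded functions:

```python
# Quick reference (illustration only): the three gates used in this part.
from qiskit import QuantumCircuit

demo = QuantumCircuit(3)
demo.x(0)          # NOT: flips qubit 0 between |0> and |1>
demo.cx(0, 1)      # CNOT: flips qubit 1 when qubit 0 is |1>
demo.ccx(0, 1, 2)  # Toffoli: flips qubit 2 when qubits 0 and 1 are both |1>
demo.draw()
```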
###Markdown
&#128211; XOR gateTakes two binary strings as input and gives one as output.The output is '0' when the inputs are equal and '1' otherwise.
###Code
def XOR(inp1,inp2):
"""An XOR gate.
Parameters:
        inp1 (str): Input 1, encoded in qubit 0.
        inp2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output XOR circuit.
str: Output value measured from qubit 1.
"""
qc = QuantumCircuit(2, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
# barrier between input state and gate operation
qc.barrier()
# this is where your program for quantum XOR gate goes
# barrier between input state and gate operation
qc.barrier()
qc.measure(1,0) # output from qubit 1 is measured
#We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
#Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = XOR(inp1, inp2)
print('XOR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; AND gateTakes two binary strings as input and gives one as output.The output is `'1'` only when both the inputs are `'1'`.
###Code
def AND(inp1,inp2):
"""An AND gate.
Parameters:
        inp1 (str): Input 1, encoded in qubit 0.
        inp2 (str): Input 2, encoded in qubit 1.
    Returns:
        QuantumCircuit: Output AND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(2))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum AND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc, backend, shots=1, memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = AND(inp1, inp2)
print('AND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; NAND gateTakes two binary strings as input and gives one as output.The output is `'0'` only when both the inputs are `'1'`.
###Code
def NAND(inp1,inp2):
"""An NAND gate.
Parameters:
inpt1 (str): Input 1, encoded in qubit 0.
inpt2 (str): Input 2, encoded in qubit 1.
Returns:
QuantumCircuit: Output NAND circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum NAND gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = NAND(inp1, inp2)
print('NAND with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
&#128211; OR gateTakes two binary strings as input and gives one as output.The output is '1' if either input is '1'.
###Code
def OR(inp1,inp2):
"""An OR gate.
Parameters:
        inp1 (str): Input 1, encoded in qubit 0.
        inp2 (str): Input 2, encoded in qubit 1.
    Returns:
        QuantumCircuit: Output OR circuit.
str: Output value measured from qubit 2.
"""
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
# this is where your program for quantum OR gate goes
qc.barrier()
qc.measure(2, 0) # output from qubit 2 is measured
# We'll run the program on a simulator
backend = Aer.get_backend('qasm_simulator')
# Since the output will be deterministic, we can use just a single shot to get it
job = execute(qc,backend,shots=1,memory=True)
output = job.result().get_memory()[0]
return qc, output
## Test the function
for inp1 in ['0', '1']:
for inp2 in ['0', '1']:
qc, output = OR(inp1, inp2)
print('OR with inputs',inp1,inp2,'gives output',output)
display(qc.draw())
print('\n')
###Output
_____no_output_____
###Markdown
Part 2: AND gate on a Quantum Computer<div style="background: #E8E7EB; border-radius: 5px;-moz-border-radius: 5px;"> <p style="background: #800080; border-radius: 5px 5px 0px 0px; padding: 10px 0px 10px 10px; font-size:18px; color:white; ">Goal <p style=" padding: 0px 0px 10px 10px; font-size:16px;">Execute the AND gate on two quantum systems and learn how different circuit properties affect the result.In Part 1 you made an `AND` gate from quantum gates and executed it on the simulator. Here in Part 2 you will do it again, but instead run the circuits on a real quantum computer. When using a real quantum system, one thing you should keep in mind is that present-day quantum computers are not fault tolerant; they are noisy.The 'noise' in a quantum system is the collective effect of all the things that should not happen, but nevertheless do. Noise results in outputs that are not always what we would expect. There is noise associated with all processes in a quantum circuit: preparing the initial state, applying gates, and qubit measurement. For the gates, noise levels can vary between different gates and between different qubits; `cx` gates are typically noisier than any single-qubit gate.Here we will use the quantum systems from the IBM Quantum Experience. If you do not have access yet, you can sign up [here](https://qiskit.org/documentation/install.html#access-ibm-quantum-systems).Now that you are ready to use a real quantum computer, let's begin. Step 1. Choosing a device First load the account from the credentials saved on disk by running the following cell:
###Code
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
After your account is loaded, you can see the list of providers that you have access to by running the cell below. Each provider offers different systems for use. For open users, there is typically only one provider: `ibm-q/open/main`.
###Code
IBMQ.providers()
###Output
_____no_output_____
###Markdown
Let us grab the provider using `get_provider`. The command `provider.backends()` shows you the list of backends that are available to you from the selected provider.
###Code
provider = IBMQ.get_provider('ibm-q')
provider.backends()
###Output
_____no_output_____
###Markdown
Among these options, you may pick one of the systems to run your circuits on. All except the `ibmq_qasm_simulator` are real quantum computers that you can use. The differences among these systems reside in the number of qubits, their connectivity, and the system error rates. Upon executing the following cell you will be presented with a widget that displays all of the information about your chosen backend. You can obtain the information you need by clicking on the tabs. For example, the backend status, number of qubits, and connectivity are under the `configuration` tab, whereas the `Error Map` tab reveals the latest noise information for the system.
###Code
import qiskit.tools.jupyter
backend_ex = provider.get_backend('ibmq_16_melbourne')
backend_ex
###Output
_____no_output_____
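If you prefer to read a few of these details in code rather than from the widget, the sketch below pulls them from the standard backend accessors. It assumes the `provider` object loaded above, and the exact values will depend on the current calibration and queue:

```python
# Sketch: query the same backend information programmatically (assumes `provider` from above).
b = provider.get_backend('ibmq_16_melbourne')   # the system shown in the widget above
config = b.configuration()
status = b.status()
print('Number of qubits :', config.n_qubits)
print('Coupling map     :', config.coupling_map)
print('Simulator?       :', config.simulator)
print('Operational?     :', status.operational)
print('Pending jobs     :', status.pending_jobs)
```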
###Markdown
For our AND gate circuit, we need a backend with three or more qubits, which is true for all the real systems except for `ibmq_armonk`. Below is an example of how to filter backends, where we filter for number of qubits, and remove simulators:
###Code
backends = provider.backends(filters = lambda x:x.configuration().n_qubits >= 2 and not x.configuration().simulator
and x.status().operational==True)
backends
###Output
_____no_output_____
###Markdown
One convenient way to choose a system is to use the `least_busy` function to get the backend with the lowest number of jobs in the queue. The downside is that the result might have relatively poor accuracy because, not surprisingly, the lowest-error-rate systems are the most popular.
###Code
from qiskit.providers.ibmq import least_busy
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= 2 and
not x.configuration().simulator and x.status().operational==True))
backend
###Output
_____no_output_____
###Markdown
Real quantum computers need to be recalibrated regularly, and the fidelity of a specific qubit or gate can change over time. Therefore, which system will produce results with the least error can vary. `ibmq_athens` tends to show relatively low error rates.In this exercise, we select two systems: `ibmq_athens` for its low error rates, and `ibmqx2` for its additional connectivity, in particular its triangular connectivity, which will be useful for circuits with Toffoli gates.
###Code
# run this cell
backend1 = provider.get_backend('ibmqx2')
backend2 = provider.get_backend('ibmq_athens')
###Output
_____no_output_____
###Markdown
Step 2. Define AND function for a real deviceWe now define the AND function. We choose 8192 as the number of shots, the maximum number of shots for open IBM systems, to reduce the variance in the final result. Related information is explained in detail [here](https://quantum-computing.ibm.com/docs/manage/backends/configuration) Qiskit Transpiler It is important to know that when running a circuit on a real quantum computer, circuits typically need to be transpiled for the backend that you select so that the circuit contains only those gates that the quantum computer can actually perform. Primarily this involves the addition of swap gates so that two-qubit gates in the circuit map onto pairs of qubits on the device that can actually perform these gates. The following cell shows the AND gate, represented as a Toffoli gate, decomposed into single- and two-qubit gates, which are the only types of gate that can be run on IBM hardware. Provided that CNOT gates can be performed between all three qubits (a triangle topology), no other gates are required.
###Code
qc_and = QuantumCircuit(3)
qc_and.ccx(0,1,2)
print('AND gate')
display(qc_and.draw())
print('\n\nTranspiled AND gate with all the required connectivity')
qc_and.decompose().draw()
###Output
AND gate
###Markdown
In addition, there are often optimizations that the transpiler can perform that reduce the overall gate count, and thus the total length of the input circuits. Note that the addition of swaps to match the device topology and the optimizations for reducing the length of a circuit are at odds with each other. In what follows we will make use of `initial_layout`, which allows us to pick the qubits on a device used for the computation, and `optimization_level`, an argument that selects from internal defaults for circuit swap mapping and optimization methods.You can learn more about the transpile function in depth [here](https://qiskit.org/documentation/apidoc/transpiler.html). Let's modify the AND function from Part 1 for the real system, with the transpile step included.
###Code
from qiskit.tools.monitor import job_monitor
# run the cell to define AND gate for real quantum system
def AND(inp1, inp2, backend, layout):
qc = QuantumCircuit(3, 1)
qc.reset(range(3))
if inp1=='1':
qc.x(0)
if inp2=='1':
qc.x(1)
qc.barrier()
qc.ccx(0, 1, 2)
qc.barrier()
qc.measure(2, 0)
qc_trans = transpile(qc, backend, initial_layout=layout, optimization_level=3)
job = execute(qc_trans, backend, shots=8192)
print(job.job_id())
job_monitor(job)
output = job.result().get_counts()
return qc_trans, output
###Output
_____no_output_____
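As a rough illustration of the two transpiler arguments described above, the sketch below transpiles a bare Toffoli-plus-measurement circuit at each optimization level and prints the resulting depth and CNOT count. It assumes `backend1` from above (and `QuantumCircuit`/`transpile` imported earlier); the layout is only an example, and the numbers you see will vary from one calibration and transpiler run to the next:

```python
# Sketch: effect of optimization_level (with a fixed example initial_layout) on the transpiled circuit.
toy = QuantumCircuit(3, 1)
toy.ccx(0, 1, 2)
toy.measure(2, 0)

for level in range(4):
    trans = transpile(toy, backend1, initial_layout=[0, 1, 2], optimization_level=level)
    print('optimization_level', level,
          '-> depth:', trans.depth(),
          ' nonlocal gates:', trans.num_nonlocal_gates())
```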
###Markdown
When you submit jobs to quantum systems, `job_monitor` will start tracking where your submitted job is in the pipeline. Case A) Three qubits on ibmqx2 with the triangle connectivity First, examine `ibmqx2` using the widget introduced earlier. Find a group of three qubits with triangle connectivity and determine your initial layout.
###Code
# run this cell for the widget
backend1
###Output
_____no_output_____
###Markdown
&#128211; Assign your choice of layout to the list variable layout1 in the cell below
###Code
# Assign your choice of the initial_layout to the variable layout1 as a list
# ex) layout1 = [0,2,4]
layout1 = []
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute the `AND` gate on `ibmqx2` by running the cell below.
###Code
output1_all = []
qc_trans1_all = []
prob1_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans1, output1 = AND(input1, input2, backend1, layout1)
output1_all.append(output1)
qc_trans1_all.append(qc_trans1)
prob = output1[str(int( input1=='1' and input2=='1' ))]/8192
prob1_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print( '{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Once your job has finished running, you can easily access the results via:```pythonresults = backend.retrieve_job('JOB_ID').result()``` (see the short sketch below). Your job_ids will be printed out by the `AND` function defined above. You can also find the job_ids in the results section of your `IQX` account. More information can be found [here](https://quantum-computing.ibm.com/docs/manage/account/ibmq). Case B) Three qubits on ibmq_athens for the linear nearest neighbor connectivity Examine `ibmq_athens` through the widget by running the cell below.
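A minimal sketch of that retrieval for one of the `ibmqx2` runs above; `'JOB_ID'` is a placeholder for one of the ids printed by `AND`:

```python
# Sketch: retrieve a finished job by its id and read the counts.
# 'JOB_ID' is a placeholder -- paste one of the ids printed by AND() above.
old_job = backend1.retrieve_job('JOB_ID')
counts = old_job.result().get_counts()
print(counts)
```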
###Code
backend2
###Output
_____no_output_____
###Markdown
&#128211; Find three qubits with the linear nearest neighbor connectivity. Determine the initial layout considering the error map and assign it to the list variable layout2.
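If you would rather read the calibration data in code than from the error-map tab, a sketch along these lines can help; it only lists the current CNOT errors (which change with every calibration), and the choice of layout is still yours:

```python
# Sketch: list the CNOT errors on ibmq_athens (assumes backend2 from above).
props2 = backend2.properties()
config2 = backend2.configuration()
for pair in config2.coupling_map:
    print('CNOT', pair, 'error: {:.4f}'.format(props2.gate_error('cx', pair)))
```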
###Code
layout2 = []
###Output
_____no_output_____
###Markdown
&#128211; Describe the reason for your choice of initial layout. Execute the `AND` gate on `ibmq_athens` by running the cell below.
###Code
output2_all = []
qc_trans2_all = []
prob2_all = []
worst = 1
best = 0
for input1 in ['0','1']:
for input2 in ['0','1']:
qc_trans2, output2 = AND(input1, input2, backend2, layout2)
output2_all.append(output2)
qc_trans2_all.append(qc_trans2)
prob = output2[str(int( input1=='1' and input2=='1' ))]/8192
prob2_all.append(prob)
print('\nProbability of correct answer for inputs',input1,input2)
print('{:.2f}'.format(prob) )
print('---------------------------------')
worst = min(worst,prob)
best = max(best, prob)
print('')
print('\nThe highest of these probabilities was {:.2f}'.format(best))
print('The lowest of these probabilities was {:.2f}'.format(worst))
###Output
_____no_output_____
###Markdown
Step 3. Interpret the result There are several quantities that distinguish the circuits. Chief among them is the **circuit depth**. Circuit depth is defined in detail [here](https://qiskit.org/documentation/apidoc/circuit.html) (see the Supplementary Information and click the Quantum Circuit Properties tab). Circuit depth is proportional to the number of gates in a circuit, and loosely corresponds to the runtime of the circuit on hardware. Therefore, circuit depth is an easy-to-compute metric that can be used to estimate the fidelity of an executed circuit.A second important quantity is the number of **nonlocal** (multi-qubit) **gates** in a circuit. On IBM Quantum systems, the only nonlocal gate that can physically be performed is the CNOT gate. Recall that CNOT gates are the most expensive gates to perform, and thus the total number of these gates also serves as a good benchmark for the accuracy of the final output. A) Circuit depth and result accuracy Running the cells below will display the four transpiled AND gate circuit diagrams, with the corresponding inputs, that were executed on `ibmq_athens`, along with their circuit depths and the success probability of producing the correct answer.
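Before running those cells, you can check what these two quantities measure on the idealized circuit itself; the sketch below uses the `qc_and` Toffoli circuit built in Step 2:

```python
# Sketch: depth and nonlocal-gate count of the decomposed Toffoli from Step 2.
decomposed = qc_and.decompose()
print('Depth of the decomposed AND gate :', decomposed.depth())
print('Nonlocal (CNOT) gates            :', decomposed.num_nonlocal_gates())
```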
###Code
print('Transpiled AND gate circuit for ibmq_athens with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[0]) )
qc_trans2_all[0].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[1]) )
qc_trans2_all[1].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans2_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[2]) )
qc_trans2_all[2].draw()
print('Transpiled AND gate circuit for ibmq_athens with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans2_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans2_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob2_all[3]) )
qc_trans2_all[3].draw()
###Output
_____no_output_____
###Markdown
&#128211; Explain the reason for the dissimilarity of the circuits. Describe the relation between the properties of the circuits and the accuracy of the outcomes. B) Qubit connectivity and circuit depth Investigate the transpiled circuits for `ibmqx2` by running the cells below.
###Code
print('Transpiled AND gate circuit for ibmqx2 with input 0 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[0].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[0].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[0]) )
qc_trans1_all[0].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 0 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[1].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[1].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[1]) )
qc_trans1_all[1].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 0')
print('\nThe circuit depth : {}'.format (qc_trans1_all[2].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[2].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[2]) )
qc_trans1_all[2].draw()
print('Transpiled AND gate circuit for ibmqx2 with input 1 1')
print('\nThe circuit depth : {}'.format (qc_trans1_all[3].depth()))
print('# of nonlocal gates : {}'.format (qc_trans1_all[3].num_nonlocal_gates()))
print('Probability of correct answer : {:.2f}'.format(prob1_all[3]) )
qc_trans1_all[3].draw()
###Output
_____no_output_____
|
analysis/Edouard/EDA.ipynb
|
###Markdown
Top 5 Features on the Moon
###Code
moondata = df[df['Planet Name']=='Moon']
sns.countplot(data=moondata, y='FeatureType',order =moondata['FeatureType'].value_counts().index[:5])
###Output
_____no_output_____
###Markdown
This plot displays the top 5 feature types on the Moon. It shows that the Moon's named features mostly consist of satellite features and craters. Top 8 Features on Venus
###Code
venusdata = df[df['Planet Name'] == 'Venus']
venusdata.reset_index()
sns.countplot(data = venusdata , y='FeatureType', order=venusdata['FeatureType'].value_counts().index[:8])
###Output
_____no_output_____
###Markdown
Venus Planet Feature Map
###Code
venus_features = sns.scatterplot(data = venusdata, y='Latitude of Center of Planetary Feature',x='Longitude of Center of Planetary Feature',hue='FeatureType')
plt.xlabel("Longitude", size=20)
plt.ylabel("Latitude", size=20)
venus_features.legend(loc='center left', bbox_to_anchor=(1.00, 0.5), ncol=1)
# map of geo locations of all planetary features on the venus
###Output
_____no_output_____
###Markdown
Moon Feature Map
###Code
moon_features = sns.scatterplot(data = moondata, y='Latitude of Center of Planetary Feature',x='Longitude of Center of Planetary Feature', hue='FeatureType')
moon_features.set(xlim=(0, 360))
moon_features.legend(loc='center left', bbox_to_anchor=(1.00, 0.5), ncol=1)
plt.xlabel("Longitude", size=20)
plt.ylabel("Latitude", size=20)
# map of geo locations of all planetary features on the moon
###Output
_____no_output_____
###Markdown
**Moon Features and Size Map**This section will focus solely on Moon data points.
###Code
#cmap = sns.cubehelix_palette(rot=-.2, as_cmap=True)
#c = sns.color_palette("flare", as_cmap=True)
moon_feat_size_map = sns.relplot(
data=moondata,
y="Latitude of Center of Planetary Feature", x="Longitude of Center of Planetary Feature",
hue="FeatureType", size="Size of Planetary Feature(km)",
sizes=(1, 200))
moon_feat_size_map.set(title='Moon Features and Size Map')
moon_feat_size_map.set(xlim=(0, 360))
moon_feat_size_map.set_xlabels("Longitude", size=20)
moon_feat_size_map.set_ylabels("Latitude", size=20)
moon_feat_size_map.despine(left=True, bottom=True)
###Output
_____no_output_____
###Markdown
The visualization illustrates the many types of features on the Moon and their sizes. Each feature is placed according to its longitude and latitude, and every point on the graph is sized according to the planetary feature it represents. To understand the feature types, an average viewer would need to translate the Greek names of the features into their English meanings. **Venus Features and Size Map**
###Code
venus_feat_size_map = sns.relplot(
data=venusdata,
y="Latitude of Center of Planetary Feature", x="Longitude of Center of Planetary Feature",
hue="FeatureType", size="Size of Planetary Feature(km)",
sizes=(1, 200))
venus_feat_size_map.set(xlim=(0, 360))
venus_feat_size_map.ax.xaxis.grid(True, "minor", linewidth=.25)
venus_feat_size_map.ax.yaxis.grid(True, "minor", linewidth=.25)
venus_feat_size_map.set_xlabels("Longitude", size=20)
venus_feat_size_map.set_ylabels("Latitude", size=20)
venus_feat_size_map.despine(left=True, bottom=True)
###Output
_____no_output_____
###Markdown
This visualization displays the location and size of features on Venus, similar to the Moon plot above. In comparison to the Moon, Venus has far fewer satellite features on its surface. The visualization shows that Venus has an abundance of craters and mountains (mons). A feature type more prevalent on the surface is 'Regio': large areas that show color distinctions from adjacent areas. Venus has many of these, indicating a colorful surface compared to other bodies such as the Moon. Mars Features and Size Map
###Code
marsdata = df[df['Planet Name']== 'Mars']
mars_feat_size_map = sns.relplot(
data=marsdata,
y="Latitude of Center of Planetary Feature", x="Longitude of Center of Planetary Feature",
hue="FeatureType", size="Size of Planetary Feature(km)",
sizes=(1, 200))
mars_feat_size_map.set(xlim=(0, 360))
mars_feat_size_map.ax.xaxis.grid(True, "minor", linewidth=.25)
mars_feat_size_map.ax.yaxis.grid(True, "minor", linewidth=.25)
mars_feat_size_map.set_xlabels("Longitude", size=20)
mars_feat_size_map.set_ylabels("Latitude", size=20)
mars_feat_size_map.despine(left=True, bottom=True)
###Output
_____no_output_____
|
lectures/01_intro/code/learn-pandas/lessons/02 - Lesson.ipynb
|
###Markdown
Lesson 2 These tutorials are also available through an email course; please visit http://www.hedaro.com/pandas-tutorial to sign up today. **Create Data** - We begin by creating our own data set for analysis. This prevents the end user reading this tutorial from having to download any files to replicate the results below. We will export this data set to a text file so that you can get some experience pulling data from a text file. **Get Data** - We will learn how to read in the text file containing the baby names. The data consists of baby names born in the year 1880. **Prepare Data** - Here we will simply take a look at the data and make sure it is clean. By clean I mean we will take a look inside the contents of the text file and look for any anomalies. These can include missing data, inconsistencies in the data, or any other data that seems out of place. If any are found, we will then have to make decisions on what to do with these records. **Analyze Data** - We will simply find the most popular name in a specific year. **Present Data** - Through tabular data and a graph, we will clearly show the end user the most popular name in a specific year. ***NOTE: Make sure you have looked through all previous lessons, as the knowledge learned in previous lessons will be needed for this exercise.*** > ***Numpy*** will be used to help generate the sample data set. Importing the libraries is the first step we will take in the lesson.
###Code
# Import all libraries needed for the tutorial
import pandas as pd
from numpy import random
import matplotlib.pyplot as plt
import sys #only needed to determine Python version number
import matplotlib #only needed to determine Matplotlib version number
# Enable inline plotting
%matplotlib inline
print('Python version ' + sys.version)
print('Pandas version ' + pd.__version__)
print('Matplotlib version ' + matplotlib.__version__)
###Output
_____no_output_____
###Markdown
Create Data The data set will consist of 1,000 baby names and the number of births recorded for that year (1880). We will also add plenty of duplicates so you will see the same baby name more than once. You can think of the multiple entries per name as simply being different hospitals around the country reporting the number of births per baby name. So if two hospitals reported the baby name "Bob", the data will have two values for the name Bob. We will start by creating the random set of baby names.
###Code
# The inital set of baby names
names = ['Bob','Jessica','Mary','John','Mel']
###Output
_____no_output_____
###Markdown
To make a random list of 1,000 baby names using the five above we will do the following: * Generate a random number between 0 and 4 To do this we will be using the functions ***seed***, ***randint***, ***len***, ***range***, and ***zip***.
###Code
# This will ensure the random samples below can be reproduced.
# This means the random samples will always be identical.
random.seed?
random.randint?
len?
range?
zip?
###Output
_____no_output_____
###Markdown
**seed(500)** - Create seed**randint(low=0,high=len(names))** - Generate a random integer between zero and the length of the list "names". **names[n]** - Select the name where its index is equal to n. **for i in range(n)** - Loop until i is equal to n, i.e. 1,2,3,....n. **random_names** = Select a random name from the name list and do this n times.
###Code
random.seed(500)
random_names = [names[random.randint(low=0,high=len(names))] for i in range(1000)]
# Print first 10 records
random_names[:10]
###Output
_____no_output_____
###Markdown
Generate random numbers between 0 and 1000
###Code
# The number of births per name for the year 1880
births = [random.randint(low=0,high=1000) for i in range(1000)]
births[:10]
###Output
_____no_output_____
###Markdown
Merge the ***names*** and the ***births*** data set using the ***zip*** function.
###Code
BabyDataSet = list(zip(random_names,births))
BabyDataSet[:10]
###Output
_____no_output_____
###Markdown
We are basically done creating the data set. We now will use the ***pandas*** library to export this data set into a csv file. ***df*** will be a ***DataFrame*** object. You can think of this object as holding the contents of the BabyDataSet in a format similar to a sql table or an excel spreadsheet. Let's take a look below at the contents inside ***df***.
###Code
df = pd.DataFrame(data = BabyDataSet, columns=['Names', 'Births'])
df[:10]
###Output
_____no_output_____
###Markdown
* Export the dataframe to a ***text*** file. We can name the file ***births1880.txt***. The function ***to_csv*** will be used to export. The file will be saved in the same location as the notebook unless specified otherwise.
###Code
df.to_csv?
###Output
_____no_output_____
###Markdown
The only parameters we will use are ***index*** and ***header***. Setting these parameters to False will prevent the index and header names from being exported. Change the values of these parameters to get a better understanding of their use.
###Code
df.to_csv('births1880.txt',index=False,header=False)
###Output
_____no_output_____
###Markdown
Get Data To pull in the text file, we will use the pandas function *read_csv*. Let us take a look at this function and what inputs it takes.
###Code
pd.read_csv?
###Output
_____no_output_____
###Markdown
Even though this function has many parameters, we will simply pass it the location of the text file. Location = C:\Users\TYPE_USER_NAME\.xy\startups\births1880.txt ***Note:*** Depending on where you save your notebooks, you may need to modify the location above.
###Code
Location = r'C:\Users\david\notebooks\update\births1880.txt'
df = pd.read_csv(Location)
###Output
_____no_output_____
###Markdown
Notice the ***r*** before the string. Since backslashes are special characters in ordinary strings, prefixing the string with an ***r*** makes it a raw string, so the backslashes are not treated as escape characters.
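A tiny illustration of what the prefix changes (a sketch, independent of the baby-names data):

```python
# The r prefix makes a "raw" string, so backslashes are kept literally
# instead of being read as escape sequences such as \n (newline) or \t (tab).
plain = 'C:\names\table.txt'    # \n and \t are interpreted as newline and tab
raw   = r'C:\names\table.txt'   # backslashes are kept exactly as typed
print(plain)
print(raw)
```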
###Code
df.info()
###Output
_____no_output_____
###Markdown
Info says: * There are ***999*** records in the data set * There is a column named ***Mary*** with 999 values * There is a column named ***968*** with 999 values * Out of the ***two*** columns, one is ***numeric***, the other is ***non numeric*** To actually see the contents of the dataframe we can use the ***head()*** function which by default will return the first five records. You can also pass in a number n to return the top n records of the dataframe.
###Code
df.head()
###Output
_____no_output_____
###Markdown
This brings us to our first problem of the exercise. The ***read_csv*** function treated the first record in the text file as the header names. This is obviously not correct since the text file did not provide us with header names. To correct this we will pass the ***header*** parameter to the *read_csv* function and set it to ***None*** (which means null in Python).
###Code
df = pd.read_csv(Location, header=None)
df.info()
###Output
_____no_output_____
###Markdown
Info now says: * There are ***1000*** records in the data set * There is a column named ***0*** with 1000 values * There is a column named ***1*** with 1000 values * Out of the ***two*** columns, one is ***numeric***, the other is ***non numeric*** Now lets take a look at the last five records of the dataframe
###Code
df.tail()
###Output
_____no_output_____
###Markdown
If we wanted to give the columns specific names, we would have to pass another parameter called ***names***. We can also omit the *header* parameter.
###Code
df = pd.read_csv(Location, names=['Names','Births'])
df.head(5)
###Output
_____no_output_____
###Markdown
You can think of the numbers [0,1,2,3,4,...] as the row numbers in an Excel file. In pandas these are part of the ***index*** of the dataframe. You can think of the index as the primary key of a sql table, with the exception that an index is allowed to have duplicates. ***[Names, Births]*** can be thought of as column headers similar to the ones found in an Excel spreadsheet or sql database. Delete the txt file now that we are done using it.
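Before deleting the file, you can look at both of these attributes directly on the DataFrame that is already in memory (a minimal sketch):

```python
# Sketch: the row index and the column labels of the dataframe loaded above.
print(df.index)     # RangeIndex over the 1,000 row numbers
print(df.columns)   # Index(['Names', 'Births'], dtype='object')
```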
###Code
import os
os.remove(Location)
###Output
_____no_output_____
###Markdown
Prepare Data The data we have consists of baby names and the number of births in the year 1880. We already know that we have 1,000 records and none of the records are missing (non-null values). We can verify the "Names" column still only has five unique names. We can use the ***unique*** property of the dataframe to find all the unique records of the "Names" column.
###Code
# Method 1:
df['Names'].unique()
# If you actually want to print the unique values:
for x in df['Names'].unique():
print(x)
# Method 2:
print(df['Names'].describe())
###Output
_____no_output_____
###Markdown
Since we have multiple values per baby name, we need to aggregate this data so we only have a baby name appear once. This means the 1,000 rows will need to become 5. We can accomplish this by using the ***groupby*** function.
###Code
df.groupby?
# Create a groupby object
name = df.groupby('Names')
# Apply the sum function to the groupby object
df = name.sum()
df
###Output
_____no_output_____
###Markdown
Analyze Data To find the most popular name, or the baby name with the highest birth rate, we can do one of the following. * Sort the dataframe and select the top row* Use the ***max()*** attribute to find the maximum value
###Code
# Method 1:
Sorted = df.sort_values(['Births'], ascending=False)
Sorted.head(1)
# Method 2:
df['Births'].max()
###Output
_____no_output_____
###Markdown
Present Data Here we can plot the ***Births*** column and label the graph to show the end user the highest point on the graph. In conjunction with the table, the end user has a clear picture that **Bob** is the most popular baby name in the data set.
###Code
# Create graph
df['Births'].plot.bar()
print("The most popular name")
df.sort_values(by='Births', ascending=False)
###Output
_____no_output_____
|