# Travelling Salesman Problem (TSP)
Given a list of cities and the distances between every pair of cities, the travelling salesman problem asks for the route with the smallest total distance that visits every city exactly once.
<img src="https://user-images.githubusercontent.com/5043340/45661145-2f8a7a80-bb37-11e8-99d1-42368906cfff.png" width="400">
First, install blueqat.
```
!pip3 install blueqat
```
Import the libraries and create an `Opt` instance.
```
import blueqat.wq as wq
import numpy as np
a = wq.Opt()
```
## Example
Let's look at an example with 4 cities A, B, C, and D, each of which must be visited exactly once. Every pair of cities is connected, with the distances shown below.
<img src="https://user-images.githubusercontent.com/5043340/45661003-8ba0cf00-bb36-11e8-95fc-573e77ded327.png" width="400">
## QUBO matrix
We need a QUBO matrix to solve this problem on the Ising model.
The cost function is
$H = \sum_{v=1}^N\left( 1-\sum_{j=1}^N x_{v,j} \right)^2 + \sum_{j=1}^N\left(1-\sum_{v=1}^N x_{v,j} \right)^2 + B\sum_{(u,v)\in E}W_{u,v}\sum_{j=1}^N x_{u,j}\, x_{v,j+1} \qquad (1)$
$x_{v,j}$ is a binary variable: $x_{v,j} = 1$ if city $v$ is visited at position $j$ of the route, and $0$ otherwise.
For $N$ cities we need $N^2$ such variables, so the QUBO matrix is $N^2 \times N^2$.
With 4 cities we have $4 \times 4 = 16$ variables and therefore a $16 \times 16$ QUBO matrix.
For simplicity we write $x_{v,j}$ as $q_i$:
$x_{11}, x_{12}, x_{13}, x_{14}$ → $q_0, q_1, q_2, q_3$
$x_{21}, x_{22}, x_{23}, x_{24}$ → $q_4, q_5, q_6, q_7$
$x_{31}, x_{32}, x_{33}, x_{34}$ → $q_8, q_{9}, q_{10}, q_{11}$
$x_{41}, x_{42}, x_{43}, x_{44}$ → $q_{12}, q_{13}, q_{14}, q_{15}$
We number the cities as 1: A, 2: B, 3: C, 4: D.
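In code, this mapping from (city, position) to a flat qubit index is a one-line formula; a small sketch:
```
def qubit_index(v, j, n=4):
    """Map city v (1..n) visited at position j (1..n) to the flat index of q_i."""
    return (v - 1) * n + (j - 1)

assert qubit_index(1, 1) == 0   # x_{11} -> q_0
assert qubit_index(3, 2) == 9   # x_{32} -> q_9
```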
To solve the TSP we need two constraint terms and one cost term:
* Visit just once on every city.
* Visit just one city on jth order.
* Minimize the total distance.
## Visit just once on every city
<img src="https://user-images.githubusercontent.com/5043340/45663268-8a749f80-bb40-11e8-8c4a-8b2ad1dd3f35.png" width="400">
For the constraint that every city is visited exactly once, exactly one qubit in each row must be 1 and the others 0.
For example, $q_0+q_1+q_2+q_3 = 1$. Applying this to every row gives:
$(1-q_0-q_1-q_2-q_3)^2+(1-q_4-q_5-q_6-q_7)^2+(1-q_8-q_9-q_{10}-q_{11})^2+(1-q_{12}-q_{13}-q_{14}-q_{15})^2$
## Visit just one city on jth order
Think about the second constraint.
<img src="https://user-images.githubusercontent.com/5043340/45666641-1bec0d80-bb51-11e8-87f7-0d1bb522f2e8.png" width="400">
Now we look at the columns: exactly one qubit in each column must be 1 and the others 0.
$(1-q_0-q_4-q_8-q_{12})^2+(1-q_1-q_5-q_9-q_{13})^2+(1-q_2-q_6-q_{10}-q_{14})^2+(1-q_{3}-q_{7}-q_{11}-q_{15})^2$
Expanding both constraints and summing them, we get:
${2q_0q_1 + 2q_0q_{12} + 2q_0q_2 + 2q_0q_3 + 2q_0q_4 + 2q_0q_8 - 2q_0}$
${+ 2q_1q_{13} + 2q_1q_2 + 2q_1q_3 + 2q_1q_5 + 2q_1q_9 - 2q_1}$
${ + 2q_{10}q_{11} + 2q_{10}q_{14} + 2q_{10}q_2 + 2q_{10}q_6 + 2q_{10}q_8 + 2q_{10}q_9 - 2q_{10} }$
${+ 2q_{11}q_{15} + 2q_{11}q_3 + 2q_{11}q_7 + 2q_{11}q_8 + 2q_{11}q_9 - 2q_{11}}$
${+ 2q_{12}q_{13} + 2q_{12}q_{14} + 2q_{12}q_{15} + 2q_{12}q_4 + 2q_{12}q_8 - 2q_{12} }$
${+ 2q_{13}q_{14}+ 2q_{13}q_{15} + 2q_{13}q_5 + 2q_{13}q_9 - 2q_{13} }$
${+ 2q_{14}q_{15} + 2q_{14}q_2 + 2q_{14}q_6 - 2q_{14}}$
${+ 2q_{15}q_3 + 2q_{15}q_7 - 2q_{15}}$
${+ 2q_2q_3 + 2q_2q_6 - 2q_2 + 2q_3q_7 - 2q_3 }$
${+ 2q_4q_5 + 2q_4q_6 + 2q_4q_7 + 2q_4q_8 - 2q_4 + 2q_5q_6 + 2q_5q_7 + 2q_5q_9 - 2q_5 }$
${ +2q_6q_7 - 2q_6 - 2q_7 + 2q_8q_9 - 2q_8 - 2q_9 + 8}$
Writing this as a QUBO matrix, we get:
<img src="https://user-images.githubusercontent.com/5043340/45666980-42f70f00-bb52-11e8-93a7-245e9d0f5609.png" width="400">
## Minimize the total distance
Finally, we handle the cost term for the total distance: the distance between each pair of cities becomes the corresponding coupling entry $J_{ij}$ in this QUBO matrix.
<img src="https://user-images.githubusercontent.com/5043340/45667633-f3661280-bb54-11e8-9fbe-5dba63749b1d.png" width="400">
## Add all of the equation and calculate
We choose the parameter $B=0.25$ and obtain the final QUBO matrix as the sum of the two matrices above.
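As a cross-check, the constraint part of the QUBO can also be generated programmatically from the two penalty terms. Here is a minimal sketch (using the q index convention above and the upper-triangular QUBO layout); it should reproduce the first hand-written matrix in the cell below:
```
N = 4  # number of cities
Q_constraint = np.zeros((N * N, N * N))

# index convention from above: q = (city - 1) * N + (position - 1)
rows = [[v * N + j for j in range(N)] for v in range(N)]   # "visit every city once"
cols = [[v * N + j for v in range(N)] for j in range(N)]   # "one city per position"

for group in rows + cols:                # each (1 - sum q)^2 penalty
    for i in group:
        Q_constraint[i, i] += -1         # linear part (q_i^2 = q_i)
        for k in group:
            if k > i:
                Q_constraint[i, k] += 2  # quadratic part, upper triangle

print(Q_constraint[0])  # first row: -2, 2, 2, 2, 2, 0, 0, 0, 2, 0, 0, 0, 2, 0, 0, 0
```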
## Calculate
Enter the QUBO matrix in Python and run the solver.
```
a.qubo=np.array([
[-2,2,2,2,2,0,0,0,2,0,0,0,2,0,0,0],
[0,-2,2,2,0,2,0,0,0,2,0,0,0,2,0,0],
[0,0,-2,2,0,0,2,0,0,0,2,0,0,0,2,0],
[0,0,0,-2,0,0,0,2,0,0,0,2,0,0,0,2],
[0,0,0,0,-2,2,2,2,2,0,0,0,2,0,0,0],
[0,0,0,0,0,-2,2,2,0,2,0,0,0,2,0,0],
[0,0,0,0,0,0,-2,2,0,0,2,0,0,0,2,0],
[0,0,0,0,0,0,0,-2,0,0,0,2,0,0,0,2],
[0,0,0,0,0,0,0,0,-2,2,2,2,2,0,0,0],
[0,0,0,0,0,0,0,0,0,-2,2,2,0,2,0,0],
[0,0,0,0,0,0,0,0,0,0,-2,2,0,0,2,0],
[0,0,0,0,0,0,0,0,0,0,0,-2,0,0,0,2],
[0,0,0,0,0,0,0,0,0,0,0,0,-2,2,2,2],
[0,0,0,0,0,0,0,0,0,0,0,0,0,-2,2,2],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,-2,2],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-2],
])
+np.array([
[0,0,0,0,0,2,0,2,0,1,0,1,0,3,0,3],
[0,0,0,0,2,0,2,0,1,0,1,0,3,0,3,0],
[0,0,0,0,0,2,0,2,0,1,0,1,0,3,0,3],
[0,0,0,0,2,0,2,0,1,0,1,0,3,0,3,0],
[0,0,0,0,0,0,0,0,0,4,0,4,0,2,0,2],
[0,0,0,0,0,0,0,0,4,0,4,0,2,0,2,0],
[0,0,0,0,0,0,0,0,0,4,0,4,0,2,0,2],
[0,0,0,0,0,0,0,0,4,0,4,0,2,0,2,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2],
[0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,2,0,2],
[0,0,0,0,0,0,0,0,0,0,0,0,2,0,2,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0],
])*0.25
answer = a.sa()
```
And now we have,
```
print(answer)
```
The result is
[1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0]
This shows that the cities should be visited in the order A→C→D→B→A.
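The bit list can be decoded back into a route by reshaping it into the 4×4 city-by-position grid; a small sketch, assuming `answer` is the list above:
```
cities = "ABCD"
grid = np.array(answer).reshape(4, 4)   # rows = cities, columns = visiting order
order = [cities[v] for j in range(4) for v in range(4) if grid[v, j] == 1]
print(" -> ".join(order + [order[0]]))  # A -> C -> D -> B -> A
```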
---
# Spark on Tour
## Example: processing streaming data to build an NRT dashboard
In this notebook we walk through a complete example of how Spark's Structured Streaming API can be used to process a live stream of rating events in real time and produce, as output, a set of statistics (aggregated values) from which a real-time visualization and monitoring dashboard can be built.
Specifically, we simulate a video-on-demand platform on which users are watching movies and rating them. We take the rating events as they arrive on the stream and generate, in real time, viewing statistics aggregated per movie, so that we can monitor which movies are the most popular at any given moment.
### Import libraries, define schemas, and initialize the Spark session
```
import findspark
findspark.init()
import pyspark
from pyspark.sql.types import *
from pyspark.sql import SparkSession
import pyspark.sql.functions as f
from IPython.display import clear_output
import plotly.express as px
ratingSchema = StructType([
    StructField("user", IntegerType()),
    StructField("movie", IntegerType()),
    StructField("rating", FloatType())
])

movieSchema = StructType([
    StructField("movie", IntegerType()),
    StructField("title", StringType()),
    StructField("genres", StringType())
])

def foreach_batch_function(df, epoch_id):
    # Print the top-10 rows of each micro-batch, replacing the previous output
    mostPopularMovies = df.limit(10).toPandas()
    clear_output()
    print(mostPopularMovies)

# Set up the Spark session
sparkSession = (SparkSession.builder
                .appName("Movie ratings streaming")
                .master("local[*]")
                .config("spark.scheduler.mode", "FAIR")
                .getOrCreate())
sparkSession.sparkContext.setLogLevel("ERROR")
```
### Read the movies dataset
```
movies = sparkSession.read.csv("/tmp/movielens/movies.csv", schema=movieSchema, header=True)
movies.show()
```
### Initialize the ratings stream from Apache Kafka
```
dataset = (sparkSession
.readStream
.format("kafka")
.option("kafka.bootstrap.servers", "localhost:29092")
.option("subscribe", "ratings")
.load())
dataset = dataset.selectExpr("CAST(value AS STRING)")
dataset = dataset.select(f.from_json(f.col("value"), ratingSchema).alias("data")).select("data.*")
```
### Group by movie and compute the number of views and the average rating
```
dataset = dataset.select("movie", "rating") \
.groupBy("movie") \
.agg(f.count("rating").alias("num_ratings"), f.avg("rating").alias("avg_rating"))
```
### Join with the movies dataset to get the title
```
dataset = dataset.join(movies, dataset["movie"] == movies["movie"], "left_outer") \
.drop(movies["movie"]) \
.drop("genres")
```
### Sort the output by number of ratings (views)
```
dataset = dataset.select("movie", "title", "avg_rating", "num_ratings") \
.sort(f.desc("num_ratings"))
```
### Run the streaming query
```
query = dataset \
.writeStream \
.outputMode("complete") \
.format("console") \
.trigger(processingTime='5 seconds') \
.foreachBatch(foreach_batch_function) \
.start()
query.explain()
query.awaitTermination()
```
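The batch handler defined at the top of this notebook simply prints the top 10 rows of each micro-batch. Since `plotly.express` is already imported, the same hook could render a bar chart instead; a minimal alternative sketch (not what the notebook does), assuming the columns produced by the pipeline (`title`, `num_ratings`):
```
# Hypothetical alternative to foreach_batch_function: plot instead of print
def plot_batch(df, epoch_id):
    most_popular = df.limit(10).toPandas()
    clear_output()
    fig = px.bar(most_popular, x="title", y="num_ratings",
                 title=f"Most viewed movies (micro-batch {epoch_id})")
    fig.show()
```
It would be registered with `.foreachBatch(plot_batch)` in place of the handler above.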
---
# Federated Tensorflow Mnist Tutorial
# Long-living entities update
* The Director may now be running on another machine.
* We use the Federation API to communicate with the Director.
* The Federation object should hold a Director client (for user services).
* Keep in mind that several API instances may be connected to one Director.
* For now we do not consider how the Director is started.
* But it knows the data shape and target shape for the data science problem in the Federation.
* The Director holds the list of connected Envoys, so we no longer need to specify them.
* The Director and Envoys are responsible for encrypting connections, so we do not need to worry about certificates.
* Yet we MUST have a certificate to communicate with the Director.
* We MUST know the FQDN of the Director.
* The Director communicates the data and target shapes to the Federation interface object.
* The Experiment API may use this info to construct a dummy dataset and a `shard descriptor` stub.
```
# Install dependencies if not already installed
# !pip install tensorflow==2.3.1
```
## Connect to the Federation
```
# Create a federation
from openfl.interface.interactive_api.federation import Federation
# please use the same identifier that was used in the signed certificate
client_id = 'api'
cert_dir = 'cert'
director_node_fqdn = 'localhost'
director_port=50051
# 1) Run with API layer - Director mTLS
# If the user wants to enable mTLS they must provide the CA root chain and a signed key pair to the federation interface
# cert_chain = f'{cert_dir}/root_ca.crt'
# api_certificate = f'{cert_dir}/{client_id}.crt'
# api_private_key = f'{cert_dir}/{client_id}.key'
# federation = Federation(
# client_id=client_id,
# director_node_fqdn=director_node_fqdn,
# director_port=director_port,
# cert_chain=cert_chain,
# api_cert=api_certificate,
# api_private_key=api_private_key
# )
# --------------------------------------------------------------------------------------------------------------------
# 2) Run with TLS disabled (trusted environment)
# Federation can also determine local fqdn automatically
federation = Federation(
client_id=client_id,
director_node_fqdn=director_node_fqdn,
director_port=director_port,
tls=False
)
shard_registry = federation.get_shard_registry()
shard_registry
# First, request a dummy_shard_desc that holds information about the federated dataset
dummy_shard_desc = federation.get_dummy_shard_descriptor(size=10)
dummy_shard_dataset = dummy_shard_desc.get_dataset('train')
sample, target = dummy_shard_dataset[0]
f"Sample shape: {sample.shape}, target shape: {target.shape}"
```
## Describing the FL experiment
```
from openfl.interface.interactive_api.experiment import TaskInterface, DataInterface, ModelInterface, FLExperiment
```
### Register model
```
from layers import create_model, optimizer
framework_adapter = 'openfl.plugins.frameworks_adapters.keras_adapter.FrameworkAdapterPlugin'
model = create_model()
MI = ModelInterface(model=model, optimizer=optimizer, framework_plugin=framework_adapter)
```
### Register dataset
```
import numpy as np
from tensorflow.keras.utils import Sequence
class DataGenerator(Sequence):

    def __init__(self, shard_descriptor, batch_size):
        self.shard_descriptor = shard_descriptor
        self.batch_size = batch_size
        self.indices = np.arange(len(shard_descriptor))
        self.on_epoch_end()

    def __len__(self):
        return len(self.indices) // self.batch_size

    def __getitem__(self, index):
        index = self.indices[index * self.batch_size:(index + 1) * self.batch_size]
        batch = [self.indices[k] for k in index]
        X, y = self.shard_descriptor[batch]
        return X, y

    def on_epoch_end(self):
        np.random.shuffle(self.indices)


class MnistFedDataset(DataInterface):

    def __init__(self, **kwargs):
        super().__init__(**kwargs)

    @property
    def shard_descriptor(self):
        return self._shard_descriptor

    @shard_descriptor.setter
    def shard_descriptor(self, shard_descriptor):
        """
        Describe per-collaborator procedures or sharding.

        This method will be called during a collaborator initialization.
        Local shard_descriptor will be set by Envoy.
        """
        self._shard_descriptor = shard_descriptor
        self.train_set = shard_descriptor.get_dataset('train')
        self.valid_set = shard_descriptor.get_dataset('val')

    def __getitem__(self, index):
        return self.shard_descriptor[index]

    def __len__(self):
        return len(self.shard_descriptor)

    def get_train_loader(self):
        """
        Output of this method will be provided to tasks with optimizer in contract
        """
        if self.kwargs['train_bs']:
            batch_size = self.kwargs['train_bs']
        else:
            batch_size = 32
        return DataGenerator(self.train_set, batch_size=batch_size)

    def get_valid_loader(self):
        """
        Output of this method will be provided to tasks without optimizer in contract
        """
        if self.kwargs['valid_bs']:
            batch_size = self.kwargs['valid_bs']
        else:
            batch_size = 32
        return DataGenerator(self.valid_set, batch_size=batch_size)

    def get_train_data_size(self):
        """
        Information for aggregation
        """
        return len(self.train_set)

    def get_valid_data_size(self):
        """
        Information for aggregation
        """
        return len(self.valid_set)
```
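Before wiring it into the federation, the `DataGenerator` above can be sanity-checked in isolation against a tiny stand-in shard descriptor; a minimal sketch (the `ToyShardDescriptor` below is ours, not part of OpenFL):
```
import numpy as np

class ToyShardDescriptor:
    """Hypothetical stand-in: 100 random 28x28 samples with integer labels."""
    def __init__(self, n=100):
        self.X = np.random.rand(n, 28, 28).astype(np.float32)
        self.y = np.random.randint(0, 10, size=n)

    def __len__(self):
        return len(self.X)

    def __getitem__(self, batch_indices):
        return self.X[batch_indices], self.y[batch_indices]

gen = DataGenerator(ToyShardDescriptor(), batch_size=32)
X, y = gen[0]
print(X.shape, y.shape)  # (32, 28, 28) (32,)
```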
### Create Mnist federated dataset
```
fed_dataset = MnistFedDataset(train_bs=64, valid_bs=512)
```
## Define and register FL tasks
```
TI = TaskInterface()
import time
import tensorflow as tf
from layers import train_acc_metric, val_acc_metric, loss_fn
@TI.register_fl_task(model='model', data_loader='train_dataset',
                     device='device', optimizer='optimizer')
def train(model, train_dataset, optimizer, device, loss_fn=loss_fn, warmup=False):
    start_time = time.time()

    # Iterate over the batches of the dataset.
    for step, (x_batch_train, y_batch_train) in enumerate(train_dataset):
        with tf.GradientTape() as tape:
            logits = model(x_batch_train, training=True)
            loss_value = loss_fn(y_batch_train, logits)
        grads = tape.gradient(loss_value, model.trainable_weights)
        optimizer.apply_gradients(zip(grads, model.trainable_weights))

        # Update training metric.
        train_acc_metric.update_state(y_batch_train, logits)

        # Log every 200 batches.
        if step % 200 == 0:
            print(
                "Training loss (for one batch) at step %d: %.4f"
                % (step, float(loss_value))
            )
            print("Seen so far: %d samples" % ((step + 1) * 64))
        if warmup:
            break

    # Display metrics at the end of each epoch.
    train_acc = train_acc_metric.result()
    print("Training acc over epoch: %.4f" % (float(train_acc),))

    # Reset training metrics at the end of each epoch
    train_acc_metric.reset_states()

    return {'train_acc': train_acc,}


@TI.register_fl_task(model='model', data_loader='val_dataset', device='device')
def validate(model, val_dataset, device):
    # Run a validation loop at the end of each epoch.
    for x_batch_val, y_batch_val in val_dataset:
        val_logits = model(x_batch_val, training=False)
        # Update val metrics
        val_acc_metric.update_state(y_batch_val, val_logits)
    val_acc = val_acc_metric.result()
    val_acc_metric.reset_states()
    print("Validation acc: %.4f" % (float(val_acc),))
    return {'validation_accuracy': val_acc,}
```
## Time to start a federated learning experiment
```
# create an experiment in the federation
experiment_name = 'mnist_experiment'
fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
# The following command zips the workspace and python requirements to be transferred to collaborator nodes
fl_experiment.start(model_provider=MI,
task_keeper=TI,
data_loader=fed_dataset,
rounds_to_train=5,
opt_treatment='CONTINUE_GLOBAL')
fl_experiment.stream_metrics()
```
---
```
from google.colab import drive
drive.mount('/content/drive')
import os
print(os.getcwd())
os.chdir('/content/drive/My Drive/Colab Notebooks/summarization')
print(os.listdir())
import os
import numpy as np
import pandas as pd
import sys
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
import tensorflow as tf
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
import keras
orig = os.getcwd()
print(orig)
#Loading Data and Preparing vocab
from keras.preprocessing.sequence import pad_sequences
from keras.preprocessing.text import Tokenizer
tokenizer = Tokenizer(num_words=sys.maxsize,filters ='', lower=False, oov_token = '<OOV>')
print(dir(tokenizer))
#Initializing tokenizer for Vocabulary
data1 = open('sent').readlines()
data2 = open('summ2').readlines()
tokenizer.fit_on_texts(data1)
tokenizer.fit_on_texts(data2)
print("No. of articles and summ",len(data2),len(data1))
dictionary = tokenizer.word_index
word2idx = {}
idx2word = {}
num_encoder_tokens = len(tokenizer.word_index)+1
num_decoder_tokens = len(tokenizer.word_index)+1
for k, v in dictionary.items():
    word2idx[k] = v
    idx2word[v] = k
#Encoding data to integers
sent = tokenizer.texts_to_sequences(data1)
summ = tokenizer.texts_to_sequences(data2)
#padding sequences
#Finding the maximum sequence length
MAX_INPUT_LENGTH = max(len(i.split()) for i in data1)
print(MAX_INPUT_LENGTH)
MAX_TARGET_LENGTH = max(len(j.split()) for j in data2)
print(MAX_TARGET_LENGTH)
padded_sent = pad_sequences(sent, maxlen = MAX_INPUT_LENGTH,padding = 'post')
padded_summ = pad_sequences(summ, maxlen = MAX_TARGET_LENGTH,padding = 'post')
print(padded_sent.shape,padded_summ.shape,type(padded_sent))
#preparing training data
encoder_input_data = padded_sent.copy()
decoder_input_data = padded_summ.copy()
# print(decoder_input_data[0],decoder_input_data[1])
decoder_target_data = np.roll(decoder_input_data, -1, axis = -1)
decoder_target_data[:,-1] = 0
# encoder_input_data.reshape(-1,1,MAX_INPUT_LENGTH)
# decoder_input_data = decoder_input_data.reshape(-1,1,MAX_TARGET_LENGTH)
decoder_target_data = decoder_target_data.reshape(-1,MAX_TARGET_LENGTH,1)
# encoder_input_data = tf.one_hot(encoder_input_data, len(tokenizer.word_index))
# decoder_input_data = tf.one_hot(decoder_input_data, len(tokenizer.word_index))
# decoder_target_data = tf.one_hot(decoder_target_data, len(tokenizer.word_index))
print(encoder_input_data.shape,decoder_input_data.shape,decoder_target_data.shape)
# print(decoder_input_data[0],decoder_target_data[0])
# Preparing GloVe
EMBEDDING_DIM = 300
embeddings_index = {}
f = open(os.path.join('', 'glove.6B.{}d.txt'.format(EMBEDDING_DIM)))
for line in f:
    values = line.split()
    word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
    embeddings_index[word] = coefs
f.close()
"fishtailed" in embeddings_index
#Embedding matrix
embedding_matrix = np.zeros((len(tokenizer.word_index)+1, EMBEDDING_DIM),dtype='float32')
for word,i in tokenizer.word_index.items():
    embedding_vector = embeddings_index.get(word)
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
print(embedding_matrix.shape)
#Creating the Bidirectional model
from keras.layers import Embedding
from keras.layers import Dense, LSTM, Input, concatenate
from keras.models import Model
batch_size = 32
epochs = 10
HIDDEN_UNITS_ENC = 256
num_samples = 10000
encoder_inputs = Input(shape=(MAX_INPUT_LENGTH,), name='encoder_inputs')
embedding_layer = Embedding(num_encoder_tokens, EMBEDDING_DIM, weights=[embedding_matrix],
input_length=MAX_INPUT_LENGTH, trainable=False, name='embedding_layer')
encoder_rnn = LSTM(units=HIDDEN_UNITS_ENC, return_state=True, dropout=0.5, recurrent_dropout=0.5,name='encoder_lstm')
encoder_output, state_h_f, state_c_f = encoder_rnn(embedding_layer(encoder_inputs))
encoder_rnn2 = LSTM(units=HIDDEN_UNITS_ENC, return_state=True, dropout=0.5, recurrent_dropout=0.5,
go_backwards=True,name='encoder_lstm_backward')
encoder_output, state_h_b, state_c_b = encoder_rnn2(embedding_layer(encoder_inputs))
state_h = concatenate([state_h_f, state_h_b])
state_c = concatenate([state_c_f, state_c_b])
encoder_states = [state_h, state_c]
decoder_inputs = Input(shape=(None,), name='decoder_inputs')
embedding_layer = Embedding(num_decoder_tokens, EMBEDDING_DIM, weights=[embedding_matrix], trainable=False, name='emb_2')
decoder_lstm = LSTM(HIDDEN_UNITS_ENC * 2, return_sequences=True, return_state=True, dropout=0.5,
recurrent_dropout=0.5, name='decoder_lstm')
decoder_outputs, state_h, state_c = decoder_lstm(embedding_layer(decoder_inputs), initial_state=encoder_states)
decoder_dense = Dense(num_decoder_tokens, name='decoder_dense')
decoder_outputs = decoder_dense(decoder_outputs)
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
print(model.summary())
# visualize model structure
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(model, show_shapes=True, show_layer_names=False,
rankdir='TB',dpi=65).create(prog='dot', format='svg'))
model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy', metrics=['acc'])
model.fit([encoder_input_data,decoder_input_data],decoder_target_data,batch_size = batch_size, epochs = epochs,validation_split=0.9)
model.save('s2s.h5')
from keras.models import load_model
model = load_model('s2s.h5')
#inference step
encoder_model = Model(encoder_inputs, encoder_states)
# encoder_model.summary()
decoder_state_input_h = Input(shape = (HIDDEN_UNITS_ENC*2,))
decoder_state_input_c = Input(shape = (HIDDEN_UNITS_ENC*2,))
decoder_states_inputs = [decoder_state_input_h,decoder_state_input_c]
decoder_output, state_h, state_c = decoder_lstm(embedding_layer(decoder_inputs), initial_state = decoder_states_inputs)
decoder_states = [state_h,state_c]
decoder_outputs = decoder_dense(decoder_output)
decoder_model = Model([decoder_inputs] + decoder_states_inputs, [decoder_outputs] + decoder_states)
decoder_model.summary()
# visualize model structure
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
SVG(model_to_dot(decoder_model, show_shapes=True, show_layer_names=False,
rankdir='TB',dpi = 70).create(prog='dot', format='svg'))
#decoding sequences
def decode_sequence(input_seq):
    # Encode the input as state vectors.
    states_value = encoder_model.predict(input_seq)
    # Generate empty target sequence of length 1.
    target_seq = np.zeros((1,1))
    target_seq[0, 0] = tokenizer.word_index["<BOS>"]
    # Sampling loop for a batch of sequences
    # (to simplify, here we assume a batch of size 1).
    stop_condition = False
    decoded_sentence = ''
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict(
            [target_seq] + states_value)
        # Sample a token
        sampled_token_index = np.argmax(output_tokens[0,0])
        sampled_char = idx2word[sampled_token_index]
        # print(sampled_token_index,end=" ")
        decoded_sentence += sampled_char + " "
        # Exit condition: either hit max length
        # or find stop character.
        if (sampled_char == '<EOS>' or
                len(decoded_sentence) > MAX_TARGET_LENGTH):
            stop_condition = True
        # Update the target sequence (of length 1).
        target_seq[0, 0] = sampled_token_index
        # Update states
        states_value = [h, c]
    return decoded_sentence
seq = 1
input_seq = encoder_input_data[seq:seq+1]
decoded_sentence = decode_sequence(input_seq)
print('-')
print('Article:', data1[seq].strip())
print('Actual Summary:', data2[seq][5:-5])
print('Predicted Summary:', decoded_sentence)
```
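A quick illustration of the target construction used above: `decoder_target_data` is just `decoder_input_data` shifted one step to the left (teacher forcing), with the freed last slot padded with 0. A toy example:
```
import numpy as np

toy_decoder_input = np.array([[7, 12, 5, 0, 0]])      # e.g. "<BOS> w1 w2 <pad> <pad>"
toy_target = np.roll(toy_decoder_input, -1, axis=-1)  # shift left by one position
toy_target[:, -1] = 0                                 # the last slot becomes padding
print(toy_target)  # [[12  5  0  0  0]] -> the model predicts the next token at each step
```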
---
<a href="https://colab.research.google.com/github/Tessellate-Imaging/monk_v1/blob/master/study_roadmaps/2_transfer_learning_roadmap/6_freeze_base_network/1.1)%20Understand%20the%20effect%20of%20freezing%20base%20model%20in%20transfer%20learning%20-%201%20-%20mxnet.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Goals
### Understand the role of freezing models in transfer learning
### Why freeze/unfreeze base models in transfer learning
### Use comparison feature to appropriately set this parameter on custom dataset
### You will be using lego bricks dataset to train the classifiers
# What is freezing base network
- To recap, you have two parts in your network
    - One that already existed, the pretrained one: the base network
    - The new sub-network or single layer you added
- The hyper-parameter we look at here: freeze base network
- Freezing the base network makes the base network untrainable
- The base network then acts as a feature extractor and only the new part is trained
- If you do not freeze the base network, the entire network is trained (see the sketch below)
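To make the idea concrete, here is a minimal sketch in plain mxnet-gluon (the backend used in this notebook) of what freezing amounts to. This only illustrates the concept, it is not what Monk does internally, and `num_classes` is a placeholder value:
```
# Conceptual sketch with plain mxnet-gluon (not the Monk API)
from mxnet import gluon, init
from mxnet.gluon.model_zoo import vision

num_classes = 16  # placeholder; set to the number of classes in your dataset

net = vision.densenet121(pretrained=True)  # base network with pretrained weights
net.output = gluon.nn.Dense(num_classes)   # new, randomly initialized head
net.output.initialize(init.Xavier())

# Freeze the base: no gradients are computed for the pretrained features,
# so they act purely as a fixed feature extractor
net.features.collect_params().setattr('grad_req', 'null')

# Only the new head's parameters are handed to the trainer
trainer = gluon.Trainer(net.output.collect_params(), 'sgd', {'learning_rate': 0.01})
```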
# Table of Contents
## [Install](#0)
## [Freeze Base network in densenet121 and train a classifier](#1)
## [Unfreeze base network in densenet121 and train another classifier](#2)
## [Compare both experiments](#3)
<a id='0'></a>
# Install Monk
## Using pip (Recommended)
- colab (gpu)
    - All backends: `pip install -U monk-colab`
- kaggle (gpu)
    - All backends: `pip install -U monk-kaggle`
- cuda 10.2
    - All backends: `pip install -U monk-cuda102`
    - Gluon backend: `pip install -U monk-gluon-cuda102`
    - Pytorch backend: `pip install -U monk-pytorch-cuda102`
    - Keras backend: `pip install -U monk-keras-cuda102`
- cuda 10.1
    - All backends: `pip install -U monk-cuda101`
    - Gluon backend: `pip install -U monk-gluon-cuda101`
    - Pytorch backend: `pip install -U monk-pytorch-cuda101`
    - Keras backend: `pip install -U monk-keras-cuda101`
- cuda 10.0
    - All backends: `pip install -U monk-cuda100`
    - Gluon backend: `pip install -U monk-gluon-cuda100`
    - Pytorch backend: `pip install -U monk-pytorch-cuda100`
    - Keras backend: `pip install -U monk-keras-cuda100`
- cuda 9.2
    - All backends: `pip install -U monk-cuda92`
    - Gluon backend: `pip install -U monk-gluon-cuda92`
    - Pytorch backend: `pip install -U monk-pytorch-cuda92`
    - Keras backend: `pip install -U monk-keras-cuda92`
- cuda 9.0
    - All backends: `pip install -U monk-cuda90`
    - Gluon backend: `pip install -U monk-gluon-cuda90`
    - Pytorch backend: `pip install -U monk-pytorch-cuda90`
    - Keras backend: `pip install -U monk-keras-cuda90`
- cpu
    - All backends: `pip install -U monk-cpu`
    - Gluon backend: `pip install -U monk-gluon-cpu`
    - Pytorch backend: `pip install -U monk-pytorch-cpu`
    - Keras backend: `pip install -U monk-keras-cpu`
## Install Monk Manually (Not recommended)
### Step 1: Clone the library
- git clone https://github.com/Tessellate-Imaging/monk_v1.git
### Step 2: Install requirements
- Linux
    - Cuda 9.0
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu90.txt`
    - Cuda 9.2
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu92.txt`
    - Cuda 10.0
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu100.txt`
    - Cuda 10.1
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu101.txt`
    - Cuda 10.2
        - `cd monk_v1/installation/Linux && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Linux && pip install -r requirements_cpu.txt`
- Windows
    - Cuda 9.0 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu90.txt`
    - Cuda 9.2 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu92.txt`
    - Cuda 10.0 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu100.txt`
    - Cuda 10.1 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu101.txt`
    - Cuda 10.2 (Experimental support)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cu102.txt`
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Windows && pip install -r requirements_cpu.txt`
- Mac
    - CPU (Non gpu system)
        - `cd monk_v1/installation/Mac && pip install -r requirements_cpu.txt`
- Misc
    - Colab (GPU)
        - `cd monk_v1/installation/Misc && pip install -r requirements_colab.txt`
    - Kaggle (GPU)
        - `cd monk_v1/installation/Misc && pip install -r requirements_kaggle.txt`
### Step 3: Add to system path (Required for every terminal or kernel run)
- `import sys`
- `sys.path.append("monk_v1/");`
## Dataset - LEGO Classification
- https://www.kaggle.com/joosthazelzet/lego-brick-images/
```
! wget --load-cookies /tmp/cookies.txt "https://docs.google.com/uc?export=download&confirm=$(wget --save-cookies /tmp/cookies.txt --keep-session-cookies --no-check-certificate 'https://docs.google.com/uc?export=download&id=1RB_f2Kv3vkBXcQnCSVqCvaZFBHizQacl' -O- | sed -rn 's/.*confirm=([0-9A-Za-z_]+).*/\1\n/p')&id=1RB_f2Kv3vkBXcQnCSVqCvaZFBHizQacl" -O LEGO.zip && rm -rf /tmp/cookies.txt
! unzip -qq LEGO.zip
import os

if os.path.isfile("LEGO/train/.DS_Store"):
    os.system("rm LEGO/train/.DS_Store");
```
# Imports
```
#Using mxnet-gluon backend
# When installed using pip
from monk.gluon_prototype import prototype
# When installed manually (Uncomment the following)
#import os
#import sys
#sys.path.append("monk_v1/");
#sys.path.append("monk_v1/monk/");
#from monk.gluon_prototype import prototype
```
<a id='1'></a>
# Freeze Base network in densenet121 and train a classifier
## Creating and managing experiments
- Provide project name
- Provide experiment name
- For a specific data create a single project
- Inside each project multiple experiments can be created
- Every experiment can have different hyper-parameters attached to it
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Freeze_Base_Network");
```
### This creates files and directories as per the following structure
workspace
|
|--------Project
|
|
|-----Freeze_Base_Network
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
## Set dataset and select the model
## Quick mode training
- Using Default Function
- dataset_path
- model_name
- freeze_base_network
- num_epochs
## Sample Dataset folder structure
parent_directory
|
|
|------cats
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------dogs
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
## Modifiable params
- dataset_path: path to the data
- model_name: which pretrained model to use
- freeze_base_network: whether to keep the pretrained base network frozen (True) or retrain it (False)
- num_epochs: number of epochs to train for
```
gtf.Default(dataset_path="LEGO/train",
model_name="densenet121",
freeze_base_network=True, # Set this param as true
num_epochs=5);
#Read the summary generated once you run this cell.
```
## From the summary above
- Model Params
Model name: densenet121
Use Gpu: True
Use pretrained: True
Freeze base network: True
## Another thing to notice from summary
Model Details
Loading pretrained model
Model Loaded on device
Model name: densenet121
Num of potentially trainable layers: 242
Num of actual trainable layers: 1
### There are a total of 242 layers
### Since we have frozen the base network, only 1 layer is trainable: the final layer
## Train the classifier
```
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
## Validating the trained classifier
## Load the experiment in validation mode
- Set flag eval_infer as True
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Freeze_Base_Network", eval_infer=True);
```
## Load the validation dataset
```
gtf.Dataset_Params(dataset_path="LEGO/valid");
gtf.Dataset();
```
## Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
### Accuracy achieved - 86.063
(You may get a different result)
<a id='2'></a>
# Unfreeze Base network in densenet121 and train a classifier
## Creating and managing experiments
- Provide project name
- Provide experiment name
- For a specific data create a single project
- Inside each project multiple experiments can be created
- Every experiment can have different hyper-parameters attached to it
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Unfreeze_Base_Network");
```
### This creates files and directories as per the following structure
workspace
|
|--------Project
|
|
|-----Freeze_Base_Network (Previously created)
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
|
|
|-----Unfreeze_Base_Network (Created Now)
|
|-----experiment-state.json
|
|-----output
|
|------logs (All training logs and graphs saved here)
|
|------models (all trained models saved here)
## Set dataset and select the model
## Quick mode training
- Using Default Function
- dataset_path
- model_name
- freeze_base_network
- num_epochs
## Sample Dataset folder structure
parent_directory
|
|
|------cats
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
|------dogs
|
|------img1.jpg
|------img2.jpg
|------.... (and so on)
## Modifiable params
- dataset_path: path to the data
- model_name: which pretrained model to use
- freeze_base_network: whether to keep the pretrained base network frozen (True) or retrain it (False)
- num_epochs: number of epochs to train for
```
gtf.Default(dataset_path="LEGO/train",
model_name="densenet121",
freeze_base_network=False, # Set this param as false
num_epochs=5);
#Read the summary generated once you run this cell.
```
## From the summary above
- Model Params
Model name: densenet121
Use Gpu: True
Use pretrained: True
Freeze base network: False
## Another thing to notice from summary
Model Details
Loading pretrained model
Model Loaded on device
Model name: densenet121
Num of potentially trainable layers: 242
Num of actual trainable layers: 242
### There are a total of 242 layers
### Since we have unfrozen the base network, all 242 layers are trainable, including the final layer
## Train the classifier
```
#Start Training
gtf.Train();
#Read the training summary generated once you run the cell and training is completed
```
## Validating the trained classifier
## Load the experiment in validation mode
- Set flag eval_infer as True
```
gtf = prototype(verbose=1);
gtf.Prototype("Project", "Unfreeze_Base_Network", eval_infer=True);
```
## Load the validation dataset
```
gtf.Dataset_Params(dataset_path="LEGO/valid");
gtf.Dataset();
```
## Run validation
```
accuracy, class_based_accuracy = gtf.Evaluate();
```
### Accuracy achieved - 99.31
(You may get a different result)
<a id='3'></a>
# Compare both experiments
```
# Invoke the comparison class
from monk.compare_prototype import compare
```
### Creating and managing comparison experiments
- Provide project name
```
# Create a project
gtf = compare(verbose=1);
gtf.Comparison("Compare-effect-of-freezing");
```
### This creates files and directories as per the following structure
workspace
|
|--------comparison
|
|
|-----Compare-effect-of-freezing
|
|------stats_best_val_acc.png
|------stats_max_gpu_usage.png
|------stats_training_time.png
|------train_accuracy.png
|------train_loss.png
|------val_accuracy.png
|------val_loss.png
|
|-----comparison.csv (Contains necessary details of all experiments)
### Add the experiments
- First argument - Project name
- Second argument - Experiment name
```
gtf.Add_Experiment("Project", "Freeze_Base_Network");
gtf.Add_Experiment("Project", "Unfreeze_Base_Network");
```
### Run Analysis
```
gtf.Generate_Statistics();
```
## Visualize and study comparison metrics
### Training Accuracy Curves
```
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/train_accuracy.png")
```
### Training Loss Curves
```
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/train_loss.png")
```
### Validation Accuracy Curves
```
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/val_accuracy.png")
```
### Validation loss curves
```
from IPython.display import Image
Image(filename="workspace/comparison/Compare-effect-of-freezing/val_loss.png")
```
## Accuracies achieved on validation dataset
### With freezing base network - 86.063
### Without freezing base network - 99.31
#### For this classifier, keeping the base network trainable seems to be a good option. Note, however, that on other datasets it may result in overfitting the training data
(You may get a different result)
# Goals Completed
### Understand the role of freezing models in transfer learning
### Why freeze/unfreeze base models in transfer learning
### Use comparison feature to appropriately set this parameter on custom dataset
---
```
from nornir import InitNornir
nr = InitNornir(config_file="config.yaml")
```
# Executing tasks
Now that you know how to initialize nornir and work with the inventory let's see how we can leverage it to run tasks on groups of hosts.
Nornir ships a bunch of tasks you can use directly without having to code them yourself. You can check them out [here](../../plugins/tasks/index.rst).
Let's start by executing the `ls -la /tmp` command on all the devices in `cmh` of type `host`:
```
from nornir.plugins.tasks import commands
from nornir.plugins.functions.text import print_result
cmh_hosts = nr.filter(site="cmh", role="host")
result = cmh_hosts.run(task=commands.remote_command,
command="ls -la /tmp")
print_result(result, vars=["stdout"])
```
So what have we done here? First we imported the `commands` and `text` modules. Then we narrowed nornir down to the hosts we want to operate on, and finally we ran two tasks:
1. The task `commands.remote_command` which runs the specified `command` in the remote device.
2. The function `print_result` which just prints on screen the result of an executed task or group of tasks.
Let's try with another example:
```
from nornir.plugins.tasks import networking
cmh_spines = nr.filter(site="bma", role="spine")
result = cmh_spines.run(task=networking.napalm_get,
getters=["facts"])
print_result(result)
```
Pretty much the same pattern, just different task on different devices.
## What is a task
Let's take a look at what a task is. In its simplest form a task is a function that takes at least a [Task](../../ref/api/task.rst#nornir.core.task.Task) object as argument. For instance:
```
def hi(task):
    print(f"hi! My name is {task.host.name} and I live in {task.host['site']}")
nr.run(task=hi, num_workers=1)
```
The task object has access to `nornir`, `host` and `dry_run` attributes.
You can call other tasks from within a task:
```
def available_resources(task):
    task.run(task=commands.remote_command,
             name="Available disk",
             command="df -h")
    task.run(task=commands.remote_command,
             name="Available memory",
             command="free -m")
result = cmh_hosts.run(task=available_resources)
print_result(result, vars=["stdout"])
```
You probably noticed in the previous example that you can name your tasks.
Your task can also accept any extra arguments you may need:
```
def count(task, to):
    print(f"{task.host.name}: {list(range(0, to))}")
cmh_hosts.run(task=count,
num_workers=1,
to=10)
cmh_hosts.run(task=count,
num_workers=1,
to=20)
```
## Tasks vs Functions
You probably noticed we introduced the concept of a `function` when we talked about `print_result`. The difference between tasks and functions is that tasks are meant to be run per host while functions are helper functions meant to be run globally.
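One last note on returning data: instead of printing, a task can return a `Result` object (from `nornir.core.task`), which is what `print_result` renders per host. A minimal sketch of the `hi` task rewritten this way:
```
from nornir.core.task import Result

def greet(task):
    # Hand the message back as a Result instead of printing it directly
    return Result(host=task.host,
                  result=f"hi! My name is {task.host.name} and I live in {task.host['site']}")

print_result(cmh_hosts.run(task=greet, num_workers=1))
```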
---
# 2040: reaching 100% electric cars
*Data study - Project 8 - @Nalron (August 2020)*\
*Data processing in Jupyter Notebook (Anaconda distribution)*\
*Study carried out in Python*
Dashboard visualizations: [Tableau Public](https://public.tableau.com/profile/nalron#!/vizhome/ElectricCarsFrance2040/Vuedensemble)
---
# Recap of the missions
### [Mission 1: Positioning of the electric car in France](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook01.ipynb)
Evolution of the electric vehicle fleet over 2 years.<br>
Identification and classification of local disparities in electric cars.<br>
Range and average consumption of an electric car.
### [Mission 2: Need for charging infrastructure (IRVE) deployment](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook02.ipynb)
Evolution of the number of publicly accessible charging points.<br>
Analysis of the breakdown by charging station, plug type, and infrastructure-owner category.<br>
Use of ratios to size an optimal charging network.<br>
Forecast of the number of charging points by 2025.<br>
### [Mission 3: Load on the power grid](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook03.ipynb)
Analysis of electricity consumption in France and of the generation mix.<br>
Profiling a peak in charging-station usage.<br>
Grid load curve to meet the new consumption patterns.
---
```
#Import the main Python libraries
import pandas as pd
import plotly.figure_factory as ff
import requests
import seaborn as sns
%pylab inline
```
## Mission 2: Need for charging infrastructure (IRVE) deployment<a id="borne">
__`Processing the data on charging points by type`__
This dataset gives the total number of charging points in mainland France.
A charging point is a socket that an electric vehicle can plug into. A charging station may include one or several charging points. The data segments charging points into three types:
- "Publicly accessible" charging points are those available in shops (supermarkets, car dealerships, etc.), car parks, public sites, or on-street stations.
- "Private individual" charging points are private points located in collective housing (apartment buildings, co-ownerships, etc.) or individual housing.
- "Company" charging points are private points located at companies and reserved for company activity or for charging employees' electric vehicles.
The dataset was built by Enedis from its own data combined with external data from the companies Girève and AAA Data. The figures for "private individual" and "company" charging points are a reconstruction built by Enedis on the basis of assumptions. These assumptions rely on the evolution of the electric vehicle market.
```
#Load the dataset "nombre-de-points-de-charge-par-typologie.csv"
irve_type = pd.read_csv('p8_data/nombre-de-points-de-charge-par-typologie.csv', sep=';')
display(irve_type.shape)
display(irve_type.head())

#Inspect the values of the 'Nombre' variable
irve_type['Nombre'].unique()

#plt.figure(figsize=(12,3))
irve_type.boxplot(column= 'Nombre', by='Année')
plt.show()
```
There do not seem to be any outliers in the values of the 'Nombre' variable. As a reminder, these are the electric charging points counted per year and quarter.
```
#Reshape the data more logically by year and quarter
irve_type = irve_type.pivot_table(index=['Année', 'Trimestre'],
                                  columns='Typologie',
                                  values='Nombre').reset_index()
irve_type.columns.name = None
irve_type

#Compute the quarter-over-quarter changes in %
for i, row in irve_type.iterrows():
    if i+1 < len(irve_type):
        number_public = ((irve_type.loc[i+1, 'Accessible au public']
                          - irve_type.loc[i, 'Accessible au public']) / (irve_type.loc[i, 'Accessible au public'])*100)
        irve_type.loc[i+1, '%Public'] = round(number_public, 2)
    if i+1 < len(irve_type):
        number_particulier = ((irve_type.loc[i+1, 'Particulier']
                               - irve_type.loc[i, 'Particulier']) / (irve_type.loc[i, 'Particulier'])*100)
        irve_type.loc[i+1, '%Particulier'] = round(number_particulier, 2)
    if i+1 < len(irve_type):
        number_societe = ((irve_type.loc[i+1, 'Société']
                           - irve_type.loc[i, 'Société']) / (irve_type.loc[i, 'Société'])*100)
        irve_type.loc[i+1, '%Société'] = round(number_societe, 2)
    else:
        irve_type.fillna(0, inplace=True)

#Convert the quarters into dates to obtain a time series
irve_type.replace({'T1' : '31-03',
                   'T2' : '30-06',
                   'T3' : '30-09',
                   'T4' : '31-12'},
                  inplace=True)
irve_type['Time'] = irve_type['Année'].astype(str)+ str("-")+irve_type['Trimestre']
irve_type['Time'] = pd.to_datetime(irve_type['Time'], format="%Y-%d-%m")

#Display the enriched dataframe
irve_type

#Display the data types of the variables
irve_type.dtypes

#Save
irve_type.to_csv('p8_datatable/irve_type.csv')

#Check for missing values in the dataset
irve_type.isna().any()

#Check for duplicates in the dataset
irve_type.duplicated().any()

#Years covered in this dataset
list(irve_type['Année'].unique())
```
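As a side note, the quarter-over-quarter growth computed with the loop above can also be obtained more compactly with pandas `pct_change`; a small sketch assuming the pivoted columns built above:
```
#Equivalent, more compact computation of the quarterly growth rates
growth = (irve_type[['Accessible au public', 'Particulier', 'Société']]
          .pct_change()   # (current - previous) / previous
          .mul(100)
          .round(2)
          .fillna(0))     # the first quarter has no previous value
irve_type[['%Public', '%Particulier', '%Société']] = growth
```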
__`Processing the data on charging stations for electric vehicles (IRVE)`__
This file is a consolidated version of the following sources: Tesla stations, charging stations of the Rennes metropolitan area, charging stations at Renault dealerships, Autolib' stations, Plus de Bornes (an operator in Provence), Compagnie Nationale du Rhône, E.Leclerc stores
Data added in December 2014: Vincipark/Sodetrel, Grand Lyon, Morbihan Energies
Data added in October 2015: AUCHAN stores, NISSAN dealerships, ALTERBASE network, SyDEV, Freshmile, EFFIA
Data added in May 2016: SDE18, SDE24, SDE28, SDE32, MOVeasy, Seine Aval, SIEML, SDESM, Vienne
```
#Chargement du jeu de données "fichier-consolide-des-bornes-de-recharge-pour-vehicules-electriques-irve"
irve = pd.read_csv('p8_data/fichier-consolide-des-bornes-de-recharge-pour-vehicules-electriques-irve.csv',
sep=';')
display(irve.shape)
display(irve.head())
```
The first check is to look for possible duplicates. Note that the business context requires some rigour in interpreting certain variables: the terms station (pool), charging station, and charging point are regularly conflated. "id_station" is therefore not the best subset for identifying duplicates, since a charging station can have several charging points and this identifier does not account for the charging point. "id_pdc", on the other hand, provides unique identifiers that can be used as the subset.
```
#Check for duplicates using the 'id_pdc' variable
irve.duplicated(subset='id_pdc').sum()
```
Note that the file made available on data.gouv.fr reports several consolidations over the years 2014 to 2016 and 2018. Beware: some operators such as Tesla, Nissan, Auchan, etc. are no longer present in the June 2020 version, and have been missing for several months. This is not because these charging stations were removed, but because of a standardization effort following the usage charter "Files intended for public and private developers and operators of charging infrastructure for electric vehicles", available on [data.gouv.fr](https://www.data.gouv.fr/fr/datasets/fichiers-pour-les-infrastructures-de-recharge-de-vehicules-electriques/)
<em>Decree 2017-26 of 12 January 2017 sets the requirements for the configuration of charging points to be published in a new file, now in CSV format. The infrastructure owner, or the designated operator where applicable, takes the appropriate measures to keep this data permanently up to date and publicly available on data.gouv.fr</em>
<u>For this study, the operators (or main operators) identified as missing will be re-integrated into the sample.</u>
```
#How many charging station pools ("stations de recharge") as of June 2020?
irve.id_station.nunique()

#How many charging stations ("bornes de recharge") as of June 2020?
irve.id_pdc.nunique()
```
**How many charging points (EVSE) as of June 2020?**
According to the AFIREV definition, a charging point is the individual spot where a vehicle can park while charging, i.e. one socket of a charging station. The `irve` dataset does not allow counting them directly: although there is an 'nbre_pdc' variable, it describes the charging station rather than its number of sockets. The data therefore needs to be enriched with an estimate of the number of sockets for each charging station, which can be computed from the 'type_prise' variable. <u>This enrichment will be done later, after integrating the missing operators.</u>
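One possible heuristic for such an estimate, shown here purely for illustration (our assumption, not necessarily the enrichment applied later in the study), is to count the connector types listed in 'type_prise':
```
#Hypothetical heuristic: one socket per connector type listed in 'type_prise'
irve['nb_prises_estime'] = (irve['type_prise']
                            .fillna('')
                            .astype(str)
                            .apply(lambda s: len([p for p in s.split('+') if p.strip()]) or 1))
```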
### Handling the missing operators and infrastructure owners
```
#Load the dataset for the "Mobive" network
#https://www.data.gouv.fr/fr/datasets/infrastructures-de-recharge-pour-vehicules-electriques-mobive-1/
mobive = pd.read_csv('p8_data/irve-mobive-20200331.csv', sep=';', decimal=",")
display(mobive.shape)
display(mobive.head())

#Check that the variables match before concatenation
display(irve.columns)
display(mobive.columns)
#Load the dataset for the LECLERC retail chain
#https://www.data.gouv.fr/fr/datasets/localisation-des-bornes-de-recharge-
#pour-vehicules-electriques-dans-les-magasins-e-leclerc/
leclerc = pd.read_csv('p8_data/leclerc.csv', sep=';', decimal=",")
display(leclerc.shape)
display(leclerc.head())

#Check that the variables match before concatenation
display(irve.columns)
display(leclerc.columns)

#Differences to resolve before concatenating the data
leclerc.rename(columns={
'nom_station': 'n_amenageur',
'nom_porteur': 'n_enseigne',
'ID_station': 'id_station',
'adresse_station': 'ad_station',
'longitude_WSG84': 'Xlongitude',
'latitude_WSG84': 'Ylatitude',
'type_connecteur': 'type_prise',
'type_charge': 'puiss_max'
}, inplace=True)
#Replace the values of the 'puiss_max' variable
leclerc['puiss_max'] = 22

#Load the dataset for AUCHAN supermarket charging stations
#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-auchan/
auchan = pd.read_csv('p8_data/auchan.csv', sep=';')
display(auchan.shape)
display(auchan.head())
#Merge the variables describing the station address
auchan['ad_station'] = auchan['ADRESSE'] + str(' ') + auchan['CP'].astype(str) + str(' ') + auchan['Unnamed: 5']

#Rename variables before concatenating the data
auchan.rename(columns={
'LIEU': 'n_amenageur',
'Latitude': 'Ylatitude',
'Longitude': 'Xlongitude'
}, inplace=True)
auchan.drop(columns=['N°', 'ADRESSE', 'CP', 'LIEN CHARGEMAP', 'Dept',
'Unnamed: 5', 'Unnamed: 9', 'Unnamed: 10'], inplace=True)
#Add a 'puiss_max' variable for the maximum power
#available in more than 90% of AUCHAN shopping centres
auchan['puiss_max'] = 50

#Load the dataset for EFFIA car park charging stations
#https://www.data.gouv.fr/fr/datasets/bornes-de-recharge-pour-vehicules-electriques-parking-effia/
effia = pd.read_csv('p8_data/effia.csv', sep=';')
display(effia.shape)
display(effia.head())
#Rename variables before concatenating the data
effia.rename(columns={
'nom_station': 'n_amenageur',
'adresse_station': 'ad_station',
'latitude_WSG84': 'Ylatitude',
'longitude_WSG84': 'Xlongitude',
'type_connecteur': 'type_prise',
'type_charge': 'puiss_max',
'nom_porteur': 'n_enseigne'
}, inplace=True)
effia.drop(index=0, inplace=True)
effia.drop(columns=['ID_station', 'observations', 'Unnamed: 11', 'Unnamed: 12', 'Unnamed: 13','Unnamed: 14'],
inplace=True)
#Change the value of the 'puiss_max' variable
effia['puiss_max'] = 3.7

#Load the dataset for VINCI car park charging stations
vinci = pd.read_csv('p8_data/vincipark.csv', sep=';')
display(vinci.shape)
display(vinci.head())
#Rename variables before concatenating the data
vinci.rename(columns={
'nom_station': 'n_station',
'adresse_station': 'ad_station',
'latitude': 'Ylatitude',
'longitude': 'Xlongitude',
'nom_porteur': 'n_enseigne',
'type_connecteur': 'type_prise',
}, inplace=True)
vinci.drop(columns=['ID_station', 'type_charge'], inplace=True)
#Load the TESLA destination-charging dataset
#https://www.data.gouv.fr/fr/datasets/recharge-a-destination-tesla/
tesla = pd.read_csv('p8_data/irve-tesla-destination-charging-20181130.csv', sep=';')
display(tesla.shape)
display(tesla.head())
#Change the value of the 'type_prise' variable
tesla['type_prise'] = "Tesla Type 2"

#Replace the value 'A Cheda' with 'Tesla'
tesla['n_amenageur'].replace('A Cheda', 'Tesla', inplace=True)

#Rename variables before concatenating the data
tesla.rename(columns={'Xlatitude': 'Ylatitude'}, inplace=True)
tesla.drop(columns=['ID_station', 'ID_pdc'], inplace=True)
#Load the TESLA Supercharger dataset
#https://www.data.gouv.fr/fr/datasets/stations-supercharger-tesla/
tesla_supercharger = pd.read_csv('p8_data/irve-tesla-supercharger-20181130.csv', sep=';')
display(tesla_supercharger.shape)
display(tesla_supercharger.head())
#Rename a variable before concatenating the data
tesla_supercharger.rename(columns={'accessibilite' : 'accessibilité'}, inplace=True)

#Change the value of the 'type_prise' variable
tesla_supercharger['type_prise'] = "Tesla Supercharger"
#Load the dataset for NISSAN dealership charging stations
#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-concessions-nissan/
nissan = pd.read_csv('p8_data/nissan.csv', sep=';')
display(nissan.shape)
display(nissan.head())
#Drop a NaN observation
nissan.drop(index=58, inplace= True)

#Build the variable holding the station address
nissan['ad_station'] = nissan['ADRESSE'] + str(' ') + nissan['CP'].astype(str) + str(' ') + nissan['VILLE']

#Rename variables before concatenating the data
nissan.rename(columns={
'LIEU': 'n_enseigne',
'Type': 'type_prise',
'Latitude': 'Ylatitude',
'Longitude': 'Xlongitude'
}, inplace=True)
nissan.drop(columns=['ADRESSE', 'CP', 'Dept', 'VILLE', 'Code concession', 'Unnamed: 8', 'Téléphone',
'Directeur Concession Nissan', 'Unnamed: 13', 'Unnamed: 14', 'LIEN CHARGEMAP'], inplace=True)
#Load the dataset for RENAULT dealership charging stations
#https://www.data.gouv.fr/fr/datasets/reseau-bornes-de-recharge-rapide-concessions-nissan/
renault = pd.read_csv('p8_data/renault.csv', sep=';', decimal=",")
display(renault.shape)
display(renault.head())
#Rename variables before concatenating the data
renault.rename(columns={
'nom_station': 'n_station',
'longitude_WSG84': 'Ylatitude',
'latitude_WSG84': 'Xlongitude',
'nom_porteur': 'n_enseigne',
'type_connecteur': 'type_prise',
'type_charge':'puiss_max',
'observation': 'observations'
}, inplace=True)
renault.drop(columns=['ID_station', 'adresse_station', 'nbre_pdc'], inplace=True)
#Add a 'puiss_max' variable
renault['puiss_max'] = 22

#Concatenate the datasets
irvePlus = pd.concat([irve, mobive, leclerc, auchan, effia, vinci, tesla, tesla_supercharger, nissan, renault],
sort=False).reset_index(drop=True)
#Display the first 5 observations
irvePlus.head()

#Display the number of observations
#Here one observation represents a charging station
len(irvePlus)

#Check for missing values
irvePlus.isna().sum()
```
The previous manipulations necessarily leave some values missing, as shown above. In the context of this study we do not need 100% of the data for every observation; let's see how to handle these NaN.
#### Handling NaN in the n_amenageur, n_operateur and n_enseigne variables
```
#Handle NaN infrastructure owners (n_amenageur) based on the brand (n_enseigne)
irvePlus[irvePlus['n_amenageur'].isna()]['n_enseigne'].unique()

#Loop filling in the missing infrastructure owners according to the brand
for i, row in irvePlus.iterrows():
    if row['n_enseigne'] == 'SIPLEC':
        irvePlus.loc[i, 'n_amenageur'] = 'LECLERC'
    elif row['n_enseigne'] == 'EFFIA':
        irvePlus.loc[i, 'n_amenageur'] = 'EFFIA'
    elif row['n_enseigne'] == 'Sodetrel':
        irvePlus.loc[i, 'n_amenageur'] = 'IZIVIA'
    elif row['n_enseigne'] == 'Concession NISSAN' or row['n_enseigne'] == 'NISSAN WEST EUROPE TRAINING' or row['n_enseigne'] == 'Siège NISSAN France':
        irvePlus.loc[i, 'n_amenageur'] = 'NISSAN'
    elif row['n_enseigne'] == 'Renault':
        irvePlus.loc[i, 'n_amenageur'] = 'RENAULT'

#Handle NaN operators (n_operateur) based on the infrastructure owner
irvePlus[irvePlus['n_operateur'].isna()]['n_amenageur'].unique()

#Loop filling in the missing operators according to the infrastructure owner
for i, row in irvePlus.iterrows():
    if row['n_amenageur'] == 'LECLERC':
        irvePlus.loc[i, 'n_operateur'] = 'LECLERC'
    elif row['n_amenageur'] == 'AUCHAN ':
        irvePlus.loc[i, 'n_operateur'] = 'AUCHAN'
    elif row['n_amenageur'] == 'EFFIA':
        irvePlus.loc[i, 'n_operateur'] = 'EFFIA'
    elif row['n_amenageur'] == 'IZIVIA':
        irvePlus.loc[i, 'n_operateur'] = 'IZIVIA'
    elif row['n_amenageur'] == 'NISSAN':
        irvePlus.loc[i, 'n_operateur'] = 'NISSAN'
    elif row['n_amenageur'] == 'RENAULT':
        irvePlus.loc[i, 'n_operateur'] = 'RENAULT'

#Handle NaN brands (n_enseigne) based on the operator
irvePlus[irvePlus['n_enseigne'].isna()]['n_operateur'].unique()

#Loop filling in the missing brands according to the operator
for i, row in irvePlus.iterrows():
    if row['n_operateur'] == 'CITEOS/FRESHMILE':
        irvePlus.loc[i, 'n_enseigne'] = 'Scame'
    elif row['n_operateur'] == 'New motion':
        irvePlus.loc[i, 'n_enseigne'] = 'New motion'
    elif row['n_operateur'] == 'MOUVELECVAR':
        irvePlus.loc[i, 'n_enseigne'] = 'MOUVELECVAR'
    elif row['n_operateur'] == 'SAINT-LOUIS':
        irvePlus.loc[i, 'n_enseigne'] = 'SAINT-LOUIS'
    elif row['n_operateur'] == 'SPIE':
        irvePlus.loc[i, 'n_enseigne'] = 'SDEY'
    elif row['n_operateur'] == 'AUCHAN':
        irvePlus.loc[i, 'n_enseigne'] = 'AUCHAN'
```
#### Handling NaN values in the Xlongitude and Ylatitude variables
```
#Traitement des deux NaN 'Xlongitude' et 'Ylatitude'
irvePlus[irvePlus['Xlongitude'].isna()]
#Intégration manuelle des 4 valeurs
irvePlus.loc[2188, 'Xlongitude'] = 4.0811882
irvePlus.loc[2189, 'Xlongitude'] = 4.0811882
irvePlus.loc[2188, 'Ylatitude'] = 46.0822754
irvePlus.loc[2189, 'Ylatitude'] = 46.0822754
```
#### Handling NaN values in the 'type_prise' variable
```
#Traitement des valeurs NaN identifiées pour le type de prise
irvePlus[irvePlus['type_prise'].isna()]['n_operateur'].unique()
```
Overall, the Auchan group has equipped its car parks with Type 2 + CHAdeMO charging stations; the sample is therefore completed under this assumption, which is preferable to leaving NaN values.
```
#Boucle permettant le remplacement des valeurs manquantes identifiées ci-dessus
for i, row in irvePlus.iterrows():
if row['n_operateur'] == 'AUCHAN':
irvePlus.loc[i, 'type_prise'] = 'Type 2 + CHAdeMO'
```
#### Handling NaN values in the 'puiss_max' variable
```
#Traitement des valeurs NaN identifiées pour la puissance max.
#Le type de prise permet de pouvoir intervenir sur ces valeurs manquantes
irvePlus[irvePlus['puiss_max'].isna()]['type_prise'].unique()
#Boucle permettant le remplacement des valeurs manquantes des puissances max. identifiées ci-dessus
for i, row in irvePlus.iterrows():
if row['type_prise'] == 'TE-T3':
irvePlus.loc[i, 'puiss_max'] = 22
elif row['type_prise'] == 'TE-T2':
irvePlus.loc[i, 'puiss_max'] = 22
elif row['type_prise'] == 'DC Chademo - 44 kWh':
irvePlus.loc[i, 'puiss_max'] = 44
elif row['type_prise'] == 'DC Chademo - 44 kWh + \nAC Type 3 - 43 kWh':
irvePlus.loc[i, 'puiss_max'] = 44
else:
pass
#Nouvelle situation après ce traitement NaN
irvePlus.isna().sum()
```
Enriching the initial `irve` sample makes the 'id_pdc' variable obsolete. Concatenating the other data sources gives a more complete inventory of the network, but without a common and complete standard across sources. In particular the charging-point ids are no longer complete, so a unique identifier needs to be added for this purpose.
```
#Intégration d'un identifiant unique par Point de charge
irvePlus['id_borne']= np.arange(1, len(irvePlus)+1)
```
It is not necessary to treat every NaN value; in the context of this study the previous treatments seem sufficient. Let's now see how to enrich and clean up what can be, such as the power ratings and the connector types.
#### Standardising the categories / values of the 'puiss_max' variable
```
#Affichage des modalités et valeurs de la variable 'puiss_max'
irvePlus.puiss_max.unique()
```
Hard to exploit as is: each data provider understandably labelled the power ratings according to its own codes or habits, but the whole thing needs to be clarified. So that the study carries a message that is understandable to any audience, let's set up a classification of the power ratings.
```
#Boucle pour éliminer les modalités dont l'unité 'kva' est mentionnée
for x in irvePlus['puiss_max']:
if x == '36kva':
irvePlus['puiss_max'].replace(x, '36', inplace=True)
elif x == '22kva':
irvePlus['puiss_max'].replace(x, '22', inplace=True)
elif x == '48kva':
irvePlus['puiss_max'].replace(x, '48', inplace=True)
elif x == '43-50':
irvePlus['puiss_max'].replace(x, '50', inplace=True)
else:
pass
#Recherche des valeurs '0', '0.0' et 60.000
irvePlus[(irvePlus.puiss_max == '0') | (irvePlus.puiss_max == '0.0') | (irvePlus.puiss_max == '60.000')]
#Remplacement des valeurs '0', '0.0' et '60.000'
irvePlus['puiss_max'].replace('0', 22, inplace=True)
irvePlus['puiss_max'].replace('0.0', 22, inplace=True)
irvePlus['puiss_max'].replace('60.000', 22, inplace=True)
#Changement du type de donnée variable 'puiss_max' afin de faciliter son traitement
irvePlus['puiss_max'] = irvePlus.puiss_max.astype(float)
#Classification des puissances via une boucle sous condition
class_puiss = []
for value in irvePlus.puiss_max:
if value <= 3.7 :
class_puiss.append('Recharge normale 3,7 kVA')
elif value > 3.7 and value <=20 :
class_puiss.append('Recharge accélérée de 3,7 à 20 kVA')
elif value == 22 :
class_puiss.append('Recharge accélérée 22 kVA')
elif value >= 43 and value <= 50 :
class_puiss.append('Recharge rapide 43 à 50 kVA')
else :
class_puiss.append('Recharge haute puissance 100 à 350 kVA')
#Intégration d'une nouvelle variable 'class_puiss'
irvePlus['class_puiss'] = class_puiss
```
#### Standardising the categories of the 'type_prise' variable
```
irvePlus.type_prise.unique()
```
The observation is the same as for the power ratings: the categories listed above are hard to exploit as they stand. The charging-station connectors need to be classified correctly and legibly.
```
#Création de listes groupant les diverses typologies rencontrées dans l'échantillon
list_ef_x1 = ['EF', 'E/F', 'E', 'AC socket']
list_tesla_supercharger_x1 = ['Tesla Supercharger']
list_chademo_x1 = ['CHADEMO', 'CAHDEMO', 'CHAdeMO', 'Chademo', 'chademo', 'DC Chademo - 44 kWh', 'CHAdeMO-EU']
list_t2_x1 = ['T2', 'T2 câble attaché', 'Borne PULSE QC-50 de chez LAFON, Recharge Rapide sur prise T2',
'semi-rapide', 'AC plug', 'Tesla Type 2', '22', '23']
list_t3_x1 = ['T3', 'Type 3c', 'DC Chademo - 44 kWh + \nAC Type 3 - 43 kWh']
list_combo_x1 = ['COMBO', 'Combo 2', 'Combo2', 'COMBO 2', 'combo2', 'combo', 'Borne LAFON - recharge rapide 43AC-50DC']
list_combo_ccs350_x1 = ['CCS350-CCS350-CCS350-CCS350', 'CCS350-CCS350-CCS350-CCS350-CCS350-CCS350']
list_t2_ef_x2 = ['EF - T2', 'T2 - E/F', 'E/F-T2', 'T2 - EF', 'T2/EF', 'T2-EF', 'T2-AC Triphasé', 'T2/TE', 'E/F - T2',
'E/F + T2', 'EF/T2', 'T2-E/F', 'TE-T2', 'T2S-E/F', 'EF-T2', 'EF - T2', 'Type 2 - E/F', 'T2 – E/F',
'Borne SESAME de chez Sobem / Recharge de Type C , recharge accélérée, 2 prises sur chaque PDC : E/F et T2',
'Borne SESAME de chez Sobem / Recharge de Type C , recharge acc?l?r?e, 2 prises sur chaque PDC : E/F et T2',
'E/F-T5', 'E/F-T7', 'E/F + T4', 'T2*E']
list_t3_ef_x2 = ['EF - T3', 'T3 - EF', 'E/F + T3', 'EF/T3', 'TE-T3', 'T3 et EF', 'Type 3 - E/F', 'T3-EF',
'EF-T3', 'E/F-T3', 'T3-E/F']
list_t2_chademo_x2 = ['T2-CHAdeMO', 'Type 2 + CHAdeMO']
list_chademo_combo_x2 = ['CHADEMO - COMBO', 'CHAdeMO-Combo', 'Combo-Chademo', 'Combo2-CHAdeMO', 'CHAdeMo-Combo']
list_combo_ccs350_chademo_t2_x3 = ['CCS350-CCS350-CCS50-CHAdeMO - T2',
'CCS350-CCS350-CCS350-CCS350-CCS50-CHAdeMO - T2']
list_t2_t3_ef__x3 = ['EF - T2 - T3', 'EF - T2 - t3', 'T2-T3-EF', 'T3-EF-T2', 'T2-T2-EF']
list_chademo_combo_ef_x3 = ['A/C - Combo - CHAdeMO']
list_t2_combo_chademo_x3 = ['T2-Combo2-CHAdeMO', 'T2 Combo Chademo', 'Combo-ChaDeMo-T2', 'CHADEMO - COMBO -T2',
'CHAdeMO-Combo-T2 câble attaché']
#Intégration des colonnes booléennes
irvePlus['EF'] = False
irvePlus['Type 2'] = False
irvePlus['Type 3'] = False
irvePlus['Combo'] = False
irvePlus['Combo CCS350'] = False
irvePlus['Chademo'] = False
irvePlus['Tesla Supercharger'] = False
#Boucle itérative selon liste condition
for i, row in irvePlus.iterrows():
if row['type_prise'] in list_ef_x1:
irvePlus.loc[i, 'EF'] = True
elif row['type_prise'] in list_t2_x1:
irvePlus.loc[i, 'Type 2'] = True
elif row['type_prise'] in list_t3_x1:
irvePlus.loc[i, 'Type 3'] = True
elif row['type_prise'] in list_combo_x1:
irvePlus.loc[i, 'Combo'] = True
elif row['type_prise'] in list_combo_ccs350_x1:
irvePlus.loc[i, 'Combo CCS350'] = True
elif row['type_prise'] in list_chademo_x1:
irvePlus.loc[i, 'Chademo'] = True
elif row['type_prise'] in list_tesla_supercharger_x1:
irvePlus.loc[i, 'Tesla Supercharger'] = True
elif row['type_prise'] in list_t2_ef_x2:
irvePlus.loc[i, 'Type 2'] = True
irvePlus.loc[i, 'EF'] = True
elif row['type_prise'] in list_t3_ef_x2:
irvePlus.loc[i, 'Type 3'] = True
irvePlus.loc[i, 'EF'] = True
elif row['type_prise'] in list_t2_chademo_x2:
irvePlus.loc[i, 'Type 2'] = True
irvePlus.loc[i, 'Chademo'] = True
elif row['type_prise'] in list_chademo_combo_x2:
irvePlus.loc[i, 'Chademo'] = True
irvePlus.loc[i, 'Combo'] = True
elif row['type_prise'] in list_combo_ccs350_chademo_t2_x3:
irvePlus.loc[i, 'Type 2'] = True
irvePlus.loc[i, 'Chademo'] = True
irvePlus.loc[i, 'Combo CCS350'] = True
elif row['type_prise'] in list_t2_t3_ef__x3:
irvePlus.loc[i, 'Type 2'] = True
irvePlus.loc[i, 'Type 3'] = True
irvePlus.loc[i, 'EF'] = True
elif row['type_prise'] in list_chademo_combo_ef_x3:
irvePlus.loc[i, 'Chademo'] = True
irvePlus.loc[i, 'Combo'] = True
irvePlus.loc[i, 'EF'] = True
elif row['type_prise'] in list_t2_combo_chademo_x3:
irvePlus.loc[i, 'Type 2'] = True
irvePlus.loc[i, 'Chademo'] = True
irvePlus.loc[i, 'Combo'] = True
else:
pass
```
#### Handling the missing values identified in the charging-point count
```
#Identification des aménageurs concernés par le nombre de pdc manquant
irvePlus[irvePlus.nbre_pdc.isna()]['n_amenageur'].unique()
```
The diversity above offers no direct way of identifying the missing 'nbre_pdc' values. The option chosen here is to count the (boolean) connectors whenever the 'nbre_pdc' value is unknown; otherwise the original value is kept.
```
#Remplacement des valeurs manquantes par une valeur flottante 0.0
irvePlus.nbre_pdc.fillna(0.0, inplace=True)
#Remplacement des valeurs 0.0 par la somme des True Values correspondant aux connecteurs EF, Type 2, etc…
nb_connecteurs = irvePlus[['EF', 'Type 2', 'Type 3', 'Chademo', 'Combo', 'Combo CCS350', 'Tesla Supercharger']].sum(axis=1)
for i, row in irvePlus.iterrows():
    if row['nbre_pdc'] == 0.0:
        irvePlus.loc[i, 'nbre_pdc'] = nb_connecteurs[i]
#Comptage des connecteurs suivant le type
display(irvePlus['EF'].value_counts())
display(irvePlus['Type 2'].value_counts())
display(irvePlus['Type 3'].value_counts())
display(irvePlus['Chademo'].value_counts())
display(irvePlus['Combo'].value_counts())
display(irvePlus['Combo CCS350'].value_counts())
display(irvePlus['Tesla Supercharger'].value_counts())
```
#### Enriching the sample with a categorisation of the 'aménageurs' (network developers)
This step gives a clearer view of who the IRVE developers are across the country. It seems relevant to better understand how the installation of the charging stations is organised.
```
#Aperçu de la diversité des aménageurs à l'origine de l'implantation des bornes en France
irvePlus.n_amenageur.unique()[:30]
#Liste des catégories pouvant rassembler les aménageurs identifiées dans l'échantillon
#Collectivités territoriales
list_c_t = ['Aix-Marseille-Provence', 'BREST METROPOLE', 'CAPG', 'CAPL', 'CARF', 'CC VITRY CHAMPAGNE ET DER',
'CC de la Côtičre', 'CCPA', 'CCPHVA', 'CCVBA', 'CELLIEU', 'CGLE', 'CHARLIEU','CHAUSSON MATERIAUX',
'CHAZELLES SUR LYON', 'CNR', 'COMMELLE VERNAY',"Communauté Urbaine d'Arras", 'CANTAL', 'Aéroports de Paris SA',
"Communauté d'Agglomération Douaisis Agglo","Communauté d'Agglomération Maubeuge Val de Sambre", 'SODETREL ',
"Communauté d'Agglomération Valenciennes Métropole", "Communauté d'Agglomération du Boulonnais", 'SMOYS',
"Communauté d'Agglomération du Pays de Saint Omer", 'Communauté de Communes Flandre-Lys', 'SMEG 30',
'Communauté de Communes de la Haute Vallée de Chevreuse', "Communauté de Communes du Coeur d'Ostrevent",
'Communauté de Communes du Haut-Pays Montreuillois', "Communauté de Communes du Pays d'Opale",
'Communauté de Communes du Pays de Lumbres', "Commune d'Eguisheim",'FDEL 46', 'FDEL 46', 'FEURS',
'FONTANÈS', 'FRAISSES', 'GENILAC', 'GOLF CLUB DE LYON', 'GPSO-MEUDON', 'Grenoble-Alpes Métropole',
'Hauts-de-France', 'Herault Energies 34', 'ISTRES', "L'ETRAT", "L'HORME", 'LA FOUILLOUSE', 'LA GRAND CROIX',
'LA PACAUDIÈRE', 'LA RICAMARIE', 'LA TALAUDIÈRE', 'LA VALLA EN GIER', 'LE COTEAU', 'LORETTE','Le Pont du Gard',
'MABLY', 'MARLHES', 'MONTAGNY', 'MONTBRISON', 'MOUVELECVAR', 'MRN', 'Modulo (Mobilité Locale Durable)',
'Montpellier Mediterranee Metropole', 'Métropole Européenne de Lille', 'NEULISE', 'ORLEANS METROPOLE',
'PANISSIERES', 'PARIGNY', 'PERREUX','REGNY', 'RENAISON', 'RIORGES', 'ROANNE', 'ROCHE LA MOLIÈRE',
'SABLE SUR SARTHE', "SAINT ANDRÉ D'APCHON", 'SAINT ANDRÉ LE PUY', 'SAINT BONNET LE CHÂTEAU',
'SAINT CHRISTO EN JAREZ', 'SAINT CYR', 'SAINT ETIENNE ROCHETAILLÉE', 'SAINT ETIENNE SAINT VICTOR SUR LOIRE',
'SAINT GALMIER', 'SAINT GENEST LERPT', 'SAINT HÉAND', 'SAINT JUST SAINT RAMBERT', 'SAINT LÉGER SUR ROANNE',
'SAINT MARCELLIN EN FOREZ', 'SAINT MARTIN LA PLAINE', 'SAINT MAURICE EN GOURGOIS', 'SAINT PAUL EN JAREZ',
'SAINT ROMAIN EN JAREZ', 'SAINT ROMAIN LES ATHEUX', 'SAINT SAUVEUR EN RUE', 'SAINT SYMPHORIEN DE LAY', 'SAINT-LOUIS', 'SAINTE CROIX EN JAREZ',
'SALVIZINET', 'SAVIGNEUX', 'SDE 18', 'SDE 23', 'SDE 56', 'SDE 65', 'SDE07', 'SDE09', 'SDE29', 'SDE65', 'SDE76',
'SDEA10', 'SDED', 'SDEE48 48', 'SDESM', 'SDET 81', 'SDEY', "SDEY Syndicat Departemental d'Energies de l'Yonne",
'SE60', 'SEDI', 'SIDELC', 'SIED70', 'SIEDA 12', 'SIEEEN', 'SIEGE 27', 'SIEIL37', 'SIEML 49', 'SIPPEREC',
'SMA PNR Gatinais', 'SMED 13', 'SORBIERS', 'SOREGIES', 'SURY LE COMTAL', 'SYADEN 11', 'SYANE', 'SYDED',
'SYDEEL66 66', 'SYDESL', 'SYDEV 85', 'SYME05', 'Se 61', 'TE 53', "TERRITOIRE D'ENERGIE 90", 'Séolis', 'S‚olis',
"Syndicat Départemental d'Énergie de Loire-Atlantique (SYDELA)", 'FDEE 19', 'SDEPA 64', 'SDEG 16',
"Syndicat Départemental d'Énergies d'Eure et Loir (SDE28)", 'SDEE 47', 'SDEER 17', 'SYDEC 40',
"Syndicat Intercommunal de Distribution d'Electricité de Loir-et-Cher (SIDELC41)", 'SDE 24', 'SDEEG 33',
"Syndicat de l'Énergie de l'Orne (TE61)", 'Toulouse Metropole', 'UNIEUX', 'USEDA', 'USSON EN FOREZ',
'VEAUCHE', 'VILLARS', 'VILLE DE CAVAILLON', 'VILLE DE GAP', 'VILLE DE ROSHEIM', 'VILLEREST', "Ville d'Hazebrouck",
'Ville de Garches', 'Ville de Montrouge', 'Ville de Revel', 'Ville de Saverne', 'Ville de Viriat',
'Arcs 1950 Le Village - Parking', 'B&B Hôtel Lyon Eurexpo Chassieu', "Bastide Selva - Maison d'Hôtes",
'Baumanière les Baux de Provence', 'Belle Isle sur Risle','Benvengudo Hôtel Restaurant',
'Best Western Amarys Rambouillet', 'Best Western Golf Hôtel Lacanau','Best Western Grand Hôtel de Bordeaux',
'Best Western Hotel Alexandra', 'Best Western le Lavarin', 'Best Western Plus - Hôtel de la Paix',
'Best Western Plus - Hôtel de la Régate', 'Best Western Plus Cannes Riviera & spa',
'Best Western Plus Excelsior Chamonix', 'Best Western Plus Santa Maria', 'Brasserie des Eclusiers',
'Buffalo Grill de Foix', 'Caffe Mazzo', 'Camping BelleRive', 'Camping du Domaine de Massereau',
"Camping Ecolodge de l'Etoile d'Argens", 'Camping La Fontaine du Hallate en Morbihan', "Camping La Roche d'Ully",
'Camping Le Brasilia', 'Camping Palmira Beach', 'Camping Sunêlia Berrua', 'Camping Sunêlia Le Fief *****',
"Casino d'Évian - Evian Resort", "Casino d'Andernos - Le Miami", 'Casino De Plombières-Les-Bains',
'Casino de Pornichet', 'Casino Joa Antibes La Siesta', 'Casino JOA Le Boulou', 'Casino Le Domaine de Forges',
'Casino Partouche de Boulogne-sur-Mer', 'Casino Partouche de Palavas Les FLots','Castel Camping Le Brévedent',
'Castel Maintenon']
#Constructeurs Auto
list_auto = ['IONITY', 'Tesla', 'A Cheda', 'NISSAN', 'RENAULT']
#Parkings
list_parking = ['EFFIA', 'Alyse Parc Auto', 'Parking Bodin', 'Parking François 1er Interparking', 'TM _Parking']
#Centres commerciaux
list_centres_commerciaux = ['Centre commercial Grand Var', 'GEMO', 'Sičge Intermarché', 'Supermarchés COLRUYT', 'LECLERC', 'AUCHAN ', 'LECLERC',
'Centre Commercial Carrefour Villiers en Bière', 'Centre commercial Les Eléis', 'Centre Commercial Parly 2',
'Centre Commercial Waves Actisud', 'E-Leclerc Paray-le-Monial', 'Hyper U Sierentz', "Intermarché l'Isle sur le Doubs",
'Intermarché Mont près Chambord', 'Intermarché Ramonville', 'intermarché verneuil',
'Parc Commercial Les Portes de Soissons', 'Usines Center', 'CASA']
#Opérateurs privés
list_op_prive = ['SODETREL', 'IZIVIA', 'ELECTRIC 55 CHARGING', 'PLUS DE BORNES', 'BE TROM', 'BOEN', 'DOCUWORLD']
#Entreprises diverses
list_entreprise_diverse = ["Cattin - Grands Vins & Crémants d'Alsace", 'Caves Carrière', 'Champagne Bergere', 'Champagne Drappier',
'Champagne J de Telmont', 'Champagne Paul Dethune', 'Champagne Pertois-Moriset',
'Domaine Viticole Château de Chamirey', 'Dopff au Moulin', 'Jet Systems Hélicoptères Services']
#Hotels, restaurants, tourisme
list_tourisme = ["A L'Ecole Buissonière", 'Aa Saint-Omer Golf Club', 'Abbaye de Bussiere sur Ouche ', 'Abbaye de Talloires',
'Aigle des Neiges Hotel', 'Altapura', 'Aparthotel Adagio Genève Saint Genis Pouilly', 'Atmosphères Hôtel',
'Au Grès des Ouches', 'Au Pont Tournant', 'Auberge Bienvenue', 'Auberge Bressane de Buellas',
'Auberge de Cassagne & Spa ', 'Auberge de la Petite Reine', 'Auberge du Lac', 'Auberge du Mehrbächel',
'Auberge du Vieux Puits', 'Auberge Edelweiss', 'Auberge Ostapé', 'Auberge Sundgovienne', 'Aux Terrasses',
'Avancher Hôtel & Lodge, Restaurant & Bar', 'Château Beauregard', "Château d'Audrieu", "Château d'Igé****",
"Château d'Isenbourg Hôtel Restaurant", 'Château Dauzac', 'Château de Beaulieu', 'Château de Belmesnil',
'Château de Challanges', 'Château de Chapeau Cornu', 'Château de Chenonceau', 'Château de Clérac',
'Château de Germigney R&C Port-Lesney', 'Château de Gilly', "Château de l'Hoste", "Château de l'Ile",
'Château de la Presle', 'Château de la Treyne - Relais & Château', 'Château de Locguénolé',
'Château de Massillan', 'Château de Nazelles', 'Château de Noirieux', 'Château de Quesmy',
'Château de Riell - Relais & Châteaux', 'Château de Sacy', 'Château de Sissi', 'Château de St Paul',
'Château de Valmer', 'Château de Vault-de-Lugny', 'Château des Ducs de Joyeuse', 'Château du Galoupet',
'Château Fombrauge', 'Château Guiraud', 'Château Hôtel le Boisniard', 'Château Hourtin-Ducasse',
'Château La Coste', 'Château La Fleunie Hôtel/Restaurant', 'Château La Tour Carnet', 'Château Laborde Saint-Martin',
'Château Pape Clément', 'Château Sainte Sabine', 'Château Soutard', 'Château Talluy', 'Château Vignelaure',
'Châteaux de la Messardiere',"Chalet L'Orignal", 'Chalet M la Plagne', 'Chalet Marano Hôtel Restaurant & Spa',
"Chalet-Hôtel Le Chamois d'Or", "Chambre d'hôtes Le Crot Foulot", 'Charmhotel Au Bois le Sire',
'Chateau de Courban & Spa Nuxe', 'Château des Demoiselles', 'Chateau MontPlaisir', 'Chateau Prieuré Marquet',
'Circuit Paul Ricard', 'Circuits Automobiles LFG', 'Clos des Sens', 'Clos Marcamps', 'Club Les Ormes', 'CosyCamp',
'Courtyard Paris Roissy CDG', 'Crowne Plaza Montpellier Corum', 'Domaine Château du Faucon',
"Domaine d'Auriac - Relais & Châteaux", "Domaine d'Essendiéras", 'Domaine de Barive', 'Domaine de Barres',
'Domaine de Bournel', 'Domaine de Cabasse', 'Domaine de Crécy', 'Domaine de Divonne', "Domaine de l'Hostreiere",
'Domaine de la Corniche', "Domaine de la Forêt d'Orient - Hôtel Golf & Spa", 'Domaine de la Poignardiere',
'Domaine de la Tortinière', 'Domaine de la Tour', 'Domaine de Manville', 'Domaine de Mialaret',
'Domaine de Rochevilaine', 'Domaine de Saint-Géry', 'Domaine de Vaugouard', 'Domaine de Verchant',
'Domaine des Andéols', 'Domaine des Etangs', 'Domaine des Séquoias', 'Domaine du Bailli',
'Domaine du Château de Meursault', 'Domaine du Clos Fleuri', 'Domaine du Moulin', 'Domaine du Prieuré',
'Domaine du Revermont', 'Domaine Lafage', 'Domaine Selosse - Hôtel Les Avisés', 'Emerald Stay Apartments Morzine',
'Espace Montagne Grenoble', 'Eurotel', 'Evian Resort Golf Club', 'Ferme de la Rançonnière', 'Flocons de Sel',
'Gîte des Prés de Garnes', 'Gîte La Mystérieuse Ponts sur Seulles', 'Gîtes Bon Air Chalets Piscine Spa',
'Golden Tulip Le Grand Bé Saint Malo', 'Golden Tulip Sophia Antipolis', 'Golf Cap Malo', 'Golf Club Omaha Beach',
'Golf de Barbaroux - Open Golf Club', 'Golf de la Prée la Rochelle', 'Golf de la Sainte Baume - Open Golf Club',
'Golf de Marseille la Salette - Open Golf Club', 'Golf de Servanes - Open Golf Club',
'Golf du Touquet - Open Golf Club', 'Golf Hôtel Restaurant du Kempferhof', 'Golf International de Grenoble',
'Golf Les Gets', 'Grand Hôtel des Alpes', 'Grand Hôtel des Thermes', 'Grand Hotel La Cloche',
'Grand Parc du Puy du Fou', 'Hôtel-Restaurant & SPA Les Gentianettes', 'Hôtel-Restaurant Kleiber',
'Hôtel-Restaurant Le Grand Turc', 'Hôtel-Restaurant Le Mas du Terme', 'Hôtel & Spa Best Western Plus - Chassieu',
"Hôtel & Spa L'Equipe", 'Hôtel & Spa Les Violettes', 'Hôtel 202', 'Hôtel A Madonetta', 'Hôtel Akena',
'Hôtel AKENA de Saint-Witz', 'Hôtel Akena Dol de Bretagne', 'Hôtel Ampère', 'Hôtel Atena',
'Hôtel Au Coeur du Village', 'Hôtel B&B Colmar Expo', 'Hôtel Barrière - le Grand Hôtel Dinard',
'Hôtel Barrière Le Normandy Deauville', 'Hôtel Barrière Le Westminster', 'Hôtel Best Western Plus Metz Technopôle',
'Hôtel Cézanne', 'Hôtel Cala Di Greco', 'Hôtel Cap-Estel', 'Hôtel Capao', 'Hôtel Castel Burgond',
'Hôtel Castel Mouisson', 'Hôtel Cayrons', 'Hôtel Château de la Begude - Golf Opio Valbonne',
'Hôtel Château de la marlière', 'Hôtel Chais Monnet', 'Hôtel Champs Fleuris', 'Hôtel Chapelle et Parc',
'Hôtel Chez Camillou - Restaurant Cyril ATTRAZIC', 'Hôtel Cour des Loges', "Hôtel d'Angleterre",
'Hôtel Daumesnil-Vincennes', 'Hôtel de France', 'Hôtel de Greuze', 'Hôtel de la Cité', 'Hôtel des Dunes',
'Hôtel des Princes', 'Hôtel Diana Restaurant & Spa', 'Hôtel du Bois Blanc', 'Hôtel du Cap-Eden-Roc',
'Hôtel du Palais', 'Hôtel Escapade', 'Hôtel Fleur de Sel', 'Hôtel Golf Château de Chailly', 'Hôtel Ha(a)ïtza',
'Hôtel Husseren-les-Châteaux', 'Hôtel ibis Besançon Centre Ville', 'Hôtel Juana',
'Hôtel Kyriad Prestige Clermont-Ferrand', 'Hôtel Kyriad Prestige Lyon Saint-Priest Eurexpo',
'Hôtel Kyriad Prestige Strasbourg Nord', 'Hôtel Kyriad Prestige Vannes', "Hôtel l'Angleterre",
"Hôtel L'Estelle en Camargue ", 'Hôtel La Chaumière', 'Hôtel La Ferme', "Hôtel La Ferme D'Augustin",
'Hôtel La Sivolière', 'Hôtel La Villa', 'Hôtel La Villa Douce', 'Hôtel la Villa K', 'Hôtel Le Bellevue',
'Hôtel Le Bristol Paris', 'Hôtel Le Burdigala', 'Hôtel le Cèdre', 'Hôtel Le Capricorne', 'Hôtel Le Cep',
'Hôtel le Clos', 'Hôtel le M de Megève', 'Hôtel Le Mas des Herbes Blanches', 'Hôtel Le Morgane',
'Hôtel le Pic Blanc', 'Hôtel Le Relais des Champs', 'Hôtel Le Rivage', 'Hôtel Le Royal Barrière Deauville',
'Hôtel Le Vallon de Valrugues & Spa', 'Hôtel Les Airelles', 'Hôtel Les Bartavelles & SPA', 'Hôtel Les Bories & Spa',
'Hôtel Les Bouis', 'Hôtel Les Colonnes', 'Hôtel Les Esclargies', 'Hôtel Les Glycines et Spa', 'Hôtel Les Gravades',
'Hôtel Les Maritonnes Parc & Vignoble', 'Hôtel Les Trésoms', 'Hôtel Lodges Ste Victoire & Restaurant St-Estève',
'Hôtel Logis Châteaudun', 'Hôtel Lyon Métropole', 'Hôtel Marriott Roissy Charles de Gaulle Airport',
'Hôtel Mercure Côte Ouest Thalasso & Spa', 'Hôtel Mercure Caen Centre', 'Hôtel Mercure Epinal Centre',
'Hôtel Mercure Omaha Beach', 'Hôtel Mercure Reims Centre Cathedrale', 'Hôtel Miramar', 'Hôtel Mont-Blanc',
'Hôtel Negrecoste', 'Hôtel Parc Beaumont ', 'Hôtel Parc Victoria', 'Hôtel Parkest', 'Hôtel Radisson Blu 1835',
'Hôtel Radisson Blu Biarritz', "Hôtel Restaurant A l'Etoile", 'Hôtel Restaurant Alliance Couvent des Minimes',
'Hôtel Restaurant Au Boeuf Rouge', 'Hôtel Restaurant de la Tabletterie', 'Hôtel Restaurant des Bains',
'Hôtel Restaurant Edward 1er', 'Hôtel Restaurant Kyriad Montauban', 'Hôtel Restaurant La Ferme de Cupelin',
'Hôtel Restaurant Le Beauregard', 'Hôtel Restaurant Le Cerf', 'Hôtel Restaurant Le Noirlac',
'Hôtel Restaurant Le Tropicana', 'Hôtel Restaurant Les Oliviers', 'Hôtel Royal - Evian Resort',
'Hôtel Sezz Saint-Tropez - Restaurant Colette', 'Hôtel Stella', 'Hôtel U Capu Biancu',
'Hôtel, Restaurant Le Belvedere', 'Holiday Inn Blois centre ', 'Holiday Inn Express Paris - Velizy',
'Holiday Inn Lyon - Vaise', 'Honfleur Normandy Outlet', 'Hostellerie de la Pointe Saint Mathieu',
'Hostellerie de Levernois', 'Hostellerie La Briqueterie', 'Hostellerie La Farandole', 'Hostellerie Le Cèdre',
'Hotel & Spa Le Dahu', 'Hotel Alpen Roc', 'Hotel Bel Air - Brasserie La Terrasse', 'Hotel Castelbrac',
'Hotel du Clocher Villa Savoy ***', 'Hotel Ibis Manosque Cadarache', 'Hotel ibis Saint Brieuc Yffiniac',
'Hotel Imperial Garoupe', 'Hotel Koh-I Nor', "Hotel L'Alta Peyra", 'Hotel Le Club de Cavalière & Spa',
'Hotel Le Kaïla', 'Hotel le Manoir Saint Michel', 'Hotel Le Mans Country Club', 'Hotel le Montrachet',
'Hotel Le Pigonnet', 'Hotel Le Tillau', 'Hotel Les Bains de Cabourg - Thalazur', 'Hotel Maison Bras',
'Hotel Marina Corsica Porto Vecchio', 'Hotel Mercure Bordeaux Château Chartrons', 'Hotel Normandie',
'Hotel Restaurant de la poste', 'Hotel Restaurant Ferme Blanche', 'Hotel Restaurant Le Viscos',
'Hotel Restaurant Spa Le Rabelais', 'Hotel Royal Riviera', 'hotel Taj-I Mah*****',
'Hotel The Originals Domaine de La Groirie', 'Hotel The Originals Nantes Ouest Agora',
'Hotel-Restaurant Au Chêne Vert', 'Hyatt Paris Madeleine', 'Ibis Cergy Pontoise Le Port', 'Ibis La Roche sur Yon',
'Ibis Roanne', 'Ibis Styles - Mulsanne', 'Ibis Styles Mâcon Centre', 'Ibis Styles Paris Mairie de Clichy',
'Ibis Styles Tours Sud', 'Inter Hotel Acadie tremblay en france', 'Inter-Hôtel Alteora site du Futuroscope',
'Inter-Hôtel de la Chaussairie', 'Inter-Hôtel Le Cap', 'Inter-Hôtel Roanne Hélios', 'Inter-Hotel Albi le Cantepau',
'Inter-Hôtel du Lac', 'Inter-Hotel Ecoparc Montpellier Est', 'Inter-Hotel Saint Martial',
'Isulella Hôtel & Restaurant', 'Jiva Hill Resort', "Jum'Hôtel - Restaurant Atelier Grill",
'Kon Tiki - Riviera Villages ', 'Kube Hôtel Saint-Tropez', 'Kyriad Clermont-Ferrand Centre',
"L'Apogée Courchevel", "L'Assiette Champenoise", "L'Atelier", "L'atelier d'Edmond",
"L'Enclos Béarnais Maison d'hôtes", "L'Impérial Palace", "L'Oustalet Gigondas", "l'Oustau de Baumanière",
'La Bastide de Gordes', 'La Bastide de Tourtour Hôtel & Spa ', 'La Côte Saint Jacques & Spa',
'La Cheneaudière & Spa - Relais & Châteaux', 'La Coquillade Provence Village', 'La Ferme du Chozal',
'La Gentilhommiere', 'La Grande Maison de Bernard Magrez ', 'La Grande Terrasse Hôtel & Spa Mgallery',
'La Guitoune', 'La Jasoupe', 'La Maison de Rhodes', 'La Malouiniere des Longchamps', 'La Pinède Plage',
'La Pyramide Patrick Henriroux', 'La Réserve', 'La Réserve des Prés Verts Massages & Spa', 'La Réserve Ramatuelle',
'La Signoria - Relais & Châteaux', 'La Tannerie de Montreuil', 'La Vaucouleurs Golf Club', 'Lagardère Paris Racing',
'Le Barn', 'Le Beau Rivage', 'Le Binjamin', 'Le Bois Joli', 'Le Brittany & Spa', 'Le Château de la Tour',
'Le Chambard Relais & Châteaux', 'Le Clos de la Ribaudiere', 'Le Clos de Serre', 'Le Clos des Délices',
'Le Clos Saint Vincent', 'Le Clos Saint-Martin Hôtel & Spa', "Le Couvent des Minimes Hotel &SPA L'Occitane",
'Le Domaine de Montjoie', 'Le Domaine des Prés Verts Massages & Spa', "Le Fouquet's", 'Le Gîte de Garbay ',
'Le Grand Aigle Hôtel & Spa', "Le Grand Casino d'Annemasse ", 'Le Grand Hôtel Cannes',
"Le Grand Hôtel de l'Espérance", 'Le grand Monarque', 'Le Hameau Albert 1er', 'Le Hommet',
'Le Majestic Barrière Cannes', 'Le Manoir de Kerbot', 'Le Manoir des Impressionnistes',
'Le Mas Candille, Relais & Châteaux', 'Le Moulin de Vernègues', 'Le Palace de Menthon', 'Le Petit Nice Passedat',
'Le Phebus & Spa', 'Le Pigeonnier du Perron', 'Le Prieuré', 'Le Prieuré des Sources',
'Le Refuge des Près Verts Massages & Spa', 'Le Relais Bernard Loiseau', 'Le Relais du Boisniard', 'Le Richelieu',
'Le Saint-Barnabé Hôtel et Spa ', 'Le Saint-James', 'Les Châtaigniers de Florac', 'Les Cures Marines',
'Les Etangs de Corot', 'Les Fermes de Marie', 'Les Hôtels de Beauval', 'Les Haras Hôtel ', 'Les Hauts de Loire',
'Les Maisons de Bricourt', 'Les Manoirs Tourgeville', 'Les Orangeries', "Les Prés d'Eugénie - Michel Guérard",
'Les Prairies de la Mer', 'Les Sources de Caudalie', 'Les Terrasses du Port', "Les Vignobles de l'Escarelle",
'Logis Aigue Marine Hôtel', "Logis Au Comté D'Ornon", 'Logis Auberge de la Diège', 'Logis Auberge de la Tour',
'Logis Château de la Motte-Liessies', 'Logis Château de Labro', 'Logis Domaine du Relais de Vincey',
'Logis Grand Hôtels des Bains', 'Logis Hôtel & Spa Marina Adelphia', 'Logis Hôtel Acotel', "Logis Hôtel AR Milin'",
'Logis Hôtel Arcombelle', 'Logis Hôtel Bellevue', 'Logis Hôtel Center Brest', 'Logis Hôtel de la Clape',
'Logis Hôtel des Châteaux', 'Logis Hôtel des Elmes - Restaurant la Littorine', 'Logis Hôtel du Cheval Blanc',
'Logis Hôtel Le Prince Noir', 'Logis Hôtel Le Régent', 'Logis Hôtel le Régina', 'Logis Hôtel le Vernay',
'Logis Hôtel les 2 Rives', 'Logis Hôtel Les Pierres Dorées', 'Logis Hôtel Murtel',
'Logis Hôtel Restaurant Au cheval blanc', 'Logis Hôtel Restaurant La Brèche de Roland',
'Logis Hôtel Restaurant Spa Les Peupliers', 'Logis Hôtel Taillard', 'Logis Hostellerie du Périgord Vert',
'Logis Hostellerie Saint Vincent ', 'Logis Hotel le Céans', 'Logis Hotel Restaurant des Acacias',
"Logis L'Abreuvoir Hôtel Restaurant", "Logis L'Hôtel D'Arc", "Logis L'Orée du Bois", 'Logis La Résidence',
'Logis La Source du Mont', 'Logis Lacotel', 'Logis Le Moulin de la Coudre',
'Logis Le Moulin des Gardelles Hôtel-Restaurant', 'Logis Le Relais des Dix Crus', 'Logis Les Hauts de Montreuil',
'Logis Mas de la Feniere', 'Logis Relais du Gué de Selle', 'Lorraine Hôtel',
'M Gallery - La Cour des Consuls Hotel & Spa', 'Maison Addama', 'Maison Cazes', "Maison d'Hotes La Cimentelle",
'Maison des Algues', 'Maison Lameloise', 'Maison Pic', 'Mama Shelter', 'Mama Shelter Lyon',
'Mama Shelter Marseille', 'Manoir de Gressy', 'Manoir de la Poterie & SPA', 'Manoir de Pancemont',
'Manoir de Surville', 'Manoir Plessis Bellevue', 'Mas de Chastelas', 'Mas de la Crémaillère',
'Mas de la Grenouillère', 'Mas la Jaina', 'Mercure Bourges Hôtel de Bourbon', 'Mercure Cherbourg Centre Port',
'Mercure Grand Hotel des Thermes', 'Mercure Lille Centre Vieux Lille', 'Mercure Lyon Genas Eurexpo',
'Mineral Lodge', 'Misincu', 'MOB Hotel Lyon', 'Monte Carlo Beach Hôtel', 'Musée Würth France Erstein',
'Najeti Hôtel Château Tilques', "Najeti Hôtel de l'Univers", 'Najeti Hôtel La Magnaneraie',
'New Cottage & Spa de nage', 'Nouvel Hôtel', 'Novotel Chartres', 'Novotel La Rochelle Centre',
'Novotel Marseille Centre Prado Vélodrome', 'Novotel Noisy Marne la Vallée', 'Novotel Spa Rennes Centre Gare',
'Novotel Thalassa Dinard', 'Orée de Chartres', "Pêche de Vigne Spa et Maison d'Hôtes", 'Parc zoologique Cerza',
'Paris International Golf Club', 'Petit Hôtel Confidentiel', 'Pierre et Vacances Premium Le Crotoy',
"Pierre et Vacances Premium Les Terrasses d'Eos", "Pierre et Vacances Premium Presqu'Ile de la Touques",
'Pizza Del Arte', 'Plaza Madeleine', 'Punta Lara', "Qualys Hôtel d'Alsace", "Qualys Hôtel du Golf de l'Ailette",
'Qualys-Hotel Grand Hôtel Saint Pierre', 'Résidence de France', 'Résidence Le Balamina', 'Radisson Blu Hôtel Nice',
'Relais & Châteaux - La Ferme Saint Siméon', 'Relais & Châteaux Georges Blanc Parc & Spa', 'Relais Christine',
'Relais du Silence - Château de Perreux', 'Relais du Silence - Le Mas de Guilles',
'Relais du Silence Domaine du Normandoux', 'Relais du Silence Ker Moor Préférence',
'Relais du Silence La Mainaz Hôtel Restaurant', 'Relais du Silence Les Vignes de la Chapelle',
'Relais du Silence Manoir de la Roche Torin', 'Relais Thalasso Chateau des Tourelles',
'Relais Thalasso Hotel Atalante', 'Renaissance Arc de Triomphe', 'Resort Barrière Lille',
'Resort Barrière Ribeauvillé', 'Resort Résidence Pierre', 'Restaurant Del Arte', 'Restaurant DEL ARTE Ploërmel',
'Restaurant La Chaudanne', 'Restaurant La Ferme Saint Michel', "Restaurant La Grande Cascade - L'Auberge du Bonheur",
'Restaurant Les Amis du Lac', 'Ristorante Del Arte', 'Saint Charles Hôtel & Spa', 'Saint James Paris ',
'SAS Louis Moreau', 'Shangri-La Hotel Paris', 'SNIP Yachting', 'Splendid Hôtel & Spa', 'Stiletto Cabaret',
'Stras Kart', 'Sunélia Aluna Vacances', 'Sunêlia Camping du Ranc Davaine', 'Sunêlia Domaine de la Dragonnière',
'Sunêlia Domaine Les Ranchisses', 'Sunêlia La Ribeyre', 'Sunêlia Les 3 Vallées',
'Sunêlia Perla di Mare camping restaurant', 'Télécabine du Mont-Chéry', 'Terre Blanche Hotel Spa Golf Resort',
'Territoires Charente - ZAC Montagnes Ouest', "Toison d'Or", 'Valthoparc', 'Vichy Célestins Spa Hôtel',
'Villa Duflot', 'Villa Florentine - Restaurant Les Terrasses de Lyon', 'Villa Garbo Cannes', 'Villa La Coste',
'Villa Maïa', 'Villa Magnolia Parc', 'Villa Mas St Jean', 'Villa Morelia', 'Villa Regalido', 'Villa René Lalique',
'Village Les Armaillis', 'Vincent Cuisinier de Campagne', 'Yelloh Village Camping Le Sérignan-Plage',
'Yelloh Village Les Grands Pins', 'Yelloh Village Les Tournels']
#Intégration d'une nouvelle variable 'categ_amenageur' selon condition
irvePlus['categ_amenageur'] = irvePlus['n_amenageur'].copy()
for x in irvePlus['categ_amenageur']:
if x in list_c_t:
irvePlus['categ_amenageur'].replace(x, 'Collectivités territoriales', inplace=True)
elif x in list_auto:
irvePlus['categ_amenageur'].replace(x, 'Constructeurs Automobiles', inplace=True)
elif x in list_parking:
irvePlus['categ_amenageur'].replace(x, 'Sociétés de Parking', inplace=True)
elif x in list_centres_commerciaux:
irvePlus['categ_amenageur'].replace(x, 'Centres commerciaux', inplace=True)
elif x in list_op_prive:
irvePlus['categ_amenageur'].replace(x, 'Opérateurs privés', inplace=True)
elif x in list_entreprise_diverse:
irvePlus['categ_amenageur'].replace(x, 'Entreprises diverses', inplace=True)
elif x in list_tourisme:
irvePlus['categ_amenageur'].replace(x, 'Hôtels, Restaurants…', inplace=True)
else:
pass
```
#### Enriching the sample with the département code, département and region
The Google Geocoding API was used to extract the expected geolocation data. After a few trials, several 'Latitude' and 'Longitude' coordinates were identified as non-compliant (swapped coordinates, format issues, etc.); these anomalies were handled case by case so that the API could be used.
```
#Intervention sur quelques coordonnées atypiques
irvePlus['Ylatitude'].replace("43*96228900", 43.96228900, inplace=True)
irvePlus['Xlongitude'].replace('6?07\'44.1"E', 6.07441, inplace=True)
irvePlus['Xlongitude'].replace('6›09\'34.8"E', 6.09348, inplace=True)
#Changement du type de données sur les variables Latitude et Longitude
irvePlus['Ylatitude'] = irvePlus['Ylatitude'].astype(float)
irvePlus['Xlongitude'] = irvePlus['Xlongitude'].astype(float)
#Traitement des observations en anomalie après avoir effectué quelques tentatives
irvePlus.loc[1442, 'Ylatitude'] = 43.279831
irvePlus.loc[1442, 'Xlongitude'] = 6.577639
irvePlus.loc[1477, 'Ylatitude'] = 43.279831
irvePlus.loc[1477, 'Xlongitude'] = 6.577639
irvePlus.loc[1505, 'Ylatitude'] = 43.279831
irvePlus.loc[1505, 'Xlongitude'] = 6.577639
irvePlus.loc[2059, 'Ylatitude'] = 45.889087
irvePlus.loc[2059, 'Xlongitude'] = 4.893406
irvePlus.loc[2078, 'Ylatitude'] = 47.031041
irvePlus.loc[2078, 'Xlongitude'] = 5.108918
irvePlus.loc[8527, 'Ylatitude'] = 43.608195
irvePlus.loc[8527, 'Xlongitude'] = 5.003735
irvePlus.loc[8543, 'Ylatitude'] = 43.608195
irvePlus.loc[8543, 'Xlongitude'] = 5.003735
irvePlus.loc[10071, 'Ylatitude'] = 46.3026926
irvePlus.loc[10071, 'Xlongitude'] = 4.8321937
irvePlus.loc[10072, 'Ylatitude'] = 46.3027089
irvePlus.loc[10072, 'Xlongitude'] = 4.8234389
irvePlus.loc[10073, 'Ylatitude'] = 46.3026926
irvePlus.loc[10073, 'Xlongitude'] = 4.8321937
irvePlus.loc[10074, 'Ylatitude'] = 46.276451
irvePlus.loc[10074, 'Xlongitude'] = 4.038723
irvePlus.loc[10075, 'Ylatitude'] = 46.276451
irvePlus.loc[10075, 'Xlongitude'] = 4.038723
irvePlus.loc[10076, 'Ylatitude'] = 46.3027089
irvePlus.loc[10076, 'Xlongitude'] = 4.8234389
irvePlus.loc[13671, 'Ylatitude'] = 45.271378
irvePlus.loc[13671, 'Xlongitude'] = 0.043441
irvePlus.loc[13672, 'Ylatitude'] = 45.271378
irvePlus.loc[13672, 'Xlongitude'] = 0.043441
irvePlus.loc[13683, 'Ylatitude'] = 45.886326
irvePlus.loc[13683, 'Xlongitude'] = 0.582253
irvePlus.loc[13684, 'Ylatitude'] = 45.886326
irvePlus.loc[13684, 'Xlongitude'] = 0.582253
```
#### Warning!
__The following code requires a Google Geocoding API key, which is not provided here.
The "list_cp" variable has been saved so that the script does not need to be re-run on every
execution of the notebook (roughly 1 hour of runtime).__
```
%%time
#Code permettant de préciser les codes postaux des bornes de recharge de l'échantillon
from urllib.request import urlopen
import sys
import json
from sys import stdout
from time import sleep
list_cp = []
for i, row in irvePlus.iterrows():
key = "*********************************"
url = "https://maps.googleapis.com/maps/api/geocode/json?"
url += "latlng=%s,%s&sensor=false&key=%s" % (row['Ylatitude'], row['Xlongitude'], key)
v = urlopen(url).read()
j = json.loads(v)
components = j['results'][0]['address_components']
for c in components:
if "postal_code" in c['types']:
cp = c['long_name']
list_cp.append(cp)
else:
pass
sys.stdout.write('\r' "Progress. "+ str(i+1) + "/" +str(len(irvePlus)) + " >>>>>>> ")
sys.stdout.flush()
```
Progress. 16112/16112 CPU times: user 4min 42s, sys: 24.6 s, total: 5min 7s
Wall time: 1h 15min 19s
From the 'list_cp' list, the data can be processed to obtain the département codes, and thus enrich the sample with a location by French département.
```
#Sauvegarde de la variable
import pickle
#pickle.dump(list_cp, open('p8_datatable/list_cp.pickle', 'wb'))
with open('p8_datatable/list_cp.pickle', 'rb') as f:
list_cp = pickle.load(f)
#Création d'une liste propre aux codes des départements
cd = []
for c in list_cp:
    cd.append(str(c)[:2])
#Intégration des nouvelles variables dans l'échantillon
irvePlus['code_postal'] = list_cp
irvePlus['code_dpt'] = cd
#Visualisation rapide de quelques observations
irvePlus[6000:6005]
#Visualisation des codes départements
irvePlus.code_dpt.unique()
#Modification de quelques codes pour pouvoir ensuite effectuer une jointure sans défaut
code_modif = ['01', '02', '03', '04', '05', '06', '07', '08', '09' ]
for x in irvePlus['code_dpt']:
if x in code_modif:
irvePlus['code_dpt'].replace(x, x[1:], inplace=True)
#Précision apportée à la Corse avec différenciation entre 2A et 2B
irvePlus.code_dpt.replace('20', '2A', inplace=True)
code_dpt_2b = [14106, 14107, 14662, 14663, 15070, 15071, 15377, 15378, 15379, 15561, 15562, 15799, 15800]
for i, row in irvePlus.iterrows():
if i in code_dpt_2b:
irvePlus.loc[i, "code_dpt"] = '2B'
#Enrichement des départements et régions via le fichier 'departements-francais.csv'
#Source : https://www.regions-et-departements.fr/departements-francais
dpt_fr = pd.read_csv('p8_data/departements-francais.csv', sep=';')
dpt_fr.rename(columns={'NUMÉRO': 'code_dpt', 'NOM': 'dpt', 'REGION': 'region',
'SUPERFICIE (km²)': 'superficie_km2', 'POPULATION': 'nbre_habitant'}, inplace=True)
dpt_fr.head()
#Jointure entre l'échantillon et le référentiel des départements et régions
irvePlus = pd.merge(irvePlus, dpt_fr[['code_dpt', 'dpt', 'region', 'superficie_km2', 'nbre_habitant']],
how='left', on = "code_dpt")
#Visualisation des 5 dernières lignes
irvePlus.tail()
#Estimation du nombre de stations de recharge (en anglais Charging Station Pool)
irvePlus.id_station.nunique()
#Estimation du nombre de bornes de recharge (en anglais Charging Station)
irvePlus.id_borne.nunique()
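# Alternative count based on distinct station names (n_station)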
len(irvePlus.n_station.unique())
#Estimation du nombre de points de recharge (en anglais Charging Point)
irvePlus.nbre_pdc.sum()
```
Note that depending on the study, the breakdown established above diverges, sometimes through loose use of the terms charging station and charging point. Here it is not feasible to reach a finer granularity that would take the in-service status of each station into account.
```
#Sauvegarde
irvePlus.to_csv('p8_datatable/irvePlus.csv')
```
### Five-year forecast of the number of charging points
Starting from the 'irve_type' sample, which is based on quarterly figures, the sample is resampled by month in order to get a finer granularity of the data.
```
#Rappel de l'échantillon 'irve_type' vu en début de Mission 2
irve_type
#Création d'un échantillon spécifique à la prévision
irve_type_month = irve_type.copy()
irve_type_month = irve_type_month[['Time', 'Accessible au public']].set_index('Time')
irve_type_month = irve_type_month.resample('M').sum().reset_index()
#Intégration de deux lignes d'observations manquantes
irve_type_month.loc[58] = ['2015-01-31 00:00:00', 0]
irve_type_month.loc[59] = ['2015-02-28 00:00:00', 0]
#Mise en forme de l'échantillon
irve_type_month['Time'] = pd.to_datetime(irve_type_month['Time'])
irve_type_month = irve_type_month.sort_values(by='Time').reset_index(drop=True)
#Ventilation des valeurs trimestrielles /Mois
from random import seed, randint  # random helpers used for the monthly breakdown below
seed(1)
for i, row in irve_type_month.iterrows():
if row['Time'] < pd.Timestamp('2015-03-31') :
irve_type_month.loc[i, 'Accessible au public'] = randint(5000, 8478)
elif (row['Time'] > pd.Timestamp('2015-03-31')) & (row['Time'] < pd.Timestamp('2015-06-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(8478, 10086)
elif (row['Time'] > pd.Timestamp('2015-06-30')) & (row['Time'] < pd.Timestamp('2015-09-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(10086, 10928)
elif (row['Time'] > pd.Timestamp('2015-09-30')) & (row['Time'] < pd.Timestamp('2015-12-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(10928, 11113)
elif (row['Time'] > pd.Timestamp('2015-12-31')) & (row['Time'] < pd.Timestamp('2016-03-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(11113, 12830)
elif (row['Time'] > pd.Timestamp('2016-03-31')) & (row['Time'] < pd.Timestamp('2016-06-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(12830, 13861)
elif (row['Time'] > pd.Timestamp('2016-06-30')) & (row['Time'] < pd.Timestamp('2016-09-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(12859, 13861)
elif (row['Time'] > pd.Timestamp('2016-09-30')) & (row['Time'] < pd.Timestamp('2016-12-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(13861, 16220)
elif (row['Time'] > pd.Timestamp('2016-12-31')) & (row['Time'] < pd.Timestamp('2017-03-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(16220, 17423)
elif (row['Time'] > pd.Timestamp('2017-03-31')) & (row['Time'] < pd.Timestamp('2017-06-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(17423, 19750)
elif (row['Time'] > pd.Timestamp('2017-06-30')) & (row['Time'] < pd.Timestamp('2017-09-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(19750, 20688)
elif (row['Time'] > pd.Timestamp('2017-09-30')) & (row['Time'] < pd.Timestamp('2017-12-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(19309, 20688)
elif (row['Time'] > pd.Timestamp('2017-12-31')) & (row['Time'] < pd.Timestamp('2018-03-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(19309, 26370)
elif (row['Time'] > pd.Timestamp('2018-03-31')) & (row['Time'] < pd.Timestamp('2018-06-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(22283, 26370)
elif (row['Time'] > pd.Timestamp('2018-06-30')) & (row['Time'] < pd.Timestamp('2018-09-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(22283, 24362)
elif (row['Time'] > pd.Timestamp('2018-09-30')) & (row['Time'] < pd.Timestamp('2018-12-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(24362, 26297)
elif (row['Time'] > pd.Timestamp('2018-12-31')) & (row['Time'] < pd.Timestamp('2019-03-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(26297, 27446)
elif (row['Time'] > pd.Timestamp('2019-03-31')) & (row['Time'] < pd.Timestamp('2019-06-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(27446, 28910)
elif (row['Time'] > pd.Timestamp('2019-06-30')) & (row['Time'] < pd.Timestamp('2019-09-30')):
irve_type_month.loc[i, 'Accessible au public'] = randint(28910, 31461)
elif (row['Time'] > pd.Timestamp('2019-09-30')) & (row['Time'] < pd.Timestamp('2019-12-31')):
irve_type_month.loc[i, 'Accessible au public'] = randint(30110, 31461)
else :
pass
#Affichage de l'échantillon
irve_type_month
#Sauvegarde
irve_type_month.to_csv('p8_datatable/irve_type_month.csv')
#Mise en oeuvre de l'algorithme Prophet (Facebook)
from fbprophet import Prophet
pdc_forecast_prophet = irve_type_month.copy()
pdc_forecast_prophet = pdc_forecast_prophet[['Time', 'Accessible au public']]
pdc_forecast_prophet.rename(columns={'Time': 'ds', 'Accessible au public': 'y'}, inplace=True)
pdc_forecast_prophet.tail()
#Sauvegarde
pdc_forecast_prophet.to_csv('p8_datatable/pdc_forecast_prophet.csv')
#Instanciation et entrainement du modèle
model = Prophet(yearly_seasonality=True, weekly_seasonality=False, daily_seasonality=False)
model.fit(pdc_forecast_prophet)
#Prévision du nombre de Points de charge à 5 ans
future = model.make_future_dataframe(periods=60, freq='M')
forecast = model.predict(future)
fig = model.plot(forecast)
fig.savefig('p8_img/forecast_prophet_pdc.png')
#Affichage des 5 derniers mois de prévision
forecast_pdc = model.predict(future)
forecast_pdc[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail()
#Sauvegarde
forecast_pdc.to_csv('p8_datatable/forecast_pdc.csv')
```
By the end of 2024 the network of charging points could grow to roughly 56,000 connectors, according to the Prophet forecast.
```
#Préparation des données (observations + prévisions) pour Test statistique
metric_forecast_pdc = forecast_pdc.set_index('ds')[['yhat']].join(pdc_forecast_prophet.set_index('ds').y).reset_index()
metric_forecast_pdc.dropna(inplace=True)
metric_forecast_pdc
#Mesures statistiques permettant d'évaluer le modèle
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error
import math
print("R2 = " + str(r2_score(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))
print("MSE = " + str(mean_squared_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))
print("RMSE = " + str(math.sqrt(mean_squared_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat']))))
print("MAE = " + str(mean_absolute_error(metric_forecast_pdc['y'], metric_forecast_pdc['yhat'])))
```
The statistical coefficients are more optimistic than those of the previous forecasts. The coefficient of determination remains close to 1, but the other error metrics are fairly high. In other words, the robustness of the model is not very satisfactory.
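For reference, the error metrics reported above are defined as
$$ R^2 = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}, \qquad \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_i (y_i - \hat{y}_i)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_i \lvert y_i - \hat{y}_i \rvert $$
where $y_i$ are the observed counts and $\hat{y}_i$ the fitted values (the MSE is simply the square of the RMSE).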
<u>For comparison purposes, the Holt-Winters method is also applied.</u>
```
#Préparation des données
irve_forecast_hw = irve_type_month.copy()
irve_forecast_hw['Time'] = pd.to_datetime(irve_forecast_hw['Time'])
irve_forecast_hw.set_index('Time', inplace=True)
#Méthode ExponentialSmoothing de statsmodels est utilisée pour la modélisation d'Holt-Winters.
from statsmodels.tsa.api import ExponentialSmoothing
y = np.array(irve_forecast_hw['Accessible au public'])
hw = ExponentialSmoothing(y, seasonal_periods=12, trend='add', seasonal='add').fit()
hw_pred = hw.forecast(60)
#Visualisation de la prévision à 5 ans par Holt-Winters
plt.figure(figsize=(16, 8))
plt.plot(irve_forecast_hw['Accessible au public'], label='PDC')
plt.plot(pd.date_range(irve_forecast_hw.index[len(y)-1], periods=60, freq='M'),
         hw_pred, label='Prévision Holt-Winters')
plt.title("Points de charge ouverts au public en France d'ici 2024")
plt.legend()
plt.savefig('p8_img/holtwinters_pdc.png')
plt.show()
#Affichage des valeurs prédites
hw_pred
```
**After these two models, we can conclude that the network of charging points (PDC, or Charging Points) should grow to between 55,000 and 60,000 connectors by the end of 2024.**
[Back to the previous notebook (Positioning of the electric car from 2010 to 2019 and two-year forecast)](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook01.ipynb)
[See the next part of the project: Load on the power grid (profiling a consumption peak in 2040, etc.)](https://github.com/nalron/project_electric_cars_france2040/blob/french_version/p8_notebook03.ipynb)
# Linear Regression
## Setup
First, let's set up some environmental dependencies. These just make the numerics easier and adjust some of the plotting defaults to make things more legible.
```
# Python 3 compatability
from __future__ import division, print_function
from six.moves import range
# system functions that are always useful to have
import time, sys, os
# basic numeric setup
import numpy as np
# inline plotting
%matplotlib inline
# plotting
import matplotlib
from matplotlib import pyplot as plt
# seed the random number generator
np.random.seed(56101)
# re-defining plotting defaults
from matplotlib import rcParams
rcParams.update({'xtick.major.pad': '7.0'})
rcParams.update({'xtick.major.size': '7.5'})
rcParams.update({'xtick.major.width': '1.5'})
rcParams.update({'xtick.minor.pad': '7.0'})
rcParams.update({'xtick.minor.size': '3.5'})
rcParams.update({'xtick.minor.width': '1.0'})
rcParams.update({'ytick.major.pad': '7.0'})
rcParams.update({'ytick.major.size': '7.5'})
rcParams.update({'ytick.major.width': '1.5'})
rcParams.update({'ytick.minor.pad': '7.0'})
rcParams.update({'ytick.minor.size': '3.5'})
rcParams.update({'ytick.minor.width': '1.0'})
rcParams.update({'font.size': 30})
import dynesty
```
Linear regression is ubiquitous in research. In this example we'll fit a line
$$ y=mx+b $$
to data where the error bars have been underestimated and need to be inflated by a factor $f$. This example is taken from the [emcee documentation](http://dan.iel.fm/emcee/current/user/line/).
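Concretely, each data point is assumed to scatter about the line with an effective variance that combines the quoted error bar and the underestimation factor,
$$ s_i^2 = \sigma_i^2 + f^2\,(m x_i + b)^2, $$
which is exactly how the mock data are generated below.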
```
# truth
m_true = -0.9594
b_true = 4.294
f_true = 0.534
# generate mock data
N = 50
x = np.sort(10 * np.random.rand(N))
yerr = 0.1 + 0.5 * np.random.rand(N)
y_true = m_true * x + b_true
y = y_true + np.abs(f_true * y_true) * np.random.randn(N)
y += yerr * np.random.randn(N)
# plot results
plt.figure(figsize=(10, 5))
plt.errorbar(x, y, yerr=yerr, fmt='ko', ecolor='red')
plt.plot(x, y_true, color='blue', lw=3)
plt.xlabel(r'$X$')
plt.ylabel(r'$Y$')
plt.tight_layout()
```
We will assume the errors are Normal and impose uniform priors on $(m, b, \ln f)$.
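Under these assumptions, the log-likelihood implemented below is, up to an additive constant,
$$ \ln \mathcal{L}(m, b, \ln f) = -\frac{1}{2} \sum_i \left[ \frac{(y_i - m x_i - b)^2}{s_i^2} + \ln s_i^2 \right], \qquad s_i^2 = \sigma_i^2 + e^{2 \ln f} (m x_i + b)^2. $$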
```
# log-likelihood
def loglike(theta):
m, b, lnf = theta
model = m * x + b
inv_sigma2 = 1.0 / (yerr**2 + model**2 * np.exp(2 * lnf))
return -0.5 * (np.sum((y-model)**2 * inv_sigma2 - np.log(inv_sigma2)))
# prior transform
def prior_transform(utheta):
um, ub, ulf = utheta
m = 5.5 * um - 5.
b = 10. * ub
lnf = 11. * ulf - 10.
return m, b, lnf
```
Let's sample from this distribution using multiple bounding ellipsoids and random "staggers" (an alternative to random walks).
```
dsampler = dynesty.DynamicNestedSampler(loglike, prior_transform, ndim=3,
bound='multi', sample='rstagger')
dsampler.run_nested()
dres = dsampler.results
```
Let's see how we did.
```
from dynesty import plotting as dyplot
truths = [m_true, b_true, np.log(f_true)]
labels = [r'$m$', r'$b$', r'$\ln f$']
fig, axes = dyplot.traceplot(dsampler.results, truths=truths, labels=labels,
fig=plt.subplots(3, 2, figsize=(16, 12)))
fig.tight_layout()
fig, axes = dyplot.cornerplot(dres, truths=truths, show_titles=True,
title_kwargs={'y': 1.04}, labels=labels,
fig=plt.subplots(3, 3, figsize=(15, 15)))
```
```
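Beyond the trace and corner plots, it can be useful to turn the weighted nested samples into numerical posterior summaries. Below is a minimal sketch, assuming the standard `dynesty` results attributes (`samples`, `logwt`, `logz`) and the helper functions `mean_and_cov` and `quantile` from `dynesty.utils`:
```
from dynesty import utils as dyfunc

# importance weights of the nested samples
samples = dres.samples                       # shape (niter, ndim)
weights = np.exp(dres.logwt - dres.logz[-1])
weights /= weights.sum()                     # normalize for numerical safety

# weighted posterior mean and covariance
mean, cov = dyfunc.mean_and_cov(samples, weights)

# weighted medians and 95% credible intervals for each parameter
for i, name in enumerate(labels):
    lo, med, hi = dyfunc.quantile(samples[:, i], [0.025, 0.5, 0.975], weights=weights)
    print('{}: {:.3f} (+{:.3f} / -{:.3f})'.format(name, med, hi - med, med - lo))
```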
# LASSO and Ridge Regression
This recipe shows how to use TensorFlow to solve LASSO or ridge regression for $\boldsymbol{y} = \boldsymbol{Ax} + \boldsymbol{b}$
We will use the iris data, specifically: $\boldsymbol{y}$ = Sepal Length, $\boldsymbol{x}$ = Petal Width
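As a quick summary of the two objectives implemented in the Loss Functions section below: ridge regression adds a squared penalty on the slope $A$, while this recipe emulates the LASSO constraint with a steep sigmoid (a continuous approximation of a Heaviside step) that adds a large penalty once $A$ exceeds a threshold of 0.9:
$$ \text{Loss}_{Ridge} = \frac{1}{n}\sum_{i}(y_i - \hat{y}_i)^2 + \lambda A^2, \qquad \text{Loss}_{LASSO} = \frac{1}{n}\sum_{i}(y_i - \hat{y}_i)^2 + \frac{99}{1 + e^{-50\,(A - 0.9)}} $$
with $\lambda = 1$ in the code.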
```
# import required libraries
import matplotlib.pyplot as plt
import sys
import numpy as np
import tensorflow as tf
from sklearn import datasets
from tensorflow.python.framework import ops
# Specify 'Ridge' or 'LASSO'
regression_type = 'LASSO'
# clear out old graph
ops.reset_default_graph()
# Create graph
sess = tf.Session()
```
## Load iris data
```
# iris.data = [(Sepal Length, Sepal Width, Petal Length, Petal Width)]
iris = datasets.load_iris()
x_vals = np.array([x[3] for x in iris.data])
y_vals = np.array([y[0] for y in iris.data])
```
## Model Parameters
```
# Declare batch size
batch_size = 50
# Initialize placeholders
x_data = tf.placeholder(shape=[None, 1], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
# make results reproducible
seed = 13
np.random.seed(seed)
tf.set_random_seed(seed)
# Create variables for linear regression
A = tf.Variable(tf.random_normal(shape=[1,1]))
b = tf.Variable(tf.random_normal(shape=[1,1]))
# Declare model operations
model_output = tf.add(tf.matmul(x_data, A), b)
```
## Loss Functions
```
# Select appropriate loss function based on regression type
if regression_type == 'LASSO':
# Declare Lasso loss function
# Lasso Loss = L2_Loss + heavyside_step,
# Where heavyside_step ~ 0 if A < constant, otherwise ~ 99
lasso_param = tf.constant(0.9)
heavyside_step = tf.truediv(1., tf.add(1., tf.exp(tf.multiply(-50., tf.subtract(A, lasso_param)))))
regularization_param = tf.multiply(heavyside_step, 99.)
loss = tf.add(tf.reduce_mean(tf.square(y_target - model_output)), regularization_param)
elif regression_type == 'Ridge':
# Declare the Ridge loss function
# Ridge loss = L2_loss + L2 norm of slope
ridge_param = tf.constant(1.)
ridge_loss = tf.reduce_mean(tf.square(A))
loss = tf.expand_dims(tf.add(tf.reduce_mean(tf.square(y_target - model_output)), tf.multiply(ridge_param, ridge_loss)), 0)
else:
print('Invalid regression_type parameter value',file=sys.stderr)
```
## Optimizer
```
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.001)
train_step = my_opt.minimize(loss)
```
## Run regression
```
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec = []
for i in range(1500):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = np.transpose([x_vals[rand_index]])
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss[0])
if (i+1)%300==0:
print('Step #' + str(i+1) + ' A = ' + str(sess.run(A)) + ' b = ' + str(sess.run(b)))
print('Loss = ' + str(temp_loss))
print('\n')
```
## Extract regression results
```
# Get the optimal coefficients
[slope] = sess.run(A)
[y_intercept] = sess.run(b)
# Get best fit line
best_fit = []
for i in x_vals:
best_fit.append(slope*i+y_intercept)
```
## Plot results
```
%matplotlib inline
# Plot the result
plt.plot(x_vals, y_vals, 'o', label='Data Points')
plt.plot(x_vals, best_fit, 'r-', label='Best fit line', linewidth=3)
plt.legend(loc='upper left')
plt.title('Sepal Length vs Petal Width')
plt.xlabel('Petal Width')
plt.ylabel('Sepal Length')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title(regression_type + ' Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
```
## Use a Decision Optimization model deployed in Watson Machine Learning
This notebook shows you how to create and monitor jobs, and get solutions using the Watson Machine Learning Python Client.
This example only applies to Decision Optimization in Watson Machine Learning Local and Cloud Pak for Data/Watson Studio Local.
In order to use this example, you must first have deployed the Diet example.
A Python API is provided to submit input data, solve, and get results.
```
# Uninstall the Watson Machine Learning client Python client based on v3 APIs
!pip uninstall watson-machine-learning-client -y
# Install WML client API
!pip install ibm-watson-machine-learning
from ibm_watson_machine_learning import APIClient
# Instantiate a client using credentials
wml_credentials = {
"apikey": "<API_key>",
"url": "<instance_url>"
}
client = APIClient(wml_credentials)
# Find the space ID
space_name = '<SPACE NAME>'
space_id = [x['metadata']['id'] for x in client.spaces.get_details()['resources'] if x['entity']['name'] == space_name][0]
client.set.default_space(space_id)
# Import pandas library
import pandas as pd
# initialize list of lists
diet_food = pd.DataFrame([ ["Roasted Chicken", 0.84, 0, 10],
["Spaghetti W/ Sauce", 0.78, 0, 10],
["Tomato,Red,Ripe,Raw", 0.27, 0, 10],
["Apple,Raw,W/Skin", 0.24, 0, 10],
["Grapes", 0.32, 0, 10],
["Chocolate Chip Cookies", 0.03, 0, 10],
["Lowfat Milk", 0.23, 0, 10],
["Raisin Brn", 0.34, 0, 10],
["Hotdog", 0.31, 0, 10]] , columns = ["name","unit_cost","qmin","qmax"])
diet_food_nutrients = pd.DataFrame([
["Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2],
["Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0, 0, 42.2],
["Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1],
["Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21, 0.3],
["Grapes", 15.1, 3.4, 0.1, 24, 0.2, 4.1, 0.2],
["Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0, 9.3, 0.9],
["Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0, 11.7, 8.1],
["Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4, 27.9, 4],
["Hotdog", 242.1, 23.5, 2.3, 0, 0, 18, 10.4 ]
] , columns = ["Food","Calories","Calcium","Iron","Vit_A","Dietary_Fiber","Carbohydrates","Protein"])
diet_nutrients = pd.DataFrame([
["Calories", 2000, 2500],
["Calcium", 800, 1600],
["Iron", 10, 30],
["Vit_A", 5000, 50000],
["Dietary_Fiber", 25, 100],
["Carbohydrates", 0, 300],
["Protein", 50, 100]
], columns = ["name","qmin","qmax"])
```
You can find the deployment ID in the Analytics deployment spaces, or by listing the deployments using the API.

```
client.deployments.list()
# Get the deployment ID from the Model name.
# Note that there could be several deployments for one model
model_name = "diet"
deployment_uid = [x['metadata']['id'] for x in client.deployments.get_details()['resources'] if x['entity']['name'] == model_name][0]
print(deployment_uid)
```
Create and monitor a job with inline data for your deployed model:
- Create a payload containing the inline input data.
- Create a new job with this payload and the deployment.
- Get the `job_uid`.
```
solve_payload = {
client.deployments.DecisionOptimizationMetaNames.INPUT_DATA: [
{
"id":"diet_food.csv",
"values" : diet_food
},
{
"id":"diet_food_nutrients.csv",
"values" : diet_food_nutrients
},
{
"id":"diet_nutrients.csv",
"values" : diet_nutrients
}
],
client.deployments.DecisionOptimizationMetaNames.OUTPUT_DATA: [
{
"id":".*\.csv"
}
]
}
job_details = client.deployments.create_job(deployment_uid, solve_payload)
job_uid = client.deployments.get_job_uid(job_details)
print( job_uid )
```
Display job status until it is completed.
The first job of a new deployment might take some time as a compute node must be started.
```
from time import sleep
while job_details['entity']['decision_optimization']['status']['state'] not in ['completed', 'failed', 'canceled']:
print(job_details['entity']['decision_optimization']['status']['state'] + '...')
sleep(5)
job_details=client.deployments.get_job_details(job_uid)
print( job_details['entity']['decision_optimization']['status']['state'])
job_details['entity']['decision_optimization']['status']
```
Extract and display solution.
Display the output solution.
Display the KPI Total Calories value.
```
solution_table=[x for x in job_details['entity']['decision_optimization']['output_data'] if x['id'] == 'solution.csv'][0]
# Create a dataframe for the solution
solution = pd.DataFrame(solution_table['values'],
columns = solution_table['fields'])
solution.head()
print( job_details['entity']['decision_optimization']['solve_state']['details']['KPI.Total Calories'] )
```
|
github_jupyter
|
## Preparation
Welcome to the Vectice tutorial notebook!
Through this notebook, we will be illustrating how to log the following information into Vectice using the Vectice Python library:
- Dataset versions
- Model versions
- Runs and lineage
For more information on the tutorial, please refer to the "Vectice Tutorial Page" inside the app.
## Setup
Install Vectice
```
#Install Vectice Python library
# In this tutorial we will do code versioning using github, we also support gitlab
# and bitbucket: !pip install -q "vectice[github, gitlab, bitbucket]"
!pip install --q vectice[github]
#Verify if Vectice python library was installed
!pip3 show vectice
```
Here, our data is stored in GCS. We need to install the following GCS packages to be able to retrieve it.
```
## GCS packages
!pip3 install --q fsspec
!pip3 install --q gcsfs
## Import the required packages for data preparation and model training
import string
from math import sqrt
import os
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
# Load scikit-learn packages
from sklearn.model_selection import train_test_split # Model Selection
from sklearn.metrics import mean_absolute_error, mean_squared_error # Model Evaluation
from sklearn.linear_model import LinearRegression # Linear Regression
from sklearn.tree import DecisionTreeRegressor, plot_tree # Decision Tree Regression
from sklearn.ensemble import RandomForestRegressor # Random Forest Regression
```
### Connect and authenticate to Vectice API
```
#Import the Vectice library
from vectice import Vectice
from vectice.models import JobType
from vectice.entity.model import ModelType
import logging
logging.basicConfig(level=logging.INFO)
# Specify the API endpoint for Vectice.
os.environ['VECTICE_API_ENDPOINT']= "beta.vectice.com"
# To use the Vectice Python library, you first need to authenticate your account using an API key.
# You can generate an API key from the Vectice UI, by going to the "API Tokens" tab in your workspace
# Copy and paste your API key here
os.environ['VECTICE_API_TOKEN'] = "QkZWM9EJD.0XeWYNgrVy7K69jq5azA4QkZWM9EJDpBPOLMm1xbl2w8vGR03d"
# Next, you need to specify the tutorial project where you will run this notebook using a
# "Project Token". You can find the "Project Token" under the "Settings" tab of your project.
# Copy and paste your Project Token here
# autocode = True enables you to track your git changes for your code automatically every time you execute a run (see below).
vectice = Vectice(project_token="BpR8Go6eh84vybzZaLWj", autocode= True)
```
## Create a run
A run is an execution of a job. You can think of a job like a grouping of runs.
When creating a run we need to specify:
1) a job name (mandatory)
2) a job type (optional)
3) a run name (optional)
Job names, job types and run names are useful to group and search runs in the Vectice UI.
You can also specify inputs when you start your run and outputs when you end it. The inputs can be code, dataset and model versions and the outputs can be dataset and model versions.
```
vectice.create_run("job_name", JobType.PREPARATION, "run name").with_properties([("run key", "run prop")])
vectice.start_run(inputs=[inputs])
vectice.end_run(outputs=[outputs])
```
You can also use the Python context manager (`with`) to manage runs. It automatically ends the run and marks its status as failed in the Vectice UI if an error occurs during the run.
```
vectice.create_run("job_name", JobType.PREPARATION, "run name").with_properties([("run key", "run prop")])
with vectice.start_run(inputs=[inputs]) as run:
#Add your code here
run.add_outputs(outputs=[outputs])
```
## Create a dataset and a dataset version
There are three ways to create a dataset in Vectice:
1- Creating a dataset without a connection
```
### Creating a dataset without a connection
vectice.create_dataset(dataset_name="dataset name",data_properties=[("key", "prop"), ("key2", "prop2")])
```
2- Creating a dataset with a connection
Getting the list of connections in the Workspace:
```
vectice.list_connections()
## Creating a dataset with a connection
vectice.create_dataset_with_connection_name(connection_name="connection name",
dataset_name="dataset name",
files=["gs://file_path/file_name.csv"],
data_properties=[("key", "prop"), ("key2", "prop2")])
## We can also use vectice.create_dataset_with_connection_id()
```
3- Create a dataset and a dataset version at the same time
When creating a new dataset version, if the parent dataset doesn't exist in the project, a new dataset is created automatically and it will contain the first version we created.
```
dataset_version = vectice.create_dataset_version().with_parent_name("new dataset").with_properties([("key", "prop")])
```
The Vectice library automatically detects if there have been changes to the dataset you are using. If it detects changes, it will generate a new version of your dataset automatically. Otherwise, it will reuse the latest version of your dataset.
We can get the list of the datasets we have in the project by calling **vectice.list_datasets()**
```
vectice.list_datasets().list
```
We can also get the list of dataset versions by calling **vectice.list_dataset_versions(dataset_id)**
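For example (a minimal sketch; the id below is a placeholder you would take from the `vectice.list_datasets()` output):
```
dataset_id = 1234  # hypothetical dataset id for illustration
vectice.list_dataset_versions(dataset_id)
```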
### Attach a dataset version as input or output to a run
```
vectice.create_run("job_name", JobType.PREPARATION, "run name").with_properties([("run key", "run prop")])
vectice.start_run(inputs=[dataset_version])
vectice.end_run()
```
You can also use another existing dataset version by using the existing version name, number or id (if you use the id, you don't need to specify the parent dataset name or id).
```
dataset_version = vectice.create_dataset_version().with_parent_name("dataset").with_existing_version_number(1)
vectice.create_run("job_name", JobType.PREPARATION, "run name").with_properties([("run key", "run prop")])
vectice.start_run(inputs=[dataset_version])
vectice.end_run()
```
## Create a code version
Vectice enables you to track your source code by creating code versions. This can be done either automatically or manually.
### Creating a code version automatically
If you are working in a local environment with Git installed, or in JupyterLab, etc., code tracking can be automated by setting `autocode=True` when creating the Vectice instance.
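For example, this reuses the same pattern as the client instantiation earlier in this notebook (the project token is a placeholder):
```
vectice = Vectice(project_token="<PROJECT_TOKEN>", autocode=True)
```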
### Creating a code version manually
You can create a code version manually by using:
1- **vectice.create_code_version_with_github_uri()** for GitHub
2- **vectice.create_code_version_with_gitlab_uri()** for GitLab
3- **vectice.create_code_version_with_bitbucket_uri()** for Bitbucket
```
## Example for code versioning with GitHub
code_version = Vectice.create_code_version_with_github_uri("https://github.com/vectice/vectice-examples",
"Notebooks/Tutorial/Jupyter_notebooks/GCS_data/Tutorial_notebook_GCS_data.ipynb")
vectice.create_run("Job name", JobType.PREPARATION, "Run name").with_properties([("run key", "run prop")])
vectice.start_run(inputs=[code_version])
vectice.end_run()
```
## Creating models and model versions
Vectice enables you to create models and model versions and to log their metrics, hyperparameters and properties.
When creating a model version, if a model with the given name already exists in your project, a new model version is added to it. Otherwise, a new model is created automatically.
```
Vectice.create_model_version().with_parent_name('Regressor')
```
You can declare your model metrics, hyperparameters, properties, type, the algorithm used, and model attachments when creating a model version.
```
metrics = [('metric', value), ('metric 2', value)]
properties = [('property', value), ('property 2', value)]
model_version = (vectice.create_model_version()
                 .with_parent_name("Regressor")
                 .with_algorithm("Decision Tree")
                 .with_type(ModelType.REGRESSION)
                 .with_properties(properties)
                 .with_metrics(metrics)
                 .with_attachments(["DecisionTree_6.png"])
                 .with_user_version())
```
Here we used with_user_version() for model versioning. You can provide a version name for your model version; an error will be thrown if the given user version already exists. If you don't provide a version name, one will be generated automatically.
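For instance, a sketch assuming `with_user_version()` accepts the version name as its argument, as the note above suggests (the name is illustrative):
```
# "v1" is an illustrative, user-provided version name
model_version = vectice.create_model_version().with_parent_name("Regressor").with_user_version("v1")
```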
### Attach a model version as input or output of a run
```
vectice.create_run("job_name", JobType.PREPARATION, "run name").with_properties([("run key", "run prop")])
vectice.start_run(inputs=[dataset_version])
metrics = [('metric', value), ('metric 2', value)]
properties = [('property', value), ('property 2', value)]
model_version = vectice.create_model_version().with_user_version().with_parent_name("Regressor").with_algorithm("Decision Tree").with_type(ModelType.REGRESSION).with_properties(properties).with_metrics(metrics).with_attachments(["DecisionTree_6.png"])
vectice.end_run(outputs=[model_version])
```
# Exercise
### Getting the data from GCS
We are going to load data stored in Google Cloud Storage that is provided by Vectice for this tutorial.
You need a service account key to be able to get the data from your buckets on GCS. You can find more information about how to generate a key to access your data on GCS [here](https://doc.vectice.com/connections/google.html#google-cloud-storage).
```
## Provide the path to the service account JSON key file
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = 'readerKey.json'
# Once your file is loaded you can view your dataset in a Pandas dataframe.
df = pd.read_csv('gs://vectice_tutorial/kc_house_data_cleaned.csv')
# Run head to make sure the data was loaded properly
df.head()
```
### Data preparation
Let's split the dataset into train and test sets and save them in GCS. The GCS code has been commented out as the data has already been generated.
```
# The Vectice library automatically detects if there have been changes to the dataset you are using.
# If it detects changes, it will generate a new version of your dataset automatically. Else, it's going
# to use the latest version of your dataset.
# You can also use another dataset version by calling .with_existing_version_name('version name')
input_ds_version = vectice.create_dataset_version().with_parent_name("cleaned_kc_house_data")
# For this run, we will use the job name "80/20 Split" and the job type "PREPARATION"
# You can have multiple runs with the same job name
# We can use the Python context manager (with) to end the run and mark its status as failed
## in the Vectice UI in case we have an error
vectice.create_run("80/20 Split", JobType.PREPARATION, "Data preparation")
with vectice.start_run(inputs=[input_ds_version]) as run:
# We will use an 80/20 split to prepare the data
test_size = 0.2
# We will set the random seed so we always generate the same split.
random_state = 42
train, test = train_test_split(df, test_size = test_size, random_state = random_state)
# We commented out the code to persist the training and testing test in GCS,
# because we already generated the data for you.
# We left the code below for convenience, in case you want to use your own credentials and GCS bucket.
# train.to_csv (r'gs://vectice_tutorial/training_data.csv', index = False, header = True)
# test.to_csv (r'gs://vectice_tutorial/testing_data.csv', index = False, header = True)
# Generate X_train, X_test, y_train, y_test, which we will need for modeling
X = df.drop("price", axis=1).values
y = df["price"].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=test_size, random_state=random_state)
# Let's create new versions of the training and testing dataset if the data has changed.
# We will use the existing dataset created by Albert, so that we can append new
# dataset versions to it.
train_ds_version = vectice.create_dataset_version().with_parent_name("train_cleaned_kc_house_data")
test_ds_version = vectice.create_dataset_version().with_parent_name("test_cleaned_kc_house_data")
# Attach the output datasets to the run.
run.add_outputs(outputs=[train_ds_version,test_ds_version])
# We can preview one of our generated outputs to make sure that everything was executed properly.
X_train
```
## Modeling
We can get the list of the models existing in the project by calling **vectice.list_models()**
```
vectice.list_models().list
```
### Decision tree model
In this section, let's use the decision tree algorithm and compare its accuracy to the linear regression model. We will try different values for the tree depth, and log the model parameters and metrics in Vectice.
```
# We can do a few runs with different max depth for the tree.
# Just change the value below and re-run this cell.
# The model versions you created will show up in the Vectice UI as new versions
# of the "Regressor" Model. You can easily compare them from there.
tree_depth = 6
vectice.create_run("DT-Model", JobType.TRAINING)
# We can use the Python context manager (with) to end the run and mark its status as failed
## in the Vectice UI in case we have an error
with vectice.start_run(inputs=[train_ds_version,test_ds_version]) as run:
dtr = DecisionTreeRegressor(max_depth=tree_depth, min_samples_split=50)
dtr.fit(X_train,y_train)
dtr_pred = dtr.predict(X_test)
data_feature_names = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors',
'waterfront', 'view', 'condition', 'grade', 'sqft_above',
'sqft_basement', 'yr_built', 'yr_renovated', 'zipcode', 'lat',
'long', 'sqft_living15', 'sqft_lot15']
# Visualize the Decision Tree Model
plt.figure(figsize=(25, 10))
plot_tree(dtr, feature_names=data_feature_names, filled=True, fontsize=10)
plt.savefig("DecisionTree_6.png")
# We save the plot in order to be able to attach it to the model version.
## We can attach the decision tree plot to the model version by using .with_attachments([Attachments])
MAE = mean_absolute_error(dtr_pred, y_test)
RMSE = sqrt(mean_squared_error(dtr_pred, y_test))
print("Root Mean Squared Error:", RMSE)
print("Mean Absolute Error:", MAE)
# Here we use with_user_version() to create a new model version. You can provide a version name
## for your model version. An error will be thrown if the given user version already exists and
### if you don't provide a version name, the version name will be generated automatically.
properties = [("Tree Depth",str(tree_depth))]
metrics = [("RMSE", RMSE), ("MAE", MAE)]
model_version = vectice.create_model_version().with_user_version().with_parent_name("Regressor").with_algorithm("Decision Tree").with_type(ModelType.REGRESSION).with_properties(properties).with_metrics(metrics).with_attachments(["DecisionTree_6.png"])
## We add the created model version as output of the run
run.add_outputs(outputs=[model_version])
```
### Model versions table
You can also get all the model versions you created in previous runs, for offline analysis and understanding in more details what's driving the models performance.
```
vectice.list_model_versions_dataframe(1859)
```
### Update your model
Vectice enables you to update your model by using **vectice.update_model()**
```
vectice.update_model(parent_name="Regressor", model_type=ModelType.REGRESSION, description="Model description")
```
Thank you and congratulations! You have successfully completed this tutorial.
In this notebook we have illustrated how you can capture your experiments, hyperparameters, dataset versions and metrics using the Vectice Python library.
You can now leverage Vectice UI for analysis, documentation and to engage a business conversation around the findings.
Vectice enables you to:
1. Make your experiments more reproducible.
2. Track the data and code that is used for each experiment and model versions.
3. Document your projects' progress and collaborate with your team in Vectice's UI.
4. Discover previous work and reuse your team knowledge for new projects.
We are constantly improving the Vectice Python library and the Vectice application. Let us know what improvements you would like to see in the solution and what your favorite features are after completing this tutorial.
Feel free to explore more and come up with your own ideas on how to best start leveraging Vectice!
|
github_jupyter
|
# Anna KaRNNa
In this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.
This network is based off of Andrej Karpathy's [post on RNNs](http://karpathy.github.io/2015/05/21/rnn-effectiveness/) and [implementation in Torch](https://github.com/karpathy/char-rnn). Also, some information [here at r2rt](http://r2rt.com/recurrent-neural-networks-in-tensorflow-ii.html) and from [Sherjil Ozair](https://github.com/sherjilozair/char-rnn-tensorflow) on GitHub. Below is the general architecture of the character-wise RNN.
<img src="assets/charseq.jpeg" width="500">
```
import time
from collections import namedtuple
import numpy as np
import tensorflow as tf
```
First we'll load the text file and convert it into integers for our network to use.
```
with open('anna.txt', 'r') as f:
text=f.read()
vocab = set(text)
vocab_to_int = {c: i for i, c in enumerate(vocab)}
int_to_vocab = dict(enumerate(vocab))
chars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)
text[:100]
chars[:100]
```
Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.
Here I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.
The idea here is to make a 2D matrix where the number of rows is equal to the batch size. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the `split_frac` keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.
```
def split_data(chars, batch_size, num_steps, split_frac=0.9):
"""
Split character data into training and validation sets, inputs and targets for each set.
Arguments
---------
chars: character array
    batch_size: Number of examples in each batch
num_steps: Number of sequence steps to keep in the input and pass to the network
split_frac: Fraction of batches to keep in the training set
Returns train_x, train_y, val_x, val_y
"""
slice_size = batch_size * num_steps
n_batches = int(len(chars) / slice_size)
# Drop the last few characters to make only full batches
x = chars[: n_batches*slice_size]
y = chars[1: n_batches*slice_size + 1]
# Split the data into batch_size slices, then stack them into a 2D matrix
x = np.stack(np.split(x, batch_size))
y = np.stack(np.split(y, batch_size))
# Now x and y are arrays with dimensions batch_size x n_batches*num_steps
    # Split into training and validation sets, keep the first split_frac batches for training
split_idx = int(n_batches*split_frac)
train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]
val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]
return train_x, train_y, val_x, val_y
train_x, train_y, val_x, val_y = split_data(chars, 10, 200)
train_x.shape
train_x[:,:10]
```
I'll write another function to grab batches out of the arrays made by `split_data`. Here each batch will be a sliding window on these arrays with size `batch_size X num_steps`. For example, if we want our network to train on a sequence of 100 characters, `num_steps = 100`. For the next batch, we'll shift this window over by `num_steps` characters. In this way we can feed batches to the network and the cell states will continue through on each batch.
```
def get_batch(arrs, num_steps):
batch_size, slice_size = arrs[0].shape
n_batches = int(slice_size/num_steps)
for b in range(n_batches):
yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]
def build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,
learning_rate=0.001, grad_clip=5, sampling=False):
if sampling == True:
batch_size, num_steps = 1, 1
tf.reset_default_graph()
# Declare placeholders we'll feed into the graph
with tf.name_scope('inputs'):
inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')
x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')
with tf.name_scope('targets'):
targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')
y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')
y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])
keep_prob = tf.placeholder(tf.float32, name='keep_prob')
# Build the RNN layers
with tf.name_scope("RNN_cells"):
lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)
drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)
cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)
with tf.name_scope("RNN_init_state"):
initial_state = cell.zero_state(batch_size, tf.float32)
# Run the data through the RNN layers
with tf.name_scope("RNN_forward"):
outputs, state = tf.nn.dynamic_rnn(cell, x_one_hot, initial_state=initial_state)
final_state = state
# Reshape output so it's a bunch of rows, one row for each cell output
with tf.name_scope('sequence_reshape'):
seq_output = tf.concat(outputs, axis=1,name='seq_output')
output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')
# Now connect the RNN outputs to a softmax layer and calculate the cost
with tf.name_scope('logits'):
softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),
name='softmax_w')
softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')
logits = tf.matmul(output, softmax_w) + softmax_b
tf.summary.histogram('softmax_w', softmax_w)
tf.summary.histogram('softmax_b', softmax_b)
with tf.name_scope('predictions'):
preds = tf.nn.softmax(logits, name='predictions')
tf.summary.histogram('predictions', preds)
with tf.name_scope('cost'):
loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')
cost = tf.reduce_mean(loss, name='cost')
tf.summary.scalar('cost', cost)
# Optimizer for training, using gradient clipping to control exploding gradients
with tf.name_scope('train'):
tvars = tf.trainable_variables()
grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)
train_op = tf.train.AdamOptimizer(learning_rate)
optimizer = train_op.apply_gradients(zip(grads, tvars))
merged = tf.summary.merge_all()
# Export the nodes
export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',
'keep_prob', 'cost', 'preds', 'optimizer', 'merged']
Graph = namedtuple('Graph', export_nodes)
local_dict = locals()
graph = Graph(*[local_dict[each] for each in export_nodes])
return graph
```
## Hyperparameters
Here I'm defining the hyperparameters for the network. The two you probably haven't seen before are `lstm_size` and `num_layers`. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.
```
batch_size = 100
num_steps = 100
lstm_size = 512
num_layers = 2
learning_rate = 0.001
```
## Training
Time for training, which is pretty straightforward. Here I pass in some data and get an LSTM state back. Then I pass that state back into the network so the next batch can continue the state from the previous batch. And every so often (set by `save_every_n`) I calculate the validation loss and save a checkpoint.
```
!mkdir -p checkpoints/anna
epochs = 10
save_every_n = 100
train_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)
model = build_rnn(len(vocab),
batch_size=batch_size,
num_steps=num_steps,
learning_rate=learning_rate,
lstm_size=lstm_size,
num_layers=num_layers)
saver = tf.train.Saver(max_to_keep=100)
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
train_writer = tf.summary.FileWriter('./logs/2/train', sess.graph)
test_writer = tf.summary.FileWriter('./logs/2/test')
# Use the line below to load a checkpoint and resume training
#saver.restore(sess, 'checkpoints/anna20.ckpt')
n_batches = int(train_x.shape[1]/num_steps)
iterations = n_batches * epochs
for e in range(epochs):
# Train network
new_state = sess.run(model.initial_state)
loss = 0
for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):
iteration = e*n_batches + b
start = time.time()
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 0.5,
model.initial_state: new_state}
summary, batch_loss, new_state, _ = sess.run([model.merged, model.cost,
model.final_state, model.optimizer],
feed_dict=feed)
loss += batch_loss
end = time.time()
print('Epoch {}/{} '.format(e+1, epochs),
'Iteration {}/{}'.format(iteration, iterations),
'Training loss: {:.4f}'.format(loss/b),
'{:.4f} sec/batch'.format((end-start)))
train_writer.add_summary(summary, iteration)
if (iteration%save_every_n == 0) or (iteration == iterations):
# Check performance, notice dropout has been set to 1
val_loss = []
new_state = sess.run(model.initial_state)
for x, y in get_batch([val_x, val_y], num_steps):
feed = {model.inputs: x,
model.targets: y,
model.keep_prob: 1.,
model.initial_state: new_state}
summary, batch_loss, new_state = sess.run([model.merged, model.cost,
model.final_state], feed_dict=feed)
val_loss.append(batch_loss)
test_writer.add_summary(summary, iteration)
print('Validation loss:', np.mean(val_loss),
'Saving checkpoint!')
#saver.save(sess, "checkpoints/anna/i{}_l{}_{:.3f}.ckpt".format(iteration, lstm_size, np.mean(val_loss)))
tf.train.get_checkpoint_state('checkpoints/anna')
```
## Sampling
Now that the network is trained, we can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one to predict the next one, and we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.
The network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.
```
def pick_top_n(preds, vocab_size, top_n=5):
p = np.squeeze(preds)
p[np.argsort(p)[:-top_n]] = 0
p = p / np.sum(p)
c = np.random.choice(vocab_size, 1, p=p)[0]
return c
def sample(checkpoint, n_samples, lstm_size, vocab_size, prime="The "):
samples = [c for c in prime]
model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)
saver = tf.train.Saver()
with tf.Session() as sess:
saver.restore(sess, checkpoint)
new_state = sess.run(model.initial_state)
for c in prime:
x = np.zeros((1, 1))
x[0,0] = vocab_to_int[c]
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
for i in range(n_samples):
x[0,0] = c
feed = {model.inputs: x,
model.keep_prob: 1.,
model.initial_state: new_state}
preds, new_state = sess.run([model.preds, model.final_state],
feed_dict=feed)
c = pick_top_n(preds, len(vocab))
samples.append(int_to_vocab[c])
return ''.join(samples)
checkpoint = "checkpoints/anna/i3560_l512_1.122.ckpt"
samp = sample(checkpoint, 2000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i200_l512_2.432.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i600_l512_1.750.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
checkpoint = "checkpoints/anna/i1000_l512_1.484.ckpt"
samp = sample(checkpoint, 1000, lstm_size, len(vocab), prime="Far")
print(samp)
```
|
github_jupyter
|
```
%load_ext autoreload
# Reload all modules automatically before executing code
%autoreload 2
%matplotlib notebook
from irreversible_stressstrain import StressStrain as strainmodel
import test_suite as suite
import graph_suite as plot
import numpy as np
model = strainmodel('ref/HSRS/22').get_experimental_data()
slopes = suite.get_slopes(model)
second_deriv_slopes = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
# -- we think that yield occurs where the standard deviation is decreasing AND the slopes are mostly negative
def findYieldInterval(slopes, numberofsections):
def numneg(val):
return sum((val<0).astype(int))
    # -- divide into numberofsections intervals and save the stddev of each
splitslopes = np.array_split(slopes,numberofsections)
splitseconds = np.array_split(second_deriv_slopes,numberofsections)
# -- displays the number of negative values in a range (USEFUL!!!)
for section in splitslopes:
print numneg(section), len(section)
print "-------------------------------"
for section in splitseconds:
print numneg(section), len(section)
divs = [np.std(vals) for vals in splitslopes]
# -- stddev of the whole thing
stdev = np.std(slopes)
interval = 0
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print divs, stdev
# -- the proportion of slope values in an interval that must be negative to determine that material yields
cutoff = 3./4.
while numneg(slopesect)<len(slopesect)*cutoff and numneg(secondsect)<len(secondsect)*cutoff:
interval = interval + 1
"""Guard against going out of bounds"""
if interval==len(splitslopes): break
slopesect = splitslopes[interval]
secondsect = splitseconds[interval]
print
print interval
return interval
numberofsections = 15
interval_length = len(model)/numberofsections
"""
Middle of selected interval
Guard against going out of bounds
"""
yield_interval = findYieldInterval(slopes,numberofsections)
yield_index = min(yield_interval*interval_length + interval_length/2,len(model[:])-1)
yield_value = np.array(model[yield_index])[None,:]
print
print yield_value
```
## Make these estimates more reliable and robust
```
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
"""Now what if we have strain vs slope"""
strainvslope = suite.combine_data(strain,slopes)
strainvsecond = suite.combine_data(strain,second_deriv)
plot.plot2D(strainvsecond,'Strain','Slope',marker="ro")
plot.plot2D(model,'Strain','Stress',marker="ro")
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
slopes = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],slopes))
num_intervals = 80
interval_length = len(second_deriv)/num_intervals
split_2nd_derivs = np.array_split(second_deriv,num_intervals)
print np.mean(second_deriv)
down_index = 0
for index, section in enumerate(split_2nd_derivs):
if sum(section)<np.mean(slopes):
down_index = index
break
yield_index = down_index*interval_length
print strain[yield_index], stress[yield_index]
model = strainmodel('ref/HSRS/326').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
plot1 = suite.combine_data(strain,first_deriv)
plot2 = suite.combine_data(strain,second_deriv)
plot.plot2D(model)
plot.plot2D(plot1)
plot.plot2D(plot2)
```
### See when standard deviation of second derivative begins to decrease
```
model = strainmodel('ref/HSRS/222').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
ave_deviation = np.std(second_deriv)
deviation_second = [np.std(val) for val in np.array_split(second_deriv,30)]
yielding = 0
for index,value in enumerate(deviation_second):
if value != 0.0 and value<ave_deviation and index!=0:
yielding = index
break
print second_deriv
#print "It seems to yield at index:", yielding
#print "These are all of the standard deviations, by section:", deviation_second, "\n"
#print "The overall standard deviation of the second derivative is:", ave_deviation
```
## The actual yield values are as follows (These are approximate):
### ref/HSRS/22: Index 106 [1.3912797535, 900.2614980977]
### ref/HSRS/222: Index 119 [0, 904.6702299]
### ref/HSRS/326: Index 150 [6.772314989, 906.275032]
### Index of max standard deviation of the curve
```
model = strainmodel('ref/HSRS/22').get_experimental_data()
strain = model[:,0]
stress = model[:,1]
first_deriv = suite.get_slopes(model)
second_deriv = suite.get_slopes(suite.combine_data(model[:-1,0],first_deriv))
print second_deriv
chunks = 20
int_length = len(model[:])/chunks
deriv2spl = np.array_split(second_deriv,chunks)
deviation_second = [abs(np.mean(val)) for val in deriv2spl]
del(deviation_second[0])
print deviation_second
print np.argmax(deviation_second)
#print "The standard deviation of all the second derivatives is", np.std(second_deriv)
```
### If our data dips, we can attempt to find local maxima
```
import numpy as np
# -- climbs a discrete dataset to find local max
def hillclimber(data, guessindex = 0):
    x = data[:,0]
    y = data[:,1]
    cury = y[guessindex]
    done = False
    while not done:
        # -- recompute the neighbours of the current guess on every step
        guessleft = max(0, guessindex-1)
        guessright = min(len(x)-1, guessindex+1)
        left = y[guessleft]
        right = y[guessright]
        difleft = left-cury
        difright = right-cury
        if difleft<=0 and difright<=0:
            # -- neither neighbour is higher: local max found
            done = True
        elif difleft>difright:
            cury = left
            guessindex = guessleft
        else:
            cury = right
            guessindex = guessright
    return guessindex
func = lambda x: x**2
xs = np.linspace(0.,10.,5)
ys = func(xs)
data = suite.combine_data(xs,ys)
print hillclimber(data)
```
|
github_jupyter
|
# `scinum` example
```
from scinum import Number, Correlation, NOMINAL, UP, DOWN, ABS, REL
```
The examples below demonstrate
- [Numbers and formatting](#Numbers-and-formatting)
- [Defining uncertainties](#Defining-uncertainties)
- [Multiple uncertainties](#Multiple-uncertainties)
- [Configuration of correlations](#Configuration-of-correlations)
- [Automatic uncertainty propagation](#Automatic-uncertainty-propagation)
### Numbers and formatting
```
n = Number(1.234, 0.2)
n
```
The uncertainty definition is absolute. See the examples with [multiple uncertainties](#Multiple-uncertainties) for relative uncertainty definitions.
The representation of numbers (`repr`) in jupyter notebooks uses latex-style formatting. Internally, [`Number.str()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.str) is called, which - among others - accepts a `format` argument, defaulting to `"%s"` (configurable globally or per instance via [`Number.default_format`](https://scinum.readthedocs.io/en/latest/#scinum.Number.default_format)). Let's change the format for this notebook:
```
Number.default_format = "%.2f"
n
# or
n.str("%.3f")
```
### Defining uncertainties
Above, `n` is defined with a single, symmetric uncertainty. Here are some basic examples of how to access and work with it:
```
# nominal value
print(n.nominal)
print(type(n.nominal))
# get the uncertainty
print(n.get_uncertainty())
print(n.get_uncertainty(direction=UP))
print(n.get_uncertainty(direction=DOWN))
# get the nominal value, shifted by the uncertainty
print(n.get()) # nominal value
print(n.get(UP)) # up variation
print(n.get(DOWN)) # down variation
# some more advanved use-cases:
# 1. get the multiplicative factor that would scale the nominal value to the UP/DOWN varied ones
print("absolute factors:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
# 2. get the factor to obtain the uncertainty only (i.e., the relative uncertainty)
# (this is, of course, more useful in case of multiple uncertainties, see below)
print("\nrelative factors:")
print(n.get(UP, factor=True, diff=True))
print(n.get(DOWN, factor=True, diff=True))
```
There are also a few shorthands for the above methods:
```
# __call__ is forwarded to get()
print(n())
print(n(UP))
# u() is forwarded to get_uncertainty()
print(n.u())
print(n.u(direction=UP))
```
### Multiple uncertainties
Let's create a number that has two uncertainties: `"stat"` and `"syst"`. The `"stat"` uncertainty is asymmetric, and the `"syst"` uncertainty is relative.
```
n = Number(8848, {
"stat": (30, 20), # absolute +30-20 uncertainty
"syst": (REL, 0.5), # relative +-50% uncertainty
})
n
```
Similar to above, we can access the uncertainties and shifted values with [`get()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get) (or `__call__`) and [`get_uncertainty()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.get_uncertainty) (or [`u()`](https://scinum.readthedocs.io/en/latest/#scinum.Number.u)). But this time, we can distinguish between the combined (in quadrature) value or the particular uncertainty sources:
```
# nominal value as before
print(n.nominal)
# get all uncertainties (stored absolute internally)
print(n.uncertainties)
# get particular uncertainties
print(n.u("syst"))
print(n.u("stat"))
print(n.u("stat", direction=UP))
# get the nominal value, shifted by particular uncertainties
print(n(UP, "stat"))
print(n(DOWN, "syst"))
# compute the shifted value for both uncertainties, added in quadrature without correlation (default but configurable)
print(n(UP))
```
As before, we can also access certain aspects of the uncertainties:
```
print("factors for particular uncertainties:")
print(n.get(UP, "stat", factor=True))
print(n.get(DOWN, "syst", factor=True))
print("\nfactors for the combined uncertainty:")
print(n.get(UP, factor=True))
print(n.get(DOWN, factor=True))
```
We can also apply some nice formatting:
```
print(n.str())
print(n.str("%.2f"))
print(n.str("%.2f", unit="m"))
print(n.str("%.2f", unit="m", force_asymmetric=True))
print(n.str("%.2f", unit="m", scientific=True))
print(n.str("%.2f", unit="m", si=True))
print(n.str("%.2f", unit="m", style="root"))
```
### Configuration of correlations
Let's assume that we have a second measurement for the quantity `n` we defined above,
```
n
```
and we measured it with the same sources of uncertainty,
```
n2 = Number(8920, {
"stat": (35, 15), # absolute +35-15 uncertainty
"syst": (REL, 0.3), # relative +-30% uncertainty
})
n2
```
Now, we want to compute the average measurement, including correct error propagation under consideration of sensible correlations. For more info on automatic uncertainty propagation, see the [subsequent section](#Automatic-uncertainty-propagation).
In this example, we want to fully correlate the *systematic* uncertainty, whereas we can treat *statistical* effects as uncorrelated. However, just writing `(n + n2) / 2` will consider equally named uncertainty sources to be 100% correlated, i.e., both `syst` and `stat` uncertainties will be simply averaged. This is the default behavior in scinum as it is not possible (nor wise) to *guesstimate* the meaning of an uncertainty from its name.
While this approach is certainly correct for `syst`, we don't achieve the correct treatment for `stat`:
```
(n + n2) / 2
```
Instead, we need to define the correlation specifically for `stat`. This can be achieved in multiple ways, but the most pythonic way is to use a [`Correlation`](https://scinum.readthedocs.io/en/latest/#correlation) object.
```
(n @ Correlation(stat=0) + n2) / 2
```
**Note** that the statistical uncertainty decreased as desired, whereas the systematic one remained the same.
`Correlation` objects have a default value that can be set as the first positional, yet optional parameter, and itself defaults to one.
Internally, the operation `n @ Correlation(stat=0)` (or `n * Correlation(stat=0)` in Python 2) is evaluated prior to the addition of `n2` and generates a so-called [`DeferredResult`](https://scinum.readthedocs.io/en/latest/#deferredresult). This object carries the information of `n` and the correlation over to the next operation, at which point the uncertainty propagation is eventually resolved. As usual, in situations where the operator precedence might seem unclear, it is recommended to use parentheses to structure the expression.
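As a small sketch, the expression above can be split into the two steps just described (the variable names are only for illustration):
```
deferred = n @ Correlation(stat=0)  # DeferredResult: carries n together with the correlation info
average = (deferred + n2) / 2       # the correlation-aware propagation is resolved at this addition
average
```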
### Automatic uncertainty propagation
Let's continue working with the number `n` from above.
Uncertainty propagation works in a pythonic way:
```
n + 200
n / 2
n**0.5
```
In cases such as the last one, formatting makes a lot of sense ...
```
(n**0.5).str("%.2f")
```
More complex operations such as `exp`, `log`, `sin`, etc, are provided on the `ops` object, which mimics Python's `math` module. The benefit of the `ops` object is that all its operations are aware of Gaussian error propagation rules.
```
from scinum import ops
# change the default format for convenience
Number.default_format = "%.3f"
# compute the log of n
ops.log(n)
```
The propagation is actually performed simultaneously per uncertainty source.
```
m = Number(5000, {"syst": 1000})
n + m
n / m
```
As described [above](#Configuration-of-correlations), equally named uncertainty sources are assumed to be fully correlated. You can configure the correlation in operations through `Correlation` objects, or by using explicit methods on the number object.
```
# n.add(m, rho=0.5, inplace=False)
# same as
n @ Correlation(0.5) + m
```
When you set `inplace` to `True` (the default), `n` is updated inplace.
```
n.add(m, rho=0.5)
n
```
|
github_jupyter
|
```
# !wget http://qim.fs.quoracdn.net/quora_duplicate_questions.tsv
import tensorflow as tf
import re
import numpy as np
import pandas as pd
from tqdm import tqdm
import collections
from unidecode import unidecode
from sklearn.model_selection import train_test_split
def build_dataset(words, n_words):
count = [['PAD', 0], ['GO', 1], ['EOS', 2], ['UNK', 3], ['SEPARATOR', 4]]
count.extend(collections.Counter(words).most_common(n_words - 1))
dictionary = dict()
for word, _ in count:
dictionary[word] = len(dictionary)
data = list()
unk_count = 0
for word in words:
index = dictionary.get(word, 0)
if index == 0:
unk_count += 1
data.append(index)
count[0][1] = unk_count
reversed_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
return data, count, dictionary, reversed_dictionary
def str_idx(corpus, dic, maxlen, UNK=3):
X = np.zeros((len(corpus),maxlen))
for i in range(len(corpus)):
for no, k in enumerate(corpus[i][:maxlen][::-1]):
val = dic[k] if k in dic else UNK
X[i,-1 - no]= val
return X
def cleaning(string):
string = unidecode(string).replace('.', ' . ').replace(',', ' , ')
string = re.sub('[^A-Za-z\- ]+', ' ', string)
string = re.sub(r'[ ]+', ' ', string).strip()
return string.lower()
df = pd.read_csv('quora_duplicate_questions.tsv', delimiter='\t').dropna()
df.head()
left, right, label = df['question1'].tolist(), df['question2'].tolist(), df['is_duplicate'].tolist()
np.unique(label, return_counts = True)
for i in tqdm(range(len(left))):
left[i] = cleaning(left[i])
right[i] = cleaning(right[i])
left[i] = left[i] + ' SEPARATOR ' + right[i]
concat = ' '.join(left).split()
vocabulary_size = len(list(set(concat)))
data, count, dictionary, rev_dictionary = build_dataset(concat, vocabulary_size)
print('vocab from size: %d'%(vocabulary_size))
print('Most common words', count[4:10])
print('Sample data', data[:10], [rev_dictionary[i] for i in data[:10]])
def position_encoding(inputs):
T = tf.shape(inputs)[1]
repr_dim = inputs.get_shape()[-1].value
pos = tf.reshape(tf.range(0.0, tf.to_float(T), dtype=tf.float32), [-1, 1])
i = np.arange(0, repr_dim, 2, np.float32)
denom = np.reshape(np.power(10000.0, i / repr_dim), [1, -1])
enc = tf.expand_dims(tf.concat([tf.sin(pos / denom), tf.cos(pos / denom)], 1), 0)
return tf.tile(enc, [tf.shape(inputs)[0], 1, 1])
def layer_norm(inputs, epsilon=1e-8):
mean, variance = tf.nn.moments(inputs, [-1], keep_dims=True)
normalized = (inputs - mean) / (tf.sqrt(variance + epsilon))
params_shape = inputs.get_shape()[-1:]
gamma = tf.get_variable('gamma', params_shape, tf.float32, tf.ones_initializer())
beta = tf.get_variable('beta', params_shape, tf.float32, tf.zeros_initializer())
return gamma * normalized + beta
def self_attention(inputs, is_training, num_units, num_heads = 8, activation=None):
T_q = T_k = tf.shape(inputs)[1]
Q_K_V = tf.layers.dense(inputs, 3*num_units, activation)
Q, K, V = tf.split(Q_K_V, 3, -1)
Q_ = tf.concat(tf.split(Q, num_heads, axis=2), 0)
K_ = tf.concat(tf.split(K, num_heads, axis=2), 0)
V_ = tf.concat(tf.split(V, num_heads, axis=2), 0)
align = tf.matmul(Q_, K_, transpose_b=True)
align *= tf.rsqrt(tf.to_float(K_.get_shape()[-1].value))
paddings = tf.fill(tf.shape(align), float('-inf'))
lower_tri = tf.ones([T_q, T_k])
lower_tri = tf.linalg.LinearOperatorLowerTriangular(lower_tri).to_dense()
masks = tf.tile(tf.expand_dims(lower_tri,0), [tf.shape(align)[0],1,1])
align = tf.where(tf.equal(masks, 0), paddings, align)
align = tf.nn.softmax(align)
align = tf.layers.dropout(align, 0.1, training=is_training)
x = tf.matmul(align, V_)
x = tf.concat(tf.split(x, num_heads, axis=0), 2)
x += inputs
x = layer_norm(x)
return x
def ffn(inputs, hidden_dim, activation=tf.nn.relu):
x = tf.layers.conv1d(inputs, 4* hidden_dim, 1, activation=activation)
x = tf.layers.conv1d(x, hidden_dim, 1, activation=None)
x += inputs
x = layer_norm(x)
return x
class Model:
def __init__(self, size_layer, num_layers, embedded_size,
dict_size, learning_rate, dropout, kernel_size = 5):
def cnn(x, scope):
x += position_encoding(x)
with tf.variable_scope(scope, reuse = tf.AUTO_REUSE):
for n in range(num_layers):
                    with tf.variable_scope('attn_%d'%n, reuse=tf.AUTO_REUSE):
x = self_attention(x, True, size_layer)
                    with tf.variable_scope('ffn_%d'%n, reuse=tf.AUTO_REUSE):
x = ffn(x, size_layer)
with tf.variable_scope('logits', reuse=tf.AUTO_REUSE):
return tf.layers.dense(x, 2)[:, -1]
self.X = tf.placeholder(tf.int32, [None, None])
self.Y = tf.placeholder(tf.int32, [None])
encoder_embeddings = tf.Variable(tf.random_uniform([dict_size, embedded_size], -1, 1))
embedded_left = tf.nn.embedding_lookup(encoder_embeddings, self.X)
self.logits = cnn(embedded_left, 'left')
self.cost = tf.reduce_mean(
tf.nn.sparse_softmax_cross_entropy_with_logits(
logits = self.logits, labels = self.Y
)
)
self.optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(self.cost)
correct_pred = tf.equal(
tf.argmax(self.logits, 1, output_type = tf.int32), self.Y
)
self.accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
size_layer = 128
num_layers = 4
embedded_size = 128
learning_rate = 1e-4
maxlen = 50
batch_size = 128
dropout = 0.8
from sklearn.model_selection import train_test_split
vectors = str_idx(left, dictionary, maxlen)
train_X, test_X, train_Y, test_Y = train_test_split(vectors, label, test_size = 0.2)
tf.reset_default_graph()
sess = tf.InteractiveSession()
model = Model(size_layer,num_layers,embedded_size,len(dictionary),learning_rate,dropout)
sess.run(tf.global_variables_initializer())
import time
EARLY_STOPPING, CURRENT_CHECKPOINT, CURRENT_ACC, EPOCH = 3, 0, 0, 0
while True:
lasttime = time.time()
if CURRENT_CHECKPOINT == EARLY_STOPPING:
print('break epoch:%d\n' % (EPOCH))
break
train_acc, train_loss, test_acc, test_loss = 0, 0, 0, 0
pbar = tqdm(range(0, len(train_X), batch_size), desc='train minibatch loop')
for i in pbar:
batch_x = train_X[i:min(i+batch_size,train_X.shape[0])]
batch_y = train_Y[i:min(i+batch_size,train_X.shape[0])]
acc, loss, _ = sess.run([model.accuracy, model.cost, model.optimizer],
feed_dict = {model.X : batch_x,
model.Y : batch_y})
assert not np.isnan(loss)
train_loss += loss
train_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
pbar = tqdm(range(0, len(test_X), batch_size), desc='test minibatch loop')
for i in pbar:
batch_x = test_X[i:min(i+batch_size,test_X.shape[0])]
batch_y = test_Y[i:min(i+batch_size,test_X.shape[0])]
acc, loss = sess.run([model.accuracy, model.cost],
feed_dict = {model.X : batch_x,
model.Y : batch_y})
test_loss += loss
test_acc += acc
pbar.set_postfix(cost=loss, accuracy = acc)
train_loss /= (len(train_X) / batch_size)
train_acc /= (len(train_X) / batch_size)
test_loss /= (len(test_X) / batch_size)
test_acc /= (len(test_X) / batch_size)
if test_acc > CURRENT_ACC:
print(
'epoch: %d, pass acc: %f, current acc: %f'
% (EPOCH, CURRENT_ACC, test_acc)
)
CURRENT_ACC = test_acc
CURRENT_CHECKPOINT = 0
else:
CURRENT_CHECKPOINT += 1
print('time taken:', time.time()-lasttime)
print('epoch: %d, training loss: %f, training acc: %f, valid loss: %f, valid acc: %f\n'%(EPOCH,train_loss,
train_acc,test_loss,
test_acc))
```
|
github_jupyter
|
# VAE outlier detection on CIFAR10
## Method
The Variational Auto-Encoder ([VAE](https://arxiv.org/abs/1312.6114)) outlier detector is first trained on a batch of unlabeled, but normal (*inlier*) data. Unsupervised training is desirable since labeled data is often scarce. The VAE detector tries to reconstruct the input it receives. If the input data cannot be reconstructed well, the reconstruction error is high and the data can be flagged as an outlier. The reconstruction error is either measured as the mean squared error (MSE) between the input and the reconstructed instance or as the probability that both the input and the reconstructed instance are generated by the same process.
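As a rough illustration of the MSE-based score (a minimal NumPy sketch, not the alibi-detect implementation; `reconstruct` stands in for the trained VAE's forward pass and `threshold` for the detector's threshold):
```
import numpy as np

def reconstruction_outlier_scores(x, reconstruct, threshold):
    """Score image batches by mean squared reconstruction error and flag outliers."""
    x_recon = reconstruct(x)                           # same shape as x: (n, H, W, C)
    mse = np.mean((x - x_recon) ** 2, axis=(1, 2, 3))  # one score per instance
    return mse, mse > threshold                        # scores and boolean outlier flags
```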
## Dataset
[CIFAR10](https://www.cs.toronto.edu/~kriz/cifar.html) consists of 60,000 32 by 32 RGB images equally distributed over 10 classes.
```
import logging
import os
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
tf.keras.backend.clear_session()
from tensorflow.keras.layers import Conv2D, Conv2DTranspose, Dense, Layer, Reshape, InputLayer
from tqdm import tqdm
from alibi_detect.models.losses import elbo
from alibi_detect.od import OutlierVAE
from alibi_detect.utils.fetching import fetch_detector
from alibi_detect.utils.perturbation import apply_mask
from alibi_detect.utils.saving import save_detector, load_detector
from alibi_detect.utils.visualize import plot_instance_score, plot_feature_outlier_image
logger = tf.get_logger()
logger.setLevel(logging.ERROR)
```
## Load CIFAR10 data
```
train, test = tf.keras.datasets.cifar10.load_data()
X_train, y_train = train
X_test, y_test = test
X_train = X_train.astype('float32') / 255
X_test = X_test.astype('float32') / 255
print(X_train.shape, y_train.shape, X_test.shape, y_test.shape)
```
## Load or define outlier detector
The pretrained outlier and adversarial detectors used in the example notebooks can be found [here](https://console.cloud.google.com/storage/browser/seldon-models/alibi-detect). You can use the built-in ```fetch_detector``` function which saves the pre-trained models in a local directory ```filepath``` and loads the detector. Alternatively, you can train a detector from scratch:
```
load_outlier_detector = True
filepath = 'my_path' # change to directory where model is downloaded
if load_outlier_detector: # load pretrained outlier detector
detector_type = 'outlier'
dataset = 'cifar10'
detector_name = 'OutlierVAE'
od = fetch_detector(filepath, detector_type, dataset, detector_name)
filepath = os.path.join(filepath, detector_name)
else: # define model, initialize, train and save outlier detector
latent_dim = 1024
encoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(32, 32, 3)),
Conv2D(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(128, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2D(512, 4, strides=2, padding='same', activation=tf.nn.relu)
])
decoder_net = tf.keras.Sequential(
[
InputLayer(input_shape=(latent_dim,)),
Dense(4*4*128),
Reshape(target_shape=(4, 4, 128)),
Conv2DTranspose(256, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(64, 4, strides=2, padding='same', activation=tf.nn.relu),
Conv2DTranspose(3, 4, strides=2, padding='same', activation='sigmoid')
])
# initialize outlier detector
od = OutlierVAE(threshold=.015, # threshold for outlier score
score_type='mse', # use MSE of reconstruction error for outlier detection
encoder_net=encoder_net, # can also pass VAE model instead
decoder_net=decoder_net, # of separate encoder and decoder
latent_dim=latent_dim,
samples=2)
# train
od.fit(X_train,
loss_fn=elbo,
cov_elbo=dict(sim=.05),
epochs=50,
verbose=False)
# save the trained outlier detector
save_detector(od, filepath)
```
## Check the quality of the VAE model
```
idx = 8
X = X_train[idx].reshape(1, 32, 32, 3)
X_recon = od.vae(X)
plt.imshow(X.reshape(32, 32, 3))
plt.axis('off')
plt.show()
plt.imshow(X_recon.numpy().reshape(32, 32, 3))
plt.axis('off')
plt.show()
```
## Check outliers on original CIFAR images
```
X = X_train[:500]
print(X.shape)
od_preds = od.predict(X,
outlier_type='instance', # use 'feature' or 'instance' level
return_feature_score=True, # scores used to determine outliers
return_instance_score=True)
print(list(od_preds['data'].keys()))
```
### Plot instance level outlier scores
```
target = np.zeros(X.shape[0],).astype(int) # all normal CIFAR10 training instances
labels = ['normal', 'outlier']
plot_instance_score(od_preds, target, labels, od.threshold)
```
### Visualize predictions
```
X_recon = od.vae(X).numpy()
plot_feature_outlier_image(od_preds,
X,
X_recon=X_recon,
instance_ids=[8, 60, 100, 330], # pass a list with indices of instances to display
max_instances=5, # max nb of instances to display
outliers_only=False) # only show outlier predictions
```
## Predict outliers on perturbed CIFAR images
We perturb CIFAR images by adding random noise to patches (masks) of the image. For each of the `n_mask_sizes` mask sizes, we sample `n_masks` masks and apply them to each of the `n_imgs` images. Then we predict outliers on the masked instances:
```
# nb of predictions per image: n_masks * n_mask_sizes
n_mask_sizes = 10
n_masks = 20
n_imgs = 50
```
Define masks and get images:
```
mask_sizes = [(2*n,2*n) for n in range(1,n_mask_sizes+1)]
print(mask_sizes)
img_ids = np.arange(n_imgs)
X_orig = X[img_ids].reshape(img_ids.shape[0], 32, 32, 3)
print(X_orig.shape)
```
Calculate instance level outlier scores:
```
all_img_scores = []
for i in tqdm(range(X_orig.shape[0])):
img_scores = np.zeros((len(mask_sizes),))
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_orig[i].reshape(1, 32, 32, 3),
mask_size=mask_size,
n_masks=n_masks,
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
# predict outliers
od_preds_mask = od.predict(X_mask)
score = od_preds_mask['data']['instance_score']
# store average score over `n_masks` for a given mask size
img_scores[j] = np.mean(score)
all_img_scores.append(img_scores)
```
### Visualize outlier scores vs. mask sizes
```
x_plt = [mask[0] for mask in mask_sizes]
for ais in all_img_scores:
plt.plot(x_plt, ais)
plt.xticks(x_plt)
plt.title('Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier Score')
plt.show()
ais_np = np.zeros((len(all_img_scores), all_img_scores[0].shape[0]))
for i, ais in enumerate(all_img_scores):
ais_np[i, :] = ais
ais_mean = np.mean(ais_np, axis=0)
plt.title('Mean Outlier Score All Images for Increasing Mask Size')
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.plot(x_plt, ais_mean)
plt.xticks(x_plt)
plt.show()
```
### Investigate instance level outlier
```
i = 8 # index of instance to look at
plt.plot(x_plt, all_img_scores[i])
plt.xticks(x_plt)
plt.title('Outlier Scores Image {} for Increasing Mask Size'.format(i))
plt.xlabel('Mask size')
plt.ylabel('Outlier score')
plt.show()
```
Reconstruction of masked images and outlier scores per channel:
```
all_X_mask = []
X_i = X_orig[i].reshape(1, 32, 32, 3)
all_X_mask.append(X_i)
# apply masks
for j, mask_size in enumerate(mask_sizes):
# create masked instances
X_mask, mask = apply_mask(X_i,
mask_size=mask_size,
n_masks=1, # just 1 for visualization purposes
channels=[0,1,2],
mask_type='normal',
noise_distr=(0,1),
clip_rng=(0,1))
all_X_mask.append(X_mask)
all_X_mask = np.concatenate(all_X_mask, axis=0)
all_X_recon = od.vae(all_X_mask).numpy()
od_preds = od.predict(all_X_mask)
```
Visualize:
```
plot_feature_outlier_image(od_preds,
all_X_mask,
X_recon=all_X_recon,
max_instances=all_X_mask.shape[0],
n_channels=3)
```
## Predict outliers on a subset of features
The sensitivity of the outlier detector can not only be controlled via the `threshold`, but also by selecting the percentage of the features used for the instance level outlier score computation. For instance, we might want to flag outliers if 40% of the features (pixels for images) have an average outlier score above the threshold. This is possible via the `outlier_perc` argument in the `predict` function. It specifies the percentage of the features that are used for outlier detection, sorted in descending outlier score order.
```
perc_list = [20, 40, 60, 80, 100]
all_perc_scores = []
for perc in perc_list:
od_preds_perc = od.predict(all_X_mask, outlier_perc=perc)
iscore = od_preds_perc['data']['instance_score']
all_perc_scores.append(iscore)
```
Visualize outlier scores vs. mask sizes and percentage of features used:
```
x_plt = [0] + x_plt
for aps in all_perc_scores:
plt.plot(x_plt, aps)
plt.xticks(x_plt)
plt.legend(perc_list)
plt.title('Outlier Score for Increasing Mask Size and Different Feature Subsets')
plt.xlabel('Mask Size')
plt.ylabel('Outlier Score')
plt.show()
```
## Infer outlier threshold value
Finding good threshold values can be tricky since they are typically not easy to interpret. The `infer_threshold` method helps find a sensible value. We need to pass a batch of instances `X` and specify what percentage of those we consider to be normal via `threshold_perc`.
```
print('Current threshold: {}'.format(od.threshold))
od.infer_threshold(X, threshold_perc=99) # assume 1% of the training data are outliers
print('New threshold: {}'.format(od.threshold))
```
|
github_jupyter
|
# Notes:
This notebook is used to predict the demand of the state of Victoria (without using any future data).
```
import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from tsa_utils import *
from statsmodels.tsa.stattools import pacf
from sklearn.ensemble import RandomForestRegressor
import warnings
warnings.filterwarnings("ignore")
plt.style.use('ggplot')
# show floats in two-decimal form
pd.set_option('display.float_format', lambda x: '%.2f' % x)
```
## 1) Load dataset
```
df = pd.read_csv("../../data/all.csv").reset_index(drop=True)
df.head(3)
df = df[df.time <= '2021-08-11 23:30:00']
df.head(3)
```
## 3) Feature Engineering
```
drop_columns = ['demand_nsw',
'demand_sa',
'demand_tas',
'spot_price_nsw',
'spot_price_sa',
'spot_price_tas',
'spot_price_vic',
'inter_gen_nsw', 'inter_gen_sa', 'inter_gen_tas', 'inter_gen_vic',]
vic = df.drop(columns=drop_columns)
vic.columns = ['time', 'demand_vic', 'period']
vic.head(3)
# Feature engineering on datetime
vic['time'] = vic.time.astype('datetime64[ns]')
vic['month'] = vic.time.dt.month
vic['day'] = vic.time.dt.day
vic['day_of_year'] = vic.time.dt.dayofyear
vic['year'] = vic.time.dt.year
vic['weekday'] = vic['time'].apply(lambda x: x.weekday())
vic['week'] = vic.time.dt.week
vic['hour'] = vic.time.dt.hour
vic.loc[vic['month'].isin([12,1,2]), 'season'] = 1
vic.loc[vic['month'].isin([3,4,5]), 'season'] = 2
vic.loc[vic['month'].isin([6,7,8]), 'season'] = 3
vic.loc[vic['month'].isin([9, 10, 11]), 'season'] = 4
vic.tail(3)
# Add fourier terms
fourier_terms = add_fourier_terms(vic.time, year_k=3, week_k=3, day_k=3)
vic = pd.concat([vic, fourier_terms], 1).drop(columns=['datetime'])
vic.head(3)
# Plot autocorrelation
nlags=144
plot_tsc(vic.demand_vic, lags=nlags)
# Add lag features (choosing the lags >= 48 with the highest partial autocorrelation)
dict_pacf = dict()
list_pacf = pacf(df['demand_vic'], nlags=nlags)
for nlag in range(nlags):
if nlag >= 48:
dict_pacf[nlag] = list_pacf[nlag]
dict_pacf = {k: v for k, v in sorted(dict_pacf.items(), key=lambda item: abs(item[1]), reverse=True)}
# 5 lags with the highest absolute pacf
max_pacf_nlags = list(dict_pacf.keys())[:5]
for nlag in max_pacf_nlags:
vic['n_lag'+str(nlag)] = df.reset_index()['demand_vic'].shift(nlag)
vic_train = vic[vic["time"] <= "2020-12-31 23:30:00"]
vic_cv = vic[(vic['time'] >= "2021-01-01 00:00:00") & (vic['time'] <= "2021-06-30 23:30:00")].reset_index(drop=True)
vic_test = vic[(vic['time'] >= "2021-07-01 00:00:00") & (vic['time'] <= "2021-08-11 23:30:00")].reset_index(drop=True)
X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]
y_train = vic_train.demand_vic[nlags:]
X_cv = vic_cv.drop(columns=['demand_vic', 'time'])
y_cv = vic_cv.demand_vic
X_test = vic_test.drop(columns=['demand_vic', 'time'])
y_test = vic_test.demand_vic
X_train.head(3)
X_train.columns
```
## 4) First look at Random Forest Regressor
```
rfr_clf = RandomForestRegressor(n_estimators=100)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_test)
rfr_residuals = y_test - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_test[:200], label='true value')
plt.plot(rfr_result[:200], label='predict')
plt.legend()
plt.show()
plt.figure(figsize=(20, 4))
plt.plot(rfr_residuals)
plt.show()
# Get numerical feature importances
importances = list(rfr_clf.feature_importances_)
# List of tuples with variable and importance
feature_importances = [(feature, round(importance, 2)) for feature, importance in zip(X_train.columns, importances)]
# Sort the feature importances by most important first
feature_importances = sorted(feature_importances, key = lambda x: x[1], reverse = True)
# Print out the feature and importances
[print('Variable: {:20} Importance: {}'.format(*pair)) for pair in feature_importances]
```
## 6) Predict CV and Test period demand
### 6.1) Predict CV period demand
```
X_train = vic_train.drop(columns=['demand_vic', 'time'])[nlags:]
y_train = vic_train.demand_vic[nlags:]
X_cv = vic_cv.drop(columns=['demand_vic', 'time'])
y_cv = vic_cv.demand_vic
rfr_clf = RandomForestRegressor(n_estimators=100)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_cv)
rfr_residuals = y_cv - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_cv)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_cv, label='true value')
plt.plot(rfr_result, label='predict')
plt.legend()
plt.show()
vic_demand_cv_rfr = pd.DataFrame({'time': vic_cv.time, 'demand_vic': vic_cv.demand_vic})
vic_demand_cv_rfr['predicted_demand_vic'] = rfr_result
vic_demand_cv_rfr.tail(3)
vic_demand_cv_rfr.to_csv('predictions/vic_demand_unknow_cv_rfr.csv', index=False, header=True)
```
### 6.2) Predict Test period demand
```
idx_test_start = 61296 # index in the full df where the test period starts
X_train = vic.drop(columns=['demand_vic', 'time'])[nlags:idx_test_start]
y_train = vic.demand_vic[nlags:idx_test_start]
X_test = vic_test.drop(columns=['demand_vic', 'time'])
y_test = vic_test.demand_vic
rfr_clf = RandomForestRegressor(n_estimators=100, random_state=1)
rfr_clf = rfr_clf.fit(X_train, y_train)
print("Random Forest Regressor accuracy: ")
rfr_result = rfr_clf.predict(X_test)
rfr_residuals = y_test - rfr_result
print('Mean Absolute Percent Error:', round(np.mean(abs(rfr_residuals/y_test)), 4))
print('Root Mean Squared Error:', np.sqrt(np.mean(rfr_residuals**2)))
plt.figure(figsize=(20, 4))
plt.plot(y_test, label='true value')
plt.plot(rfr_result, label='predict')
plt.legend()
plt.show()
vic_demand_test_rfr = pd.DataFrame({'time': vic_test.time, 'demand_vic': vic_test.demand_vic})
vic_demand_test_rfr['predicted_demand_vic'] = rfr_result
vic_demand_test_rfr.tail(3)
vic_demand_test_rfr.to_csv('predictions/vic_demand_unknow_test_rfr.csv', index=False, header=True)
```
|
github_jupyter
|
# Converters for Quadratic Programs
Optimization problems in Qiskit's optimization module are represented with the `QuadraticProgram` class, which is a generic and powerful representation for optimization problems. In general, optimization algorithms are defined for a certain formulation of a quadratic program, and we need to convert our problem to the right type.
For instance, Qiskit provides several optimization algorithms that can handle Quadratic Unconstrained Binary Optimization (QUBO) problems. These are mapped to Ising Hamiltonians, for which Qiskit uses the `qiskit.aqua.operators` module, and then their ground state is approximated. For this optimization, commonly known algorithms such as VQE or QAOA can be used as the underlying routine. See the following tutorial about the [Minimum Eigen Optimizer](./03_minimum_eigen_optimizer.ipynb) for more detail. Note that other algorithms also exist that work differently, such as the `GroverOptimizer`.
To map a problem to the correct input format, the optimization module of Qiskit offers a variety of converters. In this tutorial we provide an overview of this functionality. Currently, Qiskit contains the following converters.
- `InequalityToEquality`: converts inequality constraints into equality constraints with additional slack variables.
- `IntegerToBinary`: converts integer variables into binary variables and corresponding coefficients.
- `LinearEqualityToPenalty`: converts linear equality constraints into additional penalty terms of the objective function.
- `QuadraticProgramToQubo`: a wrapper for `InequalityToEquality`, `IntegerToBinary`, and `LinearEqualityToPenalty` for convenience.
## InequalityToEquality
`InequalityToEquality` converts inequality constraints into equality constraints with additional slack variables to remove inequality constraints from `QuadraticProgram`. The upper and lower bounds of the slack variables are calculated from the difference between the left-hand sides and the right-hand sides of the constraints. The signs of the slack variables depend on the constraint symbols, such as $\leq$ and $\geq$.
The following is an example of a maximization problem with two inequality constraints. Variables $x$ and $y$ are binary variables and variable $z$ is an integer variable.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z \leq 5.5\\
& & x+y+z \geq 2.5\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
\end{aligned}
With `QuadraticProgram`, an optimization model of the problem is written as follows.
```
from qiskit_optimization import QuadraticProgram
qp = QuadraticProgram()
qp.binary_var('x')
qp.binary_var('y')
qp.integer_var(lowerbound=0, upperbound=7, name='z')
qp.maximize(linear={'x': 2, 'y': 1, 'z': 1})
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='LE', rhs=5.5,name='xyz_leq')
qp.linear_constraint(linear={'x': 1, 'y': 1, 'z': 1}, sense='GE', rhs=2.5,name='xyz_geq')
print(qp.export_as_lp_string())
```
Call the `convert` method of `InequalityToEquality` to convert.
```
from qiskit_optimization.converters import InequalityToEquality
ineq2eq = InequalityToEquality()
qp_eq = ineq2eq.convert(qp)
print(qp_eq.export_as_lp_string())
```
After converting, the formulation of the problem looks as follows. As we can see, the inequality constraints are replaced with equality constraints with additional integer slack variables, $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$.
Let us explain how the conversion works. For example, the lower bound of the left-hand side of the first constraint is $0$, which is the case of $x=0$, $y=0$, and $z=0$. Thus, the upper bound of the additional integer variable must be $5$ to be able to satisfy even the case of $x=0$, $y=0$, and $z=0$. Note that we cut off the part after the decimal point in the converted formulation, since the left-hand side of the first constraint in the original formulation can only take integer values. For the second constraint, we basically apply the same approach. However, the symbol in the second constraint is $\geq$, so we subtract $xyz\_geq\text{@}int\_slack$, whose upper bound must be $6$ to be able to satisfy even the case of $x=1$, $y=1$, and $z=7$.
\begin{aligned}
& \text{maximize}
& 2x + y + z\\
& \text{subject to:}
& x+y+z+xyz\_leq\text{@}int\_slack= 5\\
& & x+y+z-xyz\_geq\text{@}int\_slack= 3\\
& & x, y \in \{0,1\}\\
& & z \in \{0,1,2,3,4,5,6,7\} \\
& & xyz\_leq\text{@}int\_slack \in \{0,1,2,3,4,5\} \\
& & xyz\_geq\text{@}int\_slack \in \{0,1,2,3,4,5,6\} \\
\end{aligned}
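As a short worked check of the slack-variable bounds described above (using the bounds of $x$, $y$, and $z$ from the original problem):
\begin{aligned}
0 \leq xyz\_leq\text{@}int\_slack &\leq 5 - (0 + 0 + 0) = 5,\\
0 \leq xyz\_geq\text{@}int\_slack &\leq (1 + 1 + 7) - 3 = 6.
\end{aligned}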
## IntegerToBinary
`IntegerToBinary` converts integer variables into binary variables and corresponding coefficients to remove integer variables from `QuadraticProgram`. For the conversion, the bounded-coefficient encoding proposed in [arxiv:1706.01945](https://arxiv.org/abs/1706.01945) (Eq. (5)) is used. For more details of the encoding method, please see the paper.
We use the output of `InequalityToEquality` as the starting point. Variables $x$ and $y$ are binary variables, while the variable $z$ and the slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ are integer variables. We print the problem again for reference.
```
print(qp_eq.export_as_lp_string())
```
Call the `convert` method of `IntegerToBinary` to convert.
```
from qiskit_optimization.converters import IntegerToBinary
int2bin = IntegerToBinary()
qp_eq_bin = int2bin.convert(qp_eq)
print(qp_eq_bin.export_as_lp_string())
```
After converting, the integer variable $z$ is replaced with three binary variables $z\text{@}0$, $z\text{@}1$ and $z\text{@}2$ with coefficients 1, 2 and 4, respectively, as shown above.
The slack variables $xyz\_leq\text{@}int\_slack$ and $xyz\_geq\text{@}int\_slack$ that were introduced by `InequalityToEquality` are also both replaced with three binary variables with coefficients 1, 2, 2, and 1, 2, 3, respectively.
Note: essentially, the coefficients mean that the weighted sum of these binary variables can take any value that is a sum of a subset of $\{1, 2, 4\}$, $\{1, 2, 2\}$, and $\{1, 2, 3\}$, respectively, which represents the acceptable values $\{0, \ldots, 7\}$, $\{0, \ldots, 5\}$, and $\{0, \ldots, 6\}$ and respects the lower and upper bounds of the original integer variables.
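A quick check (writing $s_0, s_1, s_2$ for the binary variables introduced for each slack variable) that the encodings cover exactly these ranges:
\begin{aligned}
z &= z\text{@}0 + 2\,z\text{@}1 + 4\,z\text{@}2 \in \{0, \ldots, 1+2+4\} = \{0, \ldots, 7\},\\
xyz\_leq\text{@}int\_slack &= s_0 + 2\,s_1 + 2\,s_2 \in \{0, \ldots, 1+2+2\} = \{0, \ldots, 5\},\\
xyz\_geq\text{@}int\_slack &= s_0 + 2\,s_1 + 3\,s_2 \in \{0, \ldots, 1+2+3\} = \{0, \ldots, 6\}.
\end{aligned}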
`IntegerToBinary` also provides an `interpret` method, which translates a given binary result back into the original integer representation; a minimal sketch is shown below.
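The sketch below assumes the `int2bin` converter and the converted problem `qp_eq_bin` from the cells above; the all-zeros vector is used purely to illustrate the call, not as an optimal solution.
```
import numpy as np
# Build a candidate solution in the converted (all-binary) variable space.
# Its length is taken from the converted problem so the call is well-formed.
x_bin = np.zeros(qp_eq_bin.get_num_vars())
# Translate it back to the original variable space of qp_eq (x, y, z and the two slacks).
print(int2bin.interpret(x_bin))
```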
## LinearEqualityToPenalty
`LinearEqualityToPenalty` converts linear equality constraints into additional quadratic penalty terms of the objective function to map `QuadraticProgram` to an unconstrained form.
An input to the converter has to be a `QuadraticProgram` with only linear equality constraints. Those equality constraints, e.g. $\sum_i a_i x_i = b$ where $a_i$ and $b$ are numbers and $x_i$ is a variable, will be added to the objective function in the form of $M(b - \sum_i a_i x_i)^2$, where $M$ is a large number acting as a penalty factor.
By default $M = 10^5$. The sign of the penalty term depends on whether the problem type is a maximization or a minimization.
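For example, the first equality constraint of our problem contributes a term of the form $M\,(5 - x - y - z - xyz\_leq\text{@}int\_slack)^2$ (with the integer slack expanded into its binary encoding), which is subtracted from the maximization objective.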
We use the output of `IntegerToBinary` as the starting point, where all variables are binary variables and all inequality constraints have been mapped to equality constraints.
We print the problem again for reference.
```
print(qp_eq_bin.export_as_lp_string())
```
Call the `convert` method of `LinearEqualityToPenalty` to convert.
```
from qiskit_optimization.converters import LinearEqualityToPenalty
lineq2penalty = LinearEqualityToPenalty()
qubo = lineq2penalty.convert(qp_eq_bin)
print(qubo.export_as_lp_string())
```
After converting, the equality constraints are added to the objective function as additional terms with the default penalty factor $M = 10^5$.
The resulting problem is now a QUBO and compatible with many quantum optimization algorithms such as VQE, QAOA and so on.
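For convenience, the `QuadraticProgramToQubo` wrapper applies `InequalityToEquality`, `IntegerToBinary`, and `LinearEqualityToPenalty` in a single step. A minimal sketch, assuming the original `qp` defined at the beginning of this section:
```
from qiskit_optimization.converters import QuadraticProgramToQubo
# Convert the original problem directly to a QUBO in one step
qp2qubo = QuadraticProgramToQubo()
qubo_direct = qp2qubo.convert(qp)
print(qubo_direct.export_as_lp_string())
```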
This gives the same result as applying the three converters one after another.
```
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright
```
|
github_jupyter
|
# Object Detection
*Object detection* is a form of computer vision in which a machine learning model is trained to classify individual instances of objects in an image, and indicate a *bounding box* that marks its location. You can think of this as a progression from *image classification* (in which the model answers the question "what is this an image of?") to building solutions where we can ask the model "what objects are in this image, and where are they?".
<p style='text-align:center'><img src='./images/object-detection.jpg' alt='A robot identifying fruit'/></p>
For example, a grocery store might use an object detection model to implement an automated checkout system that scans a conveyor belt using a camera, and can identify specific items without the need to place each item on the belt and scan them individually.
The **Custom Vision** cognitive service in Microsoft Azure provides a cloud-based solution for creating and publishing custom object detection models.
## Create a Custom Vision resource
To use the Custom Vision service, you need an Azure resource that you can use to train a model, and a resource with which you can publish it for applications to use. You can use the same resource for each of these tasks, or you can use different resources for each to allocate costs separately provided both resources are created in the same region. The resource for either (or both) tasks can be a general **Cognitive Services** resource, or a specific **Custom Vision** resource. Use the following instructions to create a new **Custom Vision** resource (or you can use an existing resource if you have one).
1. In a new browser tab, open the Azure portal at [https://portal.azure.com](https://portal.azure.com), and sign in using the Microsoft account associated with your Azure subscription.
2. Select the **+Create a resource** button, search for *custom vision*, and create a **Custom Vision** resource with the following settings:
- **Create options**: Both
- **Subscription**: *Your Azure subscription*
- **Resource group**: *Create a new resource group with a unique name*
- **Name**: *Enter a unique name*
- **Training location**: *Choose any available region*
- **Training pricing tier**: F0
- **Prediction location**: *The same as the training location*
- **Prediction pricing tier**: F0
> **Note**: If you already have an F0 custom vision service in your subscription, select **S0** for this one.
3. Wait for the resource to be created.
## Create a Custom Vision project
To train an object detection model, you need to create a Custom Vision project based on your training resource. To do this, you'll use the Custom Vision portal.
1. In a new browser tab, open the Custom Vision portal at [https://customvision.ai](https://customvision.ai), and sign in using the Microsoft account associated with your Azure subscription.
2. Create a new project with the following settings:
- **Name**: Grocery Detection
- **Description**: Object detection for groceries.
- **Resource**: *The Custom Vision resource you created previously*
- **Project Types**: Object Detection
- **Domains**: General
3. Wait for the project to be created and opened in the browser.
## Add and tag images
To train an object detection model, you need to upload images that contain the classes you want the model to identify, and tag them to indicate bounding boxes for each object instance.
1. Download and extract the training images from https://aka.ms/fruit-objects. The extracted folder contains a collection of images of fruit.
2. In the Custom Vision portal, in your object detection project, select **Add images** and upload all of the images in the extracted folder.
3. After the images have been uploaded, select the first one to open it.
4. Hold the mouse over any object in the image until an automatically detected region is displayed like the image below. Then select the object, and if necessary resize the region to surround it.
<p style='text-align:center'><img src='./images/object-region.jpg' alt='The default region for an object'/></p>
Alternatively, you can simply drag around the object to create a region.
5. When the region surrounds the object, add a new tag with the appropriate object type (*apple*, *banana*, or *orange*) as shown here:
<p style='text-align:center'><img src='./images/object-tag.jpg' alt='A tagged object in an image'/></p>
6. Select and tag each other object in the image, resizing the regions and adding new tags as required.
<p style='text-align:center'><img src='./images/object-tags.jpg' alt='Two tagged objects in an image'/></p>
7. Use the **>** link on the right to go to the next image, and tag its objects. Then just keep working through the entire image collection, tagging each apple, banana, and orange.
8. When you have finished tagging the last image, close the **Image Detail** editor and on the **Training Images** page, under **Tags**, select **Tagged** to see all of your tagged images:
<p style='text-align:center'><img src='./images/tagged-images.jpg' alt='Tagged images in a project'/></p>
## Train and test a model
Now that you've tagged the images in your project, you're ready to train a model.
1. In the Custom Vision project, click **Train** to train an object detection model using the tagged images. Select the **Quick Training** option.
2. Wait for training to complete (it might take ten minutes or so), and then review the *Precision*, *Recall*, and *mAP* performance metrics - these measure the prediction accuracy of the object detection model, and should all be high.
3. At the top right of the page, click **Quick Test**, and then in the **Image URL** box, enter `https://aka.ms/apple-orange` and view the prediction that is generated. Then close the **Quick Test** window.
## Publish and consume the object detection model
Now you're ready to publish your trained model and use it from a client application.
1. At the top left of the **Performance** page, click **🗸 Publish** to publish the trained model with the following settings:
- **Model name**: detect-produce
- **Prediction Resource**: *Your custom vision **prediction** resource*.
2. After publishing, click the *settings* (⚙) icon at the top right of the **Performance** page to view the project settings. Then, under **General** (on the left), copy the **Project Id** and paste it into the code cell below replacing **YOUR_PROJECT_ID**.
> (*if you used a **Cognitive Services** resource instead of creating a **Custom Vision** resource at the beginning of this exercise, you can copy its key and endpoint from the right side of the project settings, paste it into the code cell below, and run it to see the results. Otherwise, continue completing the steps below to get the key and endpoint for your Custom Vision prediction resource*).
3. At the top left of the **Project Settings** page, click the *Projects Gallery* (👁) icon to return to the Custom Vision portal home page, where your project is now listed.
4. On the Custom Vision portal home page, at the top right, click the *settings* (⚙) icon to view the settings for your Custom Vision service. Then, under **Resources**, expand your *prediction* resource (<u>not</u> the training resource) and copy its **Key** and **Endpoint** values to the code cell below, replacing **YOUR_KEY** and **YOUR_ENDPOINT**.
5. Run the code cell below by clicking its green <span style="color:green">▷</span> button (at the top left of the cell) to set the variables to your project ID, key, and endpoint values.
```
project_id = 'YOUR_PROJECT_ID' # Replace with your project ID
cv_key = 'YOUR_KEY' # Replace with your prediction resource primary key
cv_endpoint = 'YOUR_ENDPOINT' # Replace with your prediction resource endpoint
model_name = 'detect-produce' # this must match the model name you set when publishing your model iteration exactly (including case)!
print('Ready to predict using model {} in project {}'.format(model_name, project_id))
```
Client applications can use the details above to connect to and use your custom vision object detection model.
Run the following code cell, which uses your model to detect individual produce items in an image.
> **Note**: Don't worry too much about the details of the code. It uses the Python SDK for the Custom Vision service to submit an image to your model and retrieve predictions for detected objects. Each prediction consists of a class name (*apple*, *banana*, or *orange*) and *bounding box* coordinates that indicate where in the image the predicted object has been detected. The code then uses this information to draw a labelled box around each object on the image.
```
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials
from matplotlib import pyplot as plt
from PIL import Image, ImageDraw, ImageFont
import numpy as np
import os
%matplotlib inline
# Load a test image and get its dimensions
test_img_file = os.path.join('data', 'object-detection', 'produce.jpg')
test_img = Image.open(test_img_file)
test_img_h, test_img_w, test_img_ch = np.array(test_img).shape
# Get a prediction client for the object detection model
credentials = ApiKeyCredentials(in_headers={"Prediction-key": cv_key})
predictor = CustomVisionPredictionClient(endpoint=cv_endpoint, credentials=credentials)
print('Detecting objects in {} using model {} in project {}...'.format(test_img_file, model_name, project_id))
# Detect objects in the test image
with open(test_img_file, mode="rb") as test_data:
results = predictor.detect_image(project_id, model_name, test_data)
# Create a figure to display the results
fig = plt.figure(figsize=(8, 8))
plt.axis('off')
# Display the image with boxes around each detected object
draw = ImageDraw.Draw(test_img)
lineWidth = int(np.array(test_img).shape[1]/100)
object_colors = {
"apple": "lightgreen",
"banana": "yellow",
"orange": "orange"
}
for prediction in results.predictions:
color = 'white' # default for 'other' object tags
if (prediction.probability*100) > 50:
if prediction.tag_name in object_colors:
color = object_colors[prediction.tag_name]
left = prediction.bounding_box.left * test_img_w
top = prediction.bounding_box.top * test_img_h
height = prediction.bounding_box.height * test_img_h
width = prediction.bounding_box.width * test_img_w
points = ((left,top), (left+width,top), (left+width,top+height), (left,top+height),(left,top))
draw.line(points, fill=color, width=lineWidth)
plt.annotate(prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100),(left,top), backgroundcolor=color)
plt.imshow(test_img)
```
View the resulting predictions, which show the objects detected and the probability for each prediction.
|
github_jupyter
|
# Essential: Static file management with SourceLoader
Data pipelines usually interact with external systems such as SQL databases. Using relative paths to find such files is error-prone, as the path to the file depends on the file loading it; on the other hand, absolute paths are too restrictive, since the path will only work in your current environment and will break in others. Combining `Env` with `SourceLoader` provides a clean approach for managing static files.
```
from pathlib import Path
import pandas as pd
from sklearn import datasets
from IPython.display import display, Markdown
from ploomber import DAG, SourceLoader, with_env
from ploomber.tasks import PythonCallable, NotebookRunner, SQLUpload, SQLScript
from ploomber.products import File, SQLiteRelation
from ploomber.clients import SQLAlchemyClient
from ploomber.executors import Serial
# initialize a temporary directory
import tempfile
import os
tmp_dir = Path(tempfile.mkdtemp())
tmp_dir_static = tmp_dir / 'static'
tmp_dir_static.mkdir()
os.chdir(str(tmp_dir))
report_py = """
# static/report.py
# +
# This file is in jupytext light format
import seaborn as sns
import pandas as pd
# -
# + tags=['parameters']
# papermill will add the parameters below this cell
upstream = None
product = None
# -
# +
path = upstream['raw']
df = pd.read_parquet(path)
# -
# ## AGE distribution
# +
_ = sns.distplot(df.AGE)
# -
# ## Price distribution
# +
_ = sns.distplot(df.price)
# -
"""
clean_table_sql = """
-- static/clean_table.sql
DROP TABLE IF EXISTS {{product}};
CREATE TABLE {{product}}
AS SELECT * FROM {{upstream["raw_table"]}}
WHERE AGE < 100
"""
env_yaml = """
_module: '{{here}}'
path:
data: '{{here}}/data/'
static: '{{here}}/static/'
"""
(tmp_dir_static / 'report.py').write_text(report_py)
(tmp_dir_static / 'clean_table.sql').write_text(clean_table_sql)
(tmp_dir / 'env.yaml').write_text(env_yaml)
def display_file(file, syntax):
s = """
```{}
{}
```
""".format(syntax, file)
return display(Markdown(s))
```
Our working environment has an `env.yaml` file with a `static/` folder holding a SQL and a Python script.
```
! tree $tmp_dir
```
### Content of `env.yaml`
```
display_file(env_yaml, 'yaml')
```
### Content of `static/report.py`
```
display_file(report_py, 'python')
```
### Content of `static/clean_table.sql`
```
display_file(clean_table_sql, 'sql')
```
### Pipeline declaration
```
def _get_data(product):
data = datasets.load_boston()
df = pd.DataFrame(data.data)
df.columns = data.feature_names
df['price'] = data.target
df.to_parquet(str(product))
@with_env
def make(env):
# NOTE: passing the executor parameter is only required for testing purposes, can be removed
dag = DAG(executor=Serial(build_in_subprocess=False))
client = SQLAlchemyClient('sqlite:///my_db.db')
dag.clients[SQLUpload] = client
dag.clients[SQLiteRelation] = client
dag.clients[SQLScript] = client
# initialize SourceLoader in our static directory
loader = SourceLoader(path=env.path.static)
get_data = PythonCallable(_get_data,
product=File(tmp_dir / 'raw.parquet'),
dag=dag,
name='raw')
# if we do not pass a name, the filename will be used as default
report = NotebookRunner(loader['report.py'],
product=File(tmp_dir / 'report.html'),
dag=dag,
kernelspec_name='python3')
raw_table = SQLUpload(source='{{upstream["raw"]}}',
product=SQLiteRelation(('raw', 'table')),
dag=dag,
name='raw_table')
# same here, no need to pass a name
clean_table = SQLScript(loader['clean_table.sql'],
product=SQLiteRelation(('clean', 'table')),
dag=dag)
get_data >> report
get_data >> raw_table >> clean_table
return dag
dag = make()
```
### Pipeline status
```
# Using SourceLoader automatically adds a 'Location' column which points to the source code location
dag.status()
dag.build()
```
## Advanced jinja2 features
`SourceLoader` initializes a proper jinja2 environment, so you can use features such as [macros](https://jinja.palletsprojects.com/en/2.11.x/templates/#macros), which is very useful for maximizing SQL code reusability; see the standalone sketch below.
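As a minimal, standalone jinja2 sketch (plain jinja2, not the ploomber-specific API), the snippet below defines a SQL macro once and reuses it from another template, which is the kind of reuse the `SourceLoader` environment enables for files under `static/`. The template names and the `filter_age` macro are made up for illustration:
```
from jinja2 import Environment, DictLoader

# Two in-memory templates: one holding a reusable macro, one importing it
templates = {
    'macros.sql': (
        "{% macro filter_age(max_age) -%}\n"
        "WHERE AGE < {{ max_age }}\n"
        "{%- endmacro %}"
    ),
    'clean_table.sql': (
        "{% from 'macros.sql' import filter_age %}\n"
        "DROP TABLE IF EXISTS {{ product }};\n"
        "CREATE TABLE {{ product }} AS\n"
        "SELECT * FROM {{ upstream['raw_table'] }}\n"
        "{{ filter_age(100) }}"
    ),
}

env = Environment(loader=DictLoader(templates))
rendered = env.get_template('clean_table.sql').render(
    product='clean', upstream={'raw_table': 'raw'})
print(rendered)
```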
```
import shutil
shutil.rmtree(str(tmp_dir))
```
|
github_jupyter
|
## Simulation Procedures
## The progress of simulation
We simulate paired scDNA and scRNA data following the procedure illustrated in the supplement (Figure S1). The simulation principle is to coherently generate scRNA and scDNA data from the same ground-truth genetic copy number and clonality while also allowing the addition of sequencing-platform-specific noise.
```
import pandas as pd
import sys
sys.path.append('~/CCNMF/SimulationCode/')
# The Simulation.py module is stored in the SimulationCode directory added to sys.path above
import Simulation as st
```
Specifically, we estimated the transition probability matrix as follows: we downloaded the [TCGA](https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga) genetic copy number (GCN) data from [cBioPortal](https://www.cbioportal.org/) for 171 triple-negative breast cancer basal samples with paired bulk RNA-seq and DNA-seq data. The columns of the `ProbMatrix` below correspond to copy numbers 1 to 5, as do the rows.
```
ProbMatrix = [[0.42, 0.5, 0.08, 0, 0],
[0.02, 0.52, 0.46, 0, 0],
[0, 0, 0.5, 0.5, 0],
[0, 0, 0.01, 0.4, 0.59],
[0, 0, 0, 0.01, 0.99]]
```
The various configurations for the simulated data are set below; the details of each parameter are given in the annotations.
```
Paramaters = {'Ncluster' : [3], # The number of clusters, 2 or 3
'Topology' : ['linear'], # The clonal structure of simulated data: 'linear' or 'bifurcate'
'C1Percent' : [0.5, 0.5], # The each cluster percentage if the data has 2 clusters
'C2Percent':[0.2, 0.4, 0.4], # The each cluster percentage if the data has 3 clusters
'Percentage' : [0.1, 0.2, 0.3, 0.4, 0.5], # The simulated copy number fraction in each cluster on various cases
'Outlier': [0.5], # The simulated outlier percentages in each cluster on various cases
'Dropout': [0.5]} # The simulated dropout percentages in each cluster on various cases
```
Simulate the genetic copy number profiles for the specified clonal structure; `nGenes` is the number of genes and `nCells` is the number of cells.
```
Configure = st.GeneticCN(Paramaters, nGenes = 200, nCells = 100)
```
We simulate the scDNA data based on their associated clonal copy number profiles and transition probability matrix.
```
DNAmatrix = st.Simulate_DNA(ProbMatrix, Configure)
```
Simulate the scRNA data based on their associated clonal copy number profiles.
```
RNAmatrix = st.Simulate_RNA(Configure)
```
The procedure above simulates the various copy number fractions in each cluster for a linear structure with 3 clusters, with the outlier and dropout percentages left at their default of 0.5.
To simulate another configuration, such as a bifurcate structure with 3 clusters and various dropout percentages, keep 'Percentage' at its default and vary 'Dropout'. For example:
`Paramaters = {'Ncluster' : [3], 'Topology' : ['bifurcate'], 'C1Percent' : [0.5, 0.5], 'C2Percent':[0.2, 0.4, 0.4], 'Percentage' : [0.5], 'Outlier': [0.5], 'Dropout': [0.1, 0.2, 0.3, 0.4, 0.5]}`
Finally, save each paired dataset as a '.csv' file.
```
DNA1 = DNAmatrix[1]
RNA1 = RNAmatrix[1]
DNA1 = pd.DataFrame(DNA1)
RNA1 = pd.DataFrame(RNA1)
DNA1.to_csv('DNA1.csv', index = 0)
RNA1.to_csv('RNA1.csv', index = 0)
```
|
github_jupyter
|
## Compile a training set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# To create a astroNN compiler instance
compiler_aspcap_train = H5Compiler()
compiler_aspcap_train.teff_low = 4000 # Effective Temperature Lower
compiler_aspcap_train.teff_high = 5500 # Effective Temperature Upper
compiler_aspcap_train.vscattercut = 1 # Velocity Scattering Upper
compiler_aspcap_train.starflagcut = True # STARFLAG == 0
compiler_aspcap_train.aspcapflagcut = True # ASPCAPFLAG == 0
compiler_aspcap_train.ironlow = -10000. # [Fe/H] Lower
compiler_aspcap_train.continuum = False # use aspcap normalization
compiler_aspcap_train.SNR_low = 200 # SNR Lower
compiler_aspcap_train.SNR_high = 99999 # SNR Upper
compiler_aspcap_train.filename = 'aspcap_norm_train'
# To compile a .h5 datasets, use .compile() method
compiler_aspcap_train.compile()
```
## Compile a testing set using ASPCAP normalization
```
from utils_h5 import H5Compiler
from astropy.io import fits
import numpy as np
# To create a astroNN compiler instance
compiler_aspcap_test = H5Compiler()
compiler_aspcap_test.teff_low = 4000 # Effective Temperature Lower
compiler_aspcap_test.teff_high = 5500 # Effective Temperature Upper
compiler_aspcap_test.vscattercut = 1 # Velocity Scattering Upper
compiler_aspcap_test.starflagcut = True # STARFLAG == 0
compiler_aspcap_test.aspcapflagcut = True # ASPCAPFLAG == 0
compiler_aspcap_test.ironlow = -10000. # [Fe/H] Lower
compiler_aspcap_test.continuum = False # use aspcap normalization
compiler_aspcap_test.SNR_low = 100 # SNR Lower
compiler_aspcap_test.SNR_high = 200 # SNR Upper
compiler_aspcap_test.filename = 'aspcap_norm_test'
# To compile a .h5 datasets, use .compile() method
compiler_aspcap_test.compile()
```
## Train a NN with ASPCAP normalization
```
import numpy as np
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored
loader = H5Loader('aspcap_norm_train') # continuum normalized dataset
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = ApogeeBCNNCensored()
bcnn.num_hidden = [192, 64, 32, 16, 2] # default model size used in the paper
bcnn.max_epochs = 60 # default max epochs used in the paper
bcnn.autosave = True
bcnn.folder_name = 'aspcapStar_BCNNCensored'
bcnn.train(x, y, labels_err=y_err)
```
## Test the NN with ASPCAP normalization
```
import numpy as np
import pandas as pd
from astropy.stats import mad_std as mad
from utils_h5 import H5Loader
from astroNN.models import ApogeeBCNNCensored, load_folder
loader = H5Loader('aspcap_norm_test') # continuum normalized dataset
loader.load_err = True
loader.target = ['teff', 'logg', 'C', 'C1', 'N', 'O', 'Na', 'Mg', 'Al', 'Si', 'P', 'S', 'K',
'Ca', 'Ti', 'Ti2', 'V', 'Cr', 'Mn', 'Fe','Co', 'Ni']
x, y, x_err, y_err = loader.load()
bcnn = load_folder('aspcapStar_BCNNCensored')
pred, pred_error = bcnn.test(x, y)
residue = (pred - y)
bias = np.ma.median(np.ma.array(residue, mask=[y == -9999.]), axis=0)
scatter = mad(np.ma.array(residue, mask=[y == -9999.]), axis=0)
d = {'Name': bcnn.targetname, 'Bias': [f'{bias_single:.{3}f}' for bias_single in bias], 'Scatter': [f'{scatter_single:.{3}f}' for scatter_single in scatter]}
df = pd.DataFrame(data=d)
df
```
|
github_jupyter
|
```
import os; os.chdir('../')
from tqdm import tqdm
import pandas as pd
import numpy as np
from sklearn.neighbors import BallTree
%matplotlib inline
from urbansim_templates import modelmanager as mm
from urbansim_templates.models import MNLDiscreteChoiceStep
from urbansim.utils import misc
from scripts import datasources, models
import orca
```
### Load data
```
chts_persons = pd.read_csv('/home/mgardner/data/chts-orig/data/Deliv_PER.csv', low_memory=False)
chts_persons_lookup = pd.read_csv('/home/mgardner/data/chts-orig/data/LookUp_PER.csv')
chts_persons = pd.merge(
chts_persons.set_index(['SAMPN','PERNO']),
chts_persons_lookup.set_index(['SAMPN','PERNO']),
left_index=True, right_index=True,
suffixes=('_persons', '_lookup')).reset_index()
chts_homes = pd.read_csv('/home/mgardner/data/chts-orig/data/LookUp_Home.csv')
chts_persons = pd.merge(chts_persons, chts_homes, on='SAMPN')
# SF Bay Area only!
chts_persons = chts_persons[chts_persons['HCTFIP'].isin([1, 13, 41, 55, 75, 81, 85, 95, 97])].reset_index()
jobs = pd.read_csv('/home/mgardner/data/jobs_w_occup.csv')
buildings = pd.read_hdf('./data/bayarea_ual.h5', 'buildings')
parcels = pd.read_hdf('./data/bayarea_ual.h5', 'parcels')
```
### Get job coords
```
buildings = pd.merge(buildings, parcels[['x', 'y']], left_on='parcel_id', right_index=True)
jobs = pd.merge(jobs, buildings[['x', 'y']], left_on='building_id', right_index=True)
jobs.rename(columns={'x': 'lng', 'y': 'lat'}, inplace=True)
```
### Assign jobs a node ID
```
# load the network nodes
nodes = pd.read_csv('~/data/bay_area_full_strongly_nodes.csv')
nodes = nodes.set_index('osmid')
assert nodes.index.is_unique
# haversine requires data in form of [lat, lng] and inputs/outputs in units of radians
nodes_rad = np.deg2rad(nodes[['y', 'x']])
persons_rad = np.deg2rad(chts_persons[['WYCORD_lookup', 'WXCORD_lookup']])
jobs_rad = np.deg2rad(jobs[['lat', 'lng']]) # keep [lat, lng] order to match nodes_rad
# build the tree for fast nearest-neighbor search
tree = BallTree(nodes_rad, metric='haversine')
# query the tree for nearest node to each home
idx = tree.query(jobs_rad, return_distance=False)
jobs['node_id'] = nodes.iloc[idx[:,0]].index
jobs.to_csv('/home/mgardner/data/jobs_w_occup_and_node.csv', index=False)
```
### Assign CHTS persons a job ID
```
dists = []
no_job_info = []
no_work_coords = []
# new column in CHTS persons to store job_id
chts_persons.loc[:, 'job_id'] = None
# prepare jobs table
jobs.loc[:, 'taken'] = False
jobs.loc[:, 'x'] = jobs_rad['lng']
jobs.loc[:, 'y'] = jobs_rad['lat']
for i, person in tqdm(chts_persons.iterrows(), total=len(chts_persons)):
# only assign a job ID for employed persons with a fixed work location
if (person['EMPLY'] == 1) & (person['WLOC'] == 1):
# skip person if no CHTS industry or occupation
if (person['INDUS'] > 96) & (person['OCCUP'] > 96):
no_job_info.append(i)
continue
# skip person if no work location
elif pd.isnull(person[['WYCORD_lookup', 'WXCORD_lookup']]).any():
no_work_coords.append(i)
continue
# if CHTS industry is unknown, match jobs based on occupation only
elif person['INDUS'] > 96:
potential_jobs = jobs[
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# if occupation is unknown, match jobs based on industry only
elif person['OCCUP'] > 96:
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['taken'] == False)]
elif (person['INDUS'] < 97) & (person['OCCUP'] < 97):
# define potential jobs based on industry and occupation
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# if no such jobs exist, define jobs by industry
if len(potential_jobs) == 0:
potential_jobs = jobs[
(jobs['naics'] == person['INDUS']) &
(jobs['taken'] == False)]
# if no such jobs exist, define jobs by occupation
if len(potential_jobs) == 0:
potential_jobs = jobs[
(jobs['occupation_id'] == person['OCCUP']) &
(jobs['taken'] == False)]
# otherwise, continue
if len(potential_jobs) == 0:
continue
# build the tree of potential jobs for fast nearest-neighbor search
tree = BallTree(potential_jobs[['y','x']], metric='haversine')
# query the tree for nearest job to each workplace
dist, idx = tree.query(persons_rad.iloc[i].values.reshape(1,-1), return_distance=True)
# save results
job = potential_jobs.iloc[idx[0][0]]
dists.append(dist[0][0])
chts_persons.loc[i, 'job_id'] = job['job_id']
jobs.loc[jobs['job_id'] == job['job_id'], 'taken'] = True
chts_persons.to_csv('./data/chts_persons_with_job_id.csv', index=False)
# convert dists from radians to kilometers
dists = [dist * 6371 for dist in dists]
pd.Series(dists).plot(kind='hist', bins=20000, xlim=(0, 20), normed=True)
print('Assigned job IDs to {0}% of workers with a fixed work location.'.format(
np.round(chts_persons.job_id.count() / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
print('{0}% had no industry/occupation info.'.format(
np.round(len(no_job_info) / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
print('{0}% had no work coordinates.'.format(
np.round(len(no_work_coords) / len(
chts_persons[(chts_persons['EMPLY'] == 1) & (chts_persons['WLOC'] == 1)]) * 100, 1)))
```
|
github_jupyter
|
# Semantic Text Summarization
Here we use a semantic method to understand the text while also keeping up the standards of extractive summarization. The task is implemented using various pre-trained models such as **BERT, BART, T5, XLNet and GPT2** for summarizing the articles. It is also compared with a classical method, i.e. **summarization based on word frequencies**.
```
## installation
!pip install transformers --upgrade
!pip install bert-extractive-summarizer
!pip install neuralcoref
!python -m spacy download en_core_web_md
from transformers import pipeline
from summarizer import Summarizer, TransformerSummarizer
import pprint
pp = pprint.PrettyPrinter(indent=14)
## documentation for summarizer: https://huggingface.co/transformers/main_classes/pipelines.html#summarizationpipeline
# summarize with BART
summarizer_bart = pipeline(task='summarization', model="bart-large-cnn")
#summarize with BERT
summarizer_bert = Summarizer()
# summarize with T5
summarizer_t5 = pipeline(task='summarization', model="t5-large") # options: ‘t5-small’, ‘t5-base’, ‘t5-large’, ‘t5-3b’, ‘t5-11b’
#for T5 you can chose the size of the model. Everything above t5-base is very slow, even on GPU or TPU.
# summarize with XLNet
summarizer_xlnet = TransformerSummarizer(transformer_type="XLNet",transformer_model_key="xlnet-base-cased")
# summarize with GPT2
summarizer_gpt2 = TransformerSummarizer(transformer_type="GPT2",transformer_model_key="gpt2-medium")
data = '''
For the actual assembly of an module, the Material list of a complete module is displayed in order to make the necessary materials physically available. Also CAD model of the assembly and 2-D construction models can be viewed or printed out in order to be able to later on
to carry out individual steps.
Necessary steps: The material list, 3D model and 2D drawings of a complete assembly must be available.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60, ratio = 0.1)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# a review on another data
data = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance.
The associated thickening of the material determines the viscosity and thus the quality of the end product.
Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components.
By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated.
By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
# Bart for Text - Summarization
print('Bart for Text - Summarization')
summary_bart = summarizer_bart(data, min_length=10, max_length=40) # change min_ and max_length for different output
pp.pprint(summary_bart[0]['summary_text'])
# BERT for Text - Summarization
print('\n BERT for Text - Summarization')
summary_bert = summarizer_bert(data, min_length=60)
full = ''.join(summary_bert)
pp.pprint(full)
# XLNet for Text - Summarization
print('\n XLNet for Text - Summarization')
summary_xlnet = summarizer_xlnet(data, min_length=60)
full = ''.join(summary_xlnet)
pp.pprint(full)
# GPT2 for Text - Summarization
print('\n GPT2 for Text - Summarization')
summary_gpt2 = summarizer_gpt2(data, min_length=60)
full = ''.join(summary_gpt2)
pp.pprint(full)
# T5 for Text - Summarization
print('\n T5 for Text - Summarization')
summary_t5 = summarizer_t5(data, min_length=10) # change min_ and max_length for different output
pp.pprint(summary_t5[0]['summary_text'])
# Text - Summarization using word frequencies
# importing libraries
import nltk
nltk.download('stopwords')
nltk.download('punkt')
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize, sent_tokenize
import bs4 as BeautifulSoup
import urllib.request
#fetching the content from the URL
fetched_data = urllib.request.urlopen('https://en.wikipedia.org/wiki/20th_century')
article_read = fetched_data.read()
#parsing the URL content and storing in a variable
article_parsed = BeautifulSoup.BeautifulSoup(article_read,'html.parser')
#returning <p> tags
paragraphs = article_parsed.find_all('p')
article_content = '''
In the production of SMC (Sheet Moulding Compound), the maturing of the semi-finished product (resin+glass fibre) is of decisive importance. The associated thickening of the material determines the viscosity and thus the quality of the end product. Possible defects due to short maturing and soft semi-finished products are lack of fibre transport, while too long maturing and hard semi-finished products result in incompletely filled components. By adjusting the press parameters such as closing force, closing speed, mould temperature etc., the fluctuations in thickening can normally be compensated. By measuring the flowability/viscosity of the material or by measuring additional parameters during the manufacturing process, the ideal process window for the production of SMC is to be controlled even better.
'''
#looping through the paragraphs and adding them to the variable
#for p in paragraphs:
# article_content += p.text
#print(article_content)
def _create_dictionary_table(text_string) -> dict:
#removing stop words
stop_words = set(stopwords.words("english"))
words = word_tokenize(text_string)
#reducing words to their root form
stem = PorterStemmer()
#creating dictionary for the word frequency table
frequency_table = dict()
for wd in words:
wd = stem.stem(wd)
if wd in stop_words:
continue
if wd in frequency_table:
frequency_table[wd] += 1
else:
frequency_table[wd] = 1
return frequency_table
def _calculate_sentence_scores(sentences, frequency_table) -> dict:
#algorithm for scoring a sentence by its words
sentence_weight = dict()
for sentence in sentences:
sentence_wordcount = (len(word_tokenize(sentence)))
sentence_wordcount_without_stop_words = 0
for word_weight in frequency_table:
if word_weight in sentence.lower():
sentence_wordcount_without_stop_words += 1
if sentence[:7] in sentence_weight:
sentence_weight[sentence[:7]] += frequency_table[word_weight]
else:
sentence_weight[sentence[:7]] = frequency_table[word_weight]
sentence_weight[sentence[:7]] = sentence_weight[sentence[:7]] / sentence_wordcount_without_stop_words
return sentence_weight
def _calculate_average_score(sentence_weight) -> int:
#calculating the average score for the sentences
sum_values = 0
for entry in sentence_weight:
sum_values += sentence_weight[entry]
#getting sentence average value from source text
average_score = (sum_values / len(sentence_weight))
return average_score
def _get_article_summary(sentences, sentence_weight, threshold):
sentence_counter = 0
article_summary = ''
for sentence in sentences:
if sentence[:7] in sentence_weight and sentence_weight[sentence[:7]] >= (threshold):
article_summary += " " + sentence
sentence_counter += 1
return article_summary
def _run_article_summary(article):
#creating a dictionary for the word frequency table
frequency_table = _create_dictionary_table(article)
#tokenizing the sentences
sentences = sent_tokenize(article)
#algorithm for scoring a sentence by its words
sentence_scores = _calculate_sentence_scores(sentences, frequency_table)
#getting the threshold
threshold = _calculate_average_score(sentence_scores)
#producing the summary
article_summary = _get_article_summary(sentences, sentence_scores, 1.1 * threshold)
return article_summary
if __name__ == '__main__':
summary_results = _run_article_summary(article_content)
print(summary_results)
# Text - Summarization using GenSim
from gensim.summarization.summarizer import summarize
print(summarize(data))
```
|
github_jupyter
|
# Importing the libraries
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
```
# Importing the datasets
```
dataset = pd.read_csv("train_ctrUa4K.csv")
dataset2 = pd.read_csv("test_lAUu6dG.csv")
dataset = dataset.drop(['Loan_ID'], axis = 1)
dataset2 = dataset2.drop(['Loan_ID'], axis = 1)
dataset.shape
dataset2.shape
```
# Analysing the Training dataset
```
dataset.head()
dataset.dtypes
dataset.count()
dataset.isna().sum()
dataset["Gender"].isnull().sum()
dataset.info()
dataset["Gender"].value_counts()
dataset["Education"].value_counts()
dataset["Self_Employed"].value_counts()
dataset["Property_Area"].value_counts()
dataset["Loan_Status"].value_counts()
```
# Visualising the datasets
```
categorical_columns = ['Gender', 'Married',
'Dependents', 'Education', 'Self_Employed', 'Property_Area','Credit_History','Loan_Amount_Term']
fig,axes = plt.subplots(4,2,figsize=(12,15))
for idx,cat_col in enumerate(categorical_columns):
row,col = idx//2,idx%2
sns.countplot(x=cat_col,data=dataset,hue='Loan_Status',ax=axes[row,col])
plt.scatter(dataset['ApplicantIncome'],dataset['CoapplicantIncome'])
import seaborn as sns
sns.violinplot(dataset['ApplicantIncome'], dataset['Gender']) #Variable Plot
sns.despine()
fig = plt.figure()
ax = fig.add_axes([0,0,1,1])
ax.bar(dataset['Loan_Status'],dataset['CoapplicantIncome'],color = "yellow")
plt.show()
sns.heatmap(dataset.corr(), annot=True)
fig, ax = plt.subplots()
ax.hist(dataset["Loan_Status"],color = "purple")
ax.set_title('loan approvl counts')
ax.set_xlabel('Loan status')
ax.set_ylabel('Frequency')
```
# Taking Care of Missing Values
```
dataset["Gender"].fillna("Male", inplace = True)
dataset["Married"].fillna("No", inplace = True)
dataset["Education"].fillna("Graduate", inplace = True)
dataset["Self_Employed"].fillna("No", inplace = True)
dataset["Property_Area"].fillna("Urban", inplace = True)
dataset.isnull().sum()
dataset2["Gender"].fillna("Male", inplace = True)
dataset2["Married"].fillna("No", inplace = True)
dataset2["Education"].fillna("Graduate", inplace = True)
dataset2["Self_Employed"].fillna("No", inplace = True)
dataset2["Property_Area"].fillna("Urban", inplace = True)
```
# Encoding the categorical variables
```
train_df_encoded = pd.get_dummies(dataset,drop_first=True)
train_df_encoded.head()
train_df_encoded.shape
test_df_encoded = pd.get_dummies(dataset2,drop_first=True)
test_df_encoded.head()
test_df_encoded.shape
```
# Splitting the dependent and independent variables
```
X = train_df_encoded.drop(columns='Loan_Status_Y').values
y = train_df_encoded['Loan_Status_Y'].values
X.shape
X_test_run = test_df_encoded.values
from sklearn.preprocessing import StandardScaler
sc = StandardScaler()
X[:,0:4] = sc.fit_transform(X[:,0:4])
X_test_run[:,0:4] = sc.transform(X_test_run[:,0:4]) # reuse the scaler fitted on the training features instead of refitting on the test set
```
# Splitting into train and test
```
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test = train_test_split(X,y,test_size=0.2,stratify =y,random_state =42)
print(X_train)
```
# Taking Care of Numerical Missing Values
```
from sklearn.impute import SimpleImputer
imp = SimpleImputer(strategy='mean')
imp_train = imp.fit(X_train)
X_train = imp_train.transform(X_train)
X_test_imp = imp_train.transform(X_test)
X_test_run[0]
X_test_run= imp_train.transform(X_test_run)
```
# Testing different Classification Models
## Logistic Regression
```
from sklearn.linear_model import LogisticRegression
log_classifier = LogisticRegression()
log_classifier.fit(X_train, y_train)
y_pred = log_classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## K-Nearest Neighbors (KNN)
```
from sklearn.neighbors import KNeighborsClassifier
classifier = KNeighborsClassifier(n_neighbors = 5, metric = 'minkowski', p = 2)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='flag'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## SVM
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'linear', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='gist_rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Kernel SVM
```
from sklearn.svm import SVC
classifier = SVC(kernel = 'rbf', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax,); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Naive Bayes
```
from sklearn.naive_bayes import GaussianNB
classifier = GaussianNB()
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Decision Tree
```
from sklearn.tree import DecisionTreeClassifier
classifier = DecisionTreeClassifier(criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='flag'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
## Random Forest
```
from sklearn.ensemble import RandomForestClassifier
classifier = RandomForestClassifier(n_estimators = 10, criterion = 'entropy', random_state = 0)
classifier.fit(X_train, y_train)
y_pred = classifier.predict(X_test_imp)
print(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
from sklearn.metrics import confusion_matrix, accuracy_score
cm = confusion_matrix(y_test, y_pred)
print(cm)
accuracy_score(y_test, y_pred)
from sklearn.metrics import f1_score
f1_score(y_test, y_pred, average=None)
ax= plt.subplot()
sns.heatmap(cm, annot=True, ax = ax, cmap='gist_rainbow'); #annot=True to annotate cells
# labels, title and ticks
ax.set_xlabel('Predicted labels');ax.set_ylabel('True labels');
ax.set_title('Confusion Matrix');
ax.xaxis.set_ticklabels(['Yes', 'No']); ax.yaxis.set_ticklabels(['Yes', 'No']);
```
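Each classifier above repeats the same evaluation code. A small helper (hypothetical, not part of the original notebook) can consolidate the confusion matrix, accuracy, and F1 reporting:
```
from sklearn.metrics import confusion_matrix, accuracy_score, f1_score

def evaluate_classifier(name, fitted_classifier, X_eval, y_eval):
    # Predict on the held-out fold and report the usual metrics
    y_pred = fitted_classifier.predict(X_eval)
    print(name)
    print(confusion_matrix(y_eval, y_pred))
    print("Accuracy:", accuracy_score(y_eval, y_pred))
    print("F1 per class:", f1_score(y_eval, y_pred, average=None))

evaluate_classifier("Logistic Regression", log_classifier, X_test_imp, y_test)
```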
# Predicting The Test dataset
## Selecting Logistic Regression Based on Accuracy and F1 Score
```
log_classifier.predict(X_test_run)
```
|
github_jupyter
|
```
%matplotlib inline
```
Failed Model Fits
=================
Example of model fit failures and how to debug them.
```
# Import the FOOOFGroup object
from fooof import FOOOFGroup
# Import simulation code to create test power spectra
from fooof.sim.gen import gen_group_power_spectra
# Import FitError, which we will use to help debug model fit errors
from fooof.core.errors import FitError
```
Model Fit Failures
------------------
The power spectrum model is not guaranteed to fit - sometimes the fit procedure can fail.
Model fit failures are rare, and they typically only happen on spectra that are
particularly noisy, and/or are some kind of outlier for which the fitting procedure
fails to find a good model solution.
In general, model fit failures should lead to a clean exit, meaning that
a failed model fit does not lead to a code error. The failed fit will be encoded in
the results as a null model, and the code can continue onwards.
In this example, we will look at what happens when model fits fail.
```
# Simulate some example power spectra to use for the example
freqs, powers = gen_group_power_spectra(25, [1, 50], [1, 1], [10, 0.25, 3],
nlvs=0.1, freq_res=0.25)
# Initialize a FOOOFGroup object, with some desired settings
fg = FOOOFGroup(min_peak_height=0.1, max_n_peaks=6)
# Fit power spectra
fg.fit(freqs, powers)
```
If there are failed fits, these are stored as null models.
Let's check if there were any null models, from model failures, in the models
that we have fit so far. To do so, the :class:`~fooof.FOOOFGroup` object has some
attributes that provide information on any null model fits.
These attributes are:
- ``n_null_`` : the number of model results that are null
- ``null_inds_`` : the indices of any null model results
```
# Check for failed model fits
print('Number of Null models : \t', fg.n_null_)
print('Indices of Null models : \t', fg.null_inds_)
```
Inducing Model Fit Failures
~~~~~~~~~~~~~~~~~~~~~~~~~~~
So far, we have no model failures (as is typical).
For this example, to induce some model fit failures, we will use a trick to change the number of
iterations the model uses to fit parameters (`_maxfev`), making it much more likely to fail.
Note that in normal usage, you would likely never want to change the value of `_maxfev`,
and this here is a 'hack' of the code in order to induce reproducible failure modes
in simulated data.
```
# Hack the object to induce model failures
fg._maxfev = 50
# Try fitting again
fg.fit(freqs, powers)
```
As we can see, there are now some model fit failures! Note that, as above, it will
be printed out if there is a model fit failure when in verbose mode.
```
# Check how many failed model fits we have
print('Number of Null models : \t', fg.n_null_)
print('Indices of Null models : \t', fg.null_inds_)
```
Debug Mode
----------
There are multiple possible reasons why a model fit failure can occur, or at least
multiple possible steps in the algorithm at which the fit failure can occur.
If you have a small number of fit failures, you can likely just exclude them.
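For instance, a minimal sketch of dropping the failed fits (using only the simulated spectra from this example and the ``null_inds_`` attribute introduced above) could look like this:
```
# Keep only the spectra whose model fits succeeded
n_spectra = powers.shape[0]
good_inds = [ind for ind in range(n_spectra) if ind not in fg.null_inds_]
powers_good = powers[good_inds, :]
print('Number of successful fits : \t', len(good_inds))
```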
However, if you have multiple fit failures, and/or you want to investigate why the
model is failing, you can use the debug mode to get a bit more information about
where the model is failing.
The debug mode will stop the FOOOF object from catching and continuing past any model
fit errors, allowing you to see where the error is happening, and get more
information about where it is failing.
Note that here we will run the fitting in a try / except to catch the error and
print it out, without the error actually being raised (for website purposes).
If you just want to see the error, you can run the fit call without the try/except.
```
# Set FOOOFGroup into debug mode
fg.set_debug_mode(True)
# Refit in debug mode, in which failed fits will raise an error
try:
fg.fit(freqs, powers)
except FitError as fooof_error:
print(fooof_error)
```
Debugging Model Fit Errors
~~~~~~~~~~~~~~~~~~~~~~~~~~
This debug mode should indicate in which step the model is failing, which might indicate
what aspects of the data to look into, and/or which settings to try and tweak.
Also, all known model fit failures should be caught by the object, and not raise an
error (when not in debug mode). If you are finding examples in which the model is failing
to fit, and raising an error (outside of debug mode), then this might be an unanticipated
issue with the model fit.
If you are unsure about why or how the model is failing to fit, consider
opening an `issue <https://github.com/fooof-tools/fooof/issues>`_ on the project
repository, and we will try to look into what seems to be happening.
|
github_jupyter
|
# Transfer Learning
In this notebook, you'll learn how to use pre-trained networks to solve challenging problems in computer vision. Specifically, you'll use networks trained on [ImageNet](http://www.image-net.org/) [available from torchvision](http://pytorch.org/docs/0.3.0/torchvision/models.html).
ImageNet is a massive dataset with over 1 million labeled images in 1000 categories. It's used to train deep neural networks using an architecture called convolutional layers. I'm not going to get into the details of convolutional networks here, but if you want to learn more about them, please [watch this](https://www.youtube.com/watch?v=2-Ol7ZB0MmU).
Once trained, these models work astonishingly well as feature detectors for images they weren't trained on. Using a pre-trained network on images not in the training set is called transfer learning. Here we'll use transfer learning to train a network that can classify our cat and dog photos with near perfect accuracy.
With `torchvision.models` you can download these pre-trained networks and use them in your applications. We'll include `models` in our imports now.
```
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets, transforms, models
```
Most of the pretrained models require the input to be 224x224 images. Also, we'll need to match the normalization used when the models were trained. Each color channel was normalized separately, the means are `[0.485, 0.456, 0.406]` and the standard deviations are `[0.229, 0.224, 0.225]`.
```
data_dir = 'Cat_Dog_data'
# TODO: Define transforms for the training data and testing data
train_transforms = transforms.Compose([transforms.RandomRotation(30),
transforms.RandomResizedCrop(224),
transforms.RandomHorizontalFlip(),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
test_transforms = transforms.Compose([transforms.Resize(255),
transforms.CenterCrop(224),
transforms.ToTensor(),
transforms.Normalize([0.485, 0.456, 0.406],
[0.229, 0.224, 0.225])])
# Pass transforms in here, then run the next cell to see how the transforms look
train_data = datasets.ImageFolder(data_dir + '/train', transform=train_transforms)
test_data = datasets.ImageFolder(data_dir + '/test', transform=test_transforms)
trainloader = torch.utils.data.DataLoader(train_data, batch_size=64, shuffle=True)
testloader = torch.utils.data.DataLoader(test_data, batch_size=64)
```
We can load in a model such as [DenseNet](http://pytorch.org/docs/0.3.0/torchvision/models.html#id5). Let's print out the model architecture so we can see what's going on.
```
model = models.densenet121(pretrained=True)
model
```
This model is built out of two main parts, the features and the classifier. The features part is a stack of convolutional layers and overall works as a feature detector that can be fed into a classifier. The classifier part is a single fully-connected layer `(classifier): Linear(in_features=1024, out_features=1000)`. This layer was trained on the ImageNet dataset, so it won't work for our specific problem. That means we need to replace the classifier, but the features will work perfectly on their own. In general, I think about pre-trained networks as amazingly good feature detectors that can be used as the input for simple feed-forward classifiers.
```
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
from collections import OrderedDict
classifier = nn.Sequential(OrderedDict([
('fc1', nn.Linear(1024, 500)),
('relu', nn.ReLU()),
('fc2', nn.Linear(500, 2)),
('output', nn.LogSoftmax(dim=1))
]))
model.classifier = classifier
```
With our model built, we need to train the classifier. However, now we're using a **really deep** neural network. If you try to train this on a CPU like normal, it will take a long, long time. Instead, we're going to use the GPU to do the calculations. The linear algebra computations are done in parallel on the GPU leading to 100x increased training speeds. It's also possible to train on multiple GPUs, further decreasing training time.
PyTorch, along with pretty much every other deep learning framework, uses [CUDA](https://developer.nvidia.com/cuda-zone) to efficiently compute the forward and backwards passes on the GPU. In PyTorch, you move your model parameters and other tensors to the GPU memory using `model.to('cuda')`. You can move them back from the GPU with `model.to('cpu')` which you'll commonly do when you need to operate on the network output outside of PyTorch. As a demonstration of the increased speed, I'll compare how long it takes to perform a forward and backward pass with and without a GPU.
```
import time
for device in ['cpu', 'cuda']:
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.001)
model.to(device)
for ii, (inputs, labels) in enumerate(trainloader):
# Move input and label tensors to the GPU
inputs, labels = inputs.to(device), labels.to(device)
start = time.time()
outputs = model.forward(inputs)
loss = criterion(outputs, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if ii==3:
break
print(f"Device = {device}; Time per batch: {(time.time() - start)/3:.3f} seconds")
```
You can write device agnostic code which will automatically use CUDA if it's enabled like so:
```python
# at beginning of the script
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
...
# then whenever you get a new Tensor or Module
# this won't copy if they are already on the desired device
input = data.to(device)
model = MyModule(...).to(device)
```
From here, I'll let you finish training the model. The process is the same as before except now your model is much more powerful. You should get better than 95% accuracy easily.
>**Exercise:** Train a pretrained model to classify the cat and dog images. Continue with the DenseNet model, or try ResNet, which is also a good model to try out first. Make sure you are only training the classifier and the parameters for the features part are frozen.
```
# Use GPU if it's available
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = models.densenet121(pretrained=True)
# Freeze parameters so we don't backprop through them
for param in model.parameters():
param.requires_grad = False
model.classifier = nn.Sequential(nn.Linear(1024, 256),
nn.ReLU(),
nn.Dropout(0.2),
nn.Linear(256, 2),
nn.LogSoftmax(dim=1))
criterion = nn.NLLLoss()
# Only train the classifier parameters, feature parameters are frozen
optimizer = optim.Adam(model.classifier.parameters(), lr=0.003)
model.to(device);
epochs = 1
steps = 0
running_loss = 0
print_every = 5
for epoch in range(epochs):
for inputs, labels in trainloader:
steps += 1
# Move input and label tensors to the default device
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
loss = criterion(logps, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
running_loss += loss.item()
if steps % print_every == 0:
test_loss = 0
accuracy = 0
model.eval()
with torch.no_grad():
for inputs, labels in testloader:
inputs, labels = inputs.to(device), labels.to(device)
logps = model.forward(inputs)
batch_loss = criterion(logps, labels)
test_loss += batch_loss.item()
# Calculate accuracy
ps = torch.exp(logps)
top_p, top_class = ps.topk(1, dim=1)
equals = top_class == labels.view(*top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor)).item()
print(f"Epoch {epoch+1}/{epochs}.. "
f"Train loss: {running_loss/print_every:.3f}.. "
f"Test loss: {test_loss/len(testloader):.3f}.. "
f"Test accuracy: {accuracy/len(testloader):.3f}")
running_loss = 0
model.train()
```
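As a quick follow-up (not part of the original exercise), here is a minimal sketch of using the trained classifier for inference on one test batch, mapping the predicted indices back to the folder class names:
```
# Run the trained model on a single batch from the test loader
model.eval()
images, labels = next(iter(testloader))
with torch.no_grad():
    log_ps = model(images.to(device))
    top_p, top_class = torch.exp(log_ps).topk(1, dim=1)

# ImageFolder stores the class names it discovered on disk
class_names = test_data.classes
print([class_names[idx] for idx in top_class[:8].squeeze().tolist()])
```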
|
github_jupyter
|
```
#cell-width control
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:80% !important; }</style>"))
```
# Imports
```
#packages
import numpy
import tensorflow as tf
from tensorflow.core.example import example_pb2
#utils
import os
import random
import pickle
import struct
import time
from generators import *
#keras
import keras
from keras.preprocessing import text, sequence
from keras.preprocessing.text import Tokenizer
from keras.models import Model, Sequential
from keras.models import load_model
from keras.layers import Dense, Dropout, Activation, Concatenate, Dot, Embedding, LSTM, Conv1D, MaxPooling1D, Input, Lambda
#callbacks
from keras.callbacks import TensorBoard, ModelCheckpoint, Callback
```
# Seeding
```
sd = 7
from numpy.random import seed
seed(sd)
from tensorflow import set_random_seed
set_random_seed(sd)
```
# CPU usage
```
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
```
# Global parameters
```
# Embedding
max_features = 400000
maxlen_text = 400
maxlen_summ = 80
embedding_size = 100 #128
# Convolution
kernel_size = 5
filters = 64
pool_size = 4
# LSTM
lstm_output_size = 70
# Training
batch_size = 32
epochs = 20
```
# Load data
```
#data_dir = '/mnt/disks/500gb/experimental-data-mini/experimental-data-mini/generator-dist-1to1/1to1/'
data_dir = '/media/oala/4TB/experimental-data/experiment-1_nonconform-models/generator-dist/1to1/'
#processing_dir = '/mnt/disks/500gb/stats-and-meta-data/400000/'
processing_dir = '/media/oala/4TB/experimental-data/stats-and-meta-data/400000/'
with open(data_dir+'partition.pickle', 'rb') as handle: partition = pickle.load(handle)
with open(data_dir+'labels.pickle', 'rb') as handle: labels = pickle.load(handle)
with open(processing_dir+'tokenizer.pickle', 'rb') as handle: tokenizer = pickle.load(handle)
embedding_matrix = numpy.load(processing_dir+'embedding_matrix.npy')
#the p_n constant
c = 80000
```
# Model
```
#2way input
text_input = Input(shape=(maxlen_text,embedding_size), dtype='float32')
summ_input = Input(shape=(maxlen_summ,embedding_size), dtype='float32')
#2way dropout
text_route = Dropout(0.25)(text_input)
summ_route = Dropout(0.25)(summ_input)
#2way conv
text_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(text_route)
summ_route = Conv1D(filters,
kernel_size,
padding='valid',
activation='relu',
strides=1)(summ_route)
#2way max pool
text_route = MaxPooling1D(pool_size=pool_size)(text_route)
summ_route = MaxPooling1D(pool_size=pool_size)(summ_route)
#2way lstm
text_route = LSTM(lstm_output_size)(text_route)
summ_route = LSTM(lstm_output_size)(summ_route)
#get dot of both routes
merged = Dot(axes=1,normalize=True)([text_route, summ_route])
#negate results
#merged = Lambda(lambda x: -1*x)(merged)
#add p_n constant
#merged = Lambda(lambda x: x + c)(merged)
#output
output = Dense(1, activation='sigmoid')(merged)
#define model
model = Model(inputs=[text_input, summ_input], outputs=[output])
model.compile(loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'])
```
# Train model
```
#callbacks
class BatchHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.losses = []
self.accs = []
def on_batch_end(self, batch, logs={}):
self.losses.append(logs.get('loss'))
self.accs.append(logs.get('acc'))
history = BatchHistory()
tensorboard = TensorBoard(log_dir='./logs', histogram_freq=0, batch_size=batch_size, write_graph=True, write_grads=True)
modelcheckpoint = ModelCheckpoint('best.h5', monitor='val_loss', verbose=0, save_best_only=True, mode='min', period=1)
#batch generator parameters
params = {'dim': [(maxlen_text,embedding_size),(maxlen_summ,embedding_size)],
'batch_size': batch_size,
'shuffle': True,
'tokenizer':tokenizer,
'embedding_matrix':embedding_matrix,
'maxlen_text':maxlen_text,
'maxlen_summ':maxlen_summ,
'data_dir':data_dir,
'sample_info':None}
#generators
training_generator = ContAllGenerator(partition['train'], labels, **params)
validation_generator = ContAllGenerator(partition['validation'], labels, **params)
# Train model on dataset
model.fit_generator(generator=training_generator,
validation_data=validation_generator,
use_multiprocessing=True,
workers=6,
epochs=epochs,
callbacks=[tensorboard, modelcheckpoint, history])
with open('losses.pickle', 'wb') as handle: pickle.dump(history.losses, handle, protocol=pickle.HIGHEST_PROTOCOL)
with open('accs.pickle', 'wb') as handle: pickle.dump(history.accs, handle, protocol=pickle.HIGHEST_PROTOCOL)
```
|
github_jupyter
|
## Dependencies
```
!pip install --quiet /kaggle/input/kerasapplications
!pip install --quiet /kaggle/input/efficientnet-git
import warnings, glob
from tensorflow.keras import Sequential, Model
import efficientnet.tfkeras as efn
from cassava_scripts import *
seed = 0
seed_everything(seed)
warnings.filterwarnings('ignore')
```
### Hardware configuration
```
# TPU or GPU detection
# Detect hardware, return appropriate distribution strategy
strategy, tpu = set_up_strategy()
AUTO = tf.data.experimental.AUTOTUNE
REPLICAS = strategy.num_replicas_in_sync
print(f'REPLICAS: {REPLICAS}')
```
# Model parameters
```
BATCH_SIZE = 8 * REPLICAS
HEIGHT = 512
WIDTH = 512
CHANNELS = 3
N_CLASSES = 5
TTA_STEPS = 0 # Do TTA if > 0
```
# Augmentation
```
def data_augment(image, label):
p_spatial = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_rotate = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_1 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_2 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# p_pixel_3 = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
p_crop = tf.random.uniform([], 0, 1.0, dtype=tf.float32)
# Flips
image = tf.image.random_flip_left_right(image)
image = tf.image.random_flip_up_down(image)
if p_spatial > .75:
image = tf.image.transpose(image)
# Rotates
if p_rotate > .75:
image = tf.image.rot90(image, k=3) # rotate 270º
elif p_rotate > .5:
image = tf.image.rot90(image, k=2) # rotate 180º
elif p_rotate > .25:
image = tf.image.rot90(image, k=1) # rotate 90º
# # Pixel-level transforms
# if p_pixel_1 >= .4:
# image = tf.image.random_saturation(image, lower=.7, upper=1.3)
# if p_pixel_2 >= .4:
# image = tf.image.random_contrast(image, lower=.8, upper=1.2)
# if p_pixel_3 >= .4:
# image = tf.image.random_brightness(image, max_delta=.1)
# Crops
if p_crop > .7:
if p_crop > .9:
image = tf.image.central_crop(image, central_fraction=.7)
elif p_crop > .8:
image = tf.image.central_crop(image, central_fraction=.8)
else:
image = tf.image.central_crop(image, central_fraction=.9)
elif p_crop > .4:
crop_size = tf.random.uniform([], int(HEIGHT*.8), HEIGHT, dtype=tf.int32)
image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
# # Crops
# if p_crop > .6:
# if p_crop > .9:
# image = tf.image.central_crop(image, central_fraction=.5)
# elif p_crop > .8:
# image = tf.image.central_crop(image, central_fraction=.6)
# elif p_crop > .7:
# image = tf.image.central_crop(image, central_fraction=.7)
# else:
# image = tf.image.central_crop(image, central_fraction=.8)
# elif p_crop > .3:
# crop_size = tf.random.uniform([], int(HEIGHT*.6), HEIGHT, dtype=tf.int32)
# image = tf.image.random_crop(image, size=[crop_size, crop_size, CHANNELS])
return image, label
```
## Auxiliary functions
```
# Datasets utility functions
def resize_image(image, label):
image = tf.image.resize(image, [HEIGHT, WIDTH])
image = tf.reshape(image, [HEIGHT, WIDTH, CHANNELS])
return image, label
def process_path(file_path):
name = get_name(file_path)
img = tf.io.read_file(file_path)
img = decode_image(img)
img, _ = scale_image(img, None)
# img = center_crop(img, HEIGHT, WIDTH)
return img, name
def get_dataset(files_path, shuffled=False, tta=False, extension='jpg'):
dataset = tf.data.Dataset.list_files(f'{files_path}*{extension}', shuffle=shuffled)
dataset = dataset.map(process_path, num_parallel_calls=AUTO)
if tta:
dataset = dataset.map(data_augment, num_parallel_calls=AUTO)
dataset = dataset.map(resize_image, num_parallel_calls=AUTO)
dataset = dataset.batch(BATCH_SIZE)
dataset = dataset.prefetch(AUTO)
return dataset
```
# Load data
```
database_base_path = '/kaggle/input/cassava-leaf-disease-classification/'
submission = pd.read_csv(f'{database_base_path}sample_submission.csv')
display(submission.head())
TEST_FILENAMES = tf.io.gfile.glob(f'{database_base_path}test_tfrecords/ld_test*.tfrec')
NUM_TEST_IMAGES = count_data_items(TEST_FILENAMES)
print(f'GCS: test: {NUM_TEST_IMAGES}')
model_path_list = glob.glob('/kaggle/input/97-cassava-leaf-effnetb3-scl-cce-512x512/*.h5')
model_path_list.sort()
print('Models to predict:')
print(*model_path_list, sep='\n')
```
# Model
```
def encoder_fn(input_shape):
inputs = L.Input(shape=input_shape, name='input_image')
base_model = efn.EfficientNetB3(input_tensor=inputs,
include_top=False,
weights=None,
pooling='avg')
model = Model(inputs=inputs, outputs=base_model.output)
return model
def classifier_fn(input_shape, N_CLASSES, encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
inputs = L.Input(shape=input_shape, name='input_image')
features = encoder(inputs)
features = L.Dropout(.5)(features)
features = L.Dense(512, activation='relu')(features)
features = L.Dropout(.5)(features)
output = L.Dense(N_CLASSES, activation='softmax', name='output', dtype='float32')(features)
output_healthy = L.Dense(1, activation='sigmoid', name='output_healthy', dtype='float32')(features)
output_cmd = L.Dense(1, activation='sigmoid', name='output_cmd', dtype='float32')(features)
model = Model(inputs=inputs, outputs=[output, output_healthy, output_cmd])
return model
with strategy.scope():
encoder = encoder_fn((None, None, CHANNELS))
model = classifier_fn((None, None, CHANNELS), N_CLASSES, encoder, trainable=False)
model.summary()
```
# Test set predictions
```
files_path = f'{database_base_path}test_images/'
test_size = len(os.listdir(files_path))
test_preds = np.zeros((test_size, N_CLASSES))
for model_path in model_path_list:
print(model_path)
K.clear_session()
model.load_weights(model_path)
if TTA_STEPS > 0:
test_ds = get_dataset(files_path, tta=True).repeat()
ct_steps = TTA_STEPS * ((test_size/BATCH_SIZE) + 1)
preds = model.predict(test_ds, steps=ct_steps, verbose=1)[0][:(test_size * TTA_STEPS)]
preds = np.mean(preds.reshape(test_size, TTA_STEPS, N_CLASSES, order='F'), axis=1)
test_preds += preds / len(model_path_list)
else:
test_ds = get_dataset(files_path, tta=False)
x_test = test_ds.map(lambda image, image_name: image)
test_preds += model.predict(x_test)[0] / len(model_path_list)
test_preds = np.argmax(test_preds, axis=-1)
test_names_ds = get_dataset(files_path)
image_names = [img_name.numpy().decode('utf-8') for img, img_name in iter(test_names_ds.unbatch())]
submission = pd.DataFrame({'image_id': image_names, 'label': test_preds})
submission.to_csv('submission.csv', index=False)
display(submission.head())
```
|
github_jupyter
|
# Quantum Machine Learning and TTN
Let's look at the Tree Tensor Network as a model for quantum machine learning.
## What you will learn
1. TTN model
2. Optimization
## Install Blueqat
```
!pip install blueqat
```
The model we are going to build is called TTN. The quantum circuit is as follows.
<img src="../tutorial-ja/img/253_img.png" width="25%">
It has a tree structure.
This circuit uses a one-qubit arbitrary rotation gate (a combination of the $Rz$ and $Ry$ gates) and a two-qubit gate (the $CX$ gate).
More details are as follows.
<img src="../tutorial-ja/img/253_img_2.png" width="35%">
```
from blueqat import Circuit
import matplotlib.pyplot as plt
import numpy as np
import time
%matplotlib inline
```
Configure hyperparameters and other settings.
```
np.random.seed(45)
# Number of optimization steps
nsteps = 2000
# Number of parameters of the quantum circuit to be optimized
nparams = 18
# Fineness of numerical differentiation
h = 0.01
# Learning rate
e = 0.01
# Initial parameter
param_init = [np.random.rand()*np.pi*2 for i in range(nparams)]
# list for containing results
arr = []
#1: train, 2: prediction
mode = 1
```
We create a model of the tree structure.
Set up the input to the quantum circuit and the target label for it, and start learning.
This time, the input data can be selected by arguments.
```
def TTN_Z(a, ran, mode=1):
# Input circuit
init = [Circuit(4).x[0,1], Circuit(4).x[2,3], Circuit(4).x[0], Circuit(4).x[1], Circuit(4).x[2], Circuit(4).x[0,2]]
# Target label
target = [1,1,-1,-1,-1,1]
# Circuit construction
u = init[ran]
u.rz(a[0])[0].ry(a[1])[0].rz(a[2])[0]
u.rz(a[3])[1].ry(a[4])[1].rz(a[5])[1]
u.rz(a[6])[2].ry(a[7])[2].rz(a[8])[2]
u.rz(a[9])[3].ry(a[10])[3].rz(a[11])[3]
u.cx[0,1].cx[2,3]
u.rz(a[12])[1].ry(a[13])[1].rz(a[14])[1]
u.rz(a[15])[3].ry(a[16])[3].rz(a[17])[3]
u.cx[1,3]
# Calculate expectation value from state vector
full = u.run()
expt = sum(np.abs(full[:8])**2)-sum(np.abs(full[8:])**2)
if(mode ==1):
# return error between label and prediction
return (expt - target[ran])**2
else:
return expt
```
Stochastic gradient descent (SGD) is used for learning.
At the start of each step, the input data is randomly selected from the 6 patterns (0 to 5), then the gradient is calculated and the parameters are updated.
In each step, the gradient calculation and parameter update are performed on only one data point, but by repeating the process while randomly selecting the input data, the system eventually learns to minimize the loss function for all of the data.
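Concretely, each parameter is updated with a forward finite-difference estimate of the gradient:

$\theta_j \leftarrow \theta_j - \epsilon\,\frac{L(\theta + h\,e_j) - L(\theta)}{h}$

where $e_j$ is the unit vector for the $j$-th parameter, $h = 0.01$ is the differentiation step, and $\epsilon = 0.01$ is the learning rate, exactly as implemented in the loop below.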
```
start = time.time()
param = param_init.copy()
for i in range(nsteps):
it = np.random.randint(0,6)
loss = TTN_Z(param, it, mode)
arr.append(loss)
param_new = [0 for i in range(nparams)]
for j in range(nparams):
_param = param.copy()
_param[j] += h
param_new[j] = param[j] - e*(TTN_Z(_param, it, mode) - loss)/h
param = param_new
plt.plot(arr)
plt.show()
print(time.time() - start)
```
It converged well.
Let's check it out.
```
target = [1,1,-1,-1,-1,1]
preds = []
for i in range(6):
pred = TTN_Z(param, i, mode=2)
preds.append(pred)
print("Prediction :", pred, " Target :", target[i])
```
From the above, we were able to learn a quantum circuit using the TTN model.
|
github_jupyter
|
```
# Importing needed libraries
import datetime
import os
import pandas as pd
# Fetching the data from official site of Ministry of Health and Family Welfare | Government of India
try:
url = "https://www.mohfw.gov.in/"
dfs = pd.read_html(url)
for i in range(len(dfs)):
df = dfs[i]
if (len(df.columns) == 6):
cols_match = sum(df.columns==['S. No.', 'Name of State / UT', 'Active Cases*',
'Cured/Discharged/Migrated*', 'Deaths**', 'Total Confirmed cases*'])
if (cols_match == 6):
now = datetime.datetime.now()
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
df.to_csv("/home/caesar/covid_19.csv")
break
except:
df = pd.read_csv("/home/caesar/covid_19.csv")
df = df.drop(columns=["Unnamed: 0"])
now = datetime.datetime.fromtimestamp(os.path.getmtime("/home/caesar/covid_19.csv"))
dt_string = now.strftime("%d/%m/%Y %H:%M:%S")
df
# Preprocessing the column data to remove any special characters, since we cannot fully rely on the source formatting.
for col in ['Active Cases*', 'Cured/Discharged/Migrated*', 'Deaths**', 'Total Confirmed cases*' ]:
    if df[col].dtypes=='O':
        df[col] = df[col].str.replace(r'\W', '', regex=True)
# Fetching out the values from the DataFrame
m = 35
states = df["Name of State / UT"][:m]
active_cases = df["Active Cases*"][:m].astype(int)
confirmed_cases = df["Total Confirmed cases*"][:m].astype(int)
casualties = df["Deaths**"][:m].astype(int)
# Plotting the bar graph!
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
max_cases = max(active_cases)
total_active_cases = int(df["Active Cases*"][36])
total_death_casualties = df["Deaths**"][36]
total_cured_cases = df["Cured/Discharged/Migrated*"][36]
total_confirmed_cases = df["Total Confirmed cases*"][36]
barWidth = 0.4
r1 = np.arange(len(active_cases))
r2 = [x + barWidth for x in r1]
plt.figure(figsize=(16, 8))
plt.bar(r1, active_cases, color="royalblue", width=barWidth, edgecolor="white", label="Active Cases")
plt.bar(r2, casualties, color="orchid", width=barWidth, edgecolor="white", label="Death Casualties")
plt.xlabel("States / UT", fontweight="bold", fontsize=16)
plt.xticks([r + barWidth - 0.19 for r in range(len(active_cases))], states, rotation="78", fontsize=14)
plt.ylabel("Number of COVID-19 cases", fontweight="bold", fontsize=16)
plt.legend(fontsize=16, loc="upper right")
plt.text(-1, max_cases, "Total active cases: " + str(total_active_cases), fontsize=16)
plt.text(-1, max_cases - 2000, "Total death casualties: " + str(total_death_casualties), fontsize=16)
plt.text(-1, max_cases - 4000, "Total cured cases: " + str(total_cured_cases), fontsize=16)
plt.title("Data updated at: " + dt_string, loc="center")
plt.show()
# A more visualistic comparison!
# Mortality rate: total deaths divided by total confirmed cases
mortality_rate = str(round(int(total_death_casualties) / int(total_confirmed_cases), 2))
sizes=[total_confirmed_cases, total_active_cases]
names=["Total Confirmed Cases", "Total Active Cases"]
plt.pie(sizes, explode=(0, 0.1), labels=names, autopct='%1.1f%%', shadow=True, startangle=90)
plt.text(-1, 1.2, "Mortality Rate: " + mortality_rate, fontsize=16)
plt.show()
# In case you need a more fancy donut-like graph!
from palettable.colorbrewer.qualitative import Pastel1_7
sizes=[total_confirmed_cases, total_active_cases]
names=["Total Confirmed Cases", "Total Active Cases"]
my_circle=plt.Circle((0,0), 0.7, color='white')
plt.pie(sizes, labels=names, colors=Pastel1_7.hex_colors, autopct='%1.1f%%', explode=(0, 0.1))
p=plt.gcf()
p.gca().add_artist(my_circle)
plt.text(-1, 1.2, "Mortality Rate: " + mortality_rate, fontsize=16)
plt.show()
```
|
github_jupyter
|
# Artificial Intelligence Nanodegree
## Machine Translation Project
In this notebook, sections that end with **'(IMPLEMENTATION)'** in the header indicate that the following blocks of code will require additional functionality which you must provide. Please be sure to read the instructions carefully!
## Introduction
In this notebook, you will build a deep neural network that functions as part of an end-to-end machine translation pipeline. Your completed pipeline will accept English text as input and return the French translation.
- **Preprocess** - You'll convert text to sequence of integers.
- **Models** - Create models which accept a sequence of integers as input and return a probability distribution over possible translations. After learning about the basic types of neural networks that are often used for machine translation, you will engage in your own investigations to design your own model!
- **Prediction** - Run the model on English text.
```
%load_ext autoreload
%aimport helper, tests
%autoreload 1
import collections
import helper
import numpy as np
import project_tests as tests
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Model
from keras.layers import GRU, Input, Dense, TimeDistributed, Activation, RepeatVector, Bidirectional
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.losses import sparse_categorical_crossentropy
```
### Verify access to the GPU
The following test applies only if you expect to be using a GPU, e.g., while running in a Udacity Workspace or using an AWS instance with GPU support. Run the next cell, and verify that the device_type is "GPU".
- If the device is not GPU & you are running from a Udacity Workspace, then save your workspace with the icon at the top, then click "enable" at the bottom of the workspace.
- If the device is not GPU & you are running from an AWS instance, then refer to the cloud computing instructions in the classroom to verify your setup steps.
```
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
```
## Dataset
We begin by investigating the dataset that will be used to train and evaluate your pipeline. The most common datasets used for machine translation are from [WMT](http://www.statmt.org/). However, that will take a long time to train a neural network on. We'll be using a dataset we created for this project that contains a small vocabulary. You'll be able to train your model in a reasonable time with this dataset.
### Load Data
The data is located in `data/small_vocab_en` and `data/small_vocab_fr`. The `small_vocab_en` file contains English sentences with their French translations in the `small_vocab_fr` file. Load the English and French data from these files from running the cell below.
```
# Load English data
english_sentences = helper.load_data('data/small_vocab_en')
# Load French data
french_sentences = helper.load_data('data/small_vocab_fr')
print('Dataset Loaded')
```
### Files
Each line in `small_vocab_en` contains an English sentence with the respective translation in each line of `small_vocab_fr`. View the first two lines from each file.
```
for sample_i in range(2):
print('small_vocab_en Line {}: {}'.format(sample_i + 1, english_sentences[sample_i]))
print('small_vocab_fr Line {}: {}'.format(sample_i + 1, french_sentences[sample_i]))
```
From looking at the sentences, you can see they have been preprocessed already. The punctuation has been delimited using spaces. All the text has been converted to lowercase. This should save you some time, but the text requires more preprocessing.
### Vocabulary
The complexity of the problem is determined by the complexity of the vocabulary. A more complex vocabulary is a more complex problem. Let's look at the complexity of the dataset we'll be working with.
```
english_words_counter = collections.Counter([word for sentence in english_sentences for word in sentence.split()])
french_words_counter = collections.Counter([word for sentence in french_sentences for word in sentence.split()])
print('{} English words.'.format(len([word for sentence in english_sentences for word in sentence.split()])))
print('{} unique English words.'.format(len(english_words_counter)))
print('10 Most common words in the English dataset:')
print('"' + '" "'.join(list(zip(*english_words_counter.most_common(10)))[0]) + '"')
print()
print('{} French words.'.format(len([word for sentence in french_sentences for word in sentence.split()])))
print('{} unique French words.'.format(len(french_words_counter)))
print('10 Most common words in the French dataset:')
print('"' + '" "'.join(list(zip(*french_words_counter.most_common(10)))[0]) + '"')
```
For comparison, _Alice's Adventures in Wonderland_ contains 2,766 unique words of a total of 15,500 words.
## Preprocess
For this project, you won't use text data as input to your model. Instead, you'll convert the text into sequences of integers using the following preprocess methods:
1. Tokenize the words into ids
2. Add padding to make all the sequences the same length.
Time to start preprocessing the data...
### Tokenize (IMPLEMENTATION)
For a neural network to predict on text data, it first has to be turned into data it can understand. Text data like "dog" is a sequence of ASCII character encodings. Since a neural network is a series of multiplication and addition operations, the input data needs to be number(s).
We can turn each character into a number or each word into a number. These are called character and word ids, respectively. Character ids are used for character level models that generate text predictions for each character. A word level model uses word ids that generate text predictions for each word. Word level models tend to learn better, since they are lower in complexity, so we'll use those.
Turn each sentence into a sequence of words ids using Keras's [`Tokenizer`](https://keras.io/preprocessing/text/#tokenizer) function. Use this function to tokenize `english_sentences` and `french_sentences` in the cell below.
Running the cell will run `tokenize` on sample data and show output for debugging.
```
def tokenize(x):
"""
Tokenize x
:param x: List of sentences/strings to be tokenized
:return: Tuple of (tokenized x data, tokenizer used to tokenize x)
"""
# TODO: Implement
#x_tk = Tokenizer(char_level=True)
x_tk = Tokenizer(num_words=None, char_level=False)
x_tk.fit_on_texts(x)
return x_tk.texts_to_sequences(x), x_tk
#return None, None
tests.test_tokenize(tokenize)
# Tokenize Example output
text_sentences = [
'The quick brown fox jumps over the lazy dog .',
'By Jove , my quick study of lexicography won a prize .',
'This is a short sentence .']
text_tokenized, text_tokenizer = tokenize(text_sentences)
print(text_tokenizer.word_index)
print()
for sample_i, (sent, token_sent) in enumerate(zip(text_sentences, text_tokenized)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(sent))
print(' Output: {}'.format(token_sent))
```
### Padding (IMPLEMENTATION)
When batching the sequence of word ids together, each sequence needs to be the same length. Since sentences are dynamic in length, we can add padding to the end of the sequences to make them the same length.
Make sure all the English sequences have the same length and all the French sequences have the same length by adding padding to the **end** of each sequence using Keras's [`pad_sequences`](https://keras.io/preprocessing/sequence/#pad_sequences) function.
```
def pad(x, length=None):
"""
Pad x
:param x: List of sequences.
:param length: Length to pad the sequence to. If None, use length of longest sequence in x.
:return: Padded numpy array of sequences
"""
# TODO: Implement
#return None
if length is None:
length = max([len(sentence) for sentence in x])
#return pad_sequences(x, maxlen=length, padding='post')
return pad_sequences(x, maxlen=length, padding='post', truncating='post')
tests.test_pad(pad)
# Pad Tokenized output
test_pad = pad(text_tokenized)
for sample_i, (token_sent, pad_sent) in enumerate(zip(text_tokenized, test_pad)):
print('Sequence {} in x'.format(sample_i + 1))
print(' Input: {}'.format(np.array(token_sent)))
print(' Output: {}'.format(pad_sent))
```
### Preprocess Pipeline
Your focus for this project is to build neural network architecture, so we won't ask you to create a preprocess pipeline. Instead, we've provided you with the implementation of the `preprocess` function.
```
def preprocess(x, y):
"""
Preprocess x and y
:param x: Feature List of sentences
:param y: Label List of sentences
:return: Tuple of (Preprocessed x, Preprocessed y, x tokenizer, y tokenizer)
"""
preprocess_x, x_tk = tokenize(x)
preprocess_y, y_tk = tokenize(y)
preprocess_x = pad(preprocess_x)
preprocess_y = pad(preprocess_y)
# Keras's sparse_categorical_crossentropy function requires the labels to be in 3 dimensions
preprocess_y = preprocess_y.reshape(*preprocess_y.shape, 1)
return preprocess_x, preprocess_y, x_tk, y_tk
preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer =\
preprocess(english_sentences, french_sentences)
max_english_sequence_length = preproc_english_sentences.shape[1]
max_french_sequence_length = preproc_french_sentences.shape[1]
english_vocab_size = len(english_tokenizer.word_index)
french_vocab_size = len(french_tokenizer.word_index)
print('Data Preprocessed')
print("Max English sentence length:", max_english_sequence_length)
print("Max French sentence length:", max_french_sequence_length)
print("English vocabulary size:", english_vocab_size)
print("French vocabulary size:", french_vocab_size)
```
## Models
In this section, you will experiment with various neural network architectures.
You will begin by training four relatively simple architectures.
- Model 1 is a simple RNN
- Model 2 is an RNN with Embedding
- Model 3 is a Bidirectional RNN
- Model 4 is an optional Encoder-Decoder RNN
After experimenting with the four simple architectures, you will construct a deeper architecture that is designed to outperform all four models.
### Ids Back to Text
The neural network will be translating the input to word ids, which isn't the final form we want. We want the French translation. The function `logits_to_text` will bridge the gap between the logits from the neural network and the French translation. You'll be using this function to better understand the output of the neural network.
```
def logits_to_text(logits, tokenizer):
"""
Turn logits from a neural network into text using the tokenizer
:param logits: Logits from a neural network
:param tokenizer: Keras Tokenizer fit on the labels
:return: String that represents the text of the logits
"""
index_to_words = {id: word for word, id in tokenizer.word_index.items()}
index_to_words[0] = '<PAD>'
return ' '.join([index_to_words[prediction] for prediction in np.argmax(logits, 1)])
print('`logits_to_text` function loaded.')
```
### Model 1: RNN (IMPLEMENTATION)

A basic RNN model is a good baseline for sequence data. In this model, you'll build an RNN that translates English to French.
```
def simple_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a basic RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Build the layers
learning_rate = 0.01
input_seq = Input(input_shape[1:])
rnn = GRU(output_sequence_length, return_sequences=True)(input_seq)
#logits = TimeDistributed(Dense(french_vocab_size))(rnn)
logits = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(rnn)
#model = None
#model = Model(input_seq, Activation('softmax')(logits))
model = Model(inputs=input_seq, outputs=logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
tests.test_simple_model(simple_model)
# Reshaping the input to work with a basic RNN
tmp_x = pad(preproc_english_sentences, max_french_sequence_length)
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
simple_rnn_model = simple_model(
tmp_x.shape,
max_french_sequence_length,
english_vocab_size,
french_vocab_size)
simple_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(simple_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 2: Embedding (IMPLEMENTATION)

You've turned the words into ids, but there's a better representation of a word. This is called word embeddings. An embedding is a vector representation of the word that is close to similar words in n-dimensional space, where the n represents the size of the embedding vectors.
In this model, you'll create an RNN model using embedding.
```
def embed_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a RNN model using word embedding on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
learning_rate = 0.01
input_seq = Input(input_shape[1:])
embedding_layer = Embedding(input_dim=english_vocab_size, output_dim=output_sequence_length, mask_zero=False)(input_seq)
rnn = GRU(output_sequence_length, return_sequences=True)(embedding_layer)
logits = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(rnn)
model = Model(inputs=input_seq, outputs=logits)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
return model
#return None
tests.test_embed_model(embed_model)
# TODO: Reshape the input
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2]))
# TODO: Train the neural network
embed_rnn_model = embed_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
embed_rnn_model.summary()
embed_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# TODO: Print prediction(s)
print(logits_to_text(embed_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 3: Bidirectional RNNs (IMPLEMENTATION)

One restriction of an RNN is that it can't see the future input, only the past. This is where bidirectional recurrent neural networks come in: they are able to use future context as well.
```
from keras.layers import Bidirectional
def bd_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a bidirectional RNN model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
#Config Hyperparameters
learning_rate = 0.01
#Create Model
inputs = Input(shape=input_shape[1:])
hidden_layer = Bidirectional(GRU(output_sequence_length, return_sequences=True))(inputs)
outputs = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(hidden_layer)
#Create Model from parameters defined above
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_bd_model(bd_model)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# TODO: Train and Print prediction(s)
# Train the neural network
bd_rnn_model = bd_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
bd_rnn_model.summary()
bd_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(bd_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 4: Encoder-Decoder (OPTIONAL)
Time to look at encoder-decoder models. This model is made up of an encoder and decoder. The encoder creates a matrix representation of the sentence. The decoder takes this matrix as input and predicts the translation as output.
Create an encoder-decoder model in the cell below.
```
from keras.layers import RepeatVector
def encdec_model(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train an encoder-decoder model on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# OPTIONAL: Implement
#Config Hyperparameters
learning_rate = 0.01
latent_dim = 128
#Config Encoder
encoder_inputs = Input(shape=input_shape[1:])
encoder_gru = GRU(output_sequence_length)(encoder_inputs)
encoder_outputs = Dense(latent_dim, activation='relu')(encoder_gru)
#Config Decoder
decoder_inputs = RepeatVector(output_sequence_length)(encoder_outputs)
decoder_gru = GRU(latent_dim, return_sequences=True)(decoder_inputs)
output_layer = TimeDistributed(Dense(french_vocab_size, activation='softmax'))
outputs = output_layer(decoder_gru)
#Create Model from parameters defined above
model = Model(inputs=encoder_inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_encdec_model(encdec_model)
# OPTIONAL: Train and Print prediction(s)
tmp_x = pad(preproc_english_sentences, preproc_french_sentences.shape[1])
tmp_x = tmp_x.reshape((-1, preproc_french_sentences.shape[-2], 1))
# Train the neural network
encdec_rnn_model = encdec_model(
tmp_x.shape,
preproc_french_sentences.shape[1],
len(english_tokenizer.word_index) + 1,
len(french_tokenizer.word_index) + 1)
encdec_rnn_model.summary()
encdec_rnn_model.fit(tmp_x, preproc_french_sentences, batch_size=1024, epochs=10, validation_split=0.2)
# Print prediction(s)
print(logits_to_text(encdec_rnn_model.predict(tmp_x[:1])[0], french_tokenizer))
```
### Model 5: Custom (IMPLEMENTATION)
Use everything you learned from the previous models to create a model that incorporates embedding and a bidirectional RNN into one model.
```
def model_final(input_shape, output_sequence_length, english_vocab_size, french_vocab_size):
"""
Build and train a model that incorporates embedding, encoder-decoder, and bidirectional RNN on x and y
:param input_shape: Tuple of input shape
:param output_sequence_length: Length of output sequence
:param english_vocab_size: Number of unique English words in the dataset
:param french_vocab_size: Number of unique French words in the dataset
:return: Keras model built, but not trained
"""
# TODO: Implement
#Config Hyperparameters
learning_rate = 0.01
latent_dim = 128
#Config Model
inputs = Input(shape=input_shape[1:])
embedding_layer = Embedding(input_dim=english_vocab_size, output_dim=output_sequence_length, mask_zero=False)(inputs)
#bd_layer = Bidirectional(GRU(output_sequence_length))(embedding_layer)
bd_layer = Bidirectional(GRU(256))(embedding_layer)
encoding_layer = Dense(latent_dim, activation='relu')(bd_layer)
decoding_layer = RepeatVector(output_sequence_length)(encoding_layer)
#output_layer = Bidirectional(GRU(latent_dim, return_sequences=True))(decoding_layer)
output_layer = Bidirectional(GRU(256, return_sequences=True))(decoding_layer)
outputs = TimeDistributed(Dense(french_vocab_size, activation='softmax'))(output_layer)
#Create Model from parameters defined above
model = Model(inputs=inputs, outputs=outputs)
model.compile(loss=sparse_categorical_crossentropy,
optimizer=Adam(learning_rate),
metrics=['accuracy'])
#return None
return model
tests.test_model_final(model_final)
print('Final Model Loaded')
# TODO: Train the final model
```
## Prediction (IMPLEMENTATION)
```
def final_predictions(x, y, x_tk, y_tk):
"""
Gets predictions using the final model
:param x: Preprocessed English data
:param y: Preprocessed French data
:param x_tk: English tokenizer
:param y_tk: French tokenizer
"""
# TODO: Train neural network using model_final
#model = None
#model = model_final(x.shape, y.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)
model = model_final(x.shape, y.shape[1], len(x_tk.word_index) + 1, len(y_tk.word_index) + 1)
model.summary()
#model.fit(x, y, batch_size=1024, epochs=10, validation_split=0.2)
model.fit(x, y, batch_size=512, epochs=10, validation_split=0.2)
## DON'T EDIT ANYTHING BELOW THIS LINE
y_id_to_word = {value: key for key, value in y_tk.word_index.items()}
y_id_to_word[0] = '<PAD>'
sentence = 'he saw a old yellow truck'
sentence = [x_tk.word_index[word] for word in sentence.split()]
sentence = pad_sequences([sentence], maxlen=x.shape[-1], padding='post')
sentences = np.array([sentence[0], x[0]])
predictions = model.predict(sentences, len(sentences))
print('Sample 1:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[0]]))
print('Il a vu un vieux camion jaune')
print('Sample 2:')
print(' '.join([y_id_to_word[np.argmax(x)] for x in predictions[1]]))
print(' '.join([y_id_to_word[np.max(x)] for x in y[0]]))
final_predictions(preproc_english_sentences, preproc_french_sentences, english_tokenizer, french_tokenizer)
```
## Submission
When you're ready to submit, complete the following steps:
1. Review the [rubric](https://review.udacity.com/#!/rubrics/1004/view) to ensure your submission meets all requirements to pass
2. Generate an HTML version of this notebook
- Run the next cell to attempt automatic generation (this is the recommended method in Workspaces)
- Navigate to **FILE -> Download as -> HTML (.html)**
- Manually generate a copy using `nbconvert` from your shell terminal
```
$ pip install nbconvert
$ python -m nbconvert machine_translation.ipynb
```
3. Submit the project
- If you are in a Workspace, simply click the "Submit Project" button (bottom towards the right)
- Otherwise, add the following files into a zip archive and submit them
- `helper.py`
- `machine_translation.ipynb`
- `machine_translation.html`
- You can export the notebook by navigating to **File -> Download as -> HTML (.html)**.
### Generate the html
**Save your notebook before running the next cell to generate the HTML output.** Then submit your project.
```
# Save before you run this cell!
!!jupyter nbconvert *.ipynb
```
## Optional Enhancements
This project focuses on learning various network architectures for machine translation, but we don't evaluate the models according to best practices by splitting the data into separate test & training sets -- so the model accuracy is overstated. Use the [`sklearn.model_selection.train_test_split()`](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function to create separate training & test datasets, then retrain each of the models using only the training set and evaluate the prediction accuracy using the hold out test set. Does the "best" model change?
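Below is a minimal sketch of that enhancement (my own illustration, not part of the project code). It assumes the preprocessed arrays and tokenizers from earlier cells (`preproc_english_sentences`, `preproc_french_sentences`, `english_tokenizer`, `french_tokenizer`) are still in scope; the retraining calls are left commented as a guide.
```python
from sklearn.model_selection import train_test_split

# Hold out 20% of the sentence pairs as a test set (illustrative split).
x_train, x_test, y_train, y_test = train_test_split(
    preproc_english_sentences, preproc_french_sentences,
    test_size=0.2, random_state=42)

# Retrain on the training split only, then score the held-out split:
# final_rnn = model_final(x_train.shape, y_train.shape[1],
#                         len(english_tokenizer.word_index) + 1,
#                         len(french_tokenizer.word_index) + 1)
# final_rnn.fit(x_train, y_train, batch_size=512, epochs=10, validation_split=0.1)
# final_rnn.evaluate(x_test, y_test)
```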
|
github_jupyter
|
```
%matplotlib inline
import numpy as np
import pandas as pd
import math
from scipy import stats
import pickle
from causality.analysis.dataframe import CausalDataFrame
from sklearn.linear_model import LinearRegression
import datetime
import matplotlib
import matplotlib.pyplot as plt
matplotlib.rcParams['font.sans-serif'] = "Gotham"
matplotlib.rcParams['font.family'] = "sans-serif"
import plotly
import plotly.graph_objs as go
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
init_notebook_mode(connected=True)
```
Open the data from past notebooks and correct them to only include years that are common between the data structures (>1999).
```
with open('VariableData/money_data.pickle', 'rb') as f:
income_data, housing_data, rent_data = pickle.load(f)
with open('VariableData/demographic_data.pickle', 'rb') as f:
demographic_data = pickle.load(f)
with open('VariableData/endowment.pickle', 'rb') as f:
endowment = pickle.load(f)
with open('VariableData/expander.pickle', 'rb') as f:
expander = pickle.load(f)
endowment = endowment[endowment['FY'] > 1997].reset_index()
endowment.drop('index', axis=1, inplace=True)
demographic_data = demographic_data[demographic_data['year'] > 1999].reset_index()
demographic_data.drop('index', axis=1, inplace=True)
income_data = income_data[income_data['year'] > 1999].reset_index()
income_data.drop('index', axis=1, inplace=True)
housing_data = housing_data[housing_data['year'] > 1999].reset_index()
housing_data.drop('index', axis=1, inplace=True)
rent_data = rent_data[rent_data['year'] > 1999].reset_index()
rent_data.drop('index', axis=1, inplace=True)
```
Read in the data on Harvard owned land and Cambridge's property records. Restrict the Harvard data to Cambridge, MA.
```
harvard_land = pd.read_excel("Spreadsheets/2018_building_reference_list.xlsx", header=3)
harvard_land = harvard_land[harvard_land['City'] == 'Cambridge']
cambridge_property = pd.read_excel("Spreadsheets/cambridge_properties.xlsx")
```
Restrict the Cambridge data to Harvard properties, and only use relevant columns.
```
cambridge_property = cambridge_property[cambridge_property['Owner_Name'].isin(['PRESIDENT & FELLOWS OF HARVARD COLLEGE', 'PRESIDENT & FELLOW OF HARVARD COLLEGE'])]
cambridge_property = cambridge_property[['Address', 'PropertyClass', 'LandArea', 'BuildingValue', 'LandValue', 'AssessedValue', 'SalePrice', 'SaleDate', 'Owner_Name']]
```
Fix the time data.
```
cambridge_property['SaleDate'] = pd.to_datetime(cambridge_property['SaleDate'], infer_datetime_format=True)
clean_property = cambridge_property.drop_duplicates(subset=['Address'])
clean_property.head()
type(clean_property['SaleDate'])
```
Only look at properties purchased after 2000.
```
recent_property = clean_property[clean_property['SaleDate'] > datetime.date(2000, 1, 1)]
property_numbers = recent_property[['LandArea', 'AssessedValue', 'SalePrice']]
num_recent = recent_property['Address'].count()
sum_properties = property_numbers.sum()
sum_properties
full_property_numbers = clean_property[['LandArea', 'AssessedValue', 'SalePrice']]
sum_full = full_property_numbers.sum()
delta_property = sum_properties / sum_full
delta_property
```
What can be gathered from above?
Since the year 2000, Harvard has increased its land presence in Cambridge by about 3%, corresponding to about 2% of its overall assessed value: an increase of 281,219 square feet and \$115,226,500 in assessed value. Although the assessed value has risen by that much, Harvard paid only \$57,548,900 for these properties at their times of purchase.
To make some adjustments for inflation:
Note that the inflation rate since 2000 is ~37.8% (https://data.bls.gov/timeseries/CUUR0000SA0L1E?output_view=pct_12mths).
```
inflation_data = pd.read_excel("Spreadsheets/inflation.xlsx", header=11)
inflation_data = inflation_data[['Year', 'Jan']]
inflation_data['Year'] = pd.to_datetime(inflation_data['Year'], format='%Y')
inflation_data['CumulativeInflation'] = inflation_data['Jan'].cumsum()
inflation_data.rename(columns={'Year' : 'SaleDate'}, inplace=True)
recent_property['SaleDate'] = recent_property['SaleDate'].dt.year
inflation_data['SaleDate'] = inflation_data['SaleDate'].dt.year
recent_property = pd.merge(recent_property, inflation_data, how="left", on=['SaleDate'])
recent_property = recent_property.drop('Jan', 1)
recent_property['TodaySale'] = (1 + (recent_property['CumulativeInflation'] / 100)) * recent_property['SalePrice']
today_sale_sum = recent_property['TodaySale'].sum()
today_sale_sum
sum_properties['AssessedValue'] - today_sale_sum
```
Hence, adjusted for inflation, the total sale price of the property Harvard has acquired since 2000 is \$65,929,240.
The difference between this value and the assessed value of the property (in 2018) is \$49,297,260, showing that Harvard's property has appreciated to more than twice what inflation alone would account for, a clearly advantageous dynamic for Harvard.
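As a quick sanity check of the figures above, here is a minimal sketch using the numbers quoted in the text rather than the notebook's own dataframes:
```python
# Values quoted in the text above (not recomputed from the dataframes).
assessed_value_since_2000 = 115226500   # assessed value of property acquired since 2000
inflation_adjusted_sales = 65929240     # sale prices adjusted for inflation
print(assessed_value_since_2000 - inflation_adjusted_sales)  # 49297260
```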
```
sorted_df = recent_property.sort_values(by=['SaleDate'])
sorted_df = sorted_df.reset_index().drop('index', 1)
sorted_df['CumLand'] = sorted_df['LandArea'].cumsum()
sorted_df['CumValue'] = sorted_df['AssessedValue'].cumsum()
sorted_df
```
Graph the results.
```
def fitter(x, y, regr_x):
"""
Use linear regression to make a best fit line for a set of data.
Args:
x (numpy array): The independent variable.
y (numpy array): The dependent variable.
regr_x (numpy array): The array used to extrapolate the regression.
"""
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
return (slope * regr_x + intercept)
years = sorted_df['SaleDate'].as_matrix()
cum_land = sorted_df['CumLand'].as_matrix()
cum_value = sorted_df['CumValue'].as_matrix()
regr = np.arange(2000, 2012)
line0 = fitter(years, cum_land, regr)
trace0 = go.Scatter(
x = years,
y = cum_land,
mode = 'markers',
name='Harvard Land\n In Cambridge',
marker=go.Marker(color='#601014')
)
fit0 = go.Scatter(
x = regr,
y = line0,
mode='lines',
marker=go.Marker(color='#D2232A'),
name='Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = "The Change In Harvard's Land in Cambridge Since 2000",
font = dict(family='Gotham', size=18),
yaxis=dict(
title='Land Accumulated Since 2000 (Sq. Feet)'
),
xaxis=dict(
title='Year')
)
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename="land_changes")
graph2_df = pd.DataFrame(list(zip(regr, line0)))
graph2_df.to_csv('graph2.csv')
def grapher(x, y, city, title, ytitle, xtitle, filename):
slope, intercept, r_value, p_value, std_err = stats.linregress(x, y)
fit = slope * x + intercept
trace0 = go.Scatter(
x = x,
y = y,
mode = 'markers',
name=city,
marker=go.Marker(color='#D2232A')
)
fit0 = go.Scatter(
x = x,
y = fit,
mode='lines',
marker=go.Marker(color='#AC1D23'),
name='Linear Fit'
)
data = [trace0, fit0]
layout = go.Layout(
title = title,
font = dict(family='Gotham', size=12),
yaxis=dict(
title=ytitle
),
xaxis=dict(
title=xtitle)
)
fig = go.Figure(data=data, layout=layout)
return iplot(fig, filename=filename)
len(line0)
```
Restrict the demographic data to certain years (up to 2012) in order to fit the data well.
```
demographic_data = demographic_data[demographic_data['year'] < 2011]
rent_data = rent_data[rent_data['year'] < 2011]
housing_data = housing_data[housing_data['year'] < 2011]
x = cum_land
y = pd.to_numeric(demographic_data['c_black']).as_matrix()
z1 = pd.to_numeric(rent_data['cambridge']).as_matrix()
z2 = pd.to_numeric(housing_data['cambridge']).as_matrix()
endow_black = grapher(x, y, "Cambridge", "The Correlation Between Harvard Land Change and Black Population", "Black Population of Cambridge", "Land Change (Sq. Feet)", "land_black")
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1, 'z2': z2})
causal_land_black = X.zplot(x='x', y='y', z=['z1', 'z2'], z_types={'z1': 'c', 'z2': 'c'}, kind='line', color="#D2232A")
fig = causal_land_black.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Black Population", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/black_land.svg', format='svg', dpi=2400, bbox_inches='tight')
z2
graph9_df = pd.DataFrame(X)
graph9_df.to_csv('graph9.csv')
y = pd.to_numeric(rent_data['cambridge']).as_matrix()
z1 = pd.to_numeric(housing_data['cambridge']).as_matrix()
X = CausalDataFrame({'x': x, 'y': y, 'z1': z1})
causal_land_rent = X.zplot(x='x', y='y', z=['z1'], z_types={'z1': 'c'}, kind='line', color="#D2232A")
fig = causal_land_rent.get_figure()
fig.set_size_inches(9, 5.5)
ax = plt.gca()
ax.set_frame_on(False)
ax.get_yaxis().set_visible(False)
ax.legend_.remove()
ax.set_title("The Controlled Correlation Between Land Use (Square Feet) and Rent", fontproperties=gotham_black, size=10, color="#595959")
ax.set_xlabel("Land Use", fontproperties=gotham_book, fontsize=10, color="#595959")
for tick in ax.get_xticklabels():
tick.set_fontproperties(gotham_book)
tick.set_fontsize(10)
tick.set_color("#595959")
fig.savefig('images/rent_land.svg', format='svg', dpi=1200, bbox_inches='tight')
```
|
github_jupyter
|
```
import git_access,api_access,git2repo
import json
from __future__ import division
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
import networkx as nx
import re
import git2data
import social_interaction
access_token = '--'
repo_owner = 'jankotek'
source_type = 'github_repo'
git_url = 'git://github.com/jankotek/jankotek-mapdb.git'
api_base_url = 'http://api.github.com'
repo_name = 'jankotek-mapdb'
url_type = 'issues'
url_details = 'comments'
access_project = api_access.git_api_access(access_token,repo_owner,source_type,git_url,api_base_url,repo_name)
issue_comment = access_project.get_comments(url_type = url_type,url_details = url_details)
issue = access_project.get_issues(url_type = url_type,url_details = '')
events = access_project.get_events(url_type = url_type,url_details = 'events')
git_repo = git2repo.git2repo(git_url, repo_name)
repo = git_repo.clone_repo()
obj = repo.get('95693814f4c733ece3563b51c317c89203f1ff59')
print(obj)
commits = git_repo.get_commits()
committed_files = git_repo.get_committed_files()
issue_df = pd.DataFrame(issue, columns = ['Issue_number','user_logon','author_type','Desc','title'])
commit_df = pd.DataFrame(commits, columns=['commit_number', 'message', 'parent'])
events_df = pd.DataFrame(events, columns=['event_type', 'issue_number', 'commit_number'])
import re
issue_commit_dict = {}
for i in range(commit_df.shape[0]):
_commit_number = commit_df.loc[i,'commit_number']
_commit_message = commit_df.loc[i,'message']
_commit_parent = commit_df.loc[i,'parent']
res = re.search("#[0-9]+$", _commit_message)
if res is not None:
_issue_id = res.group(0)[1:]
issue_commit_dict[_commit_number] = _issue_id
links = events_df.dropna()
links.reset_index(inplace=True)
for i in range(links.shape[0]):
if links.loc[i,'commit_number'] in issue_commit_dict.keys():
continue
else:
issue_commit_dict[links.loc[i,'commit_number']] = links.loc[i,'issue_number']
issue_commit_temp = []
commit_df['issues'] = pd.Series([None]*commit_df.shape[0])
issue_df['commits'] = pd.Series([None]*issue_df.shape[0])
for i in range(commit_df.shape[0]):
_commit_number = commit_df.loc[i,'commit_number']
_commit_message = commit_df.loc[i,'message']
_commit_parent = commit_df.loc[i,'parent']
res = re.search("#[0-9]+$", _commit_message)
if res is not None:
_issue_id = res.group(0)[1:]
issue_commit_temp.append([_commit_number,np.int64(_issue_id)])
issue_commit_list_1 = np.array(issue_commit_temp)
links = events_df.dropna()
links.reset_index(inplace=True)
issue_commit_temp = []
for i in range(links.shape[0]):
    if links.loc[i,'commit_number'] in issue_commit_list_1[:,0]:
continue
else:
issue_commit_temp.append([links.loc[i,'commit_number'],links.loc[i,'issue_number']])
issue_commit_list_2 = np.array(issue_commit_temp)
issue_commit_list = np.append(issue_commit_list_1,issue_commit_list_2, axis = 0)
df = pd.DataFrame(issue_commit_list, columns = ['commit_id','issues']).drop_duplicates()
df = df.drop_duplicates()
df_unique_issues = df.issues.unique()
for i in df_unique_issues:
i = np.int64(i)
commits = df[df['issues'] == i]['commit_id']
x = issue_df['Issue_number'] == i
j = x[x == True].index.values
if len(j) != 1:
continue
issue_df.at[j[0],'commits'] = commits.values
df_unique_commits = df.commit_id.unique()
for i in df_unique_commits:
issues = df[df['commit_id'] == i]['issues']
x = commit_df['commit_number'] == i
j = x[x == True].index.values
if len(j) != 1:
continue
commit_df.at[j[0],'issues'] = issues.values
commit_df
git_data = git2data.git2data(access_token,repo_owner,source_type,git_url,api_base_url,repo_name)
git_data.get_api_data()
git_data.get_commit_data()
committed_files_data = git_data.get_committed_files()
committed_files_df = pd.DataFrame(committed_files_data, columns = ['commit_id','file_id','file_mode','file_path'])
issue_data,commit_data = git_data.create_link()
issue_data.to_pickle('rails_issue.pkl')
commit_data.to_pickle('rails_commit.pkl')
committed_files_df.to_pickle('rails_committed_file.pkl')
type(committed_files_df.loc[1,'file_id'])
git_data.create_data()
si = social_interaction.create_social_inteaction_graph('rspec-rails')
x = si.get_user_node_degree()
x
import magician_package
x = magician_package.learner('pitsE', 1, 'C:\\Users\\suvod\\AI4SE\\magician_package\\magician_package\\data\\preprocessed\\')
import magician_package
data_path = '/home/suvo/AI4SE/pypi/magician_package/magician_package/data/preprocessed/'
data_file = 'pitsD'
result = magician_package.learner(data_file,'1',data_path)
```
|
github_jupyter
|
# Data scraping:
> # An introduction to Beautifulsoup
***
王成军
[email protected]
Computational Communication (计算传播网) http://computational-communication.com
# Problems to solve
- Page parsing
- Getting source data hidden behind Javascript
- Automatic pagination
- Automatic login
- Connecting to API endpoints
```
import urllib2
from bs4 import BeautifulSoup
```
- For general data scraping, urllib2 combined with beautifulsoup is usually enough.
- This works especially well for pages whose URL changes in a regular way when you turn the page: you only need to handle the regular URL pattern (see the sketch after this list).
- A simple example is scraping posts about a given keyword from the Tianya forum.
- On the Tianya forum, the first page of posts about smog (雾霾) is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=0&order=8&k=雾霾
- The second page is:
http://bbs.tianya.cn/list.jsp?item=free&nextid=1&order=8&k=雾霾
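A minimal sketch of that idea (my own illustration, not part of the original notebook): only the `nextid` parameter changes between pages, so the page URLs can be generated in a loop.
```python
# Only nextid changes between pages, so the URLs can be generated in a loop.
base_url = 'http://bbs.tianya.cn/list.jsp?item=free&nextid={}&order=8&k=雾霾'
for page in range(3):
    url = base_url.format(page)
    print(url)
    # content = urllib2.urlopen(url).read()
    # soup = BeautifulSoup(content, 'html.parser')
```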
# Beautiful Soup
> Beautiful Soup is a Python library designed for quick turnaround projects like screen-scraping. Three features make it powerful:
- Beautiful Soup provides a few simple methods. It doesn't take much code to write an application
- Beautiful Soup automatically converts incoming documents to Unicode and outgoing documents to UTF-8. Then you just have to specify the original encoding.
- Beautiful Soup sits on top of popular Python parsers like lxml and html5lib.
# Install beautifulsoup4
### open your terminal/cmd
> $ pip install beautifulsoup4
# The first crawler
Beautifulsoup Quick Start
http://www.crummy.com/software/BeautifulSoup/bs4/doc/
```
url = 'file:///Users/chengjun/GitHub/cjc/data/test.html'
# http://computational-class.github.io/cjc/data/test.html
content = urllib2.urlopen(url).read()
soup = BeautifulSoup(content, 'html.parser')
soup
```
# html.parser
Beautiful Soup supports the html.parser included in Python’s standard library
# lxml
but it also supports a number of third-party Python parsers. One is the lxml parser `lxml`. Depending on your setup, you might install lxml with one of these commands:
> $ apt-get install python-lxml
> $ easy_install lxml
> $ pip install lxml
# html5lib
Another alternative is the pure-Python html5lib parser `html5lib`, which parses HTML the way a web browser does. Depending on your setup, you might install html5lib with one of these commands:
> $ apt-get install python-html5lib
> $ easy_install html5lib
> $ pip install html5lib
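As a small illustration (assuming the `content` variable from the earlier cell), the parser is simply the second argument to `BeautifulSoup`; `lxml` and `html5lib` only work if the corresponding package is installed:
```python
soup_std = BeautifulSoup(content, 'html.parser')    # standard library parser
# soup_lxml = BeautifulSoup(content, 'lxml')        # requires lxml
# soup_html5 = BeautifulSoup(content, 'html5lib')   # requires html5lib
```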
```
print(soup.prettify())
```
- html
- head
- title
- body
- p (class = 'title', 'story' )
- a (class = 'sister')
- href/id
# The select method
```
soup.select('.sister')
soup.select('.story')
soup.select('#link2')
soup.select('#link2')[0]['href']
```
# The find_all method
```
soup('p')
soup.find_all('p')
[i.text for i in soup('p')]
for i in soup('p'):
print i.text
for tag in soup.find_all(True):
print(tag.name)
soup('head') # or soup.head
soup('body') # or soup.body
soup('title') # or soup.title
soup('p')
soup.p
soup.title.name
soup.title.string
soup.title.text
# the text method is recommended
soup.title.parent.name
soup.p
soup.p['class']
soup.find_all('p', {'class', 'title'})
soup.find_all('p', class_= 'title')
soup.find_all('p', {'class', 'story'})
soup.find_all('p', {'class', 'story'})[0].find_all('a')
soup.a
soup('a')
soup.find(id="link3")
soup.find_all('a')
soup.find_all('a', {'class', 'sister'}) # compare with soup.find_all('a')
soup.find_all('a', {'class', 'sister'})[0]
soup.find_all('a', {'class', 'sister'})[0].text
soup.find_all('a', {'class', 'sister'})[0]['href']
soup.find_all('a', {'class', 'sister'})[0]['id']
soup.find_all(["a", "b"])
print(soup.get_text())
```
***
***
# Data scraping:
> # Scraping WeChat public account article content from a URL
***
***
王成军
[email protected]
Computational Communication (计算传播网) http://computational-communication.com
```
from IPython.display import display_html, HTML
HTML('<iframe src=http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd\
width=500 height=500></iframe>')
# the webpage we would like to crawl
```
# View the page source
# Inspect
```
url = "http://mp.weixin.qq.com/s?__biz=MzA3MjQ5MTE3OA==&\
mid=206241627&idx=1&sn=471e59c6cf7c8dae452245dbea22c8f3&3rd=MzA3MDU4NTYzMw==&scene=6#rd"
content = urllib2.urlopen(url).read()  # fetch the page's HTML text
soup = BeautifulSoup(content, 'html.parser')
title = soup.select("div .title #id1 img ")
for i in title:
print i.text
print soup.find('h2', {'class', 'rich_media_title'}).text
print soup.find('div', {'class', 'rich_media_meta_list'})
print soup.find('em').text
article = soup.find('div', {'class' , 'rich_media_content'}).text
print article
rmml = soup.find('div', {'class', 'rich_media_meta_list'})
date = rmml.find(id = 'post-date').text
rmc = soup.find('div', {'class', 'rich_media_content'})
content = rmc.get_text()
print title
print date
print content
```
# Trying to scrape the read count and like count
```
url = 'http://mp.weixin.qq.com/s?src=3×tamp=1463191394&ver=1&signature=cV3qMIBeBTS6i8cJJOPu98j-H9veEPx0Y0BekUE7F6*sal9nkYG*w*FwDiaySIfR4XZL-XFbo2TFzrMxEniDETDYIMRuKmisV8xjfOcCjEWmPkYfK57G*cffYv4JxuM*RUtN8LUIg*n6Kd0AKB8--w=='
content = urllib2.urlopen(url).read()  # fetch the page's HTML text
soup = BeautifulSoup(content, 'html.parser')
print soup
soup.find(id='sg_likeNum3')
```
# Homework:
- Scrape the content of the latest issue of the Fudan New Media (复旦新媒体) WeChat public account
|
github_jupyter
|
# Library
```
import numpy as np
import torch
import torch.nn as nn
from utils import *
from dataset import TossingDataset
from torch.utils.data import DataLoader
```
# Model
```
class NaiveMLP(nn.Module):
def __init__(self, in_traj_num, pre_traj_num):
super(NaiveMLP, self).__init__()
self.hidden_dim = 128
self.fc_1 = nn.Sequential(
nn.Linear(in_traj_num * 2, self.hidden_dim),
nn.ReLU(inplace=True),
nn.Linear(self.hidden_dim, self.hidden_dim),
nn.ReLU(inplace=True),
)
self.fc_out = nn.Linear(self.hidden_dim, pre_traj_num * 2)
def forward(self, x):
x = self.fc_1(x)
x = self.fc_out(x)
return x
def train_model(model, train_loader, test_loader, num_epochs, optimizer, scheduler, criterion):
# Training the Model
min_test_dif = float('inf')
epoch_loss = []
for epoch in range(num_epochs):
batch_loss = []
for i, data in enumerate(train_loader):
# get the inputs
inputs = data['current_locs_gt']
locs_gt = data['future_locs_gt']
inputs = inputs.cuda()
locs_gt = locs_gt.cuda()
# zero the parameter gradients
optimizer.zero_grad()
# forward + backward + optimize
            outputs = model(inputs)
loss = criterion(outputs, locs_gt, inputs)
loss.backward()
optimizer.step()
batch_loss.append(loss.item())
# Results every epoch
cur_epoch_loss = sum(batch_loss) / len(batch_loss)
# Scheduler
scheduler.step(cur_epoch_loss)
# Test the network
train_dif = test_model(model, train_loader)
test_dif = test_model(model, test_loader)
# Print the result
print('Epoch: %d Train Loss: %.03f Train Dif: %.03f Test Dif: %.03f'
% (epoch, cur_epoch_loss, train_dif, test_dif))
epoch_loss.append(cur_epoch_loss)
if min_test_dif > test_dif:
min_test_dif = test_dif
print('Best')
return epoch_loss
def test_model(model, test_loader):
# Test the Model
model.eval()
batch_loss = []
for i, data in enumerate(test_loader):
# get the inputs
inputs = data['current_locs_gt']
locs_gt = data['future_locs_gt']
inputs = inputs.cuda()
locs_gt = locs_gt.cuda()
        outputs = model(inputs)
loss = get_mean_distance(locs_gt, outputs)
batch_loss.append(loss.item())
# Results every epoch
cur_epoch_loss = sum(batch_loss) / len(batch_loss)
model.train()
return cur_epoch_loss
def get_mean_distance(locs_a, locs_b):
vector_len = locs_a.shape[1]
x_a = locs_a[:, :vector_len // 2]
y_a = locs_a[:, vector_len // 2:]
x_b = locs_b[:, :vector_len // 2]
y_b = locs_b[:, vector_len // 2:]
dif_x = (x_a - x_b) ** 2
dif_y = (y_a - y_b) ** 2
dif = dif_x + dif_y
return torch.mean(torch.sqrt(dif))
#################### Hyperparameters ####################
num_epochs = 50000
learning_rate = 0.001
weight_decay = 0
in_frames_num = 3
pre_frames_num = 15
factor = 0.95
patience = 40
batch_size = 16
#################### Hyperparameters ####################
net = NaiveMLP(in_traj_num=3, pre_traj_num=15).cuda()
criterion = PhysicalRegularization()
train_set = TossingDataset(
'./dataset/r1_k0.2/train',
in_traj_num=3,
pre_traj_num=15,
sample_num=32
)
test_set = TossingDataset(
'./dataset/r1_k0.2/test',
in_traj_num=3,
pre_traj_num=15
)
print(len(train_set), len(test_set))
train_loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)
test_loader = DataLoader(test_set, batch_size=len(test_set), shuffle=False)
optimizer = torch.optim.Adam(net.parameters(), lr=learning_rate, weight_decay=weight_decay)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
optimizer,
mode='min',
factor=factor,
patience=patience,
verbose=True,
threshold=1e-3
)
train_loss = train_model(
net,
train_loader,
test_loader,
num_epochs,
optimizer,
scheduler,
criterion
)
```
|
github_jupyter
|
## Appendix 1: Optional Refresher on the Unix Environment
### A1.1) A Quick Unix Overview
In Jupyter, many of the same Unix commands we use to navigate in the regular terminal can be used. (However, this is not true when we write standalone code outside Jupyter.) As a quick refresher, try each of the following:
```
ls
pwd
cd 2017oct04
```
We're in a new folder now, so issue commands in the next two cells to look at the folder content and list your current path:
```
ls
pwd
```
Now test out a few more things. In the blank cells below, try the following and discuss in your group what each does:
```
ls M52*fit
ls M52-001*fit
ls *V*
```
What does the asterisk symbol * do?
**Answer:** It is a placeholder (wildcard) that matches any text in a file or folder name.
Now, return to where you started, by moving up a directory:
(one directory up from where you are is denoted with `..`, while the current directory is denoted with `.`)
```
cd ..
```
### A1.2) A few more helpful commands
#### `mkdir` to *make* a new *dir*ectory:
`mkdir new_project_name`
#### `cp` to copy a file:
`cp existingfile newfilename`
or
`cp existingfile newlocation`
#### `mv` to move or rename a file:
`mv old_filename_oldlocation old_filename_newlocation`
or
`mv old_filename_oldlocation new_filename_oldlocation`
#### `rm` to *PERMANENTLY* delete (remove) a file... (use with caution):
`rm file_I_will_never_see_again`
#### In the six cells below:
(1) Make a new directory, called `temporary`
(2) Go into that new directory
(3) Move the file test_file.txt from the original directory above (`../test_file.txt`) into your current location using the `.`
(4) Create a copy of test_file.txt with a new, different filename of your choice.
(5) Delete the original test_file.txt
(6) Go back up into the original location where this notebook is located.
```
# Make a new directory, "temporary"
# Move into temporary
# Move the test_file.txt into this current location
# Create a copy of the test_file.txt, name the copy however you like
# Delete the original test_file.txt
# Change directories to original location of notebook.
```
If all went according to plan, the following command should show three directories, a zip file, a .png file, this notebook, and the Lab6 notebook:
```
ls
```
And the following command should show the contents of the `temporary` folder, so only your new text file (a copy of test_file.txt, which is now gone forever) within it:
```
ls ./temporary/
```
## Appendix 2: Optional Refresher on Conditional Statements and Iteration
### A2.1) Conditional Statements
The use of tests or _conditions_ to evaluate variables, values, etc., is a fundamental programming tool. Try executing each of the cells below:
```
2 < 5
3 > 7
x = 11
x > 10
2 * x < x
3.14 <= 3.14 # <= means less than or equal to; >= means greater than or equal to
42 == 42
3e8 != 3e9 # != means "not equal to"
type(True)
```
You see that conditions are either `True` or `False` (with no quotes!) These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python, the name Boolean is shortened to the type `bool`. It is the type of the results of true-false conditions or tests.
Now try executing the following two cells at least twice over, with inputs 50 and then 80.
```
temperature = float(input('What is the temperature in Fahrenheit? '))
if temperature > 70:
print('Wear shorts.')
else:
print('Wear long pants.')
```
The four lines in the previous cell are an if-else statement. There are two indented blocks: one comes right after the `if` heading in line 1 and is executed when the condition in the `if` heading is _true_. This is followed by an `else:` in line 3, followed by another indented block that is only executed when the original condition is _false_. In an if-else statement, exactly one of the two possible indented blocks is executed.
### A2.2) Iteration
Another important component in our arsenal of programming tools is iteration. Iteration means performing an operation repeatedly. We can execute a very simple example at the command line. Let's make a list of objects as follows:
```
names = ['Henrietta', 'Annie', 'Jocelyn', 'Vera']
for n in names:
print('There are ' + str(len(n)) + ' letters in ' + n)
```
This is an example of a for loop. The way a for loop works is as follows. We start with a list of objects -- in this example a list of strings, but it could be anything -- and then we say `for variable in list:`, followed by a block of code. The code inside the block will be executed once for every item in the list, and when it is executed the variable will be set equal to the appropriate list item. In this example, the list `names` had four objects in it, each a string. Thus the print statement inside the loop was executed four times. The first time it was executed, the variable `n` was set equal to `Henrietta`. The second time `n` was set equal to `Annie`, then `Jocelyn`, then `Vera`.
One of the most common types of loop is one where you want to loop over numbers: 0, 1, 2, 3, .... To handle loops of this sort, Python provides a simple command to construct a list of numbers to iterate over, called `range`. The command `range(n)` produces a list of numbers from 0 to n-1. For example:
```
for i in range(5):
print(i)
```
There are also other ways of iterating, which may be more convenient depending on what you're trying to do. A very common one is the while loop, which does exactly what it sounds like it should: it loops until some condition is met. For example:
```
i = 0 # This starts the initial value off at zero
while i < 11:
print(i)
i = i + 3 # This adds three to the value of i, then goes back to the line #3 to check if the condition is met
```
|
github_jupyter
|
<h1> Logistic Regression using Spark ML </h1>
Set up bucket
```
BUCKET='cloud-training-demos-ml' # CHANGE ME
import os
os.environ['BUCKET'] = BUCKET
# Create spark session
from pyspark.sql import SparkSession
from pyspark import SparkContext
sc = SparkContext('local', 'logistic')
spark = SparkSession \
.builder \
.appName("Logistic regression w/ Spark ML") \
.getOrCreate()
print spark
print sc
from pyspark.mllib.classification import LogisticRegressionWithLBFGS
from pyspark.mllib.regression import LabeledPoint
```
<h2> Read dataset </h2>
```
traindays = spark.read \
.option("header", "true") \
.csv('gs://{}/flights/trainday.csv'.format(BUCKET))
traindays.createOrReplaceTempView('traindays')
spark.sql("SELECT * from traindays LIMIT 5").show()
from pyspark.sql.types import StringType, FloatType, StructType, StructField
header = 'FL_DATE,UNIQUE_CARRIER,AIRLINE_ID,CARRIER,FL_NUM,ORIGIN_AIRPORT_ID,ORIGIN_AIRPORT_SEQ_ID,ORIGIN_CITY_MARKET_ID,ORIGIN,DEST_AIRPORT_ID,DEST_AIRPORT_SEQ_ID,DEST_CITY_MARKET_ID,DEST,CRS_DEP_TIME,DEP_TIME,DEP_DELAY,TAXI_OUT,WHEELS_OFF,WHEELS_ON,TAXI_IN,CRS_ARR_TIME,ARR_TIME,ARR_DELAY,CANCELLED,CANCELLATION_CODE,DIVERTED,DISTANCE,DEP_AIRPORT_LAT,DEP_AIRPORT_LON,DEP_AIRPORT_TZOFFSET,ARR_AIRPORT_LAT,ARR_AIRPORT_LON,ARR_AIRPORT_TZOFFSET,EVENT,NOTIFY_TIME'
def get_structfield(colname):
if colname in ['ARR_DELAY', 'DEP_DELAY', 'DISTANCE', 'TAXI_OUT']:
return StructField(colname, FloatType(), True)
else:
return StructField(colname, StringType(), True)
schema = StructType([get_structfield(colname) for colname in header.split(',')])
inputs = 'gs://{}/flights/tzcorr/all_flights-00000-*'.format(BUCKET) # 1/30th; you may have to change this to find a shard that has training data
#inputs = 'gs://{}/flights/tzcorr/all_flights-*'.format(BUCKET) # FULL
flights = spark.read\
.schema(schema)\
.csv(inputs)
# this view can now be queried ...
flights.createOrReplaceTempView('flights')
```
<h2> Clean up </h2>
```
trainquery = """
SELECT
f.*
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True'
"""
traindata = spark.sql(trainquery)
print traindata.head(2) # if this is empty, try changing the shard you are using.
traindata.describe().show()
```
Note that the counts for the various columns are all different; we have to remove NULLs in the delay variables (these correspond to canceled or diverted flights).
<h2> Logistic regression </h2>
```
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.dep_delay IS NOT NULL AND
f.arr_delay IS NOT NULL
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
trainquery = """
SELECT
DEP_DELAY, TAXI_OUT, ARR_DELAY, DISTANCE
FROM flights f
JOIN traindays t
ON f.FL_DATE == t.FL_DATE
WHERE
t.is_train_day == 'True' AND
f.CANCELLED == '0.00' AND
f.DIVERTED == '0.00'
"""
traindata = spark.sql(trainquery)
traindata.describe().show()
def to_example(fields):
return LabeledPoint(\
float(fields['ARR_DELAY'] < 15), #ontime? \
[ \
fields['DEP_DELAY'], \
fields['TAXI_OUT'], \
fields['DISTANCE'], \
])
examples = traindata.rdd.map(to_example)
lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
print lrmodel.weights,lrmodel.intercept
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.clearThreshold()
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
lrmodel.setThreshold(0.7) # cancel if prob-of-ontime < 0.7
print lrmodel.predict([6.0,12.0,594.0])
print lrmodel.predict([36.0,12.0,594.0])
```
<h2> Predict with the model </h2>
First save the model
```
!gsutil -m rm -r gs://$BUCKET/flights/sparkmloutput/model
MODEL_FILE='gs://' + BUCKET + '/flights/sparkmloutput/model'
lrmodel.save(sc, MODEL_FILE)
print '{} saved'.format(MODEL_FILE)
lrmodel = 0
print lrmodel
```
Now retrieve the model
```
from pyspark.mllib.classification import LogisticRegressionModel
lrmodel = LogisticRegressionModel.load(sc, MODEL_FILE)
lrmodel.setThreshold(0.7)
print lrmodel.predict([36.0,12.0,594.0])
print lrmodel.predict([8.0,4.0,594.0])
```
<h2> Examine the model behavior </h2>
For dep_delay=20 and taxiout=10, how does the distance affect prediction?
```
lrmodel.clearThreshold() # to make the model produce probabilities
print lrmodel.predict([20, 10, 500])
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
dist = np.arange(10, 2000, 10)
prob = [lrmodel.predict([20, 10, d]) for d in dist]
sns.set_style("whitegrid")
ax = plt.plot(dist, prob)
plt.xlabel('distance (miles)')
plt.ylabel('probability of ontime arrival')
delay = np.arange(-20, 60, 1)
prob = [lrmodel.predict([d, 10, 500]) for d in delay]
ax = plt.plot(delay, prob)
plt.xlabel('departure delay (minutes)')
plt.ylabel('probability of ontime arrival')
```
<h2> Evaluate model </h2>
Evaluate on the test data
```
inputs = 'gs://{}/flights/tzcorr/all_flights-00001-*'.format(BUCKET) # you may have to change this to find a shard that has test data
flights = spark.read\
.schema(schema)\
.csv(inputs)
flights.createOrReplaceTempView('flights')
testquery = trainquery.replace("t.is_train_day == 'True'","t.is_train_day == 'False'")
print testquery
testdata = spark.sql(testquery)
examples = testdata.rdd.map(to_example)
testdata.describe().show() # if this is empty, change the shard you are using
def eval(labelpred):
cancel = labelpred.filter(lambda (label, pred): pred < 0.7)
nocancel = labelpred.filter(lambda (label, pred): pred >= 0.7)
corr_cancel = cancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
corr_nocancel = nocancel.filter(lambda (label, pred): label == int(pred >= 0.7)).count()
cancel_denom = cancel.count()
nocancel_denom = nocancel.count()
if cancel_denom == 0:
cancel_denom = 1
if nocancel_denom == 0:
nocancel_denom = 1
return {'total_cancel': cancel.count(), \
'correct_cancel': float(corr_cancel)/cancel_denom, \
'total_noncancel': nocancel.count(), \
'correct_noncancel': float(corr_nocancel)/nocancel_denom \
}
# Evaluate model
lrmodel.clearThreshold() # so it returns probabilities
labelpred = examples.map(lambda p: (p.label, lrmodel.predict(p.features)))
print 'All flights:'
print eval(labelpred)
# keep only those examples near the decision threshold
print 'Flights near decision threshold:'
labelpred = labelpred.filter(lambda (label, pred): pred > 0.65 and pred < 0.75)
print eval(labelpred)
```
Copyright 2019 Google Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
|
github_jupyter
|
```
import pandas as pd
import warnings
import altair as alt
from urllib import request
import json
# fetch & enable a Spanish timeFormat locale.
with request.urlopen('https://raw.githubusercontent.com/d3/d3-time-format/master/locale/es-ES.json') as f:
es_time_format = json.load(f)
alt.renderers.set_embed_options(timeFormatLocale=es_time_format)
#warnings.filterwarnings('ignore')
df = pd.read_csv ('https://www.gstatic.com/covid19/mobility/Global_Mobility_Report.csv')
country = 'Mexico'
region = 'Mexico City'
sub_df = df[(df['country_region']== country) & (df['sub_region_1']==region)]
sub_df.loc[:,'date'] = pd.to_datetime(sub_df.loc[:,'date'])
# Change date here
sub_df = sub_df[(sub_df['date'] > '2020-02-15') & (sub_df['date'] < '2020-11-17')]
sub_df = sub_df.sort_values('date', ascending=True)
%run urban_theme.py
retail_recretation = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('retail_and_recreation_percent_change_from_baseline:Q', title = " "),
).properties(
title = "Lugares de ocio",
width = 450,
height = 250
)
grocery_pharmacy = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('grocery_and_pharmacy_percent_change_from_baseline:Q', title = " "),
).properties(
title = "Mercados y farmacias",
width = 450,
height = 250
)
parks = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('parks_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Parques y playas",
width = 450,
height = 250
)
transit = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('transit_stations_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Transporte público",
width = 450,
height = 250
)
workplaces = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('workplaces_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Lugares de trabajo",
width = 450,
height = 250
)
residential = alt.Chart(sub_df).mark_line(size=1).encode(
alt.X('date:T', title = " "),
alt.Y('residential_percent_change_from_baseline:Q', title = " ")
).properties(
title = "Residenciales",
width = 450,
height = 250
)
par1 = retail_recretation | grocery_pharmacy | parks
par2 = transit | workplaces | residential
mobility = par1 & par2
# Title of the concatenated graph
mobility_1 = mobility.properties(
title={
"text":["Movilidad en CDMX"],
"subtitle": ["Datos del 15 de febrero al 15 de noviembre de 2020.", " "],
}
)
#Add footer
alt.concat(mobility_1).properties(
title=alt.TitleParams(
['Fuente: Elaboración propia con datos del COVID-19 Community Mobility Reports de Google.', 'Jay Ballesteros (@jballesterosc_)'],
baseline='bottom',
orient='bottom',
anchor='end',
fontWeight='normal',
fontSize=12
)
)
```
|
github_jupyter
|
```
%cd ~/NetBeansProjects/ExpLosion/
from notebooks.common_imports import *
from gui.output_utils import *
from gui.user_code import pairwise_significance_exp_ids
query = {'expansions__decode_handler': 'SignifiedOnlyFeatureHandler',
'expansions__vectors__dimensionality': 100,
'expansions__vectors__rep': 0,
'expansions__vectors__unlabelled': 'turian'}
ids = Experiment.objects.filter(**query).values_list('id', flat=True)
print('ids are', ids)
df = dataframe_from_exp_ids(ids, {'Algorithm':'expansions__vectors__algorithm',
'Composer':'expansions__vectors__composer',
'Features': 'document_features_tr'})
ids = list(ids.values_list('id', flat=True))
for eid in ids + [1]:
exp = Experiment.objects.get(id=eid)
mean, low, high, _ = get_ci(eid)
print('%s & %.2f$\pm$%.2f \\\\'%(exp.expansions.vectors.composer, mean, (high-low)/2))
pairwise_significance_exp_ids(zip(ids, [1]*len(ids)), name_format=['expansions__vectors__composer'])
df.head()
def f1(x):
return '%1.2f' % x
# ddf = df.drop('folds', axis=1).groupby(['Composer', 'k']).agg([np.mean, np.std])
# ddf.columns = ddf.columns.droplevel(0)#.reset_index()
# ddf['Accuracy'] = ddf['mean'].map(f1) + "$\pm$" + ddf['std'].map(f1)
# ddf = ddf.drop(['mean', 'std'], axis=1).reset_index()
# print(ddf.pivot_table(values='Accuracy', index='k',
# columns='Composer', aggfunc=lambda x: x).to_latex(escape=False))
ddf = df.drop(['folds', 'Algorithm'], axis=1).groupby(['Composer', 'Features']).agg('mean').reset_index() # no need to drop unwanted columns
res = ddf.pivot_table(values='Accuracy', index='Composer', columns='Features')
print(res.to_latex(float_format=f1, na_rep='N/A'))
res.T
del res.index.name
del res.columns.name
for c in res.columns:
print(res[[c]].to_latex(float_format=f1, na_rep='N/A'))
res[[c]]
```
# Compare to word2vec qualitatively
```
from discoutils.thesaurus_loader import Vectors
from discoutils.tokens import DocumentFeature
v1 = Vectors.from_tsv('../FeatureExtractionToolkit/socher_vectors/turian_unigrams.h5')
v1.init_sims(n_neighbors=25)
v2 = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-15perc.unigr.strings.rep0')
v2.init_sims(n_neighbors=25)
def compare_neighbours(vectors, names, words=[], n_neighbours=5):
if not words:
words = random.sample([x for x in vectors[0].keys() if not x.count('_')], 10)
words_clean = [DocumentFeature.from_string(w).tokens[0].text for w in words]
data = []
for w, w_clearn in zip(words, words_clean):
this_row = []
for v in vectors:
neigh = v.get_nearest_neighbours(w)
# remove neigh with the same PoS (for turian)
new_neigh = []
for n, _ in neigh:
n1 = DocumentFeature.from_string(n).tokens[0].text
# print(n, n1)
if n1 not in new_neigh:
if n1 != w_clearn:
new_neigh.append(n1)
# print(new_neigh)
if neigh:
this_row.append(', '.join(n for n in new_neigh[:n_neighbours]))
else:
this_row.append(None)
data.append(this_row)
return pd.DataFrame(data, index=words_clean, columns=names)
# bunch of random words contained in both
words = 'andrade/N giant/J seize/V fundamental/J affidavit/N claim/V sikh/N rest/V israel/N arrow/N preventative/J torrential/J'.split()
df = compare_neighbours([v1, v2], ['turian', 'w2v'], words, n_neighbours=5)
df.to_csv('turian_vs_w2v.csv')
df
print(pd.DataFrame(df.turian).to_latex())
print(pd.DataFrame(df['w2v']).to_latex())
```
# How many words of each PoS type are there in turian's unigrams
Code below largely copied from `socher_vectors.py`
```
from scipy.io import loadmat
mat = loadmat('../FeatureExtractionToolkit/socher_vectors/vars.normalized.100.mat')
words = [w[0] for w in mat['words'].ravel()]
import nltk
from nltk import WordNetLemmatizer
import string
from collections import defaultdict
lmtzr = WordNetLemmatizer()
clean_to_dirty = defaultdict(list) # canonical -> [non-canonical]
dirty_to_clean = dict() # non-canonical -> canonical
to_keep = set() # which non-canonical forms forms we will keep
# todo this can be done based on frequency or something
for w in words:
if set(w).intersection(set(string.punctuation).union(set('0123456789'))):
# not a real word- contains digits or punctuation
continue
lemma = lmtzr.lemmatize(w.lower())
clean_to_dirty[lemma].append(w)
dirty_to_clean[w] = lemma
# decide which of possibly many non-canonical forms with the same lemma to keep
# prefer shorter and lowercased non-canonical forms
for lemma, dirty_list in clean_to_dirty.items():
if len(dirty_list) > 1:
best_lemma = min(dirty_list, key=lambda w: (len(w), not w.islower()))
else:
best_lemma = dirty_list[0]
to_keep.add(best_lemma)
pos_tagged = [nltk.pos_tag([w]) for w in to_keep]
from collections import defaultdict
pos_coarsification_map = defaultdict(lambda: "UNK")
pos_coarsification_map.update({"JJ": "J",
"JJN": "J",
"JJS": "J",
"JJR": "J",
"VB": "V",
"VBD": "V",
"VBG": "V",
"VBN": "V",
"VBP": "V",
"VBZ": "V",
"NN": "N",
"NNS": "N",
"NNP": "N",
"NPS": "N",
"NP": "N",
"RB": "RB",
"RBR": "RB",
"RBS": "RB",
"DT": "DET",
"WDT": "DET",
"IN": "CONJ",
"CC": "CONJ",
"PRP": "PRON",
"PRP$": "PRON",
"WP": "PRON",
"WP$": "PRON",
".": "PUNCT",
":": "PUNCT",
":": "PUNCT",
"": "PUNCT",
"'": "PUNCT",
"\"": "PUNCT",
"'": "PUNCT",
"-LRB-": "PUNCT",
"-RRB-": "PUNCT"})
pos_tags = [pos_coarsification_map[x[0][1]] for x in pos_tagged]
from collections import Counter
Counter(pos_tags)
```
# Let's look at the embeddings
```
from sklearn.manifold import TSNE
from sklearn.preprocessing import normalize
sns.set_style('white')
def draw_tsne_embeddings(v):
# pairs of words from Mikolov et al (2013)- Distributed word reprs and their compositionality
# ignored some pairs that do not have a vectors
words = 'china/N beijing/N russia/N moscow/N japan/N tokyo/N turkey/N ankara/N france/N \
paris/N italy/N rome/N greece/N athens/N germany/N berlin/N portugal/N lisbon/N spain/N madrid/N'.split()
mat = np.vstack([v.get_vector(w).A for w in words])
reduced = TSNE(init='pca').fit_transform(normalize(mat))
# ax = plt.fig
plt.scatter(reduced[:, 0], reduced[:, 1]);
# point labels
for i, txt in enumerate(words):
plt.annotate(txt, (reduced[i, 0], reduced[i, 1]), fontsize=20);
# lines between country-capital pairs
for i in range(len(words)):
if i %2 != 0:
continue
plt.plot([reduced[i, 0], reduced[i+1, 0]],
[reduced[i, 1], reduced[i+1, 1]], alpha=0.5, color='black')
# remove all junk from plot
sns.despine(left=True, bottom=True)
plt.gca().xaxis.set_major_locator(plt.NullLocator())
plt.gca().yaxis.set_major_locator(plt.NullLocator())
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-wiki-100perc.unigr.strings.rep0') # look very good
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-w2v-wiki.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/word2vec_vectors/word2vec-gigaw-100perc.unigr.strings.rep0') # ok
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-w2v-gigaw.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/socher_vectors/turian_unigrams.h5') # terrible
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-turian.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
v = Vectors.from_tsv('../FeatureExtractionToolkit/glove/vectors.miro.h5') # terrible
draw_tsne_embeddings(v)
plt.savefig('plot-mikolov-tsne-glove-wiki.pdf', format='pdf', dpi=300, bbox_inches='tight', pad_inches=0.1)
```
|
github_jupyter
|
# Loops
Looping `for` a `while`
## `for` loops
```
for i in [0, 1, 2]:
print("i is", i)
for i in range(0, 3):
print("i is", i)
for x in [10, 15, 2020]:
print("x is", x)
```
```python
for i in ...:
print("Gefeliciteerd")
```
How can this be executed 10 times? A whole range of solutions is possible here...
```
for i in range(10):
print("Gefeliciteerd!")
```
## `for` fun(ctions)
```
def fun1():
for i in range(0, 3):
print("i is", i)
return
def fun2():
for i in range(0, 3):
print("i is", i)
return
```
## More `for`
```
def fun1B():
for i in range(1, 6):
if i % 2 == 0:
print("i is", i)
return
fun1B()
def fun2B():
for i in range(1, 6):
if i % 2 == 0:
print("i is", i)
return
fun2B()
def fun3B():
for i in range(1,6):
if i % 2 == 0:
print("i is", i)
return
fun3B()
```
```python
def fun3B():
for i in range(1,6):
if i % 2 == 0:
print("i is", i)
return
```
## Hmmm
Iterative solutions
Repeat blocks until a condition is reached: loops!
Compute the factorial of a number
```asm
00 read r1
01 setn r13 1
02 jeqzn r1 6
03 mul r13 r13 r1
04 addn r1 -1
05 jumpn 02
06 write r13
07 halt
```
- `02` until `r1` equals 0 ...
- `03` multiply `r13` by `r1` and overwrite `r13` with the result
- `04` decrease `r1` by 1
- `05` repeat
### Thinking Hmmm in Python
```python
def fac(x): # 00 read r1
result = 1 # 01 setn r13 1
while x != 0: # 02 jeqzn r1 6
result *= x # 03 mul r13 r13 r1
x -= 1 # 04 addn r1 -1
# 05 jumpn 02
return result # 06 write r13
# 07 halt
```
We now have the advantages of *explicit* loops and self-contained functions!
### Recursive version
```
def fac(x):
"""faculty, recursive version
"""
if x == 0:
return 1
else:
return x * fac(x - 1)
```
```asm
00 read r1
01 setn r15 42
02 call r14 5
03 jump 21
04 nop
05 jnezn r1 8
06 setn r13 1
07 jumpr r14
08 storer r1 r15
09 addn r15 1
10 storer r14 r15
11 addn r15 1
12 addn r1 -1
13 call r14 5
14 addn r15 -1
15 loadr r14 r15
16 addn r15 -1
17 loadr r1 r15
18 mul r13 r13 r1
19 jumpr r14
20 nop
21 write r13
22 halt
```
## Iterative design
Loops! Variables!
`for`
```python
for x in [40, 41, 42]:
print(x)
```
`while`
```python
x = 42
while x > 0:
print(x)
x -= 1
```
Variables
```python
x = 41
x += 1 # addn r1 1
```
### Variables
Varying them!
```python
age = 41
```
```python
age = age + 1
```
Which is the same as ...
```python
age += 1
```
### In short
```python
hwToGo = 7
hwToGo = hwToGo - 1
```
```python
hwToGo -= 1
```
```python
total = 21000000
total = total * 2
```
```python
total *= 2
```
```python
u235 = 84000000000000000;
u235 = u235 / 2
```
```python
u235 /= 2
```
## `for`!
```python
for x in [2, 4, 6, 8]:
print("x is", x)
print("Done!")
```
### Step by step
1. assign each element to `x` in turn
```python
for x in [2, 4, 6, 8]:
```
2. the BODY or BLOCK uses the value of `x`
3. and the loop continues with the next element
```python
print("x is", x)
```
4. code after the loop is only executed once the loop has finished!
```python
print("Done!")
```
### Factorial with `for`
```
def fac(n):
result = 1 # not the result yet!
    for i in range(1, n + 1):        # range starts at 0, so add one
result *= i # result = result * i
return result # return the result
fac(5)
```
## Quiz
```python
x = 0
for i in range(4):
x += 10
print(x)
```
What is printed for `x`?
### Solution
`40`
```python
S = "time to think this over! "
result = ""
for i in range(len(S)):
if S[i - 1] == " ":
result += S[i]
print(result)
```
What is printed for `result`?
### Solution
`'tttto'`
| `result` | `S[i - 1]` | `S[i]` | `i` |
|----------|------------|--------|-----|
| `'t'` | `' '` | `'t'` | `0` |
| | `'t'` | `'i'` | `1` |
| | `'i'` | `'m'` | `2` |
| | `'m'` | `'e'` | `3` |
| | `'e'` | `' '` | `4` |
| `'tt'` | `' '` | `'t'` | `5` |
| | `'t'` | `'o'` | `6` |
| | `'o'` | `' '` | `7` |
| `'ttt'` | `' '` | `'t'` | `8` |
## Two types of `for`
Elements versus index
### By element
```python
L = [3, 15, 17, 7]
for x in L:
print(x)
```
```python
S = "een fijne lus"
for c in S:
print(c)
```
### By index
```python
L = [3, 15, 17, 7]
for i in range(len(L)):
print(L[i])
```
```python
S = "een fijne lus"
for i in range(len(S)):
print(S[i])
```
Note: printing inside loops is not very common, but it is very useful for debugging!
### Which of the two?
Elements: simpler
Indices: more flexible
Think of the "time to think this over! " example: inside the loop, the index let us look "back" into the string with `S[i - 1]`!
### Or ... both?
Index and element at the same time? Use the built-in function [`enumerate`](https://docs.python.org/3/library/functions.html#enumerate)
```
L = [3, 15, 17, 7]
for i, x in enumerate(L):
print("index", i, "element", x)
```
Perhaps you have already come across `enumerate`? The chance is high if you go searching the web, which is why we pause on it briefly here.
### A small detour
Welcome to the Advanced Python course!
```
a, b = [1, 2]
print(a)
print(b)
```
This technique is called "tuple unpacking". Perhaps you have heard the term "tuple" before (or seen it in an error message!). A tuple is a type we will not cover further, but think of it as a list with a fixed length (a list has no fixed length: you can add elements to it and remove elements from it).
A tuple behaves much like a list and is often used "invisibly" by Python, for example
```
x = 3, 4
type(x)
x
```
### Question
```python
a = 1
b = 2
```
Which operation(s) are needed to swap the values, so that `a` equals the old `b` and `b` equals the old `a`?
Suppose you have a cup of coffee and a cup of water: how can you swap the contents of the cups?
```
a, b = b, a
print(a)
print(b)
```
Python makes this easy for you: no third variable is needed for a swap!
### Unpacking `LoL`s
`enumerate` returns a `LoL`!
Although, technically speaking, it is a `LoT`, a List of Tuples ...
```
W = [[5, "jan"], [12, "feb"], [15, "mar"]]
for temp, month in W:
print("month", month, "avg temp", temp)
```
An alternative approach
```python
for x in W:
temp, month = x
print("month", month, "avg temp", temp)
```
## Extreme loops
```python
guess = 42
print("It keeps on")
while guess == 42:
print("going and")
print("Phew! I'm done!")
```
What does this loop do?
## `while` loops
A loop until a certain **condition** is reached.
Tests?
- `42 == 42`
- `guess > 42`
- ...
### Escaping
```
import random
guess = 0 # starting value, not the final or desired value!
while guess != 42: # test to see if we keep looping
print('Help! Let me out!')
guess = random.choice([41, 42, 43]) # watch out for infinite loops!
print('At last!') # after the loop ends
```
### Simulation
```python
def guess(hidden):
"""De computer raadt een geheim getal
"""
comp_guess = choice(list(range(100))) # [0, ..., 99]
print("Ik koos", comp_guess) # print de keus
time.sleep(0.5) # pauzeer een halve seconde
if comp_guess == hidden: # base case, eindelijk...
print("Gevonden!") # de computer is blij :)
return 1 # poging
else: # recursive case
return 1 + guess(hidden) # volgende poging!
```
```
from random import choice
def escape(hidden):
guess = 0
count = 0
while guess != hidden:
guess = choice(range(100))
count += 1
return count
LC = [escape(42) for i in range(1000)]
```
Look at the first 10 results
```
LC[:10]
```
The fastest guess
```
min(LC)
```
The unluckiest run
```
max(LC)
```
The average number of guesses
```
sum(LC)/len(LC)
```
|
github_jupyter
|
```
# Setup directories
from pathlib import Path
basedir = Path().absolute()
libdir = basedir.parent.parent.parent
# Other imports
import pandas as pd
import numpy as np
from datetime import datetime
from ioos_qc.plotting import bokeh_plot_collected_results
from bokeh import plotting
from bokeh.io import output_notebook
# Install QC library
#!pip install git+git://github.com/ioos/ioos_qc.git
# # Alternative installation (install specific branch):
# !pip uninstall -y ioos_qc
# !pip install git+git://github.com/ioos/ioos_qc.git@new_configs
# # Alternative installation (run with local updates):
# !pip uninstall -y ioos_qc
# import sys
# sys.path.append(str(libdir))
```
## Configuration
```
erddap_server = 'https://ferret.pmel.noaa.gov/pmel/erddap'
dataset_id = 'sd1055'
```
## Get data from ERDDAP as an xarray object
```
from erddapy import ERDDAP
e = ERDDAP(
server=erddap_server,
protocol='tabledap',
)
e.response = 'csv'
e.dataset_id = dataset_id
ds = e.to_xarray()
ds
```
## Generate a QC configuration for each variable
```
# Dataset level metadata to drive climatology extraction
min_t = str(ds.time.min().dt.floor("D").dt.strftime("%Y-%m-%d").data)
max_t = str(ds.time.max().dt.ceil("D").dt.strftime("%Y-%m-%d").data)
min_x = float(ds.longitude.min().data)
min_y = float(ds.latitude.min().data)
max_x = float(ds.longitude.max().data)
max_y = float(ds.latitude.max().data)
bbox = [min_x, min_y, max_x, max_y]
# Configure how each variable's config will be generated
default_config = {
"bbox": bbox,
"start_time": min_t,
"end_time": max_t,
"tests": {
"spike_test": {
"suspect_threshold": "1",
"fail_threshold": "2"
},
"gross_range_test": {
"suspect_min": "min - std * 2",
"suspect_max": "max + std / 2",
"fail_min": "mean / std",
"fail_max": "mean * std"
}
}
}
# For any variable name or standard_name you can define a custom config
custom_config = {
'air_temperature': {
"variable": "air"
},
'air_pressure': {
"variable": "pres"
},
'relative_humidity': {
"variable": "rhum"
},
'sea_water_temperature': {
"variable": "temperature"
},
'sea_water_practical_salinity': {
"variable": "salinity"
},
'eastward_wind': {
"variable": "uwnd"
},
'northward_wind': {
"variable": "vwnd"
}
}
# Generate climatology configs
from ioos_qc.config_creator import CreatorConfig, QcConfigCreator, QcVariableConfig, QC_CONFIG_CREATOR_SCHEMA
creator_config = {
"datasets": [
{
"name": "ocean_atlas",
"file_path": "../../../resources/ocean_atlas.nc",
"variables": {
"o2": "o_an",
"salinity": "s_an",
"temperature": "t_an"
},
"3d": "depth"
},
{
"name": "narr",
"file_path": "../../../resources/narr.nc",
"variables": {
"air": "air",
"pres": "slp",
"rhum": "rhum",
"uwnd": "uwnd",
"vwnd": "vwnd"
}
}
]
}
cc = CreatorConfig(creator_config)
qccc = QcConfigCreator(cc)
# Break down variable by standard name
def not_stddev(v):
return v and not v.endswith(' SD')
#air_temp_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='air_temperature')
#pressure_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='air_pressure')
# humidity_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='relative_humidity')
# water_temp_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='sea_water_temperature')
# salinity_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='sea_water_practical_salinity')
# uwind_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='eastward_wind')
# vwind_vars = ds.filter_by_attrs(long_name=not_stddev, standard_name='northward_wind')
# all_vars = [air_temp_vars, pressure_vars, humidity_vars, water_temp_vars, salinity_vars, uwind_vars, vwind_vars]
# all_vars
air_temp = ['air_temperature']
pressure = ['air_pressure']
humidity = ['relative_humidity']
water_temp = ['sea_water_temperature']
salt = ['sea_water_practical_salinity']
u = ['eastward_wind']
v = ['northward_wind']
run_tests = air_temp + pressure + humidity + water_temp + salt + u + v
final_config = {}
for v in ds:
da = ds[v]
# Don't run tests for unknown variables
if 'standard_name' not in da.attrs or da.attrs['standard_name'] not in run_tests:
continue
# The standard names are identical for the mean and the stddev
# so ignore the stddev version of the variable
if v.endswith('_STDDEV'):
continue
config = default_config.copy()
min_t = str(da.time.min().dt.floor("D").dt.strftime("%Y-%m-%d").data)
max_t = str(da.time.max().dt.ceil("D").dt.strftime("%Y-%m-%d").data)
min_x = float(da.longitude.min().data)
min_y = float(da.latitude.min().data)
max_x = float(da.longitude.max().data)
max_y = float(da.latitude.max().data)
bbox = [min_x, min_y, max_x, max_y]
config["bbox"] = bbox
config["start_time"] = min_t
config["end_time"] = max_t
# Allow custom overrides on a variable name basis
if v in custom_config:
config.update(custom_config[v])
# Allow custom overrides on a standard_name name basis
if da.attrs['standard_name'] in custom_config:
config.update(custom_config[da.attrs['standard_name']])
# Generate the ioos_qc Config object
qc_var = QcVariableConfig(config)
qc_config = qccc.create_config(qc_var)
# Strip off the variable that create_config added
qc_config = list(qc_config.values())[0]
# Add it to the final config
final_config[v] = qc_config
final_config
from ioos_qc.config import Config
from ioos_qc.streams import XarrayStream
from ioos_qc.stores import NetcdfStore
from ioos_qc.results import collect_results
c = Config(final_config)
xs = XarrayStream(ds, time='time', lat='latitude', lon='longitude')
qc_results = xs.run(c)
list_results = collect_results(qc_results, how='list')
list_results
# output_notebook()
# plot = bokeh_plot_collected_results(list_results)
# plotting.show(plot)
```
Early stopping of model simulations
===================
For certain distance functions and certain models it is possible to calculate the
distance on-the-fly while the model is running. This is e.g. possible if the distance is calculated as a cumulative sum and the model is a stochastic process. For example, Markov Jump Processes belong to this class. However, we want to keep things simple here and only demonstrate how to use the pyABC interface in such cases. So don't expect a sophisticated (or even useful) model implementation here.
In this example we'll use in particular the following classes for integrated simulation and accepting/rejecting a parameter: `pyabc.IntegratedModel` and `pyabc.ModelResult`.
Let's start with the necessary imports:
```
# install if not done yet
!pip install pyabc --quiet
%matplotlib inline
import pyabc
from pyabc import (ABCSMC,
RV, Distribution,
IntegratedModel, ModelResult,
MedianEpsilon,
LocalTransition,
NoDistance)
from pyabc.sampler import SingleCoreSampler
import matplotlib.pyplot as plt
import os
import tempfile
import pandas as pd
import numpy as np
pyabc.settings.set_figure_params('pyabc') # for beautified plots
db_path = ("sqlite:///" +
os.path.join(tempfile.gettempdir(), "test.db"))
```
We define here a (very) simple stochastic process, purely for demonstrative reasons.
First, we fix the number of steps *n_steps* to 30.
```
n_steps = 30
```
We then define our process as follows:
$$
x(t+1) = x(t) + s \xi,
$$
in which $\xi \sim U(0, 1)$ denotes a random variable distributed uniformly in $[0, 1]$, and $s$ is the step size, $s = $ step_size.
The function `simulate` implements this stochastic process:
```
def simulate(step_size):
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
trajectory[t] = trajectory[t-1] + xi * step_size
return trajectory
```
We take as distance function between two such generated trajectories
the sum of the absolute values of the pointwise differences.
```
def distance(trajectory_1, trajectory_2):
return np.absolute(trajectory_1 - trajectory_2).sum()
```
Let's run the simulation and plot the trajectories to get a better
idea of the so generated data.
We set the ground truth step size *gt_step_size* to
```
gt_step_size = 5
```
This will be used to generate the data which will be subject to inference later on.
```
gt_trajectory = simulate(gt_step_size)
trajectory_2 = simulate(2)
dist_1_2 = distance(gt_trajectory, trajectory_2)
plt.plot(gt_trajectory,
label="Step size = {} (Ground Truth)".format(gt_step_size))
plt.plot(trajectory_2,
label="Step size = 2")
plt.legend();
plt.title("Distance={:.2f}".format(dist_1_2));
```
As you might have noted already we could calculate the distance on the fly.
After each step in the stochastic process, we could increment the cumulative sum.
This will supposedly save time in the ABC-SMC run later on.
Let's start with the code first and explain it afterwards.
```
class MyStochasticProcess(IntegratedModel):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.n_early_stopped = 0
def integrated_simulate(self, pars, eps):
cumsum = 0
trajectory = np.zeros(n_steps)
for t in range(1, n_steps):
xi = np.random.uniform()
next_val = trajectory[t-1] + xi * pars["step_size"]
cumsum += abs(next_val - gt_trajectory[t])
trajectory[t] = next_val
if cumsum > eps:
self.n_early_stopped += 1
return ModelResult(accepted=False)
return ModelResult(accepted=True,
distance=cumsum,
sum_stat={"trajectory": trajectory})
```
Our `MyStochasticProcess` class is a subclass of `IntegratedModel <pyabc.model.IntegratedModel>`.
The `__init__` method is not really necessary. Here, we just want to keep
track of how often early stopping has actually happened.
More interesting is the `integrated_simulate` method. This is where the real thing
happens.
As already said, we calculate the cumulative sum on the fly.
In each simulation step, we update the cumulative sum.
Note that *gt_trajectory* is actually a global variable here.
If *cumsum > eps* at some step of the simulation, we return immediately,
indicating that the parameter was not accepted
by returning `ModelResult(accepted=False)`.
If the *cumsum* never passed *eps*, the parameter got accepted. In this case
we return an accepted result together with the calculated distance and the trajectory.
Note that, while it is mandatory to return the distance, returning the trajectory is optional. If it is returned, it is stored in the database.
We define a uniform prior over the interval $[0, 10]$ over the step size
```
prior = Distribution(step_size=RV("uniform", 0, 10))
```
and create an instance of our integrated model `MyStochasticProcess`
```
model = MyStochasticProcess()
```
We then configure the ABC-SMC run.
As the distance function is calculated within `MyStochasticProcess`, we just pass
`None` to the `distance_function` parameter.
As sampler, we use the `SingleCoreSampler` here. We do so to correctly keep track of `MyStochasticProcess.n_early_stopped`. Otherwise, the counter gets incremented in subprocesses and we don't see anything here.
Of course, you could also use the `MyStochasticProcess` model in a multi-core or
distributed setting.
Importantly, we pre-specify the initial acceptance threshold to a given value, here to 300. Otherwise, pyABC will try to automatically determine it by drawing samples from the prior and evaluating the distance function.
However, we do not have a distance function here, so this approach would break down.
```
abc = ABCSMC(models=model,
parameter_priors=prior,
distance_function=NoDistance(),
sampler=SingleCoreSampler(),
population_size=30,
transitions=LocalTransition(k_fraction=.2),
eps=MedianEpsilon(300, median_multiplier=0.7))
```
We then indicate that we want to start a new ABC-SMC run:
```
abc.new(db_path)
```
We do not need to pass any data here. However, we could still pass additionally
a dictionary `{"trajectory": gt_trajectory}` only for storage purposes
to the `new` method. The data will however be ignored during the ABC-SMC run.
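A minimal sketch of that variant, reusing the objects defined above (this call would replace the plain `abc.new(db_path)` call above):
```python
# Optional: store the ground-truth trajectory alongside the run.
# The dictionary is kept in the database for later inspection only;
# it is ignored during the ABC-SMC run itself.
abc.new(db_path, {"trajectory": gt_trajectory})
```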
Then, let's start the sampling
```
h = abc.run(minimum_epsilon=40, max_nr_populations=3)
```
and check how often the early stopping was used:
```
model.n_early_stopped
```
Quite a lot actually.
Lastly we estimate KDEs of the different populations to inspect our results
and plot everything (the vertical dashed line is the ground truth step size).
```
from pyabc.visualization import plot_kde_1d
fig, ax = plt.subplots()
for t in range(h.max_t+1):
particles = h.get_distribution(m=0, t=t)
plot_kde_1d(*particles, "step_size",
label="t={}".format(t), ax=ax,
xmin=0, xmax=10, numx=300)
ax.axvline(gt_step_size, color="k", linestyle="dashed");
```
That's it. You should be able to see how the distribution
contracts around the true parameter.
### This notebook covers how to get statistics on videos returned for a list of search terms on YouTube with the use of YouTube Data API v3.
First go to [Google Developer](http://console.developers.google.com/) and enable YouTube Data API v3 by clicking on the button "+ ENABLE APIS AND SERVICES" and searching for YouTube Data API v3. Next, get your unique developer key from the Credentials settings. Then run the following on the command line to install the necessary client: `pip install --upgrade google-api-python-client`
There are only two things you need to modify in this notebook: 1) paste the developer key in the code below, and 2) change the search terms in the `keywords` list to whatever you want.
<img src="https://static.wixstatic.com/media/1ea3da_6e02db1850d845ec9e4325ee8c56eb12~mv2.png/v1/fill/w_1035,h_338,al_c,q_80,usm_0.66_1.00_0.01/1ea3da_6e02db1850d845ec9e4325ee8c56eb12~mv2.webp">
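Concretely, the only two lines you would edit in the cell below look like this (the key shown here is a placeholder):
```python
DEVELOPER_KEY = "YOUR-API-KEY"                              # 1) paste your own key between the quotes
keywords = ["bts", "blackpink", "twice", "one direction"]   # 2) replace with your own search terms
```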
```
from apiclient.discovery import build
from apiclient.errors import HttpError
from oauth2client.tools import argparser
import pandas as pd
import numpy as np
import pprint
import matplotlib.pyplot as plt
DEVELOPER_KEY = "" #paste key here in between ""
YOUTUBE_API_SERVICE_NAME = "youtube"
YOUTUBE_API_VERSION = "v3"
#input list of search terms of interest
keywords = ["bts","blackpink","twice","one direction"]
youtube = build(YOUTUBE_API_SERVICE_NAME, YOUTUBE_API_VERSION,developerKey=DEVELOPER_KEY)
max_results=50
order="relevance"
token=None
location=None
location_radius=None
title = []
channelId = []
channelTitle = []
categoryId = []
videoId = []
pubDate = []
viewCount = []
likeCount = []
dislikeCount = []
commentCount = []
favoriteCount = []
category = []
tags = []
videos = []
keyword = []
for q in keywords:
search_response = youtube.search().list(
q=q,
type="video",
pageToken=token,
order = order,
part="id,snippet",
maxResults=max_results,
location=location,
locationRadius=location_radius).execute()
for search_result in search_response.get("items", []):
keyword.append(q)
if search_result["id"]["kind"] == "youtube#video":
title.append(search_result['snippet']['title'])
videoId.append(search_result['id']['videoId'])
response = youtube.videos().list(
part='statistics, snippet',
id=search_result['id']['videoId']).execute()
channelId.append(response['items'][0]['snippet']['channelId'])
channelTitle.append(response['items'][0]['snippet']['channelTitle'])
pubDate.append(response['items'][0]['snippet']['publishedAt'])
categoryId.append(response['items'][0]['snippet']['categoryId'])
favoriteCount.append(response['items'][0]['statistics']['favoriteCount'])
viewCount.append(response['items'][0]['statistics']['viewCount'])
try:
likeCount.append(response['items'][0]['statistics']['likeCount'])
except:
likeCount.append("NaN")
try:
dislikeCount.append(response['items'][0]['statistics']['dislikeCount'])
except:
dislikeCount.append("NaN")
if 'commentCount' in response['items'][0]['statistics'].keys():
commentCount.append(response['items'][0]['statistics']['commentCount'])
else:
commentCount.append(0)
if 'tags' in response['items'][0]['snippet'].keys():
tags.append(response['items'][0]['snippet']['tags'])
else:
tags.append("No Tags")
youtube_dict = {'pubDate': pubDate,'tags': tags,'channelId': channelId,'channelTitle': channelTitle,'categoryId':categoryId,'title':title,'videoId':videoId,'viewCount':viewCount,'likeCount':likeCount,'dislikeCount':dislikeCount,'favoriteCount':favoriteCount, 'commentCount':commentCount, 'keyword':keyword}
df = pd.DataFrame(youtube_dict)
df.head()
df.shape
df['pubDate'] = pd.to_datetime(df.pubDate)
df['publishedDate'] = df['pubDate'].dt.strftime('%d/%m/%Y')
#rearranging order of columns
df1 = df[['keyword','publishedDate','title','viewCount','channelTitle','commentCount','likeCount','dislikeCount','tags','favoriteCount','videoId','channelId','categoryId']]
df1.columns = ['keyword','publishedDate','Title','viewCount','channelTitle','commentCount','likeCount','dislikeCount','tags','favoriteCount','videoId','channelId','categoryId']
df1.head()
df1.to_csv("youtube_bands4.csv") #download to local drive as a csv file
data = pd.read_csv("youtube_bands4.csv")
data.dtypes
list(data)
data = data.drop('Unnamed: 0',1)
list(data)
data[data.isna().any(axis=1)] #see how many rows has missing data
numeric_dtype = ['viewCount','commentCount','likeCount','dislikeCount','favoriteCount']
for i in numeric_dtype:
data[i] = data[i]/1000000 #converting to millions
data
#sorting the data by likeCount in descending order
data.sort_values("likeCount", axis = 0, ascending = False,
inplace = True, na_position ='last')
focus1 = data.head(10) #select top 10 results by likeCount
focus1
focus1.shape #check number of rows and columns
fig, ax = plt.subplots()
ax.barh(range(focus1.shape[0]),focus1['likeCount'])
ax.set_yticks(range(focus1.shape[0]))
ax.set_yticklabels(focus1['Title'])
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('likeCount in Millions')
plt.show()
#sorting the data by viewCount in descending order
data.sort_values("viewCount", axis = 0, ascending = False,
inplace = True, na_position ='last')
focus1 = data.head(10) #select top 10 results by viewCount
fig, ax = plt.subplots()
ax.barh(range(focus1.shape[0]),focus1['viewCount'])
ax.set_yticks(range(focus1.shape[0]))
ax.set_yticklabels(focus1['Title'])
ax.invert_yaxis() # labels read top-to-bottom
ax.set_xlabel('viewCount in Millions')
plt.show()
```
#### References:
https://medium.com/greyatom/youtube-data-in-python-6147160c5833
https://medium.com/@RareLoot/extract-youtube-video-statistics-based-on-a-search-query-308afd786bfe
https://developers.google.com/youtube/v3/getting-started
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os.path
import sys
import time
import tensorflow as tf
TRAIN_FILE = 'train.tfrecords'
def read_and_decode(filename_queue):
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)
features = tf.parse_single_example(
serialized_example,
# Defaults are not specified since both keys are required.
features={
'image_raw': tf.FixedLenFeature([],tf.string),
'age': tf.FixedLenFeature([], tf.int64),
'gender':tf.FixedLenFeature([], tf.int64)
})
# Convert from a scalar string tensor (whose single string has
# length mnist.IMAGE_PIXELS) to a uint8 tensor with shape
# [mnist.IMAGE_PIXELS].
image = tf.image.decode_jpeg(features['image_raw'], channels=3)
image = tf.image.resize_images(image,[128,128])
#image = tf.cast(image,tf.uint8)
# image.set_shape([mnist.IMAGE_PIXELS])
# OPTIONAL: Could reshape into a 28x28 image and apply distortions
# here. Since we are not applying any distortions in this
# example, and the next step expects the image to be flattened
# into a vector, we don't bother.
# Convert from [0, 255] -> [-0.5, 0.5] floats.
#image = tf.cast(image, tf.float32) * (1. / 255) - 0.5
# Convert label from a scalar uint8 tensor to an int32 scalar.
age = features['age']#tf.cast(features['age'], tf.int32)
gender = tf.cast(features['gender'], tf.int32)
return image, age, gender
def inputs(train, batch_size, num_epochs):
"""Reads input data num_epochs times.
Args:
train: Selects between the training (True) and validation (False) data.
batch_size: Number of examples per returned batch.
num_epochs: Number of times to read the input data, or 0/None to
train forever.
Returns:
A tuple (images, labels), where:
* images is a float tensor with shape [batch_size, mnist.IMAGE_PIXELS]
in the range [-0.5, 0.5].
* labels is an int32 tensor with shape [batch_size] with the true label,
a number in the range [0, mnist.NUM_CLASSES).
Note that an tf.train.QueueRunner is added to the graph, which
must be run using e.g. tf.train.start_queue_runners().
"""
if not num_epochs: num_epochs = None
# filename = os.path.join(FLAGS.train_dir,
# TRAIN_FILE if train else VALIDATION_FILE)
with tf.name_scope('input'):
filename_queue = tf.train.string_input_producer(
[TRAIN_FILE], num_epochs=num_epochs)
# Even when reading in multiple threads, share the filename
# queue.
image, age, gender = read_and_decode(filename_queue)
# Shuffle the examples and collect them into batch_size batches.
# (Internally uses a RandomShuffleQueue.)
# We run this in two threads to avoid being a bottleneck.
images, sparse_labels = tf.train.shuffle_batch(
[image, age], batch_size=batch_size, num_threads=2,
capacity=1000 + 3 * batch_size,
# Ensures a minimum amount of shuffling of examples.
min_after_dequeue=1000)
return images, sparse_labels
def run_training():
"""Train MNIST for a number of steps."""
# Tell TensorFlow that the model will be built into the default Graph.
with tf.Graph().as_default():
# Input images and labels.
images, labels = inputs(train=True, batch_size=16,
num_epochs=1)
# Build a Graph that computes predictions from the inference model.
#logits = mnist.inference(images,
# FLAGS.hidden1,
# FLAGS.hidden2)
# Add to the Graph the loss calculation.
#loss = mnist.loss(logits, labels)
# Add to the Graph operations that train the model.
#train_op = mnist.training(loss, FLAGS.learning_rate)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (the trained variables and the
# epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
try:
step = 0
while not coord.should_stop():
start_time = time.time()
# Run one step of the model. The return values are
# the activations from the `train_op` (which is
# discarded) and the `loss` op. To inspect the values
# of your ops or variables, you may include them in
# the list passed to sess.run() and the value tensors
# will be returned in the tuple from the call.
#_, loss_value = sess.run([train_op, loss])
images_val = sess.run(images)
print(images_val.shape)
duration = time.time() - start_time
# Print an overview fairly often.
#if step % 100 == 0:
# print('Step %d: loss = %.2f (%.3f sec)' % (step, loss_value,
# duration))
step += 1
except tf.errors.OutOfRangeError:
print('Done training for %d epochs, %d steps.' % (1, step))
finally:
# When done, ask the threads to stop.
coord.request_stop()
# Wait for threads to finish.
coord.join(threads)
sess.close()
images, labels = inputs(train=True, batch_size=16,
num_epochs=1)
# Build a Graph that computes predictions from the inference model.
#logits = mnist.inference(images,
# FLAGS.hidden1,
# FLAGS.hidden2)
# Add to the Graph the loss calculation.
#loss = mnist.loss(logits, labels)
# Add to the Graph operations that train the model.
#train_op = mnist.training(loss, FLAGS.learning_rate)
# The op for initializing the variables.
init_op = tf.group(tf.global_variables_initializer(),
tf.local_variables_initializer())
# Create a session for running operations in the Graph.
sess = tf.Session()
# Initialize the variables (the trained variables and the
# epoch counter).
sess.run(init_op)
# Start input enqueue threads.
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
images_val,age_val = sess.run([images,labels])
images_val[0].shape
import matplotlib.pyplot as plt
plt.imshow(images_val[6]/255.0)
plt.show()
sess.close()
```
```
import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''
import numpy as np
from numpy.random import default_rng
import random
import collections
import re
import tensorflow as tf
from tqdm import tqdm
max_seq_length_encoder = 512
max_seq_length_decoder = 128
masked_lm_prob = 0.2
max_predictions_per_seq = int(masked_lm_prob * max_seq_length_encoder)
do_whole_word_mask = True
EOS_ID = 1
MaskedLmInstance = collections.namedtuple(
'MaskedLmInstance', ['index', 'label']
)
class TrainingInstance(object):
"""A single training instance (sentence pair)."""
def __init__(self, tokens, tokens_y, masked_lm_positions, masked_lm_labels):
self.tokens = tokens
self.tokens_y = tokens_y
self.masked_lm_positions = masked_lm_positions
self.masked_lm_labels = masked_lm_labels
def sliding(strings, n = 5):
results = []
for i in range(len(strings) - n):
results.append(strings[i : i + n])
return results
def _get_ngrams(n, text):
ngram_set = set()
text_length = len(text)
max_index_ngram_start = text_length - n
for i in range(max_index_ngram_start + 1):
ngram_set.add(tuple(text[i : i + n]))
return ngram_set
def _get_word_ngrams(n, sentences):
assert len(sentences) > 0
assert n > 0
words = sum(sentences, [])
return _get_ngrams(n, words)
def cal_rouge(evaluated_ngrams, reference_ngrams):
reference_count = len(reference_ngrams)
evaluated_count = len(evaluated_ngrams)
overlapping_ngrams = evaluated_ngrams.intersection(reference_ngrams)
overlapping_count = len(overlapping_ngrams)
if evaluated_count == 0:
precision = 0.0
else:
precision = overlapping_count / evaluated_count
if reference_count == 0:
recall = 0.0
else:
recall = overlapping_count / reference_count
f1_score = 2.0 * ((precision * recall) / (precision + recall + 1e-8))
return {'f': f1_score, 'p': precision, 'r': recall}
def _rouge_clean(s):
return re.sub(r'[^a-zA-Z0-9 ]', '', s)
def get_rouges(strings, n = 1):
rouges = []
for i in range(len(strings)):
abstract = strings[i]
doc_sent_list = [strings[k] for k in range(len(strings)) if k != i]
sents = _rouge_clean(' '.join(doc_sent_list)).split()
abstract = _rouge_clean(abstract).split()
evaluated_1grams = _get_word_ngrams(n, [sents])
reference_1grams = _get_word_ngrams(n, [abstract])
rouges.append(cal_rouge(evaluated_1grams, reference_1grams)['f'])
return rouges
# Principal Select top-m scored sentences according to importance.
# As a proxy for importance we compute ROUGE1-F1 (Lin, 2004) between the sentence and the rest of the document
def get_rouge(strings, top_k = 1, minlen = 4):
rouges = get_rouges(strings)
s = np.argsort(rouges)[::-1]
s = [i for i in s if len(strings[i].split()) >= minlen]
return s[:top_k]
# Random Uniformly select m sentences at random.
def get_random(strings, top_k = 1):
return rng.choice(len(strings), size = top_k, replace = False)
# Lead Select the first m sentences.
def get_lead(strings, top_k = 1):
return [i for i in range(top_k)]
def combine(l):
r = []
for s in l:
if s[-1] != '.':
if s in ['[MASK]', '[MASK2]']:
e = ' .'
else:
e = '.'
s = s + e
r.append(s)
return ' '.join(r)
def is_number_regex(s):
if re.match(r'^\d+?\.\d+?$', s) is None:
return s.isdigit()
return True
def reject(token):
t = token.replace('##', '')
if is_number_regex(t):
return True
if t.startswith('RM'):
return True
if token in '!{<>}:;.,"\'':
return True
return False
def create_masked_lm_predictions(
tokens,
vocab_words,
rng,
):
"""Creates the predictions for the masked LM objective."""
cand_indexes = []
for (i, token) in enumerate(tokens):
if token == '[CLS]' or token == '[SEP]' or token == '[MASK2]':
continue
if reject(token):
continue
if (
do_whole_word_mask
and len(cand_indexes) >= 1
and token.startswith('##')
):
cand_indexes[-1].append(i)
else:
cand_indexes.append([i])
rng.shuffle(cand_indexes)
output_tokens = list(tokens)
num_to_predict = min(
max_predictions_per_seq,
max(1, int(round(len(tokens) * masked_lm_prob))),
)
masked_lms = []
covered_indexes = set()
for index_set in cand_indexes:
if len(masked_lms) >= num_to_predict:
break
if len(masked_lms) + len(index_set) > num_to_predict:
continue
is_any_index_covered = False
for index in index_set:
if index in covered_indexes:
is_any_index_covered = True
break
if is_any_index_covered:
continue
for index in index_set:
covered_indexes.add(index)
masked_token = None
# 80% of the time, replace with [MASK]
if rng.random() < 0.8:
masked_token = '[MASK]'
else:
# 10% of the time, keep original
if rng.random() < 0.5:
masked_token = tokens[index]
# 10% of the time, replace with random word
else:
masked_token = vocab_words[
np.random.randint(0, len(vocab_words) - 1)
]
output_tokens[index] = masked_token
masked_lms.append(
MaskedLmInstance(index = index, label = tokens[index])
)
assert len(masked_lms) <= num_to_predict
masked_lms = sorted(masked_lms, key = lambda x: x.index)
masked_lm_positions = []
masked_lm_labels = []
for p in masked_lms:
masked_lm_positions.append(p.index)
masked_lm_labels.append(p.label)
return (output_tokens, masked_lm_positions, masked_lm_labels)
def get_feature(x, y, tokenizer, vocab_words, rng, dedup_factor = 5, **kwargs):
tokens = tokenizer.tokenize(x)
if len(tokens) > (max_seq_length_encoder - 2):
tokens = tokens[:max_seq_length_encoder - 2]
if '[MASK2]' not in tokens:
return []
tokens = ['[CLS]'] + tokens + ['[SEP]']
tokens_y = tokenizer.tokenize(y)
if len(tokens_y) > (max_seq_length_decoder - 1):
tokens_y = tokens_y[:max_seq_length_decoder - 1]
tokens_y = tokenizer.convert_tokens_to_ids(tokens_y)
tokens_y = tokens_y + [EOS_ID]
results = []
for i in range(dedup_factor):
output_tokens, masked_lm_positions, masked_lm_labels = create_masked_lm_predictions(
tokens, vocab_words, rng, **kwargs
)
output_tokens = tokenizer.convert_tokens_to_ids(output_tokens)
masked_lm_labels = tokenizer.convert_tokens_to_ids(masked_lm_labels)
t = TrainingInstance(
output_tokens, tokens_y, masked_lm_positions, masked_lm_labels
)
results.append(t)
return results
def group_doc(data):
results, result = [], []
for i in data:
if not len(i) and len(result):
results.append(result)
result = []
else:
result.append(i)
if len(result):
results.append(result)
return results
def create_int_feature(values):
feature = tf.train.Feature(
int64_list = tf.train.Int64List(value = list(values))
)
return feature
def create_float_feature(values):
feature = tf.train.Feature(
float_list = tf.train.FloatList(value = list(values))
)
return feature
def write_instance_to_example_file(
instances,
output_file
):
writer = tf.python_io.TFRecordWriter(output_file)
for (inst_index, instance) in enumerate(instances):
input_ids = list(instance.tokens)
target_ids = list(instance.tokens_y)
while len(input_ids) < max_seq_length_encoder:
input_ids.append(0)
target_ids.append(0)
masked_lm_positions = list(instance.masked_lm_positions)
masked_lm_ids = list(instance.masked_lm_labels)
masked_lm_weights = [1.0] * len(masked_lm_ids)
while len(masked_lm_positions) < max_predictions_per_seq:
masked_lm_positions.append(0)
masked_lm_ids.append(0)
masked_lm_weights.append(0.0)
features = collections.OrderedDict()
features['input_ids'] = create_int_feature(input_ids)
features['targets_ids'] = create_int_feature(target_ids)
features['masked_lm_positions'] = create_int_feature(
masked_lm_positions
)
features['masked_lm_ids'] = create_int_feature(masked_lm_ids)
features['masked_lm_weights'] = create_float_feature(masked_lm_weights)
tf_example = tf.train.Example(
features = tf.train.Features(feature = features)
)
writer.write(tf_example.SerializeToString())
tf.logging.info('Wrote %d total instances', inst_index)
def process_documents(
file,
output_file,
tokenizer,
min_slide = 5,
max_slide = 13,
dedup_mask = 2,
):
with open(file) as fopen:
data = fopen.read().split('\n')
rng = default_rng()
vocab_words = list(tokenizer.vocab.keys())
grouped = group_doc(data)
results = []
for s in range(min_slide, max_slide, 1):
for r in tqdm(grouped):
slided = sliding(r, s)
X, Y = [], []
for i in range(len(slided)):
try:
strings = slided[i]
rouge_ = get_rouge(strings)
y = strings[rouge_[0]]
strings[rouge_[0]] = '[MASK2]'
x = combine(strings)
result = get_feature(
x,
y,
tokenizer,
vocab_words,
rng,
dedup_factor = dedup_mask,
)
results.extend(result)
except:
pass
write_instance_to_example_file(results, output_file)
import tokenization
tokenizer = tokenization.FullTokenizer(vocab_file = 'pegasus.wordpiece', do_lower_case=False)
file = 'dumping-cleaned-news.txt'
output_file = 'news.tfrecord'
min_slide = 5
max_slide = 13
dedup_mask = 5
with open(file) as fopen:
data = fopen.read().split('\n')
rng = default_rng()
vocab_words = list(tokenizer.vocab.keys())
grouped = group_doc(data)
results = []
for s in range(min_slide, max_slide, 1):
for r in tqdm(grouped[:100]):
slided = sliding(r, s)
X, Y = [], []
for i in range(len(slided)):
try:
strings = slided[i]
rouge_ = get_rouge(strings)
y = strings[rouge_[0]]
strings[rouge_[0]] = '[MASK2]'
x = combine(strings)
result = get_feature(
x,
y,
tokenizer,
vocab_words,
rng,
dedup_factor = dedup_mask,
)
results.extend(result)
except:
pass
write_instance_to_example_file(results, output_file)
!rm {output_file}
```
# Python Basics with Numpy (optional assignment)
Welcome to your first assignment. This exercise gives you a brief introduction to Python. Even if you've used Python before, this will help familiarize you with functions we'll need.
**Instructions:**
- You will be using Python 3.
- Avoid using for-loops and while-loops, unless you are explicitly told to do so.
- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work would not be graded if you change this. Each cell containing that comment should only contain one function.
- After coding your function, run the cell right below it to check if your result is correct.
**After this assignment you will:**
- Be able to use iPython Notebooks
- Be able to use numpy functions and numpy matrix/vector operations
- Understand the concept of "broadcasting"
- Be able to vectorize code
Let's get started!
## About iPython Notebooks ##
iPython Notebooks are interactive coding environments embedded in a webpage. You will be using iPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing "SHIFT"+"ENTER" or by clicking on "Run Cell" (denoted by a play symbol) in the upper bar of the notebook.
We will often specify "(≈ X lines of code)" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter.
**Exercise**: Set test to `"Hello World"` in the cell below to print "Hello World" and run the two cells below.
```
### START CODE HERE ### (≈ 1 line of code)
test = "Hello World"
### END CODE HERE ###
print ("test: " + test)
```
**Expected output**:
test: Hello World
<font color='blue'>
**What you need to remember**:
- Run your cells using SHIFT+ENTER (or "Run cell")
- Write code in the designated areas using Python 3 only
- Do not modify the code outside of the designated areas
## 1 - Building basic functions with numpy ##
Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.log, and np.reshape. You will need to know how to use these functions for future assignments.
### 1.1 - sigmoid function, np.exp() ###
Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp().
**Exercise**: Build a function that returns the sigmoid of a real number x. Use math.exp(x) for the exponential function.
**Reminder**:
$sigmoid(x) = \frac{1}{1+e^{-x}}$ is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.
<img src="images/Sigmoid.png" style="width:500px;height:228px;">
To refer to a function belonging to a specific package you could call it using package_name.function(). Run the code below to see an example with math.exp().
```
# GRADED FUNCTION: basic_sigmoid
import math
def basic_sigmoid(x):
"""
Compute sigmoid of x.
Arguments:
x -- A scalar
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+math.exp(-1*x))
### END CODE HERE ###
return s
basic_sigmoid(3)
```
**Expected Output**:
<table style = "width:40%">
<tr>
<td>** basic_sigmoid(3) **</td>
<td>0.9525741268224334 </td>
</tr>
</table>
Actually, we rarely use the "math" library in deep learning because the inputs of the functions are real numbers. In deep learning we mostly use matrices and vectors. This is why numpy is more useful.
```
### One reason why we use "numpy" instead of "math" in Deep Learning ###
x = [1, 2, 3]
basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
```
In fact, if $ x = (x_1, x_2, ..., x_n)$ is a row vector then $np.exp(x)$ will apply the exponential function to every element of x. The output will thus be: $np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})$
```
import numpy as np
# example of np.exp
x = np.array([1, 2, 3])
print(np.exp(x)) # result is (exp(1), exp(2), exp(3))
```
Furthermore, if x is a vector, then a Python operation such as $s = x + 3$ or $s = \frac{1}{x}$ will output s as a vector of the same size as x.
```
# example of vector operation
x = np.array([1, 2, 3])
print (x + 3)
```
Any time you need more info on a numpy function, we encourage you to look at [the official documentation](https://docs.scipy.org/doc/numpy-1.10.1/reference/generated/numpy.exp.html).
You can also create a new cell in the notebook and write `np.exp?` (for example) to get quick access to the documentation.
**Exercise**: Implement the sigmoid function using numpy.
**Instructions**: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices...) are called numpy arrays. You don't need to know more for now.
$$ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix}
x_1 \\
x_2 \\
... \\
x_n \\
\end{pmatrix} = \begin{pmatrix}
\frac{1}{1+e^{-x_1}} \\
\frac{1}{1+e^{-x_2}} \\
... \\
\frac{1}{1+e^{-x_n}} \\
\end{pmatrix}\tag{1} $$
```
# GRADED FUNCTION: sigmoid
import numpy as np # this means you can access numpy functions by writing np.function() instead of numpy.function()
def sigmoid(x):
"""
Compute the sigmoid of x
Arguments:
x -- A scalar or numpy array of any size
Return:
s -- sigmoid(x)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1/(1+np.exp(-1*x))
### END CODE HERE ###
return s
x = np.array([1, 2, 3])
sigmoid(x)
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid([1,2,3])**</td>
<td> array([ 0.73105858, 0.88079708, 0.95257413]) </td>
</tr>
</table>
### 1.2 - Sigmoid gradient
As you've seen in lecture, you will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.
**Exercise**: Implement the function sigmoid_grad() to compute the gradient of the sigmoid function with respect to its input x. The formula is: $$sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2}$$
You often code this function in two steps:
1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
2. Compute $\sigma'(x) = s(1-s)$
```
# GRADED FUNCTION: sigmoid_derivative
def sigmoid_derivative(x):
"""
Compute the gradient (also called the slope or derivative) of the sigmoid function with respect to its input x.
You can store the output of the sigmoid function into variables and then use it to calculate the gradient.
Arguments:
x -- A scalar or numpy array
Return:
ds -- Your computed gradient.
"""
### START CODE HERE ### (≈ 2 lines of code)
s = sigmoid(x)
ds = s*(1-s)
### END CODE HERE ###
return ds
x = np.array([1, 2, 3])
print ("sigmoid_derivative(x) = " + str(sigmoid_derivative(x)))
```
**Expected Output**:
<table>
<tr>
<td> **sigmoid_derivative([1,2,3])**</td>
<td> [ 0.19661193 0.10499359 0.04517666] </td>
</tr>
</table>
### 1.3 - Reshaping arrays ###
Two common numpy functions used in deep learning are [np.shape](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.shape.html) and [np.reshape()](https://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html).
- X.shape is used to get the shape (dimension) of a matrix/vector X.
- X.reshape(...) is used to reshape X into some other dimension.
For example, in computer science, an image is represented by a 3D array of shape $(length, height, depth = 3)$. However, when you read an image as the input of an algorithm you convert it to a vector of shape $(length*height*3, 1)$. In other words, you "unroll", or reshape, the 3D array into a 1D vector.
<img src="images/image2vector_kiank.png" style="width:500px;height:300;">
**Exercise**: Implement `image2vector()` that takes an input of shape (length, height, 3) and returns a vector of shape (length\*height\*3, 1). For example, if you would like to reshape an array v of shape (a, b, c) into a vector of shape (a*b,c) you would do:
``` python
v = v.reshape((v.shape[0]*v.shape[1], v.shape[2])) # v.shape[0] = a ; v.shape[1] = b ; v.shape[2] = c
```
- Please don't hardcode the dimensions of image as a constant. Instead look up the quantities you need with `image.shape[0]`, etc.
```
# GRADED FUNCTION: image2vector
def image2vector(image):
"""
Argument:
image -- a numpy array of shape (length, height, depth)
Returns:
v -- a vector of shape (length*height*depth, 1)
"""
### START CODE HERE ### (≈ 1 line of code)
v = image.reshape((image.shape[0]*image.shape[1]*image.shape[2], 1))
### END CODE HERE ###
return v
# This is a 3 by 3 by 2 array, typically images will be (num_px_x, num_px_y,3) where 3 represents the RGB values
image = np.array([[[ 0.67826139, 0.29380381],
[ 0.90714982, 0.52835647],
[ 0.4215251 , 0.45017551]],
[[ 0.92814219, 0.96677647],
[ 0.85304703, 0.52351845],
[ 0.19981397, 0.27417313]],
[[ 0.60659855, 0.00533165],
[ 0.10820313, 0.49978937],
[ 0.34144279, 0.94630077]]])
print ("image2vector(image) = " + str(image2vector(image)))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td> **image2vector(image)** </td>
<td> [[ 0.67826139]
[ 0.29380381]
[ 0.90714982]
[ 0.52835647]
[ 0.4215251 ]
[ 0.45017551]
[ 0.92814219]
[ 0.96677647]
[ 0.85304703]
[ 0.52351845]
[ 0.19981397]
[ 0.27417313]
[ 0.60659855]
[ 0.00533165]
[ 0.10820313]
[ 0.49978937]
[ 0.34144279]
[ 0.94630077]]</td>
</tr>
</table>
### 1.4 - Normalizing rows
Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to a better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to $ \frac{x}{\| x\|} $ (dividing each row vector of x by its norm).
For example, if $$x =
\begin{bmatrix}
0 & 3 & 4 \\
2 & 6 & 4 \\
\end{bmatrix}\tag{3}$$ then $$\| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix}
5 \\
\sqrt{56} \\
\end{bmatrix}\tag{4} $$and $$ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix}
0 & \frac{3}{5} & \frac{4}{5} \\
\frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\
\end{bmatrix}\tag{5}$$ Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it in part 5.
**Exercise**: Implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).
```
# GRADED FUNCTION: normalizeRows
def normalizeRows(x):
"""
Implement a function that normalizes each row of the matrix x (to have unit length).
Argument:
x -- A numpy matrix of shape (n, m)
Returns:
x -- The normalized (by row) numpy matrix. You are allowed to modify x.
"""
### START CODE HERE ### (≈ 2 lines of code)
# Compute x_norm as the norm 2 of x. Use np.linalg.norm(..., ord = 2, axis = ..., keepdims = True)
x_norm = np.linalg.norm(x, axis=1, keepdims = True)
# Divide x by its norm.
x = x/x_norm
### END CODE HERE ###
return x
x = np.array([
[0, 3, 4],
[1, 6, 4]])
print("normalizeRows(x) = " + str(normalizeRows(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **normalizeRows(x)** </td>
<td> [[ 0. 0.6 0.8 ]
[ 0.13736056 0.82416338 0.54944226]]</td>
</tr>
</table>
**Note**:
In normalizeRows(), you can try to print the shapes of x_norm and x, and then rerun the assessment. You'll find out that they have different shapes. This is normal given that x_norm takes the norm of each row of x. So x_norm has the same number of rows but only 1 column. So how did it work when you divided x by x_norm? This is called broadcasting and we'll talk about it now!
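For example, a quick illustrative check of those shapes, reusing the matrix from above:
```python
x = np.array([[0., 3., 4.],
              [1., 6., 4.]])
x_norm = np.linalg.norm(x, axis=1, keepdims=True)
print(x.shape, x_norm.shape)   # (2, 3) (2, 1)
print(x / x_norm)              # x_norm is broadcast across the 3 columns of x
```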
### 1.5 - Broadcasting and the softmax function ####
A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official [broadcasting documentation](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
**Exercise**: Implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes. You will learn more about softmax in the second course of this specialization.
**Instructions**:
- $ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix}
x_1 &&
x_2 &&
... &&
x_n
\end{bmatrix}) = \begin{bmatrix}
\frac{e^{x_1}}{\sum_{j}e^{x_j}} &&
\frac{e^{x_2}}{\sum_{j}e^{x_j}} &&
... &&
\frac{e^{x_n}}{\sum_{j}e^{x_j}}
\end{bmatrix} $
- $\text{for a matrix } x \in \mathbb{R}^{m \times n} \text{, $x_{ij}$ maps to the element in the $i^{th}$ row and $j^{th}$ column of $x$, thus we have: }$ $$softmax(x) = softmax\begin{bmatrix}
x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\
x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn}
\end{bmatrix} = \begin{bmatrix}
\frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\
\frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}}
\end{bmatrix} = \begin{pmatrix}
softmax\text{(first row of x)} \\
softmax\text{(second row of x)} \\
... \\
softmax\text{(last row of x)} \\
\end{pmatrix} $$
```
# GRADED FUNCTION: softmax
def softmax(x):
"""Calculates the softmax for each row of the input x.
Your code should work for a row vector and also for matrices of shape (n, m).
Argument:
x -- A numpy matrix of shape (n,m)
Returns:
s -- A numpy matrix equal to the softmax of x, of shape (n,m)
"""
### START CODE HERE ### (≈ 3 lines of code)
# Apply exp() element-wise to x. Use np.exp(...).
x_exp = np.exp(x)
# Create a vector x_sum that sums each row of x_exp. Use np.sum(..., axis = 1, keepdims = True).
x_sum = np.sum(x_exp, axis = 1, keepdims = True)
# Compute softmax(x) by dividing x_exp by x_sum. It should automatically use numpy broadcasting.
s = x_exp/x_sum
### END CODE HERE ###
return s
x = np.array([
[9, 2, 5, 0, 0],
[7, 5, 0, 0 ,0]])
print("softmax(x) = " + str(softmax(x)))
```
**Expected Output**:
<table style="width:60%">
<tr>
<td> **softmax(x)** </td>
<td> [[ 9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
1.21052389e-04]
[ 8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
8.01252314e-04]]</td>
</tr>
</table>
**Note**:
- If you print the shapes of x_exp, x_sum and s above and rerun the assessment cell, you will see that x_sum is of shape (2,1) while x_exp and s are of shape (2,5). **x_exp/x_sum** works due to python broadcasting.
Congratulations! You now have a pretty good understanding of python numpy and have implemented a few useful functions that you will be using in deep learning.
<font color='blue'>
**What you need to remember:**
- np.exp(x) works for any np.array x and applies the exponential function to every coordinate
- the sigmoid function and its gradient
- image2vector is commonly used in deep learning
- np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
- numpy has efficient built-in functions
- broadcasting is extremely useful
## 2) Vectorization
In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.
```
import time
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### CLASSIC DOT PRODUCT OF VECTORS IMPLEMENTATION ###
tic = time.process_time()
dot = 0
for i in range(len(x1)):
dot+= x1[i]*x2[i]
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC OUTER PRODUCT IMPLEMENTATION ###
tic = time.process_time()
outer = np.zeros((len(x1),len(x2))) # we create a len(x1)*len(x2) matrix with only zeros
for i in range(len(x1)):
for j in range(len(x2)):
outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC ELEMENTWISE IMPLEMENTATION ###
tic = time.process_time()
mul = np.zeros(len(x1))
for i in range(len(x1)):
mul[i] = x1[i]*x2[i]
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### CLASSIC GENERAL DOT PRODUCT IMPLEMENTATION ###
W = np.random.rand(3,len(x1)) # Random 3*len(x1) numpy array
tic = time.process_time()
gdot = np.zeros(W.shape[0])
for i in range(W.shape[0]):
for j in range(len(x1)):
gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
print ("gdot = " + str(gdot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
### VECTORIZED DOT PRODUCT OF VECTORS ###
tic = time.process_time()
dot = np.dot(x1,x2)
toc = time.process_time()
print ("dot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED OUTER PRODUCT ###
tic = time.process_time()
outer = np.outer(x1,x2)
toc = time.process_time()
print ("outer = " + str(outer) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED ELEMENTWISE MULTIPLICATION ###
tic = time.process_time()
mul = np.multiply(x1,x2)
toc = time.process_time()
print ("elementwise multiplication = " + str(mul) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
### VECTORIZED GENERAL DOT PRODUCT ###
tic = time.process_time()
dot = np.dot(W,x1)
toc = time.process_time()
print ("gdot = " + str(dot) + "\n ----- Computation time = " + str(1000*(toc - tic)) + "ms")
```
As you may have noticed, the vectorized implementation is much cleaner and more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
**Note** that `np.dot()` performs a matrix-matrix or matrix-vector multiplication. This is different from `np.multiply()` and the `*` operator (which is equivalent to `.*` in Matlab/Octave), which performs an element-wise multiplication.
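A tiny illustration of that difference, using the numpy import from above:
```python
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(np.dot(a, b))       # 32 -- inner (matrix-style) product: 1*4 + 2*5 + 3*6
print(np.multiply(a, b))  # [ 4 10 18] -- element-wise, same as a * b
```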
### 2.1 Implement the L1 and L2 loss functions
**Exercise**: Implement the numpy vectorized version of the L1 loss. You may find the function abs(x) (absolute value of x) useful.
**Reminder**:
- The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions ($ \hat{y} $) are from the true values ($y$). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost.
- L1 loss is defined as:
$$\begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m|y^{(i)} - \hat{y}^{(i)}| \end{align*}\tag{6}$$
```
# GRADED FUNCTION: L1
def L1(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L1 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.sum(np.abs(yhat - y))
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L1 = " + str(L1(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L1** </td>
<td> 1.1 </td>
</tr>
</table>
**Exercise**: Implement the numpy vectorized version of the L2 loss. There are several way of implementing the L2 loss but you may find the function np.dot() useful. As a reminder, if $x = [x_1, x_2, ..., x_n]$, then `np.dot(x,x)` = $\sum_{j=0}^n x_j^{2}$.
- L2 loss is defined as $$\begin{align*} & L_2(\hat{y},y) = \sum_{i=0}^m(y^{(i)} - \hat{y}^{(i)})^2 \end{align*}\tag{7}$$
```
# GRADED FUNCTION: L2
def L2(yhat, y):
"""
Arguments:
yhat -- vector of size m (predicted labels)
y -- vector of size m (true labels)
Returns:
loss -- the value of the L2 loss function defined above
"""
### START CODE HERE ### (≈ 1 line of code)
loss = np.dot(yhat - y, yhat - y)
### END CODE HERE ###
return loss
yhat = np.array([.9, 0.2, 0.1, .4, .9])
y = np.array([1, 0, 0, 1, 1])
print("L2 = " + str(L2(yhat,y)))
```
**Expected Output**:
<table style="width:20%">
<tr>
<td> **L2** </td>
<td> 0.43 </td>
</tr>
</table>
Congratulations on completing this assignment. We hope that this little warm-up exercise helps you in the future assignments, which will be more exciting and interesting!
<font color='blue'>
**What to remember:**
- Vectorization is very important in deep learning. It provides computational efficiency and clarity.
- You have reviewed the L1 and L2 loss.
- You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc...
# Explore the generated data
Here we explore the data that is generated with the [generate-data.ipynb](generate-data.ipynb) notebook.
You can either run the simulations or download the data set. See [README.md](README.md) for the download link and instructions.
### Joining the separate data files of one simulation together, example:
```python
# for example if the generated files have the following names:
# 'tmp/1d_alpha_vs_B_x_000.hdf',
# 'tmp/1d_alpha_vs_B_x_001.hdf',
# 'tmp/1d_alpha_vs_B_x_002.hdf', ...
# The following line will join the files and save the result as 'data/new_name.hdf'.
df = common.combine_dfs('tmp/1d_alpha_vs_B_x_*.hdf', 'data/new_name.hdf')
```
```
import holoviews as hv
import numpy as np
import pandas as pd
import common
hv.notebook_extension()
def add_energy_gs(df):
hbar = df.hbar.unique()[0]
eV = df.eV.unique()[0]
flux_quantum_over_2pi = hbar / (2 * eV) / (eV * 1e6)
df['E'] = df['currents'].apply(np.cumsum)
df['E'] *= flux_quantum_over_2pi
df['phase_gs_arg'] = df['E'].apply(np.argmin)
df['phase_gs'] = [row['phases'][row['phase_gs_arg']] for i, row in df.iterrows()]
# Move the phase_gs from -π to +π if they are within the tolerance
tol = np.diff(df['phases'].iloc[0]).max()
df['phase_gs'] = [-row['phase_gs'] if row['phase_gs'] < -(np.pi - tol) else row['phase_gs']
for i, row in df.iterrows()]
return df
```
# Data like Figure 4 but with all combinations
```
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['I'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
hv.Dimension('g', values=df.g.unique()),
hv.Dimension('alpha', values=df.alpha.unique()),
hv.Dimension('mu', values=df.mu.unique()),
hv.Dimension('disorder', values=df.disorder.unique()),
hv.Dimension('salt', values=df.salt.unique()),
hv.Dimension('B_x', values=df.B_x.unique())]
df = pd.read_hdf('data/I_c(B_x)_mu10,20meV_disorder0,75meV_T0.1K_all_combinations_of_effects.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, with orbital and SOI
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_orbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
First mode, no disorder, T=50mK, without orbital and SOI, Zeeman only
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
%%opts Curve (color='k') Scatter (s=200)
def plot(orbital, g, alpha, mu, disorder, salt, B_x):
gr = gb.get_group((orbital, g, alpha, mu, disorder, salt))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[0:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['I'])
IB = hv.Curve((gr.mu, gr.current_c), kdims=['potential'], vdims=['I_c'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('orbital', values=df.orbital.unique()),
hv.Dimension('g', values=df.g.unique()),
hv.Dimension('alpha', values=df.alpha.unique()),
hv.Dimension('mu', values=df.mu.unique()),
hv.Dimension('disorder', values=df.disorder.unique()),
hv.Dimension('salt', values=df.salt.unique()),
hv.Dimension('B_x', values=df.B_x.unique())]
```
First mode, no disorder, T=50mK, with orbital but no spin-orbital
```
df = pd.read_hdf('data/I_c(B_x)_mu10_disorder0_T0.05K_onlyorbital.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
df = pd.read_hdf('data/I_c(B_x)_mu5,10,20meV_disorder0,75meV_T0.05K_orbital_SOI_Zeeman.hdf')
df = add_energy_gs(df)
params = ['orbital', 'g', 'alpha', 'mu', 'disorder', 'salt']
gb = df.groupby(params)
hv.DynamicMap(plot, kdims=kdims)
```
# Different $T$, with or without leads, different lengths of the system
```
df2 = pd.read_hdf('data/I_c(B_x)_no_disorder_combinations_of_effects_and_geometries.hdf')
df2 = add_energy_gs(df2)
params = ['T', 'L', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
gb = df2.groupby(params)
%%opts Curve (color='k') Scatter (s=200)
def plot(T, L, orbital, g, alpha, mu, with_leads, B_x):
gr = gb.get_group((T, L, orbital, g, alpha, mu, with_leads))
gr = gr.set_index('B_x', drop=False)
x = gr.loc[B_x]
current = hv.Curve((gr.B_x, gr.current_c), kdims=['B_x'], vdims=['I_c'])[:, 0:]
phase_gs = hv.Curve((gr.B_x, gr.phase_gs), kdims=['B_x'], vdims=['theta_gs'])[:, -3.2:3.2]
cpr = hv.Curve((x.phases, x.currents), kdims=['phi'], vdims=['I'])
energy = hv.Curve((x.phases, x.E), kdims=['phi'], vdims=['E'])
E_min = hv.Scatter((x.phase_gs, x.E[x.phase_gs_arg]), kdims=['phi'], vdims=['E'])
VLine = hv.VLine(B_x)
return (current * VLine + phase_gs * VLine + cpr + energy * E_min).cols(2)
kdims = [hv.Dimension('T', values=df2['T'].unique()),
hv.Dimension('L', values=df2.L.unique()),
hv.Dimension('orbital', values=df2.orbital.unique()),
hv.Dimension('g', values=df2.g.unique()),
hv.Dimension('alpha', values=df2.alpha.unique()),
hv.Dimension('mu', values=df2.mu.unique()),
hv.Dimension('with_leads', values=df2.with_leads.unique()),
hv.Dimension('B_x', values=df2.B_x.unique())]
dm = hv.DynamicMap(plot, kdims=kdims)
dm
ds = hv.Dataset(df2)
ds.to.curve(['B_x'], ['current_c'], groupby=params, dynamic=True).overlay('L').select(B_x=(0, 0.5))
params = ['T', 'B_x', 'orbital', 'g', 'alpha', 'mu', 'with_leads']
curve = ds.to.curve(['L'], ['current_c'], groupby=params, dynamic=True)
curve.redim(current_c=dict(range=(0, None)))
```
# Rotation of field
```
%%opts Path [aspect='square']
df = pd.read_hdf('data/I_c(B_x)_mu20meV_rotation_of_field_in_xy_plane.hdf')
df = add_energy_gs(df)
df2 = common.drop_constant_columns(df)
ds = hv.Dataset(df2)
current = ds.to.curve(kdims='B', vdims='current_c', groupby=['theta', 'disorder']).redim(current_c=dict(range=(0, None)))
phase = ds.to.curve(kdims='B', vdims='phase_gs', groupby=['theta', 'disorder'])
current + phase
```
---
## In this notebook, images and their corresponding metadata are organized. We take note of the images that actually exist, combine them with the available metadata and the scraped follower counts, and, after merging and dropping image duplicates, obtain 7702 images in total.
```
import pandas as pd
import numpy as np
import os
from PIL import Image
import json
from pandas.io.json import json_normalize
import ast
IMAGE_DIR = "./images/training/resized/"
```
### Dataframe (df_imagename) of all existing images: 11181 Images
```
# Directory of museum folders
im_dirs = os.listdir(IMAGE_DIR)
folder = []
for f in im_dirs:
if f != '.DS_Store':
print(IMAGE_DIR+f)
folder = folder + os.listdir(IMAGE_DIR+f)
# df_imagename : Dataframe of existing images
df_imagename = pd.DataFrame({"filename": folder})
df_imagename.head()
print("Number of existing images: {}".format(df_imagename.filename.size))
# Takes metadata for museum and returns a dataframe
def load_metadata(file, folder):
data = json.load(file)
df = pd.DataFrame.from_dict(json_normalize(data), orient = 'columns')
df['museum'] = folder
df = df.rename(index=str, columns={"id": "insta_id"})
df.drop(labels = ['comments_disabled', 'edge_media_preview_like.count',
'edge_media_to_caption.edges', 'edge_media_to_comment.count', 'is_video', 'thumbnail_resources', 'thumbnail_src', 'urls',
'video_view_count'], axis = 1, inplace = True)
df['display_url'] = df['display_url'].str.split('/').str[-1]
return df
```
### Dataframe (df) of images in metadata: Metadata for 8362 images
```
# Load all the metadata
df = pd.DataFrame()
for folder in im_dirs:
if folder != ".DS_Store":
print("Loading {} images".format(folder))
meta_file = open("{image_dir}{folder}/{folder}.json".format(image_dir=IMAGE_DIR, folder = folder))
if df.empty:
df = load_metadata(meta_file, folder)
else:
df = pd.concat([df, load_metadata(meta_file, folder)], ignore_index = True)
columns = ['height',
'width',
'filename',
'liked_count',
'insta_id',
'user_id',
'shortcode',
'tags',
'timestamp',
'museum']
df.to_csv('./images/training/data/merged_metadata.csv', header = columns)
df.head()
print("Number of images in metadata: {}".format(df.shortcode.size))
```
## Script for scraping follower counts. Some of the shortcodes used were not valid, possibly because the images were removed.
```
from selenium import webdriver
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from lxml import html
import csv
def write_out_csv(data, filename_base, fieldnames):
print("Writing to output file %s.csv" % filename_base)
with open("%s.csv" % filename_base, "w") as csvfile:
fields = fieldnames
writer = csv.DictWriter(csvfile, fieldnames=fields)
writer.writeheader()
for row in data:
writer.writerow(row)
def scrape_followers(lst, output_filename):
instagram_data = []
error_sc = []
for code in lst:
url = "https://instagram.com/p/" + code
try:
browser.get(url)
elem = wait.until(
EC.element_to_be_clickable(
(By.XPATH, '//div[@class = "e1e1d"]//a[@class = "FPmhX notranslate nJAzx"]')
)
)
elem.click()
elem = wait.until(
EC.element_to_be_clickable((By.XPATH, '//div[@class = "v9tJq "]'))
)
el = browser.find_element_by_xpath("//*")
parser = html.fromstring(el.get_attribute("outerHTML"))
# print(el.get_attribute("outerHTML"))
raw_followers = parser.xpath(
'.//ul[@class="k9GMp "]/li[position()=2]//span[@class = "g47SY "]/@title'
)[0].replace(",", "")
data = {"shortcode": code, "followers": int(raw_followers)}
instagram_data.append(data)
except:
error_sc.append(code)
pass
browser.close()
fields = ["shortcode", "followers"]
print(error_sc)
write_out_csv(instagram_data, "{}".format(output_filename), fields)
# Uncomment the code below to run scraping for a list of shortcodes
# Load the shortcodes of images for which the followers was not scraped
# with open('error_sc4.txt', 'r') as f:
# error_sc4 = ast.literal_eval(f.read())
# print(len(error_sc4))
# browser = webdriver.Chrome()
# wait = WebDriverWait(browser, 15)
# scrape_followers(error_sc4, "followers4")
```
### Dataframe (df_followers) of follower number for each shortcode: 8138 counts, 8068 shortcodes are unique
```
# Follower counts are merged
# lst_followers = [pd.read_csv("followers.csv"), pd.read_csv("followers2.csv"), pd.read_csv("followers3.csv"), pd.read_csv("followers4.csv")]
# df_followers = pd.concat(lst_followers, ignore_index = True)
# df_followers.to_csv("scraped_follower_counts.csv")
# Follower count df: df_followers
# Metadata df: df_images
df_followers = pd.read_csv("./images/training/data/scraped_follower_counts.csv")
df_images = pd.read_csv("./images/training/data/merged_metadata.csv")
print("Number of Follower counts", df_followers.shortcode.size)
print("Number of Follower counts based on unique shortcodes", df_followers.shortcode.unique().size)
print("Number of Images with metadata", df_images.shortcode.size)
print("Number of actual Images", df_imagename.size)
```
### Dataframe (df_final): merge metadata with scraped follower counts.
```
df_final = df_followers.merge(df_images, on = "shortcode")
print("From Metadata - Number of unique filenames: {}".format(df_images.filename.unique().size))
print("From Metadata - Number of filenames: {}".format(df_images.filename.size))
print("Metadata + Followers - Number of unique filenames : {}".format(df_final.filename.unique().size))
print("Metadata + Followers - Number of filenames: {}".format(df_final.filename.size))
df_final.drop_duplicates(subset = ["shortcode"], inplace = True)
df_final.shortcode.unique().size
df_final.shortcode.size
df_final['score'] = df_final.liked_count/df_final.followers
df_final = df_final[df_final['score'] != float('inf')]
print("min: {}, max: {}".format(min(df_final.score), max(df_final.score)))
df_final['norm_score'] = (df_final['score'] - min(df_final.score))/(max(df_final.score) - min(df_final.score))
print("normalized - min: {}, max: {}".format(min(df_final.norm_score), max(df_final.norm_score)))
df_final.head()
```
### Dataframe (df_final) -- existing images merged with metadata images:
```
df_final = df_final.merge(df_imagename, on="filename")
print(df_imagename.filename.unique().size)
print(df_imagename.filename.size)
df_final.filename.unique().size
df_final = df_final.sort_values(by = "score", ascending=False)[['filename', 'museum', 'score', 'liked_count', 'followers', 'norm_score']]
df_final.drop_duplicates(subset = "filename", inplace = True)
print("Number of existing images merged with metadata: {}".format(df_final.filename.size))
df_final.to_csv('./images/training/data/image_data_final.csv')
df_final = pd.read_csv('./images/training/data/image_data_final.csv')
df.filename.size
# Dataframe of follower counts
df_followers = pd.read_csv("./images/training/data/scraped_follower_counts.csv")
df_followers.head()
# Dataframe of metadata
df_images = pd.read_csv("./images/training/data/merged_metadata.csv")
df_images.head()
# Final dataframe of images that are existing and have follower counts and metadata
df_final = pd.read_csv('./images/training/data/image_data_final.csv')
df_final.head()
```
---
<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-iss.png' width=15% style="float: right;">
<img src='https://www.iss.nus.edu.sg/Sitefinity/WebsiteTemplates/ISS/App_Themes/ISS/Images/branding-nus.png' width=15% style="float: right;">
---
```
import IPython.display
IPython.display.YouTubeVideo('leVZjVahdKs')
```
# 如何使用和开发微信聊天机器人的系列教程
# A workshop to develop & use an intelligent and interactive chat-bot in WeChat
### WeChat is a popular social media app, which has more than 800 million monthly active users.
<img src='https://www.iss.nus.edu.sg/images/default-source/About-Us/7.6.1-teaching-staff/sam-website.tmb-.png' width=8% style="float: right;">
<img src='reference/WeChat_SamGu_QR.png' width=10% style="float: right;">
by: GU Zhan (Sam)
October 2018 : Update to support Python 3 in local machine, e.g. iss-vm.
April 2017 ======= Scan the QR code to become trainer's friend in WeChat =====>>
### 第五课:视频识别和处理
### Lesson 5: Video Recognition & Processing
* 识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as "dog", "flower" or "car")
* 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)
* 识别受限内容 (Explicit Content Detection: Detect adult content within a video)
* 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)
### Using Google Cloud Platform's Machine Learning APIs
From the same API console, choose "Dashboard" on the left-hand menu and "Enable API".
Enable the following APIs for your project (search for them) if they are not already enabled:
<ol>
**<li> Google Cloud Video Intelligence API </li>**
</ol>
Finally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)
```
# Copyright 2016 Google Inc. Licensed under the Apache License, Version 2.0 (the "License");
# !pip install --upgrade google-api-python-client
```
---
### 短片预览 / Video viewing
```
# 多媒体文件的二进制base64码转换 (Define media pre-processing functions)
# Import the base64 encoding library.
import base64, io, sys, IPython.display
# Python 2
if sys.version_info[0] < 3:
import urllib2
# Python 3
else:
import urllib.request
# Pass the media data to an encoding function.
def encode_media(media_file):
with io.open(media_file, "rb") as media_file:
media_content = media_file.read()
# Python 2
if sys.version_info[0] < 3:
return base64.b64encode(media_content).decode('ascii')
# Python 3
else:
return base64.b64encode(media_content).decode('utf-8')
video_file = 'reference/video_IPA.mp4'
# video_file = 'reference/SampleVideo_360x240_1mb.mp4'
# video_file = 'reference/SampleVideo_360x240_2mb.mp4'
IPython.display.HTML(data=
'''<video alt="test" controls><source src="data:video/mp4;base64,{0}" type="video/mp4" /></video>'''
.format(encode_media(video_file)))
```
---
## <span style="color:blue">Install the client library</span> for Video Intelligence / Processing
```
!pip install --upgrade google-cloud-videointelligence
```
---
```
# Imports the Google Cloud client library
from google.cloud import videointelligence
# [Optional] Display location of service account API key if defined in GOOGLE_APPLICATION_CREDENTIALS
!echo $GOOGLE_APPLICATION_CREDENTIALS
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
```
### * 识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as "dog", "flower" or "car")
https://cloud.google.com/video-intelligence/docs/analyze-labels
didi_video_label_detection()
```
from google.cloud import videointelligence
def didi_video_label_detection(path):
"""Detect labels given a local file path. (Demo)"""
""" Detects labels given a GCS path. (Exercise / Workshop Enhancement)"""
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.LABEL_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
operation = video_client.annotate_video(
features=features, input_content=input_content)
print('\nProcessing video for label annotations:')
result = operation.result(timeout=90)
print('\nFinished processing.')
# Process video/segment level label annotations
segment_labels = result.annotation_results[0].segment_label_annotations
for i, segment_label in enumerate(segment_labels):
print('Video label description: {}'.format(
segment_label.entity.description))
for category_entity in segment_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
for i, segment in enumerate(segment_label.segments):
start_time = (segment.segment.start_time_offset.seconds +
segment.segment.start_time_offset.nanos / 1e9)
end_time = (segment.segment.end_time_offset.seconds +
segment.segment.end_time_offset.nanos / 1e9)
positions = '{}s to {}s'.format(start_time, end_time)
confidence = segment.confidence
print('\tSegment {}: {}'.format(i, positions))
print('\tConfidence: {}'.format(confidence))
print('\n')
# Process shot level label annotations
shot_labels = result.annotation_results[0].shot_label_annotations
for i, shot_label in enumerate(shot_labels):
print('Shot label description: {}'.format(
shot_label.entity.description))
for category_entity in shot_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
for i, shot in enumerate(shot_label.segments):
start_time = (shot.segment.start_time_offset.seconds +
shot.segment.start_time_offset.nanos / 1e9)
end_time = (shot.segment.end_time_offset.seconds +
shot.segment.end_time_offset.nanos / 1e9)
positions = '{}s to {}s'.format(start_time, end_time)
confidence = shot.confidence
print('\tSegment {}: {}'.format(i, positions))
print('\tConfidence: {}'.format(confidence))
print('\n')
# Process frame level label annotations
frame_labels = result.annotation_results[0].frame_label_annotations
for i, frame_label in enumerate(frame_labels):
print('Frame label description: {}'.format(
frame_label.entity.description))
for category_entity in frame_label.category_entities:
print('\tLabel category description: {}'.format(
category_entity.description))
# Each frame_label_annotation has many frames,
# here we print information only about the first frame.
frame = frame_label.frames[0]
time_offset = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
print('\tFirst frame time offset: {}s'.format(time_offset))
print('\tFirst frame confidence: {}'.format(frame.confidence))
print('\n')
return segment_labels, shot_labels, frame_labels
# video_file = 'reference/video_IPA.mp4'
didi_segment_labels, didi_shot_labels, didi_frame_labels = didi_video_label_detection(video_file)
didi_segment_labels
didi_shot_labels
didi_frame_labels
```
### * 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)
https://cloud.google.com/video-intelligence/docs/shot_detection
didi_video_shot_detection()
```
from google.cloud import videointelligence
def didi_video_shot_detection(path):
""" Detects camera shot changes given a local file path """
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.SHOT_CHANGE_DETECTION]
# features = [videointelligence.enums.Feature.LABEL_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
# operation = video_client.annotate_video(path, features=features)
operation = video_client.annotate_video(features=features, input_content=input_content)
print('\nProcessing video for shot change annotations:')
result = operation.result(timeout=180)
print('\nFinished processing.')
for i, shot in enumerate(result.annotation_results[0].shot_annotations):
start_time = (shot.start_time_offset.seconds +
shot.start_time_offset.nanos / 1e9)
end_time = (shot.end_time_offset.seconds +
shot.end_time_offset.nanos / 1e9)
print('\tShot {}: {} to {}'.format(i, start_time, end_time))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_shot_detection(video_file)
didi_result
```
### * 识别受限内容 (Explicit Content Detection: Detect adult content within a video)
didi_video_safesearch_detection()
```
from google.cloud import videointelligence
def didi_video_safesearch_detection(path):
""" Detects explicit content given a local file path. """
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.EXPLICIT_CONTENT_DETECTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
# operation = video_client.annotate_video(path, features=features)
operation = video_client.annotate_video(features=features, input_content=input_content)
print('\nProcessing video for explicit content annotations:')
result = operation.result(timeout=90)
print('\nFinished processing.')
likely_string = ("Unknown", "Very unlikely", "Unlikely", "Possible",
"Likely", "Very likely")
# first result is retrieved because a single video was processed
for frame in result.annotation_results[0].explicit_annotation.frames:
frame_time = frame.time_offset.seconds + frame.time_offset.nanos / 1e9
print('Time: {}s'.format(frame_time))
print('\tpornography: {}'.format(
likely_string[frame.pornography_likelihood]))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_safesearch_detection(video_file)
```
### <span style="color:red">[ Beta Features ]</span> * 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)
https://cloud.google.com/video-intelligence/docs/beta
Cloud Video Intelligence API includes the following beta features in version v1p1beta1:
Speech Transcription - the Video Intelligence API can transcribe speech to text from the audio in supported video files. Learn more.
```
# Beta Features: videointelligence_v1p1beta1
from google.cloud import videointelligence_v1p1beta1 as videointelligence
def didi_video_speech_transcription(path):
"""Transcribe speech given a local file path."""
##################################################################
# (1) Instantiates a client - using GOOGLE_APPLICATION_CREDENTIALS
# video_client = videointelligence.VideoIntelligenceServiceClient()
#
# (2) Instantiates a client - using 'service account json' file
video_client = videointelligence.VideoIntelligenceServiceClient.from_service_account_json(
"/media/sf_vm_shared_folder/000-cloud-api-key/mtech-ai-7b7e049cf5f6.json")
##################################################################
features = [videointelligence.enums.Feature.SPEECH_TRANSCRIPTION]
with io.open(path, 'rb') as movie:
input_content = movie.read()
config = videointelligence.types.SpeechTranscriptionConfig(
language_code='en-US',
enable_automatic_punctuation=True)
video_context = videointelligence.types.VideoContext(
speech_transcription_config=config)
# operation = video_client.annotate_video(
# input_uri,
# features=features,
# video_context=video_context)
operation = video_client.annotate_video(
features=features,
input_content=input_content,
video_context=video_context)
print('\nProcessing video for speech transcription.')
result = operation.result(timeout=180)
# There is only one annotation_result since only
# one video is processed.
annotation_results = result.annotation_results[0]
speech_transcription = annotation_results.speech_transcriptions[0]
if str(speech_transcription) == '': # result.annotation_results[0].speech_transcriptions[0] == ''
print('\nNOT FOUND: video for speech transcription.')
else:
alternative = speech_transcription.alternatives[0]
print('Transcript: {}'.format(alternative.transcript))
print('Confidence: {}\n'.format(alternative.confidence))
print('Word level information:')
for word_info in alternative.words:
word = word_info.word
start_time = word_info.start_time
end_time = word_info.end_time
print('\t{}s - {}s: {}'.format(
start_time.seconds + start_time.nanos * 1e-9,
end_time.seconds + end_time.nanos * 1e-9,
word))
return result
# video_file = 'reference/video_IPA.mp4'
didi_result = didi_video_speech_transcription(video_file)
didi_result
```
---
## <span style="color:blue">Wrap cloud APIs into Functions() for conversational virtual assistant (VA):</span>
Reuse above defined Functions().
```
def didi_video_processing(video_file):
didi_video_reply = u'[ Video 视频处理结果 ]\n\n'
didi_video_reply += u'[ didi_video_label_detection 识别视频消息中的物体名字 ]\n\n' \
+ str(didi_video_label_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_shot_detection 识别视频的场景片段 ]\n\n' \
+ str(didi_video_shot_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_safesearch_detection 识别受限内容 ]\n\n' \
+ str(didi_video_safesearch_detection(video_file)) + u'\n\n'
didi_video_reply += u'[ didi_video_speech_transcription 生成视频字幕 ]\n\n' \
+ str(didi_video_speech_transcription(video_file)) + u'\n\n'
return didi_video_reply
# [Optional] Agile testing:
# parm_video_response = didi_video_processing(video_file)
# print(parm_video_response)
```
**Define a global variable for future 'video search' function enhancement**
```
parm_video_response = {} # Define a global variable for future 'video search' function enhancement
```
---
## <span style="color:blue">Start interactive conversational virtual assistant (VA):</span>
### Import ItChat, etc. 导入需要用到的一些功能程序库:
```
import itchat
from itchat.content import *
```
### Log in using QR code image / 用微信App扫QR码图片来自动登录
```
# itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。 (Keep the login state after exit, so restarting within a short period does not require scanning the QR code again.)
itchat.auto_login(enableCmdQR=-2) # enableCmdQR=-2: 命令行显示QR图片 (display the QR code in the command line)
# @itchat.msg_register([VIDEO], isGroupChat=True)
@itchat.msg_register([VIDEO])
def download_files(msg):
msg.download(msg.fileName)
print('\nDownloaded video file name is: %s' % msg['FileName'])
##############################################################################################################
# call video analysis APIs #
##############################################################################################################
global parm_video_response # save into global variable, which can be accessed by next WeChat keyword search
# python 2 version WeChat Bot
# parm_video_response = KudosData_VIDEO_DETECTION(encode_media(msg['FileName']))
# python 3 version WeChat Bot
parm_video_response = didi_video_processing(msg['FileName'])
##############################################################################################################
# format video API results #
##############################################################################################################
# python 2 version WeChat Bot
# video_analysis_reply = KudosData_video_generate_reply(parm_video_response)
# python 3 version WeChat Bot
    video_analysis_reply = parm_video_response # Exercise / Workshop Enhancement: To parse and format the result nicely.
print ('')
print(video_analysis_reply)
return video_analysis_reply
itchat.run()
```
---
```
# interrupt kernel, then logout
itchat.logout() # 安全退出 (safe logout)
```
---
## <span style="color:blue">Exercise / Workshop Enhancement:</span>
<font color='blue'>
<font color='blue'>
[提问 1] 使用文字来搜索视频内容?需要怎么处理?
[Question 1] Can we use text (keywords) as input to search video content? How?
</font>
<font color='blue'>
<font color='blue'>
[提问 2] 使用图片来搜索视频内容?需要怎么处理?
[Question 2] Can we use an image as input to search video content? How?
</font>
```
'''
# Private conversational mode / 单聊模式,基于关键词进行视频搜索:
@itchat.msg_register([TEXT])
def text_reply(msg):
# if msg['isAt']:
list_keywords = [x.strip() for x in msg['Text'].split(',')]
# call video search function:
search_responses = KudosData_search(list_keywords) # return is a list
# Format search results:
search_reply = u'[ Video Search 视频搜索结果 ]' + '\n'
if len(search_responses) == 0:
search_reply += u'[ Nill 无结果 ]'
else:
for i in range(len(search_responses)): search_reply += '\n' + str(search_responses[i])
print ('')
print (search_reply)
return search_reply
'''
'''
# Group conversational mode / 群聊模式,基于关键词进行视频搜索:
@itchat.msg_register([TEXT], isGroupChat=True)
def text_reply(msg):
if msg['isAt']:
list_keywords = [x.strip() for x in msg['Text'].split(',')]
# call video search function:
search_responses = KudosData_search(list_keywords) # return is a list
# Format search results:
search_reply = u'[ Video Search 视频搜索结果 ]' + '\n'
if len(search_responses) == 0:
search_reply += u'[ Nill 无结果 ]'
else:
for i in range(len(search_responses)): search_reply += '\n' + str(search_responses[i])
print ('')
print (search_reply)
return search_reply
'''
```
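The templates above call a `KudosData_search()` helper that is not defined in this notebook. A minimal sketch of one possible implementation is shown below; it assumes `parm_video_response` holds the plain-text analysis string produced by `didi_video_processing()`, and simply returns the lines containing any of the given keywords. It is an illustrative sketch, not the original workshop code.
```
# A minimal sketch of a keyword-based video search helper (illustrative, not the original workshop code).
# Assumption: parm_video_response is the plain-text string produced by didi_video_processing().
def KudosData_search(list_keywords):
    responses = []
    if not isinstance(parm_video_response, str):
        return responses  # no video has been processed yet
    for line in parm_video_response.splitlines():
        if any(keyword.lower() in line.lower() for keyword in list_keywords):
            responses.append(line.strip())
    return responses  # a list, as expected by the text_reply() handlers above
```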
### 恭喜您!已经完成了: (Congratulations! You have completed:)
### 第五课:视频识别和处理
### Lesson 5: Video Recognition & Processing
* 识别视频消息中的物体名字 (Label Detection: Detect entities within the video, such as "dog", "flower" or "car")
* 识别视频的场景片段 (Shot Change Detection: Detect scene changes within the video)
* 识别受限内容 (Explicit Content Detection: Detect adult content within a video)
* 生成视频字幕 (Video Transcription BETA: Transcribes video content in English)
### 下一课是: (Next lesson:)
### 第六课:交互式虚拟助手的智能应用
### Lesson 6: Interactive Conversational Virtual Assistant Applications / Intelligent Process Automations
* 虚拟员工: 贷款填表申请审批一条龙自动化流程 (Virtual Worker: When Chat-bot meets RPA-bot for mortgage loan application automation)
* 虚拟员工: 文字指令交互(Conversational automation using text/message command)
* 虚拟员工: 语音指令交互(Conversational automation using speech/voice command)
* 虚拟员工: 多种语言交互(Conversational automation with multiple languages)
<img src='reference/WeChat_SamGu_QR.png' width=80% style="float: left;">
---
---
# Sequence to Sequence attention model for machine translation
This notebook trains a sequence-to-sequence (seq2seq) model with two different attention mechanisms for Spanish-to-English translation.
The code builds on the TensorFlow Core tutorial: https://www.tensorflow.org/tutorials/text/nmt_with_attention
```
import tensorflow as tf
print(tf.__version__)
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
from sklearn.model_selection import train_test_split
import unicodedata
import re
import numpy as np
import os
import io
import time
```
# Load data set
* Clean the sentences by removing special characters.
* Add a start and end token to each sentence.
* Create a word index and reverse word index (dictionaries mapping from word → id and id → word).
* Pad each sentence to a maximum length.
```
# Download the file
path_to_zip = tf.keras.utils.get_file(
'spa-eng.zip', origin='http://storage.googleapis.com/download.tensorflow.org/data/spa-eng.zip',
extract=True)
path_to_file = os.path.dirname(path_to_zip)+"/spa-eng/spa.txt"
# Converts the unicode file to ascii
def unicode_to_ascii(s):
return ''.join(c for c in unicodedata.normalize('NFD', s)
if unicodedata.category(c) != 'Mn')
def preprocess_sentence(w):
w = unicode_to_ascii(w.lower().strip())
# creating a space between a word and the punctuation following it
# eg: "he is a boy." => "he is a boy ."
# Reference:- https://stackoverflow.com/questions/3645931/python-padding-punctuation-with-white-spaces-keeping-punctuation
w = re.sub(r"([?.!,¿])", r" \1 ", w)
w = re.sub(r'[" "]+', " ", w)
# replacing everything with space except (a-z, A-Z, ".", "?", "!", ",","¿")
w = re.sub(r"[^a-zA-Z?.!,¿]+", " ", w)
# remove extra space
w = w.strip()
# adding a start and an end token to the sentence
    # so that the model knows when to start and stop predicting.
w = '<start> ' + w + ' <end>'
return w
en_sentence = u"May I borrow this @ book?"
sp_sentence = u"¿Puedo tomar prestado este libro?"
print(preprocess_sentence(en_sentence))
print(preprocess_sentence(sp_sentence))
print(preprocess_sentence(sp_sentence).encode("UTF-8"))
# Return word pairs in the format: [ENGLISH, SPANISH]
def create_dataset(path, num_examples):
lines = io.open(path, encoding='UTF-8').read().strip().split('\n')
word_pairs = [[preprocess_sentence(w) for w in l.split('\t')] for l in lines[:num_examples]]
return zip(*word_pairs)
en, sp = create_dataset(path_to_file, None)
print(en[-1])
print(sp[-1])
print(len(en), len(sp))
# Tokenize the sentence into list of words(integers) and pad the sequence to the same length
def tokenize(lang):
lang_tokenizer = tf.keras.preprocessing.text.Tokenizer(
filters='')
lang_tokenizer.fit_on_texts(lang)
tensor = lang_tokenizer.texts_to_sequences(lang)
tensor = tf.keras.preprocessing.sequence.pad_sequences(tensor,
padding='post')
return tensor, lang_tokenizer
def load_dataset(path, num_examples=None):
# creating cleaned input, output pairs
targ_lang, inp_lang = create_dataset(path, num_examples)
input_tensor, inp_lang_tokenizer = tokenize(inp_lang)
target_tensor, targ_lang_tokenizer = tokenize(targ_lang)
return input_tensor, target_tensor, inp_lang_tokenizer, targ_lang_tokenizer
# Try experimenting with the size of that dataset
num_examples = 30000
input_tensor, target_tensor, inp_lang, targ_lang = load_dataset(path_to_file, num_examples)
# Calculate max_length of the target and input tensors
max_length_targ, max_length_inp = target_tensor.shape[1], input_tensor.shape[1]
print(max_length_targ, max_length_inp)
# Creating training and validation sets using an 80-20 split
input_tensor_train, input_tensor_val, target_tensor_train, target_tensor_val = train_test_split(input_tensor, target_tensor, test_size=0.2)
# Show length
print(len(input_tensor_train), len(target_tensor_train), len(input_tensor_val), len(target_tensor_val))
print(input_tensor_train[0])
print(target_tensor_train[0])
```
# Create a tf.data datasest
The tf.data.Dataset API supports writing descriptive and efficient input pipelines. Dataset usage follows a common pattern:
* Create a source dataset from your input data.
* Apply dataset transformations to preprocess the data.
* Iterate over the dataset and process the elements.
Iteration happens in a streaming fashion, so the full dataset does not need to fit into memory.
```
# Configuration
BUFFER_SIZE = len(input_tensor_train)
BATCH_SIZE = 64
steps_per_epoch = len(input_tensor_train)//BATCH_SIZE
steps_per_epoch_val = len(input_tensor_val)//BATCH_SIZE
embedding_dim = 256 # for word embedding
units = 1024 # dimensionality of the output space of RNN
vocab_inp_size = len(inp_lang.word_index)+1
vocab_tar_size = len(targ_lang.word_index)+1
dataset = tf.data.Dataset.from_tensor_slices((input_tensor_train, target_tensor_train)).shuffle(BUFFER_SIZE)
dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
validation_dataset = tf.data.Dataset.from_tensor_slices((input_tensor_val, target_tensor_val)).shuffle(BUFFER_SIZE)
validation_dataset = validation_dataset.batch(BATCH_SIZE, drop_remainder=True)
example_input_batch, example_target_batch = next(iter(dataset))
example_input_batch.shape, example_target_batch.shape
```
# Basic seq2seq model: encoder and decoder
`tf.keras.Model` groups layers into an object with training and inference features. There are two ways to define a TF model:

Basic sequence to sequence model without attention:

```
class Encoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz):
super(Encoder, self).__init__()
self.batch_sz = batch_sz
self.enc_units = enc_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.enc_units,
return_sequences=True, # Whether to return the last output in the output sequence, or the full sequence.
return_state=True, # Whether to return the last state in addition to the output.
recurrent_initializer='glorot_uniform')
def call(self, x, hidden):
x = self.embedding(x)
output, state = self.gru(x, initial_state = hidden)
return output, state
def initialize_hidden_state(self):
return tf.zeros((self.batch_sz, self.enc_units))
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
# sample input
sample_hidden = encoder.initialize_hidden_state()
sample_output, sample_hidden = encoder(example_input_batch, sample_hidden)
print ('Encoder output shape: (batch size, sequence length, units) {}'.format(sample_output.shape))
print ('Encoder Hidden state shape: (batch size, units) {}'.format(sample_hidden.shape))
class Decoder(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz):
super(Decoder, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
def call(self, x, hidden):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state
tf.reshape([[1,2,3],[4,5,6]], (-1, 2))
decoder = Decoder(vocab_tar_size, embedding_dim, units, BATCH_SIZE)
sample_decoder_output, _ = decoder(tf.random.uniform((BATCH_SIZE, 1)),
sample_hidden)
print ('Decoder output shape: (batch_size, vocab size) {}'.format(sample_decoder_output.shape))
```
# Dot-product attention


```
class DotProductAttention(tf.keras.layers.Layer):
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
        # we are doing this to broadcast the multiplication along the time axis to calculate the score
query_with_time_axis = tf.expand_dims(query, 1)
# inner product, score shape == (batch_size, max_length, 1)
score = query_with_time_axis * values
score = tf.reduce_sum(score, axis=2)
score = tf.expand_dims(score, 2)
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
attention_layer = DotProductAttention()
attention_result, attention_weights = attention_layer(sample_hidden, sample_output)
print("Attention result shape: (batch size, units) {}".format(attention_result.shape))
print("Attention weights shape: (batch_size, sequence_length, 1) {}".format(attention_weights.shape))
```
# Additive attention

```
class BahdanauAttention(tf.keras.layers.Layer):
def __init__(self, units):
super(BahdanauAttention, self).__init__()
self.W1 = tf.keras.layers.Dense(units)
self.W2 = tf.keras.layers.Dense(units)
self.V = tf.keras.layers.Dense(1)
def call(self, query, values):
# query hidden state shape == (batch_size, hidden size)
# query_with_time_axis shape == (batch_size, 1, hidden size)
# values shape == (batch_size, max_len, hidden size)
query_with_time_axis = tf.expand_dims(query, 1)
# score shape == (batch_size, max_length, 1)
# we get 1 at the last axis because we are applying score to self.V
# the shape of the tensor before applying self.V is (batch_size, max_length, units)
score = self.V(tf.nn.tanh(
self.W1(values) + self.W2(query_with_time_axis)))
# attention_weights shape == (batch_size, max_length, 1)
attention_weights = tf.nn.softmax(score, axis=1)
# context_vector shape after sum == (batch_size, hidden_size)
context_vector = attention_weights * values
context_vector = tf.reduce_sum(context_vector, axis=1)
return context_vector, attention_weights
```
# Decoder layer with attention

```
class DecoderWithAttention(tf.keras.Model):
def __init__(self, vocab_size, embedding_dim, dec_units, batch_sz, attention_layer = None):
super(DecoderWithAttention, self).__init__()
self.batch_sz = batch_sz
self.dec_units = dec_units
self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
self.gru = tf.keras.layers.GRU(self.dec_units,
return_sequences=True,
return_state=True,
recurrent_initializer='glorot_uniform')
self.fc = tf.keras.layers.Dense(vocab_size)
# used for attention
self.attention = attention_layer
def call(self, x, hidden, enc_output):
# x shape after passing through embedding == (batch_size, 1, embedding_dim)
x = self.embedding(x)
attention_weights = None
if self.attention:
# enc_output shape == (batch_size, max_length, hidden_size)
context_vector, attention_weights = self.attention(hidden, enc_output)
# x shape after concatenation == (batch_size, 1, embedding_dim + hidden_size)
x = tf.concat([tf.expand_dims(context_vector, 1), x], axis=-1)
# passing the concatenated vector to the GRU
output, state = self.gru(x, initial_state = hidden)
# output shape == (batch_size * 1, hidden_size)
output = tf.reshape(output, (-1, output.shape[2]))
# output shape == (batch_size, vocab)
x = self.fc(output)
return x, state, attention_weights
```
# Define loss function
Cross-entropy loss, or log loss, measures the performance of a classification model whose output is a probability value between 0 and 1. Cross-entropy loss increases as the predicted probability diverges from the actual label.

```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True, reduction='none')
def loss_function(real, pred):
mask = tf.math.logical_not(tf.math.equal(real, 0))
loss_ = loss_object(real, pred)
mask = tf.cast(mask, dtype=loss_.dtype)
loss_ *= mask
return tf.reduce_mean(loss_)
print(loss_object([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
print(loss_function([1,2],[[0,0.6,0.3,0.1],[0,0.6,0.3,0.1]]))
```
# Training
@tf.function
In TensorFlow 2, eager execution is turned on by default. The user interface is intuitive and flexible (running one-off operations is much easier and faster), but this can come at the expense of performance and deployability. It is recommended to debug in eager mode, then decorate with @tf.function for better performance.
In TensorFlow 2.0, users should refactor their code into smaller functions which are called as needed. In general, it's not necessary to decorate each of these smaller functions with tf.function; only use tf.function to decorate high-level computations - for example, one step of training, or the forward pass of your model.
TensorFlow provides the tf.GradientTape API for automatic differentiation; that is, computing the gradient of a computation with respect to some inputs, usually tf.Variables. TensorFlow "records" relevant operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then uses that tape to compute the gradients of a "recorded" computation using reverse mode differentiation.
```
optimizer = tf.keras.optimizers.Adam()
def get_train_step_func():
@tf.function
def train_step(inp, targ, enc_hidden, encoder, decoder):
loss = 0
with tf.GradientTape() as tape: # for automatic differentiation
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
# passing enc_output to the decoder
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
# using teacher forcing
dec_input = tf.expand_dims(targ[:, t], 1)
batch_loss = (loss / int(targ.shape[1]))
variables = encoder.trainable_variables + decoder.trainable_variables
gradients = tape.gradient(loss, variables)
optimizer.apply_gradients(zip(gradients, variables))
return batch_loss
return train_step
def calculate_validation_loss(inp, targ, enc_hidden, encoder, decoder):
loss = 0
enc_output, enc_hidden = encoder(inp, enc_hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']] * BATCH_SIZE, 1)
# Teacher forcing - feeding the target as the next input
for t in range(1, targ.shape[1]):
predictions, dec_hidden, _ = decoder(dec_input, dec_hidden, enc_output)
loss += loss_function(targ[:, t], predictions)
dec_input = tf.expand_dims(targ[:, t], 1)
loss = loss / int(targ.shape[1])
return loss
def training_seq2seq(epochs, attention):
encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE)
decoder = DecoderWithAttention(vocab_tar_size, embedding_dim, units, BATCH_SIZE, attention)
train_step_func = get_train_step_func()
training_loss = []
validation_loss = []
for epoch in range(epochs):
start = time.time()
enc_hidden = encoder.initialize_hidden_state()
total_loss = 0
for (batch, (inp, targ)) in enumerate(dataset.take(steps_per_epoch)):
batch_loss = train_step_func(inp, targ, enc_hidden, encoder, decoder)
total_loss += batch_loss
if batch % 100 == 0:
print('Epoch {} Batch {} Loss {:.4f}'.format(epoch + 1, batch, batch_loss))
enc_hidden = encoder.initialize_hidden_state()
total_val_loss = 0
        for (batch, (inp, targ)) in enumerate(validation_dataset.take(steps_per_epoch_val)):
            val_loss = calculate_validation_loss(inp, targ, enc_hidden, encoder, decoder)
total_val_loss += val_loss
training_loss.append(total_loss / steps_per_epoch)
validation_loss.append(total_val_loss / steps_per_epoch_val)
print('Epoch {} Loss {:.4f} Validation Loss {:.4f}'.format(epoch + 1,
training_loss[-1], validation_loss[-1]))
print('Time taken for 1 epoch {} sec\n'.format(time.time() - start))
return encoder, decoder, training_loss, validation_loss
```
## Training seq2seq without attention
```
epochs = 10
attention = None
print("Running seq2seq model without attention")
encoder, decoder, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = training_loss
vloss = validation_loss
```
## Training seq2seq with dot product attention
```
attention = DotProductAttention()
print("Running seq2seq model with dot product attention")
encoder_dp, decoder_dp, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
```
## Training seq2seq with Bahdanau attention
```
epochs = 10
attention = BahdanauAttention(units)
print("Running seq2seq model with Bahdanau attention")
encoder_bah, decoder_bah, training_loss, validation_loss = training_seq2seq(epochs, attention)
tloss = np.vstack((tloss, training_loss))
vloss = np.vstack((vloss, validation_loss))
import matplotlib.pyplot as plt
ax = plt.subplot(111)
t = np.arange(1, epochs+1)
for i in range(0, vloss.shape[0]):
line, = plt.plot(t, vloss[i,:], lw=2)
ax.legend(('No attention', 'Dot product', 'Bahdanau'))
ax.set_title("Validation loss")
```
# Translation
```
def translate(sentence, encoder, decoder):
attention_plot = np.zeros((max_length_targ, max_length_inp))
sentence = preprocess_sentence(sentence)
inputs = [inp_lang.word_index[i] for i in sentence.split(' ')]
inputs = tf.keras.preprocessing.sequence.pad_sequences([inputs],
maxlen=max_length_inp,
padding='post')
inputs = tf.convert_to_tensor(inputs)
result = ''
hidden = [tf.zeros((1, units))]
enc_out, enc_hidden = encoder(inputs, hidden)
dec_hidden = enc_hidden
dec_input = tf.expand_dims([targ_lang.word_index['<start>']], 0)
for t in range(max_length_targ):
predictions, dec_hidden, attention_weights = decoder(dec_input,
dec_hidden,
enc_out)
predicted_id = tf.argmax(predictions[0]).numpy()
result += targ_lang.index_word[predicted_id] + ' '
# until the predicted word is <end>.
if targ_lang.index_word[predicted_id] == '<end>':
return result, sentence
# the predicted ID is fed back into the model, no teacher forcing.
dec_input = tf.expand_dims([predicted_id], 0)
return result, sentence
result, sentence = translate(u'esta es mi vida.', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'esta es mi vida.', encoder_dp, decoder_dp)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
result, sentence = translate(u'¿todavia estan en casa?', encoder_bah, decoder_bah)
print('Input: %s' % (sentence))
print('Predicted translation: {}'.format(result))
```
# Next Steps
* Training on larger dataset
* Model tuning
* Try out other attention scores such as multiplicative (see the sketch below)
* Train on other seq2seq tasks
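For the multiplicative-attention item above, a minimal sketch of a general (Luong multiplicative) attention layer is given below. It is an illustrative sketch, not part of the original tutorial: it reuses the `(query, values)` interface of `DotProductAttention` and `BahdanauAttention`, and assumes `units` equals the encoder hidden size so the broadcasted product is well defined.
```
# A minimal sketch (not from the original tutorial): general/multiplicative attention,
# score = h_t^T W h_s, with the same (query, values) interface as the layers above.
class MultiplicativeAttention(tf.keras.layers.Layer):
    def __init__(self, units):
        super(MultiplicativeAttention, self).__init__()
        self.W = tf.keras.layers.Dense(units)  # assumes units == encoder hidden size

    def call(self, query, values):
        # query shape == (batch_size, hidden_size)
        # values shape == (batch_size, max_len, hidden_size)
        query_with_time_axis = tf.expand_dims(query, 1)
        # project the encoder outputs, then take the inner product with the query
        # score shape == (batch_size, max_length, 1)
        score = tf.reduce_sum(query_with_time_axis * self.W(values), axis=2, keepdims=True)
        attention_weights = tf.nn.softmax(score, axis=1)
        # context_vector shape after sum == (batch_size, hidden_size)
        context_vector = tf.reduce_sum(attention_weights * values, axis=1)
        return context_vector, attention_weights

# Possible usage, mirroring the experiments above:
# encoder_mul, decoder_mul, tl_mul, vl_mul = training_seq2seq(epochs, MultiplicativeAttention(units))
```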
---
# QCoDeS Example with DynaCool PPMS
This notebook explains how to control the DynaCool PPMS from QCoDeS.
For this setup to work, the proprietary `PPMS Dynacool` application (or, alternatively `Simulate PPMS Dynacool`) must be running on some PC. On that same PC, the `server.py` script (found in `qcodes/instrument_drivers/QuantumDesign/DynaCoolPPMS/private`) must be running. The script can be run from the command line with no arguments and will run under python 3.6+.
The architecture is as follows:
The QCoDeS driver sends strings via VISA to the server, which passes those same strings on to the `CommandHandler` (found in `qcodes/instrument_drivers/QuantumDesign/DynaCoolPPMS/commandhandler`). The `CommandHandler` makes the calls into the proprietary API. The QCoDeS driver can thus be called on any machine that can communicate with the machine hosting the server.
Apart from that, the driver is really simple. For this notebook, we used the `Simulate PPMS Dynacool` application running on the same machine as QCoDeS.
```
%matplotlib notebook
from qcodes.instrument_drivers.QuantumDesign.DynaCoolPPMS.DynaCool import DynaCool
```
To instantiate the driver, simply provide the address and port in the standard VISA format.
The connect message is not too pretty, but there does not seem to be a way to query serial and firmware versions.
```
dynacool = DynaCool('dynacool', address="TCPIP0::127.0.0.1::5000::SOCKET")
```
To get an overview over all available parameters, use `print_readable_snapshot`.
A value of "Not available" means (for this driver) that the parameter has been deprecated.
```
dynacool.print_readable_snapshot(update=True)
```
## Temperature Control
As soon as ANY of the temperature rate, the temperature setpoint, or the temperature settling mode parameters has been set, the system will start moving to the given temperature setpoint at the given rate using the given settling mode.
The system can continuously be queried for its temperature.
```
from time import sleep
import matplotlib.pyplot as plt
import numpy as np
# example 1
dynacool.temperature_rate(0.1)
dynacool.temperature_setpoint(dynacool.temperature() - 1.3)
temps = []
while dynacool.temperature_state() == 'tracking':
temp = dynacool.temperature()
temps.append(temp)
sleep(0.75)
print(f'Temperature is now {temp} K')
plt.figure()
timeax = np.linspace(0, len(temps)*0.75, len(temps))  # 0.75 s between samples, matching the sleep above
plt.plot(timeax, temps, '-o')
plt.xlabel('Time (s)')
plt.ylabel('Temperature (K)')
```
## Field Control
The field has **five** related parameters:
- `field_measured`: (read-only) the field strength right now
- `field_target`: the target field that the `ramp` method will ramp to when called. Setting this parameter does **not** trigger a ramp
- `field_rate`: the field ramp rate. NB: setting this parameter **will** trigger a ramp
- `field_approach`: the approach the system should use to ramp. NB: setting this parameter **will** trigger a ramp
- `field_ramp`: this is a convenience parameter that sets the target and then triggers a blocking ramp.
The idea is that the user first sets the `field_target` and then ramps the field to that target using the `ramp` method. The ramp method takes a `mode` argument that controls whether the ramp is blocking or non-blocking.
Using the simulation software, the field change is instantaneous irrespective of rate. We nevertheless include two examples of ramping here.
### A blocking ramp
First, we set a field target:
```
field_now = dynacool.field_measured()
target = field_now + 1
dynacool.field_target(target)
```
Note that the field has not changed yet:
```
assert dynacool.field_measured() == field_now
```
And now we ramp:
```
dynacool.ramp(mode='blocking')
```
The ramping will take some finite time on a real instrument. The field value is now at the target field:
```
print(f'Field value: {dynacool.field_measured()} T')
print(f'Field target: {dynacool.field_target()} T')
```
### A non-blocking ramp
The non-blocking ramp is very similar to the blocking ramp.
```
field_now = dynacool.field_measured()
target = field_now - 0.5
dynacool.field_target(target)
assert dynacool.field_measured() == field_now
dynacool.ramp(mode='non-blocking')
# Here you can do stuff while the magnet ramps
print(f'Field value: {dynacool.field_measured()} T')
print(f'Field target: {dynacool.field_target()} T')
```
### Using the `field_ramp` parameter
The `field_ramp` parameter sets the target field and triggers a ramp when it is set.
```
print(f'Now the field is {dynacool.field_measured()} T...')
print(f'...and the field target is {dynacool.field_target()} T.')
dynacool.field_ramp(1)
print(f'Now the field is {dynacool.field_measured()} T...')
print(f'...and the field target is {dynacool.field_target()} T.')
```
---
```
import json
import os
import random
import re
from itertools import product
import numpy as np
import pandas as pd
from more_itertools import distinct_combinations
from plotnine import *
from sklearn import feature_extraction, metrics
ROOT_PATH = os.path.dirname(os.path.abspath(os.getcwd()))
def inspect_df(df: pd.DataFrame, n : int=5) -> pd.DataFrame:
"""Helper method to easily inspect DataFrames."""
print(f'shape: {df.shape}')
return df.head(n)
```
# Table of Contents
- [Exploratory Data Analysis](#Exploratory-Data-Analysis)
- [A Baseline Model: random classifier](#A-Baseline-Model:-random-classifier)
- [A Better Baseline Model: \<page title\> similarity](#A-Better-Baseline-Model:-<page-title>-similarity)
- [Feature Extraction](#Feature-Extraction)
# Exploratory Data Analysis
```
def json_loader(dirpath: str) -> list:
"""Discover all .json files and gather their respective data, given a `dirpath`.
"""
data = []
for subdir in os.listdir(dirpath):
temp = os.path.join(dirpath, subdir)
for datafile in os.listdir(temp):
with open(os.path.join(temp, datafile), 'r') as f:
spec = json.loads(f.read())
# keep global identifier, format it as in the labelled dataset
spec['id'] = subdir + '//' + datafile.split('.json')[0]
data.append(spec)
return data
data = json_loader(dirpath=os.path.join(ROOT_PATH, 'data/2013_camera_specs'))
specs = pd.DataFrame(data)
inspect_df(specs)
specs.set_index('id', inplace=True)
labels = pd.read_csv(os.path.join(ROOT_PATH, 'data/sigmod_medium_labelled_dataset.csv'))
inspect_df(labels)
matched_products = labels['label'] == 1
matched_products.value_counts()
ggplot() + \
geom_bar(mapping=aes(x=matched_products), colour='white') + \
labs(title='same products ?', x='') + \
coord_flip()
specs_info = specs.describe()
specs_info = specs_info.transpose()
inspect_df(specs_info)
specs_info['support'] = specs_info['count'] / len(specs.index)
specs_info = specs_info.sort_values(by='support', ascending=False)
specs_info.head(10)
top10 = list(specs_info.head(10).index)
```
These are the 10 camera specs (attributes) with the highest support.
```
specs[top10]
```
# A Baseline Model: random classifier
```
inspect_df(labels)
def random_classifier(*args):
"""A random classifier.
    Returns: True or False (i.e. whether the products are the same)
"""
return random.random() > 0.5
predictions = labels.apply(random_classifier, axis=1)
metrics.accuracy_score(predictions, labels['label'])
metrics.precision_score(predictions, labels['label'])
metrics.recall_score(predictions, labels['label'])
metrics.f1_score(predictions, labels['label'])
```
This is a good indication of the model performance: **f1 = 0.1337**
```
metrics.confusion_matrix(predictions, labels['label'])
```
This is the initial, baseline performance. Our model should easily outperform this random classifier.
# A Better Baseline Model: \<page title\> similarity
```
ggplot() + \
geom_histogram(mapping=aes(x=specs['<page title>'].map(len)), colour='white', bins=30) + \
xlab('<page title>: no. of characters ')
ggplot() + \
geom_histogram(mapping=aes(x=specs['<page title>'].map(lambda title: len(title.split()))), colour='white', bins=30) + \
xlab('<page title>: no. of words')
```
We will use a BoW model + a text similarity algorithm + a suitable threshold in order to assert whether two cameras are the same.
```
def get_corpus(data: pd.DataFrame) -> np.ndarray:
return data['<page title>'].values
vectorizer = feature_extraction.text.CountVectorizer()
vectorizer.fit(get_corpus(specs))
def create_dataset(data: pd.DataFrame, labels: pd.DataFrame, features: list):
"""Helper method that creates a dataset.
"""
left_part = pd.merge(labels, data[features], how='inner', left_on='left_spec_id', right_on='id')
right_part = pd.merge(labels, data[features], how='inner', left_on='right_spec_id', right_on='id')
dataset = pd.merge(left_part, right_part, how='inner', on=('left_spec_id', 'right_spec_id'),
suffixes=('_left', '_right'))
dataset['label'] = dataset['label_left']
dataset.drop(['label_left', 'label_right'], axis=1, inplace=True)
dataset.set_index(['left_spec_id', 'right_spec_id'], inplace=True)
return dataset
X = create_dataset(data=specs, labels=labels, features=top10[0])
inspect_df(X)
def pagetitle_similarity(title1: str, title2: str) -> float:
vec1 = vectorizer.transform([title1])
vec2 = vectorizer.transform([title2])
return metrics.pairwise.cosine_similarity(vec1, vec2).take(0)
X['similarity'] = X[['<page title>_left', '<page title>_right']].apply(lambda x: pagetitle_similarity(x[0], x[1]), axis=1)
X[X['label'] == 1]['similarity'].mean()
X[X['label'] == 0]['similarity'].mean()
X['predictions'] = X['similarity'].map(lambda score: score > 0.5)
metrics.accuracy_score(X['predictions'], X['label'])
metrics.precision_score(X['predictions'], X['label'])
metrics.recall_score(X['predictions'], X['label'])
metrics.f1_score(X['predictions'], X['label'])
```
With this model we improved: **f1 = 0.3843**
```
metrics.confusion_matrix(X['predictions'], X['label'])
```
We can definitely improve over this by a better choice of text embeddings or similarity algorithm.
But, all things considered, any approach that relies on the notion of similarity between page titles is unlikely to drastically improve on the 0.38 F1 score.
It is time to proceed with an ML approach.
# Feature Extraction
```
MAX_PRODUCTS = 1000
camera_pairs = list(distinct_combinations(specs.index[:MAX_PRODUCTS], 2))
inspect_df(specs[top10])
```
### brand
```
for brand in specs[specs['brand'].notna()]['brand'].tolist():
if not isinstance(brand, str):
print(brand)
def get_brand(value: str) -> str:
if isinstance(value, str):
return value
try:
brands = sorted(value, key=len, reverse=True)
return brands[0]
except (KeyError, TypeError):
return None
specs['brand'] = specs['brand'].map(get_brand)
specs['brand'].value_counts()[:40]
```
### model
```
for model in specs[specs['model'].notna()]['model'].tolist():
if not isinstance(model, str):
print(model)
def get_model(value: str) -> str:
if isinstance(value, str):
return value
try:
models = sorted(value, key=len, reverse=True)
return models[0]
except (KeyError, TypeError):
return None
specs['model'] = specs['model'].map(get_model)
```
### megapixels
```
for mp in specs[specs['megapixels'].notna()]['megapixels'].tolist():
if not isinstance(mp, str):
print(mp)
def extract_number(value: str) -> float:
    match = re.search(r'\d{0,2}(\.\d)?', value)
    try:
        return float(match.group(0)) if match else None
    except ValueError:
        return None
def get_megapixels(value: str) -> float:
if isinstance(value, str):
return extract_number(value)
try:
mps = sorted(value, key=len, reverse=True)
return extract_number(mps[0])
except (KeyError, TypeError):
return None
specs['megapixels'] = specs['megapixels'].map(get_megapixels)
specs['megapixels'] = pd.to_numeric(specs['megapixels'])
specs['megapixels'].value_counts()
```
### type
```
for ctype in specs[specs['type'].notna()]['type'].tolist():
if not isinstance(ctype, str):
print(ctype)
def get_type(value: str) -> str:
if isinstance(value, str):
return value
try:
types = sorted(value, key=len, reverse=True)
return types[0]
except (KeyError, TypeError):
return None
specs['type'] = specs['type'].map(get_type)
specs['type'].value_counts()[0:40]
```
```
# default_exp checker
```
# Dependency Checker
> A pragmatic way to talk with pypi and find out what dependencies are out of date
```
#hide
from nbverbose.showdoc import *
```
## Dependency Traversing
Sometimes, we may want to check the current installed versions of a project's basic dependencies, and further check if those dependencies are out of date. `dependency_checker` is designed around this concept, utilizing the `pipdeptree` library.
```
#export
import json, ast, pipdeptree, sys, subprocess
#export
def get_installed_dependencies(
package_name:str, # The name of a python package
depth_limit:int=1, # How deep to follow nested dependencies
include_self:bool=False, # Whether to include the original library in the results
) -> dict: # A dictionary of {package:version}
"Recursively grabs dependencies of python package"
pkgs = pipdeptree.get_installed_distributions(local_only=False, user_only=False)
tree = pipdeptree.PackageDAG.from_pkgs(pkgs)
tree = tree.filter([package_name], None)
curr_depth=0
def _get_deps(j, dep_dict={}, curr_depth=0):
if curr_depth > depth_limit: return dep_dict
if isinstance(j, list):
for a in j:
_get_deps(a, dep_dict, curr_depth)
elif isinstance(j, dict):
if 'package_name' in j.keys():
if j['package_name'] not in dep_dict.keys():
dep_dict[j['package_name']] = j['installed_version']
if 'dependencies' in j.keys():
curr_depth += 1
return _get_deps(j['dependencies'], dep_dict, curr_depth)
return dep_dict
deps = _get_deps(ast.literal_eval(pipdeptree.render_json_tree(tree, 4)), {})
if not include_self: deps.pop(package_name, None)
return deps
```
This function operates by traversing a DAG and grabbing dependencies of projects found from it. Generally a depth of 1 is recommended, below is a quick guide to what will be returned at each depth.
**0**: A depth of zero will return an empty dictionary unless `include_self` is `True`. If so, it will include only the library itself:
```
deps = get_installed_dependencies('pipdeptree', depth_limit=0)
assert deps == {}
deps = get_installed_dependencies('pipdeptree', depth_limit=0, include_self=True)
assert deps == {'pipdeptree':'2.1.0'}
```
**1**: A depth of one will return the project and its main dependencies (if `include_self` is `True`), such as those stated in the `requirements.txt`, as well as packages such as `pip`:
```
deps = get_installed_dependencies('pipdeptree', depth_limit=1, include_self=True)
assert len(deps.keys()) == 2
assert all(package in deps.keys() for package in ('pipdeptree', 'pip'))
deps = get_installed_dependencies('pipdeptree', depth_limit=1, include_self=False)
assert len(deps.keys()) == 1
assert 'pip' in deps.keys()
```
**2+**: A depth of two or greater will return the dependencies for each of the dependencies above that layer. These allow for more fine-grained requirements.
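For example, a depth-2 call looks like the cell below. This cell is not part of the original tests: the exact packages returned depend on the local environment, so we only check the type of the result rather than asserting specific names.
```
deps = get_installed_dependencies('pipdeptree', depth_limit=2, include_self=True)
assert isinstance(deps, dict)
deps
```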
## Checking for New Versions
Given these dependencies, we can also then check for a new version to see if an upgrade is available. This is what the `is_latest_version` function is designed for:
```
#export
def is_latest_version(
package_name:str, # The name of a pip python package
current_version:str, # The installed version of a package, such as "1.2.3"
) -> bool: # Whether the versions are the same
"Compares the current version with the latest version, and returns if they are different"
latest_version = str(subprocess.run([sys.executable, '-m', 'pip', 'install', '{}==random'.format(package_name)], capture_output=True, text=True))
latest_version = latest_version[latest_version.find('(from versions:')+15:]
latest_version = latest_version[:latest_version.find(')')]
latest_version = latest_version.replace(' ','').split(',')[-1]
if latest_version == current_version:
return True
else:
return False
using_latest_version = is_latest_version('pipdeptree', '2.0.9')
assert using_latest_version == False
```
Here we tested if `pipdeptree` is the latest version. The version we specified is one less than that of the latest release at the time of development. We got `False`, meaning a newer version is available.
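As a small usage sketch (not part of the library itself), the two helpers can be combined to list which direct dependencies of a package have a newer release available:
```
deps = get_installed_dependencies('pipdeptree', depth_limit=1)
outdated = [pkg for pkg, version in deps.items() if not is_latest_version(pkg, version)]
outdated
```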
SPARQL Transformer evaluation
=========================
This notebook contains some quantitative measures for the evaluation of SPARQL Transformer.
```
import json
import os
import time
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from ipywidgets import FloatProgress
from IPython.display import display
from SPARQLWrapper import SPARQLWrapper, JSON
from SPARQLTransformer import sparqlTransformer
input_folder = './sparql'
ENDPOINT = 'http://0.0.0.0:7790/sparql'
# ENDPOINT = 'http://dbpedia.org/sparql'
json_queries_files = list(filter(lambda x: x.endswith('.json'), os.listdir(input_folder)))
json_queries_files.sort()
rq_queries_files = [f.replace('.json', '.rq') for f in json_queries_files]
json_queries = [json.load(open('%s/%s' % (input_folder, f), 'r')) for f in json_queries_files]
rq_queries = [open('%s/%s' % (input_folder, f), 'r').read() for f in rq_queries_files]
json_queries_files
```
The test queries have been taken from the __[DBpedia wiki](https://wiki.dbpedia.org/OnlineAccess)__.
Those SELECT queries have been manually converted into JSON queries, making sure that the transformed query was equal to the original one (apart from variable names).
The following table shows, for each query:
- `n vars`, how many variables are selected
- `levels`, how many levels are present in the JSON prototype, where `1` refers to a flat object (all properties attached to the root) and `2` to an object with one level of nesting
- `features` included in the query
| name | n vars | levels | features |
|--------------------------|--------|--------|----------------------|
|1.Born_in_Berlin | 4 | 1 | filter, orderby |
|2.German_musicians | 4 | 1 | lang filter, optional|
|3.Musicians_born_in_Berlin| 4 | 1 | lang filter |
|4.Soccer_players | 5 | 2 | filter, orderby |
|5.Games | 2 | 1 | orderby |
Functions for executing the query and returning the bindings.
- For JSON queries, we use **SPARQLTransformer**.
- For SPARQL queries, we use **SPARQLWrapper** (which is also internally used by SPARQLTransformer).
```
def sparql_exec(query):
sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
result = sparql.query().convert()
return result["results"]["bindings"]
def json_exec(query, debug=False):
return sparqlTransformer(query, {'endpoint': ENDPOINT, 'debug': debug})
```
Functions for running the test for a particular query (sparql or json).
The test measures the **execution time** of the query (including any parsing task) and the **number of results**.
```
def test_atom(query, typ='sparql'):
start = time.time()
if typ == 'sparql':
r = sparql_exec(query)
else:
r = json_exec(query)
end = time.time()
timing = end - start
return len(r), timing
```
We will execute the test multiple times for each query, to obtain an average result that is as uncorrelated as possible with the network/server workload.
In particular, each test is executed `num_iteration` times, with consecutive iterations separated by `sleep_time` seconds.
```
num_iteration = 100
sleep_time = 5
def mean_without_outliers(x):
df = pd.DataFrame(x)
Q1 = df.quantile(0.25)
Q3 = df.quantile(0.75)
IQR = Q3 - Q1
    return float(df[(df >= Q1-1.5*IQR) & (df <= Q3+1.5*IQR)].mean())
test_results = []
all_timings = []
for i, json_query in enumerate(json_queries):
# queries
json_query = json_queries[i]
rq_query = rq_queries[i]
title = rq_queries_files[i].replace('.rq', '')
print(title)
# progress bars
fs = FloatProgress(min=0, max=num_iteration, description='SPARQL test:')
display(fs)
fj = FloatProgress(min=0, max=num_iteration, description='JSON test:')
display(fj)
sparql_time = []
sparql_results = 0
json_time = []
json_results = 0
for j in np.arange(num_iteration):
if (i + j) > 0 :
time.sleep(sleep_time)
sparql_results, t = test_atom(rq_query, typ='sparql')
sparql_time.append(t)
fs.value += 1
for j in np.arange(num_iteration):
time.sleep(sleep_time)
json_results, t = test_atom(json_query, typ='json')
json_time.append(t)
fj.value += 1
ts = np.mean(sparql_time)
tj = np.mean(json_time)
time_diff = (tj - ts)
time_diff_percent = 100 * time_diff / np.mean([ts,tj])
test_results.append({
'name': title,
'time_sparql': ts,
'result_sparql': sparql_results,
'time_json': tj ,
'result_json': json_results,
'time_diff': '{0:.2g}'.format(time_diff),
'time_diff_percent': '{0:.2g}%'.format(time_diff_percent)
});
all_timings.append({
'name': title,
'json': json_time,
'sparql': sparql_time
})
```
These plots show that, over the whole test, some queries took much longer to execute. The **outliers** are clearly visible as dots.
When computing the mean, we excluded all the outliers, where an outlier is a value lying outside the interquartile-range fences (see [definition](https://www.purplemath.com/modules/boxwhisk3.htm)).
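For reference, the fences used by `mean_without_outliers` above are the usual Tukey fences:
$$\left[\,Q_1 - 1.5\,\mathrm{IQR},\; Q_3 + 1.5\,\mathrm{IQR}\,\right], \qquad \mathrm{IQR} = Q_3 - Q_1$$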
```
for i, json_query in enumerate(json_queries):
tim = all_timings[i]
a = np.array([np.hstack(tim['sparql']), np.hstack(tim['json'])]).transpose()
df = pd.DataFrame(a, columns=['SPARQL', 'JSON'])
bp = df.boxplot(vert=False, figsize=(16,4))
fig = np.asarray(bp).reshape(-1)[0].get_figure()
fig.suptitle(tim['name'])
plt.show()
pd.DataFrame.from_dict(test_results)
```
The table gives us two different pieces of information.
#### Time difference
The execution time of JSON queries (`time_json`) is quite close to that of SPARQL ones (`time_sparql`). The absolute difference (`time_diff`) never exceeds a few hundredths of a second.
#### Result difference
The number of results (bindings) returned by SPARQL Transformer (`result_json`) is always lower than the number returned by the endpoint (`result_sparql`). This is because the latter represents all the combinations of values as distinct bindings, while the former aggregates the results with the same id.
### Example of result for `1.Born_in_Berlin`.
An interesting case is the 2nd result, about [Prince Adalbert of Prussia](http://dbpedia.org/resource/Prince_Adalbert_of_Prussia_(1811–1873)), which has 4 names and 2 differently formatted death dates. This is represented with 4 * 2 = 8 bindings, which SPARQL Transformer then merges into a single object.
```
# SPARQL query
sparql_exec(rq_queries[0])[1:9]
# JSON query
json_exec(json_queries[0])[1]
test_results
```
```
#Import the necessary methods from tweepy library
from tweepy.streaming import StreamListener
from tweepy import OAuthHandler
from tweepy import Stream
#Variables that contains the user credentials to access Twitter API
access_token = "your_access_token"
access_token_secret = "your_access_secret_token"
consumer_key = "your_consumer_key"
consumer_secret = "your_consumer_secret"
#This is a basic listener that just prints received tweets to stdout.
class StdOutListener(StreamListener):
def on_data(self, data):
print(data)
return True
def on_error(self, status):
print(status)
if __name__ == '__main__':
    #This handles Twitter authentication and the connection to Twitter Streaming API
l = StdOutListener()
auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_token_secret)
stream = Stream(auth, l)
    #This line filters Twitter Streams to capture data matching the keywords: 'honda'
stream.filter(track=['honda','Honda','HONDA'])
```
### Then, from your terminal, execute this script with output piped to a text file: `python your_script.py > tweets_data.txt`
# Then run this script below to create a Python dataframe of the tweets data
```
%matplotlib inline
import json
import string
import pandas as pd
import matplotlib.pyplot as plt
from os import path
pd.set_option("display.max_rows",1000)
pd.set_option("display.max_columns",20)
pd.set_option("display.max_colwidth",150)
d = path.dirname('/home/pybokeh/temp/')
tweets_data = []
tweets_file = open(path.join(d, 'cancer_tweets_data.txt'),'r')
for line in tweets_file:
try:
tweet = json.loads(line)
tweets_data.append(tweet)
except:
continue
print(len(tweets_data))
tweets = pd.DataFrame()
tweets['text'] = [tweet['text'] for tweet in tweets_data]
tweets['lang'] = [tweet['lang'] for tweet in tweets_data]
tweets['retweeted'] = [tweet['retweeted'] for tweet in tweets_data]
tweets.head()
english_tweets = tweets[(tweets['lang']=='en') & (tweets['retweeted']==False)]
english_tweets.drop_duplicates(subset='text');
text = ''
for line in english_tweets['text']:
text = text + ' ' + line
text = text.replace("'s",'')
%matplotlib inline
from os import path
from scipy.misc import imread
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS
d = path.dirname('/home/pybokeh/Downloads/')
# Read the whole text.
#text = strWords
#text = open(path.join(d, 'alice.txt')).read()
additional_words = [
'rt',
'ebay',
'co',
't',
'amp',
'https'
]
for word in additional_words:
STOPWORDS.add(word)
# read the mask image
# taken from
# http://www.stencilry.org/stencils/movies/alice%20in%20wonderland/255fk.jpg
#honda_mask = imread(path.join(d, "honda_logo_mask.png"), flatten=True)
#wc = WordCloud(background_color="black", max_words=2000, mask=honda_mask, stopwords=STOPWORDS)
# generate word cloud
wc = WordCloud(width=800, height=600).generate(text)
# store to file
wc.to_file(path.join(d, "cancer_word_cloud.png"))
# show
plt.imshow(wc)
plt.axis("off")
#plt.figure()
#plt.imshow(honda_mask, cmap=plt.cm.gray)
#plt.axis("off")
plt.show()
prevent = tweets[(tweets['text'].str.contains('food')) | (tweets['text'].str.contains('nutrient'))]
prevent['text']
wc.process_text(text)[:50]
STOPWORDS
```
# hello paddle: From an Ordinary Program to a Machine Learning Program
**Author:** [PaddlePaddle](https://github.com/PaddlePaddle) <br>
**Date:** 2021.12 <br>
**Abstract:** This example introduces the difference between an ordinary program and a machine learning program, and walks you through implementing your first machine learning program with the PaddlePaddle framework.
## 1. The Logical Difference Between an Ordinary Program and a Machine Learning Program
As a developer, the most familiar way to start learning a programming language or a deep learning framework is probably through a hello world program.
Learning PaddlePaddle can work the same way: this short tutorial uses a very simple example to show you how to get started with the framework.
The biggest difference between a machine learning program and an ordinary program is this: an ordinary program produces its result from a given input by telling the computer the rules for processing the data, whereas a machine learning program does not know these rules in advance and instead lets the machine **learn** them from the data.
As a warm-up, let's first look at what an ordinary program does.
Consider the following task:
When taking a taxi there is a base fare of 10 yuan, charged as soon as you get in. For every kilometre driven, an additional fee of 2 yuan per kilometre is charged. When a passenger finishes the ride, the meter has to compute the fare the passenger needs to pay.
Implemented in Python, it looks like this:
```
def calculate_fee(distance_travelled):
return 10 + 2 * distance_travelled
for x in [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]:
print(calculate_fee(x))
```
Now let's change the problem slightly. Suppose we know how many kilometres each passenger travelled on each taxi ride, and we also know the total fare each passenger paid the driver at the end of the ride, but we do not know the base fare or the per-kilometre charge. We want the machine to learn, from this data, the rule for computing the total fare.
More concretely, we want the machine learning program to learn, from the data, the parameters `w` and `b` in the formula below (this is a very simple example, so `w` and `b` are just floats; as you learn more about deep learning you will find that `w` and `b` are usually matrices and vectors). Then, the next time we take a ride and know the distance travelled, `distance_travelled`, we can estimate the passenger's total fare, `total_fee`.
```
total_fee = w * distance_travelled + b
```
Next, let's see how to implement this hello-world-level machine learning program with PaddlePaddle.
## 2. Importing PaddlePaddle
To use PaddlePaddle, we first need to import the `paddle` package with Python's `import` statement.
To make it easier to compute with and process arrays, we also import `numpy`.
If you are running this notebook on your own machine and have not installed PaddlePaddle yet, please first follow the official [installation guide](https://www.paddlepaddle.org.cn/install/quick) and install Paddle 2.2.0.
```
import paddle
print("paddle " + paddle.__version__)
```
## 3. Preparing the Data
In this machine learning task, we already know the passengers' travelled distances, `distance_travelled`, and the corresponding total fares, `total_fee`.
In machine learning tasks, an input value like `distance_travelled` is usually called `x` (or a feature), and an output value like `total_fee` is usually called `y` (or a label).
We can use `paddle.to_tensor` to convert the example data into Paddle Tensors.
```
x_data = paddle.to_tensor([[1.], [3.0], [5.0], [9.0], [10.0], [20.0]])
y_data = paddle.to_tensor([[12.], [16.0], [20.0], [28.0], [30.0], [50.0]])
```
## 4. Defining the Model's Computation with Paddle
Defining the model's computation with Paddle essentially means using Python, through the APIs Paddle provides, to tell Paddle the computation rules. To recap, we want to use machine learning with Paddle to learn the `w` and `b` in the formula below from the data, so that in the future, given `x`, we can estimate `y` (the estimated `y` is written `y_predict`).
```
y_predict = w * x + b
```
We will use Paddle's linear transformation layer, `paddle.nn.Linear`, to implement this computation. The variables `x, y, w, b, y_predict` in the formula correspond to Paddle's [Tensor concept](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/tensor.html).
**A small note**
In this example we already know from experience that the relationship between `distance_travelled` and `total_fee` is linear. In more realistic problems, the relationship between `x` and `y` is usually nonlinear, which is why more varied and more complex neural networks are needed. (For example, BMI is not a linear function of your height, and the value of a pixel in an image is not linearly related to whether the image shows a cat or a dog.)
```
linear = paddle.nn.Linear(in_features=1, out_features=1)
```
## 5. Getting Ready to Run Paddle
At the start, the machine (computer) just guesses `w` and `b` at random; let's first see how good the guess is. You should see that `w` is a random value and `b` is 0.0. This is Paddle's initialization strategy, and a common one in this field. (If you like, you can also use other initialization methods; later on you will see that choosing a good initialization strategy is an important part of doing deep learning well.)
```
w_before_opt = linear.weight.numpy().item()
b_before_opt = linear.bias.numpy().item()
print("w before optimize: {}".format(w_before_opt))
print("b before optimize: {}".format(b_before_opt))
```
## 6. Telling Paddle How to Learn
We have now defined the neural network (even though it is the simplest possible one). We still need to tell Paddle how to **learn**, so that it can obtain the parameters `w` and `b`.
Briefly stated, the process goes like this (the underlying theory and knowledge can be studied step by step later). In machine learning / deep learning, the machine (computer) initially obtains `w` and `b` by guessing. Predictions made with these guessed parameters produce a `y_predict` that necessarily has some **gap** from the actual `y`. The machine then **adjusts `w` and `b`** based on this gap; with these step-by-step adjustments, `w` and `b` become more and more accurate, the gap between `y_predict` and `y` becomes smaller and smaller, and eventually we obtain useful values of `w` and `b`. This process is what it means for the machine to **learn**.
In more technical language, the function (a formula) that measures the **gap** is the loss function, and the method used to **adjust** the parameters is the optimization algorithm.
In this example we use the simplest loss function, mean squared error (`paddle.nn.MSELoss`), and the most common optimization algorithm, SGD (stochastic gradient descent), as the optimizer (the `learning_rate` parameter passed to `paddle.optimizer.SGD` can be thought of as controlling the step size of each adjustment).
```
mse_loss = paddle.nn.MSELoss()
sgd_optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters = linear.parameters())
```
## 7. Running the Optimization Algorithm
Next, let Paddle run the optimization algorithm. This is the step-by-step parameter adjustment process described above, and you should see the loss value (the `loss` that measures the gap between `y` and `y_predict`) keep decreasing.
```
total_epoch = 5000
for i in range(total_epoch):
y_predict = linear(x_data)
loss = mse_loss(y_predict, y_data)
loss.backward()
sgd_optimizer.step()
sgd_optimizer.clear_grad()
if i%1000 == 0:
print("epoch {} loss {}".format(i, loss.numpy()))
print("finished training, loss {}".format(loss.numpy()))
```
## 8. The Parameters the Machine Learned
After adjusting (**learning**) the parameters `w` and `b` in this way, the program below shows what the parameters have become. You should find that `w` is now very close to 2.0 and `b` is close to 10.0. They are not exactly 2 and 10, but they are reasonably good model parameters learned from this data, and they can be used for future estimates. (If you like, you can also let the machine train a bit longer to obtain values even closer to 2.0 and 10.0.)
```
w_after_opt = linear.weight.numpy().item()
b_after_opt = linear.bias.numpy().item()
print("w after optimize: {}".format(w_after_opt))
print("b after optimize: {}".format(b_after_opt))
```
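As a quick check (this cell is not in the original notebook), you can feed a new distance into the trained layer. For a hypothetical 15 km trip, the estimated fare should be close to 2 × 15 + 10 = 40.
```
new_distance = paddle.to_tensor([[15.0]])
estimated_fee = linear(new_distance)
print(estimated_fee.numpy())
```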
## 9. hello paddle
Through this small example, we hope you now have a first impression of PaddlePaddle, and that, as you learn more about it, you can use it to solve the problems you actually encounter.
```
print("hello paddle")
```
# Accumulation of roundoff error
In this notebook we'll study some effects of the accumulation of roundoff error.
# Unstable Algorithms
We need to solve this integral for $n=1,2,\ldots,8$:
$$y_n=\int_0^1\frac{x^n}{x+5}\,dx$$
Using $x^n = x^{n-1}(x+5) - 5x^{n-1}$, we can write the recurrence:
$$y_n = \frac{1}{n} - 5y_{n-1}$$
$$y_{1}=1-5(y_{0}+\epsilon )=1-5y_{0}-5\epsilon$$
$$y_{2}={\frac {1}{2}}-5(1-5y_{0}-5\epsilon )={\frac {1}{2}}-5+25y_{0}+5^{2}\epsilon$$
$$\vdots$$
$$y_{n}=\ldots +5^{n}\epsilon$$
The roundoff error is amplified, $\mathcal{O}(5^n)$, in succeeding calculations, so this algorithm is unstable.
```
import numpy as np
import scipy as sc
import matplotlib.pyplot as plt
import sys
def function(y0, n):
y_sol = np.zeros(n)
y_sol[0] = y0
for i in range (1,n):
        y_sol[i] = 1/i - 5*y_sol[i-1]
return y_sol
n = 8
x = np.linspace(-1,1,8)
y0 = 0
y = function(y0, n)
plt.plot(x,y)
# The value of 'y' goes to infinity
```
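A standard remedy (not part of the original notebook) is to run the recurrence backwards, $y_{n-1} = \frac{1}{5}\left(\frac{1}{n} - y_n\right)$: any error in the starting value is now divided by 5 at each step instead of multiplied by 5. A minimal sketch:
```
def backward_function(y_start, n_start, n_end):
    # run y_{n-1} = (1/n - y_n)/5 downwards from a rough starting guess
    y = y_start
    for n in range(n_start, n_end, -1):
        y = (1/n - y) / 5
    return y

# even a crude guess such as y_20 = 0 gives an accurate y_8,
# because the initial error is damped by a factor of 5**(20-8)
print(backward_function(0.0, 20, 8))
```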
# Conditioned problems
Even if a stable algorithm is used, the solution to a problem is still inaccurate due to the accumulation of roundoff error when the problem itself is ill-conditioned.
## Dangers of Higher-Order Polynomial Interpolation
In 1901, Carl Runge published a study on the dangers of higher-order polynomial interpolation. He looked at the following simple-looking function:
$$f(x) = \frac{1}{1+25x^2}$$
which is now called Runge’s function
```
x = np.linspace(-1,1,10)
y = 1/(1 + 25*x**2)
xx = np.linspace(-1,1,100)
p = np.polyfit(x,y,4)
y4 = np.polyval(p,xx)
yr = 1/(1 + 25*xx**2)
plt.plot(x,y,'o')
plt.plot(xx,y4)
plt.plot(xx,yr,'--')
plt.legend(['','Polynomial fit','Runge function'])
# The polynomial does a poor job of following Runge’s function
# Continuing with the analysis,
# the 20th-order polynomial can be generated and plotted
x = np.linspace(-1,1,10)
y = 1/(1 + 25*x**2)
xx = np.linspace(-1,1,100)
p = np.polyfit(x,y,20)
y4 = np.polyval(p,xx)
yr = 1/(1+25*xx**2)
plt.plot(x,y,'o')
plt.plot(xx,y4)
plt.plot(xx,yr,'--')
plt.legend(['','Polynomial fit','Runge function'])
# The polynomial does a poor job of following Runge’s function
```
Although there may be certain contexts where higher-order polynomials are necessary, they are usually to be avoided. In most engineering and scientific contexts, lower-order polynomials of the type described in this chapter can be used effectively to capture the curving trends of data without suffering from oscillations.
[Real world example: Patriot missile failure due to magnification of roundoff error](https://en.wikipedia.org/wiki/Round-off_error)
# Error Estimates for Iterative Methods
The approximation of $e$ using Maclaurin series expansion
$$e^x = 1+ x+ \frac{x^2}{2!}+\frac{x^3}{3!}+\frac{x^4}{4!} ... \frac{x^n}{n!}$$
```
def maclaurin(x, esp, max_int):
"""
Maclaurin series of exponential function
input:
x = value at which series evaluated
esp = stopping criterion (default = 0.0001)
max_int = maximum iterations (default = 50)
output:
fx = estimated value
ea = approximate relative error (%)
iter = number of iterations
"""
iter = 1
sol = 1
ea = 100
while sol:
sol_old = sol
sol = sol + x**iter / np.math.factorial(iter)
iter += 1
if sol != 0:
ea = np.abs((sol - sol_old)/sol)*100
        if ea <= esp or iter >= max_int:
break
fx = sol
return fx, ea, iter
maclaurin(1,1e-6,100)
e, a, inte = maclaurin(1,1e-6,100)
# np.exp(1) returns the true value of the number 'e'
# At least it is a better approximation than our method
print('The error is: '+ str(np.exp(1) - e))
print("The machine epsilon built into Python is: "+str(sys.float_info.epsilon))
```
The 52 bits used for the mantissa correspond to about 15 to 16 base-10 digits, so in our programming language the machine epsilon is about $10^{-16}$.
Remember that?
$$lim_{n\to\infty}(1 + \frac{1}{n})^n = e = 2.718281828...$$
Let's use the power of python to calculate
```
def euler(n):
return (1 + 1/n)**n
euler(10000)
# We can write 10^16 like 10E16 or 10e16
# What just happen?
euler(10e16)
```
When $n$ becomes bigger than $10^{15}$, our function stops increasing and starts to oscillate.
```
x = np.linspace(1,1e16,100)
y = euler(x)
y2 = np.exp(1)
plt.xscale('log')
plt.axhline(y=y2, color='r', linestyle='--')
plt.plot(x,y)
plt.title("euler function in lin-log scale")
plt.legend(["Real Value of Euler Number", "f(n) "])
```
# Inventory Control with Lead Times and Multiple Suppliers
## Description
One potential application of reinforcement learning involves ordering supplies with multiple suppliers having various lead times and costs in order to meet a changing demand. Lead time in inventory management is the lapse in time between when an order is placed to replenish inventory and when the order is received. This affects the amount of stock a supplier needs to hold at any point in time. Moreover, due to having multiple suppliers, at every stage the supplier is faced with a decision on how much to order from each supplier, noting that more costly suppliers might have to be used to replenish the inventory with a shorter lead time.
The inventory control model addresses this by modeling an environment where there are multiplie suppliers with different costs and lead times. Orders must be placed with these suppliers to have an on-hand inventory to meet a changing demand. However, both having supplies on backorder and holding unused inventory have associated costs. The goal of the agent is to choose the amount to order from each supplier to maximize the revenue earned.
At each time step, an order is placed to each supplier. If previous orders have waited for the length of their supplier's lead time, then these orders will become part of the on-hand inventory. The demand is then randomly chosen from a user-selected distribution and is subtracted from the on-hand inventory. If the on-hand inventory would become less than zero, than items are considered to be on backorder which decreases the reward. The demand is subtracted from the on-hand inventory to calculate on-hand inventory for the start of the next time step. A remaining inventory (a positive nonzero number) at the end of this calculation negatively influences the reward proportional to the holding costs. There are two ways that the inventory can be setup for the environment. The first allows negative inventory to be accumulated. In this case the on-hand inventory is offset by adding the value of the maximum inventory. This is done so that the observation space can be properly represented using AI Gym. This allows for backorder costs to be calculated if the inventory were to go become negative. The second way does not allow for inventory to become negative. Backorders are still calculated and they still negatively influence reward, but the inventory is reset to 0 for the next timestep after the reward calculation. The inventory is not offset by any number in this version of the environment.
## Model Assumptions
* Backorders are not retroactively fulfilled. If a high demand would cause inventory to become negative, this unfulfilled demand is not met later when there may be some inventory being held at the end of a timestep.
## Environment
### Dynamics
#### State Space
The state space is $S = [0,\text{Max-Order}]^{L_1} \times [0,\text{Max-Order}]^{L_2} \times ... \times [0,\text{Max-Order}]^{L_N} \times I$ where $N$ is the number of suppliers and $[0,\text{Max-Order}]^{L_i}$ represents a list of integers between zero and the max order amount, maxorder (specified in the configuration), with the length of the lead time of supplier $i$. This represents how many timesteps back each order is from being added to the inventory. $I$ represents the current on-hand inventory. To represent a timestep, an order will be moved up an index in the array unless it is added to the inventory, in which case it is removed from the array. Each supplier has their own set of indices in the array that represent its lead times. Each index in the list (except for $ I $) has a maximum value of the max_order parameter.
If negative inventory is allowed, the last index, the on-hand inventory, is offset by adding the maximum inventory value to it. It is in the range $[0, 2 * maxinventory]$. This is done so that a negative value of the on-hand inventory can be temporarily kept to use in reward calculations for backorders and so that the observation space can be represented properly. Before this value is used in any calculations, the value of the max inventory is subtracted so that the true value of the inventory is used. Otherwise, if negative inventory is not allowed, the on-hand inventory must be in the range of $[0,maxinventory]$ and directly corresponds to the current inventory.
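For concreteness, a hypothetical state for two suppliers with lead times 2 and 1, negative inventory allowed, and a maximum inventory of 100 could look like the sketch below (the values and layout are illustrative assumptions, not taken from the package):
```
state = [4, 0,    # outstanding orders for supplier 0 (lead time 2)
         7,       # outstanding order for supplier 1 (lead time 1)
         103]     # on-hand inventory offset by max_inventory (true inventory = 3)
```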
#### Action Space
The action space is $A = [0,\text{Max-Order}]^N$ where $N$ is the number of suppliers. This represents the amount to order from each supplier for the current timestep. The order amount cannot be greater than the max_order parameter (set in the initialization of the environment).
#### Reward
The reward is $R = - (Order + holdcost \times max(0,I) + backordercost \times max(0, -I))$ where $Order = \sum_{i = 1}^{N} c_i \times a_i$ and represents the sum of the amount most recently ordered from each supplier, $a_i$, multiplied by the appropriate ordering cost, $c_i$. $holdcost$ represents the holding cost for excess inventory, and $backordercost$ represents the backorder cost for when the inventory would become negative.
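A minimal sketch of this reward computation, with hypothetical helper and argument names (they are not the environment's actual API):
```
def reward(order_amounts, supplier_costs, inventory, hold_cost, backorder_cost):
    # order cost: sum of ordered amounts weighted by each supplier's cost
    order_cost = sum(c * a for c, a in zip(supplier_costs, order_amounts))
    return -(order_cost
             + hold_cost * max(0, inventory)
             + backorder_cost * max(0, -inventory))

# Example: costs (3, 1), orders (2, 5), 4 units held over:
# reward = -((3*2 + 1*5) + 1*4 + 0) = -15
print(reward([2, 5], [3, 1], 4, hold_cost=1, backorder_cost=10))
```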
#### Transitions
At each timestep, orders are placed with each supplier for a certain amount of resources. These orders are processed and will add to the on-hand inventory once the lead time for the appropriate supplier has passed. The time that has passed for each order is tracked using the state at each timestep. If any lead times have passed, the ordered amount is added to the on-hand inventory. Then, the randomly chosen demand is subtracted from the on-hand inventory. If the demand is higher than the current inventory, then the inventory does become negative for the next state. The reward is then calculated proportional to the revenue earned from meeting the demand, but is inversely proportional to the amount that is backordered (the difference between the inventory and demand). If the demand is lower than the current inventory, the inventory remains positive for the next state. The reward is still proportional to the revenue earned from meeting the demand, but is inversely proportional to the amount of inventory left over multiplied by the holding costs.
#### Configuration Parameters
* lead_times: array of ints representing the lead times of each supplier
* demand_dist: The random number sampled from the given distribution to be used to calculate the demand
* supplier_costs: array of ints representing the costs of each supplier
* hold_cost: The int holding cost.
* backorder_cost: The int backorder cost.
* max_inventory: The maximum value (int) that can be held in inventory
* max_order: The maximum value (int) that can be ordered from each supplier
* epLen: The int number of time steps to run the experiment for.
* starting_state: An int list containing enough indices for the sum of all the lead times, plus an additional index for the initial on-hand inventory.
* neg_inventory: A bool that says whether the on-hand inventory can be negative or not.
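A hypothetical configuration dictionary matching the parameters above is sketched below; the key names mirror the list, but the exact schema expected by the environment (for example, how `demand_dist` is passed) should be checked against its documentation.
```
import numpy as np

config = {
    'lead_times': [5, 1],                        # supplier 0 is slow, supplier 1 is fast
    'supplier_costs': [1, 3],                    # the fast supplier is more expensive
    'demand_dist': lambda: np.random.poisson(5), # demand sampler (exact expected form may differ)
    'hold_cost': 1,
    'backorder_cost': 10,
    'max_inventory': 100,
    'max_order': 10,
    'epLen': 50,
    'starting_state': [0] * 6 + [20],            # one slot per lead-time step, plus initial inventory
    'neg_inventory': True,
}
```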
## Heuristic Agents
### Random Agent
This agent randomly samples from the action space. For this environment, the amount ordered from each supplier is an integer from $[0, maxorder]$.
### Base Surge Agent (TBS)
The base surge agent has 2 parameters, $r$ and $S$. Each action is expressed as $[r,[orderamount]]$. $r$ is a vector of the order amounts for all suppliers except the one with the greatest lead time. $S$ represents the "order up to amount". The order amount is calculated as $S - I$, where $I$ is the current on-hand inventory; this value is clipped to 0 if it is negative, or reduced to $maxorder$ if it is greater. This order amount is used for the supplier with the greatest lead time.
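A minimal sketch of this rule (the function name and structure are illustrative assumptions, not the package's API):
```
def tbs_action(r, S, inventory, max_order):
    # "order up to" amount for the supplier with the greatest lead time
    order_amount = min(max(S - inventory, 0), max_order)
    return r + [order_amount]

# Example: constant orders r=[1, 2] for the faster suppliers, S=10, 7 units on hand
print(tbs_action([1, 2], S=10, inventory=7, max_order=5))   # -> [1, 2, 3]
```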
```
import numpy as np
import pandas as pd
import tensorflow as tf
from data_process import build_vocab, batch_iter, sentence_to_index
from models import LSTM, biLSTM, deepBiLSTM
train = pd.read_csv('./data/train-5T.txt', delimiter='\t')
test = pd.read_csv('./data/test-1T.txt', delimiter='\t')
X_train = train.document
Y_train = train.label
X_test = test.document
Y_test = test.label
max_vocab = 50000
vocab, _, vocab_size = build_vocab(X_train, max_vocab)
```
# Sentiment Analysis with LSTM
```
batches = batch_iter(list(zip(X_train, Y_train)), batch_size=64, num_epochs=15)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.reset_default_graph()
sess = tf.Session(config=config)
model = LSTM(sess=sess, vocab_size=vocab_size, lr=1e-2)
train_acc = []
avgLoss = []
x_test = sentence_to_index(X_test, vocab)
for step, batch in enumerate(batches):
x_train, y_train = zip(*batch)
x_train = sentence_to_index(x_train, vocab)
acc = model.get_accuracy(x_train, y_train)
l, _ = model.train(x_train, y_train)
train_acc.append(acc)
avgLoss.append(l)
if step % 100 == 0:
test_loss = model.get_loss(x_test, Y_test)
print('batch:', '%04d' % step, '\ntrain loss:', '%.5f' % np.mean(avgLoss), '\ttest loss:', '%.5f' % test_loss)
test_acc = model.get_accuracy(x_test, Y_test)
print('train accuracy:', '%.3f' % np.mean(train_acc), '\ttest accuracy:', '%.3f' % test_acc, '\n')
avgLoss = []
train_acc = []
```
# Sentiment Analysis with biLSTM
```
batches = batch_iter(list(zip(X_train, Y_train)), batch_size=64, num_epochs=15)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.reset_default_graph()
sess = tf.Session(config=config)
model = biLSTM(sess=sess, vocab_size=vocab_size, lr=1e-2)
train_acc = []
avgLoss = []
x_test = sentence_to_index(X_test, vocab)
for step, batch in enumerate(batches):
x_train, y_train = zip(*batch)
x_train = sentence_to_index(x_train, vocab)
acc = model.get_accuracy(x_train, y_train)
l, _ = model.train(x_train, y_train)
train_acc.append(acc)
avgLoss.append(l)
if step % 100 == 0:
test_loss = model.get_loss(x_test, Y_test)
print('batch:', '%04d' % step, '\ntrain loss:', '%.5f' % np.mean(avgLoss), '\ttest loss:', '%.5f' % test_loss)
test_acc = model.get_accuracy(x_test, Y_test)
print('train accuracy:', '%.3f' % np.mean(train_acc), '\ttest accuracy:', '%.3f' % test_acc, '\n')
avgLoss = []
train_acc = []
```
# Sentiment Analysis with deepBiLSTM
```
batches = batch_iter(list(zip(X_train, Y_train)), batch_size=64, num_epochs=15)
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
tf.reset_default_graph()
sess = tf.Session(config=config)
model = deepBiLSTM(sess=sess, vocab_size=vocab_size, lr=1e-2)
train_acc = []
avgLoss = []
x_test = sentence_to_index(X_test, vocab)
for step, batch in enumerate(batches):
x_train, y_train = zip(*batch)
x_train = sentence_to_index(x_train, vocab)
acc = model.get_accuracy(x_train, y_train)
l, _ = model.train(x_train, y_train)
train_acc.append(acc)
avgLoss.append(l)
if step % 100 == 0:
test_loss = model.get_loss(x_test, Y_test)
print('batch:', '%04d' % step, '\ntrain loss:', '%.5f' % np.mean(avgLoss), '\ttest loss:', '%.5f' % test_loss)
test_acc = model.get_accuracy(x_test, Y_test)
print('train accuracy:', '%.3f' % np.mean(train_acc), '\ttest accuracy:', '%.3f' % test_acc, '\n')
avgLoss = []
train_acc = []
```
[Table of Contents](http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb)
# Nonlinear Filtering
```
#format the book
%matplotlib inline
from __future__ import division, print_function
from book_format import load_style
load_style()
```
## Introduction
The Kalman filter that we have developed uses linear equations, and so the filter can only handle linear problems. But the world is nonlinear, and so the classic filter that we have been studying to this point can have very limited utility.
There can be nonlinearity in the process model. Suppose we want to track an object falling through the atmosphere. The acceleration of the object depends on the drag it encounters. Drag depends on air density, and the air density decreases with altitude. In one dimension this can be modelled with the nonlinear differential equation
$$\ddot x = \frac{0.0034ge^{-x/22000}\dot x^2}{2\beta} - g$$
A second source of nonlinearity comes from the measurements. For example, radars measure the slant range to an object, and we are typically interested in the aircraft's position over the ground. We invoke Pythagoras and get the nonlinear equation:
$$x=\sqrt{\mathtt{slant}^2 - \mathtt{altitude}^2}$$
These facts were not lost on the early adopters of the Kalman filter. Soon after Dr. Kalman published his paper people began working on how to extend the Kalman filter for nonlinear problems.
It is almost true to state that the only equation anyone knows how to solve is $\mathbf{Ax}=\mathbf{b}$. We only really know how to do linear algebra. I can give you any linear set of equations and you can either solve it or prove that it has no solution.
Anyone with formal education in math or physics has spent years learning various analytic ways to solve integrals, differential equations and so on. Yet even trivial physical systems produce equations that cannot be solved analytically. I can take an equation that you are able to integrate, insert a $\log$ term, and render it insolvable. This leads to jokes about physicists stating "assume a spherical cow on a frictionless surface in a vacuum...". Without making extreme simplifications most physical problems do not have analytic solutions.
How do we do things like model airflow over an aircraft in a computer, or predict weather, or track missiles with a Kalman filter? We retreat to what we know: $\mathbf{Ax}=\mathbf{b}$. We find some way to linearize the problem, turning it into a set of linear equations, and then use linear algebra software packages to compute an approximate solution.
Linearizing a nonlinear problem gives us inexact answers, and in a recursive algorithm like a Kalman filter or weather tracking system these small errors can sometimes reinforce each other at each step, quickly causing the algorithm to spit out nonsense.
What we are about to embark upon is a difficult problem. There is not one obvious, correct, mathematically optimal solution anymore. We will be using approximations, we will be introducing errors into our computations, and we will forever be battling filters that *diverge*, that is, filters whose numerical errors overwhelm the solution.
In the remainder of this short chapter I will illustrate the specific problems the nonlinear Kalman filter faces. You can only design a filter after understanding the particular problems the nonlinearity in your problem causes. Subsequent chapters will then teach you how to design and implement different kinds of nonlinear filters.
## The Problem with Nonlinearity
The mathematics of the Kalman filter is beautiful in part due to the Gaussian equation being so special. It is nonlinear, but when we add and multiply them we get another Gaussian as a result. That is very rare. $\sin{x}*\sin{y}$ does not yield a $\sin$ as an output.
What I mean by linearity may be obvious, but there are some subtleties. The mathematical requirements are twofold:
* additivity: $f(x+y) = f(x) + f(y)$
* homogeneity: $f(ax) = af(x)$
This leads us to say that a linear system is defined as a system whose output is linearly proportional to the sum of all its inputs. A consequence of this is that, to be linear, if the input is zero then the output must also be zero. Consider an audio amp - if I sing into a microphone, and you start talking, the output should be the sum of our voices (input) scaled by the amplifier gain. But if the amplifier outputs a nonzero signal, such as a hum, for a zero input, the additive relationship no longer holds. This is because linearity requires that $amp(voice) = amp(voice + 0)$. This clearly should give the same output, but if $amp(0)$ is nonzero, then
$$
\begin{aligned}
amp(voice) &= amp(voice + 0) \\
&= amp(voice) + amp(0) \\
&= amp(voice) + non\_zero\_value
\end{aligned}
$$
which is clearly nonsense. Hence, an apparently linear equation such as
$$L(f(t)) = f(t) + 1$$
is not linear because $L(0) = 1$. Be careful!
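A two-line numerical check (mine, not the book's) makes the failure concrete:
```
L = lambda f: f + 1
print(L(2 + 3), L(2) + L(3))   # additivity fails: 6 != 7
print(L(4 * 2), 4 * L(2))      # homogeneity fails: 9 != 12
```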
## An Intuitive Look at the Problem
I particularly like the following way of looking at the problem, which I am borrowing from Dan Simon's *Optimal State Estimation* [[1]](#[1]). Consider a tracking problem where we get the range and bearing to a target, and we want to track its position. The reported distance is 50 km, and the reported angle is 90$^\circ$. Assume that the errors in both range and angle are distributed in a Gaussian manner. Given an infinite number of measurements what is the expected value of the position?
I have been recommending using intuition to gain insight, so let's see how it fares for this problem. We might reason that since the mean of the range will be 50 km, and the mean of the angle will be 90$^\circ$, that the answer will be x=0 km, y=50 km.
Let's plot that and find out. Here are 3000 points plotted with a normal distribution of the distance of 0.4 km, and the angle having a normal distribution of 0.35 radians. We compute the average of the all of the positions, and display it as a star. Our intuition is displayed with a large circle.
```
import numpy as np
from numpy.random import randn
import matplotlib.pyplot as plt
N = 3000
a = np.pi/2. + (randn(N) * 0.35)
r = 50.0 + (randn(N) * 0.4)
xs = r * np.cos(a)
ys = r * np.sin(a)
plt.figure()
plt.scatter(xs, ys, label='Sensor', color='k', marker='.', s=2)
xs, ys = sum(xs)/N, sum(ys)/N
plt.scatter(xs, ys, c='r', marker='*', s=200, label='Mean')
plt.scatter(0, 50, c='k', marker='o', s=300, label='Intuition')
plt.axis('equal')
plt.legend();
```
We can see that our intuition failed us because the nonlinearity of the problem forced all of the errors to be biased in one direction. This bias, over many iterations, can cause the Kalman filter to diverge. Even if it doesn't diverge the solution will not be optimal. Linear approximations applied to nonlinear problems yield inaccurate results.
## The Effect of Nonlinear Functions on Gaussians
Gaussians are not closed under an arbitrary nonlinear function. Recall the equations of the Kalman filter - at each evolution we pass the Gaussian representing the state through the process function to get the Gaussian at time $k$. Our process function was always linear, so the output was always another Gaussian. Let's look at that on a graph. I will take an arbitrary Gaussian and pass it through the function $f(x) = 2x + 1$ and plot the result. We know how to do this analytically, but let's use sampling. I will generate 500,000 points with a normal distribution, pass them through $f(x)$, and plot the results. I do it this way because the next example will be nonlinear, and we will have no way to compute this analytically.
```
import numpy as np
from numpy.random import normal
gaussian = (0., 1.)
data = normal(loc=gaussian[0], scale=gaussian[1], size=500000)
plt.figure()
plt.hist(2*data + 1, 1000);
```
This is an unsurprising result. The result of passing the Gaussian through $f(x)=2x+1$ is another Gaussian centered around 1. Let's look at the input, nonlinear function, and output at once.
```
from kf_book.book_plots import set_figsize, figsize
from kf_book.nonlinear_plots import plot_nonlinear_func
def g1(x):
return 2*x+1
plt.figure()
plot_nonlinear_func(data, g1, gaussian)
```
> I explain how to plot Gaussians, and much more, in the Notebook *Computing_and_Plotting_PDFs* in the
Supporting_Notebooks folder. You can also read it online [here](https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/Supporting_Notebooks/Computing_and_plotting_PDFs.ipynb)[1]
The plot labeled 'Input' is the histogram of the original data. This is passed through the function $f(x)=2x+1$, which is displayed in the chart on the bottom left. The red lines show how one value, $x=0$, is passed through the function. Each value from the input is passed through in the same way to the output function on the right. For the output I computed the mean by taking the average of all the points, and drew the results with the dotted blue line. A solid blue line shows the actual mean for the point $x=0$. The output looks like a Gaussian, and is in fact a Gaussian. We can see that the variance in the output is larger than the variance in the input, and the mean has been shifted from 0 to 1, which is what we would expect given the transfer function $f(x)=2x+1$. The $2x$ affects the variance, and the $+1$ shifts the mean. The computed mean, represented by the dotted blue line, is nearly equal to the actual mean. If we used more points in our computation we could get arbitrarily close to the actual value.
Now let's look at a nonlinear function and see how it affects the probability distribution.
```
def g2(x):
return (np.cos(3*(x/2 + 0.7))) * np.sin(0.3*x) - 1.6*x
plt.figure()
plot_nonlinear_func(data, g2, gaussian)
```
This result may be somewhat surprising to you. The function looks "fairly" linear, but the probability distribution of the output is completely different from a Gaussian. Recall the equations for multiplying two univariate Gaussians:
$$\begin{aligned}
\mu &=\frac{\sigma_1^2 \mu_2 + \sigma_2^2 \mu_1} {\sigma_1^2 + \sigma_2^2} \\
\sigma &= \frac{1}{\frac{1}{\sigma_1^2} + \frac{1}{\sigma_2^2}}
\end{aligned}$$
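As a quick numeric check of these formulas (the helper below is mine, not FilterPy's):
```
def gaussian_multiply(mu1, var1, mu2, var2):
    mu = (var1 * mu2 + var2 * mu1) / (var1 + var2)
    var = 1. / (1. / var1 + 1. / var2)
    return mu, var

print(gaussian_multiply(10., 4., 11., 1.))   # -> (10.8, 0.8)
```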
These equations do not hold for non-Gaussians, and certainly do not hold for the probability distribution shown in the 'Output' chart above.
Think of what this implies for the Kalman filter algorithm of the previous chapter. All of the equations assume that a Gaussian passed through the process function results in another Gaussian. If this is not true then all of the assumptions and guarantees of the Kalman filter do not hold. Let's look at what happens when we pass the output back through the function again, simulating the next time step of the Kalman filter.
```
y = g2(data)
gaussian2 = (np.mean(y), np.var(y))
plt.figure()
plot_nonlinear_func(y, g2, gaussian2)
```
As you can see the probability function is further distorted from the original Gaussian. However, the graph is still somewhat symmetric around x=0, let's see what the mean is.
```
print('input mean, variance: %.4f, %.4f' %
(np.mean(data), np.var(data)))
print('output mean, variance: %.4f, %.4f' %
(np.mean(y), np.var(y)))
```
Let's compare that to the linear function that passes through (-2,3) and (2,-3), which is very close to the nonlinear function we have plotted. Using the equation of a line we have
$$m=\frac{-3-3}{2-(-2)}=-1.5$$
```
def g3(x):
return -1.5 * x
plt.figure()
plot_nonlinear_func(data, g3, gaussian)
out = g3(data)
print('output mean, variance: %.4f, %.4f' %
(np.mean(out), np.var(out)))
```
Although the shapes of the output are very different, the mean and variance of each are almost the same. This may lead us to reasoning that perhaps we can ignore this problem if the nonlinear equation is 'close to' linear. To test that, we can iterate several times and then compare the results.
```
out = g3(data)
out2 = g2(data)
for i in range(10):
out = g3(out)
out2 = g2(out2)
print('linear output mean, variance: %.4f, %.4f' %
(np.average(out), np.std(out)**2))
print('nonlinear output mean, variance: %.4f, %.4f' %
(np.average(out2), np.std(out2)**2))
```
Unfortunately the nonlinear version is not stable. It drifted significantly from the mean of 0, and the variance is half an order of magnitude larger.
I minimized the issue by using a function that is quite close to a straight line. What happens if the function is $y(x)=x^2$?
```
def g3(x):
return -x*x
x0 = (1, 1)
data = normal(loc=x0[0], scale=x0[1], size=500000)
plt.figure()
plot_nonlinear_func(data, g3, gaussian=x0)
```
Despite the curve being smooth and reasonably straight at $x=1$ the probability distribution of the output doesn't look anything like a Gaussian and the computed mean of the output is quite different than the value computed directly. This is not an unusual function - a ballistic object moves in a parabola, and this is the sort of nonlinearity your filter will need to handle. If you recall we've tried to track a ball and failed miserably. This graph should give you insight into why the filter performed so poorly.
## A 2D Example
It is hard to look at probability distributions and reason about what will happen in a filter. So let's think about tracking an aircraft with radar. The estimate may have a covariance that looks like this:
```
import kf_book.nonlinear_internal as nonlinear_internal
nonlinear_internal.plot1()
```
What happens when we try to linearize this problem? The radar gives us a range to the aircraft. Suppose the radar is directly under the aircraft (x=10) and the next measurement states that the aircraft is 3 miles away (y=3). The positions that could match that measurement form a circle with radius 3 miles, like so.
```
nonlinear_internal.plot2()
```
We can see by inspection that the probable position of the aircraft is somewhere near x=11.4, y=2.7 because that is where the covariance ellipse and range measurement overlap. But the range measurement is nonlinear so we have to linearize it. We haven't covered this material yet, but the Extended Kalman filter will linearize at the last position of the aircraft - (10,2). At x=10 the range measurement has y=3, and so we linearize at that point.
```
nonlinear_internal.plot3()
```
Now we have a linear representation of the problem (literally a straight line) which we can solve. Unfortunately you can see that the intersection of the line and the covariance ellipse is a long way from the actual aircraft position.
```
nonlinear_internal.plot4()
```
That sort of error often leads to disastrous results. The error in this estimate is large. But in the next innovation of the filter that very bad estimate will be used to linearize the next radar measurement, so the next estimate is likely to be markedly worse than this one. After only a few iterations the Kalman filter will diverge, and start producing results that have no correspondence to reality.
This covariance ellipse spans miles. I exaggerated the size to illustrate the difficulties of highly nonlinear systems. In real radar tracking problems the nonlinearity is usually not that bad, but the errors will still accumulate. Other systems you may work on could have this amount of nonlinearity - this was not an exaggeration made only to prove a point. You will always be battling divergence when working with nonlinear systems.
## The Algorithms
You may be impatient to solve a specific problem, and wondering which filter to use. I will quickly survey the options. The subsequent chapters are somewhat independent of each other, and you can fruitfully skip around, though I recommend reading linearly if you truly want to master all of the material.
The workhorses of nonlinear filters are the *linearized Kalman filter* and *extended Kalman filter* (EKF). These two techniques were invented shortly after Kalman published his paper and they have been the main techniques used since then. The flight software in airplanes, the GPS in your car or phone almost certainly use one of these techniques.
However, these techniques are extremely demanding. The EKF linearizes the differential equations at one point, which requires you to find a solution to a matrix of partial derivatives (a Jacobian). This can be difficult or impossible to do analytically. If impossible, you have to use numerical techniques to find the Jacobian, but this is expensive computationally and introduces more error into the system. Finally, if the problem is quite nonlinear the linearization leads to a lot of error being introduced in each step, and the filters frequently diverge. You cannot throw some equations into some arbitrary solver and expect to get good results. It's a difficult field for professionals. I note that most Kalman filtering textbooks merely gloss over the EKF despite it being the most frequently used technique in real world applications.
Recently the field has been changing in exciting ways. First, computing power has grown to the point that we can use techniques that were once beyond the ability of a supercomputer. These use *Monte Carlo* techniques - the computer generates thousands to tens of thousands of random points and tests all of them against the measurements. It then probabilistically kills or duplicates points based on how well they match the measurements. A point far away from the measurement is unlikely to be retained, whereas a point very close is quite likely to be retained. After a few iterations there is a clump of particles closely tracking your object, and a sparse cloud of points where there is no object.
This has two benefits. First, the algorithm is robust even for extremely nonlinear problems. Second, the algorithm can track arbitrarily many objects at once - some particles will match the behavior on one object, and other particles will match other objects. So this technique is often used to track automobile traffic, people in crowds, and so on.
The costs should be clear. It is computationally expensive to test tens of thousands of points for every step in the filter. But modern CPUs are very fast, and this is a good problem for GPUs because that part of the algorithm is parallelizable. Another cost is that the answer is not mathematical. With a Kalman filter my covariance matrix gives me important information about the amount of error in the estimate. The particle filter does not give me a rigorous way to compute this. Finally, the output of the filter is a cloud of points; I then have to figure out how to interpret it. Usually you will be doing something like taking the mean and standard deviations of the points, but this is a difficult problem. There are still many points that do not 'belong' to a tracked object, so you first have to run some sort of clustering algorithm to find the points that seem to be tracking an object, and then you need another algorithm to produce a state estimate from those points. None of this is intractable, but it is all quite computationally expensive.
Finally, we have a new algorithm called the *unscented Kalman filter* (UKF). It does not require you to find analytic solutions to nonlinear equations, and yet almost always performs better than the EKF. It does well with nonlinear problems - problems where the EKF has significant difficulties. Designing the filter is extremely easy. Some will say the jury is still out on the UKF, but to my mind the UKF is superior in almost every way to the EKF. I suggest that the UKF should be the starting point for any implementation, especially if you are not a Kalman filter professional with a graduate degree in control theory. The main downside is that the UKF can be a few times slower than the EKF, but this really depends on whether the EKF solves the Jacobian analytically or numerically. If numerically the UKF is almost certainly faster. It has not been proven (and probably it cannot be proven) that the UKF always yields more accurate results than the EKF. In practice it almost always does, often significantly so. It is very easy to understand and implement, and I strongly suggest this filter as your starting point.
## Summary
The world is nonlinear, but we only really know how to solve linear problems. This introduces significant difficulties for Kalman filters. We've looked at how nonlinearity affects filtering in 3 different but equivalent ways, and I've given you a brief summary of the major approaches: the linearized Kalman filter, the extended Kalman filter, the Unscented Kalman filter, and the particle filter.
Until recently the linearized Kalman filter and EKF have been the standard way to solve these problems. They are very difficult to understand and use, and they are also potentially very unstable.
Recent developments have offered what are to my mind superior approaches. The UKF dispenses with the need to find solutions to partial differential equations, yet it is also usually more accurate than the EKF. It is easy to use and understand. I can get a basic UKF going in a few minutes by using FilterPy. The particle filter dispenses with mathematical modeling completely in favor of a Monte Carlo technique of generating a random cloud of thousands of points. It runs slowly, but it can solve otherwise intractable problems with relative ease.
I get more email about the EKF than anything else; I suspect that this is because most treatments in books, papers, and on the internet use the EKF. If your interest is in mastering the field of course you will want to learn about the EKF. But if you are just trying to get good results I point you to the UKF and particle filter first. They are much easier to implement, understand, and use, and they are typically far more stable than the EKF.
Some will quibble with that advice. A lot of recent publications are devoted to a comparison of the EKF, UKF, and perhaps a few other choices for a given problem. Do you not need to perform a similar comparison for your problem? If you are sending a rocket to Mars then of course you do. You will be balancing issues such as accuracy, round off errors, divergence, mathematical proof of correctness, and the computational effort required. I can't imagine not knowing the EKF intimately.
On the other hand, the UKF works spectacularly! I use it at work for real world applications. I mostly haven't even tried to implement an EKF for these applications because I can verify that the UKF is working fine. Is it possible that I might eke out another 0.2% of performance from the EKF in certain situations? Sure! Do I care? No! I completely understand the UKF implementation, it is easy to test and verify, I can pass the code to others and be confident that they can understand and modify it, and I am not a masochist who wants to battle difficult equations when I already have a working solution. If the UKF or particle filters start to perform poorly for some problem then I will turn to other techniques, but not before then. And realistically, the UKF usually provides substantially better performance than the EKF over a wide range of problems and conditions. If "really good" is good enough I'm going to spend my time working on other problems.
I'm belaboring this point because in most textbooks the EKF is given center stage, and the UKF is either not mentioned at all or just given a 2-page gloss that leaves you completely unprepared to use the filter. The UKF is still relatively new, and it takes time to write new editions of books. At the time many books were written, the UKF either had not been invented yet or was just an unproven but promising curiosity. But I am writing this now, the UKF has had enormous success, and it needs to be in your toolkit. That is what I will spend most of my effort trying to teach you.
|
github_jupyter
|
# Classifying Business Documents using Deep Learning
## IBM Coursera Advanced Data Science Capstone - Results Demo
## Sumudu Tennakoon
```
import pandas as pd
import numpy as np
import sys
import os
import re
import matplotlib.pyplot as plt
from datetime import date
from sklearn.model_selection import train_test_split
import tensorflow as tf
import tensorflow.keras as keras
print('TensorFlow Version: ', tf.__version__)
from DocumentClassifierV1 import * # Custom library created for the Capstone project.
```
## 1. Read Pre-saved Input dataset (Test Sample-not used in modeling)
```
DocumentFilesData = pd.read_pickle('Data/DocumentClassification_IBM_ADV_DS_Capstone_TestSample_128x128_20190316.pkl')
```
## 2. Organize Class Labels
```
ClassLabels = list(DocumentFilesData.FileClass.unique())
ClassNumbers = list(range(len(ClassLabels)))
ClassLabelMap = list((zip(ClassLabels, ClassNumbers)))
print(ClassLabelMap)
for clm in ClassLabelMap:
DocumentFilesData.loc[DocumentFilesData['FileClass']==clm[0] , 'ClassNumber'] = clm[1]
```
## 3. Separate Features and Response
```
NClasses = len(ClassLabels)
imgRows = 128
imgCols = 128
X = np.asarray(list(DocumentFilesData['DocumentMatrix'].values), dtype ='int')
y = DocumentFilesData['ClassNumber'].values
#Shape of datasets
print(X.shape)
print(y.shape)
```
## 4. Plot sample image
```
#Plot sample image with scale
plt.imshow(X[10000])
plt.colorbar()
```
## 5. Send data into the Model
```
if keras.backend.image_data_format() == 'channels_first':
X = X.reshape(X.shape[0], 1, imgRows, imgCols)
input_shape = (1, imgRows, imgCols)
else:
X = X.reshape(X.shape[0], imgRows, imgCols, 1)
input_shape = (imgRows, imgCols, 1)
X = X.astype('float32') # Convert integer image tensor to float
X = X/255 # Normalize grayscale to a number between 0 and 1
print(X.shape[0], 'samples')
# Record actuals
y_act = y
y = keras.utils.to_categorical(y, NClasses)
ClassificationModel = TFModel(ModelFile='Models/DocumentClassification_IBM_ADV_DS_Capstone_CNN_V03_128x128_20190316.pkl', Model=keras.models.load_model('Models/DocumentClassification_IBM_ADV_DS_Capstone_CNN_V03_128x128_20190316.h5'))
Output = ClassificationModel.Classify(InputFiles=X, size=(imgRows,imgCols), ActualClasses=list(y_act),
ReturnImageMatrix=True, ReturnJSON=False, ReturnFullPath=True, TransformedData=True)
```
## 6. Process output
```
Output['actual'] = Output['actual'].astype('int')
for clm in ClassLabelMap:
Output.loc[Output['actual']==clm[1] , 'actual'] = clm[0]
Output.head()
```
## 7. Performance Evaluation
### Confusion Matrix
```
cf = pd.crosstab(Output.actual, Output.prediction, margins=True)
cf
import seaborn as sns
sns.heatmap(pd.crosstab(Output.actual, Output.prediction, margins=False), annot=True)
```
### Accuracy
```
CorrectPredictions = np.sum(np.diagonal(pd.crosstab(Output.actual, Output.prediction, margins=False).values))
TotalDocuments = np.sum(pd.crosstab(Output.actual, Output.prediction, margins=False).values)
Accuracy = CorrectPredictions/TotalDocuments
print('CorrectPredictions= {}'.format(CorrectPredictions))
print('TotalDocuments= {}'.format(TotalDocuments))
print('Accuracy= {}'.format(Accuracy))
```
### Model Robustness
```
bins=np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
labels=np.array([ 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0])
Output['MaxProbabilityScore']=pd.cut(Output.probability, bins=bins) #, labels=labels)
Output['PredictedCorrect'] = np.where(Output['actual']==Output['prediction'], 1, 0)
Robustness = Output.groupby(by='MaxProbabilityScore').agg({'probability':'mean', 'PredictedCorrect':'sum', 'filename':'count'})
Robustness.columns = ['MeanProbability', 'PredictedCorrect', 'BucketCount']
Robustness['BucketPrecision']=Robustness['PredictedCorrect']/Robustness['BucketCount']
Robustness['BucketFraction']=Robustness['BucketCount']/(Robustness['BucketCount'].sum())
Robustness
```
## 8. Run the model on sample Image file
```
InputFiles = ['Data/test1.png']
Output_single = ClassificationModel.Classify(InputFiles=InputFiles, size=(imgRows,imgCols), ActualClasses=None,
ReturnImageMatrix=True, ReturnJSON=True, ReturnFullPath=False, TransformedData=False)
Output_single
OutputDashboard = Dashboard()
fig = OutputDashboard.ImageOutput(Output_single, NSamples=1, Format='JSON', ClassLabels=ClassificationModel.ClassLabels)
plt.show()
```
<hr>
<p> This notebook and related materials were developed by <b> Sumudu Tennakoon</b> for the capstone project in partial fulfillment of the requirements for the <b> Advanced Data Science with IBM Specialization</b>. <br>
March 2019. <br>
Apache License, Version 2.0 (http://www.apache.org/licenses/LICENSE-2.0)</p>
|
github_jupyter
|
# Numpy
The basis of most scientific programming in Python is the *numerical Python* library, `numpy`. NumPy gives us many tools - including a fast and efficient data type, the NumPy `array` - for working with numerical data.
## Numpy Array
NumPy is built around the `array`. This is a data structure defined in NumPy which is *ordered* and *mutable*, much like the `list`. Although very similar to the list, the numpy array is designed for *numerical* data, such as `int` and `float` elements. Let's explore!
```
# First we need to import the numpy package. It is commonly shortened to "np"
import numpy as np
```
The easiest way to define numpy arrays is to define a list or tuple, and convert it to an array with the `numpy.array()` function.
```
a = [0, 1, 2, 3, 4]
b = np.array(a)
print(type(a))
print(type(b))
```
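One thing to notice (a quick aside, not part of the original lesson): the array keeps a single numerical type for all of its elements, so mixing `int` and `float` values upcasts everything to `float`.

```
# Mixing ints and floats: the array stores one dtype, so everything becomes float64
c = np.array([1, 2, 3.5])
print(c)        # [1.  2.  3.5]
print(c.dtype)  # float64
```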
We can index and slice numpy arrays much like lists:
```
print(b[0], b[1:3], b[-1])
```
Try running the following to get help on the NumPy array
```
help(np.ndarray)
```
Woah. That's a really long help page. Often when you are working with a new package, `help()` won't be the most convenient or easy to read way to get help. Instead, we can search for online *documentation* for the package we are using.
If you Google **numpy documentation**, you will likely see links to info about *numpy* and another package we will explore later, *scipy*. If you follow the links to **NumPy**, you should find a [NumPy user Guide](https://docs.scipy.org/doc/numpy-1.15.0/user/index.html) and from there, several pages of tutorials and documentation about the package. The [Quickstart tutorial](https://docs.scipy.org/doc/numpy-1.15.0/user/quickstart.html), will give a much more legible intro to the package.
## Numpy Attributes
NumPy arrays have some built in **attributes**, i.e. info stored in an object, accessible with `object.attribute` (note: no parentheses after).
```
# Let's print some attributes of our b array
print("Num dimensions:", b.ndim,
"\nShape:", b.shape,
"\nSize:", b.size)
```
A common way to define NumPy arrays is with the `arange` function.
```
np.arange(10)
help(np.arange)
```
The numpy `arange` function allows us to quickly build integer arrays. It takes `start`, `stop`, and `step` as arguments.
```
x = np.arange(1, 10)
y = np.arange(2, 20, 2)
print(x)
print(y)
```
We can apply any mathematical operation to a NumPy array, and it will apply that operation to every element in the array.
```
x = np.arange(-3, 4)
y = x**2
print(y)
```
Another way to make NumPy arrays is with the `linspace()` function. This allows us to choose the bounds of an interval and the number of points we want to divide it into. Numpy also has useful math constants like `pi` and `e` and math functions like `sin`, `cos`, `tan`.
```
import matplotlib.pyplot as plt
x = np.linspace(-2*np.pi, 2*np.pi, 100)
y = np.sin(x)
plt.plot(x, y)
# Linspace can be useful for adding more resolution to continuous functions
xarange = np.arange(-np.pi, np.pi)
yarange = np.cos(xarange)
xlinspace = np.linspace(-np.pi, np.pi, 1000)
ylinspace = np.cos(xlinspace)
plt.subplot(1, 2, 1)
plt.plot(xarange, yarange)
plt.subplot(1, 2, 2)
plt.plot(xlinspace, ylinspace)
```
If we want to plot a bell curve we can use the `np.random` module to randomly sample a normal distribution.
```
norm = np.random.standard_normal(100000) # Draw 100,000 random points from the standard normal distribution
hist, bins = np.histogram(norm, bins=10, density=True) # Make histogram of our samples
plt.plot(bins[1:], hist)
hist, bins = np.histogram(norm, bins=100, density=True)
plt.plot(bins[1:], hist)
```
This is barely scratching the surface of the `numpy` package, but should be enough to get you started. The [Quickstart tutorial](https://docs.scipy.org/doc/numpy-1.15.0/user/quickstart.html) is a great resource for more of the basics and some more advanced usage. Finally, don't forget to use the most powerful tool at our disposal: *Google*. Most programmers only have the most common syntax memorized; everything else can be found with Google!
Next we will further explore the `matplotlib` package that we briefly introduced above!
|
github_jupyter
|
# Ray RLlib - Sample Application: CartPole
© 2019-2021, Anyscale. All Rights Reserved

We were briefly introduced to the `CartPole` example and the OpenAI gym `CartPole-v1` environment ([gym.openai.com/envs/CartPole-v1/](https://gym.openai.com/envs/CartPole-v1/)) in the [reinforcement learning introduction](../01-Introduction-to-Reinforcement-Learning.ipynb). This lesson uses [RLlib](https://ray.readthedocs.io/en/latest/rllib.html) to train a policy for `CartPole`.
Recall that the `gym` Python module provides MDP interfaces to a variety of simulators, like the simple simulator for the physics of balancing a pole on a cart that is used by the CartPole environment. The `CartPole` problem is described at https://gym.openai.com/envs/CartPole-v1.

([source](https://gym.openai.com/envs/CartPole-v1/))
Even though this is a relatively simple and quick example to run, its results can be understood quite visually. `CartPole` is one of OpenAI Gym's ["classic control"](https://gym.openai.com/envs/#classic_control) examples.
For more background about this problem, see:
* ["Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problem"](https://ieeexplore.ieee.org/document/6313077), AG Barto, RS Sutton, and CW Anderson, *IEEE Transactions on Systems, Man, and Cybernetics* (1983). The same Sutton and Barto who wrote [*Reinforcement Learning: An Introduction*](https://mitpress.mit.edu/books/reinforcement-learning-second-edition).
* ["Cartpole - Introduction to Reinforcement Learning (DQN - Deep Q-Learning)"](https://towardsdatascience.com/cartpole-introduction-to-reinforcement-learning-ed0eb5b58288), [Greg Surma](https://twitter.com/GSurma).
First, import Ray and the PPO module in RLlib, then start Ray.
```
import ray
import ray.rllib.agents.ppo as ppo
import pandas as pd
import json
import os
import shutil
import sys
```
Model *checkpoints* will get saved after each iteration into directories under `tmp/ppo/cart`, i.e., relative to this directory.
The default directories for checkpoints are `$HOME/ray_results/<algo_env>/.../checkpoint_N`.
> **Note:** If you prefer to use a different directory root, change it in the next cell _and_ in the `rllib rollout` command below.
```
checkpoint_root = "tmp/ppo/cart"
```
Clean up output of previous lessons (optional):
```
# Where checkpoints are written:
shutil.rmtree(checkpoint_root, ignore_errors=True, onerror=None)
# Where some data will be written and used by Tensorboard below:
ray_results = f'{os.getenv("HOME")}/ray_results/'
shutil.rmtree(ray_results, ignore_errors=True, onerror=None)
```
Start Ray:
```
info = ray.init(ignore_reinit_error=True)
```
The Ray Dashboard is useful for monitoring Ray:
```
print("Dashboard URL: http://{}".format(info["webui_url"]))
```
Next we'll train an RLlib policy with the [`CartPole-v1` environment](https://gym.openai.com/envs/CartPole-v1/).
If you've gone through the _Multi-Armed Bandits_ lessons, you may recall that we used [Ray Tune](http://tune.io), the Ray Hyperparameter Tuning system, to drive training. Here we'll do it ourselves.
By default, training runs for `10` iterations. Increase the `N_ITER` setting if you want to train longer and see the resulting rewards improve. However, if the max score of `500` for `CartPole-v1` is achieved early, you can use a smaller number of iterations.
- `num_workers` is the number of actors that the agent will create. This determines the degree of parallelism that will be used. In a cluster, these actors will be spread over the available nodes.
- `num_sgd_iter` is the number of epochs of SGD (stochastic gradient descent, i.e., passes through the data) that will be used to optimize the PPO surrogate objective at each iteration of PPO, for each _minibatch_ ("chunk") of training data. Using minibatches is more efficient than training with one record at a time.
- `sgd_minibatch_size` is the SGD minibatch size (batches of data) that will be used to optimize the PPO surrogate objective.
- `model` contains a dictionary of parameters describing the neural net used to parameterize the policy. The `fcnet_hiddens` parameter is a list of the sizes of the hidden layers. Here, we have two hidden layers of size 100, each.
- `num_cpus_per_worker`, when set to 0, prevents Ray from pinning a CPU core to each worker; otherwise we could run out of CPUs for the workers in a constrained environment like a laptop or a cloud VM.
> **Note:** If you change the values shown for `config['model']['fcnet_hiddens']`, make the same change in the `rllib rollout` command below!
```
SELECT_ENV = "CartPole-v1" # Specifies the OpenAI Gym environment for Cart Pole
N_ITER = 10 # Number of training runs.
config = ppo.DEFAULT_CONFIG.copy() # PPO's default configuration. See the next code cell.
config["log_level"] = "WARN" # Suppress too many messages, but try "INFO" to see what can be printed.
# Other settings we might adjust:
config["num_workers"] = 1 # Use > 1 for using more CPU cores, including over a cluster
config["num_sgd_iter"] = 10 # Number of SGD (stochastic gradient descent) iterations per training minibatch.
# I.e., for each minibatch of data, do this many passes over it to train.
config["sgd_minibatch_size"] = 250 # The amount of data records per minibatch
config["model"]["fcnet_hiddens"] = [100, 50] #
config["num_cpus_per_worker"] = 0 # This avoids running out of resources in the notebook environment when this cell is re-executed
```
Out of curiosity, let's see what configuration settings are defined for PPO. Note in particular the parameters for the deep learning `model`:
```
ppo.DEFAULT_CONFIG
agent = ppo.PPOTrainer(config, env=SELECT_ENV)
results = []
episode_data = []
episode_json = []
for n in range(N_ITER):
result = agent.train()
results.append(result)
episode = {
"n": n,
"episode_reward_min": result["episode_reward_min"],
"episode_reward_mean": result["episode_reward_mean"],
"episode_reward_max": result["episode_reward_max"],
"episode_len_mean": result["episode_len_mean"],
}
episode_data.append(episode)
episode_json.append(json.dumps(episode))
file_name = agent.save(checkpoint_root)
print(f'{n:3d}: Min/Mean/Max reward: {result["episode_reward_min"]:8.4f}/{result["episode_reward_mean"]:8.4f}/{result["episode_reward_max"]:8.4f}. Checkpoint saved to {file_name}')
```
The episode rewards should increase after multiple iterations. Try tweaking the config parameters. Smaller values for the `num_sgd_iter`, `sgd_minibatch_size`, or the `model`'s `fcnet_hiddens` will train faster, but take longer to improve the policy.
```
df = pd.DataFrame(data=episode_data)
df
df.plot(x="n", y=["episode_reward_mean", "episode_reward_min", "episode_reward_max"], secondary_y=True)
```
Also, print out the policy and model to see the results of training in detail…
```
import pprint
policy = agent.get_policy()
model = policy.model
pprint.pprint(model.variables())
pprint.pprint(model.value_function())
print(model.base_model.summary())
```
## Rollout
Next we'll use the [RLlib rollout CLI](https://ray.readthedocs.io/en/latest/rllib-training.html#evaluating-trained-policies), to evaluate the trained policy.
This visualizes the `CartPole` agent operating within the simulation: moving the cart left or right to avoid having the pole fall over.
We'll use the last saved checkpoint, `checkpoint_10` (or whatever you set for `N_ITER` above) for the rollout, evaluated through `2000` steps.
> **Notes:**
>
> 1. If you changed `checkpoint_root` above to be different than `tmp/ppo/cart`, then change it here, too. Note that due to bugs in variable substitution in Jupyter notebooks, we can't use variables in the next cell, unfortunately.
> 2. If you changed the model parameters, specifically the `fcnet_hiddens` array in the `config` object above, make the same change here.
You may need to make one more modification, depending on how you are running this tutorial:
1. Running on your laptop? - Remove the line `--no-render`.
2. Running on the Anyscale Service? The popup windows that would normally be created by the rollout can't be viewed in this case. Hence, the `--no-render` flag suppresses them. The code cell afterwards provides a sample video. You can try adding `--video-dir tmp/ppo/cart`, which will generate MP4 videos, then download them to view them. Or copy the `Video` cell below and use it to view the movies.
```
!rllib rollout tmp/ppo/cart/checkpoint_10/checkpoint-10 \
--config "{\"env\": \"CartPole-v1\", \"model\": {\"fcnet_hiddens\": [100, 50]}}" \
--run PPO \
--no-render \
--steps 2000
```
Here is a sample episode.
> **Note:** This video was created by running the previous `rllib rollout` command with the argument `--video-dir some_directory`. It creates one video per episode.
```
from IPython.display import Video
cart_pole_sample_video = "../images/rllib/Cart-Pole-Example-Video.mp4"
Video(cart_pole_sample_video)
```
Finally, launch [TensorBoard](https://ray.readthedocs.io/en/latest/rllib-training.html#getting-started). Select the Cart Pole runs and visualize the key metrics from training with RLlib.
```shell
tensorboard --logdir=$HOME/ray_results
```
```
ray.shutdown()
```
|
github_jupyter
|
```
import tensorflow as tf
tf.config.experimental.list_physical_devices()
tf.test.is_built_with_cuda()
```
# Importing Libraries
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import os.path as op
import pickle
import tensorflow as tf
from tensorflow import keras
from keras.models import Model,Sequential,load_model
from keras.layers import Input, Embedding
from keras.layers import Dense, Bidirectional
from keras.layers.recurrent import LSTM
import keras.metrics as metrics
import itertools
from tensorflow.python.keras.utils.data_utils import Sequence
from decimal import Decimal
from keras import backend as K
from keras.layers import Conv1D,MaxPooling1D,Flatten,Dense
```
# Data Fetching
```
A1=np.empty((0,5),dtype='float32')
U1=np.empty((0,7),dtype='float32')
node=['150','149','147','144','142','140','136','61']
mon=['Apr','Mar','Aug','Jun','Jul','Sep','May','Oct']
for j in node:
for i in mon:
inp= pd.read_csv('data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[1,2,3,15,16])
out= pd.read_csv('data_gkv/AT510_Node_'+str(j)+'_'+str(i)+'19_OutputFile.csv',usecols=[5,6,7,8,17,18,19])
inp=np.array(inp,dtype='float32')
out=np.array(out,dtype='float32')
A1=np.append(A1, inp, axis=0)
U1=np.append(U1, out, axis=0)
print(A1)
print(U1)
```
# Min Max Scaler
```
from sklearn.preprocessing import MinMaxScaler
import warnings
scaler_obj=MinMaxScaler()
X1=scaler_obj.fit_transform(A1)
Y1=scaler_obj.fit_transform(U1)
warnings.filterwarnings(action='ignore', category=UserWarning)
X1=X1[:,np.newaxis,:]
Y1=Y1[:,np.newaxis,:]
def rmse(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
def coeff_determination(y_true, y_pred):
SS_res = K.sum(K.square( y_true-y_pred ))
SS_tot = K.sum(K.square( y_true - K.mean(y_true) ) )
return ( 1 - SS_res/(SS_tot + K.epsilon()) )
```
# Model
```
model1 = Sequential()
model1.add(keras.Input(shape=(1,5)))
model1.add(tf.keras.layers.LSTM(7,activation="tanh",use_bias=True,kernel_initializer="glorot_uniform",bias_initializer="zeros"))
model1.add(Dense(7))
model1.add(keras.layers.BatchNormalization(axis=-1,momentum=0.99,epsilon=0.001,center=True,scale=True,
beta_initializer="zeros",gamma_initializer="ones",
moving_mean_initializer="zeros",moving_variance_initializer="ones",trainable=True))
model1.add(keras.layers.ReLU())
model1.compile(optimizer=keras.optimizers.Adam(learning_rate=1e-5), loss='binary_crossentropy',metrics=['accuracy','mse','mae',rmse])
model1.summary()
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
model_fit8 = model1.fit(x_train,y_train,batch_size=256,epochs=50, validation_split=0.1)
model1.evaluate(x_test,y_test)
model1.evaluate(x_train,y_train)
```
# Saving Model as File
```
model_json = model1.to_json()
with open("Model_File/lstm_tanh.json", "w") as json_file:
json_file.write(model_json)
# serialize weights to HDF5
model1.save_weights("Model_File/lstm_tanh.h5")
print("Saved model to disk")
from keras.models import model_from_json
json_file = open('Model_File/lstm_tanh.json', 'r')
loaded_model_json = json_file.read()
json_file.close()
loaded_model = model_from_json(loaded_model_json)
# load weights into new model
loaded_model.load_weights("Model_File/lstm_tanh.h5")
print("Loaded model from disk")
loaded_model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001), loss='binary_crossentropy',metrics=['accuracy','mse','mae',rmse])
```
# Error Analysis
```
# summarize history for loss
plt.plot(model_fit8.history['loss'])
plt.plot(model_fit8.history['val_loss'])
plt.title('Model Loss',fontweight ='bold',fontsize = 15)
plt.ylabel('Loss',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
# summarize history for accuracy
plt.plot(model_fit8.history['accuracy'])
plt.plot(model_fit8.history['val_accuracy'])
plt.title('Model accuracy',fontweight ='bold',fontsize = 15)
plt.ylabel('Accuracy',fontweight ='bold',fontsize = 15)
plt.xlabel('Epoch',fontweight ='bold',fontsize = 15)
plt.legend(['Train', 'Test'], loc='upper left')
plt.show()
#Creating csv file of prediction
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(X1, Y1, test_size=0.25, random_state=42)
y_test_pred=loaded_model.predict(x_test)
y_test_pred
y_test
y_test=y_test[:,0]
from numpy import savetxt
savetxt('ARRAY_DATA/lstm_y_test_pred.csv', y_test_pred[:1001], delimiter=',')
from numpy import savetxt
savetxt('ARRAY_DATA/lstm_y_test.csv', y_test[:1001], delimiter=',')
#completed
```
|
github_jupyter
|
# Paper Trends
## Imports
```
%load_ext autoreload
%autoreload 2
%aimport
%matplotlib inline
import os
import sys
nb_dir = os.path.dirname(os.path.split(os.getcwd())[0])
if nb_dir not in sys.path:
sys.path.append(nb_dir)
from tqdm import tqdm_notebook as tqdm
import pandas as pd
from turicreate import SFrame, load_sframe
from pathlib import Path
import turicreate.aggregate as agg
import numpy as np
import json
import os
import matplotlib.pyplot as plt
import pandas as pd
import math
import glob
import ntpath
from tqdm import tqdm
import seaborn as sns
from matplotlib.ticker import FuncFormatter
import datetime
from matplotlib.backends.backend_pdf import PdfPages
import seaborn as sns
```
## Utility Functions
```
def convert_to_barchart_format(sf,x, year_column="Year", count_column="count", year_range=(1786,2019)):
year_sf = SFrame()
year_sf[year_column] = np.linspace(year_range[0],year_range[1],year_range[1]-year_range[0]+1).tolist()
year_sf[year_column] = year_sf[year_column]
sf[year_column] = sf[year_column].astype(float)
res_sf = SFrame()
for d in tqdm(sf[x].unique()):
temp_sf = SFrame()
temp_sf[x] = [d]*len(year_sf)
temp_sf[year_column] = year_sf[year_column]
res_sf = res_sf.append(temp_sf)
sf = sf.join(res_sf,how="right").sort(year_column)
sf = sf.fillna(count_column,0)
df = sf.to_dataframe()
df = df.sort_values([x,year_column])
df['value'] = df.groupby([x])[count_column].cumsum()
df["lastValue"] = df.groupby([x])["value"].shift(1)
df = df.fillna(0)
df["rank"] =df.groupby([year_column])["value"].rank(ascending=False)
return df.rename(columns={x:"name", year_column: "year",count_column:"count"})[["year","name","value","lastValue","rank"]]
def chunks(l, n):
# For item i in a range that is a length of l,
for i in range(0, len(l), n):
# Create an index range for l of n items:
yield l[i:i + n]
def get_d(sf_corr, diseases_id):
for data in sf_corr.groupby("id"):
if len(data[1]) >5:
yield f"{data[0]}: {diseases_id[diseases_id['id']==data[0]][0]['Disease'].title()}", data[1].sort_values("year")
sns.set(style="ticks")
def create_gird(df, col, hue,x,y,sharey=True, legend=False):
# Initialize a grid of plots with an Axes for each walk
grid = sns.FacetGrid(df, col=col, hue=hue, palette=sns.color_palette("hls", 4),sharey=sharey,
col_wrap=3, height=4.5)
plt.gca().xaxis.set_major_formatter(FuncFormatter(lambda x, _: int(x)))
# Draw a horizontal line to show the starting point
grid.map(plt.axhline, y=0, ls=":", c=".5")
# Draw a line plot to show the trajectory of each random walk
grid.map(plt.plot, x, y)
grid.set_titles("{col_name}")
if legend:
grid.add_legend()
# Adjust the arrangement of the plots
grid.fig.tight_layout(w_pad=1)
return grid
```
## Analysis
```
spothlight = ["SARS","MERS Coronavirus", "Avian Influenza","Ebola", "Influenza", "HIV/AIDS","Hepatitis B","Hepatitis C", "Swine Flu"]
years = [2002,2012,1878,1976,1878,1981,1966,1987,1918 ]
min_refs = 5
```
### Data Loading
```
diseases_id= load_sframe("Data/diseases_id.csv")
disease_names = SFrame.read_csv("Data/disease_names.csv")
```
General MAG Medicine Publications:
```
med_mag = load_sframe("Data/mag/med_mag.sframe")
len(med_mag)
```
MAG Medicine Publications about the specific diseases:
```
diseases_mag = load_sframe("Data/mag/diseases_med_mag.sframe")
```
General MAG Virology Publications:
```
len(diseases_mag)
viro_mag = load_sframe("Data/mag/viro_mag.sframe")
```
MAG Virology Publications about the specific diseases"
```
len(viro_mag)
diseases_viro_mag = load_sframe("Data/mag/diseases_viro_mag.sframe")
len(diseases_viro_mag)
```
### Number of papers by diseases from 2001
```
diseases = diseases_mag[(diseases_mag["Year"]>2001)&(diseases_mag["Ref Number"]>min_refs)]
diseases = diseases.filter_by(spothlight, "disease")["disease"].value_counts()
diseases = diseases.rename({"value":"Disease", "count": "Numer of Papers"})
plt.figure(figsize=(20,10))
sns.set()
colors = ["#4374B3", "#4374B3"]
# Set your custom color palette
sns.set_palette(sns.color_palette(colors))
ax = sns.barplot(x="Disease", y="Numer of Papers", data=diseases.to_dataframe(), color="#4374B3")
ax.set_xticklabels(ax.get_xticklabels(),rotation=90)
plt.tight_layout()
plt.savefig("output/Papers/disease_count.svg")
```
We filter out all publications that are not academic papers (editorials, letters, etc.).
These types of publications rarely cite other papers, so filtering on the number of references removes them from the dataset.
```
med_mag = med_mag[med_mag["Ref Number"]>min_refs]
viro_mag = viro_mag[viro_mag["Ref Number"]>min_refs]
diseases_mag = diseases_mag[diseases_mag["Ref Number"]>min_refs].filter_by(spothlight, "disease")
diseases_viro_mag = diseases_viro_mag[diseases_viro_mag["Ref Number"]>min_refs].filter_by(spothlight, "disease")
```
### Publications - Citation
#### NPR
Publication data normalization
```
def nomalize_disease_publications(diseases_sf, general_sf):
diseases_pub_count = diseases_sf.groupby(["disease","Year"], {"Number of papers": agg.COUNT()})
papers_year = general_sf.groupby("Year", {"Total Number of papers": agg.COUNT()})
diseases_pub_count = diseases_pub_count.join(papers_year,{"Year":"Year"})
diseases_pub_count["NPR"] = diseases_pub_count["Number of papers"] / diseases_pub_count["Total Number of papers"]
diseases_pub_count = diseases_pub_count.rename({"disease":"Disease"})
return diseases_pub_count.sort(["Disease","Year"])
diseases_pub_count_viro = nomalize_disease_publications(diseases_viro_mag, viro_mag)
diseases_pub_count_med = nomalize_disease_publications(diseases_mag, med_mag)
diseases_pub_count_viro["Type"] = "Virolgy"
diseases_pub_count_med["Type"] = "Medicine"
diseases_pub_count = diseases_pub_count_viro.append(diseases_pub_count_med)
def chunks(l, n):
# For item i in a range that is a length of l,
for i in range(0, len(l), n):
# Create an index range for l of n items:
yield l[i:i + n]
def get_data(sf_corr):
for data in sf_corr.groupby("Disease"):
if len(data[1]) >5:
yield data[1].sort_values("Year")
```
Filter the data:
```
pub = SFrame()
for d,y in zip(spothlight, years):
pub = pub.append( diseases_pub_count[(diseases_pub_count["Disease"]==d)&(diseases_pub_count["Year"]>=y)])
pub["Normalized Paper Rate"] = pub["NPR"]
```
Generate SVG
```
sns.set(font_scale=1.3)
plt.rc('text', usetex=False)
plt.figure(figsize=(16, 12))
des = list(get_data(pub[(pub["Year"]>=1980)&(pub["Type"]== "Virolgy")].to_dataframe()))
for i, curr_f in enumerate(tqdm(chunks(des, 20), total=((len(des) // 20)+1))):
create_gird(pd.concat(curr_f),"Disease","Type","Year", "Normalized Paper Rate",False,False)
plt.savefig(f"output/Papers/Virolgy_NPR_{i}.svg")
# plt.close()
sns.set(font_scale=1.3)
plt.rc('text', usetex=False)
plt.figure(figsize=(16, 12))
des = list(get_data(pub[(pub["Year"]>=1980)&(pub["Type"]== "Medicine")].to_dataframe()))
for i, curr_f in enumerate(tqdm(chunks(des, 20), total=((len(des) // 20)+1))):
create_gird(pd.concat(curr_f),"Disease","Type","Year", "Normalized Paper Rate",False,False)
plt.savefig(f"output/Papers/Medicine_NPR_{i}.svg")
# plt.close()
```
Generate multi-page PDF
```
sns.set(font_scale=1.3)
# Create the PdfPages object to which we will save the pages:
# The with statement makes sure that the PdfPages object is closed properly at
# the end of the block, even if an Exception occurs.
with PdfPages('output/Papers/Medicine_NPR.pdf') as pdf:
# if LaTeX is not installed or error caught, change to `usetex=False`
plt.rc('text', usetex=False)
plt.figure(figsize=(8, 6))
des = list(get_data(pub[(pub["Year"]>=1980)&(pub["Type"]== "Medicine")].to_dataframe()))
for i, curr_f in enumerate(tqdm(chunks(des, 20), total=((len(des) // 20)+1))):
create_gird(pd.concat(curr_f),"Disease","Type","Year", "Normalized Paper Rate",False,False)
pdf.savefig()
plt.close()
pub["Normalized Paper Rate"] = np.log(pub["NPR"])
import plotly.express as px
fig = px.line(pub[(pub["Type"]=="Virolgy")&(pub["Year"]>1959)].to_dataframe(), x="Year", y="Normalized Paper Rate",color="Disease", width=1600, height=800)
fig.update_layout({"legend":{"x":0,"y":1.1}, "legend_orientation":"h"}, font=dict(
size=20,
))
fig.show()
# import plotly.io as pio
# pio.orca.config.server_url = "http://localhost:9091"
# fig.write_image("output/Papers/disease-npr.svg")
```
Plot Similarity Using DTW
```
data = pub[(pub["Year"]>=1980)&(pub["Type"]== "Virolgy")&(pub["Year"]<2019)][["Disease","Year","NPR"]].to_dataframe()
data = data.sort_values(["Disease","Year"])
from tslearn.metrics import dtw
res= {"Disease1":[], "Disease2":[], "dtw":[]}
for d1, df1 in data.groupby("Disease"):
for d2, df2 in data.groupby("Disease"):
res["Disease1"].append(d1)
res["Disease2"].append(d2)
disease1 = df1["NPR"].values
disease2 = df2["NPR"].values
res["dtw"].append(dtw(disease1, disease2))
piv_data = []
for d, df in data.groupby("Disease"):
piv_data.append(df["NPR"].values)
sns.set(font_scale=2.0)
corr = pd.DataFrame(res).pivot(index='Disease1', columns='Disease2', values='dtw')
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
plt.figure(figsize=(40,20))
ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True, annot=True, fmt='0.3f', cmap=sns.light_palette("#cc0000" , reverse=True, as_cmap=True))
plt.savefig("output/Papers/dtw_npr.svg")
from tslearn.utils import to_time_series_dataset
from tslearn.clustering import TimeSeriesKMeans
km = TimeSeriesKMeans(n_clusters=2, metric="dtw", max_iter=10, tol=1e-5).fit(to_time_series_dataset(piv_data))
from collections import defaultdict
clusters = defaultdict(lambda: [])
for d, c in zip(corr.index, km.labels_):
clusters[c].append(d)
clusters
```
#### NCR
```
# Calculate the number of citations for each disease per year.
def diseses_citations_year(publication_sf):
disease_citations = publication_sf.stack("Dict of Year_Citation Number",new_column_name=["cite year", "Citations"], drop_na=True)
disease_citations = disease_citations.groupby(["disease","cite year"], {"Citations": agg.SUM("Citations")})
disease_citations["cite year"] = disease_citations["cite year"].astype(int)
return disease_citations.rename({"cite year": "year"})
disease_citations_viro = diseses_citations_year(diseases_viro_mag)
disease_citations_med = diseses_citations_year(diseases_mag)
# The total number of citations for a year, used to normalize the data.
def citaion_year_mag(publication_sf):
med_citations = publication_sf.stack("Dict of Year_Citation Number",new_column_name=["cite year", "Citations"], drop_na=True)
med_citations = med_citations.rename({"cite year": "year"})
return med_citations.groupby(["year"], operations={"Total Citations": agg.SUM("Citations")})
citations_year_viro = citaion_year_mag(viro_mag)
citations_year_med = citaion_year_mag(med_mag)
citations_year_med["year"] = citations_year_med["year"].astype(int)
citations_year_med.sort("Total Citations",False)
```
Medicine citations over time
```
citations_year_med.to_dataframe().sort_values("year").plot(x="year", y="Total Citations")
```
Citation data normalization
```
def norm_disease_citations(disease_citations, citations_year):
disease_citations = disease_citations.join(citations_year, on="year")
disease_citations["Citations Norm"] = disease_citations["Citations"]/disease_citations["Total Citations"]
return disease_citations.join(disease_names)
disease_citations_med = norm_disease_citations(disease_citations_med, citations_year_med)
disease_citations_viro = norm_disease_citations(disease_citations_viro, citations_year_viro)
def clean_disease_citations(disease_citations):
disease_citations = disease_citations.rename({"year":"Year","Citations Norm":"NCR", "disease": "Disease"})
disease_citations = disease_citations.join(disease_names, {"id":"id"})
disease_citations = disease_citations.sort(["Disease", "Year"])
disease_citations = disease_citations.to_dataframe()
disease_citations = disease_citations[disease_citations["Year"].notna()]
disease_citations = disease_citations[disease_citations["Year"]<2019]
return disease_citations.reset_index()
disease_citations_med = clean_disease_citations(disease_citations_med)
disease_citations_viro = clean_disease_citations(disease_citations_viro)
disease_citations_med["Type"] = "Medicine"
disease_citations_viro["Type"] = "Virology"
disease_citations = disease_citations_med.append(disease_citations_viro)
cite = pd.DataFrame()
for d,y in zip(spothlight, years):
cite = cite.append( disease_citations[(disease_citations["Disease"]==d)&(disease_citations["Year"]>=y)])
cite["Normalized Citaion Rate"] = cite["NCR"]
cite = cite.rename(columns={"Normalized Citaion Rate":"Normalized Citation Rate"})
sns.set(font_scale=1.3)
# sns.set(style="ticks")
plt.rc('text', usetex=False)
plt.figure(figsize=(8, 6))
des = list(get_data(cite[(cite["Year"]>=1980)&(cite["Type"]== "Medicine")]))
for i, curr_f in enumerate(tqdm(chunks(des, 20), total=((len(des) // 20)+1))):
create_gird(pd.concat(curr_f),"Disease","Type","Year", "Normalized Citation Rate", False, legend=False)
plt.savefig(f"output/Papers/Medicine_NCR_{i}.svg")
# plt.close()
sns.set(font_scale=1.3)
plt.rc('text', usetex=False)
plt.figure(figsize=(8, 6))
des = list(get_data(cite[(cite["Year"]>=1980)&(cite["Type"]== "Virology")]))
for i, curr_f in enumerate(tqdm(chunks(des, 20), total=((len(des) // 20)+1))):
create_gird(pd.concat(curr_f),"Disease","Type","Year", "Normalized Citation Rate", False, legend=False)
plt.savefig(f"output/Papers/Virolgy_NCR_{i}.svg")
# plt.close()
np.log(10)
10 ** np.log(6)
cite["Normalized Citation Rate"] = np.log(cite["NCR"])
import plotly.express as px
fig = px.line(cite, x="Year", y="Normalized Citaion Rate",color="Disease", width=1600, height=800)
fig.show()
data = cite[(cite["Year"]>=1980)&(cite["Type"]== "Virology")&(cite["Year"]<2019)][["Disease","Year","NCR"]]
data = data.sort_values(["Disease","Year"])
from tslearn.metrics import dtw
res= {"Disease1":[], "Disease2":[], "dtw":[]}
for d1, df1 in data.groupby("Disease"):
for d2, df2 in data.groupby("Disease"):
res["Disease1"].append(d1)
res["Disease2"].append(d2)
disease1 = df1["NCR"].values
disease2 = df2["NCR"].values
res["dtw"].append(dtw(disease1, disease2))
piv_data = []
for d, df in data.groupby("Disease"):
piv_data.append(df["NCR"].values)
sns.set( font_scale=2.0)
corr = pd.DataFrame(res).pivot(index='Disease1', columns='Disease2', values='dtw')
mask = np.zeros_like(corr)
mask[np.triu_indices_from(mask)] = True
with sns.axes_style("white"):
plt.figure(figsize=(40,20))
ax = sns.heatmap(corr, mask=mask, vmax=.3, square=True, annot=True, fmt='0.3f', cmap=sns.light_palette("#cc0000" , reverse=True, as_cmap=True))
plt.savefig("output/Papers/dtw-ncr.svg")
from tslearn.generators import random_walks
from tslearn.clustering import TimeSeriesKMeans
# X = random_walks(n_ts=50, sz=32, d=1)
km = TimeSeriesKMeans(n_clusters=2, metric="dtw", max_iter=10, tol=1e-5).fit(to_time_series_dataset(piv_data))
from collections import defaultdict
clusters = defaultdict(lambda: [])
for d, c in zip(corr.index, km.labels_):
clusters[c].append(d)
clusters
```
### Data and Code in research
```
from ScienceDynamics.datasets.microsoft_academic_graph import MicrosoftAcademicGraph
from ScienceDynamics.config.configs import DATASETS_BASE_DIR
mag = MicrosoftAcademicGraph(DATASETS_BASE_DIR)
resources = diseases_mag.join(mag.paper_resources, on="PaperId")
```
ResourceType. 1 = Project, 2 = Data, 4 = Code
```
resources[resources["ResourceType"]==2]["disease"].value_counts()
len(resources[resources["ResourceType"]==2]["disease"])
len(resources[resources["ResourceType"]==4]["disease"])
resources[resources["ResourceType"]==4]["disease"].value_counts()
resources[resources["ResourceType"]==1]["disease"].value_counts()
```
## Data Fusion
```
diseases_pubmed = load_sframe("Data/pubmed/diseases_pubmed.sframe")
pubmed_papers_year = diseases_pubmed.groupby("year",{"PubMed":agg.COUNT()})
mag_papers_year = diseases_mag.groupby("Year",{"MAG":agg.COUNT()})
pubmed = load_sframe("Data/pubmed/pubmed.sframe")
pubmed_papers_year = pubmed.groupby("year",{"PubMed":agg.COUNT()})
mag_papers_year = med_mag.groupby("Year",{"MAG":agg.COUNT()})
df = pubmed_papers_year.join(mag_papers_year,{"year":"Year"}).sort("year")
df =df.rename({"year":"Year"})
df2 = df.pack_columns(column_names=["MAG","PubMed"], dtype=dict, new_column_name='Papers').stack("Papers", new_column_name=['Dataset', 'Total Papers'])
import plotly.express as px
fig = px.line(df2[df2["Year"]<2016].to_dataframe(), x="Year", y="Total Papers",color="Dataset", width=1600, height=800)
fig.update_layout({"legend":{"x":0,"y":1.1}, "legend_orientation":"h"}, font=dict(
size=20,
))
fig.show()
# fig.write_image("output/Papers/Total Papers.svg")
```
|
github_jupyter
|
# Residual Networks
Welcome to the second assignment of this week! You will learn how to build very deep convolutional networks, using Residual Networks (ResNets). In theory, very deep networks can represent very complex functions; but in practice, they are hard to train. Residual Networks, introduced by [He et al.](https://arxiv.org/pdf/1512.03385.pdf), allow you to train much deeper networks than were previously practically feasible.
**In this assignment, you will:**
- Implement the basic building blocks of ResNets.
- Put together these building blocks to implement and train a state-of-the-art neural network for image classification.
This assignment will be done in Keras.
Before jumping into the problem, let's run the cell below to load the required packages.
```
import numpy as np
import tensorflow as tf
from keras import layers
from keras.layers import Input, Add, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D, AveragePooling2D, MaxPooling2D, GlobalMaxPooling2D
from keras.models import Model, load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from resnets_utils import *
from keras.initializers import glorot_uniform
import scipy.misc
from matplotlib.pyplot import imshow
%matplotlib inline
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)
```
## 1 - The problem of very deep neural networks
Last week, you built your first convolutional neural network. In recent years, neural networks have become deeper, with state-of-the-art networks going from just a few layers (e.g., AlexNet) to over a hundred layers.
The main benefit of a very deep network is that it can represent very complex functions. It can also learn features at many different levels of abstraction, from edges (at the lower layers) to very complex features (at the deeper layers). However, using a deeper network doesn't always help. A huge barrier to training them is vanishing gradients: very deep networks often have a gradient signal that goes to zero quickly, thus making gradient descent unbearably slow. More specifically, during gradient descent, as you backprop from the final layer back to the first layer, you are multiplying by the weight matrix on each step, and thus the gradient can decrease exponentially quickly to zero (or, in rare cases, grow exponentially quickly and "explode" to take very large values).
During training, you might therefore see the magnitude (or norm) of the gradient for the earlier layers decrease to zero very rapidly as training proceeds:
<img src="images/vanishing_grad_kiank.png" style="width:450px;height:220px;">
<caption><center> <u> <font color='purple'> **Figure 1** </u><font color='purple'> : **Vanishing gradient** <br> The speed of learning decreases very rapidly for the early layers as the network trains </center></caption>
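To see the effect numerically, here is a small toy sketch (not part of the graded assignment) of a gradient being multiplied by the same small weight matrix at every step of backprop; its norm decays roughly exponentially with depth:

```
# Toy illustration of vanishing gradients (assumed setup, not assignment code):
# backprop multiplies the gradient by a weight matrix at every layer, so with
# small weights its norm shrinks roughly exponentially with depth.
import numpy as np

np.random.seed(0)
n = 64
W = np.random.randn(n, n) * 0.05   # deliberately small weights
grad = np.random.randn(n)

for layer in range(1, 51):
    grad = W.T.dot(grad)           # one linear-layer step of backprop
    if layer % 10 == 0:
        print("layer %2d: ||grad|| = %.3e" % (layer, np.linalg.norm(grad)))
```

With larger weights the same loop explodes instead of vanishing.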
You are now going to solve this problem by building a Residual Network!
## 2 - Building a Residual Network
In ResNets, a "shortcut" or a "skip connection" allows the gradient to be directly backpropagated to earlier layers:
<img src="images/skip_connection_kiank.png" style="width:650px;height:200px;">
<caption><center> <u> <font color='purple'> **Figure 2** </u><font color='purple'> : A ResNet block showing a **skip-connection** <br> </center></caption>
The image on the left shows the "main path" through the network. The image on the right adds a shortcut to the main path. By stacking these ResNet blocks on top of each other, you can form a very deep network.
We also saw in lecture that having ResNet blocks with the shortcut also makes it very easy for one of the blocks to learn an identity function. This means that you can stack on additional ResNet blocks with little risk of harming training set performance. (There is also some evidence that the ease of learning an identity function--even more than skip connections helping with vanishing gradients--accounts for ResNets' remarkable performance.)
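To see why the skip connection makes the identity easy to learn (a short sketch using the notation above), note that the block outputs

$$a^{[l+2]} = g\left(z^{[l+2]} + a^{[l]}\right) = g\left(W^{[l+2]} a^{[l+1]} + b^{[l+2]} + a^{[l]}\right)$$

so if regularization drives $W^{[l+2]}$ and $b^{[l+2]}$ toward zero, the output reduces to $a^{[l+2]} = g(a^{[l]}) = a^{[l]}$ (because $a^{[l]}$ is already the non-negative output of a ReLU). The block defaults to the identity and only needs to learn a residual correction on top of it.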
Two main types of blocks are used in a ResNet, depending mainly on whether the input/output dimensions are same or different. You are going to implement both of them.
### 2.1 - The identity block
The identity block is the standard block used in ResNets, and corresponds to the case where the input activation (say $a^{[l]}$) has the same dimension as the output activation (say $a^{[l+2]}$). To flesh out the different steps of what happens in a ResNet's identity block, here is an alternative diagram showing the individual steps:
<img src="images/idblock2_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 3** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 2 layers. </center></caption>
The upper path is the "shortcut path." The lower path is the "main path." In this diagram, we have also made explicit the CONV2D and ReLU steps in each layer. To speed up training we have also added a BatchNorm step. Don't worry about this being complicated to implement--you'll see that BatchNorm is just one line of code in Keras!
In this exercise, you'll actually implement a slightly more powerful version of this identity block, in which the skip connection "skips over" 3 hidden layers rather than 2 layers. It looks like this:
<img src="images/idblock3_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Identity block.** Skip connection "skips over" 3 layers.</center></caption>
Here're the individual steps.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2a'`. Use 0 as the seed for the random initialization.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`. Use 0 as the seed for the random initialization.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`. Use 0 as the seed for the random initialization.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Final step:
- The shortcut and the input are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the ResNet identity block. We have implemented the first component of the main path. Please read over this carefully to make sure you understand what it is doing. You should implement the rest.
- To implement the Conv2D step: [See reference](https://keras.io/layers/convolutional/#conv2d)
- To implement BatchNorm: [See reference](https://faroit.github.io/keras-docs/1.2.2/layers/normalization/) (axis: Integer, the axis that should be normalized (typically the channels axis))
- For the activation, use: `Activation('relu')(X)`
- To add the value passed forward by the shortcut: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: identity_block
def identity_block(X, f, filters, stage, block):
"""
Implementation of the identity block as defined in Figure 3
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
Returns:
X -- output of the identity block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value. You'll need this later to add back to the main path.
X_shortcut = X
# First component of main path
X = Conv2D(filters = F1, kernel_size = (1, 1), strides = (1,1), padding = 'valid', name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(filters = F2, kernel_size = (f,f), strides = (1,1), padding= 'same', name = conv_name_base + '2b', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(filters = F3, kernel_size = (1,1), strides = (1,1), padding = 'valid', name = conv_name_base + '2c', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2c')(X)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = identity_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.94822985 0. 1.16101444 2.747859 0. 1.36677003]
</td>
</tr>
</table>
## 2.2 - The convolutional block
You've implemented the ResNet identity block. Next, the ResNet "convolutional block" is the other type of block. You can use this type of block when the input and output dimensions don't match up. The difference with the identity block is that there is a CONV2D layer in the shortcut path:
<img src="images/convblock_kiank.png" style="width:650px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 4** </u><font color='purple'> : **Convolutional block** </center></caption>
The CONV2D layer in the shortcut path is used to resize the input $x$ to a different dimension, so that the dimensions match up in the final addition needed to add the shortcut value back to the main path. (This plays a similar role as the matrix $W_s$ discussed in lecture.) For example, to reduce the activation's height and width by a factor of 2, you can use a 1x1 convolution with a stride of 2. The CONV2D layer on the shortcut path does not use any non-linear activation function. Its main role is to just apply a (learned) linear function that reduces the dimension of the input, so that the dimensions match up for the later addition step.
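For instance, here is a quick shape check (an illustrative sketch, separate from the graded code below) showing that a 1x1 convolution with a stride of 2 halves the height and width while setting the channel count to the number of filters:

```
# Illustrative shape check (not part of the graded functions):
# a 1x1 convolution with stride 2 halves the spatial dimensions.
from keras.layers import Input, Conv2D
from keras.models import Model

x_in = Input(shape=(8, 8, 6))
x_out = Conv2D(filters=3, kernel_size=(1, 1), strides=(2, 2), padding='valid')(x_in)
print(Model(inputs=x_in, outputs=x_out).output_shape)   # (None, 4, 4, 3)
```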
The details of the convolutional block are as follows.
First component of main path:
- The first CONV2D has $F_1$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '2a'`.
- The first BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2a'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Second component of main path:
- The second CONV2D has $F_2$ filters of shape $(f,f)$ and a stride of (1,1). Its padding is "same" and its name should be `conv_name_base + '2b'`.
- The second BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2b'`.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
Third component of main path:
- The third CONV2D has $F_3$ filters of shape (1,1) and a stride of (1,1). Its padding is "valid" and its name should be `conv_name_base + '2c'`.
- The third BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '2c'`. Note that there is no ReLU activation function in this component.
Shortcut path:
- The CONV2D has $F_3$ filters of shape (1,1) and a stride of (s,s). Its padding is "valid" and its name should be `conv_name_base + '1'`.
- The BatchNorm is normalizing the channels axis. Its name should be `bn_name_base + '1'`.
Final step:
- The shortcut and the main path values are added together.
- Then apply the ReLU activation function. This has no name and no hyperparameters.
**Exercise**: Implement the convolutional block. We have implemented the first component of the main path; you should implement the rest. As before, always use 0 as the seed for the random initialization, to ensure consistency with our grader.
- [Conv Hint](https://keras.io/layers/convolutional/#conv2d)
- [BatchNorm Hint](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- For the activation, use: `Activation('relu')(X)`
- [Addition Hint](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: convolutional_block
def convolutional_block(X, f, filters, stage, block, s = 2):
"""
Implementation of the convolutional block as defined in Figure 4
Arguments:
X -- input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
f -- integer, specifying the shape of the middle CONV's window for the main path
filters -- python list of integers, defining the number of filters in the CONV layers of the main path
stage -- integer, used to name the layers, depending on their position in the network
block -- string/character, used to name the layers, depending on their position in the network
s -- Integer, specifying the stride to be used
Returns:
X -- output of the convolutional block, tensor of shape (n_H, n_W, n_C)
"""
# defining name basis
conv_name_base = 'res' + str(stage) + block + '_branch'
bn_name_base = 'bn' + str(stage) + block + '_branch'
# Retrieve Filters
F1, F2, F3 = filters
# Save the input value
X_shortcut = X
##### MAIN PATH #####
# First component of main path
X = Conv2D(F1, (1, 1), strides = (s,s), name = conv_name_base + '2a', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = bn_name_base + '2a')(X)
X = Activation('relu')(X)
### START CODE HERE ###
# Second component of main path (≈3 lines)
X = Conv2D(F2,(f, f),strides=(1, 1),padding='same',name=conv_name_base +'2b',kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3,name=bn_name_base+'2b')(X)
X = Activation('relu')(X)
# Third component of main path (≈2 lines)
X = Conv2D(F3,(1,1),strides=(1,1),padding='valid',name=conv_name_base+'2c',kernel_initializer=glorot_uniform(seed=0))(X)
X = BatchNormalization(axis=3,name=bn_name_base+'2c')(X)
##### SHORTCUT PATH #### (≈2 lines)
X_shortcut = Conv2D(F3,(1,1),strides=(s,s),padding='valid',name=conv_name_base+'1',kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
X_shortcut = BatchNormalization(axis=3,name=bn_name_base+'1')(X_shortcut)
# Final step: Add shortcut value to main path, and pass it through a RELU activation (≈2 lines)
X = Add()([X, X_shortcut])
X = Activation('relu')(X)
### END CODE HERE ###
return X
tf.reset_default_graph()
with tf.Session() as test:
np.random.seed(1)
A_prev = tf.placeholder("float", [3, 4, 4, 6])
X = np.random.randn(3, 4, 4, 6)
A = convolutional_block(A_prev, f = 2, filters = [2, 4, 6], stage = 1, block = 'a')
test.run(tf.global_variables_initializer())
out = test.run([A], feed_dict={A_prev: X, K.learning_phase(): 0})
print("out = " + str(out[0][1][1][0]))
```
**Expected Output**:
<table>
<tr>
<td>
**out**
</td>
<td>
[ 0.09018463 1.23489773 0.46822017 0.0367176 0. 0.65516603]
</td>
</tr>
</table>
## 3 - Building your first ResNet model (50 layers)
You now have the necessary blocks to build a very deep ResNet. The following figure describes in detail the architecture of this neural network. "ID BLOCK" in the diagram stands for "Identity block," and "ID BLOCK x3" means you should stack 3 identity blocks together.
<img src="images/resnet_kiank.png" style="width:850px;height:150px;">
<caption><center> <u> <font color='purple'> **Figure 5** </u><font color='purple'> : **ResNet-50 model** </center></caption>
The details of this ResNet-50 model are:
- Zero-padding pads the input with a pad of (3,3)
- Stage 1:
- The 2D Convolution has 64 filters of shape (7,7) and uses a stride of (2,2). Its name is "conv1".
- BatchNorm is applied to the channels axis of the input.
- MaxPooling uses a (3,3) window and a (2,2) stride.
- Stage 2:
    - The convolutional block uses three sets of filters of size [64,64,256], "f" is 3, "s" is 1 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [64,64,256], "f" is 3 and the blocks are "b" and "c".
- Stage 3:
    - The convolutional block uses three sets of filters of size [128,128,512], "f" is 3, "s" is 2 and the block is "a".
    - The 3 identity blocks use three sets of filters of size [128,128,512], "f" is 3 and the blocks are "b", "c" and "d".
- Stage 4:
    - The convolutional block uses three sets of filters of size [256, 256, 1024], "f" is 3, "s" is 2 and the block is "a".
    - The 5 identity blocks use three sets of filters of size [256, 256, 1024], "f" is 3 and the blocks are "b", "c", "d", "e" and "f".
- Stage 5:
    - The convolutional block uses three sets of filters of size [512, 512, 2048], "f" is 3, "s" is 2 and the block is "a".
    - The 2 identity blocks use three sets of filters of size [512, 512, 2048], "f" is 3 and the blocks are "b" and "c".
- The 2D Average Pooling uses a window of shape (2,2) and its name is "avg_pool".
- The flatten doesn't have any hyperparameters or name.
- The Fully Connected (Dense) layer reduces its input to the number of classes using a softmax activation. Its name should be `'fc' + str(classes)`.
**Exercise**: Implement the ResNet with 50 layers described in the figure above. We have implemented Stages 1 and 2. Please implement the rest. (The syntax for implementing Stages 3-5 should be quite similar to that of Stage 2.) Make sure you follow the naming convention in the text above.
You'll need to use this function:
- Average pooling [see reference](https://keras.io/layers/pooling/#averagepooling2d)
Here're some other functions we used in the code below:
- Conv2D: [See reference](https://keras.io/layers/convolutional/#conv2d)
- BatchNorm: [See reference](https://keras.io/layers/normalization/#batchnormalization) (axis: Integer, the axis that should be normalized (typically the features axis))
- Zero padding: [See reference](https://keras.io/layers/convolutional/#zeropadding2d)
- Max pooling: [See reference](https://keras.io/layers/pooling/#maxpooling2d)
- Fully connected layer: [See reference](https://keras.io/layers/core/#dense)
- Addition: [See reference](https://keras.io/layers/merge/#add)
```
# GRADED FUNCTION: ResNet50
def ResNet50(input_shape = (64, 64, 3), classes = 6):
"""
    Implementation of the popular ResNet50 with the following architecture:
CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> CONVBLOCK -> IDBLOCK*2 -> CONVBLOCK -> IDBLOCK*3
-> CONVBLOCK -> IDBLOCK*5 -> CONVBLOCK -> IDBLOCK*2 -> AVGPOOL -> TOPLAYER
Arguments:
input_shape -- shape of the images of the dataset
classes -- integer, number of classes
Returns:
model -- a Model() instance in Keras
"""
# Define the input as a tensor with shape input_shape
X_input = Input(input_shape)
# Zero-Padding
X = ZeroPadding2D((3, 3))(X_input)
# Stage 1
X = Conv2D(64, (7, 7), strides = (2, 2), name = 'conv1', kernel_initializer = glorot_uniform(seed=0))(X)
X = BatchNormalization(axis = 3, name = 'bn_conv1')(X)
X = Activation('relu')(X)
X = MaxPooling2D((3, 3), strides=(2, 2))(X)
# Stage 2
X = convolutional_block(X, f = 3, filters = [64, 64, 256], stage = 2, block='a', s = 1)
X = identity_block(X, 3, [64, 64, 256], stage=2, block='b')
X = identity_block(X, 3, [64, 64, 256], stage=2, block='c')
### START CODE HERE ###
# Stage 3 (≈4 lines)
X = convolutional_block(X, f=3, filters= [128, 128, 512], stage = 3, block='a', s=2)
X = identity_block(X, f=3, filters= [128, 128, 512], stage = 3, block='b')
X = identity_block(X, 3, [128, 128, 512], stage=3, block='c')
X = identity_block(X, 3, [128, 128, 512], stage =3, block='d')
# Stage 4 (≈6 lines)
X = convolutional_block(X, f=3, filters=[256, 256, 1024], stage=4, block='a', s=2)
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='b')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='c')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='d')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block='e')
X = identity_block(X, 3, [256, 256, 1024], stage=4, block ='f')
# Stage 5 (≈3 lines)
X = convolutional_block(X, f=3, filters=[512, 512, 2048], stage=5, block='a', s=2)
X = identity_block(X, 3, [512, 512, 2048], stage=5, block='b')
X = identity_block(X, 3, [512, 512, 2048], stage =5, block='c')
# AVGPOOL (≈1 line). Use "X = AveragePooling2D(...)(X)"
X = AveragePooling2D((2,2),name='avg_pool')(X)
### END CODE HERE ###
# output layer
X = Flatten()(X)
X = Dense(classes, activation='softmax', name='fc' + str(classes), kernel_initializer = glorot_uniform(seed=0))(X)
# Create model
model = Model(inputs = X_input, outputs = X, name='ResNet50')
return model
```
Run the following code to build the model's graph. If your implementation is not correct you will know it by checking your accuracy when running `model.fit(...)` below.
```
model = ResNet50(input_shape = (64, 64, 3), classes = 6)
```
As seen in the Keras Tutorial Notebook, prior to training a model, you need to configure the learning process by compiling the model.
```
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```
The model is now ready to be trained. The only thing you need is a dataset.
Let's load the SIGNS Dataset.
<img src="images/signs_data_kiank.png" style="width:450px;height:250px;">
<caption><center> <u> <font color='purple'> **Figure 6** </u><font color='purple'> : **SIGNS dataset** </center></caption>
```
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
```
Run the following cell to train your model on 2 epochs with a batch size of 32. On a CPU it should take you around 5min per epoch.
```
model.fit(X_train, Y_train, epochs = 2, batch_size = 32)
```
**Expected Output**:
<table>
<tr>
<td>
** Epoch 1/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, although your results can be different from ours.
</td>
</tr>
<tr>
<td>
** Epoch 2/2**
</td>
<td>
loss: between 1 and 5, acc: between 0.2 and 0.5, you should see your loss decreasing and the accuracy increasing.
</td>
</tr>
</table>
Let's see how this model (trained on only two epochs) performs on the test set.
```
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
**Expected Output**:
<table>
<tr>
<td>
**Test Accuracy**
</td>
<td>
between 0.16 and 0.25
</td>
</tr>
</table>
For the purpose of this assignment, we've asked you to train the model only for two epochs. You can see that it achieves poor performance. Please go ahead and submit your assignment; to check correctness, the online grader will run your code only for a small number of epochs as well.
After you have finished this official (graded) part of this assignment, you can also optionally train the ResNet for more iterations, if you want. We get a lot better performance when we train for ~20 epochs, but this will take more than an hour when training on a CPU.
Using a GPU, we've trained our own ResNet50 model's weights on the SIGNS dataset. You can load and run our trained model on the test set in the cells below. It may take ≈1min to load the model.
```
model = load_model('ResNet50.h5')
preds = model.evaluate(X_test, Y_test)
print ("Loss = " + str(preds[0]))
print ("Test Accuracy = " + str(preds[1]))
```
ResNet50 is a powerful model for image classification when it is trained for an adequate number of iterations. We hope you can use what you've learnt and apply it to your own classification problem to achieve state-of-the-art accuracy.
Congratulations on finishing this assignment! You've now implemented a state-of-the-art image classification system!
## 4 - Test on your own image (Optional/Ungraded)
If you wish, you can also take a picture of your own hand and see the output of the model. To do this:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub.
2. Add your image to this Jupyter Notebook's directory, in the "images" folder
3. Write your image's name in the following code
4. Run the code and check if the algorithm is right!
```
img_path = 'images/my_image.jpg'
img = image.load_img(img_path, target_size=(64, 64))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)
print('Input image shape:', x.shape)
my_image = scipy.misc.imread(img_path)
imshow(my_image)
print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(x))
```
You can also print a summary of your model by running the following code.
```
model.summary()
```
Finally, run the code below to visualize your ResNet50. You can also download a .png picture of your model by going to "File -> Open...-> model.png".
```
plot_model(model, to_file='model.png')
SVG(model_to_dot(model).create(prog='dot', format='svg'))
```
<font color='blue'>
**What you should remember:**
- Very deep "plain" networks don't work in practice because they are hard to train due to vanishing gradients.
- The skip-connections help to address the Vanishing Gradient problem. They also make it easy for a ResNet block to learn an identity function.
- There are two main types of blocks: the identity block and the convolutional block.
- Very deep Residual Networks are built by stacking these blocks together.
### References
This notebook presents the ResNet algorithm from He et al. (2015). The implementation here also takes significant inspiration from, and follows the structure of, Francois Chollet's GitHub repository:
- Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun - [Deep Residual Learning for Image Recognition (2015)](https://arxiv.org/abs/1512.03385)
- Francois Chollet's github repository: https://github.com/fchollet/deep-learning-models/blob/master/resnet50.py
|
github_jupyter
|
# Training Job in Internet-free Mode
If you want to isolate your training data and training container from the rest of the Internet, then you should create the training job in a private subnet. A private subnet is a subnet in your VPC without a route to an Internet Gateway. This means that, by default, no inbound calls to your container from the Internet are possible and your container cannot make outbound calls to the Internet. If you need the training container to access your S3 resources, you need to **explicitly** add a VPC endpoint and attach it to the route table of your private subnet to allow traffic to your S3 bucket.
In this notebook, you will walk through an example of creating such a training job. You will:
- Build a simple training image
- Set up a VPC
- Set up a private subnet in the VPC
- Set up a security group in the VPC
- Create a training job in your private subnet and security group and watch it fail (because it cannot access your S3 resource)
- Add a VPC endpoint to allow traffic to S3
- Create another training job in your private subnet and watch it succeed
If you are not familiar with VPC security configuration, the following materials can help you:
- [Security in Amazon Virtual Private Cloud](https://docs.aws.amazon.com/vpc/latest/userguide/security.html)
- [Training and Inference Containers in Internet-Free Mode](https://docs.aws.amazon.com/sagemaker/latest/dg/mkt-algo-model-internet-free.html)
It's okay if you don't understand everything from the official docs above. The code samples you will see in this notebook will help you grasp those concepts.
```
# import libraries
import boto3
import pprint
import datetime
import time
pp = pprint.PrettyPrinter(indent=1)
```
## Permissions
If you are running this notebook on an EC2 instance with an IAM user (you) as the default profile, then you will need policies that allow you to create a VPC / subnet / security group / VPC endpoint. Likewise, if you are running this notebook on a SageMaker notebook instance or Studio, the service role needs to have those permissions as well.
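The exact permissions depend on how your account is administered. Purely as an illustration (an assumption on our part, not an official policy), the EC2 actions exercised in this notebook could be granted with a policy document along these lines, written here as a Python dict:
```
# Hypothetical, non-exhaustive IAM policy sketch covering the EC2 calls made below.
# Your admin may scope this much more tightly (specific resources, conditions, etc.).
example_vpc_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:CreateVpc",
                "ec2:CreateSubnet",
                "ec2:CreateSecurityGroup",
                "ec2:CreateRouteTable",
                "ec2:AssociateRouteTable",
                "ec2:CreateVpcEndpoint",
                "ec2:CreateTags",
                "ec2:Describe*",
                "ec2:DeleteVpc",
                "ec2:DeleteSubnet",
                "ec2:DeleteSecurityGroup",
                "ec2:DeleteRouteTable",
                "ec2:DeleteVpcEndpoints",
            ],
            "Resource": "*",
        }
    ],
}
```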
## Build a training image
You will follow the same procedure for building a training image as in [this notebook](https://github.com/hsl89/amazon-sagemaker-examples/blob/sagemaker-fundamentals/sagemaker-fundamentals/create-training-job/create_training_job.ipynb). We will refer to this image as `example-image`. Please go through that notebook if you are not familiar with `CreateTrainingJob` API.
```
# create a repo in your ECR
ecr = boto3.client("ecr")
try:
# The repository might already exist
# in your ECR
cr_res = ecr.create_repository(repositoryName="example-image")
pp.pprint(cr_res)
except Exception as e:
print(e)
%%sh
# build the image
cd container/
# tag it as example-image:latest
docker build -t example-image:latest .
# test the container
python local_test/test_container.py
account=$(aws sts get-caller-identity --query Account | sed -e 's/^"//' -e 's/"$//')
region=$(aws configure get region)
ecr_account=${account}.dkr.ecr.${region}.amazonaws.com
# Give docker your ECR login password
aws ecr get-login-password --region $region | docker login --username AWS --password-stdin $ecr_account
# Fullname of the repo
fullname=$ecr_account/example-image:latest
# Tag the image with the fullname
docker tag example-image:latest $fullname
# Push to ECR
docker push $fullname
```
## Create a VPC
You can think of an Amazon VPC as a traditional data-center network, hosted in the cloud.
The following are the key concepts for VPCs:
* Virtual private cloud (VPC) — A virtual network dedicated to your AWS account.
* Subnet — A range of IP addresses in your VPC.
* Route table — A set of rules, called routes, that are used to determine where network traffic is directed.
* Internet gateway — A gateway that you attach to your VPC to enable communication between resources in your VPC and the internet.
* VPC endpoint — Enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network. For more information, see AWS PrivateLink and VPC endpoints.
* CIDR block — Classless Inter-Domain Routing. An internet protocol address allocation and route aggregation methodology. For more information, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation) in Wikipedia.
All of these concepts are explained in the [official docs](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html).
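As a quick aside (not part of the original notebook), Python's standard `ipaddress` module is a handy way to sanity-check how many addresses a given CIDR block contains:
```
import ipaddress

# A /20 block contains 2**(32 - 20) = 4096 IPv4 addresses;
# the /28 block used for the subnet below contains 2**(32 - 28) = 16.
print(ipaddress.ip_network("10.0.0.0/20").num_addresses)  # 4096
print(ipaddress.ip_network("10.0.0.0/28").num_addresses)  # 16
```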
```
# Create a VPC in your default region
ec2 = boto3.client("ec2")
vpc_res = ec2.create_vpc(
CidrBlock="10.0.0.0/20", # 2^(32 - 20) = 4906 private ipv4 addrs
AmazonProvidedIpv6CidrBlock=False,
DryRun=False,
TagSpecifications=[
{
"ResourceType": "vpc",
"Tags": [
{"Key": "Name", "Value": "hello-world"},
],
},
],
)
pp.pprint(vpc_res)
# inspect this VPC in details
vpc_des = ec2.describe_vpcs(VpcIds=[vpc_res["Vpc"]["VpcId"]])
pp.pprint(vpc_des["Vpcs"])
```
## Create a subnet
The VPC you just created has the capacity to host 4096 compute instances. Think of the VPC you just created as the entire data center for your organization. Of course, you have not spun up any instances yet, so you are not billed for 4096 instances (rest assured). If you were running a real data center, part of your cluster might be public facing (for example, machines that host your frontend applications), while part of it might be insulated from the Internet and only accessible from other machines in your data center (for example, your backend or database servers). You can define the scope of your cluster (public / private) via **subnets**. Using subnets, you can specify which parts of your VPC (via their CIDR blocks) are public and which are private.
If you want to run a SageMaker training job in network isolation mode, then you will need to pass a private subnet id to the `CreateTrainingJob` API. The SageMaker service will then start the instances that run your training container in the private subnet.
So first off, let's create a private subnet. A subnet is defined within an availability zone, whereas a VPC is defined within a region.
```
# create subnet and associate it with route table
def get_first_availability_zone():
region_name = boto3.Session().region_name
avz_res = ec2.describe_availability_zones(
Filters=[{"Name": "region-name", "Values": [region_name]}],
AllAvailabilityZones=True,
)
for az in avz_res["AvailabilityZones"]:
if az["ZoneType"] == "availability-zone":
return az
else:
return None
def create_subnet(vpc_id, cidr_block, dry_run):
"""Create a subnet in the first availability zone in your current region"""
az = get_first_availability_zone()
if az is not None:
subnet_res = ec2.create_subnet(
AvailabilityZone=az["ZoneName"], VpcId=vpc_id, CidrBlock=cidr_block, DryRun=dry_run
)
return subnet_res
else:
raise "No availability zone"
sn_res = create_subnet(
vpc_id=vpc_res["Vpc"]["VpcId"],
cidr_block="10.0.0.0/28", # I want 2 ^ (32 - 28) private ipv4 in this subnet
dry_run=False,
)
pp.pprint(sn_res)
```
## Create a security group
A [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html) is another layer of security configuration for instances running in your VPC. It acts as a firewall for your instance, controlling its inbound and outbound traffic. You need a security group for a SageMaker training job because, in a complicated job that involves distributed training, you need a security group configuration that allows traffic between the instances running the training job. For the purpose of this notebook, the default settings of a security group (deny all inbound traffic; allow all outbound traffic) are enough. For more complicated training jobs you will need to configure the security group accordingly (see the sketch after the next cell); this is discussed in more advanced notebooks for `CreateTrainingJob`.
```
# create a security group
sg_res = ec2.create_security_group(
Description="security group for SageMaker instances",
GroupName="sagemaker-private",
VpcId=vpc_res["Vpc"]["VpcId"],
TagSpecifications=[
{
"ResourceType": "security-group",
"Tags": [
{
"Key": "Service", # Tag the sec gp by service, this can be used to filter sec gps
"Value": "SageMaker",
}
],
}
],
)
pp.pprint(sg_res)
# inspect the security group in detail
ec2.describe_security_groups(GroupIds=[sg_res["GroupId"]])
```
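As noted above, the default rules are all this notebook needs. For a distributed training job you would typically let the security group reference itself so that the training instances can reach each other; a minimal sketch of one way to do that with boto3 (optional here, shown only for illustration):
```
# Optional: allow all traffic between instances that belong to this security group.
# Not required for the single-instance training job in this notebook.
ec2.authorize_security_group_ingress(
    GroupId=sg_res["GroupId"],
    IpPermissions=[
        {
            "IpProtocol": "-1",  # all protocols and ports
            "UserIdGroupPairs": [{"GroupId": sg_res["GroupId"]}],
        }
    ],
)
```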
## Create a training job
Now let's create a training job within the private subnet you just created. First, let's copy over some helper functions for creating a service role for SageMaker.
```
%%bash
cp ../execution-role/iam_helpers.py .
# set up service role for SageMaker
from iam_helpers import create_execution_role
iam = boto3.client("iam")
sts = boto3.client("sts")
caller = sts.get_caller_identity()
if ":user/" in caller["Arn"]: # as IAM user
    # either paste in an existing role_arn here, or create a new role and attach
    # AmazonSageMakerFullAccess
role_name = "example-sm"
role_arn = create_execution_role(role_name=role_name)["Role"]["Arn"]
iam.attach_role_policy(
RoleName=role_name,
PolicyArn="arn:aws:iam::aws:policy/AmazonSageMakerFullAccess",
)
elif "assumed-role" in caller["Arn"]: # on SageMaker infra
role_arn = caller["Arn"]
else:
print("I assume you are on an EC2 instance launched with an IAM role")
role_arn = caller["Arn"]
# some helpers
def current_time():
ct = datetime.datetime.now()
return str(ct.now()).replace(":", "-").replace(" ", "-")[:19]
def account_id():
return boto3.client("sts").get_caller_identity()["Account"]
```
To make this notebook self-contained, you will create a bucket and upload some data there to pass to the training container, as you did in the [basic create training job notebook](https://github.com/hsl89/amazon-sagemaker-examples/blob/sagemaker-fundamentals/sagemaker-fundamentals/create-training-job/create_training_job.ipynb). You don't have to do so: if you already have a bucket that the SageMaker service can access (i.e. a bucket whose name contains `sagemaker`; see the `AmazonSageMakerFullAccess` policy), you can use that bucket instead.
```
# create a bucket for SageMaker in your region
def create_bucket():
"""Create an S3 bucket that is intended to be used for short term"""
bucket = f"sagemaker-{current_time()}"
region_name = boto3.Session().region_name
create_bucket_config = {}
if region_name != "us-east-1":
# us-east-1 is the default region for S3 bucket
# specify LocationConstraint if your VPC is not
# in us-east-1
create_bucket_config["LocationConstraint"] = region_name
boto3.client("s3").create_bucket(Bucket=bucket, CreateBucketConfiguration=create_bucket_config)
return bucket
# replace it with your own SageMaker-accessible bucket
# if you don't want to create a new one
bucket = create_bucket()
# upload some mock data to your bucket
import os
s3 = boto3.client("s3")
input_prefix = "input_data"
for fname in os.listdir("data"):
with open(os.path.join("data", fname), "rb") as f:
key = input_prefix + fname
s3.upload_fileobj(f, bucket, key)
```
Now, you will configure the training job.
```
sm_cli = boto3.client("sagemaker")
# name training job
training_job_name = "example-training-job-{}".format(current_time())
data_path = "s3://" + bucket + "/" + input_prefix
# location that SageMaker saves the model artifacts
output_prefix = "output/"
output_path = "s3://" + bucket + "/" + output_prefix
# ECR URI of your image
region = boto3.Session().region_name
account = account_id()
image_uri = "{}.dkr.ecr.{}.amazonaws.com/example-image:latest".format(account, region)
algorithm_specification = {
"TrainingImage": image_uri,
"TrainingInputMode": "File",
}
input_data_config = [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": data_path,
"S3DataDistributionType": "FullyReplicated",
}
},
},
{
"ChannelName": "test",
"DataSource": {
"S3DataSource": {
"S3DataType": "S3Prefix",
"S3Uri": data_path,
"S3DataDistributionType": "FullyReplicated",
}
},
},
]
vpc_config = {
# security groups need to be configured to communicate
# with each other for distributed training job
"SecurityGroupIds": [sg_res["GroupId"]],
"Subnets": [sn_res["Subnet"]["SubnetId"]],
}
output_data_config = {"S3OutputPath": output_path}
resource_config = {"InstanceType": "ml.m5.large", "InstanceCount": 1, "VolumeSizeInGB": 5}
stopping_condition = {
"MaxRuntimeInSeconds": 120,
}
enable_network_isolation = True
ct_res = sm_cli.create_training_job(
TrainingJobName=training_job_name,
AlgorithmSpecification=algorithm_specification,
RoleArn=role_arn,
InputDataConfig=input_data_config,
OutputDataConfig=output_data_config,
VpcConfig=vpc_config,
ResourceConfig=resource_config,
StoppingCondition=stopping_condition,
EnableNetworkIsolation=enable_network_isolation,
EnableManagedSpotTraining=False,
)
```
The training job is expected to fail, because the subnet you created is isolated from the Internet and you have not created any mechanism for it to access the data in your S3 bucket.
```
# see the training job to fail
stopped = False
while not stopped:
tj_state = sm_cli.describe_training_job(TrainingJobName=training_job_name)
if tj_state["TrainingJobStatus"] in ["Completed", "Stopped", "Failed"]:
stopped = True
else:
print("Training in progress")
time.sleep(30)
if tj_state["TrainingJobStatus"] == "Failed":
print("Training job failed ")
print("Failed Reason: {}".format(tj_state["FailureReason"]))
else:
print("Training job completed")
```
## Add a VPC endpoint
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. **Traffic between your VPC and the other service does not leave the Amazon network**. For more information, see [AWS PrivateLink and VPC endpoints](https://docs.aws.amazon.com/vpc/latest/userguide/endpoint-services-overview.html).
There are three types of VPC endpoints as of March 2021.
A **Gateway** endpoint serves as a target for a route in your route table for traffic destined for the AWS service. You can specify an endpoint policy to attach to the endpoint, which will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.
An **Interface** endpoint is a network interface in your subnet that serves as an endpoint for communicating with the specified service. You can specify the subnets in which to create an endpoint, and the security groups to associate with the endpoint network interface.
A **GatewayLoadBalancer** endpoint is a network interface in your subnet that serves as an endpoint for communicating with a Gateway Load Balancer that you've configured as a VPC endpoint service.
---
Only the Gateway endpoint is a viable option for the SageMaker service, so you will add a Gateway endpoint here. A Gateway endpoint needs to be added to a route table, so you will first need to create a route table and associate it with your subnet.
```
# Create a route table
rt_res = ec2.create_route_table(
VpcId=vpc_res["Vpc"]["VpcId"],
TagSpecifications=[
{"ResourceType": "route-table", "Tags": [{"Key": "Service", "Value": "SageMaker"}]}
],
)
pp.pprint(rt_res)
# Associate the route table with the subnet
ass_rt_res = ec2.associate_route_table(
RouteTableId=rt_res["RouteTable"]["RouteTableId"], SubnetId=sn_res["Subnet"]["SubnetId"]
)
pp.pprint(ass_rt_res)
```
Next, let's look up the VPC endpoint service name for S3.
```
# Check out service name for S3
services = ec2.describe_vpc_endpoint_services()
for s in services["ServiceNames"]:
if "s3" in s:
print(s)
# Create a gateway endpoint
region_name = boto3.Session().region_name
iep_res = ec2.create_vpc_endpoint(
VpcEndpointType="Gateway",
VpcId=vpc_res["Vpc"]["VpcId"],
ServiceName=f"com.amazonaws.{region_name}.s3", # return of previous cell
RouteTableIds=[rt_res["RouteTable"]["RouteTableId"]],
# you don't need to add a tag, it is only
# used as a convenient way to filter through your
# endpoints in the future
TagSpecifications=[
{"ResourceType": "vpc-endpoint", "Tags": [{"Key": "Service", "Value": "SageMaker"}]}
],
)
pp.pprint(iep_res)
```
Now you have added a Gateway endpoint to the route table of the subnet. This endpoint allows the subnet to talk to your S3 bucket **privately**. The traffic between the subnet and your S3 bucket does not leave the AWS network. Let's create another training job to verify that the training container can access the data in your S3 bucket.
```
training_job_name = "example-training-job-{}".format(current_time())
ct_res = sm_cli.create_training_job(
TrainingJobName=training_job_name,
AlgorithmSpecification=algorithm_specification,
RoleArn=role_arn,
InputDataConfig=input_data_config,
OutputDataConfig=output_data_config,
VpcConfig=vpc_config,
ResourceConfig=resource_config,
StoppingCondition=stopping_condition,
EnableNetworkIsolation=enable_network_isolation,
EnableManagedSpotTraining=False,
)
# watch it succeed
stopped = False
while not stopped:
tj_state = sm_cli.describe_training_job(TrainingJobName=training_job_name)
if tj_state["TrainingJobStatus"] in ["Completed", "Stopped", "Failed"]:
stopped = True
else:
print("Training in progress")
time.sleep(30)
if tj_state["TrainingJobStatus"] == "Failed":
print("Training job failed ")
print("Failed Reason: {}".format(tj_state["FailureReason"]))
else:
print("Training job completed")
```
## Review
Let's review what you did in this notebook: you have created
- a VPC
- a subnet inside the VPC
- a security group inside the VPC
The VPC is isolated from the Internet, because you did not add an Internet Gateway to it.
You created a training job in the subnet. The traffic in and out of the SageMaker instance running your training container is controlled by the security group permissions. You verified that this training job failed, because SageMaker cannot download data from your S3 bucket.
Next, you added
- a route table to your subnet
- an S3 Gateway Endpoint to the route table
Then you verified that once you added the S3 Gateway Endpoint to your VPC, the same training job can go through.
## Practical considerations
If you are an ML practitioner, you will most likely not need to touch the VPC, because the network admin in your organization should have configured the VPC, subnets, security groups, route tables and VPC endpoints for you. The reason we discussed VPC configuration in this notebook is to get you familiar with the basic concepts of network engineering, so that when something goes wrong you can message your network admin with more precise questions or requests.
One common situation is that your org owns a VPC that has both public and private subnets. You are configuring a SageMaker training job from an EC2 instance / Notebook Instance / Studio in the public subnet and you want the training job to be executed in the private subnet. In that case, all you need to do is pass the subnet id and security group id to the `CreateTrainingJob` API and set the `EnableNetworkIsolation` flag to `True`, as sketched below.
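A minimal sketch of that call is below. The subnet and security group ids are placeholders you would get from your network admin, not values created in this notebook; the other arguments reuse the configuration defined earlier.
```
# Hypothetical ids supplied by your network admin
private_subnet_id = "subnet-0123456789abcdef0"
private_sg_id = "sg-0123456789abcdef0"

sm_cli.create_training_job(
    TrainingJobName="example-training-job-in-private-subnet",
    AlgorithmSpecification=algorithm_specification,
    RoleArn=role_arn,
    InputDataConfig=input_data_config,
    OutputDataConfig=output_data_config,
    VpcConfig={
        "SecurityGroupIds": [private_sg_id],
        "Subnets": [private_subnet_id],
    },
    ResourceConfig=resource_config,
    StoppingCondition=stopping_condition,
    EnableNetworkIsolation=True,
)
```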
## Clean up
Now, let's tear down all resources you created in this notebook.
```
# delete the entire VPC and its associated resources
# adapted from https://gist.github.com/alberto-morales/b6d7719763f483185db27289d51f8ec5
def vpc_cleanup(vpcid):
"""Remove VPC from AWS
Set your region/access-key/secret-key from env variables or boto config.
:param vpcid: id of vpc to delete
"""
if not vpcid:
return
print("Removing VPC ({}) from AWS".format(vpcid))
ec2 = boto3.resource("ec2")
ec2client = ec2.meta.client
vpc = ec2.Vpc(vpcid)
# detach default dhcp_options if associated with the vpc
dhcp_options_default = ec2.DhcpOptions("default")
if dhcp_options_default:
dhcp_options_default.associate_with_vpc(VpcId=vpc.id)
# detach and delete all gateways associated with the vpc
for gw in vpc.internet_gateways.all():
vpc.detach_internet_gateway(InternetGatewayId=gw.id)
gw.delete()
# delete any instances
for subnet in vpc.subnets.all():
for instance in subnet.instances.all():
instance.terminate()
    # delete all subnets
for subnet in vpc.subnets.all():
for interface in subnet.network_interfaces.all():
interface.delete()
subnet.delete()
# delete all route table associations
for rt in vpc.route_tables.all():
for rta in rt.associations:
if not rta.main:
rta.delete()
try:
rt.delete()
except Exception as e:
pass
# delete our endpoints
for ep in ec2client.describe_vpc_endpoints(Filters=[{"Name": "vpc-id", "Values": [vpcid]}])[
"VpcEndpoints"
]:
ec2client.delete_vpc_endpoints(VpcEndpointIds=[ep["VpcEndpointId"]])
# delete our security groups
for sg in vpc.security_groups.all():
if sg.group_name != "default":
sg.delete()
# delete any vpc peering connections
for vpcpeer in ec2client.describe_vpc_peering_connections(
Filters=[{"Name": "requester-vpc-info.vpc-id", "Values": [vpcid]}]
)["VpcPeeringConnections"]:
ec2.VpcPeeringConnection(vpcpeer["VpcPeeringConnectionId"]).delete()
# delete non-default network acls
for netacl in vpc.network_acls.all():
if not netacl.is_default:
netacl.delete()
# finally, delete the vpc
ec2client.delete_vpc(VpcId=vpcid)
return
vpc_cleanup(vpc_res["Vpc"]["VpcId"])
```
|
github_jupyter
|
# Algorithm used

```
%matplotlib inline
import gym
import itertools
import matplotlib
import numpy as np
import pandas as pd
import sys
if "../" not in sys.path:
sys.path.append("../")
from collections import defaultdict
from lib.envs.windy_gridworld import WindyGridworldEnv
from lib import plotting
matplotlib.style.use('ggplot')
env = WindyGridworldEnv()
def make_epsilon_greedy_policy(Q, epsilon, nA):
"""
Creates an epsilon-greedy policy based on a given Q-function and epsilon.
Args:
Q: A dictionary that maps from state -> action-values.
Each value is a numpy array of length nA (see below)
        epsilon: The probability to select a random action. Float between 0 and 1.
nA: Number of actions in the environment.
Returns:
A function that takes the observation as an argument and returns
the probabilities for each action in the form of a numpy array of length nA.
"""
def policy_fn(observation):
A = np.ones(nA, dtype=float) * epsilon / nA
best_action = np.argmax(Q[observation])
A[best_action] += (1.0 - epsilon)
return A
return policy_fn
def sarsa(env, num_episodes, discount_factor=1.0, alpha=0.5, epsilon=0.1):
"""
SARSA algorithm: On-policy TD control. Finds the optimal epsilon-greedy policy.
Args:
env: OpenAI environment.
num_episodes: Number of episodes to run for.
discount_factor: Gamma discount factor.
alpha: TD learning rate.
        epsilon: Chance to sample a random action. Float between 0 and 1.
Returns:
A tuple (Q, stats).
Q is the optimal action-value function, a dictionary mapping state -> action values.
stats is an EpisodeStats object with two numpy arrays for episode_lengths and episode_rewards.
"""
# The final action-value function.
# A nested dictionary that maps state -> (action -> action-value).
Q = defaultdict(lambda: np.zeros(env.action_space.n))
# Keeps track of useful statistics
stats = plotting.EpisodeStats(
episode_lengths=np.zeros(num_episodes),
episode_rewards=np.zeros(num_episodes))
# The policy we're following
policy = make_epsilon_greedy_policy(Q, epsilon, env.action_space.n)
for i_episode in range(num_episodes):
# Print out which episode we're on, useful for debugging.
if (i_episode + 1) % 100 == 0:
print("\rEpisode {}/{}.".format(i_episode + 1, num_episodes), end="")
sys.stdout.flush()
# Reset the environment and pick the first action
state = env.reset()
        # Each action is represented by a number; we follow the
        # epsilon-greedy policy to pick which action to take,
        # given the current state.
action_probs = policy(state)
action = np.random.choice(np.arange(len(action_probs)), p=action_probs)
# One step in the environment
for t in itertools.count():
# Take a step
next_state, reward, done, _ = env.step(action)
# Pick the next action
next_action_probs = policy(next_state)
next_action = np.random.choice(np.arange(len(next_action_probs)), p=next_action_probs)
# Update statistics
stats.episode_rewards[i_episode] += reward
stats.episode_lengths[i_episode] = t
# TD Update
td_target = reward + discount_factor * Q[next_state][next_action]
td_delta = td_target - Q[state][action]
Q[state][action] += alpha * td_delta
if done:
break
action = next_action
state = next_state
return Q, stats
Q, stats = sarsa(env, 200)
plotting.plot_episode_stats(stats)
```
|
github_jupyter
|
```
# Deprecated
# packages: random
import random
# packages: data structure
import numpy as np
import pandas as pd
import astropy.io as io
# packages: image generation and plot generation
from matplotlib import pyplot as plt
# pandas
# https://pandas.pydata.org/pandas-docs/stable/tutorials.html
# https://pandas.pydata.org/pandas-docs/stable/10min.html
# ascii:io
# http://docs.astropy.org/en/stable/io/ascii/
# matplotlib
# https://nickcharlton.net/posts/drawing-animating-shapes-matplotlib.html
# numpy: empty canvas
def empty_canvas(image_side_length=100):
return np.indices((image_side_length, image_side_length))
# scikit learn: circle
def circle_sk(canvas, x_center=50, y_center=50, radius=30):
y, x = canvas
circle = (x - x_center)**2 + (y - y_center)**2 < radius**2
img = circle.astype(float)
return img
# scikit learn: rectangle
def rect_sk(canvas, x_center=50, y_center=50, radius=30):
y, x = canvas
rect = (x < x_center + radius) & (x > x_center - radius) & (y < y_center + radius) & (y > y_center - radius)
img = rect.astype(float)
return img
# scikit learn: triangle (not implemented)
#def triangle_sk(canvas, x_center=50, y_center=50, radius=30):
# y, x = canvas
# rect = (x < x_center + radius) & (x > x_center - radius) & (y < y_center + radius) & (y > y_center - radius)
# img = rect.astype(float)
# return img
# plot for SPI package
def plot_spi(img):
plt.axes()
plt.imshow(img)
    plt.show()  # display the image; clearing alone would discard it before it renders
    plt.clf()
# matplotlib pyplot
def circle_plt(x_center=0, y_center=0, radius=0.75, fc='r', show=False):
plt.axes()
circle = plt.Circle((x_center, y_center), radius=radius, fc=fc)
plt.gca().add_patch(circle)
plt.axis('scaled')
    plt.savefig("test3.png", dpi=200)  # save the drawn circle (removed plt.imshow(img): img was undefined here)
if show:
plt.show()
# test each individual function
def test_individual():
#circle()
img = circle_sk()
plot_spi(img)
#star()
return
# generate one image data set
def generate_dataset(nb_obj,
image_side_length=100,
index_start=0,
shape='rect',
x_min=32,
x_max=32,
y_min=32,
y_max=32,
radius_min=10,
radius_max=10,
show_plot=False,
verbose=False):
# initiate image values
fac = -1.0
#x_center_list = np.random.uniform(0 + fac* radius_max, image_side_length + fac* radius_max, nb_obj)
#y_center_list = np.random.uniform(0 + fac* radius_max, image_side_length + fac* radius_max, nb_obj)
x_center_list = np.random.uniform(x_min, x_max, nb_obj)
y_center_list = np.random.uniform(y_min, y_max, nb_obj)
radius_list = np.random.uniform(radius_min, radius_max, nb_obj)
print('x ranges', min(x_center_list), max(x_center_list))
print('y ranges', min(y_center_list), max(y_center_list))
column_names = ['ident', 'x_center', 'y_center', 'radius', 'shape']
# create empty data structures
tab_list = np.empty((nb_obj, len(column_names)))
img_list = np.empty((nb_obj, image_side_length, image_side_length))
# create empty canvas for a single image
canvas = empty_canvas(image_side_length=image_side_length)
# loop over objects
icount = 0
for i_obj in np.arange(nb_obj):
# draw object properties from list
x_center = x_center_list[i_obj]
y_center = y_center_list[i_obj]
radius = radius_list[i_obj]
# identification value
ident = int(index_start + i_obj)
# create object
if shape == 'rect':
img = rect_sk(canvas, x_center=x_center, y_center=y_center, radius=radius)
shape_num = 0
elif shape == 'circ':
img = circle_sk(canvas, x_center=x_center, y_center=y_center, radius=radius)
shape_num = 1
# add tabular data to data list structure
tab_list[i_obj] = [ident, x_center, y_center, radius, int(shape_num)]
# add image data to image list structure
img_list[i_obj] = img
# plot image
if show_plot and icount <20:
icount+=1
plt.figure()
plt.axes()
plt.imshow(img)
# Data Frame: Tabular Data for Objects
tab_list = pd.DataFrame(tab_list,columns=column_names)
# verbose
if verbose:
print(tab_list[0:10])
print(img_list[0:10])
return tab_list, img_list
# save data
def save_data(f_data_list, f_img_list, data_list, img_list, verbose=False):
# Pandas Data Frame for tabular data: save to file
data_list.to_csv(f_data_list)
# Numpy Array for image data: save to file
np.save(f_img_list, img_list)
# verbose
if verbose:
        print(f_data_list)
print(f_img_list)
return
# combine data sets
def combine_data(frames, data_type='tab'):
if data_type=='tab':
data = pd.concat(frames)
elif data_type=='img':
data = np.concatenate(frames)
return data
# randomize data
def randomize_data(tab, img, seed=5, verbose=False):
if verbose:
print('Before:', tab)
# create randomized indices
random.seed(seed)
nb_tab = len(tab)
ind_random = np.arange(nb_tab)
random.shuffle(ind_random)
# re-order data based on randomized indices
tab = tab.iloc[ind_random]
img = img[ind_random]
if verbose:
print('After:', tab)
return tab, img
# split data
def split_data(nb_train, nb_valid, nb_test, tab, img, printcheck=0):
ind_start_train = 0
ind_end_train = ind_start_valid = ind_start_train + nb_train
ind_end_valid = ind_start_test = ind_start_valid + nb_valid
ind_end_test = ind_start_test + nb_test
if printcheck > 0:
print(tab[0:printcheck])
print(ind_start_train, ind_end_train)
# good place for unit test
# split data in train, valid, test
tab_train = tab[ind_start_train: ind_end_train]
img_train = img[ind_start_train: ind_end_train]
tab_valid = tab[ind_start_valid: ind_end_valid]
img_valid = img[ind_start_valid: ind_end_valid]
tab_test = tab[ind_start_test: ind_end_test]
img_test = img[ind_start_test: ind_end_test]
return tab_train, tab_valid, tab_test, img_train, img_valid, img_test
# Generate Data Parameters
nb_obj = 5000
#seed = 47283
image_side_length = 64
x_min, x_max = 10, 54
y_min, y_max = 10, 54
radius_min, radius_max = 4,30
show_plot = True
# Generate Data
tab_a, img_a = generate_dataset(nb_obj, image_side_length=image_side_length, radius_min=radius_min, radius_max=radius_max, x_min=x_min, x_max=x_max, y_min=y_min, y_max=y_max, shape='rect', show_plot=show_plot)
tab_b, img_b = generate_dataset(nb_obj, image_side_length=image_side_length, radius_min=radius_min, radius_max=radius_max, x_min=x_min, x_max=x_max, y_min=y_min, y_max=y_max, shape='circ', show_plot=show_plot, index_start=nb_obj)
# combine data
tab = combine_data([tab_a, tab_b])
img = combine_data([img_a, img_b], data_type='img')
# randomize data
tab, img = randomize_data(tab, img, verbose=True)
print('range', np.min(img), np.max(img))
# save data
f_tab = 'test_generate_pipeline_circle_data.csv'
f_img = 'test_generate_pipeline_circle_image.npy'
save_data(f_tab, f_img, tab, img, verbose=False )
```
# Example: read data file and prepare data for network
```
# read data from file
data_list = pd.read_csv(f_tab)
img_list = np.load(f_img)
print('range', np.min(img_list), np.max(img_list))
# Training parameters
batch_size = 20
num_classes = 2
epochs = 5
train_me = True
nb_train = 1000
nb_valid = 100
nb_test = 1000
img_rows = img_cols = img_list.shape[1]
# Prepare data
# ... split data
output = split_data(nb_train, nb_valid, nb_test, data_list, img_list, printcheck=0)  # use the data loaded from file above
y_train_temp, y_valid_temp, y_test_temp, x_train, x_valid, x_test = output
print(np.min(x_train), np.max(x_train))
# ... identify value to train on
y_train = y_train_temp['shape'].values
y_valid = y_valid_temp['shape'].values
y_test = y_test_temp['shape'].values
print("X train, valid, test shapes:", "\n", x_train.shape,"\n", x_valid.shape,"\n", x_test.shape)
print("y train, valid, test shapes:", "\n", y_train.shape,"\n", y_valid.shape,"\n", y_test.shape)
''' MY DATA
'''
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_valid = x_valid.reshape(x_valid.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_valid = x_valid.reshape(x_valid.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_valid = x_valid.astype('float32')
x_test = x_test.astype('float32')
print('range', np.min(x_train), np.max(x_train))
print('x_train shape:', x_train.shape)
print(x_train.shape[0], 'train samples')
print(x_valid.shape[0], 'valid samples')
print(x_test.shape[0], 'test samples')
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_valid = keras.utils.to_categorical(y_valid, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
# create model
model = Sequential()
model.add(Conv2D(32, kernel_size=(2, 2), activation='relu', input_shape=input_shape))
#model.add(Conv2D(64, kernel_size=(2, 2), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.3))
model.add(Flatten())
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.3))
model.add(Dense(num_classes, activation='softmax'))
# compile model
model.compile(loss=keras.losses.categorical_crossentropy, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy'])
if train_me:
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
verbose=1,
validation_data=(x_valid, y_valid))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
a = np.array([1.])
b = a.astype('float32')
print(a, b)
import PIL.ImageDraw as ImageDraw,PIL.Image as Image, PIL.ImageShow as ImageShow
im = Image.new("RGB", (400,300))
draw = ImageDraw.Draw(im)
draw.arc((100,100,300,200),0,270,fill=255)
im.show()
```
|
github_jupyter
|
# Chapter 2: Conditional probability
----
```
import numpy as np
```
## Simulating the frequentist interpretation
Recall that the frequentist interpretation of conditional probability based on a large number `n` of repetitions of an experiment is $P(A|B) ≈ n_{AB}/n_{B}$, where $n_{AB}$ is the number of times that $A \cap B$ occurs and $n_{B}$ is the number of times that $B$ occurs. Let's try this out by simulation, and verify the results of Example 2.2.5. So let's use [`numpy.random.choice`](https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.random.choice.html) to simulate `n` families, each with two children.
```
np.random.seed(34)
n = 10**5
child1 = np.random.choice([1,2], n, replace=True)
child2 = np.random.choice([1,2], n, replace=True)
print('child1:\n{}\n'.format(child1))
print('child2:\n{}\n'.format(child2))
```
Here `child1` is a NumPy `array` of length `n`, where each element is a 1 or a 2. Letting 1 stand for "girl" and 2 stand for "boy", this `array` represents the gender of the elder child in each of the `n` families. Similarly, `child2` represents the gender of the younger child in each family.
Alternatively, we could have used
```
np.random.choice(["girl", "boy"], n, replace=True)
```
but it is more convenient to work with numerical values.
Let $A$ be the event that both children are girls and $B$ the event that the elder is a girl. Following the frequentist interpretation, we count the number of repetitions where $B$ occurred and name it `n_b`, and we also count the number of repetitions where $A \cap B$ occurred and name it `n_ab`. Finally, we divide `n_ab` by `n_b` to approximate $P(A|B)$.
```
n_b = np.sum(child1==1)
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | elder is girl) = {:0.2F}'.format(n_ab / n_b))
```
The ampersand `&` is an elementwise $AND$, so `n_ab` is the number of families where both the first child and the second child are girls. When we ran this code, we got 0.50, confirming our answer $P(\text{both girls | elder is a girl}) = 1/2$.
Now let $A$ be the event that both children are girls and $B$ the event that at least one of the children is a girl. Then $A \cap B$ is the same, but `n_b` needs to count the number of families where at least one child is a girl. This is accomplished with the elementwise $OR$ operator `|` (this is not a conditioning bar; it is an inclusive $OR$, returning `True` if at least one element is `True`).
```
n_b = np.sum((child1==1) | (child2==1))
n_ab = np.sum((child1==1) & (child2==1))
print('P(both girls | at least one girl) = {:0.2F}'.format(n_ab / n_b))
```
For us, the result was 0.33, confirming that $P(\text{both girls | at least one girl}) = 1/3$.
## Monty Hall simulation
Many long, bitter debates about the Monty Hall problem could have been averted by trying it out with a simulation. To study how well the never-switch strategy performs, let's generate 10<sup>5</sup> runs of the Monty Hall game. To simplify notation, assume the contestant always chooses door 1. Then we can generate a vector specifying which door has the car for each repetition:
```
np.random.seed(55)
n = 10**5
cardoor = np.random.choice([1,2,3] , n, replace=True)
print('The never-switch strategy has success rate {:.3F}'.format(np.sum(cardoor==1) / n))
```
At this point we could generate the vector specifying which doors Monty opens, but that's unnecessary since the never-switch strategy succeeds if and only if door 1 has the car! So the fraction of times when the never-switch strategy succeeds is `numpy.sum(cardoor==1)/n`, which was 0.331 in our simulation. This is very close to 1/3.
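As an optional aside (not in the original text), we can also simulate the door Monty opens and check that the switching strategy wins about 2/3 of the time:
```
# Monty opens a goat door different from the contestant's door 1 and from the car door
montydoor = np.array([np.random.choice([d for d in (2, 3) if d != c]) for c in cardoor])
# Switching means taking the door that is neither door 1 nor Monty's door (door labels sum to 6)
switchdoor = 6 - 1 - montydoor
print('The switching strategy has success rate {:.3f}'.format(np.sum(switchdoor == cardoor) / n))
```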
What if we want to play the Monty Hall game interactively? We can do this by programming a Python class that would let us play interactively or let us run a simulation across many trials.
```
class Monty():
def __init__(self):
""" Object creation function. """
self.state = 0
self.doors = np.array([1, 2, 3])
self.prepare_game()
def get_success_rate(self):
""" Return the rate of success in this series of plays: num. wins / num. plays. """
if self.num_plays > 0:
return 1.0*self.num_wins / self.num_plays
else:
return 0.0
def prepare_game(self):
""" Prepare initial values for game play, and randonly choose the door with the car. """
self.num_plays = 0
self.num_wins = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def choose_door(self, door):
""" Player chooses a door at state 0. Monty will choose a remaining door to reveal a goat. """
self.state = 1
self.players_choice = door
self.montys_choice = np.random.choice(self.doors[(self.doors!=self.players_choice) & (self.doors!=self.cardoor)])
def switch_door(self, do_switch):
""" Player has the option to switch from the door she has chosen to the remaining unopened door.
If the door the player has selected is the same as the cardoor, then num. of wins is incremented.
Finally, number of plays will be incremented.
"""
self.state = 2
if do_switch:
self.players_choice = self.doors[(self.doors!=self.players_choice) & (self.doors!=self.montys_choice)][0]
if self.players_choice == self.cardoor:
self.num_wins += 1
self.num_plays += 1
def continue_play(self):
""" Player opts to continue playing in this series.
The game is returned to state 0, but the counters for num. wins and num. plays
will be kept intact and running.
A new cardoor is randomly chosen.
"""
self.state = 0
self.cardoor = np.random.choice(self.doors)
self.players_choice = None
self.montys_choice = None
def reset(self):
""" The entire game state is returned to its initial state.
        All counters and state-holding variables are re-initialized.
"""
self.state = 0
self.prepare_game()
```
In brief:
* The `Monty` class represents a simple state model for the game.
* When an instance of the `Monty` game is created, game state-holding variables are initialized and a `cardoor` is randomly chosen.
* After the player initially picks a door, `Monty` will choose a remaining door that does not have the car behind it.
* The player can then choose to switch to the other, remaining unopened door, or stick with her initial choice.
* `Monty` will then see whether the player wins or not, and update the state-holding variables for num. wins and num. plays.
* The player can continue playing, or stop and reset the game to its original state.
### As a short simulation program
Here is an example showing how to use the `Monty` class above to run a simulation to see how often the switching strategy succeeds.
```
np.random.seed(89)
trials = 10**5
game = Monty()
for _ in range(trials):
game.choose_door(np.random.choice([1,2,3]))
game.switch_door(True)
game.continue_play()
print('In {} trials, the switching strategy won {} times.'.format(game.num_plays, game.num_wins))
print('Success rate is {:.3f}'.format(game.get_success_rate()))
```
### As an interactive widget in this Jupyter notebook
Optionally, the `Monty` Python class above can also be used as an engine to power an interactive widget that lets you play the three-door game _in the browser_ using [`ipywidgets` ](https://ipywidgets.readthedocs.io/en/stable/user_guide.html).
To run the interactive widget, make sure you have the `ipywidgets` package installed (v7.4.2 or greater).
To install with the `conda` package manager, execute the following command:
conda install ipywidgets
To install with the `pip` package manager, execute the following command:
pip install ipywidgets
```
from ipywidgets import Box, Button, ButtonStyle, FloatText, GridBox, IntText, Label, Layout, HBox
from IPython.display import display
```
The doors in the game are represented by [`ipywidgets.Button`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Button).
```
door1 = Button(description='Door 1', layout=Layout(flex='1 1 auto', width='auto'))
door2 = Button(description='Door 2', layout=door1.layout)
door3 = Button(description='Door 3', layout=door1.layout)
doors_arr = [door1, door2, door3]
doors = Box(doors_arr, layout=Layout(width='auto', grid_area='doors'))
```
State-holding variables in the `Monty` object are displayed using [`ipywidgets.IntText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#IntText) (for the `num_wins` and `num_plays`); and [`ipywidgets.FloatText`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#FloatText) (for the success rate).
```
label1 = Label(value='number of plays', layout=Layout(width='auto', grid_area='label1'))
text1 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text1'))
label2 = Label(value='number of wins', layout=Layout(width='auto', grid_area='label2'))
text2 = IntText(disabled=True, layout=Layout(width='auto', grid_area='text2'))
label3 = Label(value='success rate', layout=Layout(width='auto', grid_area='label3'))
text3 = FloatText(disabled=True, layout=Layout(width='auto', grid_area='text3'))
```
[`ipywidgets.Label`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20List.html#Label) is used to display the title and descriptive text in the game widget.
```
banner = Box([Label(value='Interactive widget: Monty Hall problem',
layout=Layout(width='50%'))],
layout=Layout(width='auto', justify_content='center', grid_area='banner'))
status = Label(value='Pick a door...', layout=Layout(width='auto', grid_area='status'))
```
Buttons allowing for further user actions are located at the bottom of the widget.
* The `reveal` button is used to show what's behind all of the doors after the player makes her final choice.
* After the player completes a round of play, she can click the `continue` button to keep counting game state (num. wins and num. plays)
* The `reset` button lets the player return the game to its original state after completing a round of play.
```
button_layout = Layout(flex='1 1 auto', width='auto')
reveal = Button(description='reveal', tooltip='open selected door', layout=button_layout, disabled=True)
contin = Button(description='continue', tooltip='continue play', layout=button_layout, disabled=True)
reset = Button(description='reset', tooltip='reset game', layout=button_layout, disabled=True)
actions = Box([reveal, contin, reset], layout=Layout(width='auto', grid_area='actions'))
```
[`ipywidgets.GridBox`](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Styling.html#The-Grid-layout) helps us lay out the user interface elements for the `Monty` game widget.
```
ui = GridBox(children=[banner, doors, label1, text1, label2, text2, label3, text3, status, actions],
layout=Layout(
width='50%',
grid_template_rows='auto auto auto auto auto auto auto',
grid_template_columns='25% 25% 25% 25%',
grid_template_areas='''
"banner banner banner banner"
"doors doors doors doors"
"label1 label1 text1 text1"
"label2 label2 text2 text2"
"label3 label3 text3 text3"
"status status status status"
". . actions actions"
'''
)
)
```
We lastly create some functions to connect the widget to the `Monty` game object. These functions adapt player action [events](https://ipywidgets.readthedocs.io/en/stable/examples/Widget%20Events.html#Example) to state changes in the `Monty` object, and then update the widget user interface accordingly.
```
uigame = Monty()
def reset_ui(disable_reset=True):
""" Return widget elements to their initial state.
Do not disable the reset button in the case of continue.
"""
for i,d in enumerate(doors_arr):
d.description = 'Door {}'.format(i+1)
d.disabled = False
d.icon = ''
d.button_style = ''
reveal.disabled = True
contin.disabled = True
reset.disabled = disable_reset
def update_status(new_status):
""" Update the widget text fields for displaying present game status. """
text1.value = uigame.num_plays
text2.value = uigame.num_wins
text3.value = uigame.get_success_rate()
status.value = new_status
def update_ui_reveal():
""" Helper function to update the widget after the player clicks the reveal button. """
if uigame.players_choice == uigame.cardoor:
new_status = 'You win! Continue playing?'
else:
new_status = 'Sorry, you lose. Continue playing?'
for i,d in enumerate(doors_arr):
d.disabled = True
if uigame.cardoor == i+1:
d.description = 'car'
else:
d.description = 'goat'
if uigame.players_choice == i+1:
if uigame.players_choice == uigame.cardoor:
d.button_style = 'success'
d.icon = 'check'
else:
d.button_style = 'danger'
d.icon = 'times'
update_status(new_status)
reveal.disabled = True
contin.disabled = False
reset.disabled = False
def on_button_clicked(b):
""" Event-handling function that maps button click events in the widget
to corresponding functions in Monty, and updates the user interface
according to the present game state.
"""
if uigame.state == 0:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
c = int(b.description.split()[1])
uigame.choose_door(c)
b.disabled = True
b.button_style = 'info'
m = doors_arr[uigame.montys_choice-1]
m.disabled = True
m.description = 'goat'
unopened = uigame.doors[(uigame.doors != uigame.players_choice) &
(uigame.doors != uigame.montys_choice)][0]
status.value = 'Monty reveals a goat behind Door {}. Click Door {} to switch, or \'reveal\' Door {}.' \
.format(uigame.montys_choice, unopened, uigame.players_choice)
reveal.disabled = False
reset.disabled = False
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif uigame.state == 1:
if b.description in ['Door 1', 'Door 2', 'Door 3']:
prev_choice = uigame.players_choice
uigame.switch_door(True)
pb = doors_arr[prev_choice-1]
pb.icon = ''
pb.button_style = ''
b.disabled = True
b.button_style = 'info'
status.value = 'Now click \'reveal\' to see what\'s behind Door {}.'.format(uigame.players_choice)
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
elif b.description == 'reveal':
uigame.switch_door(False)
update_ui_reveal()
elif uigame.state == 2:
if b.description == 'reveal':
update_ui_reveal()
else:
if b.description == 'continue':
uigame.continue_play()
reset_ui(False)
update_status('Pick a door once more...')
elif b.description == 'reset':
uigame.reset()
reset_ui()
update_status('Pick a door...')
# hook up all buttons to our event-handling function
door1.on_click(on_button_clicked)
door2.on_click(on_button_clicked)
door3.on_click(on_button_clicked)
reveal.on_click(on_button_clicked)
contin.on_click(on_button_clicked)
reset.on_click(on_button_clicked)
display(ui)
```
How to play:
* Click a door to select.
* Monty will select one of the remaining doors and open it to reveal a goat.
* Click the `reveal` button to open your selected door.
* Or click the remaining unopened Door button to switch your door choice, and then click `reveal`.
* Click the `continue` button to keep playing.
* You may click the `reset` button at any time to return the game back to its initial state.
# Building your Deep Neural Network: Step by Step
Welcome to your week 4 assignment (part 1 of 2)! You have previously trained a 2-layer Neural Network (with a single hidden layer). This week, you will build a deep neural network, with as many layers as you want!
- In this notebook, you will implement all the functions required to build a deep neural network.
- In the next assignment, you will use these functions to build a deep neural network for image classification.
**After this assignment you will be able to:**
- Use non-linear units like ReLU to improve your model
- Build a deeper neural network (with more than 1 hidden layer)
- Implement an easy-to-use neural network class
**Notation**:
- Superscript $[l]$ denotes a quantity associated with the $l^{th}$ layer.
- Example: $a^{[L]}$ is the $L^{th}$ layer activation. $W^{[L]}$ and $b^{[L]}$ are the $L^{th}$ layer parameters.
- Superscript $(i)$ denotes a quantity associated with the $i^{th}$ example.
- Example: $x^{(i)}$ is the $i^{th}$ training example.
- Subscript $i$ denotes the $i^{th}$ entry of a vector.
- Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the $l^{th}$ layer's activations.
Let's get started!
## 1 - Packages
Let's first import all the packages that you will need during this assignment.
- [numpy](https://www.numpy.org) is the main package for scientific computing with Python.
- [matplotlib](http://matplotlib.org) is a library to plot graphs in Python.
- dnn_utils provides some necessary functions for this notebook.
- testCases provides some test cases to assess the correctness of your functions
- np.random.seed(1) is used to keep all the random function calls consistent. It will help us grade your work. Please don't change the seed.
```
import numpy as np
import h5py
import matplotlib.pyplot as plt
from testCases import *
from dnn_utils import sigmoid, sigmoid_backward, relu, relu_backward
%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
```
## 2 - Outline of the Assignment
To build your neural network, you will be implementing several "helper functions". These helper functions will be used in the next assignment to build a two-layer neural network and an L-layer neural network. Each small helper function you will implement will have detailed instructions that will walk you through the necessary steps. Here is an outline of this assignment, you will:
- Initialize the parameters for a two-layer network and for an $L$-layer neural network.
- Implement the forward propagation module (shown in purple in the figure below).
- Complete the LINEAR part of a layer's forward propagation step (resulting in $Z^{[l]}$).
- We give you the ACTIVATION function (relu/sigmoid).
- Combine the previous two steps into a new [LINEAR->ACTIVATION] forward function.
- Stack the [LINEAR->RELU] forward function L-1 times (for layers 1 through L-1) and add a [LINEAR->SIGMOID] at the end (for the final layer $L$). This gives you a new L_model_forward function.
- Compute the loss.
- Implement the backward propagation module (denoted in red in the figure below).
- Complete the LINEAR part of a layer's backward propagation step.
- We give you the gradient of the ACTIVATION function (relu_backward/sigmoid_backward)
- Combine the previous two steps into a new [LINEAR->ACTIVATION] backward function.
- Stack [LINEAR->RELU] backward L-1 times and add [LINEAR->SIGMOID] backward in a new L_model_backward function
- Finally update the parameters.
<img src="images/final outline.png" style="width:800px;height:500px;">
<caption><center> **Figure 1**</center></caption><br>
**Note** that for every forward function, there is a corresponding backward function. That is why at every step of your forward module you will be storing some values in a cache. The cached values are useful for computing gradients. In the backpropagation module you will then use the cache to calculate the gradients. This assignment will show you exactly how to carry out each of these steps.
## 3 - Initialization
You will write two helper functions that will initialize the parameters for your model. The first function will be used to initialize parameters for a two layer model. The second one will generalize this initialization process to $L$ layers.
### 3.1 - 2-layer Neural Network
**Exercise**: Create and initialize the parameters of the 2-layer neural network.
**Instructions**:
- The model's structure is: *LINEAR -> RELU -> LINEAR -> SIGMOID*.
- Use random initialization for the weight matrices. Use `np.random.randn(shape)*0.01` with the correct shape.
- Use zero initialization for the biases. Use `np.zeros(shape)`.
```
# GRADED FUNCTION: initialize_parameters
def initialize_parameters(n_x, n_h, n_y):
"""
Argument:
n_x -- size of the input layer
n_h -- size of the hidden layer
n_y -- size of the output layer
Returns:
parameters -- python dictionary containing your parameters:
W1 -- weight matrix of shape (n_h, n_x)
b1 -- bias vector of shape (n_h, 1)
W2 -- weight matrix of shape (n_y, n_h)
b2 -- bias vector of shape (n_y, 1)
"""
np.random.seed(1)
### START CODE HERE ### (≈ 4 lines of code)
W1 = np.random.randn(n_h, n_x) * 0.01
b1 = np.zeros(shape=(n_h, 1))
W2 = np.random.randn(n_y, n_h) * 0.01
b2 = np.zeros(shape=(n_y, 1))
### END CODE HERE ###
assert(W1.shape == (n_h, n_x))
assert(b1.shape == (n_h, 1))
assert(W2.shape == (n_y, n_h))
assert(b2.shape == (n_y, 1))
parameters = {"W1": W1,
"b1": b1,
"W2": W2,
"b2": b2}
return parameters
parameters = initialize_parameters(2,2,1)
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td> [[ 0.01624345 -0.00611756]
[-0.00528172 -0.01072969]] </td>
</tr>
<tr>
<td> **b1**</td>
<td>[[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2**</td>
<td> [[ 0.00865408 -0.02301539]]</td>
</tr>
<tr>
<td> **b2** </td>
<td> [[ 0.]] </td>
</tr>
</table>
### 3.2 - L-layer Neural Network
The initialization for a deeper L-layer neural network is more complicated because there are many more weight matrices and bias vectors. When completing the `initialize_parameters_deep`, you should make sure that your dimensions match between each layer. Recall that $n^{[l]}$ is the number of units in layer $l$. Thus for example if the size of our input $X$ is $(12288, 209)$ (with $m=209$ examples) then:
<table style="width:100%">
<tr>
<td> </td>
<td> **Shape of W** </td>
<td> **Shape of b** </td>
<td> **Activation** </td>
<td> **Shape of Activation** </td>
<tr>
<tr>
<td> **Layer 1** </td>
<td> $(n^{[1]},12288)$ </td>
<td> $(n^{[1]},1)$ </td>
<td> $Z^{[1]} = W^{[1]} X + b^{[1]} $ </td>
<td> $(n^{[1]},209)$ </td>
<tr>
<tr>
<td> **Layer 2** </td>
<td> $(n^{[2]}, n^{[1]})$ </td>
<td> $(n^{[2]},1)$ </td>
<td>$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}$ </td>
<td> $(n^{[2]}, 209)$ </td>
<tr>
<tr>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$ </td>
<td> $\vdots$</td>
<td> $\vdots$ </td>
<tr>
<tr>
<td> **Layer L-1** </td>
<td> $(n^{[L-1]}, n^{[L-2]})$ </td>
<td> $(n^{[L-1]}, 1)$ </td>
<td>$Z^{[L-1]} = W^{[L-1]} A^{[L-2]} + b^{[L-1]}$ </td>
<td> $(n^{[L-1]}, 209)$ </td>
<tr>
<tr>
<td> **Layer L** </td>
<td> $(n^{[L]}, n^{[L-1]})$ </td>
<td> $(n^{[L]}, 1)$ </td>
<td> $Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}$</td>
<td> $(n^{[L]}, 209)$ </td>
<tr>
</table>
Remember that when we compute $W X + b$ in python, it carries out broadcasting. For example, if:
$$ W = \begin{bmatrix}
j & k & l\\
m & n & o \\
p & q & r
\end{bmatrix}\;\;\; X = \begin{bmatrix}
a & b & c\\
d & e & f \\
g & h & i
\end{bmatrix} \;\;\; b =\begin{bmatrix}
s \\
t \\
u
\end{bmatrix}\tag{2}$$
Then $WX + b$ will be:
$$ WX + b = \begin{bmatrix}
(ja + kd + lg) + s & (jb + ke + lh) + s & (jc + kf + li)+ s\\
(ma + nd + og) + t & (mb + ne + oh) + t & (mc + nf + oi) + t\\
(pa + qd + rg) + u & (pb + qe + rh) + u & (pc + qf + ri)+ u
\end{bmatrix}\tag{3} $$
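For example, here is a quick (non-graded) numpy illustration of this broadcasting, with arbitrary small shapes:
```python
import numpy as np

W = np.random.randn(3, 3)
X = np.random.randn(3, 3)
b = np.random.randn(3, 1)   # a column vector

Z = np.dot(W, X) + b        # b is broadcast across the columns of W X
print(Z.shape)              # (3, 3)
```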
**Exercise**: Implement initialization for an L-layer Neural Network.
**Instructions**:
- The model's structure is *[LINEAR -> RELU] $ \times$ (L-1) -> LINEAR -> SIGMOID*. I.e., it has $L-1$ layers using a ReLU activation function followed by an output layer with a sigmoid activation function.
- Use random initialization for the weight matrices. Use `np.random.randn(shape) * 0.01`.
- Use zeros initialization for the biases. Use `np.zeros(shape)`.
- We will store $n^{[l]}$, the number of units in different layers, in a variable `layer_dims`. For example, the `layer_dims` for the "Planar Data classification model" from last week would have been [2,4,1]: There were two inputs, one hidden layer with 4 hidden units, and an output layer with 1 output unit. This means `W1`'s shape was (4,2), `b1` was (4,1), `W2` was (1,4) and `b2` was (1,1). Now you will generalize this to $L$ layers!
- Here is the implementation for $L=1$ (one layer neural network). It should inspire you to implement the general case (L-layer neural network).
```python
if L == 1:
parameters["W" + str(L)] = np.random.randn(layer_dims[1], layer_dims[0]) * 0.01
parameters["b" + str(L)] = np.zeros((layer_dims[1], 1))
```
```
# GRADED FUNCTION: initialize_parameters_deep
def initialize_parameters_deep(layer_dims):
"""
Arguments:
layer_dims -- python array (list) containing the dimensions of each layer in our network
Returns:
parameters -- python dictionary containing your parameters "W1", "b1", ..., "WL", "bL":
Wl -- weight matrix of shape (layer_dims[l], layer_dims[l-1])
bl -- bias vector of shape (layer_dims[l], 1)
"""
np.random.seed(3)
parameters = {}
L = len(layer_dims) # number of layers in the network
for l in range(1, L):
### START CODE HERE ### (≈ 2 lines of code)
parameters['W' + str(l)] = np.random.randn(layer_dims[l], layer_dims[l - 1]) * 0.01
parameters['b' + str(l)] = np.zeros((layer_dims[l], 1))
### END CODE HERE ###
assert(parameters['W' + str(l)].shape == (layer_dims[l], layer_dims[l - 1]))
assert(parameters['b' + str(l)].shape == (layer_dims[l], 1))
return parameters
parameters = initialize_parameters_deep([5,4,3])
print("W1 = " + str(parameters["W1"]))
print("b1 = " + str(parameters["b1"]))
print("W2 = " + str(parameters["W2"]))
print("b2 = " + str(parameters["b2"]))
```
**Expected output**:
<table style="width:80%">
<tr>
<td> **W1** </td>
<td>[[ 0.01788628 0.0043651 0.00096497 -0.01863493 -0.00277388]
[-0.00354759 -0.00082741 -0.00627001 -0.00043818 -0.00477218]
[-0.01313865 0.00884622 0.00881318 0.01709573 0.00050034]
[-0.00404677 -0.0054536 -0.01546477 0.00982367 -0.01101068]]</td>
</tr>
<tr>
<td>**b1** </td>
<td>[[ 0.]
[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
<tr>
<td>**W2** </td>
<td>[[-0.01185047 -0.0020565 0.01486148 0.00236716]
[-0.01023785 -0.00712993 0.00625245 -0.00160513]
[-0.00768836 -0.00230031 0.00745056 0.01976111]]</td>
</tr>
<tr>
<td>**b2** </td>
<td>[[ 0.]
[ 0.]
[ 0.]]</td>
</tr>
</table>
## 4 - Forward propagation module
### 4.1 - Linear Forward
Now that you have initialized your parameters, you will do the forward propagation module. You will start by implementing some basic functions that you will use later when implementing the model. You will complete three functions in this order:
- LINEAR
- LINEAR -> ACTIVATION where ACTIVATION will be either ReLU or Sigmoid.
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID (whole model)
The linear forward module (vectorized over all the examples) computes the following equations:
$$Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}\tag{4}$$
where $A^{[0]} = X$.
**Exercise**: Build the linear part of forward propagation.
**Reminder**:
The mathematical representation of this unit is $Z^{[l]} = W^{[l]}A^{[l-1]} +b^{[l]}$. You may also find `np.dot()` useful. If your dimensions don't match, printing `W.shape` may help.
```
# GRADED FUNCTION: linear_forward
def linear_forward(A, W, b):
"""
Implement the linear part of a layer's forward propagation.
Arguments:
A -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
Returns:
Z -- the input of the activation function, also called pre-activation parameter
cache -- a python dictionary containing "A", "W" and "b" ; stored for computing the backward pass efficiently
"""
### START CODE HERE ### (≈ 1 line of code)
Z = np.dot(W, A) + b
### END CODE HERE ###
assert(Z.shape == (W.shape[0], A.shape[1]))
cache = (A, W, b)
return Z, cache
A, W, b = linear_forward_test_case()
Z, linear_cache = linear_forward(A, W, b)
print("Z = " + str(Z))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **Z** </td>
<td> [[ 3.1980455 7.85763489]] </td>
</tr>
</table>
### 4.2 - Linear-Activation Forward
In this notebook, you will use two activation functions:
- **Sigmoid**: $\sigma(Z) = \sigma(W A + b) = \frac{1}{ 1 + e^{-(W A + b)}}$. We have provided you with the `sigmoid` function. This function returns **two** items: the activation value "`a`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = sigmoid(Z)
```
- **ReLU**: The mathematical formula for ReLu is $A = RELU(Z) = max(0, Z)$. We have provided you with the `relu` function. This function returns **two** items: the activation value "`A`" and a "`cache`" that contains "`Z`" (it's what we will feed in to the corresponding backward function). To use it you could just call:
``` python
A, activation_cache = relu(Z)
```
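The exact implementations are provided in `dnn_utils`, but for intuition, a minimal sketch of what these two helpers might look like (assuming the activation cache simply stores `Z`) is:
```python
def sigmoid_sketch(Z):
    A = 1 / (1 + np.exp(-Z))   # elementwise sigmoid
    cache = Z                  # cache Z for the backward pass
    return A, cache

def relu_sketch(Z):
    A = np.maximum(0, Z)       # elementwise ReLU
    cache = Z
    return A, cache
```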
For more convenience, you are going to group two functions (Linear and Activation) into one function (LINEAR->ACTIVATION). Hence, you will implement a function that does the LINEAR forward step followed by an ACTIVATION forward step.
**Exercise**: Implement the forward propagation of the *LINEAR->ACTIVATION* layer. Mathematical relation is: $A^{[l]} = g(Z^{[l]}) = g(W^{[l]}A^{[l-1]} +b^{[l]})$ where the activation "g" can be sigmoid() or relu(). Use linear_forward() and the correct activation function.
```
# GRADED FUNCTION: linear_activation_forward
def linear_activation_forward(A_prev, W, b, activation):
"""
Implement the forward propagation for the LINEAR->ACTIVATION layer
Arguments:
A_prev -- activations from previous layer (or input data): (size of previous layer, number of examples)
W -- weights matrix: numpy array of shape (size of current layer, size of previous layer)
b -- bias vector, numpy array of shape (size of the current layer, 1)
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
A -- the output of the activation function, also called the post-activation value
cache -- a python dictionary containing "linear_cache" and "activation_cache";
stored for computing the backward pass efficiently
"""
if activation == "sigmoid":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = sigmoid(Z)
### END CODE HERE ###
elif activation == "relu":
# Inputs: "A_prev, W, b". Outputs: "A, activation_cache".
### START CODE HERE ### (≈ 2 lines of code)
Z, linear_cache = linear_forward(A_prev, W, b)
A, activation_cache = relu(Z)
### END CODE HERE ###
assert (A.shape == (W.shape[0], A_prev.shape[1]))
cache = (linear_cache, activation_cache)
return A, cache
A_prev, W, b = linear_activation_forward_test_case()
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "sigmoid")
print("With sigmoid: A = " + str(A))
A, linear_activation_cache = linear_activation_forward(A_prev, W, b, activation = "relu")
print("With ReLU: A = " + str(A))
```
**Expected output**:
<table style="width:35%">
<tr>
<td> **With sigmoid: A ** </td>
<td > [[ 0.96076066 0.99961336]]</td>
</tr>
<tr>
<td> **With ReLU: A ** </td>
<td > [[ 3.1980455 7.85763489]]</td>
</tr>
</table>
**Note**: In deep learning, the "[LINEAR->ACTIVATION]" computation is counted as a single layer in the neural network, not two layers.
### 4.3 - L-Layer Model
For even more convenience when implementing the $L$-layer Neural Net, you will need a function that replicates the previous one (`linear_activation_forward` with RELU) $L-1$ times, then follows that with one `linear_activation_forward` with SIGMOID.
<img src="images/model_architecture_kiank.png" style="width:600px;height:300px;">
<caption><center> **Figure 2** : *[LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model</center></caption><br>
**Exercise**: Implement the forward propagation of the above model.
**Instruction**: In the code below, the variable `AL` will denote $A^{[L]} = \sigma(Z^{[L]}) = \sigma(W^{[L]} A^{[L-1]} + b^{[L]})$. (This is sometimes also called `Yhat`, i.e., this is $\hat{Y}$.)
**Tips**:
- Use the functions you had previously written
- Use a for loop to replicate [LINEAR->RELU] (L-1) times
- Don't forget to keep track of the caches in the "caches" list. To add a new value `c` to a `list`, you can use `list.append(c)`.
```
# GRADED FUNCTION: L_model_forward
def L_model_forward(X, parameters):
"""
Implement forward propagation for the [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID computation
Arguments:
X -- data, numpy array of shape (input size, number of examples)
parameters -- output of initialize_parameters_deep()
Returns:
AL -- last post-activation value
caches -- list of caches containing:
every cache of linear_relu_forward() (there are L-1 of them, indexed from 0 to L-2)
the cache of linear_sigmoid_forward() (there is one, indexed L-1)
"""
caches = []
A = X
L = len(parameters) // 2 # number of layers in the neural network
# Implement [LINEAR -> RELU]*(L-1). Add "cache" to the "caches" list.
for l in range(1, L):
A_prev = A
### START CODE HERE ### (≈ 2 lines of code)
A, cache = linear_activation_forward(A_prev,
parameters['W' + str(l)],
parameters['b' + str(l)],
activation='relu')
caches.append(cache)
### END CODE HERE ###
# Implement LINEAR -> SIGMOID. Add "cache" to the "caches" list.
### START CODE HERE ### (≈ 2 lines of code)
AL, cache = linear_activation_forward(A,
parameters['W' + str(L)],
parameters['b' + str(L)],
activation='sigmoid')
caches.append(cache)
### END CODE HERE ###
assert(AL.shape == (1, X.shape[1]))
return AL, caches
X, parameters = L_model_forward_test_case()
AL, caches = L_model_forward(X, parameters)
print("AL = " + str(AL))
print("Length of caches list = " + str(len(caches)))
```
<table style="width:40%">
<tr>
<td> **AL** </td>
<td > [[ 0.0844367 0.92356858]]</td>
</tr>
<tr>
<td> **Length of caches list ** </td>
<td > 2</td>
</tr>
</table>
Great! Now you have a full forward propagation that takes the input X and outputs a row vector $A^{[L]}$ containing your predictions. It also records all intermediate values in "caches". Using $A^{[L]}$, you can compute the cost of your predictions.
## 5 - Cost function
Now you will implement forward and backward propagation. You need to compute the cost, because you want to check if your model is actually learning.
**Exercise**: Compute the cross-entropy cost $J$, using the following formula: $$-\frac{1}{m} \sum\limits_{i = 1}^{m} (y^{(i)}\log\left(a^{[L] (i)}\right) + (1-y^{(i)})\log\left(1- a^{[L](i)}\right)) \tag{7}$$
```
# GRADED FUNCTION: compute_cost
def compute_cost(AL, Y):
"""
Implement the cost function defined by equation (7).
Arguments:
AL -- probability vector corresponding to your label predictions, shape (1, number of examples)
Y -- true "label" vector (for example: containing 0 if non-cat, 1 if cat), shape (1, number of examples)
Returns:
cost -- cross-entropy cost
"""
m = Y.shape[1]
# Compute loss from aL and y.
### START CODE HERE ### (≈ 1 lines of code)
cost = (-1 / m) * np.sum(np.multiply(Y, np.log(AL)) + np.multiply(1 - Y, np.log(1 - AL)))
### END CODE HERE ###
cost = np.squeeze(cost) # To make sure your cost's shape is what we expect (e.g. this turns [[17]] into 17).
assert(cost.shape == ())
return cost
Y, AL = compute_cost_test_case()
print("cost = " + str(compute_cost(AL, Y)))
```
**Expected Output**:
<table>
<tr>
<td>**cost** </td>
<td> 0.41493159961539694</td>
</tr>
</table>
## 6 - Backward propagation module
Just like with forward propagation, you will implement helper functions for backpropagation. Remember that back propagation is used to calculate the gradient of the loss function with respect to the parameters.
**Reminder**:
<img src="images/backprop_kiank.png" style="width:650px;height:250px;">
<caption><center> **Figure 3** : Forward and Backward propagation for *LINEAR->RELU->LINEAR->SIGMOID* <br> *The purple blocks represent the forward propagation, and the red blocks represent the backward propagation.* </center></caption>
<!--
For those of you who are expert in calculus (you don't need to be to do this assignment), the chain rule of calculus can be used to derive the derivative of the loss $\mathcal{L}$ with respect to $z^{[1]}$ in a 2-layer network as follows:
$$\frac{d \mathcal{L}(a^{[2]},y)}{{dz^{[1]}}} = \frac{d\mathcal{L}(a^{[2]},y)}{{da^{[2]}}}\frac{{da^{[2]}}}{{dz^{[2]}}}\frac{{dz^{[2]}}}{{da^{[1]}}}\frac{{da^{[1]}}}{{dz^{[1]}}} \tag{8} $$
In order to calculate the gradient $dW^{[1]} = \frac{\partial L}{\partial W^{[1]}}$, you use the previous chain rule and you do $dW^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial W^{[1]}}$. During the backpropagation, at each step you multiply your current gradient by the gradient corresponding to the specific layer to get the gradient you wanted.
Equivalently, in order to calculate the gradient $db^{[1]} = \frac{\partial L}{\partial b^{[1]}}$, you use the previous chain rule and you do $db^{[1]} = dz^{[1]} \times \frac{\partial z^{[1]} }{\partial b^{[1]}}$.
This is why we talk about **backpropagation**.
!-->
Now, similar to forward propagation, you are going to build the backward propagation in three steps:
- LINEAR backward
- LINEAR -> ACTIVATION backward where ACTIVATION computes the derivative of either the ReLU or sigmoid activation
- [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID backward (whole model)
### 6.1 - Linear backward
For layer $l$, the linear part is: $Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}$ (followed by an activation).
Suppose you have already calculated the derivative $dZ^{[l]} = \frac{\partial \mathcal{L} }{\partial Z^{[l]}}$. You want to get $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$.
<img src="images/linearback_kiank.png" style="width:250px;height:300px;">
<caption><center> **Figure 4** </center></caption>
The three outputs $(dW^{[l]}, db^{[l]}, dA^{[l-1]})$ are computed using the input $dZ^{[l]}$. Here are the formulas you need:
$$ dW^{[l]} = \frac{\partial \mathcal{L} }{\partial W^{[l]}} = \frac{1}{m} dZ^{[l]} A^{[l-1] T} \tag{8}$$
$$ db^{[l]} = \frac{\partial \mathcal{L} }{\partial b^{[l]}} = \frac{1}{m} \sum_{i = 1}^{m} dZ^{[l](i)}\tag{9}$$
$$ dA^{[l-1]} = \frac{\partial \mathcal{L} }{\partial A^{[l-1]}} = W^{[l] T} dZ^{[l]} \tag{10}$$
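As a quick sanity check on the shapes these formulas imply (made-up layer sizes, not part of the graded code):
```python
# layer l has 4 units, layer l-1 has 3 units, m = 5 examples
dZ = np.random.randn(4, 5)
A_prev = np.random.randn(3, 5)
W = np.random.randn(4, 3)
print((np.dot(dZ, A_prev.T) / 5).shape)                 # dW: (4, 3), same as W
print((np.sum(dZ, axis=1, keepdims=True) / 5).shape)    # db: (4, 1), same as b
print(np.dot(W.T, dZ).shape)                            # dA_prev: (3, 5), same as A_prev
```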
**Exercise**: Use the 3 formulas above to implement linear_backward().
```
# GRADED FUNCTION: linear_backward
def linear_backward(dZ, cache):
"""
Implement the linear portion of backward propagation for a single layer (layer l)
Arguments:
dZ -- Gradient of the cost with respect to the linear output (of current layer l)
cache -- tuple of values (A_prev, W, b) coming from the forward propagation in the current layer
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
A_prev, W, b = cache
m = A_prev.shape[1]
### START CODE HERE ### (≈ 3 lines of code)
    dW = np.dot(dZ, A_prev.T) / m
    db = np.squeeze(np.sum(dZ, axis=1, keepdims=True)) / m
    dA_prev = np.dot(W.T, dZ)
### END CODE HERE ###
assert (dA_prev.shape == A_prev.shape)
assert (dW.shape == W.shape)
assert (isinstance(db, float))
return dA_prev, dW, db
# Set up some test inputs
dZ, linear_cache = linear_backward_test_case()
dA_prev, dW, db = linear_backward(dZ, linear_cache)
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected Output**:
<table style="width:90%">
<tr>
<td> **dA_prev** </td>
<td > [[ 2.38272385 5.85438014]
[ 6.31969219 15.52755701]
[ -3.97876302 -9.77586689]] </td>
</tr>
<tr>
<td> **dW** </td>
<td > [[ 2.77870358 -0.05500058 -5.13144969]] </td>
</tr>
<tr>
<td> **db** </td>
<td> 5.527840195 </td>
</tr>
</table>
### 6.2 - Linear-Activation backward
Next, you will create a function, **`linear_activation_backward`**, that merges the linear backward step (**`linear_backward`**) with the backward step for the activation.
To help you implement `linear_activation_backward`, we provided two backward functions:
- **`sigmoid_backward`**: Implements the backward propagation for SIGMOID unit. You can call it as follows:
```python
dZ = sigmoid_backward(dA, activation_cache)
```
- **`relu_backward`**: Implements the backward propagation for RELU unit. You can call it as follows:
```python
dZ = relu_backward(dA, activation_cache)
```
If $g(.)$ is the activation function,
`sigmoid_backward` and `relu_backward` compute $$dZ^{[l]} = dA^{[l]} * g'(Z^{[l]}) \tag{11}$$.
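These backward helpers are given to you, but as a minimal sketch consistent with equation (11) (assuming the activation cache stores `Z`), `relu_backward` could be written as:
```python
def relu_backward_sketch(dA, activation_cache):
    Z = activation_cache
    dZ = np.array(dA, copy=True)   # dA * g'(Z), with g'(Z) = 1 where Z > 0
    dZ[Z <= 0] = 0                 # and g'(Z) = 0 where Z <= 0
    return dZ
```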
**Exercise**: Implement the backpropagation for the *LINEAR->ACTIVATION* layer.
```
# GRADED FUNCTION: linear_activation_backward
def linear_activation_backward(dA, cache, activation):
"""
Implement the backward propagation for the LINEAR->ACTIVATION layer.
Arguments:
dA -- post-activation gradient for current layer l
cache -- tuple of values (linear_cache, activation_cache) we store for computing backward propagation efficiently
activation -- the activation to be used in this layer, stored as a text string: "sigmoid" or "relu"
Returns:
dA_prev -- Gradient of the cost with respect to the activation (of the previous layer l-1), same shape as A_prev
dW -- Gradient of the cost with respect to W (current layer l), same shape as W
db -- Gradient of the cost with respect to b (current layer l), same shape as b
"""
linear_cache, activation_cache = cache
if activation == "relu":
### START CODE HERE ### (≈ 2 lines of code)
dZ = relu_backward(dA, activation_cache)
### END CODE HERE ###
elif activation == "sigmoid":
### START CODE HERE ### (≈ 2 lines of code)
dZ = sigmoid_backward(dA, activation_cache)
### END CODE HERE ###
# Shorten the code
dA_prev, dW, db = linear_backward(dZ, linear_cache)
return dA_prev, dW, db
AL, linear_activation_cache = linear_activation_backward_test_case()
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "sigmoid")
print ("sigmoid:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db) + "\n")
dA_prev, dW, db = linear_activation_backward(AL, linear_activation_cache, activation = "relu")
print ("relu:")
print ("dA_prev = "+ str(dA_prev))
print ("dW = " + str(dW))
print ("db = " + str(db))
```
**Expected output with sigmoid:**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td >[[ 0.08982777 0.00226265]
[ 0.23824996 0.00600122]
[-0.14999783 -0.00377826]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[-0.06001514 -0.09687383 -0.10598695]] </td>
</tr>
<tr>
<td > db </td>
<td > 0.061800984273 </td>
</tr>
</table>
**Expected output with relu**
<table style="width:100%">
<tr>
<td > dA_prev </td>
<td > [[ 2.38272385 5.85438014]
[ 6.31969219 15.52755701]
[ -3.97876302 -9.77586689]] </td>
</tr>
<tr>
<td > dW </td>
<td > [[ 2.77870358 -0.05500058 -5.13144969]] </td>
</tr>
<tr>
<td > db </td>
<td > 5.527840195 </td>
</tr>
</table>
### 6.3 - L-Model Backward
Now you will implement the backward function for the whole network. Recall that when you implemented the `L_model_forward` function, at each iteration, you stored a cache which contains (X,W,b, and z). In the back propagation module, you will use those variables to compute the gradients. Therefore, in the `L_model_backward` function, you will iterate through all the hidden layers backward, starting from layer $L$. On each step, you will use the cached values for layer $l$ to backpropagate through layer $l$. Figure 5 below shows the backward pass.
<img src="images/mn_backward.png" style="width:450px;height:300px;">
<caption><center> **Figure 5** : Backward pass </center></caption>
**Initializing backpropagation**:
To backpropagate through this network, we know that the output is,
$A^{[L]} = \sigma(Z^{[L]})$. Your code thus needs to compute `dAL` $= \frac{\partial \mathcal{L}}{\partial A^{[L]}}$.
To do so, use this formula (derived using calculus which you don't need in-depth knowledge of):
```python
dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL)) # derivative of cost with respect to AL
```
You can then use this post-activation gradient `dAL` to keep going backward. As seen in Figure 5, you can now feed in `dAL` into the LINEAR->SIGMOID backward function you implemented (which will use the cached values stored by the L_model_forward function). After that, you will have to use a `for` loop to iterate through all the other layers using the LINEAR->RELU backward function. You should store each dA, dW, and db in the grads dictionary. To do so, use this formula :
$$grads["dW" + str(l)] = dW^{[l]}\tag{15} $$
For example, for $l=3$ this would store $dW^{[l]}$ in `grads["dW3"]`.
**Exercise**: Implement backpropagation for the *[LINEAR->RELU] $\times$ (L-1) -> LINEAR -> SIGMOID* model.
```
# GRADED FUNCTION: L_model_backward
def L_model_backward(AL, Y, caches):
"""
Implement the backward propagation for the [LINEAR->RELU] * (L-1) -> LINEAR -> SIGMOID group
Arguments:
AL -- probability vector, output of the forward propagation (L_model_forward())
Y -- true "label" vector (containing 0 if non-cat, 1 if cat)
caches -- list of caches containing:
every cache of linear_activation_forward() with "relu" (it's caches[l], for l in range(L-1) i.e l = 0...L-2)
the cache of linear_activation_forward() with "sigmoid" (it's caches[L-1])
Returns:
grads -- A dictionary with the gradients
grads["dA" + str(l)] = ...
grads["dW" + str(l)] = ...
grads["db" + str(l)] = ...
"""
grads = {}
L = len(caches) # the number of layers
m = AL.shape[1]
Y = Y.reshape(AL.shape) # after this line, Y is the same shape as AL
# Initializing the backpropagation
### START CODE HERE ### (1 line of code)
    dAL = - (np.divide(Y, AL) - np.divide(1 - Y, 1 - AL))  # derivative of cost with respect to AL
### END CODE HERE ###
# Lth layer (SIGMOID -> LINEAR) gradients. Inputs: "AL, Y, caches". Outputs: "grads["dAL"], grads["dWL"], grads["dbL"]
### START CODE HERE ### (approx. 2 lines)
current_cache = caches[-1]
grads["dA" + str(L)], grads["dW" + str(L)], grads["db" + str(L)] = linear_backward(sigmoid_backward(dAL,
current_cache[1]),
current_cache[0])
### END CODE HERE ###
for l in reversed(range(L-1)):
# lth layer: (RELU -> LINEAR) gradients.
# Inputs: "grads["dA" + str(l + 2)], caches". Outputs: "grads["dA" + str(l + 1)] , grads["dW" + str(l + 1)] , grads["db" + str(l + 1)]
### START CODE HERE ### (approx. 5 lines)
        current_cache = caches[l]
        dA_prev_temp, dW_temp, db_temp = linear_activation_backward(grads["dA" + str(l + 2)],
                                                                    current_cache,
                                                                    activation="relu")
grads["dA" + str(l + 1)] = dA_prev_temp
grads["dW" + str(l + 1)] = dW_temp
grads["db" + str(l + 1)] = db_temp
### END CODE HERE ###
return grads
X_assess, Y_assess, AL, caches = L_model_backward_test_case()
grads = L_model_backward(AL, Y_assess, caches)
print ("dW1 = "+ str(grads["dW1"]))
print ("db1 = "+ str(grads["db1"]))
print ("dA1 = "+ str(grads["dA1"]))
```
**Expected Output**
<table style="width:60%">
<tr>
<td > dW1 </td>
<td > [[-0.09686122 -0.04840482 -0.11864308]] </td>
</tr>
<tr>
<td > db1 </td>
<td > -0.262594998379 </td>
</tr>
<tr>
<td > dA1 </td>
<td > [[-0.71011462 -0.22925516]
[-0.17330152 -0.05594909]
[-0.03831107 -0.01236844]] </td>
</tr>
</table>
### 6.4 - Update Parameters
In this section you will update the parameters of the model, using gradient descent:
$$ W^{[l]} = W^{[l]} - \alpha \text{ } dW^{[l]} \tag{16}$$
$$ b^{[l]} = b^{[l]} - \alpha \text{ } db^{[l]} \tag{17}$$
where $\alpha$ is the learning rate. After computing the updated parameters, store them in the parameters dictionary.
**Exercise**: Implement `update_parameters()` to update your parameters using gradient descent.
**Instructions**:
Update parameters using gradient descent on every $W^{[l]}$ and $b^{[l]}$ for $l = 1, 2, ..., L$.
```
# GRADED FUNCTION: update_parameters
def update_parameters(parameters, grads, learning_rate):
"""
Update parameters using gradient descent
Arguments:
parameters -- python dictionary containing your parameters
    grads -- python dictionary containing your gradients, output of L_model_backward
    learning_rate -- the learning rate, a scalar used in the gradient descent update
Returns:
parameters -- python dictionary containing your updated parameters
parameters["W" + str(l)] = ...
parameters["b" + str(l)] = ...
"""
L = len(parameters) // 2 # number of layers in the neural network
# Update rule for each parameter. Use a for loop.
### START CODE HERE ### (≈ 3 lines of code)
for l in range(L):
parameters["W" + str(l + 1)] = parameters["W" + str(l + 1)] - learning_rate * grads["dW" + str(l + 1)]
parameters["b" + str(l + 1)] = parameters["b" + str(l + 1)] - learning_rate * grads["db" + str(l + 1)]
### END CODE HERE ###
return parameters
parameters, grads = update_parameters_test_case()
parameters = update_parameters(parameters, grads, 0.1)
print ("W1 = " + str(parameters["W1"]))
print ("b1 = " + str(parameters["b1"]))
print ("W2 = " + str(parameters["W2"]))
print ("b2 = " + str(parameters["b2"]))
print ("W3 = " + str(parameters["W3"]))
print ("b3 = " + str(parameters["b3"]))
```
**Expected Output**:
<table style="width:100%">
<tr>
<td > W1 </td>
<td > [[ 1.72555789 0.3700272 0.07818896]
[-1.8634927 -0.2773882 -0.35475898]
[-0.08274148 -0.62700068 -0.04381817]
[-0.47721803 -1.31386475 0.88462238]] </td>
</tr>
<tr>
<td > b1 </td>
<td > [[-0.07593768]
[-0.07593768]
[-0.07593768]
[-0.07593768]] </td>
</tr>
<tr>
<td > W2 </td>
<td > [[ 0.71838378 1.70957306 0.05003364 -0.40467741]
[-0.54535995 -1.54647732 0.98236743 -1.10106763]
[-1.18504653 -0.2056499 1.48614836 0.23671627]] </td>
</tr>
<tr>
<td > b2 </td>
<td > [[-0.08616376]
[-0.08616376]
[-0.08616376]] </td>
</tr>
<tr>
<td > W3 </td>
<td > [[-0.88352436 -0.7129932 0.62524497]
[-0.02025258 -0.76883635 -0.23003072]] </td>
</tr>
<tr>
<td > b3 </td>
<td > [[ 0.08416196]
[ 0.08416196]] </td>
</tr>
</table>
## 7 - Conclusion
Congrats on implementing all the functions required for building a deep neural network!
We know it was a long assignment but going forward it will only get better. The next part of the assignment is easier.
In the next assignment you will put all these together to build two models:
- A two-layer neural network
- An L-layer neural network
You will in fact use these models to classify cat vs non-cat images!
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
```
# Text Classification of Movie Reviews
```
from helpers import Timer
from sklearn.datasets import load_files
reviews_train = load_files("aclImdb/train/")
text_train, y_train = reviews_train.data, reviews_train.target
print("Number of documents in training data: %d" % len(text_train))
print(np.bincount(y_train))
reviews_test = load_files("aclImdb/test/")
text_test, y_test = reviews_test.data, reviews_test.target
print("Number of documents in test data: %d" % len(text_test))
print(np.bincount(y_test))
print(text_train[1])
print(y_train[1])
```
### Bag of words reminder:
<img src="bag_of_words.svg" width=80%>
```
from sklearn.feature_extraction.text import CountVectorizer
cv = CountVectorizer()
cv.fit(text_train)
len(cv.vocabulary_)
print(cv.get_feature_names()[:50])
print(cv.get_feature_names()[50000:50050])
X_train = cv.transform(text_train)
X_train
print(text_train[19726])
X_train[19726].nonzero()[1]
X_test = cv.transform(text_test)
from sklearn.svm import LinearSVC
svm = LinearSVC()
with Timer():
svm.fit(X_train, y_train)
svm.score(X_train, y_train)
svm.score(X_test, y_test)
def visualize_coefficients(classifier, feature_names, n_top_features=25):
# get coefficients with large absolute values
coef = classifier.coef_.ravel()
positive_coefficients = np.argsort(coef)[-n_top_features:]
negative_coefficients = np.argsort(coef)[:n_top_features]
interesting_coefficients = np.hstack([negative_coefficients, positive_coefficients])
# plot them
plt.figure(figsize=(15, 5))
colors = ["red" if c < 0 else "blue" for c in coef[interesting_coefficients]]
plt.bar(np.arange(2 * n_top_features), coef[interesting_coefficients], color=colors)
feature_names = np.array(feature_names)
plt.xticks(np.arange(1, 1 + 2 * n_top_features), feature_names[interesting_coefficients], rotation=60, ha="right");
visualize_coefficients(svm, cv.get_feature_names())
from sklearn.pipeline import make_pipeline
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
with Timer():
text_pipe.fit(text_train, y_train)
text_pipe.score(text_test, y_test)
from sklearn.grid_search import GridSearchCV  # note: newer scikit-learn versions provide this as sklearn.model_selection.GridSearchCV
param_grid = {'linearsvc__C': np.logspace(-5, 0, 6)}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train);
from figures import plot_grid_1d
plot_grid_1d(grid)
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
```
# Text Classification continuation.
## TfidfVectorizer
```
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf_pipe = make_pipeline(TfidfVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6)}
grid = GridSearchCV(tfidf_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train)
plot_grid_1d(grid)
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['tfidfvectorizer'].get_feature_names())
grid.best_score_
grid.score(text_test, y_test)
```
# N-Grams
```
text_pipe = make_pipeline(CountVectorizer(), LinearSVC())
param_grid = {'linearsvc__C': np.logspace(-3, 2, 6),
"countvectorizer__ngram_range": [(1, 1), (1, 2), (1, 3)]}
grid = GridSearchCV(text_pipe, param_grid, cv=5)
with Timer():
grid.fit(text_train, y_train)
scores = np.array([score.mean_validation_score for score in grid.grid_scores_]).reshape(3, -1)
plt.matshow(scores)
plt.ylabel("n-gram range")
plt.yticks(range(3), param_grid["countvectorizer__ngram_range"])
plt.xlabel("C")
plt.xticks(range(6), param_grid["linearsvc__C"]);
plt.colorbar()
grid.best_params_
visualize_coefficients(grid.best_estimator_.named_steps['linearsvc'],
grid.best_estimator_.named_steps['countvectorizer'].get_feature_names())
grid.score(text_test, y_test)
```
## Look at the Natural Language Toolkit (NLTK)
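NLTK is not used in this notebook, but as a rough sketch of the kind of preprocessing it offers, tokenisation plus stemming can be plugged into `CountVectorizer` through its `tokenizer` argument (the helper name below is ours, for illustration):
```
import nltk
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

nltk.download('punkt')  # tokenizer models (one-time download)

stemmer = PorterStemmer()

def stem_tokenize(text):
    # lowercase, tokenize and stem each token
    return [stemmer.stem(tok) for tok in word_tokenize(text.lower())]

print(stem_tokenize("The movies were surprisingly entertaining"))
# e.g. CountVectorizer(tokenizer=stem_tokenize) would then build a vocabulary of stemmed tokens
```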
<p style="font-family: Arial; font-size:3.75vw;color:purple; font-style:bold"><br>
matplotlib Exercise Notebook
</p><br>
# Exercise Notebook Instructions
### 1. Important: Only modify the cells which instruct you to modify them - leave "do not modify" cells alone.
The code which tests your responses assumes you have run the startup/read-only code exactly.
### 2. Work through the notebook in order.
Some of the steps depend on previous ones, so you'll want to move through the notebook in order.
### 3. It is okay to use numpy libraries.
You may find some of these questions are fairly straightforward to answer using built-in numpy functions. That's totally okay - part of the point of these exercises is to familiarize you with the commonly used numpy functions.
### 4. Seek help if stuck
If you get stuck, don't worry! You can either review the videos/notebooks from this week, ask in the course forums, or look to the solutions for the correct answer. BUT, be careful about looking to the solutions too quickly. Struggling to get the right answer is an important part of the learning process.
```
# DO NOT MODIFY
# import appropriate libraries
import matplotlib.pyplot as plt
import numpy as np
import sqlite3
import pandas as pd
%matplotlib inline
# DO NOT MODIFY
# we will use this dataset for some portions of this exercise.
# source: https://www.kaggle.com/hugomathien/soccer
def get_data():
cnx = sqlite3.connect('database.sqlite')
df = pd.read_sql_query("SELECT * FROM Player_Attributes", cnx)
return df
df = get_data()
#DO NOT MODIFY
# Let's see what is in our dataset
df.describe()
```
<p style="font-family: Arial; font-size:2.75vw;color:purple; font-style:bold"><br>
Exercise 1: Line Plot<br><br></p>
In the cell below, modify the function to plot x vs y, where x and y are column names of the dataframe (df), which is also passed as input to the function. The function should
- First sort the dataframe by the column 'x'
- Take the first 50 rows for plotting (discard the remaining)
- Provide a title
- Label x and y axes
```
# modify this cell
def line_plot(df, x, y):
### BEGIN SOLUTION
pass
### END SOLUTION
# DO NOT MODIFY
# your function should give a plot similar to the following:
line_plot(df, 'potential', 'overall_rating')
```
Your solution to Exercise 1 should look like this:

<p style="font-family: Arial; font-size:2.75vw;color:purple; font-style:bold"><br>
Exercise 2: Histogram <br><br></p>
In the cell below, modify the function to plot a histogram. The function should take an input parameter X which is a column name of the dataframe df, also passed to the function. Be sure to drop NULL values before you plot the histogram.
```
# modify this cell
def plot_histogram(df, X):
### BEGIN SOLUTION
### END SOLUTION
# DO NOT MODIFY
# your plot should look similar to the following:
plot_histogram(df, 'overall_rating')
```
Your solution for Exercise 2 should look like this:

<p style="font-family: Arial; font-size:2.75vw;color:purple; font-style:bold"><br>
Exercise 3: Scatter Plot<br><br></p>
In the cell below, modify the function to plot...
```
# modify this cell
def plot_scatter(df, x, y):
### BEGIN SOLUTION
### END SOLUTION
# DO NOT MODIFY
# your plot should look similar to the following:
plot_scatter(df, 'gk_diving', 'gk_handling')
```
Your solution to Exercise 3 should look like this:

# Variational Inference and Learning in the Big Data Regime
Many real-world modelling problems require fitting models with large numbers of data points and parameters, which has recently become convenient thanks to software implementing automatic differentiation, but they also require uncertainty quantification. Variational inference is a generic family of tools that reformulates (Bayesian) model inference as an optimisation problem, thereby making use of modern software tools while also being able to quantify model uncertainty. This talk will motivate how variational inference works and what the state-of-the-art methods are. We will also accompany the theory with implementations on some simple probabilistic models, such as variational autoencoders (VAE). Time permitting, we will briefly talk about some of the recent frontiers of variational inference, namely normalising flows and Stein Variational Gradient Descent.
💻 Content covered:
Current inference methods: maximum likelihood and Markov chain Monte Carlo
Information theory and KL divergence
Mean field variational inference
Bayesian linear regression
Monte Carlo variational inference (MCVI), reparameterisation trick and law of the unconscious statistician (LOTUS)
Example software implementations: VAE
👾 This lecture will be held online on Microsoft Teams.
🔴The event will be recorded and will be publicly available.
🎉 Attendance is FREE for members! Whether you are a student at Imperial College or not, sign up to be a member at www.icdss.club/joinus
⭐️ We encourage participants of this workshop to have looked at our previous sessions on YouTube. Prerequisites: basic understanding of Bayesian statistics
📖 A schedule of our lecture series is currently available
## Background
- Variational Inference: A Review for Statisticians: https://www.tandfonline.com/doi/full/10.1080/01621459.2017.1285773
- Auto-Encoding Variational Bayes: https://arxiv.org/pdf/1312.6114.pdf
- http://yingzhenli.net/home/en/approximateinference
- https://github.com/ethanluoyc/pytorch-vae
Consider crop yields $y$ and we have a likelihood $p(y|z)$ where $z$ are latent parameters. Suppose $z$ has some prior distribution $p(z)$, then the posterior distribution is
$$
p(z|y) \propto p(y|z)p(z) := \tilde{p}(z|y).
$$
We then want to be able to compute quantities $\mathbb{E}_{z\sim p(z|y)}[h(Z)]$, for certain functions $h$ e.g. $h(z)=z$ for the posterior mean of $Z$.
We could compute $p(z|y)$ analytically if we have nice priors (conjugate priors), but this is not the case for most models, e.g. autoencoders with latent parameters or certain Gaussian mixture models.
Markov chain Monte Carlo (MCMC) allows us to obtain samples $z\sim p(z|y)$ using samplers (e.g. Hamiltonian Monte Carlo (HMC) or Metropolis-Hastings), but it can be very expensive, which prohibits its use in the big-data setting.
### Variational Inference
Variational Inference (VI)/Variational Bayes/Variational Approximation turns this problem into an optimisation problem. We now seek $q(z)$ in a space of functions $\mathcal{Q}$, instead of computing the exact $p(z|y)$, in which
$$KL(q(z) || p(z|y)) = \int \log\frac{q(z)}{p(z|y)} q(z) dz$$
is minimised. Here $KL$ denotes the KL divergence, a measure of how close two distributions are to one another. It is:
- Non-negative
- Is equal to 0 if and only if $q(z) = p(z|y)$
- Note: $KL(q(z)||p(z|y)) \neq KL(p(z|y) || q(z))$. Minimising $KL(p(z|y) || q(z))$ is the objective of Expectation Propagation, which is another method for approximating posterior distributions.
Note that maximum likelihood estimation (MLE) is done by maximising the log-likelihood, which is the same as minimising the KL divergence:
$$
\text{argmin}_{\theta} KL(\hat{p}(y|\theta^*) || p(y|\theta)) = \text{argmin}_{\theta} \frac{1}{n}\sum_{i=1}^n \log \frac{p(y_i|\hat{\theta})}{p(y_i|\theta)} = \text{argmin}_{\theta} \frac{1}{n}\sum_{i=1}^n \log \frac{1}{p(y_i|\theta)} = \text{argmax}_{\theta} \frac{1}{n}\sum_{i=1}^n \log p(y_i|\theta).
$$
**Evidence Lower-Bound**
Suppose I pose a family of posteriors $q(z)$, then
\begin{align*}
KL(q(z) || p(z|y)) = \int \log\frac{q(z)}{p(z|y)} q(z) dz &= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(z|y)] \\
&= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(z,y)] + \log p(y) \\
&= \mathbb{E}_{z\sim q(z)}[\log q(z)] - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] - \mathbb{E}_{z\sim q(z)}[\log p(z)] + \log p(y) \\
&=\log p(y) + \mathbb{E}_{z\sim q(z)}[\log \frac{q(z)}{p(z)}] - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \\
&= \log p(y) + KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
\end{align*}
Since $\log p(y)$ is fixed (it does not depend on $q$) and the KL divergence on the left is non-negative, minimising it is equivalent to minimising:
$$
KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
$$
The evidence lower-bound is $ELBO(q) = \mathbb{E}_{z\sim q(z)}[\log p(y|z)] - KL(q(z) || p(z))$, which is maximised.
### Mean-Field Variational Inference
As fancy as it sounds, it just means specifying a family of posteriors $\mathcal{Q}$ such that
$$
q(z) = \prod_{j=1}^m q_j(z_j),
$$
where $m$ is the number of parameters.
**Coordinate Ascent Variational Inference (CAVI)**
Blei et al. (2017)

Let's look at an example (Li (2021)):
$$
y|x \sim \mathcal{N}(y; x^\intercal\theta, \sigma^2),\qquad \theta\sim\mathcal{N}(\theta; \mu_0, \Gamma_0^{-1}).
$$
This has an analytical solution
$$
p(\theta|\mathcal{D}) = \mathcal{N}(\theta; \mu,\Gamma^{-1})
$$
with
\begin{align*}
\Gamma &= \Gamma_0 + \frac{1}{\sigma^2}X^\intercal X \\
\mu &= \frac{1}{\sigma^2}(X^\intercal X + \Gamma_0)^{-1}X^\intercal y,
\end{align*}
where $X=(x_1,\ldots,x_n)^\intercal$ and $y=(y_1,\ldots,y_n)^\intercal$. **Let's try CAVI**:
\begin{align*}
\log q_1(\theta_1) =& \int q_2(\theta_2) \log \tilde{p}(\theta_1, \theta_2) d\theta_2\\
=& \int -\frac{1}{2}\left[(\theta_1-\mu_1)^2\Gamma_{11} + 2(\theta_1-\mu_1)\Gamma_{12}(\theta_2-\mu_2) \right]q_2(\theta_2) d\theta_2 + const \\
=& -\frac{1}{2}\left[(\theta_1-\mu_1)^2\Gamma_{11} + 2(\theta_1-\mu_1)\Gamma_{12}(\mathbb{E}_{\theta_2\sim q_2}[\theta_2]-\mu_2) \right] + const,
\end{align*}
which is Gaussian with mean and variance
$$
\tilde{\mu}_1 = \mu_1 - \Gamma_{11}^{-1}\Gamma_{12}(\mathbb{E}_{q_2}[\theta_2] - \mu_2),\qquad \tilde{\gamma}_1^{-1} = \Gamma_{11}.
$$
You can obtain an analogous expression for $q_2(\theta_2)$. At convergence of CAVI, it can be shown that $(\tilde{\mu}_1, \tilde{\mu}_2)^\intercal = \mu$, giving
$$
\tilde{\mu}_1 = \mu_1, \qquad \tilde{\mu}_2 = \mu_2.
$$
In this case, CAVI gives Gaussian posteriors.
### Monte Carlo Variational Inference (MCVI)
For big-data situations, the variational expectation term can be (1) very expensive to compute and (2) unavailable in closed form. We can also add more complexity to the posterior instead of using just a mean-field approximation. Recall the bound:
$$
\mathcal{L}(q; p) = KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)].
$$
MCVI calculates the variational expectation using Monte Carlo integration
$$
\mathbb{E}_{z\sim q(z)}[\log p(y_i|z)] \approx \frac{1}{M}\sum_{j=1}^M \log p(y_i|z^j),\qquad z^j\sim q(z).
$$
Even better, we can calculate this using mini-batches:
$$
\sum_{i=1}^n\mathbb{E}_{z\sim q(z)}[\log p(y_i|z)] = \mathbb{E}_{S\sim \{1,\ldots,n\}}\left[\frac{n}{|S|}\sum_{i\in S} \mathbb{E}_q[\log p(y_i|z)] \right],
$$
where the inner expectation can be calculated as before. Now, to minimise $\mathcal{L}(q; p)$, we differentiate with respect to the parameters, let's call it $\theta$. Therefore, we need
\begin{align*}
\nabla_\theta \mathcal{L}(q; p) =& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \right] \\
=& \nabla_\theta \left[ \frac{1}{M}\sum_{j=1}^M \log\frac{q(z^j)}{p(z^j)} \right] - \nabla_\theta\left[\mathbb{E}_{S\sim \{1,\ldots,n\}}\left[\frac{n}{|S|}\sum_{i\in S} \frac{1}{M}\sum_{j=1}^M \log p(y_i|z^j)\right] \right],
\end{align*}
where $z^j\sim q(z)$. We can get rid of the expectation with respect to the mini-batches and get a nice approximation for the bound for each batch $S$.
**Reparameterisation Trick/Law of the Unconscious Statistician (LOTUS)**
LOTUS basically refers to the identity:
$$
E_X[f(X)] = \int f(x) p(x) dx = \int f(g(\epsilon)) p(\epsilon) d\epsilon = E_\epsilon[f(g(\epsilon))]
$$
for $x=g(\epsilon)$, via the inverse function theorem and the change of variable theorem. The reparameterisation trick thus makes it easier to compute the bound by allowing us to sample from a simpler distribution $p(\epsilon)$ to get $q(z)$:
\begin{align*}
\nabla_\theta \mathcal{L}(q; p) =& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{z\sim q(z)}[\log p(y|z)] \right] \\
=& \nabla_\theta\left[KL(q(z) || p(z)) - \mathbb{E}_{\epsilon}[\log p(y|g_\theta(\epsilon))] \right]\\
=& \nabla_\theta KL(q(z) || p(z)) - \mathbb{E}_{\epsilon}[\nabla_g \log p(y|g_\theta(\epsilon)) \times \nabla_\theta g_\theta(\epsilon)].
\end{align*}
Then repeat using the same MCVI integration method to approximate the variational expectation. In practice, we can also use automatic differentiation to calculate the gradients.
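As a toy illustration (separate from the VAE below, with all names our own), a minimal PyTorch sketch of MCVI with the reparameterisation trick for the model $y \sim \mathcal{N}(z, 1)$ with prior $z \sim \mathcal{N}(0, 1)$ might look like this:
```
import torch

# toy data: y ~ N(z, 1), with the true z around 2
y = torch.randn(100) + 2.0

# variational parameters of q(z) = N(mu, sigma^2)
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(500):
    opt.zero_grad()
    sigma = torch.exp(log_sigma)
    eps = torch.randn(1)                    # epsilon ~ N(0, 1)
    z = mu + sigma * eps                    # reparameterised sample z = g_theta(eps)
    nll = 0.5 * torch.sum((y - z) ** 2)     # M = 1 Monte Carlo estimate of -E_q[log p(y|z)], up to constants
    kl = 0.5 * (mu ** 2 + sigma ** 2 - 2 * log_sigma - 1)   # closed-form KL(q(z) || p(z))
    loss = (nll + kl).sum()                 # negative ELBO for this sample
    loss.backward()
    opt.step()

print(mu.item(), sigma.item())   # mu approaches the posterior mean; sigma shrinks
```
Differentiating through `z = mu + sigma * eps` is exactly what lets autograd produce gradients with respect to the variational parameters.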
**Example: Variational Autoencoders (VAEs)**
Model (Taken from https://lilianweng.github.io/lil-log/2018/08/12/from-autoencoder-to-beta-vae.html)

**(1)**
The decoder represents the likelihood $p(y|z)$, where $y$ is an image. In the upcoming example, we have
$$
\log p(y|z) = \log \mathcal{N}(y; f_\theta(z), I) = -\frac{1}{2}\|y - f_\theta(z)\|_2^2 + \text{const},
$$
i.e. maximising it corresponds to minimising the MSE loss.
**(2)**
The prior is $z\sim \mathcal{N}(0, I)$.
**(3)**
As you will see in many applications, people often use only 1 sample to calculate the variational expectation, i.e. taking $M=1$.
**(4)**
The variational distribution that we are going for is $$q(z|y) = N(g_\phi(y)[0], g_\phi(y)[1] I),$$
where the variational distribution is parameterised by the encoder network.
**(5)**
We note that we can actually compute the KL divergence analytically, since both distributions involved are Gaussian.
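For reference, with $q(z|y) = \mathcal{N}(\mu, \operatorname{diag}(\sigma^2))$ and prior $p(z) = \mathcal{N}(0, I)$, the standard closed form is
$$
KL(q(z|y) || p(z)) = \frac{1}{2}\sum_{j}\left(\mu_j^2 + \sigma_j^2 - \log \sigma_j^2 - 1\right),
$$
which is what the `latent_loss` function in the code below computes (using a mean over dimensions rather than a sum).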
## Experiments
```
# from https://github.com/ethanluoyc/pytorch-vae/blob/master/vae.py
import torch
from torch.autograd import Variable
import numpy as np
import torch.nn.functional as F
import torchvision
from torchvision import transforms
import torch.optim as optim
from torch import nn
import matplotlib.pyplot as plt
from six.moves import urllib
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]
urllib.request.install_opener(opener)
class Normal(object):
def __init__(self, mu, sigma, log_sigma, v=None, r=None):
self.mu = mu
self.sigma = sigma # either stdev diagonal itself, or stdev diagonal from decomposition
self.logsigma = log_sigma
        dim = mu.size()   # shape of the mean tensor (PyTorch tensors use .size())
if v is None:
v = torch.FloatTensor(*dim)
if r is None:
r = torch.FloatTensor(*dim)
self.v = v
self.r = r
class Encoder(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super(Encoder, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
return F.relu(self.linear2(x))
class Decoder(torch.nn.Module):
def __init__(self, D_in, H, D_out):
super(Decoder, self).__init__()
self.linear1 = torch.nn.Linear(D_in, H)
self.linear2 = torch.nn.Linear(H, D_out)
def forward(self, x):
x = F.relu(self.linear1(x))
return F.relu(self.linear2(x))
class VAE(torch.nn.Module):
latent_dim = 8
def __init__(self, encoder, decoder):
super(VAE, self).__init__()
self.encoder = encoder
self.decoder = decoder
self._enc_mu = torch.nn.Linear(100, 8)
self._enc_log_sigma = torch.nn.Linear(100, 8)
def _sample_latent(self, h_enc):
"""
Return the latent normal sample z ~ N(mu, sigma^2)
"""
mu = self._enc_mu(h_enc)
log_sigma = self._enc_log_sigma(h_enc)
sigma = torch.exp(log_sigma)
std_z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).float()
self.z_mean = mu
self.z_sigma = sigma
return mu + sigma * Variable(std_z, requires_grad=False) # Reparameterization trick
def forward(self, state):
h_enc = self.encoder(state)
z = self._sample_latent(h_enc)
return self.decoder(z)
def latent_loss(z_mean, z_stddev):
mean_sq = z_mean * z_mean
stddev_sq = z_stddev * z_stddev
return 0.5 * torch.mean(mean_sq + stddev_sq - torch.log(stddev_sq) - 1)
input_dim = 28 * 28
batch_size = 32
transform = transforms.Compose(
[transforms.ToTensor()])
mnist = torchvision.datasets.MNIST('./', download=True, transform=transform)
dataloader = torch.utils.data.DataLoader(mnist, batch_size=batch_size,
shuffle=True, num_workers=2)
print('Number of samples: ', len(mnist))
encoder = Encoder(input_dim, 100, 100)
decoder = Decoder(8, 100, input_dim)
vae = VAE(encoder, decoder)
criterion = nn.MSELoss()
optimizer = optim.Adam(vae.parameters(), lr=0.001)
l = None
for epoch in range(5):
for i, data in enumerate(dataloader, 0):
inputs, classes = data
inputs, classes = Variable(inputs.resize_(batch_size, input_dim)), Variable(classes)
optimizer.zero_grad()
dec = vae(inputs)
ll = latent_loss(vae.z_mean, vae.z_sigma)
loss = criterion(dec, inputs) + ll
loss.backward()
optimizer.step()
l = loss.item()
print(epoch, l)
plt.imshow(vae(inputs).data[0].numpy().reshape(28, 28), cmap='gray')
plt.show(block=True)
plt.imshow(inputs[0].numpy().reshape(28, 28), cmap='gray')
```
### Normalising Flows
Using a "nice" class of diffeomorphisms, one can obtain diagonal Jacobians, whose determinants are cheap to compute. Applying the change of variables formula repeatedly to $z_l = T_l(z_{l-1})$ gives:
\begin{align*}
q(z_L) = q(z_0) \prod_{l=1}^L |\det(\nabla_{z_{l-1}} T_l(z_{l-1}))|^{-1}
\end{align*}
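A minimal sketch of this change-of-variables bookkeeping, assuming the simplest possible flow: an elementwise affine map $T_l(z)=a_l\odot z + b_l$ whose Jacobian is diagonal, so each layer contributes $-\sum_d \log|a_{l,d}|$ to the log-density. Real flows use richer transforms, but the accounting is the same.
```
import math
import torch

def flow_log_density(z0, log_q0, transforms):
    """Push z0 through invertible elementwise affine maps and track log q(z_L)."""
    z, log_q = z0, log_q0
    for a, b in transforms:
        z = a * z + b                                   # T_l(z) = a * z + b
        log_q = log_q - torch.log(torch.abs(a)).sum()   # subtract log|det J_l|
    return z, log_q

z0 = torch.randn(5, 2)                                  # 5 samples from q(z_0) = N(0, I)
log_q0 = -0.5 * (z0 ** 2).sum(dim=-1) - math.log(2 * math.pi)
transforms = [(torch.tensor([2.0, 0.5]), torch.tensor([0.0, 1.0]))]
zL, log_qL = flow_log_density(z0, log_q0, transforms)
print(zL.shape, log_qL.shape)   # torch.Size([5, 2]) torch.Size([5])
```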
|
github_jupyter
|
# ML Strategy
* Collect more data
* Collect a more diverse training set
* Train the algorithm longer with gradient descent
* Try Adam instead of gradient descent
* Try bigger networks
* Try smaller networks
* Try dropout
* Add L2 regularization
* Network architecture
- Activation functions
- \# hidden units
# Orthogonalization
For a supervised learning system to do well, you usually need to tune the knobs of your system to make sure that four things hold true.
1. **Fit training set well on cost function** First, you usually have to make sure that you're at least doing well on the training set. So performance on the training set needs to pass some acceptability assessment. For some applications, this might mean doing comparably to human-level performance. But this will depend on your application, and we'll talk more about comparing to human-level performance next week.
2. **Fit dev set well on cost function**
3. **Fit test set well on cost function**
4. **Performs well in real world**
The first is addressed with a bigger network or a better optimization algorithm.
The second with regularization or a bigger training set.
The third with a bigger dev set.
The fourth by changing the dev set or the cost function.
The exact details of what precision and recall are don't matter too much for this example. But briefly: precision is, of the examples that your classifier recognizes as cats, what percentage actually are cats? So if classifier A has 95% precision, this means that when classifier A says something is a cat, there's a 95% chance it really is a cat. And recall is, of all the images that really are cats, what percentage were correctly recognized by your classifier?
<img align='center' src='images/metric.PNG' width='650'/>
* I often recommend that you set up a single real number evaluation metric for your problem. Let's look at an example.
Precision: of the examples that your classifier recognizes as cats, what percentage actually are cats?
So if classifier A has 95% precision, this means that when classifier A says something is a cat, there's a 95% chance it really is a cat.
Recall: of all the images that really are cats, what percentage were correctly recognized by your classifier? So if classifier A has 90% recall, this means that of all the images in, say, your dev set that really are cats, classifier A accurately pulled out 90% of them.
There is a trade-off between precision and recall.
The problem with using precision and recall as your evaluation metric is that if classifier A does better on recall and classifier B does better on precision, then you're not sure which classifier is better.
You just have to find a new evaluation metric that combines precision and recall.
In the machine learning literature, the standard way to combine precision and recall is something called an F1 score. Think of it as an average of precision (P) and recall (R):
$$F1 = \frac{2}{\frac{1}{P}+\frac{1}{R}}$$ the harmonic mean of precision P and recall R.
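As a quick sanity check of the harmonic mean (classifier A's numbers are taken from the text; classifier B's are made up for illustration):
```
def f1(p, r):
    # harmonic mean of precision and recall
    return 2 / (1 / p + 1 / r)

# classifier A from the text; classifier B's numbers are hypothetical
print(f1(0.95, 0.90))   # A -> ~0.924
print(f1(0.98, 0.85))   # B -> ~0.910, so A wins on the single metric
```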
What I recommend in this example is, in addition to tracking your performance in the four different geographies, to also compute the average. Assuming that average performance is a reasonable single real number evaluation metric, by computing the average you can quickly tell that algorithm C has the lowest average error.
---
**Satisficing and Optimizing metric**
To summarize: if there are multiple things you care about, pick one as the optimizing metric that you want to do as well as possible on, and treat the others as satisficing metrics, where you are satisfied as long as they do better than some threshold. This gives you an almost automatic way of quickly comparing multiple classifiers and picking the best one. These evaluation metrics must be calculated on a training set, a development set, or maybe a test set, so you also need to set up training, dev, and test sets.
One option is a combined metric such as cost = accuracy - 0.5 * running time.
A cleaner choice is to maximize accuracy subject to the constraint that the running time, that is the time it takes to classify an image, has to be less than or equal to 100 milliseconds.
Running time is then what we call a satisficing metric and accuracy is the optimizing metric.
In another example, accuracy is the optimizing metric and the number of false positives every 24 hours is the satisficing metric.
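A tiny sketch of this selection rule, with made-up accuracy/latency numbers for three candidate classifiers:
```
# (accuracy, running time in ms) for three hypothetical classifiers
candidates = {"A": (0.90, 80), "B": (0.92, 95), "C": (0.95, 1500)}

feasible = {name: (acc, ms) for name, (acc, ms) in candidates.items() if ms <= 100}
best = max(feasible, key=lambda name: feasible[name][0])
print(best)   # "B": highest accuracy among models meeting the 100 ms threshold
```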
**Train/dev/test distributions**
The way you set up your training dev, or development sets and test sets, can have a huge impact on how rapidly you or your team can make progress on building machine learning application.
* Dev set: also called the development set, or sometimes the hold-out cross validation set. The workflow in machine learning is that you try a lot of ideas, train up different models on the training set, and then use the dev set to evaluate the different ideas and pick one. You keep iterating to improve dev set performance until, finally, you have one model that you're happy with, which you then evaluate on your test set.
* Choose a dev set and test set to reflect data you expect to get in the future and consider important to do well on. In particular, the dev set and the test set should come from the same distribution. So whatever type of data you expect to get in the future and want to do well on, try to get dev and test data that looks like that.
## Size of Dev set
* If you had a hundred examples in total, the 70/30 or 60/20/20 rules of thumb would be pretty reasonable; with a thousand or even ten thousand examples, these heuristics are still not unreasonable.
* Say you have a million training examples: it might be quite reasonable to set up your data so that you have 98% in the training set, 1% dev, and 1% test.
## Size of test set
* Set your test set to be big enough to give high confidence in the overall performance of your system.
* Maybe all you need is a train and dev set, and not having a test set might be okay.
* It is still reassuring to have a separate test set you can use to get an unbiased estimate of how the system is doing before you ship it, but if you have a very large dev set you may not overfit the dev set too badly.
So to summarize, in the era of big data the old 70/30 rule of thumb no longer applies. The trend has been to use more data for training and less for dev and test, especially with very large data sets. The rule of thumb is to make the dev set big enough for its purpose, which is to help you evaluate different ideas and pick the better one, and to make the test set big enough for its purpose, which is to evaluate your final system; that could be much less than 30% of the data. Next, it turns out that sometimes, part way through a machine learning problem, you might want to change your evaluation metric, or change your dev and test sets. Let's talk about when you might want to do that.
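A minimal sketch of the 98/1/1 split described above for a million examples, using a random permutation of indices:
```
import numpy as np

n = 1_000_000
idx = np.random.default_rng(0).permutation(n)
train_idx = idx[: int(0.98 * n)]                # 980,000 examples
dev_idx   = idx[int(0.98 * n): int(0.99 * n)]   # 10,000 examples
test_idx  = idx[int(0.99 * n):]                 # 10,000 examples
print(len(train_idx), len(dev_idx), len(test_idx))
```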
---
### When to change dev/test sets and metrics
You've seen how setting up a dev set and evaluation metric is like placing a target somewhere for your team to aim at.
**Orthogonalization for better performance**
1. Place the target
$$Error = \frac{1}{\sum_i w^{(i)}}\sum_i w^{(i)}L\{Y_{pred}^{(i)}, y^{(i)} \}$$
$w^{(i)}$ = 1 if $x^{(i)}$ is non-porn
$w^{(i)}$ = 10 if $x^{(i)}$ is porn
*Orthogonalization for cat picture: anti-porn*
* So far we've only discussed how to define a metric to evaluate classifiers (place the target)
* Worry separately about how to do well on this metric
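A minimal sketch of the weighted error metric above; the prediction, label, and `is_porn` flags are hypothetical inputs used only to show the weighting:
```
def weighted_error(y_pred, y_true, is_porn):
    # w_i = 10 for porn images, 1 otherwise; misclassifying porn costs 10x more
    weights = [10 if porn else 1 for porn in is_porn]
    losses = [w * (p != t) for w, p, t in zip(weights, y_pred, y_true)]
    return sum(losses) / sum(weights)

print(weighted_error([1, 0, 1, 1], [1, 1, 1, 0], [False, False, False, True]))
```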
---
Bayes optimal error: the best possible error, which can never be surpassed.
*Why compare to human-level performance*
Humans are quite good at a lot of tasks. So long as ML is worse than humans, you can:
* get labeled data from humans
* Gain insight from manual error analysis: why did a person get this right?
* Better analysis of bias/variances
# Avoidable Bias
* The fact that there's a huge gap between how well your algorithm does on your training set versus how humans do shows that your algorithm isn't even fitting the training set well. So in terms of tools to reduce bias or variance, in this case focus on reducing bias: train a bigger neural network or train longer, and just try to do better on the training set.
* Avoidable bias: the difference between Bayes error and training error. You don't actually want to get below Bayes error.
* Variance: the difference between training error and dev error.
* Focus on either bias or variance reduction techniques
<img align='center' src='images/AvoidableBIAS.PNG' width='650'/>
# Understanding Human Level Performance
**Proxy for Bayes Error**
So to recap, having an estimate of human-level performance gives you an estimate of Bayes error. And this allows you to more quickly make decisions as to whether you should focus on trying to reduce a bias or trying to reduce the variance of your algorithm.
Suppose:
### Surpassing human-level performance
* once you've surpassed this 0.5% threshold, your options, your ways of making progress on the machine learning problem are just less clear. It doesn't mean you can't make progress, you might still be able to make significant progress, but some of the tools you have for pointing you in a clear direction just don't work as well
*Problems where ML significantly surpasses human-level performance*
* Online advertising
* Product recommendations
* Logistics (predicting transit time)
* Loan approvals
1. All these examples are actually learning from structured data, where you might have a database of what users have clicked on. These are not natural perception problems, and they are not computer vision problems.
2. today there are speech recognition systems that can surpass human-level performance. And there are also some computer vision, some image recognition tasks, where computers have surpassed human-level performance
3. Medical
ECGs, skin cancer, and narrow radiology tasks
## Improving your model performance
**For a supervised learning algorithm to work well:**
1. You can fit the training set pretty well
- Low avoidable bias
- Problem is solved by training a bigger network or training longer
2. The training set performance generalizes pretty well to the dev/test set
- Variance
- Problem is solved with regularization or by getting more training data, which helps you generalize better to the dev set data
**Steps.**
1. Look at the difference between your training error and your proxy for Bayes error; this gives you a sense of the avoidable bias, in other words, how much better you should be trying to do on your training set.
2. Then look at the difference between your dev error and your training error as an estimate of how much of a variance problem you have; in other words, how much harder you should be working to make your performance generalize from the training set to the dev set, which it wasn't trained on explicitly.
<img align='left' src='images/ruleofthumb.PNG' width='650'/>
```
[9]*7
```
# Carrying out error analysis
If you're trying to get a learning algorithm to do a task that humans can do. And if your learning algorithm is not yet at the performance of a human. Then manually examining mistakes that your algorithm is making, can give you insights into what to do next. This process is called error analysis.
In machine learning, sometimes we call this the ceiling on performance. Which just means, what's in the best case? How well could working on the dog problem help you?
error analysis, can save you a lot of time. In terms of deciding what's the most important, or what's the most promising direction to focus on.
In this slide, we'll describe using error analysis to evaluate whether or not a single idea, dogs in this case, is worth working on.
*Look at dev examples to evaluate ideas*
Error analysis:
* Get ~100 mislabeled dev set examples
* count up how many are dogs
If 5/100 of the mislabeled examples are dogs, the "ceiling", the upper bound on how much working on the dog problem could improve performance, is small.
If in another case 50/100 are dogs, it is worth spending time on the dog problem (see the worked numbers below).
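A worked version of the ceiling argument, assuming a hypothetical overall dev-set error of 10%:
```
dev_error = 0.10          # hypothetical overall dev-set error
frac_dogs = 5 / 100       # 5 of 100 mislabeled examples are dogs
print(dev_error * frac_dogs)   # 0.005 -> fixing every dog mistake saves at most 0.5% error

frac_dogs = 50 / 100      # if instead half the mistakes are dogs
print(dev_error * frac_dogs)   # 0.05 -> up to 5% absolute improvement, worth the effort
```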
*Evaluate multiple ideas in parallel*
Ideas for cat detection:
* Fix pictures of dogs being recognized as cats
* Fix great cats (lions, panthers, etc.) being misrecognized
* Improve performance on blurry images
# Cleaning up incorrectly labeled data
**deep learning algorithms are quite robust to random errors in the training set.**
* They are less robust to systematic errors.
So for example, if your labeler consistently labels white dogs as cats, then that is a problem because your classifier will learn to classify all white colored dogs as cats
**here are a few additional guidelines or principles to consider**
* If you're going to fix something on the dev set, apply the same process to the test set to make sure that they continue to come from the same distribution.
* It's super important that your dev and test sets come from the same distribution.
# Build your first system quickly, then iterate
If you're working on a brand new machine learning application, one of the piece of advice I often give people is that, I think you should build your first system quickly and then iterate. your main goal is to build something that works, as opposed to if your main goal is to invent a new machine learning algorithm which is a different goal, then your main goal is to get something that works really well. I'd encourage you to build something quick and dirty. Use that to do bias/variance analysis, use that to do error analysis and use the results of those analysis to help you prioritize where to go next.
* Set up dev/set and metric
* Build initial system quickly
* Use bias/variance analysis & error analysis to prioritize next steps
# Training and testing on different distributions
So in this video, you've seen a couple of examples of when allowing your training set data to come from a different distribution than your dev and test set allows you to have much more training data, and in these examples it will cause your learning algorithm to perform better. Now one question you might ask is, should you always use all the data you have? The answer is subtle; it is not always yes.
# Bias and Variance with mismatched data distributions
Previously we had set up some training sets and some dev sets and some test sets as follows. And the dev and test sets have the same distribution, but the training sets will have some different distribution. What we're going to do is randomly shuffle the training sets and then carve out just a piece of the training set to be the training-dev set. So just as the dev and test set have the same distribution, the training set and the training-dev set, also have the same distribution.
**Key Quantities**
- Human-level error
- Training set error
- Training-dev set error
- Dev set error
<img align='center' src='images/biasmismatch.PNG' width='400'/>
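A small sketch turning these key quantities into the three diagnostic gaps (the error values below are hypothetical):
```
human_error, train_error, train_dev_error, dev_error = 0.01, 0.08, 0.09, 0.15

avoidable_bias = train_error - human_error        # ~0.07 -> fit the training set better
variance       = train_dev_error - train_error    # ~0.01 -> generalisation is fine
data_mismatch  = dev_error - train_dev_error      # ~0.06 -> train and dev data differ
print(avoidable_bias, variance, data_mismatch)
```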
**More general formulation**
So what we've seen is that by using training data that can come from a different distribution than the dev and test set, you can get a lot more data and therefore help the performance of your learning algorithm. But instead of just having bias and variance as two potential problems, you now have a third potential problem: data mismatch. So what if you perform error analysis and conclude that data mismatch is a huge source of error, how do you go about addressing that? It turns out that unfortunately there aren't completely systematic ways to address data mismatch, but there are a few things you can try that could help. Let's take a look at them in the next video.
<img align='center' src='images/data_mismatch.PNG' width='700'/>
# Addressing data mismatch
If your training set comes from a different distribution, than your dev and test set, and if error analysis shows you that you have a data mismatch problem, what can you do?
1. Carry out manual error analysis and try to understand the differences between the training set and the dev/test sets. To avoid overfitting the test set, you should technically look only at the dev set and not at the test set for error analysis.
2. try to collect more data similar to your dev and test sets.
So, to summarize, if you think you have a data mismatch problem, I recommend you do error analysis, or look at the training set, or look at the dev set to try this figure out, to try to gain insight into how these two distributions of data might differ. And then see if you can find some ways to get more training data that looks a bit more like your dev set. One of the ways we talked about is artificial data synthesis. And artificial data synthesis does work. In speech recognition, I've seen artificial data synthesis significantly boost the performance of what were already very good speech recognition system. So, it can work very well. But, if you're using artificial data synthesis, just be cautious and bear in mind whether or not you might be accidentally simulating data only from a tiny subset of the space of all possible examples. So, that's it for how to deal with data mismatch.
# Transfer learning
But if you have a lot of data, then maybe you can retrain all the parameters in the network. And if you retrain all the parameters in the neural network, then this initial phase of training on image recognition is sometimes called pre-training, because you're using image recognitions data to pre-initialize or really pre-train the weights of the neural network. And then if you are updating all the weights afterwards, then training on the radiology data sometimes that's called fine tuning.
- Pre-training
- Fine-tuning
And the reason this can be helpful is that a lot of the low level features, such as detecting edges, detecting curves, detecting parts of objects, learned from a very large image recognition data set, might help your learning algorithm do better in radiology diagnosis. It has just learned a lot about the structure and the nature of how images look, and some of that knowledge will be useful. So having learned to recognize images, it might have learned enough about what parts of different images look like, that knowledge about lines, dots, curves, and maybe small parts of objects, to help your radiology diagnosis network learn a bit faster or learn with less data.
you're transferring from a problem with a lot of data to a problem with relatively little data.
**When transfer learning makes sense**
* Task A and B have the same input X
* You have a lot more data for Task A than Task B
* Low level features from A could be helpful for learning B
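A minimal PyTorch sketch of the pre-training / fine-tuning idea: load a network trained on task A, freeze its feature extractor, and retrain only a new output layer on the small task-B dataset. The use of torchvision's `resnet18` here is just a stand-in for "a network pre-trained on a large image dataset", and the 3 output classes are hypothetical.
```
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(pretrained=True)   # pre-trained on a large image dataset (task A)

for param in net.parameters():           # freeze the low-level feature extractor
    param.requires_grad = False

# replace the output layer for the new task B (say, 3 hypothetical radiology classes)
net.fc = nn.Linear(net.fc.in_features, 3)
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)   # fine-tune only the new head
```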
# Multi-task learning
So whereas in transfer learning, you have a sequential process where you learn from task A and then transfer that to task B. In multi-task learning, you start off simultaneously, trying to have one neural network do several things at the same time. And then each of these task helps hopefully all of the other task. Let's look at an example.
So to summarize, multi-task learning enables you to train one neural network to do many tasks, and this can give you better performance than if you were to do the tasks in isolation. One note of caution: in practice, transfer learning is used much more often than multi-task learning. I see a lot of tasks where, if you want to solve a machine learning problem but have a relatively small data set, transfer learning can really help: you find a related problem with a much bigger data set, train your neural network on that, and then transfer it to the problem where you have very little data. So transfer learning is used a lot today. There are some applications of multi-task learning as well, but it is used much less often than transfer learning; maybe the one exception is computer vision object detection.
<img align='center' src='images/multi.PNG' width='900'/>
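A minimal sketch of the multi-task setup: one shared trunk with several sigmoid output heads trained jointly with a per-task binary cross-entropy loss. The dimensions and the four-task example are made up for illustration.
```
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, in_dim=1024, n_tasks=4):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())  # shared trunk
        self.head = nn.Linear(256, n_tasks)                             # one logit per task

    def forward(self, x):
        return self.head(self.shared(x))

net = MultiTaskNet()
x = torch.randn(8, 1024)                  # a batch of 8 feature vectors
y = torch.randint(0, 2, (8, 4)).float()   # 4 binary labels per example
loss = nn.BCEWithLogitsLoss()(net(x), y)  # averages the per-task binary losses
loss.backward()
```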
# What is end-to-end deep learning?
Briefly, there have been some data processing systems, or learning systems that require multiple stages of processing. And what end-to-end deep learning does, is it can take all those multiple stages, and replace it usually with just a single neural network.
Example: face recognition.
First detect and crop the face,
then train ML to recognize the person.
Training a model directly on raw images of people approaching the camera does not work as well as this two-step approach.
**Pros**
* Let the data speak
* Less hand-designing of components needed
**Cons**
* May need large amount of data
* Excludes potentially useful hand-designed components
## Whether to use end-to-end deep learning
* Use DL to learn individual components
* when applying supervised learning you should carefully choose what types of X to Y mappings you want to learn, depending on what tasks you can get data for
|
github_jupyter
|
# Written Test - Python
* Gymnase du Bugnon, site de l'Ours
* OC informatique (computer science)
* Subject: chapters 1-10 of the book *Pensez en Python* (*Think Python*)
* Mirko Pirona
* Date: Thursday, 13 November 2018
## **Exercise: arithmetic expression**
Initialize the variables `(a, b, c, x)` with the values `(2, 3, 4, 5)`.
Compute the expression
$$y = a x^2 + b x +c$$
and print the result.
```
a = 2
b = 3
c = 4
x = 5
y = a*x**2 + b*x + c   # ** is exponentiation; ^ is bitwise XOR in Python
print(y)
```
## **Exercise: surface function**
Import the `math` module,
define a function `surface(r)` that computes $s = \pi r^2$,
and print the result for `r=5` with a descriptive text.
```
import math

def surface(r):
    s = math.pi * r**2
    print('The surface of a disk of radius', r, 'is', s)

surface(5)
```
## **Exercise: quadratic formula**
The solution of a quadratic equation of the form
$$ a x^2 + b x +c = 0 $$
depends on the term $\Delta = b^2 - 4 a c$
* If $\Delta < 0$ there is no solution
* If $\Delta = 0$ there is one solution: $x = \frac{-b}{2 a}$
* If $\Delta > 0$ there are two solutions:
$x_1 = \frac{-b +\sqrt\Delta}{2 a}$ and $x_2 = \frac{-b -\sqrt\Delta}{2 a}$
Define a function `quadratique(a, b, c)` that returns the solution of the quadratic equation in the 3 cases: `None`, `x`, `[x1, x2]`.
Show the solution for `quadratique(1, 2, 3)`, `quadratique(1, 2, 1)` and `quadratique(1, 2, -1)`
```
import math

def quadratique(a, b, c):
    delta = b**2 - 4*a*c
    if delta < 0:
        return None
    elif delta == 0:
        return -b / (2*a)
    else:
        x1 = (-b + math.sqrt(delta)) / (2*a)
        x2 = (-b - math.sqrt(delta)) / (2*a)
        return [x1, x2]

print(quadratique(1, 2, 3))
print(quadratique(1, 2, 1))
print(quadratique(1, 2, -1))
```
## **Exercise: capitalize**
Create a function `capitalize(c)` that converts a letter to uppercase if it is lowercase, and leaves it unchanged otherwise.
```
def capitalize(c):
    if c.islower():
        return c.upper()
    return c

capitalize('a'), capitalize('B'), capitalize('3')
```
## **Exercise: capitalize words**
Create a function `capitalize_words(s)` that converts the first letter of every word to uppercase.
```
def capitalize_words(s):
    return ' '.join(word[0].upper() + word[1:] for word in s.split())

capitalize_words('hello world, how are you?')
```
## **Exercise: slices**
Explain what the 6 **slice** operators below do.
```
s=['a', 'b', 'c', 'd', 'e',]
s[::2]
s[:2]
s[::-1]
# s[2]   - returns the third element of the list
# s[:2]  - returns the first 2 elements of the list
# s[::2] - returns every second element of the list
# s[-1]  - returns the last element of the list
# s[:-1] - returns everything except the last element
# s[::-1] - returns the list in reverse order
```
## **Exercise: specific length**
The file `words.txt` contains 58110 English words.
Print the first $n=5$ words that have a length of $m=10$ and print their total count.
```
fin = open('words.txt')
n = 5
m = 10
count = 0
for line in fin:
    word = line.strip()
    if len(word) == m:
        count += 1
        if count <= n:
            print(word)
print('Total:', count)
```
## **Exercise: repetition**
Print the first $m=5$ words that are made of two repeated halves (for example **bonbon**).
```
fin = open('words.txt')
m = 5
found = 0
for line in fin:
    word = line.strip()
    half = len(word) // 2
    if half > 0 and word == word[:half] * 2:
        print(word)
        found += 1
        if found == m: break
```
## **Exercise: minimum**
Create a function `min(L)` that returns the minimum of a list and the index of its position as a list `[val, pos]`.
```
def min(L):
    val = sorted(L)[0]
    return [val, L.index(val)]

L = [1, 3, 34, -4, -2, 100]
min(L)
```
## **Exercise: mean**
Write a function `mean(L)` that returns the mean of a list.
```
def mean(L):
    return sum(L) / len(L)

L = [1, 3, 34, -4, -2, 100]
mean(L)
```
|
github_jupyter
|
```
# Visualization of the KO+ChIP Gold Standard from:
# Miraldi et al. (2018) "Leveraging chromatin accessibility for transcriptional regulatory network inference in Th17 Cells"
# TO START: In the menu above, choose "Cell" --> "Run All", and network + heatmap will load
# NOTE: Default limits networks to TF-TF edges in top 1 TF / gene model (.93 quantile), to see the full
# network hit "restore" (in the drop-down menu in cell below) and set threshold to 0 and hit "threshold"
# You can search for gene names in the search box below the network (hit "Match"), and find regulators ("targeted by")
# Change "canvas" to "SVG" (drop-down menu in cell below) to enable drag interactions with nodes & labels
# Change "SVG" to "canvas" to speed up layout operations
# More info about jp_gene_viz and user interface instructions are available on Github:
# https://github.com/simonsfoundation/jp_gene_viz/blob/master/doc/dNetwork%20widget%20overview.ipynb
# directory containing gene expression data and network folder
directory = "."
# folder containing networks
netPath = 'Networks'
# network file name
networkFile = 'ChIP_A17_KOall_bias100_TFmRNA_sp.tsv'
# title for network figure
netTitle = 'ChIP/ATAC(Th17)+KO, bias = 100_TFmRNA, TFA = TF mRNA'
# name of gene expression file
expressionFile = 'Th0_Th17_48hTh.txt'
# column of gene expression file to color network nodes
rnaSampleOfInt = 'Th17(48h)'
# edge cutoff -- for Inferelator TRNs, corresponds to signed quantile (rank of edges in 15 TFs / gene models),
# increase from 0 --> 1 to get more significant edges (e.g., .33 would correspond to edges only in 10 TFs / gene
# models)
edgeCutoff = .93
import sys
if ".." not in sys.path:
sys.path.append("..")
from jp_gene_viz import dNetwork
dNetwork.load_javascript_support()
# from jp_gene_viz import multiple_network
from jp_gene_viz import LExpression
LExpression.load_javascript_support()
# Load network linked to gene expression data
L = LExpression.LinkedExpressionNetwork()
L.show()
# Load Network and Heatmap
L.load_network(directory + '/' + netPath + '/' + networkFile)
L.load_heatmap(directory + '/' + expressionFile)
N = L.network
N.set_title(netTitle)
N.threshhold_slider.value = edgeCutoff
N.apply_click(None)
N.draw()
# Add labels to nodes
N.labels_button.value=True
# Limit to TFs only, remove unconnected TFs, choose and set network layout
N.restore_click()
N.tf_only_click()
N.connected_only_click()
N.layout_dropdown.value = 'fruchterman_reingold'
N.layout_click()
# Interact with Heatmap
# Limit genes in heatmap to network genes
L.gene_click(None)
# Z-score heatmap values
L.expression.transform_dropdown.value = 'Z score'
L.expression.apply_transform()
# Choose a column in the heatmap (e.g., 48h Th17) to color nodes
L.expression.col = rnaSampleOfInt
L.condition_click(None)
# Switch SVG layout to get line colors, then switch back to faster canvas mode
N.force_svg(None)
```
|
github_jupyter
|
# <center> #DHBSI 2016: Computational Text Analysis </center>
## <center> Laura Nelson <br/> <em>Postdoctoral Fellow | Digital Humanities @ Berkeley | Berkeley Institute for Data Science </em> </center>
## <center> Teddy Roland <br/> <em> Coordinator, Digital Humanities @ Berkeley <br/> Lecturer, UC Berkeley </em> </center>
# <center> Summary </center>
## <center> Text Analysis Demystified </center>
### <center> It's Just Counting! <br/> </center>

## <center> The Dark Side of DH: An Invitation

## <center> Text Analysis in Research </center>

## <center> Lessons </center>
### <center> Our workshop included 5 days and 7 lessons to learn how counting, sometimes creative counting, can amplify and augment close readings of text </center>
# Lesson 1: Introduction to Natural Language Processing
```
import nltk
from nltk import word_tokenize
from nltk.corpus import stopwords
import string
punctuations = list(string.punctuation)
#read the two text files from your hard drive, assign first mystery text to variable 'text1' and second mystery text to variable 'text2'
text1 = open('../01-Intro-to-NLP/text1.txt').read()
text2 = open('../01-Intro-to-NLP/text2.txt').read()
###word frequencies
#tokenize texts
text1_tokens = word_tokenize(text1)
text2_tokens = word_tokenize(text2)
#pre-process for word frequency
#lowercase
text1_tokens_lc = [word.lower() for word in text1_tokens]
text2_tokens_lc = [word.lower() for word in text2_tokens]
#remove stopwords
text1_tokens_clean = [word for word in text1_tokens_lc if word not in stopwords.words('english')]
text2_tokens_clean = [word for word in text2_tokens_lc if word not in stopwords.words('english')]
#remove punctuation using the list of punctuation from the string package
text1_tokens_clean = [word for word in text1_tokens_clean if word not in punctuations]
text2_tokens_clean = [word for word in text2_tokens_clean if word not in punctuations]
#frequency distribution
text1_word_frequency = nltk.FreqDist(text1_tokens_clean)
text2_word_frequency = nltk.FreqDist(text2_tokens_clean)
print("Frequent Words for Text1")
print("________________________")
for word in text1_word_frequency.most_common(20):
print(word[0])
print()
print("Frequent Words for Text2")
print("________________________")
for word in text2_word_frequency.most_common(20):
print(word[0])
### Can you guess the novel from most frequent words?
```
# Lesson 2: Basics of Python
```
# Nothing to see here, folks
```
# Lesson 3: Operationalizing
```
import pandas
dialogue_df = pandas.read_csv('../03-Operationalizing/antigone_dialogue.csv', index_col=0)
dialogue_tokens = [character.split() for character in dialogue_df['DIALOGUE']]
dialogue_len = [len(tokens) for tokens in dialogue_tokens]
dialogue_df['WORDS_SPOKEN'] = dialogue_len
dialogue_df = dialogue_df.sort_values('WORDS_SPOKEN', ascending = False)
# Let's visualize!
# Tells Jupyter to produce images in notebook
% pylab inline
# Makes images look good
style.use('ggplot')
dialogue_df['WORDS_SPOKEN'].plot(kind='bar')
###Who is the main protagonist? Maybe not Antigone?
```
# Lesson 4: Discriminating Words
```
from sklearn.feature_extraction.text import TfidfVectorizer
df = pandas.read_csv("../04-Discriminating-Words/BDHSI2016_music_reviews.csv", sep = '\t')
tfidfvec = TfidfVectorizer()
#create the dtm, but with cells weighted by the tf-idf score.
dtm_tfidf_df = pandas.DataFrame(tfidfvec.fit_transform(df.body).toarray(), columns=tfidfvec.get_feature_names(), index = df.index)
df_genre = df['genre'].to_frame()
merged_df = df_genre.join(dtm_tfidf_df, how = 'right', lsuffix='_x')
#pull out the reviews for three genres, Rap, Alternative/Indie Rock, and Jazz
dtm_rap = merged_df[merged_df['genre_x']=="Rap"]
dtm_indie = merged_df[merged_df['genre_x']=="Alternative/Indie Rock"]
dtm_jazz = merged_df[merged_df['genre_x']=="Jazz"]
#print the words with the highest tf-idf scores for each genre
print("Rap Words")
print(dtm_rap.max(numeric_only=True).sort_values(ascending=False)[0:20])
print()
print("Indie Words")
print(dtm_indie.max(numeric_only=True).sort_values(ascending=False)[0:20])
print()
print("Jazz Words")
print(dtm_jazz.max(numeric_only=True).sort_values(ascending=False)[0:20])
###What words are distinct to reviews of Rap albums, Indie albums, and Jazz albums?
##Notice the word weights for the Rap albums compared to others. Are these reviews more different than other reviews?
```
# Lesson 5: Sentiment Analysis using the Dictionary Method
```
pos_sent = open("../05-Dictionary-Method/positive_words.txt").read()
neg_sent = open("../05-Dictionary-Method/negative_words.txt").read()
positive_words=pos_sent.split('\n')
negative_words=neg_sent.split('\n')
text1_pos = [word for word in text1_tokens_clean if word in positive_words]
text2_pos = [word for word in text2_tokens_clean if word in positive_words]
text1_neg = [word for word in text1_tokens if word in negative_words]
text2_neg = [word for word in text2_tokens if word in negative_words]
print("Positive words in Melville")
print(len(text1_pos)/len(text1_tokens))
print()
print("Negative words in Melville")
print(len(text1_neg)/len(text1_tokens))
print()
print("Positive words in Austen")
print(len(text2_pos)/len(text2_tokens))
print()
print("Negative words in Austen")
print(len(text2_neg)/len(text2_tokens))
## Who is more positive, Melville or Austen?
## Melville has a similar percentage of positive and negative words (a whale is a whale, neither good nor bad)
## Austen is decidedly more positive than negative (it's the gentleman thing to do)
```
# Lesson 6: Literary Distinction
```
from sklearn.naive_bayes import MultinomialNB
from sklearn.feature_extraction.text import CountVectorizer
import os
review_path = '../06-Literary Distinction (Probably)/poems/reviewed/'
random_path = '../06-Literary Distinction (Probably)/poems/random/'
review_files = os.listdir(review_path)
random_files = os.listdir(random_path)
review_texts = [open(review_path+file_name).read() for file_name in review_files]
random_texts = [open(random_path+file_name).read() for file_name in random_files]
all_texts = review_texts + random_texts
all_file_names = review_files + random_files
all_labels = ['reviewed'] * len(review_texts) + ['random'] * len(random_texts)
cv = CountVectorizer(stop_words = 'english', min_df=180, binary = True, max_features = None)
dtm = cv.fit_transform(all_texts).toarray()
nb = MultinomialNB()
nb.fit(dtm, all_labels)
dickinson_canonic = """Because I could not stop for Death –
He kindly stopped for me –
The Carriage held but just Ourselves –
And Immortality.
We slowly drove – He knew no haste
And I had put away
My labor and my leisure too,
For His Civility –
We passed the School, where Children strove
At Recess – in the Ring –
We passed the Fields of Gazing Grain –
We passed the Setting Sun –
Or rather – He passed us –
The Dews drew quivering and chill –
For only Gossamer, my Gown –
My Tippet – only Tulle –
We paused before a House that seemed
A Swelling of the Ground –
The Roof was scarcely visible –
The Cornice – in the Ground –
Since then – ‘tis Centuries – and yet
Feels shorter than the Day
I first surmised the Horses’ Heads
Were toward Eternity – """
anthem_patriotic = """O! say can you see, by the dawn's early light,
What so proudly we hailed at the twilight's last gleaming,
Whose broad stripes and bright stars through the perilous fight,
O'er the ramparts we watched, were so gallantly streaming?
And the rockets' red glare, the bombs bursting in air,
Gave proof through the night that our flag was still there;
O! say does that star-spangled banner yet wave
O'er the land of the free and the home of the brave?"""
unknown_dtm = cv.transform([dickinson_canonic,anthem_patriotic]).toarray()
nb.predict(unknown_dtm)
## Can a computer predict whether a poem would be considered 'prestigious'?
```
# Lesson 7: Topic Modeling
```
import gensim
import pandas
from nltk.corpus import stopwords, words
metadata_df = pandas.read_csv('../07-Topic Modeling/txtlab_Novel150_English.csv')
fiction_path = '../07-Topic Modeling/txtalb_Novel150_English/'
novel_list = [open(fiction_path+file_name).read() for file_name in metadata_df['filename']]
novel_tokens_list = [novel.lower().split() for novel in novel_list]
dictionary = gensim.corpora.dictionary.Dictionary(novel_tokens_list)
proper_names = [word.lower() for word in words.words() if word.istitle()]
noise_tokens = [word for word in dictionary.values() if word.isalpha()==False or len(word)<=2]
bad_words = stopwords.words('english') + proper_names + noise_tokens
stop_ids = [_id for _id, count in dictionary.doc2bow(bad_words)]
dictionary.filter_tokens(bad_ids = stop_ids)
dictionary.filter_extremes(no_below = 40)
corpus = [dictionary.doc2bow(text) for text in novel_tokens_list]
lda_model = gensim.models.LdaModel(corpus, num_topics=25, alpha='auto', id2word=dictionary, iterations=2500, passes = 4)
list_of_doctopics = [lda_model.get_document_topics(text, minimum_probability=0) for text in corpus]
list_of_probabilities = [[probability for label,probability in distribution] for distribution in list_of_doctopics]
proba_distro_df = pandas.DataFrame(list_of_probabilities)
metadata_df = pandas.concat([metadata_df, pandas.DataFrame(list_of_probabilities)], axis=1)
annual_means_df = metadata_df.groupby('date').mean()
annual_means_df[8].plot(kind='bar', figsize=(8,8))
lda_model.show_topic(8)
```
|
github_jupyter
|
# Data Preprocessing for Topic Monitoring(Facebook)
```
import pandas as pd
import numpy as np
import re
import csv
from langdetect import detect
import nltk
# nltk.download('punkt')
# nltk.download('maxent_treebank_pos_tagger')
# nltk.download('wordnet')
# nltk.download('averaged_perceptron_tagger')
# nltk.download('stopwords')
from nltk.corpus import stopwords
from nltk import wordpunct_tokenize
from IPython.display import Image
from IPython.display import display
### Load the Crawled Facebook Dataset
# Remove duplicates, NA, sorted time
disease = pd.read_csv('Final_utf16.csv', encoding = 'utf-16LE', sep=',',
dtype={"key": object, "id.x": object,"like_count.x": float, "from_id.x":float,
"from_name.x":object, "message.x":object, "created_time.x":object, "type":object,
"link":object, "story":object, "comments_count.x":float,"shares_count":float,
"love_count":float, "haha_count":float, "wow_count":float, "sad_count": float,
"angry_count":float, "join_id":object, "from_id.y":float, "from_name.y":object,
"message.y":object, "created_time.y":object, "likes_count.y":float,
"comments_count.y": float, "id.y":object})
df = pd.DataFrame(disease, columns=['key', 'created_time.x', 'id.x','message.x' , 'id.y', 'message.y'])
df.columns = ['key', 'created_time.x', 'id.x','message.x' , 'id.y', 'message.y']
rm_duplicates = df.drop_duplicates(subset=['message.x', 'message.y'])
dtime = rm_duplicates.sort_values(['created_time.x'])
dtime.index = range(len(dtime))
dlang = dtime
dlang = dlang[dlang['key']!='johnson & johnson']
dlang = dlang[dlang['key']!='johnson&johnson']
dlang.index = range(len(dlang))
display(dlang.head(3))
print(len(dlang))
# Detect the text language by majority vote
def calculate_languages_ratios(text):
languages_ratios = {}
tokens = wordpunct_tokenize(text)
words = [word.lower() for word in tokens]
for language in stopwords.fileids():
stopwords_set = set(stopwords.words(language))
words_set = set(words)
common_elements = words_set.intersection(stopwords_set)
languages_ratios[language] = len(common_elements)
return languages_ratios
def detect_language(text):
ratios = calculate_languages_ratios(text)
most_rated_language = max(ratios, key=ratios.get)
return most_rated_language
```
# Final Preprocessing
In this section, preprocessing is implemented in the following steps.<br>
| Preprocessing Steps| Packages | Notes |
|------------------- |-----------------------------|-------------------------------------|
| Language Detection | Self-defined function, nktk |Check the language of each post |
| Remove Stopwords | nltk.corpus |Remove stopwords of detected language|
| Remove Url | Regular expression | |
| Remove Punctuation | string.punctuation | |
| Lemmatizing | nltk.stem |Lemmatize words in Noun and Verb |
| Part of Speech(POS)| nltk.pos_tag |Preserve Noun, Adverb and Adjective |
| Tokenize | split |Unigram |
| Remove NA | pandas | |
| Drop Duplicates | pandas | |
```
import gensim
from gensim import corpora, models, similarities
from nltk.corpus import stopwords
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from nltk.stem import WordNetLemmatizer
import string
import time
import os
exclude = set(string.punctuation)
lemma = WordNetLemmatizer()
# Create a new csv file to store the result after data preprocessing
with open('facebook_preprocessing.csv', 'w', encoding = 'UTF-8', newline = '') as csvfile:
column = [['key', 'created_time.x', 'id.x', 'message.x', 'id.y', 'message.y',
'lang.x', 're_message.x', 'lang.y', 're_message.y']]
writer = csv.writer(csvfile)
writer.writerows(column)
# Data preprocessing steps
for i in range(len(dlang['message.x'])):
features = []
features.append(dlang['key'][i])
features.append(dlang['created_time.x'][i])
features.append(dlang['id.x'][i])
features.append(dlang['message.x'][i])
features.append(dlang['id.y'][i])
features.append(dlang['message.y'][i])
if(str(dlang['message.x'][i]) == "nan"):
features.append('english')
features.append(dlang['message.x'][i])
else:
lang = detect_language(dlang['message.x'][i])
features.append(lang)
stop = set(stopwords.words(lang))
reurl = re.sub(r"http\S+", "", str(dlang['message.x'][i]))
tokens = ' '.join(re.findall(r"[\w']+", reurl)).lower().split()
x = [''.join(c for c in s if c not in string.punctuation) for s in tokens]
x = ' '.join(x)
stop_free = " ".join([i for i in x.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word,pos = 'n') for word in punc_free.split())
normalized = " ".join(lemma.lemmatize(word,pos = 'v') for word in normalized.split())
word = " ".join(word for word in normalized.split() if len(word)>3)
postag = nltk.pos_tag(word.split())
irlist = [',','.',':','#',';','CD','WRB','RB','PRP','...',')','(','-','``','@']
poslist = ['NN','NNP','NNS','RB','RBR','RBS','JJ','JJR','JJS']
wordlist = ['co', 'https', 'http','rt','com','amp','fe0f','www','ve','dont',"i'm","it's",'isnt','âźă','âąă','âł_','kf4pdwe64k']
adjandn = [word for word,pos in postag if pos in poslist and word not in wordlist and len(word)>3]
stop = set(stopwords.words(lang))
wordlist = [i for i in adjandn if i not in stop]
features.append(' '.join(wordlist))
if(str(dlang['message.y'][i]) == "nan"):
features.append('english')
features.append(dlang['message.y'][i])
else:
lang = detect_language(dlang['message.y'][i])
features.append(lang)
stop = set(stopwords.words(lang))
reurl = re.sub(r"http\S+", "", str(dlang['message.y'][i]))
tokens = ' '.join(re.findall(r"[\w']+", reurl)).lower().split()
x = [''.join(c for c in s if c not in string.punctuation) for s in tokens]
x = ' '.join(x)
stop_free = " ".join([i for i in x.lower().split() if i not in stop])
punc_free = ''.join(ch for ch in stop_free if ch not in exclude)
normalized = " ".join(lemma.lemmatize(word,pos='n') for word in punc_free.split())
normalized = " ".join(lemma.lemmatize(word,pos='v') for word in normalized.split())
word = " ".join(word for word in normalized.split() if len(word)>3)
postag = nltk.pos_tag(word.split())
irlist = [',','.',':','#',';','CD','WRB','RB','PRP','...',')','(','-','``','@']
poslist = ['NN','NNP','NNS','RB','RBR','RBS','JJ','JJR','JJS']
wordlist = ['co', 'https', 'http','rt','com','amp','fe0f','www','ve','dont',"i'm","it's",'isnt','âźă','âąă','âł_','kf4pdwe64k']
adjandn = [word for word,pos in postag if pos in poslist and word not in wordlist and len(word)>3]
stop = set(stopwords.words(lang))
wordlist = [i for i in adjandn if i not in stop]
features.append(' '.join(wordlist))
with open('facebook_preprocessing.csv', 'a', encoding='UTF-8', newline='') as csvfile:
writer = csv.writer(csvfile)
writer.writerows([features])
df_postncomment = pd.read_csv('facebook_preprocessing.csv', encoding = 'UTF-8', sep = ',')
rm_na = df_postncomment[pd.notnull(df_postncomment['re_message.x'])]
rm_na.index = range(len(rm_na))
dfinal_fb = pd.DataFrame(
rm_na,
columns = ['key', 'created_time.x', 'id.x', 'message.x', 'id.y', 'message.y',
'lang.x', 're_message.x', 'lang.y', 're_message.y'])
dfinal_fb.to_csv(
'final_facebook_preprocessing.csv',
encoding = 'UTF-8',
columns = ['key', 'created_time.x', 'id.x', 'message.x', 'id.y', 'message.y',
'lang.x', 're_message.x', 'lang.y', 're_message.y'])
os.remove('facebook_preprocessing.csv')
#print(rm_na['re_message.x'][8])
test = pd.read_csv('final_facebook_preprocessing.csv', encoding = 'UTF-8', sep = ',', index_col = 0)
display(test.head(3))
print(len(test))
```
|
github_jupyter
|
```
%load_ext autoreload
%autoreload 2
import sys
import warnings
warnings.filterwarnings("ignore")
sys.path.append("../")
from modules.data.conll2003.prc import conll2003_preprocess
data_dir = "/home/eartemov/ae/work/conll2003/"
conll2003_preprocess(data_dir)
```
## IO markup
### Train
```
from modules.data import bert_data
data = bert_data.LearnData.create(
train_df_path="/home/eartemov/ae/work/conll2003/eng.train.train.csv",
valid_df_path="/home/eartemov/ae/work/conll2003/eng.testa.dev.csv",
idx2labels_path="/home/eartemov/ae/work/conll2003/idx2labels5.txt",
clear_cache=True,
model_name="bert-base-cased"
)
from modules.models.bert_models import BERTBiLSTMAttnNCRF
model = BERTBiLSTMAttnNCRF.create(
len(data.train_ds.idx2label), model_name="bert-base-cased",
lstm_dropout=0., crf_dropout=0.3, nbest=len(data.train_ds.idx2label))
from modules.train.train import NerLearner
num_epochs = 100
learner = NerLearner(
model, data, "/home/eartemov/ae/work/models/conll2003-BERTBiLSTMAttnNCRF-base-IO.cpt",
t_total=num_epochs * len(data.train_dl))
model.get_n_trainable_params()
learner.fit(epochs=num_epochs)
```
### Predict
```
from modules.data.bert_data import get_data_loader_for_predict
dl = get_data_loader_for_predict(data, df_path=data.valid_ds.config["df_path"])
preds = learner.predict(dl)
from sklearn_crfsuite.metrics import flat_classification_report
from modules.analyze_utils.utils import bert_labels2tokens, voting_choicer
from modules.analyze_utils.plot_metrics import get_bert_span_report
pred_tokens, pred_labels = bert_labels2tokens(dl, preds)
true_tokens, true_labels = bert_labels2tokens(dl, [x.bert_labels for x in dl.dataset])
assert pred_tokens == true_tokens
tokens_report = flat_classification_report(true_labels, pred_labels, labels=data.train_ds.idx2label[4:], digits=4)
print(tokens_report)
```
### Test
```
from modules.data.bert_data import get_data_loader_for_predict
dl = get_data_loader_for_predict(data, df_path="/home/eartemov/ae/work/conll2003/eng.testb.dev.csv")
preds = learner.predict(dl)
from sklearn_crfsuite.metrics import flat_classification_report
from modules.analyze_utils.utils import bert_labels2tokens, voting_choicer
from modules.analyze_utils.plot_metrics import get_bert_span_report
pred_tokens, pred_labels = bert_labels2tokens(dl, preds)
true_tokens, true_labels = bert_labels2tokens(dl, [x.bert_labels for x in dl.dataset])
assert pred_tokens == true_tokens
tokens_report = flat_classification_report(true_labels, pred_labels, labels=data.train_ds.idx2label[4:], digits=4)
print(tokens_report)
```
|
github_jupyter
|
```
import torch
from torchvision import transforms
import torch.nn.functional as F
import torch.nn as nn
from PIL import Image
import imageio
import os
from google.colab import drive
from google.colab import drive
drive.mount('/content/drive')
class YOLO(nn.Module):
def __init__(self, img_width, row_size):
super(YOLO, self).__init__()
self.row_size = row_size
self.conv1 = nn.Conv2d(1, 16, 7, stride=2)
self.mp1 = nn.MaxPool2d(2, 2)
self.conv2 = nn.Conv2d(16, 32, (3, 3), stride=1)
self.mp2 = nn.MaxPool2d(2, 2)
self.conv3 = nn.Conv2d(32, 64, (3, 3), stride=1)
self.mp3 = nn.MaxPool2d(2, 2)
self.fc1 = nn.Linear(64*53*36, 4096)
self.fc2 = nn.Linear(4096, row_size * 5)
self.dropout = nn.Dropout()
def forward(self, x):
# Conv + ReLU + max pooling for two layers
x = F.relu(self.conv1(x))
x = self.mp1(x)
x = F.relu(self.conv2(x))
x = self.mp2(x)
x = F.relu(self.conv3(x))
x = self.mp3(x)
x = x.view(-1, 64*53*36)
x = F.relu(self.dropout(self.fc1(x)))
x = self.fc2(x)
x = x.view(-1, self.row_size, 5)
x = torch.sigmoid(x)
return x
def calc_x_y(row, tensor):
"""calc coordinates"""
x = tensor[1] * 619
y = tensor[2] * (885 / 50) + row * (885 / 50)
width = tensor[3] * 619
height = tensor[4] * 885
return torch.FloatTensor([1, x, y, width, height])
def calc_box(tensor):
"""calc box for output line"""
x1 = max(0, tensor[1] - 0.5 * tensor[3])
y1 = max(0, tensor[2] - 0.5 * tensor[4])
x2 = min(619, tensor[1] + 0.5 * tensor[3])
y2 = min(885, tensor[2] + 0.5 * tensor[4])
box = [x1, y1, x2, y2]
return box
def non_maximum_suppression(tensor, percent):
"""Choose predicted lines by highest probability.
Lines that overlap an already chosen line by `percent` or more are discarded."""
for j in range(tensor.size(1)):
if(tensor[j,0].item() < 0.5):
tensor[j,0] = torch.tensor(0)
found = []
while(True):
maximum = 0
index = 0
for j in range(tensor.size(1)):
if(tensor[j,0].item() > maximum and j not in found):
maximum = tensor[j,0].item()
index = j
if(maximum == 0):
break
found.append(index)
tensor[index,0] = torch.tensor(1)
for j in range(tensor.size(1)):
if(j != index and tensor[j,0] >= 0.5):
x_y_max = calc_x_y(index, tensor[index])
x_y_other = calc_x_y(j, tensor[j])
box1 = calc_box(x_y_max)
box2 = calc_box(x_y_other)
if(calc_iou(box1, box2) > percent):
tensor[j,0] = 0
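def calc_iou(box1, box2):
    # NOTE: calc_iou is called by non_maximum_suppression above but is not defined in
    # this cell; the following is a standard intersection-over-union reconstruction
    # (an assumption, not necessarily the author's original helper) for [x1, y1, x2, y2] boxes.
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / (area1 + area2 - inter + 1e-9)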
imgs_path = "drive/My Drive/data_small/forms/forms_train_small/"
imgs_paths = os.listdir(imgs_path)
weight_path = "drive/My Drive/evaluation_small/weights_small.pt"
predict_path = "drive/My Drive/testlines_predicted_small/"
transform = transforms.Compose([transforms.Resize((885, 619)),
transforms.ToTensor()])
# set a boolean flag that indicates whether a cuda capable GPU is available
is_gpu = torch.cuda.is_available()
print("GPU is available:", is_gpu)
print("If you are receiving False, try setting your runtime to GPU")
# set the device to cuda if a GPU is available
device = torch.device("cuda" if is_gpu else "cpu")
model = torch.load(weight_path)
print(model)
def predict_lines(model,imgs_path, predict_path):
""" predict images to lines from image path to predict_path"""
img_count = 0
for path in imgs_paths:
count = 0
img_tensor = transform(Image.open(imgs_path + path))
output = model(torch.stack([img_tensor]).to(device))[0]
# find right boxes
non_maximum_suppression(output, 0.5)
img = imageio.imread(imgs_path + path)
yscale = round(img.shape[0] / 885)
xscale = round(img.shape[1] / 619)
print(xscale, yscale)
for i in range(50):
if(output[i][0] > 0.5):
print(output[i])
box = calc_box(calc_x_y(i, output[i]))
x1 = (int(box[0])) * xscale
x2 = (int(box[2])) * xscale
y1 = (int(box[1])) * yscale
y2 = (int(box[3])) * yscale
print(box)
imageio.imwrite(predict_path + "pic" + str(img_count) + "line" + str(count) + '.jpg', img[y1:y2, x1:x2])
count += 1
img_count += 1
predict_lines(model, imgs_path, predict_path)
```
|
github_jupyter
|
# Db2 11.5.4 RESTful Programming
The following notebook is a brief example of how to use the Db2 11.5.4 RESTful Endpoint service to extend the capabilities of Db2.
Programmers can create Representational State Transfer (REST) endpoints that can be used to interact with Db2.
Each endpoint is associated with a single SQL statement. Authenticated users of web, mobile, or cloud applications can use these REST endpoints from any REST HTTP client without having to install any Db2 drivers.
The Db2 REST server accepts an HTTP request, processes the request body, and returns results in JavaScript Object Notation (JSON).
The Db2 REST server is pre-installed and running on Docker on host3 (10.0.0.4) in the Demonstration cluster. As a programmer you can communicate with the service on port 50050. Your welcome note includes the external port you can use to interact with the Db2 RESTful Endpoint service directly.
You can find more information about this service at: https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.admin.rest.doc/doc/c_rest.html.
### Finding the Db2 RESTful Endpoint Service API Documentation
If you are running this notebook from a browser running inside the Cloud Pak for Data cluster, click: http://10.0.0.4:50050/docs If you are running this from a browser from your own desktop, check your welcome note for the address of the Db2 RESTful Service at port 50050.
## Getting Started
Before you can start submitting SQL or creating your own services you need to complete a few setup steps.
### Import the required programming libraries
The requests library is the minimum required by Python to construct RESTful service calls. The Pandas library is used to format and manipulate JSON result sets as tables. The urllib3 library is used to manage secure https requests.
```
import requests
import pandas as pd
import urllib3
```
### Create the Header File required for getting an authentication token
We have to provide the location of the RESTful service for our calls.
The RESTful call to the Db2 RESTful Endpoint service is constructed and transmitted as JSON. The first part of the JSON structure is the headers that define the content type of the request.
```
headers = {
"content-type": "application/json"
}
```
### Define the RESTful Host
The next part defines where the request is sent to. It provides the location of the RESTful service for our calls.
```
Db2RESTful = "https://10.0.0.201:32115"
```
### API Authentication Service
Each service has its own path in the RESTful call. For authentication we need to point to the `v1/auth` service.
```
API_Auth = "/v1/auth"
```
### Database Connection Information
To authenticate to the RESTful service you must provide the connection information for the database along with the userid and password that you are using to authenticate with. You can also provide an expiry time so that the access token that gets returned will be invalidated after that time period.
```
body = {
"dbParms": {
"dbHost": "10.0.0.201",
"dbName": "BLUDB",
"dbPort": 31684,
"isSSLConnection": False,
"username": "admin",
"password": "password"
},
"expiryTime": "8760h"
}
```
### Disabling HTTPS Warnings
```
urllib3.disable_warnings()
```
### Retrieving an Access Token
When communicating with the RESTful service, you must provide the name of the service that you want to interact with. In this case the authentication service is */v1/auth*.
```
try:
response = requests.post("{}{}".format(Db2RESTful,API_Auth), verify=False, headers=headers, json=body)
print (response)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
```
A response code of 200 means that the authentication worked properly, otherwise the error that was generated is printed. The response includes a connection token that is reused throughout the rest of this lab. It ensures a secure connection without requiring you to reenter a userid and password with each request.
```
if (response.status_code == 200):
token = response.json()["token"]
print("Token: {}".format(token))
else:
print(response.json()["errors"])
```
### Creating a standard reusable JSON header
The standard header for all subsequent calls will use this format. It includes the access token.
```
headers = {
"authorization": f"{token}",
"content-type": "application/json"
}
```
## Executing an SQL Statement
Before you try creating your own custom service endpoint, you can try using some of the built in services. These let you submit SQL statements in a variety of ways.
Executing SQL requires a different service endpoint. In this case we will use "/v1/services/execsql"
```
API_execsql = "/v1/services/execsql"
```
In this example the code requests that the RESTful function waits until the command is complete.
```
sql = \
"""
SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE"
FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = 'NJ'
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > 300
ORDER BY OT.ARRDELAY;
"""
body = {
"isQuery": True,
"sqlStatement": sql,
"sync": True
}
print(body)
def runStatement(sql, isQuery) :
body = {
"isQuery": isQuery,
"sqlStatement": sql,
"sync": True
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_execsql), verify=False, headers=headers, json=body)
return response
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
response = runStatement(sql, True)
```
A successful call returns a **200** response code.
```
print(response)
```
Now that you know the call is a success, you can retrieve the json in the result set.
```
print(response.json()["resultSet"])
```
To format the results, use a Pandas Dataframe class to convert the json result set into a table. Dataframes can be used to further manipulate results in Python.
```
display(pd.DataFrame(response.json()['resultSet']))
```
## Use Parameters in a SQL Statement
Simple parameter passing is also available through the execsql service. In this case we are passing the minimum departure delay into the query as a parameter. Try substituting different values and running the REST call again. For example, you can change 300 to 240 or 500.
```
sqlparm = \
"""
SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE"
FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = 'NJ'
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > ?
ORDER BY OT.ARRDELAY;
"""
body = {
"isQuery": True,
"parameters" : {
"1" : 300
},
"sqlStatement": sqlparm,
"sync": True
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_execsql), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response)
response.json()["resultSet"]
display(pd.DataFrame(response.json()['resultSet']))
```
## Generate a Call and don't wait for the results
If you know that your statement will take a long time to return a result, you can check back later. Turn **sync** off to avoid waiting.
```
sql = \
"""
SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE"
FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = 'NJ'
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > 300
ORDER BY OT.ARRDELAY;
"""
body = {
"isQuery": True,
"sqlStatement": sql,
"sync": False
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_execsql), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response)
```
Retrieve the job id, so that you can retrieve the results later.
```
job_id = response.json()["id"]
print(job_id)
```
## Retrieve Result set using Job ID
The service API needs to be appended with the Job ID.
```
API_get = "/v1/services/"
```
We can limit the number of rows that we return at a time. Setting the limit to zero means all of the rows are to be returned.
```
body = {
"limit": 0
}
```
Get the results.
```
try:
response = requests.get("{}{}{}".format(Db2RESTful,API_get,job_id), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response)
```
Retrieve the results.
```
display(pd.DataFrame(response.json()['resultSet']))
```
Now that you have some experience with the built in SQL service, you can try creating your own endpoint service.
## Using RESTful Endpoint Services
The most common way of interacting with the service is to fully encapsulate an SQL statement, including any parameters, in a unique RESTful service. This creates a secure separation between the database service and the RESTful programming service. It also allows you to create versions of the same service to make maintenance and evolution of programming models simple and predictable.
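As a concrete illustration of the versioning point, the sketch below calls two versions of the same hypothetical service. It is illustrative only and not meant to be run at this point: the **delay** service is created later in this lab, and version **2.0** is never created here.
```
# Illustrative only: the version is part of the endpoint path, so callers can pin the
# version they were built against while a newer version is rolled out alongside it.
# The "delay" service is created later in this lab; version 2.0 is hypothetical.
body = {"parameters": {"@STATE": "NJ", "@DELAY": "300"}, "sync": True}
response_v1 = requests.post("{}{}".format(Db2RESTful, "/v1/services/delay/1.0"), verify=False, headers=headers, json=body)
response_v2 = requests.post("{}{}".format(Db2RESTful, "/v1/services/delay/2.0"), verify=False, headers=headers, json=body)
```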
### Setup the Meta Data Tables and Stored Procedures to manage Endpoint Services
Before you can start defining and running your own RESTful Endpoint services you need call the service to create the table and stored procedures in the database you are using.
```
API_makerest = "/v1/metadata/setup"
```
You can specify the schema that the new table and stored procedures will be created in. In this example we will use **DB2REST**
```
body = {
"schema": "DB2REST"
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_makerest), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
```
If the process is successful the service returns a 201 status code.
```
if (response.status_code == 201):
print(response.reason)
else:
print(response.json())
```
### Create a RESTful Service
Now that the RESTful Service metadata is created in your database, you can create your first service. In this example you will pass an origin state, a 2 character string, and a minimum departure delay, an integer, to the service. It will return the number of delayed flights that match.
```
API_makerest = "/v1/services"
```
The first step is to define the SQL that we want in the RESTful call. Parameters are identified using an at sign "@". Notice that our SQL is nicely formatted to make this notebook easier to read. However, when creating a service it is good practice to remove the line break characters from your SQL statement.
```
sql = \
"""
SELECT COUNT(AC."TAIL_NUMBER") FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = @STATE
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > @DELAY
FETCH FIRST 5 ROWS ONLY
"""
sql = sql.replace("\n","")
```
The next step is defining the json body to send along with the REST call.
```
body = {"isQuery": True,
"parameters": [
{
"datatype": "CHAR(2)",
"name": "@STATE"
},
{
"datatype": "INT",
"name": "@DELAY"
}
],
"schema": "DEMO",
"serviceDescription": "Delay",
"serviceName": "delay",
"sqlStatement": sql,
"version": "1.0"
}
```
Now submit the full RESTful call to create the new service.
```
try:
response = requests.post("{}{}".format(Db2RESTful,API_makerest), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response)
```
### Call the new RESTful Service
Now you can call the RESTful service. In this case we will pass the state NY and a delay of 300 minutes. But like in the previous example you can try rerunning the service call with different values.
```
API_runrest = "/v1/services/delay/1.0"
body = {
"parameters": {
"@STATE": "NY","@DELAY":"300"
},
"sync": True
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_runrest), verify=False, headers=headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print("{}{}".format(Db2RESTful,API_runrest))
print(response)
print(response.json())
```
You can retrieve the result set, convert it into a Dataframe and display the table.
```
display(pd.DataFrame(response.json()['resultSet']))
```
## Loop through the new call
Now you can call the RESTful service with different values.
```
API_runrest = "/v1/services/delay/1.0"
repeat = 2
for x in range(0, repeat):
for state in ("OH", "NJ", "NY", "FL", "MI"):
body = {
"parameters": {
"@STATE": state,"@DELAY": "240"
},
"sync": True
}
try:
response = requests.post("{}{}".format(Db2RESTful,API_runrest), verify=False, headers=headers, json=body)
print(state + ": " + str(response.json()['resultSet']))
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
```
## Managing Your Services
There are several service calls you can use to help manage the Db2 RESTful Endpoint service.
## List Available Services
You can also list all the user defined services you have access to
```
API_listrest = "/v1/services"
try:
response = requests.get("{}{}".format(Db2RESTful,API_listrest), verify=False, headers=headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print(response.json())
display(pd.DataFrame(response.json()['Db2Services']))
```
## Get Service Details
You can also get the details of a service
```
API_getDetails = "/v1/services/delay/1.0"
try:
response = requests.get("{}{}".format(Db2RESTful,API_getDetails), verify=False, headers=headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
json = response.json()
print(json)
```
You can format the result to make it easier to read. For example, here are the inputs and outputs.
```
display(pd.DataFrame(json['inputParameters']))
display(pd.DataFrame(json['resultSetFields']))
```
## Delete a Service
A single call is also available to delete a service
```
API_deleteService = "/v1/services"
Service = "/delay"
Version = "/1.0"
try:
response = requests.delete("{}{}{}{}".format(Db2RESTful,API_deleteService,Service,Version), verify=False, headers=headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
print (response)
```
## Get Service Logs
You can also easily download the Db2 RESTful Endpoint service logs.
```
API_listrest = "/v1/logs"
try:
response = requests.get("{}{}".format(Db2RESTful,API_listrest), verify=False, headers=headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 200):
myFile = response.content
open('/tmp/logs.zip', 'wb').write(myFile)
print("Downloaded",len(myFile),"bytes.")
else:
print(response.json())
```
To see the content of the logs, open the Files browser on machine host3 (10.0.0.4). Navigate to the **/tmp** directory and unzip the logs file.
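If you prefer to inspect the archive without leaving this notebook, a minimal sketch using Python's standard **zipfile** module is shown below. It assumes the archive was saved to **/tmp/logs.zip** by the previous cell.
```
# Minimal sketch: list and extract the downloaded log archive with the standard library.
import zipfile

with zipfile.ZipFile('/tmp/logs.zip') as archive:
    print(archive.namelist())                # show the files contained in the archive
    archive.extractall('/tmp/db2rest_logs')  # extract them for inspection
```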
## Using the Db2 REST Class
For your convenience, everything in the lessons above has been included into a Db2REST Python Class. You can add or use this code as part of your own Jupyter notebooks to make working with the Db2 RESTful Endpoint service quick and easy.
There are also lots of examples in the following lesson on how to use the class.
```
# Run the Db2REST Class library
# Used to construct and reuse an Authentication Key
# Used to construct RESTAPI URLs and JSON payloads
import json
import requests
import pandas as pd
class Db2REST():
def __init__(self, RESTServiceURL):
self.headers = {"content-type": "application/json"}
self.RESTServiceURL = RESTServiceURL
self.version = "/v1"
self.API_auth = self.version + "/auth"
self.API_makerest = self.version + "/metadata/setup"
self.API_services = self.version + "/services/"
self.API_version = self.version + "/version/"
self.API_execsql = self.API_services + "execsql"
self.API_monitor = self.API_services + "monitor"
self.Verify = False
import urllib3
urllib3.disable_warnings()
def connectDatabase(self, dbHost, dbName, dbPort, isSSLConnection, dbUsername, dbPassword, expiryTime="300m"):
self.dbHost = dbHost
self.dbName = dbName
self.dbPort = dbPort
self.isSSLConnection = isSSLConnection
self.dbusername = dbUsername
self.dbpassword = dbPassword
self.connectionBody = {
"dbParms": {
"dbHost": dbHost,
"dbName": dbName,
"dbPort": dbPort,
"isSSLConnection": isSSLConnection,
"username": dbUsername,
"password": dbPassword
},
"expiryTime": expiryTime
}
try:
response = requests.post("{}{}".format(self.RESTServiceURL,self.API_auth), verify=self.Verify, headers=self.headers, json=self.connectionBody)
print (response)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 200):
self.token = response.json()["token"]
print("Successfully connected and retrieved access token")
else:
print(response)
print(response.json())
print(response.json()["errors"])
self.headers = {
"authorization": f"{self.token}",
"content-type": "application/json"
}
def getConnection(self):
return self.connectionBody
def getService(self):
return self.RESTServiceURL
def getToken(self):
return("Token: {}".format(self.token))
def getVersion(self):
try:
print("{}{}".format(self.RESTServiceURL,self.API_version))
response = requests.get("{}{}".format(self.RESTServiceURL,self.API_version),verify=self.Verify)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 200):
return response.json()['version']
else:
print(response)
print(response.json()['errors'][0]['more_info'])
def runStatement(self, sql, isQuery=True, sync=True, parameters={}):
body = {
"isQuery": isQuery,
"sqlStatement": sql,
"sync": sync,
"parameters": parameters
}
try:
response = requests.post("{}{}".format(self.RESTServiceURL,self.API_execsql), verify=self.Verify, headers=self.headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 200):
return pd.DataFrame(response.json()['resultSet'])
elif (response.status_code == 202):
return response.json()["id"]
else:
print(response.json()['errors'][0]['more_info'])
def getResult(self, job_id, limit=0):
body = {"limit": limit}
try:
response = requests.get("{}{}{}".format(self.RESTServiceURL,self.API_services,job_id), verify=self.Verify, headers=self.headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 200):
json = response.json()
if (json['jobStatus'] == 2):
return json['jobStatusDescription']
elif (json['jobStatus'] == 3):
return pd.DataFrame(json['resultSet'])
elif (json['jobStatus'] == 4):
return pd.DataFrame(json['resultSet'])
else:
return json
elif (response.status_code == 404):
print(response.json()['errors'])
elif (response.status_code == 500):
print(response.json()['errors'][0]['more_info'])
else:
print(response.json())
def createServiceMetadata(self, serviceSchema="Db2REST"):
self.serviceSchema = serviceSchema
body = {"schema": self.serviceSchema}
try:
response = requests.post("{}{}".format(self.RESTServiceURL,self.API_makerest), verify=self.Verify, headers=self.headers, json=body)
if (response.status_code == 201):
print(response.reason)
else:
print(response.json())
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
def listServices(self):
try:
response = requests.get("{}{}".format(self.RESTServiceURL,self.API_services), verify=self.Verify, headers=self.headers)
return pd.DataFrame(response.json()['Db2Services'])
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
def getServiceDetails(self, serviceName, version):
try:
response = requests.get("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), verify=self.Verify, headers=self.headers)
print(response.status_code)
if (response.status_code == 200):
description = response.json()
print("Input parameters:")
print(description["inputParameters"])
print("Result format:")
print(description["resultSetFields"])
else:
print(response.json())
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
def createService(self, schema, serviceDescription, serviceName, sql, version, parameters=False, isQuery=True):
if (parameters==False):
body = {"isQuery": isQuery,
"schema": schema,
"serviceDescription": serviceDescription,
"serviceName": serviceName,
"sqlStatement": sql.replace("\n",""),
"version": version
}
else:
body = {"isQuery": isQuery,
"schema": schema,
"serviceDescription": serviceDescription,
"serviceName": serviceName,
"sqlStatement": sql.replace("\n",""),
"version": version,
"parameters": parameters
}
try:
response = requests.post("{}{}".format(self.RESTServiceURL,self.API_services), verify=self.Verify, headers=self.headers, json=body)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 201):
print("Service: " + serviceName + " Version: " + version + " created")
else:
print(response.json())
def deleteService(self, serviceName, version):
try:
response = requests.delete("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), verify=self.Verify, headers=self.headers)
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
if (response.status_code == 204):
print("Service: " + serviceName + " Version: " + version + " deleted")
else:
print(response.json())
def callService(self, serviceName, version, parameters, sync=True):
body = {
"parameters": parameters,
"sync": sync
}
try:
response = requests.post("{}{}{}{}".format(self.RESTServiceURL,self.API_services,"/" + serviceName,"/" + version), verify=self.Verify, headers=self.headers, json=body)
if (response.status_code == 200):
return pd.DataFrame(response.json()['resultSet'])
elif (response.status_code == 202):
return response.json()["id"]
else:
print(response.json()['errors'][0]['more_info'])
except Exception as e:
if (repr(e) == "KeyError('more_info',)"):
print("Service not found")
else:
print("Unable to call RESTful service. Error={}".format(repr(e)))
def monitorJobs(self):
try:
response = requests.get("{}{}".format(self.RESTServiceURL,self.API_monitor), verify=self.Verify, headers=self.headers)
if (response.status_code == 200):
return pd.DataFrame(response.json()['MonitorServices'])
else:
print(response.json())
except Exception as e:
print("Unable to call RESTful service. Error={}".format(repr(e)))
```
### Setting up a Db2 RESTful Endpoint Service Class instance
To use the class first create an instance of the class. The cell below creates an object called **Db2RESTService** from the **Db2REST** class. The first call to the object is **getVersion** to confirm the version of the RESTful Endpoint Service you are connected to.
#### Connecting the service to the database
Unless your service is already bound to a single database, the call below connects it to a single Db2 database. You can run this command again to connect to a different database from the same RESTful Endpoint service.
```
Db2RESTService = Db2REST("https://10.0.0.201:31315")
print("Db2 RESTful Endpoint Service Version: " + Db2RESTService.getVersion())
```
#### Connect to Db2 OLTP
```
Db2RESTService.connectDatabase("10.0.0.201", "STOCKS", 32443, False, "admin", "CP4DDataFabric")
```
#### Connect to DV
```
Db2RESTService.connectDatabase("10.0.0.201", "BIGSQL", 31193, False, "admin", "CP4DDataFabric")
```
#### Confirming the service settings
Once the connection to the RESTful Endpoint Service and Db2 is established you can always check your settings using the following calls.
```
print(Db2RESTService.getService())
print(Db2RESTService.getConnection())
print(Db2RESTService.getToken())
```
### Running SQL Through the Service
You can run an SQL Statement through the RESTful service as a simple text string.
Let's start by defining the SQL to run:
```
sql = \
"""
SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE"
FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = 'NJ'
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > 300
ORDER BY OT.DEPDELAY DESC
FETCH FIRST 5 ROWS ONLY;
"""
```
Now a single call to the **runStatement** routine runs the SQL synchronously and returns the result as a DataFrame
```
sql = "SELECT * FROM SYSCAT.TABLES"
result = (Db2RESTService.runStatement(sql))
display(result)
```
You can also run the statement asynchronously so you don't have to wait for the result. In this case the result is the statement identifier that you can use to check the statement status.
```
statementID = (Db2RESTService.runStatement(sql, sync=False))
display(statementID)
```
If you have several statements running at the same time you can check their status with the **monitorJobs** routine and see where they are in the service queue.
```
services = Db2RESTService.monitorJobs()
display(services)
```
You can try to get the results of the statement by passing the statement identifier into the getResult routine. If the statement has finished running it will return a result set as a DataFrame. If it is still running, a message is returned.
```
result = (Db2RESTService.getResult(statementID))
display(result)
```
#### Passing Parameters when running SQL Statements
You can also define a single SQL statement with ? parameters and call that statement with different values using the same **runStatement** routine.
```
sqlparm = \
"""
SELECT AC."TAIL_NUMBER", AC."MANUFACTURER", AC."MODEL", OT."FLIGHTDATE", OT."UNIQUECARRIER", OT."AIRLINEID", OT."CARRIER", OT."TAILNUM", OT."FLIGHTNUM", OT."ORIGINAIRPORTID", OT."ORIGINAIRPORTSEQID", OT."ORIGINCITYNAME", OT."ORIGINSTATE", OT."DESTAIRPORTID", OT."DESTCITYNAME", OT."DESTSTATE", OT."DEPTIME", OT."DEPDELAY", OT."TAXIOUT", OT."WHEELSOFF", OT."WHEELSON", OT."TAXIIN", OT."ARRTIME", OT."ARRDELAY", OT."ARRDELAYMINUTES", OT."CANCELLED", OT."AIRTIME", OT."DISTANCE"
FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = ?
AND DESTSTATE = ?
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > ?
ORDER BY OT.DEPDELAY DESC
FETCH FIRST 10 ROWS ONLY;
"""
result = Db2RESTService.runStatement(sqlparm,parameters={"1": 'NY', "2": 'CA', "3" : 300})
display(result)
result = Db2RESTService.runStatement(sqlparm,parameters={"1": 'NJ', "2": 'CA', "3" : 200})
display(result)
```
#### Limiting Results
You also have full control of how many rows in an answer set to return. Run the following statement using **sync=False**
```
statementID = Db2RESTService.runStatement(sqlparm, sync=False, parameters={"1": 'NJ', "2": 'CA', "3" : 200})
display(statementID)
result = (Db2RESTService.getResult(statementID))
display(result)
```
This time the **getResult** routine includes a parameter to limit the result set to 5 rows.
```
result = (Db2RESTService.getResult(statementID, limit=5))
display(result)
```
The next cell retrieves the remaining rows.
```
result = (Db2RESTService.getResult(statementID))
display(result)
```
After all the rows have been returned the job history is removed. If you try to retrieve the results for this statement now the service won't find it.
```
result = (Db2RESTService.getResult(statementID))
display(result)
```
### Creating and Running Endpoint Services
If the MetaData tables have not already been created in your database you can use the following call to create the MetaData in the schema of your choice. In this case **DB2REST**.
```
Db2RESTService.createServiceMetadata("DB2REST")
```
Let's start by defining the SQL statement. It can include parameters that have to be identified with an at sign "@".
```
sql = \
"""
SELECT COUNT(AC."TAIL_NUMBER") FROM "ONTIME"."ONTIME" OT, "ONTIME"."AIRCRAFT" AC
WHERE AC."TAIL_NUMBER" = OT.TAILNUM
AND ORIGINSTATE = @STATE
AND DESTSTATE = 'CA'
AND AC.MANUFACTURER = 'Boeing'
AND AC.MODEL LIKE 'B737%'
AND OT.TAXIOUT > 30
AND OT.DISTANCE > 2000
AND OT.DEPDELAY > @DELAY
FETCH FIRST 5 ROWS ONLY
"""
```
Now we can create the service, including the two parameters, using the **createService** routine.
```
parameters = [{"datatype": "CHAR(2)","name": "@STATE"},{"datatype": "INT","name": "@DELAY"}]
schema = 'DEMO'
serviceDescription = 'Delay'
serviceName = 'delay'
version = '1.0'
Db2RESTService.createService(schema, serviceDescription, serviceName, sql, version, parameters)
```
A call to the **listServices** routine confirms that you have created the new service.
```
services = Db2RESTService.listServices()
display(services)
```
You can also see the details for any service using the **getServiceDetails** routine.
```
details = Db2RESTService.getServiceDetails("delay","1.0")
display(details)
```
You can call the new service using the **callService** routine. The parameters are passed into the call as a dictionary of named values. By default the call is synchronous so you have to wait for the results.
```
serviceName = 'delay'
version = '1.0'
parameters = {"@STATE": "NJ","@DELAY":"200"}
result = Db2RESTService.callService(serviceName, version, parameters)
display(result)
```
You can also call the service asynchronously, just like we did with SQL statements earlier. Notice the additional parameter **sync=False**. Since the cell below immediately checks the status of the job you can see it has been queued.
```
serviceName = 'delay'
version = '1.0'
parameters = {"@STATE": "NJ","@DELAY":"200"}
statementID = Db2RESTService.callService(serviceName, version, parameters, sync=False)
display(statementID)
display(Db2RESTService.monitorJobs())
```
Run **monitorJobs** again to confirm that the endpoint service has completed the request.
```
services = Db2RESTService.monitorJobs()
display(services)
```
And retrieve the result set.
```
result = (Db2RESTService.getResult(statementID))
display(result)
```
You can also delete an existing endpoint service with a call to the **deleteService** routine.
```
serviceName = 'delay'
version = '1.0'
Db2RESTService.deleteService(serviceName, version)
```
#### Using a service to query the Catalog
You can also think about creating services to explore the database catalog. For example, here is a service that accepts a schema as an input parameter and returns a list of tables in the schema.
```
sql = \
"""
SELECT TABSCHEMA, TABNAME, ALTER_TIME FROM SYSCAT.TABLES WHERE TABSCHEMA = @SCHEMA
"""
parameters = [{"datatype": "VARCHAR(64)","name": "@SCHEMA"}]
schema = 'DEMO'
serviceDescription = 'Tables'
serviceName = 'tables'
version = '1.0'
Db2RESTService.createService(schema, serviceDescription, serviceName, sql, version, parameters)
serviceName = 'tables'
version = '1.0'
result = Db2RESTService.callService(serviceName, version, parameters = {"@SCHEMA": "SYSCAT"}, sync=True)
display(result)
```
### Incorporating the Db2 RESTful Endpoint Class into your Python scripts
The Db2 RESTful Endpoint Class is available on GIT at https://github.com/Db2-DTE-POC/CPDDVHOL4/blob/main/RESTfulEndpointServiceClass402.ipynb. You can download a copy into your own Python library and add **%run db2restendpoint.ipynb** to your own Python notebook. You can also include the following two lines which will automatically download a copy of the library from GIT and run the Class code.
```
!wget -O db2restendpoint.ipynb https://raw.githubusercontent.com/Db2-DTE-POC/CPDDVHOL4/main/RESTfulEndpointServiceClass402.ipynb
%run db2restendpoint.ipynb
Db2RESTService = Db2REST("https://10.0.0.201:31315")
print("Db2 RESTful Endpoint Service Version: " + Db2RESTService.getVersion())
```
## What's Next
Try experimenting. Create your own services. You can find out more at: https://www.ibm.com/support/producthub/db2/docs/content/SSEPGG_11.5.0/com.ibm.db2.luw.admin.rest.doc/doc/c_rest.html.
Also check out the OpenAPI specification for the service. It includes coding examples in Python, CURL and JavaScript.
If you are running this notebook from a browser running inside the Cloud Pak for Data cluster, click: http://10.0.0.4:50050/docs. If you are running this from a browser on your own desktop, check your welcome note for the address of the Db2 RESTful Service at port 50050 and add **docs** to the end of the URL.
# Inference acceleration of `T5` for large batch size / long sequence length / large models
Every week or so, a new impressive few-shot learning work taking advantage of autoregressive models is released by some team around the world.
Still, `LLM` inference is rarely discussed and few projects are focusing on this aspect.
In this notebook, we describe our take to significantly improve autoregressive model latency.
We plan to intensively test large autoregressive models, so we want something:
* which **scales**: the improvement exists on small and large models, for short and long sequences, in greedy and beam search;
  * This is very important in few-shot learning, where sequences are most of the time hundreds or thousands of tokens long and beam search is used to improve text quality.
* that has **no hidden cost**: no big increase in memory usage, no degradation in quality of generated text, support state-of-the-art decoding algorithms;
* that is **generic**: works for any transformer based architecture, and not specific to an inference engine;
* that is **easy to maintain**: no hard-coded behaviors or other technical debt if it doesn't bring a clear advantage.
To be clear, **we are not targeting the best performance ever but the right trade off** (for us at least) between simplicity to use/maintain and acceptable latency.
## The challenge
In most situations, performing inference with `Onnx Runtime` or `TensorRT` usually brings large improvements over `Pytorch` implementations.
It's very true with `transformer` based models.
The main reason is that these tools will perform `kernel fusions` (merging several operations into a single one) and therefore reduce the number of memory bounded operations. Sometimes they also replace some operations by a much faster approximation.
In the very specific case of autoregressive language models, things are a bit more complicated.
On most `Pytorch` implementations of these models, there is a `cache` of `K` and `V` values.
Let's remind us that in attention blocks, each token is projected on 3 matrices called `Query`, `Key`, and `Value`.
Then, those projections will be used to compute a representation of each token which takes into account the information from the related other tokens of the sequence.
As autoregressive models generate the sequence one token at a time, they would have to recompute the final representation of all past tokens for each new token they generate.
Because each token can only attend to the past, the result of these computations never changes; therefore one simple trick to reduce latency is to just memorize them and reuse them later, avoiding lots of computation.
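To make the trick concrete, here is a minimal, illustrative sketch of the `K`/`V` cache idea in a single attention step (this is not the actual `T5` code, just the general mechanism):
```
# Illustrative sketch of K/V caching for one attention step (not the T5 implementation).
import torch

def attend(q_new, k_new, v_new, cache=None):
    # q_new / k_new / v_new: projections of the newly generated token, shape (batch, 1, dim)
    if cache is not None:
        k_all = torch.cat([cache["k"], k_new], dim=1)  # reuse past keys instead of recomputing them
        v_all = torch.cat([cache["v"], v_new], dim=1)  # reuse past values
    else:
        k_all, v_all = k_new, v_new
    scores = torch.softmax(q_new @ k_all.transpose(1, 2) / k_all.shape[-1] ** 0.5, dim=-1)
    out = scores @ v_all
    return out, {"k": k_all, "v": v_all}  # the updated cache is fed back at the next step
```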
Out of the box, the cache mechanism can't be exported to `Onnx` from `Hugging Face` models (and all other `Pytorch` implementations we are aware of).
The reason is that those models are not `torchscript` scripting compliant (it requires `Pytorch` code to follow some [restrictive rules](https://pytorch.org/docs/stable/jit_builtin_functions.html)).
Because of that, `Onnx` export is done through `tracing` which erases any control flow instructions (including the `If` instruction to enable or not a cache).
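The snippet below is a small, self-contained illustration (not taken from `Hugging Face` code) of how tracing bakes in the branch taken for the example inputs:
```
# Illustrative only: tracing records the path taken for the example inputs and drops the `if`,
# which is why a cache on/off switch cannot survive a traced export.
import torch

def f(x, use_cache):
    if use_cache:  # a TracerWarning is emitted here, then the branch is frozen
        return x * 2
    return x

traced = torch.jit.trace(f, (torch.ones(2), torch.tensor(True)))
print(traced(torch.ones(2), torch.tensor(False)))  # still prints tensor([2., 2.]): the `if` is gone
```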
## Existing solutions
Some interesting solutions targeting inference latency that we have considered and/or tested:
* [TensorRT](https://developer.nvidia.com/blog/optimizing-t5-and-gpt-2-for-real-time-inference-with-tensorrt/), which targets `GPU`, heavily optimizes the computation graph, making `T5` inference very fast (they report a 10X speedup on `small-T5`). The trick is that it doesn't use any cache (see below for more details), so it's very fast on short sequences and small models, as it avoids many memory bounded operations by redoing the full computation again and again... but as several users have already found ([1](https://github.com/NVIDIA/TensorRT/issues/1807), [2](https://github.com/NVIDIA/TensorRT/issues/1642), [3](https://github.com/NVIDIA/TensorRT/issues/1799), [4](https://github.com/NVIDIA/TensorRT/issues/1845), ...), this approach doesn't scale when the computation intensity increases, i.e., when base or large models are used instead of a small one, when generation is done on moderately long sequences of a few hundred tokens, or if beam search is used instead of a greedy search;
* [FastT5](https://github.com/Ki6an/fastT5), which targets `CPU`, exports 2 versions of the decoder, one with cache and one without. You need the `no cache` version to compute the first token and the first `past state` tensors (aka the cached tensors), and for all the other tokens you use the `cache` version of the computation graph. Basically, it makes the memory footprint 2 times bigger as all weights are duplicated. As generative models tend to be huge, they work around the memory issue by using dynamic `int-8` quantization, so the final memory footprint of the decoders is the same as `Hugging Face` in `FP16`... but 1/ dynamic quantization only works on `CPU`, and 2/ according to several reports dynamic quantization significantly degrades generative model output, to a point where it may make them useless ([1](https://github.com/huggingface/transformers/issues/2466#issuecomment-572781378), [2](https://github.com/huggingface/transformers/issues/2466#issuecomment-982710520), and [here](https://github.com/microsoft/onnxruntime/issues/6549#issuecomment-1016948837) you can find a report in the `GPT-2` context from a Microsoft engineer: "*int8 quantization are not recommended due to accuracy loss*").
* [Onnx Runtime T5 export tool](https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/transformers/models/t5) targets both `GPU` and `CPU`. It works in a similar way to `FastT5`: the `decoder` module is exported 2 times. Like `FastT5`, the memory footprint of the decoder part is doubled (this time there is no `int-8` quantization).
* [FasterTransformer](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/t5_guide.md#translation-process) targets `GPU` and is a mix of `Pytorch` and `CUDA`/`C++` dedicated code. The performance boost is huge on `T5`: they report a 10X speedup, like `TensorRT`. However, it may significantly decrease the accuracy of the model ([here](https://github.com/NVIDIA/FasterTransformer/blob/main/docs/t5_guide.md#translation-process), when sampling is enabled, it reduces the BLEU score of the translation task by 8 points; the cause may be a bug in the decoding algorithm or an approximation that is a bit too aggressive), plus the speedup is computed on a [translation task](https://github.com/NVIDIA/FasterTransformer/blob/main/examples/pytorch/decoding/utils/translation/test.en) where sequences are 25 tokens long on average. In our experience, improvements on very short sequences tend to decrease by large margins on longer sequences. It seems to us that their objectives are different from ours.
With the existing solutions, you need to choose one or two items of the following:
* double decoder memory footprint;
* be slower than `Hugging Face` for moderately long sequence length / beam search;
* degrade output quality.
## Our approach
Our approach to make autoregressive `transformer` based models 2X faster than `Hugging Face` `Pytorch` implementation (the base line) is based on 3 key ingredients:
* storing 2 computation graphs in a single `Onnx` file: this let us have both cache and no cache support without having any duplicated weights,
* `zero copy` to retrieve output from `Onnx Runtime`: we built on our past work to connect `Pytorch` tensors (used in the decoding part) and `Onnx Runtime` in the most efficient way. Our previous work was to avoid a `host` <-> `GPU` tensor copy, but it still required a `GPU` <-> `GPU` copy. It is now part of the official `Onnx Runtime` documentation (apparently [thanks to our project](https://github.com/microsoft/onnxruntime/pull/10651)!). This time we found out a way to directly expose the internal state of `Onnx Runtime` through a `Pytorch` tensor in a zero copy way. Combined with the cache mechanism, this is responsible for most of the speedup we have obtained.
* a generic tool to convert any model (whatever the architecture) to `FP16` without any risk of having out of range values or rounding to zero: `FP16` is still the way to reduce the memory footprint of a model. The main issue is that some nodes may output values outside of the `FP16` range or round others to zero, resulting in `NaN` output; moreover, very small values may be rounded to zero, which is an issue for log and div operations. We have built a tool which detects those nodes so we can keep their precision in `FP32`. It's quite important to reduce the memory footprint of these models, not just because they tend to be huge, but also because past states (that we cache) and internal buffers can be even bigger than the weights of the model itself.
## Results
As demonstrated at the end of this notebook, **we are able to provide a X2 speedup** whatever the batch size, the sequence length or the model size.
> For `TensorRT` we have our own implementation of our approach described above which helps to provide similar latency to `Onnx Runtime`. It's in a Python script in the same folder as this notebook. We had to work around a documented limitation. Because of that the code is slightly more complex and we wanted to keep this notebook easy to follow.
```
! nvidia-smi
```
## `Onnx Runtime` compilation
Version 1.11.1 of `Onnx Runtime` and older have a bug which makes them much slower when most inputs are used by subgraphs of an `If` node.
Unfortunately, it's exactly what we will do below, so we need to compile our own version of `Onnx Runtime` until version 1.12 is released (in June 2022).
Code below has been tested on Ubuntu 22.04 and supposes that your machine has `CUDA` 11.4 installed.
If not, use the Docker image of this library.
We use a specific commit of `Onnx Runtime` with a better management of `If`/`Else`/`Then` `Onnx` nodes:
```shell
git clone --recursive https://github.com/Microsoft/onnxruntime
cd onnxruntime
git checkout -b fix_if 81d78706feb1dc923f3e43f7ba8ac30b55f5b19b
CUDACXX=/usr/local/cuda-11.4/bin/nvcc ./build.sh \
--config Release \
--build_wheel \
--parallel \
--use_cuda \
--cuda_home /usr/local/cuda-11.4 \
--cudnn_home /usr/lib/x86_64-linux-gnu/ \
--skip_test
# pip install ...
# other required dependencies
# pip install nvtx seaborn
```
On our machine, it takes around 20 minutes.
> to clear previous compilation, delete content of `./build` folder
```
import json
import random
from transformer_deploy.backends.ort_utils import get_keep_fp32_nodes
from transformer_deploy.backends.ort_utils import convert_fp16
import time
from typing import Callable, Dict, Optional, List
import matplotlib.pylab as plt
from onnxruntime import IOBinding
import numpy as np
import onnx
import torch
from pathlib import Path
from typing import Tuple
from onnx import GraphProto, ModelProto, helper
from torch.nn import Linear
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, PretrainedConfig, T5ForConditionalGeneration, TensorType
from transformers.generation_utils import GenerationMixin
from transformers.modeling_outputs import BaseModelOutputWithPastAndCrossAttentions, Seq2SeqLMOutput
from transformers.models.t5.modeling_t5 import T5Stack
from nvtx import nvtx
from copy import copy
from transformer_deploy.backends.ort_utils import create_model_for_provider, inference_onnx_binding
from transformer_deploy.backends.pytorch_utils import convert_to_onnx
import seaborn as sns
import operator
from collections import defaultdict
import gc
```
## Loading `Hugging Face` model / tokenizer
Below we load the model and set global variables of this notebook.
```
np.random.seed(123)
torch.random.manual_seed(123)
# other possible values: t5-small, t5-base, t5-large. t5-3b should work when ORT library is fixed
model_name = "t5-large"
tokenizer = AutoTokenizer.from_pretrained(model_name)
input_ids: torch.Tensor = tokenizer(
"translate English to French: This model is now very fast!", return_tensors=TensorType.PYTORCH
).input_ids
input_ids = input_ids.type(torch.int32).to("cuda")
pytorch_model: T5ForConditionalGeneration = AutoModelForSeq2SeqLM.from_pretrained(model_name)
pytorch_model = pytorch_model.eval()
pytorch_model = pytorch_model.cuda()
pytorch_model.config.use_cache = True # not really needed, just to make things obvious
num_layers = pytorch_model.config.num_layers
# tolerance between Onnx FP16 and Pytorch FP32.
# Rounding errors increase with number of layers: 1e-1 for t5-small, 5e-1 for large, 3 for 3b. 11b not tested.
# Do not impact final quality
fp16_default_tolerance = 5e-1
def are_equal(a: torch.Tensor, b: torch.Tensor, atol: float = fp16_default_tolerance) -> None:
assert np.allclose(a=a.detach().cpu().numpy(), b=b.detach().cpu().numpy(), atol=atol), f"{a}\n\nVS\n\n{b}"
def save_onnx(proto: onnx.ModelProto, model_path: str) -> None:
# protobuf doesn't support files > 2Gb; in this case, weights are stored in another binary file
save_external_data: bool = proto.ByteSize() > 2 * 1024**3
filename = Path(model_path).name
onnx.save_model(
proto=proto,
f=model_path,
save_as_external_data=save_external_data,
all_tensors_to_one_file=True,
location=filename + ".data",
)
def prepare_folder(path: str) -> Tuple[str, str]:
p = Path(path)
p.mkdir(parents=True, exist_ok=True)
[item.unlink() for item in Path(path).glob("*") if item.is_file()]
return path + "/model.onnx", path + "/model_fp16.onnx"
# create/clean folders where each model will be stored.
# as multiple files will be saved for T5-3B and 11B, we use different folders for the encoder and the decoders.
encoder_model_path, encoder_fp16_model_path = prepare_folder(path="./test-enc")
dec_cache_model_path, dec_cache_fp16_model_path = prepare_folder(path="./test-dec-cache")
dec_no_cache_model_path, dec_no_cache_fp16_model_path = prepare_folder(path="./test-dec-no-cache")
_, dec_if_fp16_model_path = prepare_folder(path="./test-dec-if")
# some outputs to compare with
out_enc: BaseModelOutputWithPastAndCrossAttentions = pytorch_model.encoder(input_ids=input_ids)
out_full: Seq2SeqLMOutput = pytorch_model(input_ids=input_ids, decoder_input_ids=input_ids)
```
# Export to Onnx
First step is to export the model to `Onnx` graph.
`T5` is made of 2 parts, an `encoder` and a `decoder`.
## Export encoder part
Exporting the `encoder` part doesn't pose any specific challenge.
We use the export function built for `Bert`-like models; the exported model is in `FP16`.
```
pytorch_model = pytorch_model.to("cuda")
convert_to_onnx(
model_pytorch=pytorch_model.encoder,
output_path=encoder_model_path,
inputs_pytorch={"input_ids": input_ids},
var_output_seq=True,
quantization=False,
)
```
## Conversion to mixed precision
### Why mixed precision?
As `T5` can have up to 11 billion parameters, it requires lots of computation, and even more importantly, it takes up lots of space in device memory.
We convert the `encoder` to half precision.
If we blindly convert the whole graph to `FP16`, we will have 2 issues:
* `overflow`: some nodes, like exponential nodes, will try to output values out of the `FP16` range; in the end you get some `NaN` values.
* `underflow`: values very close to 0 will be rounded to 0, which may be an issue for some operations like `Div` and `Log`.
### The challenge
Mixed precision is done out of the box by `Pytorch` and follows some strict rules described in https://pytorch.org/docs/stable/amp.html
Those rules are generic and quite conservative. Many nodes will be kept in `FP32` even if their output is always in the `FP16` range.
Other approaches we have found:
* `Onnx Runtime T5` [demo](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/transformers/models/t5/t5_helper.py): provides a list of operations to keep in `FP32` (Pow, ReduceMean, Add, Sqrt, Div, Mul, Softmax, Relu). We have found this approach to need more and more tweaking on larger networks and on the encoder part (the decoder part seems simpler to manage, https://github.com/microsoft/onnxruntime/issues/11119);
* `TensorRT T5` [demo](https://github.com/NVIDIA/TensorRT/tree/main/demo/HuggingFace/notebooks): provides the exact pattern of nodes to keep in `FP32`. This approach is much more effective, but implies lots of code to describe the patterns and may not generalize well: basically, what works for a `base` model may not work for an 11 billion parameter model. And it does not scale to other architectures without adaptations; for a library like `transformer-deploy`, it would lead to unmaintainable technical debt.
### Our approach
We have chosen an architecture agnostic approach: we inject random input sequences and audit the output of each computation graph node; finally, we make a list of all nodes that have output values out of the `FP16` range or close-to-zero values, and perform some cleaning (to avoid unnecessary casting).
We have chosen to use random values only for the `input_ids` field as the search space is limited: positive integers lower than the vocabulary size.
You can also decide to send real data from a dataset you want to work on.
To finish, we provide the list of nodes to keep in `FP32` to the conversion function.
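Before looking at the actual calls, here is a minimal sketch of the kind of check such an audit performs on each node output (assuming `numpy`; this is not the `transformer-deploy` implementation used below):
```
# Minimal sketch: flag a node output that would overflow or underflow in FP16.
import numpy as np

FP16_MAX = float(np.finfo(np.float16).max)    # ~65504, largest representable FP16 value
FP16_TINY = float(np.finfo(np.float16).tiny)  # ~6.1e-05, smallest normal positive FP16 value

def needs_fp32(node_output: np.ndarray) -> bool:
    values = np.abs(node_output.astype(np.float32))
    overflow = bool(np.any(values > FP16_MAX))                     # would become inf / NaN
    underflow = bool(np.any((values > 0) & (values < FP16_TINY)))  # would be rounded to zero
    return overflow or underflow

print(needs_fp32(np.array([1e5])))   # True: overflow
print(needs_fp32(np.array([1e-7])))  # True: rounded to zero (an issue for Log / Div)
print(needs_fp32(np.array([0.5])))   # False: safe to cast
```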
```
def get_random_input_encoder() -> Dict[str, torch.Tensor]:
max_seq = 512
seq_len = random.randint(a=1, b=max_seq)
batch = max_seq // seq_len
random_input_ids = torch.randint(
low=0, high=tokenizer.vocab_size, size=(batch, seq_len), dtype=torch.int32, device="cuda"
)
inputs = {"input_ids": random_input_ids}
return inputs
keep_fp32_encoder = get_keep_fp32_nodes(onnx_model_path=encoder_model_path, get_input=get_random_input_encoder)
assert len(keep_fp32_encoder) > 0
enc_model_onnx = convert_fp16(onnx_model=encoder_model_path, nodes_to_exclude=keep_fp32_encoder)
save_onnx(proto=enc_model_onnx, model_path=encoder_fp16_model_path)
del enc_model_onnx
torch.cuda.empty_cache()
gc.collect()
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_encoder)}):")
keep_fp32_encoder[:20]
```
Compare the output of the `Onnx` `FP16` model with `Pytorch` one
```
enc_fp16_onnx = create_model_for_provider(encoder_fp16_model_path, "CUDAExecutionProvider")
enc_fp16_onnx_binding: IOBinding = enc_fp16_onnx.io_binding()
enc_onnx_out = inference_onnx_binding(
model_onnx=enc_fp16_onnx,
binding=enc_fp16_onnx_binding,
inputs={"input_ids": input_ids},
device=input_ids.device.type,
)["output"]
are_equal(a=enc_onnx_out, b=out_enc.last_hidden_state)
```
## Export decoder
The decoder export part is more challenging:
* we first need to wrap it in a `Pytorch` model to add the final layer so its output provides scores for each vocabulary token and can be directly used by the `Hugging Face` `decoding` algorithm
* then, we need to manipulate the `Onnx` graph to add support of `Key`/`Value` cache
The second point is the key ingredient of the observed acceleration of `Onnx` vs `Hugging Face` inference.
### Wrapper to include some post-processing on the decoder output
The post-processing is mainly a projection of the decoder output on a matrix with one of its dimensions equal to model vocabulary size, so we have scores for each possible token.
```
class ExportT5(torch.nn.Module):
def __init__(self, decoder: T5Stack, lm_head: Linear):
super(ExportT5, self).__init__()
self.decoder = decoder
self.lm_head = lm_head
def forward(self, input_ids: torch.Tensor, encoder_hidden_states: torch.Tensor, past_key_values: Tuple = None):
out_dec = self.decoder.forward(
input_ids=input_ids, encoder_hidden_states=encoder_hidden_states, past_key_values=past_key_values
)
# Rescale output before projecting on vocab
out_dec["last_hidden_state"] = out_dec["last_hidden_state"] * (pytorch_model.model_dim**-0.5)
out_dec["last_hidden_state"] = self.lm_head(out_dec["last_hidden_state"])
return out_dec
pytorch_model.cuda()
model_decoder = ExportT5(decoder=pytorch_model.decoder, lm_head=pytorch_model.lm_head).eval()
out_model_export: torch.Tensor = model_decoder(input_ids=input_ids, encoder_hidden_states=out_enc.last_hidden_state)
are_equal(a=out_model_export["last_hidden_state"], b=out_full.logits)
```
### Export decoder part to `Onnx`
Below we export 2 versions of the decoder, one without cache support and one with it.
Model inputs with past states (cache support):
```
model_decoder.cuda()
# decoder output one step before
out_dec_pytorch = model_decoder(input_ids=input_ids[:, :-1], encoder_hidden_states=out_enc.last_hidden_state)
model_inputs = {
"input_ids": input_ids[:, -1:].type(torch.int32),
"encoder_hidden_states": out_enc.last_hidden_state,
"past_key_values": out_dec_pytorch.past_key_values,
}
input_names = ["input_ids", "encoder_hidden_states"]
for i in range(num_layers):
input_names.append(f"past_key_values.{i}.decoder.key")
input_names.append(f"past_key_values.{i}.decoder.value")
input_names.append(f"past_key_values.{i}.encoder.key")
input_names.append(f"past_key_values.{i}.encoder.value")
output_names = ["logits"]
for i in range(num_layers):
output_names.append(f"present.{i}.decoder.key")
output_names.append(f"present.{i}.decoder.value")
output_names.append(f"present.{i}.encoder.key")
output_names.append(f"present.{i}.encoder.value")
dynamic_axis = {
"input_ids": {0: "batch", 1: "encoder_sequence"},
"encoder_hidden_states": {0: "batch", 1: "encoder_sequence"},
"logits": {0: "batch", 1: "decoder_sequence"},
}
for i in range(num_layers):
dynamic_axis[f"past_key_values.{i}.decoder.key"] = {0: "batch", 2: "past_decoder_sequence"}
dynamic_axis[f"past_key_values.{i}.decoder.value"] = {0: "batch", 2: "past_decoder_sequence"}
dynamic_axis[f"past_key_values.{i}.encoder.key"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"past_key_values.{i}.encoder.value"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"present.{i}.decoder.key"] = {0: "batch", 2: "decoder_sequence"}
dynamic_axis[f"present.{i}.decoder.value"] = {0: "batch", 2: "decoder_sequence"}
dynamic_axis[f"present.{i}.encoder.key"] = {0: "batch", 2: "encoder_sequence_length"}
dynamic_axis[f"present.{i}.encoder.value"] = {0: "batch", 2: "encoder_sequence_length"}
```
Export of the model with cache support:
```
with torch.no_grad():
pytorch_model.config.return_dict = True
pytorch_model.eval()
# export can work with named args, but the dict containing the named args has to be the last element of the args tuple
torch.onnx.export(
model_decoder,
(model_inputs,),
f=dec_cache_model_path,
input_names=input_names,
output_names=output_names,
dynamic_axes=dynamic_axis,
do_constant_folding=True,
opset_version=13,
)
```
Export of the model computing Key/Values for the whole sequence (we basically just remove past states from the input, the `Pytorch` code will recompute them):
```
model_inputs_no_cache = {
"input_ids": input_ids,
"encoder_hidden_states": out_enc.last_hidden_state,
}
with torch.no_grad():
pytorch_model.config.return_dict = True
pytorch_model.eval()
# export can work with named args, but the dict containing the named args has to be the last element of the args tuple
torch.onnx.export(
model_decoder,
(model_inputs_no_cache,),
f=dec_no_cache_model_path,
input_names=list(model_inputs_no_cache.keys()),
output_names=output_names,
dynamic_axes={k: v for k, v in dynamic_axis.items() if "past_key_values" not in k},
do_constant_folding=True,
opset_version=13,
)
_ = pytorch_model.cpu() # free cuda memory
torch.cuda.empty_cache()
```
## Conversion to mixed precision
Decoder module has different kinds of inputs, `input_ids` but also some float tensors.
It would be a bit more complicated to generate random values for those tensors: in theory they can take any value in the FP32 range, but because of how models are initialized and trained, most of them are close to 0.
To avoid too much guessing, we have decided to just take the output of the real model being fed with random `input_ids`.
```
def get_random_input_no_cache() -> Dict[str, torch.Tensor]:
inputs = get_random_input_encoder()
encoder_hidden_states = inference_onnx_binding(
model_onnx=enc_fp16_onnx,
binding=enc_fp16_onnx_binding,
inputs=inputs,
device="cuda",
clone_tensor=False,
)["output"]
# it will serve as input of a FP32 model
inputs["encoder_hidden_states"] = encoder_hidden_states.type(torch.float32)
return inputs
keep_fp32_no_cache = get_keep_fp32_nodes(onnx_model_path=dec_no_cache_model_path, get_input=get_random_input_no_cache)
onnx_model_no_cache_fp16 = convert_fp16(onnx_model=dec_no_cache_model_path, nodes_to_exclude=keep_fp32_no_cache)
save_onnx(proto=onnx_model_no_cache_fp16, model_path=dec_no_cache_fp16_model_path)
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_no_cache)}):")
keep_fp32_no_cache[:20]
dec_no_cache_ort_model = create_model_for_provider(dec_no_cache_model_path, "CUDAExecutionProvider")
# use info from tokenizer size and max shape provided through the command line
def get_random_input_cache() -> Dict[str, torch.Tensor]:
inputs = get_random_input_no_cache()
dec_past_states = inference_onnx_binding(
model_onnx=dec_no_cache_ort_model,
inputs=inputs,
device="cuda",
clone_tensor=False,
)
for k, v in dec_past_states.items():
if k == "logits":
continue
new_k = k.replace("present", "past_key_values")
inputs[new_k] = v
batch, _ = inputs["input_ids"].shape
complement = torch.randint(low=0, high=tokenizer.vocab_size, size=(batch, 1), dtype=torch.int32, device="cuda")
inputs["input_ids"] = torch.concat(tensors=[inputs["input_ids"], complement], dim=1)
return inputs
keep_fp32_cache = get_keep_fp32_nodes(onnx_model_path=dec_cache_model_path, get_input=get_random_input_cache)
del dec_no_cache_ort_model # free cuda memory
torch.cuda.empty_cache()
gc.collect()
onnx_model_cache_fp16 = convert_fp16(onnx_model=dec_cache_model_path, nodes_to_exclude=keep_fp32_cache)
save_onnx(proto=onnx_model_cache_fp16, model_path=dec_cache_fp16_model_path)
print(f"20 first nodes to keep in FP32 (total {len(keep_fp32_cache)}):")
keep_fp32_cache[:20]
```
## Merge `Onnx` computation graph to deduplicate weights
Finally, we will merge the 2 decoders together.
The idea is simple:
* we prefix the node / edge names of one of them to avoid naming collision
* we deduplicate the weights (the same weight matrix will have different names in the 2 models)
* we join the 2 computation graphs through an `If` node
* we generate the `Onnx` file
The new model will take a new input, `enable_cache`. When it contains a `True` value, the computation graph with cache support is used.
> the code below is written to be easy to read, but it could be made much faster to run
```
prefix = "cache_node_"
mapping_initializer_cache_to_no_cache = dict()
# search for non-duplicated weights, called initializers in Onnx
to_add = list()
for node_cache in onnx_model_cache_fp16.graph.initializer:
found = False
for node_no_cache in onnx_model_no_cache_fp16.graph.initializer:
if node_cache.raw_data == node_no_cache.raw_data:
found = True
mapping_initializer_cache_to_no_cache[node_cache.name] = node_no_cache.name
break
if not found:
node_cache.name = prefix + node_cache.name
to_add.append(node_cache)
mapping_initializer_cache_to_no_cache[node_cache.name] = node_cache.name
onnx_model_no_cache_fp16.graph.initializer.extend(to_add)
# I/O model names should not be prefixed
model_io_names = [n.name for n in list(onnx_model_cache_fp16.graph.input) + list(onnx_model_cache_fp16.graph.output)]
# replace pointers to duplicated weights to their deduplicated version
for node in onnx_model_cache_fp16.graph.node:
for index, input_name in enumerate(node.input):
if input_name in model_io_names:
continue
node.input[index] = mapping_initializer_cache_to_no_cache.get(input_name, prefix + input_name)
for index, output_name in enumerate(node.output):
if output_name in model_io_names:
continue
node.output[index] = prefix + output_name
node.name = prefix + node.name
model_io_names = [n.name for n in list(onnx_model_cache_fp16.graph.input) + list(onnx_model_cache_fp16.graph.output)]
# prefix initializers of the no-cache graph whose names collide with model I/O names
prefix = "init_"
cache = dict()
for node in onnx_model_no_cache_fp16.graph.initializer:
if node.name in model_io_names:
new_name = prefix + node.name
cache[node.name] = new_name
node.name = new_name
for node in onnx_model_no_cache_fp16.graph.node:
for input_index, n in enumerate(node.input):
node.input[input_index] = cache.get(n, n)
# mandatory for subgraph in if/else node
assert len(onnx_model_cache_fp16.graph.output) == len(
onnx_model_no_cache_fp16.graph.output
), f"{len(onnx_model_cache_fp16.graph.output)} vs {len(onnx_model_no_cache_fp16.graph.output)}"
# build a computation graph with cache support
graph_cache: onnx.GraphProto = onnx.helper.make_graph(
nodes=list(onnx_model_cache_fp16.graph.node),
name="graph-cache",
inputs=[],
outputs=list(onnx_model_cache_fp16.graph.output),
initializer=[],
)
# build a computation which doesn't need past states to run
graph_no_cache: onnx.GraphProto = onnx.helper.make_graph(
nodes=list(onnx_model_no_cache_fp16.graph.node),
name="graph-no-cache",
inputs=[],
outputs=list(onnx_model_no_cache_fp16.graph.output),
initializer=[],
)
# a new input to decide if we use past state or not
enable_cache_input = onnx.helper.make_tensor_value_info(name="enable_cache", elem_type=onnx.TensorProto.BOOL, shape=[1])
if_node = onnx.helper.make_node(
op_type="If",
inputs=["enable_cache"],
outputs=[o.name for o in list(onnx_model_no_cache_fp16.graph.output)],
then_branch=graph_cache,
else_branch=graph_no_cache,
)
# final model which can disable its cache
if_graph_def: GraphProto = helper.make_graph(
nodes=[if_node],
name="if-model",
inputs=list(onnx_model_cache_fp16.graph.input) + [enable_cache_input],
outputs=list(onnx_model_no_cache_fp16.graph.output),
initializer=list(onnx_model_no_cache_fp16.graph.initializer),
)
# serialization and cleaning
model_if: ModelProto = helper.make_model(
if_graph_def, producer_name="onnx-example", opset_imports=[helper.make_opsetid(onnx.defs.ONNX_DOMAIN, 13)]
)
save_onnx(proto=model_if, model_path=dec_if_fp16_model_path)
del model_if
torch.cuda.empty_cache()
gc.collect()
```
### Check `Onnx` decoder output
Compare `Onnx` output with and without cache, plus compare with `Pytorch` output.
```
pytorch_model = pytorch_model.cuda()
model_decoder = model_decoder.cuda()
input_ids = input_ids.cuda()
pytorch_model = pytorch_model.eval()
model_decoder = model_decoder.eval()
dec_onnx = create_model_for_provider(dec_if_fp16_model_path, "CUDAExecutionProvider", log_severity=3)
dec_onnx_binding: IOBinding = dec_onnx.io_binding()
```
## Zero copy output
Below, we check that the new model outputs are similar to the ones from `Pytorch`.
We use our new implementation of the inference call.
The idea is the following:
* we ask `Onnx Runtime` to output a pointer to the `CUDA` array containing the result of the inference;
* we use `Cupy` API to wrap the array and provide information regarding tensor shape and type. `Cupy` doesn't own the data;
* we use `Dlpack` support to convert the `Cupy` tensor to `Pytorch`, another zero copy process.
This pipeline is unsafe, as the content of the tensor may change or disappear silently: only `Onnx Runtime` controls the array containing the data, and it will reuse it at the next inference call. Because we know that during text generation we discard each output before calling `Onnx Runtime` again, it works well in our case.
A second benefit of this approach is that we no longer have to guess the output shape.
Before using this approach, to avoid the output being stored in host memory (RAM), which made inference slower, we had to provide `Onnx Runtime` with a pointer to a `Pytorch` tensor of the right size. As the size changes with the sequence length (so it changes for each generated token), we had to keep the logic that guesses the size somewhere in the code. The new approach frees us from this burden.
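A simplified sketch of this chain, as a stand-in for the `inference_onnx_binding` helper used below; it assumes `ort_output` is a GPU-resident `OrtValue` retrieved from the IO binding, and the helper name and default dtype are illustrative only:
```python
import numpy as np
import cupy as cp
import torch
from torch.utils.dlpack import from_dlpack

def ort_value_to_torch(ort_output, dtype=np.float16) -> torch.Tensor:
    shape = tuple(ort_output.shape())
    nb_bytes = int(np.prod(shape)) * np.dtype(dtype).itemsize
    # wrap the raw CUDA pointer owned by Onnx Runtime; CuPy neither copies nor owns the data
    memory = cp.cuda.UnownedMemory(ort_output.data_ptr(), nb_bytes, owner=ort_output)
    pointer = cp.cuda.MemoryPointer(memory, 0)
    cupy_array = cp.ndarray(shape, dtype=dtype, memptr=pointer)
    # DLPack hands the very same memory to Pytorch, still without any copy
    return from_dlpack(cupy_array.toDlpack())
```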
```
pytorch_model = pytorch_model.half()
with torch.inference_mode():
out_enc_pytorch: BaseModelOutputWithPastAndCrossAttentions = pytorch_model.encoder(input_ids=input_ids)
previous_step_pytorch: BaseModelOutputWithPastAndCrossAttentions = model_decoder(
input_ids=input_ids[:, :-1], encoder_hidden_states=out_enc_pytorch.last_hidden_state
)
out_dec_pytorch: BaseModelOutputWithPastAndCrossAttentions = model_decoder(
input_ids=input_ids, encoder_hidden_states=out_enc_pytorch.last_hidden_state
)
def decoder_pytorch_inference(decoder_input_ids: torch.Tensor, encoder_hidden_states: torch.Tensor, **_):
with torch.inference_mode():
return model_decoder(input_ids=decoder_input_ids, encoder_hidden_states=encoder_hidden_states)
def decoder_onnx_inference(
decoder_input_ids: torch.Tensor,
encoder_hidden_states: torch.Tensor,
enable_cache: torch.Tensor,
past_key_values: Optional[torch.Tensor],
):
inputs_onnx_dict = {
"input_ids": decoder_input_ids,
"encoder_hidden_states": encoder_hidden_states,
"enable_cache": enable_cache,
}
if past_key_values is not None:
for index, (k_dec, v_dec, k_enc, v_enc) in enumerate(past_key_values):
inputs_onnx_dict[f"past_key_values.{index}.decoder.key"] = k_dec
inputs_onnx_dict[f"past_key_values.{index}.decoder.value"] = v_dec
inputs_onnx_dict[f"past_key_values.{index}.encoder.key"] = k_enc
inputs_onnx_dict[f"past_key_values.{index}.encoder.value"] = v_enc
result_dict = inference_onnx_binding(
model_onnx=dec_onnx,
inputs=inputs_onnx_dict,
binding=dec_onnx_binding, # recycle the binding
device=decoder_input_ids.device.type,
clone_tensor=False, # no memory copy -> best perf and lowest memory footprint!
)
past_states = list()
for index in range(pytorch_model.config.num_layers):
kv = (
result_dict[f"present.{index}.decoder.key"],
result_dict[f"present.{index}.decoder.value"],
result_dict[f"present.{index}.encoder.key"],
result_dict[f"present.{index}.encoder.value"],
)
past_states.append(kv)
return BaseModelOutputWithPastAndCrossAttentions(
last_hidden_state=result_dict["logits"],
past_key_values=past_states,
)
out_dec_onnx_no_cache = decoder_onnx_inference(
decoder_input_ids=input_ids,
encoder_hidden_states=out_enc_pytorch.last_hidden_state,
enable_cache=torch.tensor([False], device="cuda", dtype=torch.bool),
past_key_values=None,
)
are_equal(a=out_dec_onnx_no_cache.last_hidden_state[:, -1:, :], b=out_dec_pytorch.last_hidden_state[:, -1:, :])
# check that past states are identical between Onnx and Pytorch
assert len(out_dec_onnx_no_cache.past_key_values) == len(out_dec_pytorch.past_key_values)
for (o_dec_k, o_dev_v, o_enc_k, o_enc_v), (p_dec_k, p_dev_v, p_enc_k, p_enc_v) in zip(
out_dec_onnx_no_cache.past_key_values, out_dec_pytorch.past_key_values
):
are_equal(a=o_dec_k, b=p_dec_k)
are_equal(a=o_dev_v, b=p_dev_v)
are_equal(a=o_enc_k, b=p_enc_k)
are_equal(a=o_enc_v, b=p_enc_v)
out_dec_onnx_cache = decoder_onnx_inference(
decoder_input_ids=input_ids[:, -1:],
encoder_hidden_states=out_enc_pytorch.last_hidden_state,
enable_cache=torch.tensor([True], device="cuda", dtype=torch.bool),
past_key_values=previous_step_pytorch.past_key_values,
)
are_equal(a=out_dec_onnx_cache.last_hidden_state[:, -1:, :], b=out_dec_pytorch.last_hidden_state[:, -1:, :])
# check that past states are identical between Onnx and Pytorch
assert len(out_dec_onnx_cache.past_key_values) == len(out_dec_pytorch.past_key_values)
for (o_dec_k, o_dev_v, o_enc_k, o_enc_v), (p_dec_k, p_dev_v, p_enc_k, p_enc_v) in zip(
out_dec_onnx_cache.past_key_values, out_dec_pytorch.past_key_values
):
are_equal(a=o_dec_k, b=p_dec_k)
are_equal(a=o_dev_v, b=p_dev_v)
are_equal(a=o_enc_k, b=p_enc_k)
are_equal(a=o_enc_v, b=p_enc_v)
```
## Benchmarks!
Finally, we will compare the performance of 4 setups in end-to-end scenarios:
* `Pytorch`
* `Pytorch` + cache
* `Onnx`
* `Onnx` + cache
For the comparison, we first do a sanity check by just generating a short sequence (we have already checked that the output tensors are OK).
Then we force each model to generate:
* 256 tokens + batch size 1 (similar to `TensorRT` demo)
* 1000 tokens + batch size 4
```
def encoder_onnx_inference(input_ids: torch.Tensor, **_) -> BaseModelOutputWithPastAndCrossAttentions:
last_hidden_state = inference_onnx_binding(
model_onnx=enc_fp16_onnx, # noqa: F821
inputs={"input_ids": input_ids},
device=input_ids.device.type,
binding=enc_fp16_onnx_binding,
)["output"]
return BaseModelOutputWithPastAndCrossAttentions(last_hidden_state=last_hidden_state.type(torch.float16))
def encoder_pytorch_inference(input_ids, **_) -> BaseModelOutputWithPastAndCrossAttentions:
with torch.inference_mode():
res = pytorch_model.encoder(input_ids=input_ids).type(torch.float16)
return res
# https://github.com/NVIDIA/TensorRT/blob/main/demo/HuggingFace/T5/export.py
class ExtT5(torch.nn.Module, GenerationMixin):
def __init__(self, config: PretrainedConfig, device: torch.device, encoder_func: Callable, decoder_func: Callable):
super(ExtT5, self).__init__()
self.main_input_name = "input_ids" # https://github.com/huggingface/transformers/pull/14803
self.config: PretrainedConfig = config
self.device: torch.device = device
self.encoder_func = encoder_func
self.decoder_func = decoder_func
self.use_cache = True
self.timings = list()
def get_encoder(self):
return self.encoder_func
def get_decoder(self):
return self.decoder_func
def set_cache(self, enable: bool) -> None:
self.use_cache = enable
# from transformers library (modeling_t5.py)
def _reorder_cache(self, past, beam_idx):
reordered_decoder_past = ()
for layer_past_states in past:
# get the correct batch idx from layer past batch dim
# batch dim of `past` is at 2nd position
reordered_layer_past_states = ()
for layer_past_state in layer_past_states:
# need to set correct `past` for each of the four key / value states
reordered_layer_past_states = reordered_layer_past_states + (
layer_past_state.index_select(0, beam_idx),
)
assert reordered_layer_past_states[0].shape == layer_past_states[0].shape
assert len(reordered_layer_past_states) == len(layer_past_states)
reordered_decoder_past = reordered_decoder_past + (reordered_layer_past_states,)
return reordered_decoder_past
def prepare_inputs_for_generation(self, input_ids, past=None, use_cache=None, **kwargs) -> Dict[str, torch.Tensor]:
params = {
"encoder_hidden_states": kwargs["encoder_outputs"]["last_hidden_state"],
}
if past is None: # this is the 1st inferred token
self.timings = list()
if not self.use_cache:
past = None
if past is None:
params[self.main_input_name] = input_ids
params["enable_cache"] = torch.tensor([False], device="cuda", dtype=torch.bool)
else:
params[self.main_input_name] = input_ids[:, -1:]
params["enable_cache"] = torch.tensor([True], device="cuda", dtype=torch.bool)
params["past_key_values"] = past
return params
def forward(
self,
input_ids: torch.Tensor,
encoder_hidden_states: torch.Tensor,
enable_cache: torch.Tensor,
past_key_values: Optional[torch.Tensor] = None,
**_,
):
start_timer = time.monotonic()
dec_output = self.get_decoder()(
decoder_input_ids=input_ids,
encoder_hidden_states=encoder_hidden_states,
enable_cache=enable_cache,
past_key_values=past_key_values,
)
self.timings.append(time.monotonic() - start_timer)
return Seq2SeqLMOutput(logits=dec_output.last_hidden_state, past_key_values=dec_output.past_key_values)
model_gen = (
ExtT5(
config=pytorch_model.config,
device=pytorch_model.device,
encoder_func=encoder_onnx_inference, # encoder_pytorch_inference
decoder_func=decoder_onnx_inference, # decoder_pytorch_inference
)
.cuda()
.eval()
)
torch.cuda.synchronize()
with torch.inference_mode():
print("Onnx:")
print(
tokenizer.decode(
model_gen.generate(
inputs=input_ids,
min_length=3,
max_length=60,
num_beams=4,
no_repeat_ngram_size=2,
)[0],
skip_special_tokens=True,
)
)
print("Pytorch:")
print(
tokenizer.decode(
pytorch_model.generate(
input_ids=input_ids,
min_length=3,
max_length=60,
num_beams=4,
no_repeat_ngram_size=2,
)[0],
skip_special_tokens=True,
)
)
def print_timings(name: str, total: float, inference: float):
percent_inference = 100 * inference / total
print(f"{name}: {total:.1f}, including inference: {inference:.1f} ({percent_inference:.1f}%)")
all_timings: Dict[str, Dict[str, List[float]]] = dict()
for seq_len, num_beam in [(256, 1), (1000, 4)]:
timings = dict()
print(f"seq len: {seq_len} / # beam (batch size): {num_beam}")
task = "Onnx"
with nvtx.annotate(
task, color="red"
): # nvtx is for Nvidia nsight profiler, you can remove the line or install the library
model_gen.set_cache(enable=False)
# warmup
model_gen.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
start = time.monotonic()
model_gen.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
print_timings(name=task, total=total_time, inference=sum(model_gen.timings))
timings[f"{task}"] = model_gen.timings
task = "Onnx + cache"
with nvtx.annotate(task, color="red"):
model_gen.set_cache(enable=True)
# warmup
model_gen.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
start = time.monotonic()
model_gen.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
print_timings(name=task, total=total_time, inference=sum(model_gen.timings))
timings[f"{task}"] = model_gen.timings
# monkey patching of the forward function to add a timer per generated token
old_fw = pytorch_model.forward
timing_pytorch = list()
def new_fw(self, *args, **kwargs):
timer_start = time.monotonic()
res = old_fw(self, *args, **kwargs)
torch.cuda.synchronize() # makes timings correct without having significant impact on e2e latency
total_time = time.monotonic() - timer_start
timing_pytorch.append(total_time)
return res
task = "Pytorch"
with nvtx.annotate(task, color="orange"):
pytorch_model.config.use_cache = False
with torch.inference_mode():
with torch.cuda.amp.autocast():
# warmup
pytorch_model.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
pytorch_model.forward = new_fw.__get__(pytorch_model)
start = time.monotonic()
pytorch_model.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
pytorch_model.forward = old_fw
inference_time = np.sum(timing_pytorch)
print_timings(name="Pytorch", total=total_time, inference=inference_time)
timing_pytorch_no_cache = copy(timing_pytorch)
timings[f"{task}"] = copy(timing_pytorch)
timing_pytorch.clear()
torch.cuda.empty_cache()
task = "Pytorch + cache"
with nvtx.annotate("Pytorch + cache", color="green"):
pytorch_model.config.use_cache = True
with torch.inference_mode():
with torch.cuda.amp.autocast():
# warmup
pytorch_model.generate(inputs=input_ids, max_length=10, num_beams=num_beam, min_length=10)
pytorch_model.forward = new_fw.__get__(pytorch_model)
start = time.monotonic()
pytorch_model.generate(inputs=input_ids, max_length=seq_len, num_beams=num_beam, min_length=seq_len)
total_time = time.monotonic() - start
pytorch_model.forward = old_fw
print_timings(name="Pytorch + cache", total=total_time, inference=sum(timing_pytorch))
timings[f"{task}"] = copy(timing_pytorch)
timing_pytorch.clear()
all_timings[f"{seq_len} / {num_beam}"] = timings
torch.cuda.empty_cache()
```
## Benchmark analysis
Below, we plot for each setup (short and long sequence):
* the time spent on each token generation
* the full time to generate the sequence (for each length)
We can see that for a short sequence and a batch size of 1, with or without cache, latency appears to be stable.
However, for longer sequences, we can see that the no-cache approach (whether `Pytorch` or `Onnx` based) doesn't scale well, and at some point `Onnx` is even slower than the `Hugging Face` code with cache support.
On the other hand, `Onnx` timings with cache enabled are mostly stable whatever the sequence length, which is quite remarkable.
This is because we work on one token at a time, which converts the quadratic complexity of the attention layer into a linear one.
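A back-of-the-envelope illustration of that claim, with a toy cost model rather than measured numbers:
```python
# without cache, step i re-runs self-attention over i tokens (~i**2 work per step);
# with cache, the single new token only attends to the i cached keys/values (~i work per step).
n = 1000  # number of generated tokens
no_cache_work = sum(i ** 2 for i in range(1, n + 1))
cache_work = sum(i for i in range(1, n + 1))
print(f"attention work ratio for {n} tokens: {no_cache_work / cache_work:.0f}x")
```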
```
sns.set_style("darkgrid") # darkgrid, whitegrid, dark, white and ticks
plt.rc("axes", titlesize=15) # fontsize of the axes title
plt.rc("axes", labelsize=14) # fontsize of the x and y labels
plt.rc("xtick", labelsize=13) # fontsize of the tick labels
plt.rc("ytick", labelsize=13) # fontsize of the tick labels
plt.rc("legend", fontsize=15) # legend fontsize
plt.rc("font", size=13) # controls default text sizes
colors = sns.color_palette("deep")
fig = plt.figure(constrained_layout=True, figsize=(12, 8))
subfigs = fig.subfigures(nrows=2, ncols=1)
fig.supxlabel("seq len (# tokens)")
fig.supylabel("latency (s)")
fig.suptitle(f"Small seq len and greedy search on {model_name} don't tell the whole (inference) story...")
for row, (plot_name, timings) in enumerate(all_timings.items()):
subfigs[row].suptitle(f"setup #{1+row}: {plot_name} (seq len / beam search)")
axs = subfigs[row].subplots(nrows=1, ncols=2)
for col, accumulated in enumerate([False, True]):
plot_axis = axs[col]
for index, (k, v) in enumerate(timings.items()):
axis = range(len(v))
color = colors[index]
v = np.array(v)
# remove extreme values
p99 = np.percentile(v, 99)
v[v > p99] = p99
v = np.cumsum(v) if accumulated else v
plot_axis.scatter(axis, v, label=k, s=2)
title = f"latency for the full sequence" if accumulated else f"latency for each token"
plot_axis.title.set_text(title)
# legend deduplication
handles, labels = plt.gca().get_legend_handles_labels()
by_label = dict(zip(labels, handles))
fig.legend(by_label.values(), by_label.keys(), bbox_to_anchor=(1, 1), loc="upper left", markerscale=5)
plt.show()
```
## Profiling model at the kernel level
Below we reload the decoder model with `Onnx Runtime` kernel profiling enabled.
It will help us to understand on which part of the computation graph the GPU spends its time.
The number of events that `Onnx Runtime` can save is limited to [1 million](https://github.com/microsoft/onnxruntime/blob/a4b5fa334aa939fb159bdc571ed3d56ca8d31fc7/onnxruntime/core/common/profiler.cc#L10).
It is not an issue, as we have seen that timings per token are mostly stable, so having information for only the first n tokens doesn't change anything.
The main information it gives us is that 30% of the time is spent on matrix multiplication when caching is used.
The rest of the time is spent on mostly memory bound operations:
* element-wise operations which require little computation (`add`, `mul`, `div`, etc.)
* copy pasting tensors `GPU` <-> `GPU` with little transformation in between (`transpose`, `concat`, `cast`, etc.)
It matches the information provided by both `nvidia-smi` and `Nvidia Nsight` (the GPU profiler from Nvidia): the GPU is underutilized.
That's why we think that a tool like `TensorRT`, which performs aggressive kernel fusion and reduces the time spent on memory-bound operations, should be a good fit for autoregressive models.
> there is a nice opportunity to increase the speedup by reducing the number of casting operations. We keep this work for the future.
```
dec_onnx = create_model_for_provider(
dec_if_fp16_model_path, "CUDAExecutionProvider", enable_profiling=True, log_severity=3
)
dec_onnx_binding: IOBinding = dec_onnx.io_binding()
_ = model_gen.generate(inputs=input_ids, max_length=10, num_beams=4, min_length=10)
profile_name = dec_onnx.end_profiling()
with open(profile_name) as f:
content = json.load(f)
op_timings = defaultdict(lambda: 0)
for c in content:
if "op_name" not in c["args"]:
continue
op_name = c["args"]["op_name"]
if op_name == "If":
continue # subgraph
time_taken = c["dur"]
op_timings[op_name] += time_taken
op_timings_filter = dict(sorted(op_timings.items(), key=operator.itemgetter(1), reverse=True)[:10])
total_kernel_timing = sum(op_timings.values())
op_timings_percent = {k: 100 * v / total_kernel_timing for k, v in op_timings_filter.items()}
plt.barh(list(op_timings_percent.keys()), list(op_timings_percent.values()))
plt.title("Time spent per kernel\n(top 10 kernels)")
plt.xlabel("% total inference time")
plt.show()
```
```
!wget -q https://github.com/CISC-372/Notebook/releases/download/a4/test.csv
!wget -q https://github.com/CISC-372/Notebook/releases/download/a4/train.csv
# comment your understanding of each function
import pandas as pd
import csv
xy_train_df = pd.read_csv('train.csv')
x_test_df = pd.read_csv('test.csv', index_col='id')
xy_train_df['length'] = xy_train_df.apply(lambda x: len(x.review), axis=1)
xy_train_df = xy_train_df.sort_values('length')
xy_train_df
# comment your understanding of each function and each parameter below:
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from sklearn.model_selection import train_test_split
vocab_size = 10000
max_len = 256
xy_train, xy_validation = train_test_split(
xy_train_df, test_size=0.2)
# build vocabulary from training set
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(xy_train.review)
def _preprocess(texts):
return pad_sequences(
tokenizer.texts_to_sequences(texts),
maxlen=max_len,
padding='post'
)
x_train = _preprocess(xy_train.review)
y_train = xy_train.rating
x_valid = _preprocess(xy_validation.review)
y_valid = xy_validation.rating
x_test = _preprocess(x_test_df.review)
print(x_train.shape)
print(x_valid.shape)
print(x_test.shape)
from __future__ import absolute_import, division, print_function, unicode_literals
import collections
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras.optimizers import Adam
# comment your understanding of each line and
# the output shape of each line below. for each dimensionality, explains its
# meaning. (e.g. None is the batch size)
x = keras.Input((max_len,))
embeded = keras.layers.Embedding(vocab_size, 20)(x)
averaged = tf.reduce_mean(embeded, axis=1)
pred = keras.layers.Dense(1, activation=tf.nn.sigmoid)(averaged)
model = keras.Model(x, pred)
model.compile(
optimizer=Adam(clipnorm=4.),
loss='binary_crossentropy',
metrics=['accuracy'])
history = model.fit(x_train,
y_train,
epochs=5,
batch_size=64,
validation_data=(x_valid, y_valid),
verbose=1)
model.evaluate(x_valid, y_valid)
def predict_class(_dataset):
classes = model.predict(_dataset) > 0.5
return np.squeeze(classes * 1)
y_predict = predict_class(x_valid)
from sklearn.metrics import f1_score
from sklearn.metrics import confusion_matrix
print(f1_score(y_valid, y_predict, average='micro'))
# submission
pd.DataFrame(
{'id': x_test_df.index,
'rating': predict_class(x_test)}).to_csv('sample_submission.csv', index=False)
```
## Exploratory analysis of the US Airport Dataset
This dataset contains data for 25 years [1995-2015] of flights between various US airports and metadata about these routes, taken from the Bureau of Transportation Statistics, United States Department of Transportation.
Let's see what we can make out of this!
```
%matplotlib inline
import networkx as nx
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import warnings
warnings.filterwarnings('ignore')
pass_air_data = pd.read_csv('datasets/passengers.csv')
```
In the `pass_air_data` dataframe we have the number of people that fly every year on a particular route, along with the list of airlines that fly that route.
```
pass_air_data.head()
# Create a MultiDiGraph from this dataset
passenger_graph = nx.from_pandas_edgelist(pass_air_data, source='ORIGIN', target='DEST', edge_attr=['YEAR', 'PASSENGERS', 'UNIQUE_CARRIER_NAME'], create_using=nx.MultiDiGraph())
```
### Cleveland to Chicago, how many people fly this route?
```
passenger_graph['CLE']['ORD'][25]
temp = [(i['YEAR'], i['PASSENGERS'])for i in dict(passenger_graph['CLE']['ORD']).values()]
x, y = zip(*temp)
plt.plot(x, y)
plt.show()
```
## Exercise
Find the busiest route in 1990 and in 2015 according to number of passengers, and plot the time series of number of passengers on these routes.
You can use the DataFrame instead of working with the network. It will be faster ;)
[5 mins]
```
temp = pass_air_data.groupby(['YEAR'])['PASSENGERS'].transform(max) == pass_air_data['PASSENGERS']
pass_air_data[temp][pass_air_data.YEAR.isin([1990, 2015])]
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'HNL')].plot('YEAR', 'PASSENGERS')
pass_air_data[(pass_air_data['ORIGIN'] == 'LAX') & (pass_air_data['DEST'] == 'SFO')].plot('YEAR', 'PASSENGERS')
```
So let's have a look at the important nodes in this network, i.e. the important airports. We'll use pagerank, betweenness centrality and degree centrality.
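As a quick refresher on what these measures capture, here is a toy example on a small star graph (not the airport data):
```python
import networkx as nx

star = nx.star_graph(4)  # node 0 connected to nodes 1..4
print(nx.degree_centrality(star)[0])       # 1.0: the hub is connected to every other node
print(nx.betweenness_centrality(star)[0])  # 1.0: every leaf-to-leaf shortest path goes through the hub
print(nx.pagerank(star)[0])                # largest pagerank value in the graph
```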
```
# nx.pagerank(passenger_graph)
def year_network(G, year):
temp_g = nx.DiGraph()
for i in G.edges(data=True):
if i[2]['YEAR'] == year:
temp_g.add_edge(i[0], i[1], weight=i[2]['PASSENGERS'])
return temp_g
pass_2015 = year_network(passenger_graph, 2015)
len(pass_2015)
len(pass_2015.edges())
# Load in the GPS coordinates of all the airports
lat_long = pd.read_csv('datasets/GlobalAirportDatabase.txt', delimiter=':', header=None)
lat_long[lat_long[1].isin(list(pass_2015.nodes()))]
pos_dict = {}
for airport in lat_long[lat_long[1].isin(list(pass_2015.nodes()))].iterrows():
pos_dict[airport[1][1]] = (airport[1][15], airport[1][14])
pos_dict
```
## Exercise
Using the position dictionary `pos_dict` create a plot of the airports, only the nodes not the edges.
- As we don't have coordinates for all the airports we have to create a subgraph first.
- Use `nx.subgraph(Graph, iterable of nodes)` to create the subgraph
- Use `nx.draw_networkx_nodes(G, pos)` to map the nodes.
or
- Just use a scatter plot :)
```
plt.figure(figsize=(20, 9))
G = nx.subgraph(pass_2015, pos_dict.keys())
nx.draw_networkx_nodes(G, pos=pos_dict, node_size=10, alpha=0.6, node_color='b')
# nx.draw_networkx_edges(G, pos=pos_dict, width=0.1, arrows=False)
plt.show()
plt.figure(figsize=(20, 9))
x = [i[0] for i in pos_dict.values()]
y = [i[1] for i in pos_dict.values()]
plt.scatter(x, y)
```
### What about degree distribution of this network?
```
plt.hist(list(nx.degree_centrality(pass_2015).values()))
plt.show()
```
Let's plot a log log plot to get a better overview of this.
```
d = {}
for i, j in dict(nx.degree(pass_2015)).items():
if j in d:
d[j] += 1
else:
d[j] = 1
x = np.log2(list((d.keys())))
y = np.log2(list(d.values()))
plt.scatter(x, y, alpha=0.4)
plt.show()
```
### Directed Graphs

```
G = nx.DiGraph()
G.add_edge(1, 2, weight=1)
# print(G.edges())
# G[1][2]
# G[2][1]
# G.is_directed()
# type(G)
G.add_edges_from([(1, 2), (3, 2), (4, 2), (5, 2), (6, 2), (7, 2)])
nx.draw_circular(G, with_labels=True)
G.in_degree()
nx.pagerank(G)
G.add_edge(5, 6)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
G.add_edge(2, 8)
nx.draw_circular(G, with_labels=True)
nx.pagerank(G)
```
### Moving back to Airports
```
sorted(nx.pagerank(pass_2015, weight=None).items(), key=lambda x:x[1], reverse=True)[:10]
sorted(nx.betweenness_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(pass_2015).items(), key=lambda x:x[1], reverse=True)[0:10]
```
'ANC' is the airport code of Anchorage airport, a place in Alaska, and according to pagerank and betweenness centrality it is the most important airport in this network. Isn't that weird? Thoughts?
related blog post: https://toreopsahl.com/2011/08/12/why-anchorage-is-not-that-important-binary-ties-and-sample-selection/
Let's look at the weighted version, i.e. taking into account the number of people flying to these places.
```
sorted(nx.betweenness_centrality(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.pagerank(pass_2015, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
```
## How reachable is this network?
We calculate the average shortest path length of this network; it gives us an idea of the number of hops we need to make to go from one airport to any other airport in this network.
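As a tiny worked example of the metric on a toy graph (not the airport network): in an undirected 4-node path the six shortest path lengths are 1, 1, 1, 2, 2 and 3, so the average is 10/6 ≈ 1.67.
```python
import networkx as nx

toy = nx.path_graph(4)  # a - b - c - d
print(nx.average_shortest_path_length(toy))  # 1.666...
```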
```
# nx.average_shortest_path_length(pass_2015)
```
Wait, What??? This network is not connected. That seems like a really stupid thing to do.
```
list(nx.weakly_connected_components(pass_2015))
```
### SPB, SSB, AIK anyone?
```
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['ORIGIN'] == 'AIK')]
pass_2015.remove_nodes_from(['SPB', 'SSB', 'AIK'])
nx.is_weakly_connected(pass_2015)
nx.is_strongly_connected(pass_2015)
```
### Strongly vs weakly connected graphs.
```
G = nx.DiGraph()
G.add_edge(1, 2)
G.add_edge(2, 3)
G.add_edge(3, 1)
nx.draw(G)
G.add_edge(3, 4)
nx.draw(G)
nx.is_strongly_connected(G)
list(nx.strongly_connected_components(pass_2015))
pass_air_data[(pass_air_data['YEAR'] == 2015) & (pass_air_data['DEST'] == 'TSP')]
pass_2015_strong = max(nx.strongly_connected_component_subgraphs(pass_2015), key=len)
len(pass_2015_strong)
nx.average_shortest_path_length(pass_2015_strong)
```
#### Exercise! (Actually this is a game :D)
How can we decrease the avg shortest path length of this network?
Think of an effective way to add new edges to decrease the avg shortest path length.
Let's see if we can come up with a nice way to do this, and the one who gets the highest decrease wins!!!
The rules are simple:
- You can't add more than 2% of the current edges (~500 edges)
[10 mins]
```
sort_degree = sorted(nx.degree_centrality(pass_2015_strong).items(), key=lambda x:x[1], reverse=True)
top_count = 0
for n, v in sort_degree:
count = 0
for node, val in sort_degree:
if node != n:
if node not in pass_2015_strong.adj[n]:
pass_2015_strong.add_edge(n, node)
count += 1
if count == 25:
break
top_count += 1
if top_count == 20:
break
nx.average_shortest_path_length(pass_2015_strong)
```
### What about airlines? Can we find airline specific reachability?
```
passenger_graph['JFK']['SFO'][25]
def str_to_list(a):
return a[1:-1].split(', ')
for i in str_to_list(passenger_graph['JFK']['SFO'][25]['UNIQUE_CARRIER_NAME']):
print(i)
%%time
for origin, dest in passenger_graph.edges():
for key in passenger_graph[origin][dest]:
passenger_graph[origin][dest][key]['airlines'] = str_to_list(passenger_graph[origin][dest][key]['UNIQUE_CARRIER_NAME'])
```
### Exercise
Play around with United Airlines network.
- Extract a network for United Airlines flights from the metagraph `passenger_graph` for the year 2015
- Make sure it's a weighted network, where weight is the number of passengers.
- Find the number of airports and connections in this network
- Find the most important airport, according to PageRank and degree centrality.
```
united_network = nx.DiGraph()
for origin, dest in passenger_graph.edges():
if 25 in passenger_graph[origin][dest]:
if "'United Air Lines Inc.'" in passenger_graph[origin][dest][25]['airlines']:
united_network.add_edge(origin, dest, weight=passenger_graph[origin][dest][25]['PASSENGERS'])
len(united_network)
len(united_network.edges())
sorted(nx.pagerank(united_network, weight='weight').items(), key=lambda x:x[1], reverse=True)[0:10]
sorted(nx.degree_centrality(united_network).items(), key=lambda x:x[1], reverse=True)[0:10]
```
### Exercise
We are in Cleveland so what should we do?
Obviously we will make a time series of number of passengers flying out of Cleveland with United Airlines over the years.
There are 2 ways of doing it.
- Create a new multidigraph specifically for this exercise.
OR
- exploit the `pass_air_data` dataframe.
```
pass_air_data[(pass_air_data.ORIGIN == 'CLE') &
(pass_air_data.UNIQUE_CARRIER_NAME.str.contains('United Air Lines Inc.'))
].groupby('YEAR')['PASSENGERS'].sum().plot()
```
500 hPa Vorticity Advection
===========================
Plot a 500-hPa map, calculating vorticity advection using MetPy calculations.
Beyond just plotting 500-hPa level data, this uses calculations from `metpy.calc` to find
the vorticity and vorticity advection. Currently, this needs an extra helper function to
calculate the distance between lat/lon grid points.
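For reference, the shaded field produced below is (up to the scaling factor applied in the code) the horizontal advection of absolute vorticity, $-\vec{V}_h \cdot \nabla\left(\zeta + f\right)$, where $\zeta$ is the relative vorticity and $f$ the Coriolis parameter; this is the quantity `mpcalc.advection` computes for the supplied scalar field and wind components.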
Imports
```
from datetime import datetime
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import metpy.calc as mpcalc
import numpy as np
import scipy.ndimage as ndimage
from metpy.units import units
from netCDF4 import num2date
from siphon.catalog import TDSCatalog
```
Data Acquisition
----------------
```
dt = datetime(2016, 4, 16, 18)
# Assemble our URL to the THREDDS Data Server catalog,
# and access our desired dataset within via NCSS
base_url = 'https://www.ncei.noaa.gov/thredds/catalog/model-namanl-old/'
cat = TDSCatalog(f'{base_url}{dt:%Y%m}/{dt:%Y%m%d}/catalog.xml')
ncss = cat.datasets[f'namanl_218_{dt:%Y%m%d}_{dt:%H}00_000.grb'].subset()
# Build the query for our desired analysis time and variables
query = ncss.query()
query.time(dt)
query.accept('netcdf')
query.variables('Geopotential_height_isobaric',
'u-component_of_wind_isobaric',
'v-component_of_wind_isobaric')
query.add_lonlat()
# Obtain our queried data
ds = ncss.get_data(query)
lon = ds.variables['lon'][:]
lat = ds.variables['lat'][:]
times = ds.variables[ds.variables['Geopotential_height_isobaric'].dimensions[0]]
vtime = num2date(times[:].squeeze(), units=times.units)
lev_500 = np.where(ds.variables['isobaric'][:] == 500)[0][0]
hght_500 = ds.variables['Geopotential_height_isobaric'][0, lev_500, :, :]
hght_500 = ndimage.gaussian_filter(hght_500, sigma=3, order=0) * units.meter
uwnd_500 = units('m/s') * ds.variables['u-component_of_wind_isobaric'][0, lev_500, :, :]
vwnd_500 = units('m/s') * ds.variables['v-component_of_wind_isobaric'][0, lev_500, :, :]
```
Begin Data Calculations
-----------------------
```
dx, dy = mpcalc.lat_lon_grid_deltas(lon, lat)
f = mpcalc.coriolis_parameter(np.deg2rad(lat)).to(units('1/sec'))
avor = mpcalc.vorticity(uwnd_500, vwnd_500, dx, dy, dim_order='yx') + f
avor = ndimage.gaussian_filter(avor, sigma=3, order=0) * units('1/s')
vort_adv = mpcalc.advection(avor, [uwnd_500, vwnd_500], (dx, dy), dim_order='yx') * 1e9
```
Map Creation
------------
```
# Set up Coordinate System for Plot and Transforms
dproj = ds.variables['LambertConformal_Projection']
globe = ccrs.Globe(ellipse='sphere', semimajor_axis=dproj.earth_radius,
semiminor_axis=dproj.earth_radius)
datacrs = ccrs.LambertConformal(central_latitude=dproj.latitude_of_projection_origin,
central_longitude=dproj.longitude_of_central_meridian,
standard_parallels=[dproj.standard_parallel],
globe=globe)
plotcrs = ccrs.LambertConformal(central_latitude=45., central_longitude=-100.,
standard_parallels=[30, 60])
fig = plt.figure(1, figsize=(14., 12))
gs = gridspec.GridSpec(2, 1, height_ratios=[1, .02], bottom=.07, top=.99,
hspace=0.01, wspace=0.01)
ax = plt.subplot(gs[0], projection=plotcrs)
# Plot Titles
plt.title(r'500-hPa Heights (m), AVOR$*10^5$ ($s^{-1}$), AVOR Adv$*10^8$ ($s^{-2}$)',
loc='left')
plt.title(f'VALID: {vtime}', loc='right')
# Plot Background
ax.set_extent([235., 290., 20., 58.], ccrs.PlateCarree())
ax.coastlines('50m', edgecolor='black', linewidth=0.75)
ax.add_feature(cfeature.STATES, linewidth=.5)
# Plot Height Contours
clev500 = np.arange(5100, 6061, 60)
cs = ax.contour(lon, lat, hght_500.m, clev500, colors='black', linewidths=1.0,
linestyles='solid', transform=ccrs.PlateCarree())
plt.clabel(cs, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Absolute Vorticity Contours
clevvort500 = np.arange(-9, 50, 5)
cs2 = ax.contour(lon, lat, avor*10**5, clevvort500, colors='grey',
linewidths=1.25, linestyles='dashed', transform=ccrs.PlateCarree())
plt.clabel(cs2, fontsize=10, inline=1, inline_spacing=10, fmt='%i',
rightside_up=True, use_clabeltext=True)
# Plot Colorfill of Vorticity Advection
clev_avoradv = np.arange(-30, 31, 5)
cf = ax.contourf(lon, lat, vort_adv.m, clev_avoradv[clev_avoradv != 0], extend='both',
cmap='bwr', transform=ccrs.PlateCarree())
cax = plt.subplot(gs[1])
cb = plt.colorbar(cf, cax=cax, orientation='horizontal', extendrect='True', ticks=clev_avoradv)
cb.set_label(r'$1/s^2$', size='large')
# Plot Wind Barbs
# Transform Vectors and plot wind barbs.
ax.barbs(lon, lat, uwnd_500.m, vwnd_500.m, length=6, regrid_shape=20,
pivot='middle', transform=ccrs.PlateCarree())
```
```
"""
Snowflake Batch Prediction API Snowflake S3 scoring job
v1.0 Mike Taveirne (doyouevendata) 3/21/2020
"""
import pandas as pd
import requests
import time
from pandas.io.json import json_normalize
import snowflake.connector
import my_creds
#from imp import reload
#reload(my_creds)
# datarobot parameters
API_KEY = my_creds.API_KEY
USERNAME = my_creds.USERNAME
DEPLOYMENT_ID = my_creds.DEPLOYMENT_ID
DATAROBOT_KEY = my_creds.DATAROBOT_KEY
# replace with the load balancer for your prediction instance(s)
DR_PREDICTION_HOST = my_creds.DR_PREDICTION_HOST
DR_APP_HOST = 'https://app.datarobot.com'
DR_MODELING_HEADERS = {'Content-Type': 'application/json', 'Authorization': 'token %s' % API_KEY}
# snowflake parameters
SNOW_ACCOUNT = my_creds.SNOW_ACCOUNT
SNOW_USER = my_creds.SNOW_USER
SNOW_PASS = my_creds.SNOW_PASS
SNOW_DB = 'TITANIC'
SNOW_SCHEMA = 'PUBLIC'
# ETL parameters
JOB_NAME = 'pass_scoring'
```
### Retrieve or Create S3 Credentials
```
# get a saved credential set, return None if not found
def dr_get_catalog_credentials(name, cred_type):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
credentials_id = None
response = requests.get(
DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
)
if response.status_code == 200:
df = pd.io.json.json_normalize(response.json()['data'])[['credentialId', 'name', 'credentialType']]
if df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].size > 0:
credentials_id = df[(df['name'] == name) & (df['credentialType'] == cred_type)]['credentialId'].iloc[0]
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
return credentials_id
# create credentials set
def dr_create_catalog_credentials(name, cred_type, user, password, token=None):
if cred_type not in ['basic', 's3']:
print('credentials type must be: basic, s3 - value passed was {ct}'.format(ct=cred_type))
return None
if cred_type == 'basic':
json = {
"credentialType": cred_type,
"user": user,
"password": password,
"name": name
}
elif cred_type == 's3' and token != None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"awsSessionToken": token,
"name": name
}
elif cred_type == 's3' and token == None:
json = {
"credentialType": cred_type,
"awsAccessKeyId": user,
"awsSecretAccessKey": password,
"name": name
}
response = requests.post(
url = DR_APP_HOST + '/api/v2/credentials/',
headers=DR_MODELING_HEADERS,
json=json
)
if response.status_code == 201:
return response.json()['credentialId']
else:
print('Request failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
# get or create a credential set
def dr_get_or_create_catalog_credentials(name, cred_type, user, password, token=None):
cred_id = dr_get_catalog_credentials(name, cred_type)
if cred_id == None:
return dr_create_catalog_credentials(name, cred_type, user, password, token=token)
else:
return cred_id
credentials_id = dr_get_or_create_catalog_credentials('s3_community',
    's3', my_creds.AWS_ACCESS_KEY_ID, my_creds.AWS_SECRET_ACCESS_KEY)  # assumed attribute names in my_creds; an S3 credential set needs AWS keys, not the Snowflake login
```
### Extract Data to S3 via Snowflake
```
# create a connection
ctx = snowflake.connector.connect(
user=SNOW_USER,
password=SNOW_PASS,
account=SNOW_ACCOUNT,
database=SNOW_DB,
schema=SNOW_SCHEMA,
protocol='https'
)
# create a cursor
cur = ctx.cursor()
# execute sql to get start/end timestamps to use
sql = "select last_ts_scored_through, current_timestamp::TIMESTAMP_NTZ cur_ts " \
"from etl_history " \
"where job_nm = '{job}' " \
"order by last_ts_scored_through desc " \
"limit 1 ".format(job=JOB_NAME)
cur.execute(sql)
# fetch results into dataframe
df = cur.fetch_pandas_all()
start_ts = df['LAST_TS_SCORED_THROUGH'][0]
end_ts = df['CUR_TS'][0]
# execute sql to dump data into a single file in S3 stage bucket
# AWS single file snowflake limit 5 GB
sql = "COPY INTO @S3_SUPPORT/titanic/community/" + JOB_NAME + ".csv " \
"from " \
"( " \
" select passengerid, pclass, name, sex, age, sibsp, parch, ticket, fare, cabin, embarked " \
" from passengers_500k_ts " \
" where nvl(updt_ts, crt_ts) >= '{start}' " \
" and nvl(updt_ts, crt_ts) < '{end}' " \
") " \
"file_format = (format_name='default_csv' compression='none') header=true overwrite=true single=true;".format(start=start_ts, end=end_ts)
cur.execute(sql)
```
### Create DataRobot Session and Run Batch Prediction API Job
```
session = requests.Session()
session.headers = {
'Authorization': 'Bearer {}'.format(API_KEY)
}
INPUT_FILE = 's3://'+ my_creds.S3_BUCKET + '/titanic/community/' + JOB_NAME + '.csv'
OUTPUT_FILE = 's3://'+ my_creds.S3_BUCKET + '/titanic/community/' + JOB_NAME + '_scored.csv'
job_details = {
'deploymentId': DEPLOYMENT_ID,
'passthroughColumns': ['PASSENGERID'],
'numConcurrent': 4,
"predictionInstance" : {
"hostName": DR_PREDICTION_HOST,
"datarobotKey": DATAROBOT_KEY
},
'intakeSettings': {
'type': 's3',
'url': INPUT_FILE,
'credentialId': credentials_id
},
'outputSettings': {
'type': 's3',
'url': OUTPUT_FILE,
'credentialId': credentials_id
}
}
response = session.post(
DR_APP_HOST + '/api/v2/batchPredictions',
json=job_details
)
```
### Monitor S3 Scoring Status and Return Control Upon Completion
```
if response.status_code == 202:
job = response.json()
print('queued batch job: {}'.format(job['links']['self']))
while job['status'] == 'INITIALIZING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed INITIALIZING')
if job['status'] == 'RUNNING':
while job['status'] == 'RUNNING':
time.sleep(3)
response = session.get(job['links']['self'])
response.raise_for_status()
job = response.json()
print('completed RUNNING')
print('status is now {status}'.format(status=job['status']))
if job['status'] != 'COMPLETED':
for i in job['logs']:
print(i)
else:
print('Job submission failed; http error {code}: {content}'.format(code=response.status_code, content=response.content))
```
### Truncate and Reload STG Staging Table with Results
```
# multi-statement executions
# https://docs.snowflake.com/en/user-guide/python-connector-api.html#execute_string
# truncate and load STG schema table with scored results
sql = "truncate titanic.stg.PASSENGERS_SCORED_BATCH_API; " \
" copy into titanic.stg.PASSENGERS_SCORED_BATCH_API from @S3_SUPPORT/titanic/community/" + JOB_NAME + "_scored.csv" \
" FILE_FORMAT = 'DEFAULT_CSV' ON_ERROR = 'ABORT_STATEMENT' PURGE = FALSE;"
ctx.execute_string(sql)
```
### Update Presentation Target Table With Results
```
# update target presentation table and ETL history table in transaction
sql = \
"begin; " \
"update titanic.public.passengers_500k_ts trg " \
"set trg.survival = src.survived_1_prediction " \
"from titanic.stg.PASSENGERS_SCORED_BATCH_API src " \
"where src.passengerid = trg.passengerid; " \
"insert into etl_history values ('{job}', '{run_through_ts}'); " \
"commit; ".format(job=JOB_NAME, run_through_ts=end_ts)
ctx.execute_string(sql)
```
# Inference and Validation
Now that you have a trained network, you can use it for making predictions. This is typically called **inference**, a term borrowed from statistics. However, neural networks have a tendency to perform *too well* on the training data and aren't able to generalize to data that hasn't been seen before. This is called **overfitting** and it impairs inference performance. To test for overfitting while training, we measure the performance on data not in the training set called the **validation** set. We avoid overfitting through regularization such as dropout while monitoring the validation performance during training. In this notebook, I'll show you how to do this in PyTorch.
As usual, let's start by loading the dataset through torchvision. You'll learn more about torchvision and loading data in a later part. This time we'll be taking advantage of the test set which you can get by setting `train=False` here:
```python
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
```
The test set contains images just like the training set. Typically you'll see 10-20% of the original dataset held out for testing and validation with the rest being used for training.
```
import torch
from torchvision import datasets, transforms
# Define a transform to normalize the data
transform = transforms.Compose([transforms.ToTensor(),
transforms.Normalize((0.5,), (0.5,))])
# Download and load the training data
trainset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)
# Download and load the test data
testset = datasets.FashionMNIST('~/.pytorch/F_MNIST_data/', download=True, train=False, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=64, shuffle=True)
```
Here I'll create a model like normal, using the same one from my solution for part 4.
```
from torch import nn, optim
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
The goal of validation is to measure the model's performance on data that isn't part of the training set. Performance here is up to the developer to define though. Typically this is just accuracy, the percentage of classes the network predicted correctly. Other options are [precision and recall](https://en.wikipedia.org/wiki/Precision_and_recall#Definition_(classification_context)) and top-5 error rate. We'll focus on accuracy here. First I'll do a forward pass with one batch from the test set.
```
model = Classifier()
images, labels = next(iter(testloader))
# Get the class probabilities
ps = torch.exp(model(images))
# Make sure the shape is appropriate, we should get 10 class probabilities for 64 examples
print(ps.shape)
```
With the probabilities, we can get the most likely class using the `ps.topk` method. This returns the $k$ highest values. Since we just want the most likely class, we can use `ps.topk(1)`. This returns a tuple of the top-$k$ values and the top-$k$ indices. If the highest value is the fifth element, we'll get back 4 as the index.
```
top_p, top_class = ps.topk(1, dim=1)
# Look at the most likely classes for the first 10 examples
print(top_class[:10,:])
```
Now we can check if the predicted classes match the labels. This is simple to do by equating `top_class` and `labels`, but we have to be careful of the shapes. Here `top_class` is a 2D tensor with shape `(64, 1)` while `labels` is 1D with shape `(64)`. To get the equality to work out the way we want, `top_class` and `labels` must have the same shape.
If we do
```python
equals = top_class == labels
```
`equals` will have shape `(64, 64)`, try it yourself. What it's doing is comparing the one element in each row of `top_class` with each element in `labels` which returns 64 True/False boolean values for each row.
```
equals = top_class == labels.view(top_class.shape)
equals
```
Now we need to calculate the percentage of correct predictions. `equals` has binary values, either 0 or 1. This means that if we just sum up all the values and divide by the number of values, we get the percentage of correct predictions. This is the same operation as taking the mean, so we can get the accuracy with a call to `torch.mean`. If only it was that simple. If you try `torch.mean(equals)`, you'll get an error
```
RuntimeError: mean is not implemented for type torch.ByteTensor
```
This happens because `equals` has type `torch.ByteTensor` but `torch.mean` isn't implemented for tensors with that type. So we'll need to convert `equals` to a float tensor. Note that when we take `torch.mean` it returns a scalar tensor, to get the actual value as a float we'll need to do `accuracy.item()`.
```
accuracy = torch.mean(equals.type(torch.FloatTensor))
print(f'Accuracy: {accuracy.item()*100}%')
```
The network is untrained so it's making random guesses and we should see an accuracy around 10%. Now let's train our network and include our validation pass so we can measure how well the network is performing on the test set. Since we're not updating our parameters in the validation pass, we can speed up our code by turning off gradients using `torch.no_grad()`:
```python
# turn off gradients
with torch.no_grad():
# validation pass here
for images, labels in testloader:
...
```
>**Exercise:** Implement the validation loop below and print out the total accuracy after the loop. You can largely copy and paste the code from above, but I suggest typing it in because writing it out yourself is essential for building the skill. In general you'll always learn more by typing it rather than copy-pasting. You should be able to get an accuracy above 80%.
```
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in range(epochs):
running_loss = 0
for images, labels in trainloader:
images = images.view(images.shape[0], -1)
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
## TODO: Implement the validation pass and print out the validation accuracy
with torch.no_grad():
for images, labels in testloader:
images = images.view(images.shape[0], -1)
log_ps_test = model(images)
test_loss += criterion(log_ps_test, labels)
output = torch.exp(log_ps_test)
top_p, top_class = output.topk(1, dim=1)
equals = top_class == labels.view(top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
```
## Overfitting
If we look at the training and validation losses as we train the network, we can see a phenomenon known as overfitting.
<img src='assets/overfitting.png' width=450px>
The network learns the training set better and better, resulting in lower training losses. However, it starts having problems generalizing to data outside the training set leading to the validation loss increasing. The ultimate goal of any deep learning model is to make predictions on new data, so we should strive to get the lowest validation loss possible. One option is to use the version of the model with the lowest validation loss, here the one around 8-10 training epochs. This strategy is called *early-stopping*. In practice, you'd save the model frequently as you're training then later choose the model with the lowest validation loss.
The most common method to reduce overfitting (outside of early-stopping) is *dropout*, where we randomly drop input units. This forces the network to share information between weights, increasing its ability to generalize to new data. Adding dropout in PyTorch is straightforward using the [`nn.Dropout`](https://pytorch.org/docs/stable/nn.html#torch.nn.Dropout) module.
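Since early-stopping is only described in words above, here is a minimal sketch of the idea, with made-up validation losses standing in for the values you would compute in the training loop:
```python
import torch
from torch import nn

model = nn.Linear(784, 10)  # toy model, stands in for the classifier
fake_validation_losses = [0.52, 0.45, 0.41, 0.40, 0.42, 0.44, 0.47]
best_val_loss, patience, bad_epochs = float('inf'), 2, 0
for epoch, val_loss in enumerate(fake_validation_losses):
    if val_loss < best_val_loss:
        best_val_loss, bad_epochs = val_loss, 0
        torch.save(model.state_dict(), 'best_model.pth')  # keep the best checkpoint so far
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            print(f"early stop at epoch {epoch}, best validation loss {best_val_loss:.2f}")
            break
model.load_state_dict(torch.load('best_model.pth'))  # roll back to the best version
```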
```python
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784, 256)
self.fc2 = nn.Linear(256, 128)
self.fc3 = nn.Linear(128, 64)
self.fc4 = nn.Linear(64, 10)
# Dropout module with 0.2 drop probability
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
# make sure input tensor is flattened
x = x.view(x.shape[0], -1)
# Now with dropout
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
# output so no dropout here
x = F.log_softmax(self.fc4(x), dim=1)
return x
```
During training we want to use dropout to prevent overfitting, but during inference we want to use the entire network. So, we need to turn off dropout during validation, testing, and whenever we're using the network to make predictions. To do this, you use `model.eval()`. This sets the model to evaluation mode where the dropout probability is 0. You can turn dropout back on by setting the model to train mode with `model.train()`. In general, the pattern for the validation loop will look like this, where you turn off gradients, set the model to evaluation mode, calculate the validation loss and metric, then set the model back to train mode.
```python
# turn off gradients
with torch.no_grad():
# set model to evaluation mode
model.eval()
# validation pass here
for images, labels in testloader:
...
# set model back to train mode
model.train()
```
> **Exercise:** Add dropout to your model and train it on Fashion-MNIST again. See if you can get a lower validation loss or higher accuracy.
```
## TODO: Define your model with dropout added
from torch import nn
import torch.nn.functional as F
class Classifier(nn.Module):
def __init__(self):
super().__init__()
self.fc1 = nn.Linear(784,256)
self.fc2 = nn.Linear(256,128)
self.fc3 = nn.Linear(128,64)
self.fc4 = nn.Linear(64,10)
self.dropout = nn.Dropout(p=0.2)
def forward(self, x):
x = x.view(x.shape[0], -1)
x = self.dropout(F.relu(self.fc1(x)))
x = self.dropout(F.relu(self.fc2(x)))
x = self.dropout(F.relu(self.fc3(x)))
x = F.log_softmax(self.fc4(x),dim = 1)
return x
## TODO: Train your model with dropout, and monitor the training progress with the validation loss and accuracy
from torch import optim
from tqdm import tqdm
model = Classifier()
criterion = nn.NLLLoss()
optimizer = optim.Adam(model.parameters(), lr = 0.005)
epochs = 30
steps = 0
train_losses, test_losses = [], []
for e in tqdm(range(epochs)):
running_loss = 0
for images, labels in trainloader:
optimizer.zero_grad()
log_ps = model(images)
loss = criterion(log_ps, labels)
loss.backward()
optimizer.step()
running_loss += loss.item()
else:
test_loss = 0
accuracy = 0
with torch.no_grad():
model.eval()
for images, labels in testloader:
log_ps_test = model(images)
test_loss += criterion(log_ps_test, labels)
output = torch.exp(log_ps_test)
top_p, top_class = output.topk(1, dim = 1)
equals = top_class == labels.view(top_class.shape)
accuracy += torch.mean(equals.type(torch.FloatTensor))
model.train()
train_losses.append(running_loss/len(trainloader))
test_losses.append(test_loss/len(testloader))
print("Epoch: {}/{}.. ".format(e+1, epochs),
"Training Loss: {:.3f}.. ".format(running_loss/len(trainloader)),
"Test Loss: {:.3f}.. ".format(test_loss/len(testloader)),
"Test Accuracy: {:.3f}".format(accuracy/len(testloader)))
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
plt.plot(train_losses, label='Training loss')
plt.plot(test_losses, label='Validation loss')
plt.legend(frameon=False)
```
## Inference
Now that the model is trained, we can use it for inference. We've done this before, but now we need to remember to set the model in inference mode with `model.eval()`. You'll also want to turn off autograd with the `torch.no_grad()` context.
```
# Import helper module (should be in the repo)
import helper
# Test out your network!
model.eval()
dataiter = iter(testloader)
images, labels = dataiter.next()
img = images[0]
# Convert 2D image to 1D vector
img = img.view(1, 784)
# Calculate the class probabilities (softmax) for img
with torch.no_grad():
output = model.forward(img)
ps = torch.exp(output)
# Plot the image and probabilities
helper.view_classify(img.view(1, 28, 28), ps, version='Fashion')
import numpy as np
A = np.random.randn(100,100)
A.reshape(100*100, -1)
A.shape
```
## Next Up!
In the next part, I'll show you how to save your trained models. In general, you won't want to train a model every time you need it. Instead, you'll train once, save it, then load the model when you want to train more or use it for inference.
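As a preview (a minimal sketch, assuming the `Classifier` defined above and a hypothetical file name `checkpoint.pth`), saving and loading usually goes through the model's `state_dict`:
```python
# Save only the learned parameters (the state dict), not the whole model object
torch.save(model.state_dict(), 'checkpoint.pth')

# Later: rebuild the same architecture and load the saved parameters
model = Classifier()
state_dict = torch.load('checkpoint.pth')
model.load_state_dict(state_dict)
model.eval()  # switch to evaluation mode before using it for inference
```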
|
github_jupyter
|
**Import library**
```
import pandas as pd
import numpy as np
import calendar
from datetime import datetime
import time
# Standard plotly imports
import plotly.express as px
import plotly.graph_objects as go
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("paper", font_scale=1.3)
sns.set_style('white')
# stats
from statsmodels.tsa.statespace.sarimax import SARIMAX
from random import random
from statsmodels.tsa.stattools import adfuller
#Prophet
from fbprophet import Prophet
# SKLEARN
from sklearn.metrics import mean_squared_error
```
**Import data**
```
# Read in the raw temperature dataset
raw_global = pd.read_csv('GLB.Ts+dSST.csv', skiprows=1)
raw_global = raw_global.iloc[:,:13]
raw_global.head()
raw_global.tail()
```
**Data Preprocessing**
```
def clean_value(raw_value):
try:
return float(raw_value)
except:
return np.NaN
def preprocess_data(raw):
data_horizon = pd.date_range(start='1/1/1880', end='12/31/2019', freq='M')
data = pd.DataFrame(data_horizon, columns=['Date'])
#extract temperature data
temp_list = []
for idx in range(raw.shape[0]):
temp_list.extend(raw.iloc[idx,1:])
data['Temp'] = temp_list
#clean value
data['Temp'] = data['Temp'].apply(lambda x: clean_value(x))
data.fillna(method='ffill', inplace=True)
return data
global_t = preprocess_data(raw_global)
global_t.head()
global_t.tail()
```
**Data Visualization**
```
fig = px.line(global_t, x="Date", y="Temp", title='Global-mean monthly Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies')
fig.show()
fig = px.line(global_t.resample('A', on='Date').mean().reset_index(), x="Date", y="Temp", title='Global-mean yearly Combined Land-Surface Air and Sea-Surface Water Temperature Anomalies')
fig.show()
```
Test stationarity
```
def test_stationarity(timeseries):
rolmean = timeseries.rolling(window=30).mean()
rolstd = timeseries.rolling(window=30).std()
plt.figure(figsize=(14,5))
sns.despine(left=True)
orig = plt.plot(timeseries, color='blue',label='Original')
mean = plt.plot(rolmean, color='red', label='Rolling Mean')
std = plt.plot(rolstd, color='black', label = 'Rolling Std')
plt.legend(loc='best'); plt.title('Rolling Mean & Standard Deviation')
plt.show()
print ('<Results of Dickey-Fuller Test>')
dftest = adfuller(timeseries, autolag='AIC')
dfoutput = pd.Series(dftest[0:4],
index=['Test Statistic','p-value','#Lags Used','Number of Observations Used'])
for key,value in dftest[4].items():
dfoutput['Critical Value (%s)'%key] = value
print(dfoutput)
test_stationarity(global_t.Temp.dropna())
```
Since the p-value > 0.05, we fail to reject the null hypothesis (H0): the data has a unit root and is non-stationary.
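A common next step (a minimal sketch, not part of the original analysis) is to difference the series once and re-run the test; the I(d) term in the SARIMA model below performs this kind of differencing internally.
```python
# First-difference the temperature series and check stationarity again;
# if the differenced series passes the ADF test, d = 1 is a reasonable choice.
diff_temp = global_t.Temp.diff().dropna()
test_stationarity(diff_temp)
```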
**Time Series Prediction - SARIMA**
The Seasonal Autoregressive Integrated Moving Average (SARIMA) method models the next step in the sequence as a linear function of the differenced observations, errors, differenced seasonal observations, and seasonal errors at prior time steps.
It combines the ARIMA model with the ability to perform the same autoregression, differencing, and moving average modeling at the seasonal level.
The notation for the model involves specifying the order for the AR(p), I(d), and MA(q) models as parameters to an ARIMA function and AR(P), I(D), MA(Q) and m parameters at the seasonal level, e.g. SARIMA(p, d, q)(P, D, Q)m where “m” is the number of time steps in each season (the seasonal period). A SARIMA model can be used to develop AR, MA, ARMA and ARIMA models.
The method is suitable for univariate time series with trend and/or seasonal components.
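To make the notation concrete, here is a minimal sketch (an assumed toy configuration, not the model fitted below) of how SARIMA(p, d, q)(P, D, Q)m maps onto the `order` and `seasonal_order` arguments of statsmodels' `SARIMAX`:
```python
from statsmodels.tsa.statespace.sarimax import SARIMAX

# SARIMA(1, 1, 1)(1, 1, 1)12: non-seasonal AR/I/MA orders go into `order`,
# seasonal orders plus the seasonal period m = 12 (monthly data) into `seasonal_order`.
toy_model = SARIMAX(global_t['Temp'], order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
# toy_fit = toy_model.fit(disp=False)  # fitting with m = 12 on 1,680 points can be slow
```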
```
def plot(y_true,y_pred):
# Plot
fig = go.Figure()
x = global_t['Date'][global_t.shape[0]-len(y_true):]
fig.add_trace(go.Scatter(x=x, y=y_true, mode='lines', name='actual'))
fig.add_trace(go.Scatter(x=x, y=y_pred, mode='lines', name='predicted'))
# Edit the layout
fig.update_layout(title='Global-mean Temperature: Predicted vs. Actual',
xaxis_title='Month',
yaxis_title='Temperature')
fig.show()
def SARIMA_prediction(temp_data):
y_true = []
y_pred = []
temperature = temp_data['Temp'].tolist()
train = temperature[:-336]
test = temperature[len(train):]
#predict the latest 336 values (20% of data)
for idx in range(len(test)):
true_val = test[idx]
if len(y_pred)>0:
record = train+y_pred
else:
record = train
# fit model
model = SARIMAX(record, order=(1, 1, 1), seasonal_order=(1, 1, 1, 1))
model_fit = model.fit(disp=False,low_memory=True)
# make predictions
yhat = model_fit.predict(len(record), len(record))
# save value
y_true.append(true_val)
y_pred.extend(yhat)
print(mean_squared_error(y_true, y_pred))
plot(y_true,y_pred)
start_time = time.time()
SARIMA_prediction(global_t)
print("--- %s seconds ---" % (time.time() - start_time))
```
**Time Series Prediction - Prophet**
Prophet is a procedure for forecasting time series data based on an additive model where non-linear trends are fit with yearly, weekly, and daily seasonality, plus holiday effects. It works best with time series that have strong seasonal effects and several seasons of historical data. Prophet is robust to missing data and shifts in the trend, and typically handles outliers well.
```
def prophet_prediction(temp_data):
# hold out the last 336 monthly values (28 years) as the test horizon
df = temp_data.iloc[:-336]
df = df.rename(columns={'Date':'ds', 'Temp':'y'})
#load prophet model
model = Prophet(weekly_seasonality=True)
model.fit(df)
#prediction
future = model.make_future_dataframe(periods=336, freq = 'm')
forecast = model.predict(future)
model.plot(forecast)
return forecast
start_time = time.time()
prophet_forecast = prophet_prediction(global_t)
print("--- %s seconds ---" % (time.time() - start_time))
prophet_forecast_last = prophet_forecast.iloc[prophet_forecast.shape[0]-336:]
global_t_last = global_t.iloc[global_t.shape[0]-336:]
mean_squared_error(global_t_last.Temp, prophet_forecast_last.yhat)
```
**Time series prediction - LSTM**
```
from keras.models import Sequential
from keras.layers.recurrent import LSTM
from keras.layers.core import Dense, Activation, Dropout
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error
from sklearn.utils import shuffle
from keras.callbacks import EarlyStopping
earlyStop=EarlyStopping(monitor="val_loss",verbose=2,mode='min',patience=5)
```
**Data preparation**
```
temp_raw = np.array(global_t.Temp.astype("float32")).reshape(-1,1)
# Apply the MinMax scaler from sklearn to normalize data in the (0, 1) interval.
scaler = MinMaxScaler(feature_range = (0, 1))
temp_LSTM = scaler.fit_transform(temp_raw)
# Train/val/test split - using 60% of data for training, 20% for validation, 20% for testing.
ratio = 0.6
train_size = int(len(temp_LSTM) * ratio)
val_size = int(len(temp_LSTM) * 0.2)
test_size = len(temp_LSTM) - train_size - val_size
train, val, test = temp_LSTM[0:train_size, :], temp_LSTM[train_size:train_size+val_size, :], temp_LSTM[train_size+val_size:len(temp_LSTM), :]
print("Number of entries (training set, val set, test set): " + str((len(train), len(val), len(test))))
def create_dataset(dataset):
window_size = 1
data_X, data_Y = [], []
for i in range(len(dataset) - window_size - 1):
a = dataset[i:(i + window_size), 0]
data_X.append(a)
data_Y.append(dataset[i + window_size, 0])
return(np.array(data_X), np.array(data_Y))
# Create test and training sets for one-step-ahead regression.
train_X, train_Y = create_dataset(train)
val_X, val_Y = create_dataset(val)
test_X, test_Y = create_dataset(test)
# Reshape the input data into appropriate form for Keras.
train_X = np.reshape(train_X, (train_X.shape[0], 1,train_X.shape[1]))
val_X = np.reshape(val_X, (val_X.shape[0], 1,val_X.shape[1]))
test_X = np.reshape(test_X, (test_X.shape[0], 1,test_X.shape[1]))
print("Training data for Keras shape:")
print(train_X.shape)
```
**LSTM Model**
The LSTM architecture here consists of:
- One input layer.
- One LSTM layer of 4 blocks.
- One Dense layer to produce a single output.
- Use MSE as loss function.
```
def LSTM_modelone(train_X, train_Y, window_size):
model = Sequential()
model.add(LSTM(4,
input_shape = (1, window_size)))
model.add(Dense(1))
model.compile(loss = "mean_squared_error",
optimizer = "adam")
model.fit(train_X,
train_Y,
epochs = 100,
batch_size = 10,
verbose = 2,
validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
LSTM_model1 = LSTM_modelone(train_X, train_Y, window_size=1)
print("--- %s seconds ---" % (time.time() - start_time))
def predict_and_score(model, X, Y):
# Make predictions on the original scale of the data.
pred = scaler.inverse_transform(model.predict(X))
# Prepare Y data to also be on the original scale for interpretability.
orig_data = scaler.inverse_transform([Y])
# Calculate MSE.
score = mean_squared_error(orig_data[0], pred[:, 0])
return score
print("Test data score: %.3f MSE" % predict_and_score(LSTM_model1,test_X, test_Y))
```
The second model architecture is slightly more complex. Its elements are:
- Define the LSTM with 50 units in the first hidden layer and 1 neuron in the output layer
- Dropout 20%.
- Use the MSE loss function and the efficient Adam version of stochastic gradient descent.
- The model will be fit for 50 training epochs with a batch size of 5.
```
def LSTM_modeltwo(train_X, train_Y):
model = Sequential()
model.add(LSTM(50, input_shape=(train_X.shape[1], train_X.shape[2])))
model.add(Dropout(0.2))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
print(model.summary())
model.fit(train_X, train_Y, epochs=50, batch_size=5, verbose=2, shuffle=False, validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
LSTM_model2 = LSTM_modeltwo(train_X, train_Y)
print("--- %s seconds ---" % (time.time() - start_time))
print("Test data score: %.3f MSE" % predict_and_score(LSTM_model2,test_X, test_Y))
def predict_and_plot(model, X, Y):
# Make predictions on the original scale of the data.
pred = scaler.inverse_transform(model.predict(X))
# Prepare Y data to also be on the original scale for interpretability.
orig_data = scaler.inverse_transform([Y])
# Plot
fig = go.Figure()
x = global_t['Date'][global_t.shape[0]-len(orig_data[0]):]
fig.add_trace(go.Scatter(x=x, y=orig_data[0], mode='lines', name='actual'))
fig.add_trace(go.Scatter(x=x, y=pred[:, 0], mode='lines', name='predicted'))
# Edit the layout
fig.update_layout(title='Global Temperature: Predicted vs. Actual',
xaxis_title='Month',
yaxis_title='Temperature')
fig.show()
predict_and_plot(LSTM_model2,test_X, test_Y)
```
**MLP Model**
```
def MLP_model(train_X, train_Y):
model = Sequential()
model.add(Dense(100, input_shape=(1,)))
model.add(Activation('relu'))
model.add(Dropout(0.25))
model.add(Dense(50))
model.add(Activation('relu'))
model.add(Dense(1))
model.add(Activation('linear'))
model.compile(optimizer='adam', loss='mse')
print(model.summary())
model.fit(train_X, train_Y, epochs=50, batch_size=10, verbose=2, shuffle=False, validation_data=(val_X,val_Y),callbacks=[earlyStop])
return model
start_time = time.time()
MLP_model_result = MLP_model(train_X, train_Y)
print("--- %s seconds ---" % (time.time() - start_time))
print("Test data score: %.3f MSE" % predict_and_score(MLP_model_result,test_X, test_Y))
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/ewotawa/secure_private_ai/blob/master/Section_2_Federated_Learning_Final_Project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Federated Learning Final Project
## Overview
* See <a href="https://classroom.udacity.com/nanodegrees/nd185/parts/3fe1bb10-68d7-4d84-9c99-9539dedffad5/modules/28d685f0-0cb1-4f94-a8ea-2e16614ab421/lessons/c8fe481d-81ea-41be-8206-06d2deeb8575/concepts/a5fb4b4c-e38a-48de-b2a7-4e853c62acbe">video</a> for additional details.
* Do Federated Learning where the central server is not trusted with the raw gradients.
* In the final project notebook, you'll receive a dataset.
* Train on the dataset using Federated Learning.
* The gradients should not come up to the server in raw form.
* Instead, use the new .move() command to move all of the gradients to one of the workers, sum them up there, and then bring only that aggregated batch up to the central server (see the sketch after this list).
* Idea: the central server never actually sees the raw gradient for any person.
* We'll look at secure aggregation in course 3.
* For now, do a larger-scale Federated Learning case where you handle the gradients in a special way.
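A rough sketch of the aggregation idea (assuming the older PySyft pointer-tensor API with `.send()`, `.move()`, and `.get()`; this is illustrative only, not the training loop used below):
```python
import torch
import syft as sy

hook = sy.TorchHook(torch)
alice = sy.VirtualWorker(hook, id="alice")
bob = sy.VirtualWorker(hook, id="bob")
secure_worker = sy.VirtualWorker(hook, id="secure_worker")

# Stand-ins for per-worker gradients (real gradients would come from local training)
grad_alice = torch.tensor([0.1, 0.2]).send(alice)
grad_bob = torch.tensor([0.3, 0.4]).send(bob)

# Move both gradient tensors to the aggregating worker, sum them there,
# then fetch only the aggregated result back to the central server.
grad_alice = grad_alice.move(secure_worker)
grad_bob = grad_bob.move(secure_worker)
aggregated = (grad_alice + grad_bob).get()
```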
## Approach
* Use the method illustrated in the "DEEP LEARNING" article referenced below. Update the code such that the MNIST model trains locally. Updated for my personal code style preferences.
* Per conversation in the SPAIC Slack channel, use of a federated data loader approach trains the model and keeps the disaggregated gradients off of the local machine. The aggregate model returns when model.get() is called.
* Contacted the team at OpenMined. They confirmed that PySyft currently does not work with GPUs, although updates are in progress. (7/18/2019).
## References
* <a href = "https://blog.openmined.org/upgrade-to-federated-learning-in-10-lines/">DEEP LEARNING -> FEDERATED LEARNING IN 10 LINES OF PYTORCH + PYSYFT</a>
* <a href ="https://github.com/udacity/private-ai/pull/10">added data for Federated Learning project</a>
* <a href="https://github.com/OpenMined/PySyft/blob/master/examples/tutorials/Part%206%20-%20Federated%20Learning%20on%20MNIST%20using%20a%20CNN.ipynb">Part 6 - Federated Learning on MNIST using a CNN.ipynb</a>
* <a href="https://docs.google.com/spreadsheets/d/1x-QQK-3Wn86bvSbNTf2_p2FXVCqiic2QwjcArQEuQlg/edit#gid=0">Slack Channel's reference sheet </a>
* <a href="https://github.com/ucalyptus/Federated-Learning/blob/master/Federated%20Learning.ipynb">Federated Learning Example from Slack Channel reference sheet</a>
### Install libraries and dependencies
```
!pip install syft
import syft as sy
!pip install torch
!pip install torchvision
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import torchvision
from torchvision import datasets, transforms
import numpy as np
hook = sy.TorchHook(torch) # <-- NEW: hook PyTorch ie add extra functionalities to support Federated Learning
vw00 = sy.VirtualWorker(hook, id="vw00")
vw01 = sy.VirtualWorker(hook, id="vw01")
aggr = sy.VirtualWorker(hook, id="aggr")
class Arguments():
def __init__(self):
self.batch_size = 64
self.test_batch_size = 1000
self.epochs = 10
self.lr = 0.01
self.momentum = 0.5
self.no_cuda = False
self.seed = 1
self.log_interval = 10
self.save_model = False
args = Arguments()
use_cuda = not args.no_cuda and torch.cuda.is_available()
torch.manual_seed(args.seed)
device = torch.device("cuda" if use_cuda else "cpu")
kwargs = {'num_workers': 1, 'pin_memory': True} if use_cuda else {}
# Note: removed **kwargs from end of federated_train_loader and test_loader definitions.
transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize((0.1307,), (0.3081,))
])
federated_train_loader = sy.FederatedDataLoader(datasets.MNIST('../data', train=True, download=True, transform=transform).federate((vw00, vw01)),
batch_size=args.batch_size, shuffle=True)
test_loader = torch.utils.data.DataLoader(datasets.MNIST('../data', train=False, transform=transform),
batch_size=args.test_batch_size, shuffle=True)
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 20, 5, 1)
self.conv2 = nn.Conv2d(20, 50, 5, 1)
self.fc1 = nn.Linear(4*4*50, 500)
self.fc2 = nn.Linear(500, 10)
def forward(self, x):
x = F.relu(self.conv1(x))
x = F.max_pool2d(x, 2, 2)
x = F.relu(self.conv2(x))
x = F.max_pool2d(x, 2, 2)
x = x.view(-1, 4*4*50)
x = F.relu(self.fc1(x))
x = self.fc2(x)
return F.log_softmax(x, dim=1)
def train(args, model, device, train_loader, optimizer, epoch):
model.train()
for batch_idx, (data, target) in enumerate(federated_train_loader): # <-- now it is a distributed dataset
model.send(data.location) # <-- NEW: send the model to the right location
# data, target = data.to(device), target.to(device)
optimizer.zero_grad()
output = model(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
model.get() # <-- NEW: get the model back
if batch_idx % args.log_interval == 0:
loss = loss.get() # <-- NEW: get the loss back
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * args.batch_size, len(train_loader) * args.batch_size, #batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
def test(args, model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
# data, target = data.to(device), target.to(device)
output = model(data)
test_loss += F.nll_loss(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
# model = Net().to(device)
model = Net()
optimizer = optim.SGD(model.parameters(), lr=args.lr) # TODO momentum is not supported at the moment
for epoch in range(1, args.epochs + 1):
train(args, model, device, federated_train_loader, optimizer, epoch)
test(args, model, device, test_loader)
if (args.save_model):
torch.save(model.state_dict(), "mnist_cnn.pt")
```
|
github_jupyter
|
```
% load_ext autoreload
% autoreload 2
% matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'
import os, sys
opj = os.path.join
from tqdm import tqdm
from ex_mnist import p
from dset import get_dataloader
sys.path.append('../../src/models')
from models import CNN, FFN
# load data
train_loader, test_loader = get_dataloader(p.data_path,
batch_size=p.batch_size)
# import models
cnn = CNN().to(device)
ffn = FFN().to(device)
```
# train cnn
```
optimizer = torch.optim.Adam(cnn.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
num_epochs = 50
train_losses = []
for epoch in range(num_epochs):
epoch_loss = 0.
for batch_idx, (data, y) in enumerate(train_loader):
data = data.to(device)
y = y.to(device)
# zero grad
optimizer.zero_grad()
output = cnn(data)
loss = criterion(output, y)
# backward
loss.backward()
# update step
optimizer.step()
iter_loss = loss.item()
epoch_loss += iter_loss
print('\rTrain Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), iter_loss), end='')
mean_epoch_loss = epoch_loss / (batch_idx + 1)
train_losses.append(mean_epoch_loss)
# save model
torch.save(cnn.state_dict(), opj(p.model_path, 'CNN.pth'))
plt.plot(train_losses)
```
# train ffn
```
optimizer = torch.optim.Adam(ffn.parameters(), lr=0.001)
criterion = torch.nn.CrossEntropyLoss()
num_epochs = 50
train_losses = []
for epoch in range(num_epochs):
epoch_loss = 0.
for batch_idx, (data, y) in enumerate(train_loader):
data = data.to(device)
y = y.to(device)
# zero grad
optimizer.zero_grad()
output = ffn(data)
loss = criterion(output, y)
# backward
loss.backward()
# update step
optimizer.step()
iter_loss = loss.item()
epoch_loss += iter_loss
print('\rTrain Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), iter_loss), end='')
mean_epoch_loss = epoch_loss / (batch_idx + 1)
train_losses.append(mean_epoch_loss)
# save model
torch.save(ffn.state_dict(), opj(p.model_path, 'FFN.pth'))
plt.plot(train_losses)
```
# model prediction
```
# check prediction
m = len(test_loader.dataset)
batch_size = test_loader.batch_size
y_pred_cnn = np.zeros(m)
y_pred_ffn = np.zeros(m)
y_true = np.zeros(m)
with torch.no_grad():
for batch_idx, (data, y) in tqdm(enumerate(test_loader, 0), total=int(np.ceil(m / batch_size))):
data = data.to(device)
# cnn prediction
outputs_cnn = cnn(data)
_, y_pred = torch.max(outputs_cnn.data, 1)
y_pred_cnn[batch_idx * batch_size:(batch_idx + 1) * batch_size] = y_pred.cpu().numpy()
# ffn prediction
outputs_ffn = ffn(data)
_, y_pred = torch.max(outputs_ffn.data, 1)
y_pred_ffn[batch_idx * batch_size:(batch_idx + 1) * batch_size] = y_pred.cpu().numpy()
# labels
y_true[batch_idx * batch_size:(batch_idx + 1) * batch_size] = y.numpy()
print("CNN accuracy {:.5f}% FFN accuracy {:.5f}%".format((y_true == y_pred_cnn).sum() / m * 100,
(y_true == y_pred_ffn).sum() / m * 100))
```
|
github_jupyter
|
# Analyzing Twitter conversations on a topic as a network
- Twitter data is particularly well suited to building networks.
- We can freely define what it means for one user to be connected to another. The most common definitions are:
1. User A retweets user B (RT plotti what a great tweet)
2. User A mentions user B (I'm walking down the street and see @plotti)
3. User A writes to user B (@plotti what's up today)
4. (User A follows user B (unfortunately not that helpful for the structure of a conversation; you also need to collect quite a lot of users via Twarc to obtain this information, but it can be done.))
# Collecting data with Twarc
- https://github.com/DocNow/twarc
- Twarc: A command line tool (and Python library) for archiving Twitter JSON
- Very handy for collecting tweets on a given keyword.
- You have to apply for a Twitter app :(
- ```pip install twarc```
- ```twarc configure```

## Collecting the data
```twarc search zürich > zürich.json```
```
import sys
import json
import re
import numpy as np
from datetime import datetime
import pandas as pd
import networkx as nx
tweetfile = 'zürich.json'
```
# 1. Creating edges from retweets
- People retweet each other, so we create an edge between them.
```
# 1. Export edges from Retweets
fh = open(tweetfile, 'r')
userdata = pd.DataFrame(columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count' ))
edges = pd.DataFrame(columns=('Source','Target','Time', "Strength"))
for line in fh:
try:
tweet = json.loads(line)
except:
continue
if 'retweeted_status' not in tweet:
continue
userdata = userdata.append(pd.DataFrame([[tweet['user']['id_str'],
tweet['user']['screen_name'],
tweet['user']['created_at'],
tweet['user']['profile_image_url_https'],
tweet['user']['followers_count'],
tweet['user']['friends_count']]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
userdata = userdata.append(pd.DataFrame([[tweet['retweeted_status']['user']['id_str'],
tweet['retweeted_status']['user']['screen_name'],
tweet['retweeted_status']['user']['created_at'],
tweet['retweeted_status']['user']['profile_image_url_https'],
tweet['retweeted_status']['user']['followers_count'],
tweet['retweeted_status']['user']['friends_count']]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
edges = edges.append(pd.DataFrame([[tweet['user']['id_str'],
tweet['retweeted_status']['user']['id_str'],
str(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y')),1]]
, columns=('Source','Target',"Time",'Strength')), ignore_index=True)
userdata.head()
edges.head()
```
# 2. Creating edges from mentions
- People mention each other, so we create an edge between them.
```
fh = open(tweetfile, 'r')
userdata = pd.DataFrame(columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count' ))
edges = pd.DataFrame(columns=('Source','Target','Strength'))
for line in fh:
try:
tweet = json.loads(line)
except:
continue
if len(tweet['entities']['user_mentions']) == 0:
continue
for mention in tweet['entities']['user_mentions']:
userdata = userdata.append(pd.DataFrame([[tweet['user']['id_str'],
tweet['user']['screen_name'],
tweet['user']['created_at'],
tweet['user']['profile_image_url_https'],
tweet['user']['followers_count'],
tweet['user']['friends_count']]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
if len(userdata[userdata['Id'].str.contains(mention['id_str'])]) == 0:
userdata = userdata.append(pd.DataFrame([[mention['id_str'],
mention['screen_name'],
np.nan,
np.nan,
np.nan,
np.nan]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
edges = edges.append(pd.DataFrame([[tweet['user']['id_str'],
mention['id_str'],
str(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y'))]]
, columns=('Source','Target','Strength')), ignore_index=True)
```
# 3. Creating edges from direct conversation
- People discuss with each other, so we create an edge between them.
```
fh = open(tweetfile, 'r')
userdata = pd.DataFrame(columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count' ))
edges = pd.DataFrame(columns=('Source','Target','Strength'))
for line in fh:
try:
tweet = json.loads(line)
except:
continue
if tweet['in_reply_to_user_id_str'] is None:
continue
userdata = userdata.append(pd.DataFrame([[tweet['user']['id_str'],
tweet['user']['screen_name'],
tweet['user']['created_at'],
tweet['user']['profile_image_url_https'],
tweet['user']['followers_count'],
tweet['user']['friends_count']]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
if len(userdata[userdata['Id'].str.contains(tweet['in_reply_to_user_id_str'])]) == 0:
userdata = userdata.append(pd.DataFrame([[tweet['in_reply_to_user_id_str'],
tweet['in_reply_to_screen_name'],
np.nan,
np.nan,
np.nan,
np.nan]], columns=('Id','Label','user_created_at','profile_image','followers_count','friends_count')), ignore_index=True)
edges = edges.append(pd.DataFrame([[tweet['user']['id_str'],
tweet['in_reply_to_user_id_str'],
str(datetime.strptime(tweet['created_at'], '%a %b %d %H:%M:%S +0000 %Y'))]]
, columns=('Source','Target','Strength')), ignore_index=True)
```
# Keep only edges above a certain strength
```
strengthLevel = 3 # Network connection strength level: the number of times in total each of the tweeters responded to or mentioned the other.
# If you have 1 as the level, then all tweeters who mentioned or replied to another at least once will be displayed. But if you have 5, only those who have mentioned or responded to a particular tweeter at least 5 times will be displayed, which means that only the strongest bonds are shown.
edges2 = edges.groupby(['Source','Target'])['Strength'].count()
edges2 = edges2.reset_index()
edges2 = edges2[edges2['Strength'] >= strengthLevel]
len(edges2)
```
# Export the data as a Gephi network
```
def robust_decode(s):
'''Return an ASCII-only copy of the given string, dropping any
characters that cannot be encoded as ASCII (e.g. umlauts in user names).'''
return s.encode('ascii', 'ignore').decode('ascii')
userdata = userdata.sort_values(['Id','followers_count'], ascending=[True, False])
userdata = userdata.drop_duplicates(['Id'], keep='first')
ids = edges2['Source'].append(edges2['Target']).to_frame()
ids.columns = ['Id']
ids = ids.drop_duplicates()
nodes = pd.merge(ids, userdata, on='Id', how='left')
nodes = nodes.dropna()
nodes["Label"] = nodes["Label"].astype(str)
nodes["Id"] = nodes["Id"].astype(str)
G = nx.DiGraph(name="zürich")
for i, row in nodes.iterrows():
G.add_node(robust_decode(row["Id"]), label=robust_decode(row["Label"]))
for i, row in edges2.iterrows():
G.add_edge(robust_decode(row["Source"]),robust_decode(row["Target"]),weight=row["Strength"])
nx.write_gexf(G,"Zürich.gexf")
```
# Alternatively, save as CSV for Kumu.io
```
# Export nodes from the edges and add node attributes for both Sources and Targets.
userdata = userdata.sort_values(['Id','followers_count'], ascending=[True, False])
userdata = userdata.drop_duplicates(['Id'], keep='first')
ids = edges2['Source'].append(edges2['Target']).to_frame()
ids.columns = ['Id']
ids = ids.drop_duplicates()
nodes = pd.merge(ids, userdata, on='Id', how='left')
# change column names for Kumu import (Run this when using Kumu)
nodes.columns = ['Id', 'Label', 'Date', 'Image', 'followers_count', 'friends_count']
edges2.columns = ['From','To','Strength']
# Export nodes and edges to csv files
nodes.to_csv('nodes.csv', encoding='utf-8', index=False)
edges2.to_csv('edges.csv', encoding='utf-8', index=False)
```
|
github_jupyter
|
```
%matplotlib inline
import matplotlib.pyplot as plt # for plotting
import numpy as np # for matrix and vector computations
import pandas as pd
import seaborn as sns
```
### Debugging
* Python array indices start from zero
* Vector/matrix operations work only with numpy arrays. Inspect matrix operations to make sure that you are adding and multiplying matrices of compatible dimensions. Printing the dimensions of numpy arrays using the shape property will help you debug (see the short sketch after this list).
* If you want to do matrix multiplication, you need to use the dot function in numpy. For, example if A and B are two numpy matrices, then the matrix operation AB is np.dot(A, B)
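A short sketch of these debugging tips (a made-up example, not part of the assignment):
```python
import numpy as np

A = np.random.randn(3, 2)
B = np.random.randn(2, 4)
print(A.shape, B.shape)   # (3, 2) (2, 4) -- inner dimensions match
C = np.dot(A, B)          # matrix product, shape (3, 4)
print(C.shape)

# A * B would fail here: '*' is element-wise and needs broadcastable shapes.
```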
## Return a 5x5 Identity Matrix
```
A = np.eye(5) # using eye()
A
```
Implement linear regression with one variable to predict profits for a food truck. Suppose you are the CEO of a restaurant franchise and are considering different cities for opening a new outlet. The chain already has trucks in various cities and you have data for profits and populations from the cities. You would like to use this data to help you select which city to expand to next.
The file Data/ex1data1.txt contains the dataset for our linear regression problem. The first column is the population of a city (in 10,000s) and the second column is the profit of a food truck in that city (in $10,000s). A negative value for profit indicates a loss.
## 1) Load the dataset
```
# Load the dataset
data = np.loadtxt('ex1data1.txt',delimiter=',')
X = data[:,0]
y = data[:,1]
# X and y are matrices
m = y.size # number of training samples
m
X.shape, y.shape, X.ndim, y.ndim
```
## 2) Plotting the Data
Before starting on any task, it is often useful to understand the data by visualizing it. This dataset has only two properties to plot (profit and population).
```
"""
Plots the data points x and y into a new figure. Plots the data
points and gives the figure axes labels of population and profit.
Parameters
----------
x : array_like
Data point values for x-axis.
y : array_like
Data point values for y-axis. Note x and y should have the same size.
----
You can use the 'ro' option with plot to have the markers
appear as red circles. Furthermore, you can make the markers larger by
using plot(..., 'ro', ms=10), where `ms` refers to marker size. You
can also set the marker edge color using the `mec` property.
"""
def plotData(x,y):
fig = plt.figure(figsize=(8,6))
plt.plot(x,y,'ro',ms=10,mec='k')
plt.xlabel('Population of City in 10,000s')
plt.ylabel('Profit in $10,000s')
plotData(X,y)
```
## 3) Gradient Descent
Fit the linear regression parameters $\theta$ to the dataset using gradient descent.
<a id="section2"></a>
### 3.1 Update Equations
The objective of linear regression is to minimize the cost function $J(\theta)$
$$ J(\theta) = \frac{1}{2m} \sum_{i=1}^m \left( h_{\theta}(x^{(i)}) - y^{(i)}\right)^2$$
where the hypothesis $h_\theta(x)$ is given by the linear model
$$ h_\theta(x) = \theta^Tx = \theta_0 + \theta_1 x_1$$
Recall that the parameters of your model are the $\theta_j$ values. These are
the values you will adjust to minimize cost $J(\theta)$. One way to do this is to
use the **batch gradient descent algorithm**. In batch gradient descent, each
iteration performs the update
$$ \theta_j = \theta_j - \alpha \frac{1}{m} \sum_{i=1}^m \left( h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} \qquad \text{simultaneously update } \theta_j \text{ for all } j$$
With each step of gradient descent, your parameters $\theta_j$ come closer to the optimal values that will achieve the lowest cost J($\theta$).
<div class="alert alert-block alert-warning">
**Implementation Note:** We store each sample as a row in the the $X$ matrix in Python `numpy`. To take into account the intercept term ($\theta_0$), we add an additional first column to $X$ and set it to all ones. This allows us to treat $\theta_0$ as simply another 'feature'.
</div>
```
# initially X contains the single feature x1. Add x0 = 1, so X will now contain the features x0, x1
#### Add a column of ones to X. The numpy function stack() joins arrays along a given axis.
# The first axis (axis=0) refers to rows (training samples), and second axis (axis=1) refers to columns (features).
X = np.stack([np.ones(m),X],axis=1) # This cell is executed only once!
```
<a id="section2"></a>
### 3.2 Computing the cost $J(\theta)$
As you perform gradient descent to minimize the cost function $J(\theta)$, it is helpful to monitor the convergence by computing the cost. Implement a function to calculate $J(\theta)$ so you can check the convergence of your gradient descent implementation.
Remember that the variables $X$ and $y$ are not scalar values. $X$ is a matrix whose rows represent the samples from the training set (feature) and $y$ (label) is a vector whose each element represent the value at a given row of $X$.
<a id="computeCost"></a>
```
"""
Compute cost for linear regression. Computes the cost of using theta as the
parameter for linear regression to fit the data points in X and y.
Parameters
----------
X : array_like
The input dataset of shape (m x n+1) dimensions, where m is the number of samples,
and n is the number of features. We assume a vector of one's already
appended to the features so we have n+1 columns.
y : array_like
The values of the function at each data point. This is a vector of
shape (m, ) i.e. (mx1) dimensions
theta : array_like
The parameters for the hypothesis/regression function. This is a vector of
shape (n+1, ) i.e. (n+1)x1 dimensions.
Returns
-------
J : float - The value of the regression cost function.
"""
def computeCost(X,y,theta):
m = y.size # no. of training samples
J = 0
h = np.dot(X,theta) # X and theta are matrices
J = (1/(2 * m)) * np.sum(np.square(np.dot(X, theta) - y))
return J
# take random values of theta0 and theta1
J = computeCost(X,y ,theta=np.array([0.0,0.0])) # two values for theta0 and theta1
print(f"With theta = [0, 0] \nCost computed = {J:.2f}")
print()
J = computeCost(X,y ,theta=np.array([-1,2]))
print(f"With theta = [-1, 2] \nCost computed = {J:.2f}")
```
<a id="section3"></a>
### 3.3 Gradient descent
Complete a function which Implements gradient descent. Update $\theta$ with each iteration of the loop.
As you program, make sure you understand what you are trying to optimize and what is being updated. Keep in mind that the cost $J(\theta)$ is parameterized by the vector $\theta$, not $X$ and $y$. That is, we minimize the value of $J(\theta)$ by changing the values of the vector $\theta$, not by changing $X$ or $y$.
A good way to verify that gradient descent is working correctly is to look at the value of $J(\theta)$ and check that it is decreasing with each step.
```
"""
Performs gradient descent to learn `theta`. Updates theta by taking `num_iters`
gradient steps with learning rate `alpha`.
Parameters
----------
X : array_like
The input dataset of shape (m x n+1).
y : array_like
Value at given features. A vector of shape (m, ), i.e. (mx1) dimensions
theta : array_like
Initial values for the linear regression parameters.
A vector of shape (n+1, ), i.e. (n+1)x1 dimensions
alpha : float
The learning rate.
num_iters : int
The number of iterations for gradient descent.
Returns
-------
theta : array_like
The learned linear regression parameters. A vector of shape (n+1, ). This is the optimal theta
for which J is minimum
J_history : list
A python list for the values of the cost function after each iteration.
Instructions
------------
Perform a single gradient step on the parameter vector theta.
While debugging, it can be useful to print out the values of
the cost function (computeCost) and gradient here.
"""
def gradient_descent(X,y,theta,alpha,num_iters):
m = y.size # or y.shape[0] # number of training samples
# make a copy of theta, to avoid changing the original array, since numpy arrays are passed by reference to functions
theta = theta.copy()
J_history = [] # Use a python list to store cost in every iteration
for i in range(num_iters):
theta = theta - (alpha/m) * (np.dot(X,theta) - y).dot(X)
# print(theta)
# save the cost J in every iteration
min_cost = computeCost(X,y,theta)
J_history.append(min_cost)
# print(J_history[i])
return theta, J_history # theta will return 2 values --> theta0, theta1
# initialize fitting parameters to zero
theta = np.zeros(2)
# some gradient descent settings
iterations = 1500
alpha = 0.01
theta, J_history = gradient_descent(X,y,theta,alpha,iterations)
print('Theta found by gradient descent: {:.4f}, {:.4f}'.format(*theta)) # unpack the two theta values into the format string
```
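As a quick convergence check (not part of the original exercise), the returned `J_history` can be plotted to confirm that the cost decreases with each iteration, as suggested above:
```python
# Plot the cost after every iteration; it should decrease monotonically
# for a well-chosen learning rate alpha.
plt.figure(figsize=(8, 5))
plt.plot(np.arange(1, iterations + 1), J_history)
plt.xlabel('Iteration')
plt.ylabel(r'Cost $J(\theta)$')
plt.title('Convergence of gradient descent')
```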
## 4) Plot the linear fit
```
plotData(X[:,1],y) # plot the samples - excluding the intercept column x0 = 1 (0th column)
# Linear regression line/hypothesis line of best fit --> y = h(x) = theta0 + theta1*X
# x is the population feature (column 1), y is the hypothesis h(x) = X·theta
plt.plot(X[:,1],np.dot(X,theta),ls='-')
plt.legend(['Training Data','Linear Regression']); # x is training data, y is linear regression line
```
## 5) Predict some values
```
# we now have the optimal theta
# Predict values for population sizes of 35,000 and 70,000
# Note that the first argument to the `numpy` function `dot` is a python list.
# `numpy` can internally convert **valid** python lists to numpy arrays when explicitly provided as arguments to `numpy` functions.
# population x is in units of 10,000 and profit y is in units of $10,000, so 3.5 --> 35,000 people and 1 --> $10,000
predict1 = np.dot([1,3.5],theta)
print(f"For population = 35,000, we predict a profit of {predict1 * 10000:.2f}")
predict2 = np.dot([1,7],theta)
print(f"For population = 35,000, we predict a profit of {predict2 * 10000:.2f}")
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/hf2000510/infectious_disease_modelling/blob/master/part_two.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Make sure to open in Colab to see the plots!
### Importing the libraries
```
from scipy.integrate import odeint
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
!pip install mpld3
import mpld3
mpld3.enable_notebook()
```
### Plot Function
```
def plotseird(t, S, E, I, R, D=None, L=None, R0=None, Alpha=None, CFR=None):
f, ax = plt.subplots(1,1,figsize=(10,4))
ax.plot(t, S, 'b', alpha=0.7, linewidth=2, label='Susceptible')
ax.plot(t, E, 'y', alpha=0.7, linewidth=2, label='Exposed')
ax.plot(t, I, 'r', alpha=0.7, linewidth=2, label='Infected')
ax.plot(t, R, 'g', alpha=0.7, linewidth=2, label='Recovered')
if D is not None:
ax.plot(t, D, 'k', alpha=0.7, linewidth=2, label='Dead')
ax.plot(t, S+E+I+R+D, 'c--', alpha=0.7, linewidth=2, label='Total')
else:
ax.plot(t, S+E+I+R, 'c--', alpha=0.7, linewidth=2, label='Total')
ax.set_xlabel('Time (days)')
ax.yaxis.set_tick_params(length=0)
ax.xaxis.set_tick_params(length=0)
ax.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax.legend(borderpad=2.0)
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax.spines[spine].set_visible(False)
if L is not None:
plt.title("Lockdown after {} days".format(L))
plt.show();
if R0 is not None or CFR is not None:
f = plt.figure(figsize=(12,4))
if R0 is not None:
# sp1
ax1 = f.add_subplot(121)
ax1.plot(t, R0, 'b--', alpha=0.7, linewidth=2, label='R_0')
ax1.set_xlabel('Time (days)')
ax1.title.set_text('R_0 over time')
# ax.set_ylabel('Number (1000s)')
# ax.set_ylim(0,1.2)
ax1.yaxis.set_tick_params(length=0)
ax1.xaxis.set_tick_params(length=0)
ax1.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax1.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax1.spines[spine].set_visible(False)
if Alpha is not None:
# sp2
ax2 = f.add_subplot(122)
ax2.plot(t, Alpha, 'r--', alpha=0.7, linewidth=2, label='alpha')
ax2.set_xlabel('Time (days)')
ax2.title.set_text('fatality rate over time')
# ax.set_ylabel('Number (1000s)')
# ax.set_ylim(0,1.2)
ax2.yaxis.set_tick_params(length=0)
ax2.xaxis.set_tick_params(length=0)
ax2.grid(b=True, which='major', c='w', lw=2, ls='-')
legend = ax2.legend()
legend.get_frame().set_alpha(0.5)
for spine in ('top', 'right', 'bottom', 'left'):
ax2.spines[spine].set_visible(False)
plt.show();
```
## Basic SIR Equations
```
def deriv(y, t, N, beta, gamma):
S, I, R = y
dSdt = -beta * S * I / N
dIdt = beta * S * I / N - gamma * I
dRdt = gamma * I
return dSdt, dIdt, dRdt
```
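For completeness, a minimal sketch (with assumed parameter values) of integrating this basic SIR system with `odeint` before the exposed compartment is added:
```python
# Assumed toy parameters: R_0 = 4 with a 4-day infectious period.
N = 1_000_000
gamma = 1.0 / 4.0
beta = 4.0 * gamma
S0, I0, R0 = N - 1, 1, 0          # start with a single infected individual
t = np.linspace(0, 99, 100)

ret = odeint(deriv, (S0, I0, R0), t, args=(N, beta, gamma))
S, I, R = ret.T
plt.plot(t, S, label='Susceptible')
plt.plot(t, I, label='Infected')
plt.plot(t, R, label='Recovered')
plt.legend();
```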
## The Exposed-Compartment
```
def deriv(y, t, N, beta, gamma, delta):
S, E, I, R = y
dSdt = -beta * S * I / N
dEdt = beta * S * I / N - delta * E
dIdt = delta * E - gamma * I
dRdt = gamma * I
return dSdt, dEdt, dIdt, dRdt
```
### Variables that we define:
```
N = 1_000_000 # total population
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0 = 5.0
beta = R_0 * gamma # R_0 = beta / gamma, so beta = R_0 * gamma
S0, E0, I0, R0 = N-1, 1, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta))
S, E, I, R = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R)
```
## Programming the Dead-Compartment
```
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta * S * I / N
dEdt = beta * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
```
### New variables:
```
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0 = 5.0
beta = R_0 * gamma # R_0 = beta / gamma, so beta = R_0 * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R, D)
```
## Time-Dependent $R_{0}$
### Simple Approach: Single Lockdown
```
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
L = 40
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
def R_0(t):
return 5.0 if t < L else 0.9
def beta(t):
return R_0(t) * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, L)
```
### Advanced Approach: logistic $R_{0}$
```
### we will use the logistic R in our model, because R probably never “jumps” from one value to another. Rather, it continuously changes.
def deriv(y, t, N, beta, gamma, delta, alpha, rho):
S, E, I, R, D = y
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha) * gamma * I - alpha * rho * I
dRdt = (1 - alpha) * gamma * I
dDdt = alpha * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0_start, k, x0, R_0_end = 5.0, 0.5, 50, 0.5
def logistic_R_0(t):
return (R_0_start-R_0_end) / (1 + np.exp(-k*(-t+x0))) + R_0_end
def beta(t):
return logistic_R_0(t) * gamma
alpha = 0.2 # 20% death rate
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha, rho))
S, E, I, R, D = ret.T
R0_over_time = [logistic_R_0(i) for i in range(len(t))] # to plot R_0 over time: get function values
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, R0=R0_over_time)
```
## Resource- and Age-Dependent Fatality Rate
```
def deriv(y, t, N, beta, gamma, delta, alpha_opt, rho):
S, E, I, R, D = y
def alpha(t):
return s * I/N + alpha_opt
dSdt = -beta(t) * S * I / N
dEdt = beta(t) * S * I / N - delta * E
dIdt = delta * E - (1 - alpha(t)) * gamma * I - alpha(t) * rho * I
dRdt = (1 - alpha(t)) * gamma * I
dDdt = alpha(t) * rho * I
return dSdt, dEdt, dIdt, dRdt, dDdt
### New variables:
N = 1_000_000
D = 4.0 # infections lasts four days
gamma = 1.0 / D
delta = 1.0 / 5.0 # incubation period of five days
R_0_start, k, x0, R_0_end = 5.0, 0.5, 50, 0.5
def logistic_R_0(t):
return (R_0_start-R_0_end) / (1 + np.exp(-k*(-t+x0))) + R_0_end
def beta(t):
return logistic_R_0(t) * gamma
alpha_by_agegroup = {"0-29": 0.01, "30-59": 0.05, "60-89": 0.2, "89+": 0.3}
proportion_of_agegroup = {"0-29": 0.1, "30-59": 0.3, "60-89": 0.4, "89+": 0.2}
s = 0.01
alpha_opt = sum(alpha_by_agegroup[i] * proportion_of_agegroup[i] for i in list(alpha_by_agegroup.keys()))
rho = 1/9 # 9 days from infection until death
S0, E0, I0, R0, D0 = N-1, 1, 0, 0, 0 # initial conditions: one exposed
t = np.linspace(0, 99, 100) # Grid of time points (in days)
y0 = S0, E0, I0, R0, D0 # Initial conditions vector
# Integrate the SIR equations over the time grid, t.
ret = odeint(deriv, y0, t, args=(N, beta, gamma, delta, alpha_opt, rho))
S, E, I, R, D = ret.T
R0_over_time = [logistic_R_0(i) for i in range(len(t))] # to plot R_0 over time: get function values
Alpha_over_time = [s * I[i]/N + alpha_opt for i in range(len(t))] # to plot alpha over time
```
### Plot the result:
```
plotseird(t, S, E, I, R, D, R0=R0_over_time, Alpha=Alpha_over_time)
```
|
github_jupyter
|
<a href="https://colab.research.google.com/github/ayulockin/Explore-NFNet/blob/main/Train_Basline_With_Gradient_Clipping.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# 🧰 Setups, Installations and Imports
```
%%capture
!pip install wandb --upgrade
!pip install albumentations
!git clone https://github.com/ayulockin/Explore-NFNet
import tensorflow as tf
print(tf.__version__)
import tensorflow_datasets as tfds
import sys
sys.path.append("Explore-NFNet")
import os
import cv2
import numpy as np
from functools import partial
import matplotlib.pyplot as plt
# Imports from the cloned repository
from models.resnet import resnet_v1
from models.mini_vgg import get_mini_vgg
# Augmentation related imports
import albumentations as A
# Seed everything for reproducibility
def seed_everything():
# Set the random seeds
os.environ['TF_CUDNN_DETERMINISTIC'] = '1'
np.random.seed(hash("improves reproducibility") % 2**32 - 1)
tf.random.set_seed(hash("by removing stochasticity") % 2**32 - 1)
seed_everything()
# Avoid TensorFlow to allocate all the GPU at once.
# Ref: https://www.tensorflow.org/guide/gpu
gpus = tf.config.experimental.list_physical_devices('GPU')
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices('GPU')
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
import wandb
from wandb.keras import WandbCallback
wandb.login()
DATASET_NAME = 'cifar10'
IMG_HEIGHT = 32
IMG_WIDTH = 32
NUM_CLASSES = 10
SHUFFLE_BUFFER = 1024
BATCH_SIZE = 256
EPOCHS = 100
AUTOTUNE = tf.data.experimental.AUTOTUNE
print(f'Global batch size is: {BATCH_SIZE}')
```
# ⛄ Download and Prepare Dataset
```
(train_ds, val_ds, test_ds), info = tfds.load(name=DATASET_NAME,
split=["train[:85%]", "train[85%:]", "test"],
with_info=True,
as_supervised=True)
@tf.function
def preprocess(image, label):
# preprocess image
image = tf.cast(image, tf.float32)
image = image/255.0
return image, label
# Define the augmentation policies. Note that they are applied sequentially with some probability p.
transforms = A.Compose([
A.HorizontalFlip(p=0.7),
A.Rotate(limit=30, p=0.7)
])
# Apply augmentation policies.
def aug_fn(image):
data = {"image":image}
aug_data = transforms(**data)
aug_img = aug_data["image"]
return aug_img
@tf.function
def apply_augmentation(image, label):
aug_img = tf.numpy_function(func=aug_fn, inp=[image], Tout=tf.float32)
aug_img.set_shape((IMG_HEIGHT, IMG_WIDTH, 3))
return aug_img, label
train_ds = (
train_ds
.shuffle(SHUFFLE_BUFFER)
.map(preprocess, num_parallel_calls=AUTOTUNE)
.map(apply_augmentation, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
val_ds = (
val_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
test_ds = (
test_ds
.map(preprocess, num_parallel_calls=AUTOTUNE)
.batch(BATCH_SIZE)
.prefetch(AUTOTUNE)
)
def show_batch(image_batch, label_batch):
plt.figure(figsize=(10,10))
for n in range(25):
ax = plt.subplot(5,5,n+1)
plt.imshow(image_batch[n])
# plt.title(f'{np.argmax(label_batch[n].numpy())}')
plt.title(f'{label_batch[n].numpy()}')
plt.axis('off')
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch, label_batch)
print(image_batch.shape, label_batch.shape)
```
# 🐤 Model
```
class ResNetModel(tf.keras.Model):
def __init__(self, resnet):
super(ResNetModel, self).__init__()
self.resnet = resnet
def train_step(self, data):
images, labels = data
with tf.GradientTape() as tape:
predictions = self.resnet(images)
loss = self.compiled_loss(labels, predictions)
trainable_params = self.resnet.trainable_variables
gradients = tape.gradient(loss, trainable_params)
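# Clip each per-variable gradient to a maximum L2 norm of 0.01 before the optimizer update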
gradients_clipped = [tf.clip_by_norm(g, 0.01) for g in gradients]
self.optimizer.apply_gradients(zip(gradients_clipped, trainable_params))
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def test_step(self, data):
images, labels = data
predictions = self.resnet(images, training=False)
loss = self.compiled_loss(labels, predictions)
self.compiled_metrics.update_state(labels, predictions)
return {m.name: m.result() for m in self.metrics}
def save_weights(self, filepath):
self.resnet.save_weights(filepath=filepath, save_format="tf")
def call(self, inputs, *args, **kwargs):
return self.resnet(inputs)
tf.keras.backend.clear_session()
test_model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
test_model.build((1, IMG_HEIGHT, IMG_WIDTH, 3))
test_model.summary()
print(f"Total learnable parameters: {test_model.count_params()/1e6} M")
```
# 📲 Callbacks
```
earlystopper = tf.keras.callbacks.EarlyStopping(
monitor='val_loss', patience=10, verbose=0, mode='auto',
restore_best_weights=True
)
reducelronplateau = tf.keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.5,
patience=3, verbose=1
)
```
# 🚋 Train with W&B
```
tf.keras.backend.clear_session()
# Intialize model
model = ResNetModel(resnet_v1((IMG_HEIGHT, IMG_WIDTH, 3), 20, num_classes=NUM_CLASSES, use_bn=False))
model.compile('adam', 'sparse_categorical_crossentropy', metrics=['acc'])
# Intialize W&B run
run = wandb.init(entity='ayush-thakur', project='nfnet', job_type='train-baseline')
# Train model
model.fit(train_ds,
epochs=EPOCHS,
validation_data=val_ds,
callbacks=[WandbCallback(),
reducelronplateau,
earlystopper])
# Evaluate model on test set
loss, acc = model.evaluate(test_ds)
wandb.log({'Test Accuracy': round(acc, 3)})
# Close W&B run
run.finish()
```

|
github_jupyter
|
# Expression Quality Control (Part 2)
This is a template notebook for performing the final quality control on your organism's expression data. This requires a curated metadata sheet.
## Setup
```
import itertools
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from os import path
from scipy import stats
from tqdm.notebook import tqdm
sns.set_style('ticks')
```
### Inputs
```
logTPM_file = path.join('..','data','raw_data','log_tpm.csv') # Enter log-TPM filename here
all_metadata_file = path.join('..','data','interim','metadata_qc_part1_all.tsv') # Enter full metadata filename here
metadata_file = path.join('..','data','interim','metadata_qc_part1_curated.tsv') # Enter curated metadata filename here
```
### Load expression data
```
DF_log_tpm = pd.read_csv(logTPM_file,index_col=0).fillna(0)
print('Number of genes:',DF_log_tpm.shape[0])
print('Number of samples:',DF_log_tpm.shape[1])
DF_log_tpm.head()
```
### Load metadata
```
DF_metadata = pd.read_csv(metadata_file,index_col=0,sep='\t')
print('Number of samples with curated metadata:',DF_metadata.shape[0])
DF_metadata.head()
DF_metadata_all = pd.read_csv(all_metadata_file,index_col=0,sep='\t')
```
## Remove samples due to poor metadata
After curation, some samples either did not have enough replicates or metadata to warrant inclusion in this database.
```
DF_metadata_passed_step4 = DF_metadata[~DF_metadata.skip.fillna(False)].copy()
print('New number of samples with curated metadata:',DF_metadata_passed_step4.shape[0])
DF_metadata_passed_step4.head()
```
### Check curation
Since manual curation is error-prone, we want to make sure that all samples have labels for their project and condition. In addition, there should only be one reference condition in each project, and it should be in the project itself.
Any samples that fail these checks will be printed below.
```
assert(DF_metadata_passed_step4.project.notnull().all())
assert(DF_metadata_passed_step4.condition.notnull().all())
for name,group in DF_metadata_passed_step4.groupby('project'):
ref_cond = group.reference_condition.unique()
# Ensure that there is only one reference condition per project
if not len(ref_cond) == 1:
print('Multiple reference conditions for:', name)
# Ensure the reference condition is in fact in the project
ref_cond = ref_cond[0]
if not ref_cond in group.condition.tolist():
print('Reference condition not in project:', name)
```
Next, make a new column called ``full_name`` that gives every experimental condition a unique, human-readable identifier.
```
DF_metadata_passed_step4['full_name'] = DF_metadata_passed_step4['project'].str.cat(DF_metadata_passed_step4['condition'],sep=':')
```
### Remove samples with only one replicate
First, find sample names that have at least two replicates.
```
counts = DF_metadata_passed_step4.full_name.value_counts()
keep_samples = counts[counts >= 2].index
print(keep_samples[:5])
```
Only keep these samples
```
DF_metadata_passed_step4 = DF_metadata_passed_step4[DF_metadata_passed_step4.full_name.isin(keep_samples)]
print('New number of samples with curated metadata:',DF_metadata_passed_step4.shape[0])
DF_metadata_passed_step4.head()
```
### Save this information to the full metadata dataframe
```
DF_metadata_all['passed_curation'] = DF_metadata_all.index.isin(DF_metadata_passed_step4.index)
```
## Check correlations between replicates
### Remove failed data from log_tpm files
```
DF_log_tpm = DF_log_tpm[DF_metadata_passed_step4.index]
```
### Compute Pearson R Score
Biological replicates should have a Pearson R correlation above 0.95. For samples with more than 2 replicates, the replicates must have R >= 0.95 with at least one other replicate or it will be dropped. The correlation threshold can be changed below:
```
rcutoff = 0.95
```
The following code computes correlations between all samples and collects correlations between replicates and non-replicates.
```
rep_corrs = {}
rand_corrs = {}
num_comparisons = len(DF_metadata_passed_step4)*(len(DF_metadata_passed_step4)-1)/2
for exp1,exp2 in tqdm(itertools.combinations(DF_metadata_passed_step4.index,2),total=num_comparisons):
if DF_metadata_passed_step4.loc[exp1,'full_name'] == DF_metadata_passed_step4.loc[exp2,'full_name']:
rep_corrs[(exp1,exp2)] = stats.pearsonr(DF_log_tpm[exp1],DF_log_tpm[exp2])[0]
else:
rand_corrs[(exp1,exp2)] = stats.pearsonr(DF_log_tpm[exp1],DF_log_tpm[exp2])[0]
```
Correlations can be plotted on a histogram
```
fig,ax = plt.subplots(figsize=(5,5))
ax2 = ax.twinx()
ax2.hist(rep_corrs.values(),bins=50,range=(0.2,1),alpha=0.8,color='green',linewidth=0)
ax.hist(rand_corrs.values(),bins=50,range=(0.2,1),alpha=0.8,color='blue',linewidth=0)
ax.set_title('Pearson R correlation between experiments',fontsize=14)
ax.set_xlabel('Pearson R correlation',fontsize=14)
ax.set_ylabel('Different Conditions',fontsize=14)
ax2.set_ylabel('Known Replicates',fontsize=14)
med_corr = np.median([v for k,v in rep_corrs.items()])
print('Median Pearson R between replicates: {:.2f}'.format(med_corr))
```
Remove samples without any high-correlation replicates
```
dissimilar = []
for idx, grp in DF_metadata_passed_step4.groupby('full_name'):
    # Zero out each sample's self-correlation (the diagonal), then take the max
    # to get its best correlation with any other replicate in the group
    ident = np.identity(len(grp))
    corrs = (DF_log_tpm[grp.index].corr() - ident).max()
    # Flag samples whose best replicate correlation is below the cutoff
    dissimilar.extend(corrs[corrs < rcutoff].index)
# Save this information in both the original metadata dataframe and the new metadata dataframe
DF_metadata_all['passed_replicate_correlations'] = ~DF_metadata_all.index.isin(dissimilar)
DF_metadata_passed_step4['passed_replicate_correlations'] = ~DF_metadata_passed_step4.index.isin(dissimilar)
DF_metadata_final = DF_metadata_passed_step4[DF_metadata_passed_step4['passed_replicate_correlations']]
print('# Samples that passed replicate correlations:',len(DF_metadata_final))
```
## Check that reference conditions still exist
If a reference condition was removed due to poor replicate correlations, a new reference condition needs to be defined.
Again, any projects that fail these checks will be printed below.
```
project_exprs = []
for name,group in DF_metadata_final.groupby('project'):
# Get reference condition
ref_cond = group.reference_condition.iloc[0]
# Ensure the reference condition is still in the project
if ref_cond not in group.condition.tolist():
print('Reference condition missing from:', name)
# Check that each project has at least two conditions (a reference and at least one test condition)
if len(group.condition.unique()) <= 1:
print('Only one condition in:', name)
```
If necessary, choose a new reference condition for any failed projects and re-run the notebook, for example as sketched below.
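A minimal sketch of what such a fix could look like in-memory (normally the curated metadata file itself would be updated and the notebook re-run; `example_project` and `new_reference_condition` are placeholders, not values from this dataset):
```
# Hypothetical fix: promote an existing condition to be the new reference
# for a project whose original reference condition was dropped above
failed_project = 'example_project'           # placeholder project name
new_reference = 'new_reference_condition'    # placeholder; must be a condition in that project
is_project = DF_metadata_final.project == failed_project
DF_metadata_final.loc[is_project, 'reference_condition'] = new_reference
```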
## Normalize dataset to reference conditions
```
DF_log_tpm_final = DF_log_tpm[DF_metadata_final.index]
project_exprs = []
for name,group in DF_metadata_final.groupby('project'):
# Get reference condition
ref_cond = group.reference_condition.iloc[0]
# Get reference condition sample ids
ref_samples = group[group.condition == ref_cond].index
# Get reference condition expression
ref_expr = DF_log_tpm_final[ref_samples].mean(axis=1)
# Subtract reference expression from project
project_exprs.append(DF_log_tpm_final[group.index].sub(ref_expr,axis=0))
DF_log_tpm_norm = pd.concat(project_exprs,axis=1)
```
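As an optional sanity check (a sketch, not part of the original workflow), the reference samples of each project should now average to approximately zero for every gene after centering:
```
# Optional check: after subtracting the per-project reference mean, the mean
# expression of each project's reference samples should be ~0 for every gene
for name, group in DF_metadata_final.groupby('project'):
    ref_cond = group.reference_condition.iloc[0]
    ref_samples = group[group.condition == ref_cond].index
    assert np.allclose(DF_log_tpm_norm[ref_samples].mean(axis=1), 0, atol=1e-6)
```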
## Save final datasets
```
logTPM_qc_file = path.join('..','data','processed_data','log_tpm.csv')
logTPM_norm_file = path.join('..','data','processed_data','log_tpm_norm.csv')
final_metadata_file = path.join('..','data','processed_data','metadata.tsv')
final_metadata_all_file = path.join('..','data','interim','metadata_qc_part2_all.tsv')
DF_log_tpm_final.to_csv(logTPM_qc_file)
DF_log_tpm_norm.to_csv(logTPM_norm_file)
DF_metadata_final.to_csv(final_metadata_file, sep='\t')
DF_metadata_all.to_csv(final_metadata_all_file, sep='\t')
```
|
github_jupyter
|
# Hyperparams And Distributions
This page introduces hyperparameters and distributions in Neuraxle. You can find the [Hyperparams Distribution API here](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html), and the
[Hyperparameter Samples API here](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.space.html).
A hyperparameter is a parameter drawn from a prior distribution. Neuraxle ships a few built-in distributions and is also compatible with scipy distributions.
Create a [Uniform Distribution](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.Uniform):
```
from neuraxle.hyperparams.distributions import Uniform
hd = Uniform(
min_included=-10,
max_included=10,
null_default_value=0
)
```
Sample the random variable using [rvs](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.rvs):
```
sample = hd.rvs()
print(sample)
```
Nullify the random variable using [nullify](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.nullify):
```
nullified_sample = hd.nullify()
assert nullified_sample == hd.null_default_value
```
Get the probability density function value at `x` using [pdf](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.pdf):
```
pdf = hd.pdf(1)
print('pdf: {}'.format(pdf))
```
Get the cumulative distribution function value at `x` using [cdf](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.distributions.html#neuraxle.hyperparams.distributions.HyperparameterDistribution.cdf):
```
cdf = hd.cdf(1)
print('cdf: {}'.format(cdf))
```
## Setting And Updating Hyperparams
In Neuraxle, each step has hyperparams of type [HyperparameterSamples](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.space.html#neuraxle.hyperparams.space.HyperparameterSamples), and spaces of type [HyperparameterSpace](https://www.neuraxle.org/stable/api/neuraxle.hyperparams.space.html#neuraxle.hyperparams.space.HyperparameterSpace).
Consider a simple pipeline that contains 2 MultiplyByN steps, and one PCA component inside a nested pipeline:
```
from sklearn.decomposition import PCA
from neuraxle.hyperparams.distributions import RandInt
from neuraxle.hyperparams.space import HyperparameterSpace, HyperparameterSamples
from neuraxle.pipeline import Pipeline
from neuraxle.steps.numpy import MultiplyByN
p = Pipeline([
('step1', MultiplyByN(2)),
('step2', MultiplyByN(2)),
Pipeline([
PCA(n_components=4)
])
])
```
We can set or update the hyperparams and spaces by doing the following:
```
p.set_hyperparams(HyperparameterSamples({
'step1__multiply_by': 42,
'step2__multiply_by': -10,
'Pipeline__PCA__n_components': 2
}))
p.update_hyperparams(HyperparameterSamples({
'Pipeline__PCA__n_components': 3
}))
p.set_hyperparams_space(HyperparameterSpace({
'step1__multiply_by': RandInt(42, 50),
'step2__multiply_by': RandInt(-10, 0),
'Pipeline__PCA__n_components': RandInt(2, 3)
}))
```
We can sample the space of random variables:
```
samples = p.get_hyperparams_space().rvs()
assert 42 <= samples['step1__multiply_by'] <= 50
assert -10 <= samples['step2__multiply_by'] <= 0
assert samples['Pipeline__PCA__n_components'] in [2, 3]
```
We can get all hyperparams:
```
samples = p.get_hyperparams()
assert 42 <= samples['step1__multiply_by'] <= 50
assert -10 <= samples['step2__multiply_by'] <= 0
assert samples['Pipeline__PCA__n_components'] in [2, 3]
assert p['Pipeline']['PCA'].get_wrapped_sklearn_predictor().n_components in [2, 3]
```
## Neuraxle Custom Distributions
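A minimal sketch of a toy custom distribution, assuming the `HyperparameterDistribution` base class accepts `null_default_value` (as `Uniform` does above) and that overriding `rvs`, `pdf`, and `cdf` is sufficient; the `CoinFlip` class below is purely illustrative, so check the distributions API linked above for the exact abstract methods:
```
from neuraxle.hyperparams.distributions import HyperparameterDistribution
import random

class CoinFlip(HyperparameterDistribution):
    """Toy two-valued distribution: returns 1 with probability p, else 0."""
    def __init__(self, p: float = 0.5, null_default_value: int = 0):
        super().__init__(null_default_value=null_default_value)
        self.p = p

    def rvs(self):
        # Draw a single sample
        return 1 if random.random() < self.p else 0

    def pdf(self, x):
        # Probability mass at x (0 outside the support)
        if x == 1:
            return self.p
        if x == 0:
            return 1 - self.p
        return 0.

    def cdf(self, x):
        # Cumulative probability P(X <= x)
        if x < 0:
            return 0.
        if x < 1:
            return 1 - self.p
        return 1.
```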
## Scipy Distributions
To define a scipy distribution that is compatible with Neuraxle, you need to wrap the scipy distribution with ScipyDistributionWrapper:
```
from neuraxle.hyperparams.scipy_distributions import ScipyDistributionWrapper, BaseContinuousDistribution, BaseDiscreteDistribution
from scipy.integrate import quad
from scipy.special import factorial
from scipy.stats import rv_continuous, norm, rv_discrete, rv_histogram, truncnorm, randint
import numpy as np
import math
hd = ScipyDistributionWrapper(
scipy_distribution=randint(low=0, high=10),
is_continuous=False,
null_default_value=0
)
```
### Discrete Distributions
For discrete distributions that inherit from [rv_discrete](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_discrete.html#scipy.stats.rv_discrete), you only need to implement `_pmf`. The rest is taken care of magically by scipy.
For example, here is a discrete Poisson distribution:
```
class Poisson(BaseDiscreteDistribution):
def __init__(self, min_included: float, max_included: float, null_default_value: float = None, mu=0.6):
super().__init__(
min_included=min_included,
max_included=max_included,
name='poisson',
null_default_value=null_default_value
)
self.mu = mu
def _pmf(self, x):
return math.exp(-self.mu) * self.mu ** x / factorial(x)
```
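A brief usage sketch (the `mu` value and bounds below are illustrative, not from the original example):
```
# Illustrative parameters for the Poisson distribution defined above
poisson = Poisson(min_included=0, max_included=10, null_default_value=0, mu=2.5)
print(poisson.rvs())
```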
### Continuous Distributions
For continuous distributions that inherit from [rv_continuous](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.rv_continuous.html), you only need to implement the `_pdf` function. The rest is taken care of magically by scipy.
For example, here is a continuous Gaussian distribution:
```
class Gaussian(BaseContinuousDistribution):
def __init__(self, min_included: int, max_included: int, null_default_value: float = None):
self.max_included = max_included
self.min_included = min_included
BaseContinuousDistribution.__init__(
self,
name='gaussian',
min_included=min_included,
max_included=max_included,
null_default_value=null_default_value
)
def _pdf(self, x):
return math.exp(-x ** 2 / 2.) / np.sqrt(2.0 * np.pi)
```
### Custom Arguments
If you want to add more properties to compute your distributions, just set them on `self`. They will be available in all of the scipy private methods you can override, such as `_pmf` and `_pdf`.
```
class LogNormal(BaseContinuousDistribution):
def __init__(
self,
log2_space_mean: float,
log2_space_std: float,
hard_clip_min: float,
hard_clip_max: float,
null_default_value: float = None
):
if null_default_value is None:
null_default_value = hard_clip_min
if hard_clip_min is None:
hard_clip_min = np.nan
if hard_clip_max is None:
hard_clip_max = np.nan
self.log2_space_mean = log2_space_mean
self.log2_space_std = log2_space_std
super().__init__(
name='log_normal',
min_included=hard_clip_min,
max_included=hard_clip_max,
null_default_value=null_default_value
)
def _pdf(self, x):
if x <= 0:
return 0.
cdf_min = 0.
cdf_max = 1.
pdf_x = 1 / (x * math.log(2) * self.log2_space_std * math.sqrt(2 * math.pi)) * math.exp(
-(math.log2(x) - self.log2_space_mean) ** 2 / (2 * self.log2_space_std ** 2))
return pdf_x / (cdf_max - cdf_min)
```
### Scipy methods
All of the scipy distribution methods are available:
```
def get_many_samples_for(hd, num_trial):
return [hd.rvs() for _ in range(num_trial)]
samples = get_many_samples_for(hd, 1000)
for s in samples:
assert type(s) == int
hd = Gaussian(min_included=0, max_included=10, null_default_value=0)
assert 0.0 <= hd.rvs() <= 10.0
assert hd.pdf(10) < 0.001
assert hd.pdf(0) < 0.42
assert 0.55 > hd.cdf(5.0) > 0.45
assert hd.cdf(0) == 0.0
assert hd.logpdf(5) == -13.418938533204672
assert hd.logcdf(5) == -0.6931477538632531
assert hd.sf(5) == 0.5000002866515718
assert hd.logsf(5) == -0.693146607256966
assert np.all(hd.ppf([0.0, 0.01, 0.05, 0.1, 1 - 0.10, 1 - 0.05, 1 - 0.01, 1.0], 10))
assert np.isclose(hd.moment(2), 50.50000000091249)
assert hd.stats()[0]
assert hd.stats()[1]
assert np.array_equal(hd.entropy(), np.array(0.7094692666023363))
assert hd.median()
assert hd.mean() == 5.398942280397029
assert np.isclose(hd.std(), 4.620759921685374)
assert np.isclose(hd.var(), 21.35142225385382)
assert np.isclose(hd.expect(), 0.39894228040143276)
interval = hd.interval(alpha=[0.25, 0.50])
assert np.all(interval[0])
assert np.all(interval[1])
assert hd.support() == (0, 10)
```
## SKLearn Hyperparams
SKLearnWrapper wraps sklearn predictors so that they can be compatible with Neuraxle. When you set the hyperparams of an SKLearnWrapper, it automatically sets the params of the sklearn predictor for you:
```
from neuraxle.hyperparams.distributions import Choice, RandInt
from neuraxle.hyperparams.space import HyperparameterSpace, HyperparameterSamples
from neuraxle.steps.sklearn import SKLearnWrapper
from sklearn.tree import DecisionTreeClassifier
decision_tree_classifier = SKLearnWrapper(
DecisionTreeClassifier(),
HyperparameterSpace({
'criterion': Choice(['gini', 'entropy']),
'splitter': Choice(['best', 'random']),
'min_samples_leaf': RandInt(2, 5),
'min_samples_split': RandInt(1, 3)
})
).set_hyperparams(HyperparameterSamples({
'criterion': 'gini',
'splitter': 'best',
'min_samples_leaf': 3,
'min_samples_split': 3
}))
```
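As a quick follow-up check (a sketch relying on `get_wrapped_sklearn_predictor()`, which is also used earlier on this page), the hyperparams set on the wrapper should be reflected on the underlying sklearn estimator:
```
# The wrapper forwards the hyperparams to the wrapped sklearn predictor
assert decision_tree_classifier.get_wrapped_sklearn_predictor().min_samples_leaf == 3
assert decision_tree_classifier.get_wrapped_sklearn_predictor().criterion == 'gini'
```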
|
github_jupyter
|
# Hyperparameter Tuning using SageMaker Tensorflow Container
This tutorial focuses on how to create a convolutional neural network model to train the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using the **SageMaker TensorFlow container**. It leverages hyperparameter tuning to kick off multiple training jobs with different hyperparameter combinations, to find the combination that gives the best model training result.
## Set up the environment
We will set up a few things before starting the workflow.
1. specify the s3 bucket and prefix where training data set and model artifacts will be stored
2. get the execution role which will be passed to sagemaker for accessing your resources such as s3 bucket
```
import sagemaker
import project_path
from lib import utils
bucket = '{{s3_workshop_bucket}}' # replace with your S3 bucket name
prefix = 'sagemaker/DEMO-hpo-tensorflow-high' # you can customize the prefix (subfolder) here
role = sagemaker.get_execution_role() # we are using the notebook instance role for training in this example
```
Now we'll import the Python libraries we'll need.
```
import boto3
from time import gmtime, strftime
from sagemaker.tensorflow import TensorFlow
from sagemaker.tuner import IntegerParameter, CategoricalParameter, ContinuousParameter, HyperparameterTuner
```
## Download the MNIST dataset
```
import utils
from tensorflow.contrib.learn.python.learn.datasets import mnist
import tensorflow as tf
data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)
utils.convert_to(data_sets.train, 'train', 'data')
utils.convert_to(data_sets.validation, 'validation', 'data')
utils.convert_to(data_sets.test, 'test', 'data')
```
## Upload the data
We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value identifies the location -- we will use this later when we start the training job.
```
inputs = sagemaker.Session().upload_data(path='data', bucket=bucket, key_prefix=prefix+'/data/mnist')
print(inputs)
```
## Construct a script for distributed training
Here is the full code for the network model:
```
!cat '../scripts/mnist.py'
```
The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation, and inference.
### A regular ```model_fn```
A regular **```model_fn```** follows the pattern:
1. [defines a neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)
2. [applies the ```features``` in the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)
3. [if the ```mode``` is ```PREDICT```, returns the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)
4. [calculates the loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)
5. [creates an optimizer and minimizes the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)
6. [returns the output, optimizer and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)
### Writing a ```model_fn``` for distributed training
When distributed training happens, the same neural network is sent to multiple training instances. Each instance predicts a batch of the dataset, calculates the loss, and lets the optimizer minimize it. One entire loop of this process is called a **training step**.
#### Synchronizing training steps
A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances. It is necessary for distributed training, so the optimizer will keep track of the number of **training steps** between runs:
```python
train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())
```
That is the only required change for distributed training!
## Set up hyperparameter tuning job
*Note, with the default setting below, the hyperparameter tuning job can take about 30 minutes to complete.*
Now we will set up the hyperparameter tuning job using the SageMaker Python SDK, following the steps below:
* Create an estimator to set up the TensorFlow training job
* Define the ranges of hyperparameters we plan to tune; in this example, we are tuning `learning_rate`
* Define the objective metric for the tuning job to optimize
* Create a hyperparameter tuner with the above settings, as well as tuning resource configurations
Similar to training a single TensorFlow job in SageMaker, we define our TensorFlow estimator passing in the TensorFlow script, IAM role, and (per job) hardware configuration.
```
estimator = TensorFlow(entry_point='../scripts/mnist.py',
role=role,
framework_version='1.11.0',
training_steps=1000,
evaluation_steps=100,
train_instance_count=1,
train_instance_type='ml.m4.xlarge',
base_job_name='DEMO-hpo-tensorflow')
```
Once we've defined our estimator we can specify the hyperparameters we'd like to tune and their possible values. We have three different types of hyperparameters.
- Categorical parameters need to take one value from a discrete set. We define this by passing the list of possible values to `CategoricalParameter(list)`
- Continuous parameters can take any real number value between the minimum and maximum value, defined by `ContinuousParameter(min, max)`
- Integer parameters can take any integer value between the minimum and maximum value, defined by `IntegerParameter(min, max)`
*Note, if possible, it's almost always best to specify a value as the least restrictive type. For example, tuning learning rate as a continuous value between 0.01 and 0.2 is likely to yield a better result than tuning as a categorical parameter with values 0.01, 0.1, 0.15, or 0.2.*
```
hyperparameter_ranges = {'learning_rate': ContinuousParameter(0.01, 0.2)}
```
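For reference, a hedged sketch of a ranges dictionary mixing all three parameter types; `optimizer` and `batch_size` are illustrative names only and are not hyperparameters of the original `mnist.py` script:
```
# Illustrative only: shows the three parameter types side by side
hyperparameter_ranges_example = {
    'learning_rate': ContinuousParameter(0.01, 0.2),
    'optimizer': CategoricalParameter(['sgd', 'adam']),
    'batch_size': IntegerParameter(32, 256)
}
```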
Next we'll specify the objective metric that we'd like to tune and its definition, which includes the regular expression (Regex) needed to extract that metric from the CloudWatch logs of the training job. In this particular case, our script emits a loss value and we will use it as the objective metric. We also set `objective_type` to `'Minimize'`, so that hyperparameter tuning seeks to minimize the objective metric when searching for the best hyperparameter setting. By default, `objective_type` is set to `'Maximize'`.
```
objective_metric_name = 'loss'
objective_type = 'Minimize'
metric_definitions = [{'Name': 'loss',
'Regex': 'loss = ([0-9\\.]+)'}]
```
Now, we'll create a `HyperparameterTuner` object, to which we pass:
- The TensorFlow estimator we created above
- Our hyperparameter ranges
- Objective metric name and definition
- Tuning resource configurations, such as the total number of training jobs to run and how many training jobs can run in parallel
```
tuner = HyperparameterTuner(estimator,
objective_metric_name,
hyperparameter_ranges,
metric_definitions,
max_jobs=9,
max_parallel_jobs=3,
objective_type=objective_type)
```
## Launch hyperparameter tuning job
And finally, we can start our hyperparameter tuning job by calling `.fit()` and passing in the S3 path to our train and test dataset.
After the hyperparameter tuning job is created, you should be able to describe the tuning job to see its progress in the next step, and you can go to SageMaker console -> Jobs to check out the progress of the hyperparameter tuning job.
```
tuner.fit(inputs)
```
Let's just run a quick check of the hyperparameter tuning job's status to make sure it started successfully.
```
boto3.client('sagemaker').describe_hyper_parameter_tuning_job(
HyperParameterTuningJobName=tuner.latest_tuning_job.job_name)['HyperParameterTuningJobStatus']
```
## Analyze tuning job results - after tuning job is completed
Please refer to "HPO_Analyze_TuningJob_Results.ipynb" to see example code to analyze the tuning job results.
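Once the tuning job has completed, a quick look at the results is also possible directly from the tuner object; a sketch, assuming the `FinalObjectiveValue` column name used by the SageMaker analytics DataFrame:
```
# Pull per-training-job hyperparameters and final objective values into a DataFrame,
# sorted so the best (lowest loss) job comes first
tuning_results = tuner.analytics().dataframe()
print(tuning_results.sort_values('FinalObjectiveValue').head())
```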
## Deploy the best model
Now that we have the best model, we can deploy it to an endpoint. Please refer to other SageMaker sample notebooks or the SageMaker documentation to see how to deploy a model; a minimal sketch is shown below.
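Assuming the tuning job has completed, the tuner's built-in deploy helper can do this (the instance type below is illustrative):
```
# Deploy the model from the best training job found by the tuner
predictor = tuner.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
# ... invoke the endpoint via predictor.predict(...), then clean up
predictor.delete_endpoint()
```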
|
github_jupyter
|
```
import numpy as np
import os
import sys
import xarray as xr
import scipy.io as sio
import matplotlib.pyplot as plt
import datetime
from dotenv import load_dotenv, find_dotenv
# find .env automagically by walking up directories until it's found
dotenv_path = find_dotenv()
load_dotenv(dotenv_path)
src_dir = os.environ.get('srcdir')
sys.path.append(src_dir)
# always reload modules marked with "%aimport"
%load_ext autoreload
%autoreload 1
%aimport features.bathy_smoothing
from features.resample_roms import resample
from features.bathy_smoothing import smoothing_PlusMinus_rx0,smoothing_PositiveVolume_rx0
from features.cartesian_grid_2d import haversine
#run = os.environ.get('run')
run ='waom10'
mr = 10 #km
smooth = False
deepen = False
#establish the grid with grid point distances of mr/2 in km
#we need double resolution to cover all of the staggered grid points (we subset to rho, psi, u, v points later)
#we need an extra line of u and v points at first to calculate all dx and dy on rho points
x,y = np.meshgrid(np.arange(-3000,3300+mr/2,mr/2),np.arange(-2700,2600+mr/2,mr/2))
#x,y = np.meshgrid(np.arange(-4300,4300+mr/2,mr/2),np.arange(-3700,3600+mr/2,mr/2))
#load south polar stereographic projection to convert from grid point distance in m to lat/lon and back
from mpl_toolkits.basemap import Basemap
m = Basemap(projection='spstere',lon_0=0,boundinglat=-50,lat_ts=-71)
#get lat/lon coordinates at all grid points by shifting the grid to the lower left corner of the map
lon,lat=m(x*1000+m.urcrnrx/2,y*1000+m.urcrnry/2,inverse=True)
#calculate curvilinear coordinate distances at rho points
dx = haversine(lon[1::2,0:-2:2],lat[1::2,0:-2:2],lon[1::2,2::2],lat[1::2,2::2])
dy = haversine(lon[0:-2:2,1::2],lat[0:-2:2,1::2],lon[2::2,1::2],lat[2::2,1::2])
#calculate curvilinear coordinate metrices
pm = 1.0/dx
pn = 1.0/dy
dndx = np.empty_like(pm)
dmde = np.empty_like(pn)
dndx[:,1:-1] = 0.5*(pn[:,2:] - pn[:,:-2])
dmde[1:-1,:] = 0.5*(pm[2:,:] - pm[:-2,:])
dndx[:,0] = 2*dndx[:,1] - dndx[:,2]
dndx[:,-1] = 2*dndx[:,-2] - dndx[:,-3]
dmde[0,:] = 2*dmde[1,:] - dmde[2,:]
dmde[-1,:] = 2*dmde[-2,:] - dmde[-3,:]
#subset lat and lon at rho, psi, u and v points
lon_rho = lon[1::2,1::2]
lat_rho = lat[1::2,1::2]
lon_psi = lon[2:-1:2,2:-1:2]
lat_psi = lat[2:-1:2,2:-1:2]
lon_u = lon[1::2,2:-1:2]
lat_u = lat[1::2,2:-1:2]
lon_v = lon[2:-1:2,1::2]
lat_v = lat[2:-1:2,1::2]
#load rtopo bed and ice topography and resample to rho points
rtopo_path = os.path.join(os.environ.get('extdir'),'rtopo','RTopo-2.0.1_30sec_*_S30.nc')
rtopo = xr.open_mfdataset(rtopo_path,data_vars='minimal')#.sel(latdim=np.arange(0,7501,50),londim=np.arange(0,43201,100))
rt_lon,rt_lat = np.meshgrid(rtopo.lon.values,rtopo.lat.values)
bed_raw = resample(rt_lon,rt_lat,lon_rho,lat_rho,rtopo.bedrock_topography.values)
ice_raw = resample(rt_lon,rt_lat,lon_rho,lat_rho,rtopo.ice_base_topography.values)
#make a copy of the raw bathymetry
bed = bed_raw.copy()
ice = ice_raw.copy()
#set bed minimum depth to 10 cm
bed[bed>-0.1]= -0.1
#set ice draft at these places to zero
ice[bed>0.1] = 0.0
#set ice mountains to zero
ice[ice>0]= 0.0
#set water column thickness to a small positive value where it is zero (ROMS doesn't like it when bed = ice draft)
wct = (-(bed-ice)).copy()
ice[wct==0] = bed[wct==0] + 0.1
#generate a land/ocean mask depending on water column thickness
#(distance between ice and bed or sea surface and bed)
#wct = (-(bed-ice)).copy()
mask = np.ones_like(wct)
mask[wct<20] = 0
#smooth=True
#deepen=True
if smooth:
#if smoothing is activated smooth wct and bed and set ice draft as bed + wct
mask = np.ones_like(wct)
mask[wct<=0.1] = 0
dA = 1.0/(pn*pm)
bed = -(smoothing_PositiveVolume_rx0(mask,-bed,0.8,dA))
wct = smoothing_PositiveVolume_rx0(mask,wct,0.8,dA)
ice = bed + wct
#update the minimum wct points as before
bed[bed>-0.1]= -0.1
ice[bed>0.1] = 0.0
ice[ice>0]= 0.0
wct = (-(bed-ice)).copy()
ice[wct==0] = bed[wct==0] + 0.1
#update the mask
wct = (-(bed-ice)).copy()
mask = np.ones_like(wct)
mask[wct<20] = 0
#if deepening is activated, deepen the bed to a minimum water column thickness of 50m
if deepen:
shallow = (wct<50)&(wct>=20)
bed[shallow] = ice[shallow]-50.0
#set spherical flag to 1, since we're creating a curvilinear spherical grid
spherical_da = xr.DataArray(int(1),name='spherical',attrs={'flag_meanings': 'Cartesian spherical',
'flag_values': np.array([0, 1], dtype=int),
'long_name': 'grid type logical switch'})
xl = mr*np.size(lat_rho,1)*1000
xl_da = xr.DataArray(xl,name='xl',attrs={'long_name': 'basin length in the XI-direction', 'units': 'meter'} )
el = mr*np.size(lon_rho,0)*1000
el_da = xr.DataArray(el,name='el',attrs={'long_name': 'basin length in the ETA-direction', 'units': 'meter'} )
angle = lon_rho/180.0*np.pi
angle_da = xr.DataArray(angle,name='angle',dims=['eta_rho','xi_rho'],attrs={'long_name': 'angle between XI-axis and EAST', 'units': 'radians'})
pn_da = xr.DataArray(pn,name="pn",dims=['eta_rho','xi_rho'],attrs={'long_name': 'curvilinear coordinate metric in ETA', 'units': 'meter-1'})
pm_da = xr.DataArray(pm,name='pm',dims=['eta_rho','xi_rho'],attrs={'long_name': 'curvilinear coordinate metric in XI', 'units': 'meter-1'})
dmde_da = xr.DataArray(dmde,name='dmde',dims=['eta_rho','xi_rho'],attrs={'long_name': 'ETA-derivative of inverse metric factor pm', 'units': 'meter'})
dndx_da = xr.DataArray(dndx,name='dndx',dims=['eta_rho','xi_rho'],attrs={'long_name': 'XI-derivative of inverse metric factor pn', 'units': 'meter'})
f = 2*7.29e-5*np.sin(lat_rho*np.pi/180)
f_da = xr.DataArray(f,name='f',dims=['eta_rho','xi_rho'],attrs={'long_name': 'Coriolis parameter at RHO-points', 'units': 'second-1'})
h_da = xr.DataArray(-bed,name='h',dims=['eta_rho','xi_rho'],attrs={'long_name': 'model bathymetry at RHO-points', 'units': 'meter'})
hraw_da = xr.DataArray(-bed_raw,name='hraw',dims=['eta_rho','xi_rho'],attrs={'long_name': 'Working bathymetry at RHO-points', 'units': 'meter'})
zice_da = xr.DataArray(ice,name='zice',dims=['eta_rho','xi_rho'],attrs={'long_name': 'model ice draft at RHO-points', 'units': 'meter'})
lon_rho_da = xr.DataArray(lon_rho,name='lon_rho',dims=['eta_rho','xi_rho'],attrs={'long_name': 'longitude of RHO-points',
'standard_name': 'longitude',
'units': 'degree_east'})
lat_rho_da = xr.DataArray(lat_rho,name='lat_rho',dims=['eta_rho','xi_rho'],attrs={'long_name': 'latitude of RHO-points',
'standard_name': 'latitude',
'units': 'degree_north'})
lon_psi_da = xr.DataArray(lon_psi,name='lon_psi',dims=['eta_psi','xi_psi'],attrs={'long_name': 'longitude of psi-points',
'standard_name': 'longitude',
'units': 'degree_east'})
lat_psi_da = xr.DataArray(lat_psi,name='lat_psi',dims=['eta_psi','xi_psi'],attrs={'long_name': 'latitude of psi-points',
'standard_name': 'latitude',
'units': 'degree_north'})
lon_u_da = xr.DataArray(lon_u,name='lon_u',dims=['eta_u','xi_u'],attrs={'long_name': 'longitude of u-points',
'standard_name': 'longitude',
'units': 'degree_east'})
lat_u_da = xr.DataArray(lat_u,name='lat_u',dims=['eta_u','xi_u'],attrs={'long_name': 'latitude of u-points',
'standard_name': 'latitude',
'units': 'degree_north'})
lon_v_da = xr.DataArray(lon_v,name='lon_v',dims=['eta_v','xi_v'],attrs={'long_name': 'longitude of v-points',
'standard_name': 'longitude',
'units': 'degree_east'})
lat_v_da = xr.DataArray(lat_v,name='lat_v',dims=['eta_v','xi_v'],attrs={'long_name': 'latitude of v-points',
'standard_name': 'latitude',
'units': 'degree_north'})
from features.mask_roms_uvp import uvp_masks
mask_rho = mask.copy()
mask_u,mask_v,mask_psi = uvp_masks(mask_rho)
mask_rho_da = xr.DataArray(mask_rho,name='mask_rho',dims=['eta_rho','xi_rho'],attrs={'flag_meanings': 'land water',
'flag_values': np.array([ 0., 1.]),
'long_name': 'mask on RHO-points'})
mask_psi_da = xr.DataArray(mask_psi,name='mask_psi',dims=['eta_psi','xi_psi'],attrs={'flag_meanings': 'land water',
'flag_values': np.array([ 0., 1.]),
'long_name': 'mask on psi-points'})
mask_u_da = xr.DataArray(mask_u,name='mask_u',dims=['eta_u','xi_u'],attrs={'flag_meanings': 'land water',
'flag_values': np.array([ 0., 1.]),
'long_name': 'mask on u-points'})
mask_v_da = xr.DataArray(mask_v,name='mask_v',dims=['eta_v','xi_v'],attrs={'flag_meanings': 'land water',
'flag_values': np.array([ 0., 1.]),
'long_name': 'mask on v-points'})
grd = xr.Dataset({'spherical':spherical_da,
'xl':xl_da,
'el':el_da,
'angle':angle_da,
                  'pm':pm_da,
'pn':pn_da,
'dndx':dndx_da,
'dmde':dmde_da,
'f':f_da,
'h':h_da,
'hraw':hraw_da,
'zice':zice_da,
'lon_rho':lon_rho_da,
'lat_rho':lat_rho_da,
'lon_psi':lon_psi_da,
'lat_psi':lat_psi_da,
'lon_u':lon_u_da,
'lat_u':lat_u_da,
'lon_v':lon_v_da,
'lat_v':lat_v_da,
'mask_rho':mask_rho_da,
'mask_psi':mask_psi_da,
'mask_u':mask_u_da,
'mask_v':mask_v_da,},
attrs={'history': 'GRID file using make_grid.py, smoothing='+str(smooth)+
', deepening='+str(deepen)+', '+str(datetime.date.today()),
'type': 'ROMS grid file'})
out_path = os.path.join(os.environ.get('intdir'),'waom'+str(mr)+'_grd_raw.nc')
#out_path = '~/raijin/short/m68/oxr581/waom10_test/waom10_grd_smooth.nc'
grd.to_netcdf(out_path,unlimited_dims='bath')
```
Below are just leftovers from development.
|
github_jupyter
|